Monday, December 28, 2009
cpanel tutorial: how to increase mail send limit for domain
You can set the maximum number of emails sent per hour for a domain to a different number than the system default using the file /var/cpanel/maxemails.
Just add an entry like 'domain.com = 100'. Now 100 is the maximum email per hour limit for domain.com.
But please make sure that you have executed the following script after updating the file /var/cpanel/maxemails.
/scripts/build_maxemails_config
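For example, to cap a domain at 100 messages per hour and apply the change (domain.com here is just a placeholder for the real domain name):
echo "domain.com = 100" >> /var/cpanel/maxemails
/scripts/build_maxemails_config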
refer:
http://blog.webhostinghelps.net/?cat=4
http://blog.webhostinghelps.net/?cat=26
http://blog.webhostinghelps.net/?cat=6
http://blog.webhostinghelps.net/?cat=86
http://blog.webhostinghelps.net/?cat=1
kernel boot time parameters
Boot time parameters are useful for:
* Troubleshooting the system
* Setting hardware parameters that the kernel would not be able to determine on its own
* Forcing the kernel to override the default hardware parameters in order to increase performance
* Password and other recovery operations
The kernel command line syntax
name=value1,value2,value3…
Where,
* name : Keyword name, for example, init, ro, boot etc
Ten common Boot time parameters
init
This sets the initial command to be executed by the kernel. Default is to use /sbin/init, which is the parent of all processes.
To boot system without password pass /bin/bash or /bin/sh as argument to init
init=/bin/bash
single
The most common argument that is passed to the init process is the word 'single', which instructs init to boot the computer in single user mode and not launch all the usual daemons.
root=/dev/device
This argument tells the kernel what device (hard disk, floppy disk) is to be used as the root filesystem while booting. For example, the following boot parameter uses /dev/sda1 as the root file system:
root=/dev/sda1
If you copy the entire partition from /dev/sda1 to /dev/sdb1, then use
root=/dev/sdb1
ro
This argument tells the kernel to mount the root file system as read-only. This is done so that the fsck program can check and repair the Linux file system. Please note that you should never run fsck on a file system that is mounted read/write.
rw
This argument tells the kernel to mount the root file system in read-write mode.
panic=SECONDS
Specify kernel behavior on panic. By default, the kernel will not reboot after a panic, but this option will cause a kernel reboot after N seconds. For example, the following boot parameter will force Linux to reboot 10 seconds after a panic:
panic=10
maxcpus=NUMBER
Specify the maximum number of processors that an SMP kernel should make use of. For example, if you have four CPUs and would like to use only two of them, pass 2 to maxcpus (useful for testing different software performance and configurations).
maxcpus=2
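After booting with maxcpus you can confirm how many CPUs the kernel actually brought up, for example:
grep -c ^processor /proc/cpuinfo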
debug
Enable kernel debugging. This option is useful for kernel hackers and developers who wish to troubleshoot problems.
selinux [0|1]
Disable or enable SELinux at boot time.
* Value 0 : Disable selinux
* Value 1 : Enable selinux
raid=/dev/mdN
This argument tells the kernel how to assemble RAID arrays at boot time. Please note that when md is compiled into the kernel (not as a module), partitions of type 0xfd are scanned and automatically assembled into RAID arrays. This autodetection may be suppressed with the kernel parameter "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0 superblock can be autodetected and run at boot time.
mem=MEMORY_SIZE
This is a classic parameter. Force usage of a specific amount of memory when the kernel is not able to see the whole system memory, or for testing. For example:
mem=1024M
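Once the system is up, you can check how much memory the kernel actually sees (the reported total will be slightly less than the value passed, since the kernel reserves some memory for itself):
grep MemTotal /proc/meminfo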
The kernel command line is a null-terminated string currently up to 255 characters long, plus the final null. A string that is too long will be automatically truncated by the kernel; a boot loader may allow a longer command line to be passed to permit future kernels to extend this limit (H. Peter Anvin).
Other parameters
initrd /boot/initrd.img
An initrd should be loaded. The boot process will load the kernel and an initial ramdisk; then the kernel converts the initrd into a "normal" ramdisk, which is mounted read-write as the root device; then /linuxrc is executed; afterwards the "real" root file system is mounted, and the initrd file system is moved over to /initrd; finally the usual boot sequence (e.g. invocation of /sbin/init) is performed. An initrd is used to provide/load additional modules (device drivers). For example, SCSI or RAID device drivers can be loaded using an initrd.
hdX=noprobe
Do not probe for hdX drive. For example, disable hdb hard disk:
hdb=noprobe
If you disable hdb in BIOS, Linux will still detect it. This is the only way to disable hdb.
ether=irq,iobase,[ARG1,ARG2],name
Where,
* ether : Ethernet devices
For example, the following boot argument forces probing for a second Ethernet card (NIC), as the default is to only probe for one (irq=0,iobase=0 means automatically detect them):
ether=0,0,eth1
How do I enter these parameters?
You need to enter all of these parameters at the GRUB or LILO boot prompt. For example, if you are using GRUB as a boot loader, at the GRUB menu press 'e' to edit the command before booting.
1) Select second line
2) Again, press 'e' to edit selected command
3) Type any of the above parameters (see the example line below).
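For example, an edited GRUB kernel line might end up looking like this (the kernel image name and root device are placeholders for whatever your system uses):
kernel /vmlinuz-2.6.18 ro root=/dev/sda1 single panic=10
Press Enter to accept the edit and then 'b' to boot the modified entry.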
refer:
http://www.cyberciti.biz/tips/linux-limiting-or-restricting-smp-cpu-activation-in-smp-mode.html
http://www.cyberciti.biz/tips/10-boot-time-parameters-you-should-know-about-the-linux-kernel.html
compression in smp+auditing
NOTE: If you are looking for a parallel BZIP2 that works on cluster machines, you should check out MPIBZIP2 which was designed for a distributed-memory message-passing architecture.
The pbzip2 program is a parallel version of bzip2 for use on shared memory machines. It provides near-linear speedup when used on true multi-processor machines and 5-10% speedup on Hyperthreaded machines. The output is fully compatible with the regular bzip2 data so any files created with pbzip2 can be uncompressed by bzip2 and vice-versa.
The default settings for pbzip2 will work well in most cases. The only switch you will likely need to use is -d to decompress files and -p to set the # of processors for pbzip2 to use if autodetect is not supported on your system, or you want to use a specific # of CPUs.
Example 1: pbzip2 -v myfile.tar
This example will compress the file "myfile.tar" into the compressed file "myfile.tar.bz2". It will use the autodetected # of processors (or 2 processors if autodetect not supported) with the default file block size of 900k and default BWT block size of 900k.
The program would report something like:
===================================================================
Parallel BZIP2 v1.0.5 - by: Jeff Gilchrist [http://compression.ca]
[Jan. 08, 2009] (uses libbzip2 by Julian Seward)
# CPUs: 2
BWT Block Size: 900k
File Block Size: 900k
-------------------------------------------
File #: 1 of 1
Input Name: myfile.tar
Output Name: myfile.tar.bz2
Input Size: 7428687 bytes
Compressing data...
Output Size: 3236549 bytes
-------------------------------------------
Wall Clock: 2.809000 seconds
===================================================================
Example 2: pbzip2 -b15vk myfile.tar
This example will compress the file "myfile.tar" into the compressed file "myfile.tar.bz2". It will use the autodetected # of processors (or 2 processors if autodetect not supported) with a file block size of 1500k and a BWT block size of 900k. The file "myfile.tar" will not be deleted after compression is finished.
The program would report something like:
===================================================================
Parallel BZIP2 v1.0.5 - by: Jeff Gilchrist [http://compression.ca]
[Jan. 08, 2009] (uses libbzip2 by Julian Seward)
# CPUs: 2
BWT Block Size: 900k
File Block Size: 1500k
-------------------------------------------
File #: 1 of 1
Input Name: myfile.tar
Output Name: myfile.tar.bz2
Input Size: 7428687 bytes
Compressing data...
Output Size: 3236394 bytes
-------------------------------------------
Wall Clock: 3.059000 seconds
===================================================================
Example 3: pbzip2 -p4 -r -5 -v myfile.tar second*.txt
This example will compress the file "myfile.tar" into the compressed file "myfile.tar.bz2". It will use 4 processors with a BWT block size of 500k. The file block size will be the size of "myfile.tar" divided by 4 (# of processors) so that the data will be split evenly among each processor. This requires you have enough RAM for pbzip2 to read the entire file into memory for compression. Pbzip2 will then use the same options to compress all other files that match the wildcard "second*.txt" in that directory.
The program would report something like:
===================================================================
Parallel BZIP2 v1.0.5 - by: Jeff Gilchrist [http://compression.ca]
[Jan. 08, 2009] (uses libbzip2 by Julian Seward)
# CPUs: 4
BWT Block Size: 500k
File Block Size: 1857k
-------------------------------------------
File #: 1 of 3
Input Name: myfile.tar
Output Name: myfile.tar.bz2
Input Size: 7428687 bytes
Compressing data...
Output Size: 3237105 bytes
-------------------------------------------
File #: 2 of 3
Input Name: secondfile.txt
Output Name: secondfile.txt.bz2
Input Size: 5897 bytes
Compressing data...
Output Size: 3192 bytes
-------------------------------------------
File #: 3 of 3
Input Name: secondbreakfast.txt
Output Name: secondbreakfast.txt.bz2
Input Size: 83531 bytes
Compressing data...
Output Size: 11832 bytes
-------------------------------------------
Wall Clock: 5.127381 seconds
===================================================================
Example 4: tar -c directory_to_compress/ | pbzip2 -vc > myfile.tar.bz2
(equivalent form: tar cf myfile.tar.bz2 --use-compress-prog=pbzip2 directory_to_compress/)
This example will compress the data being given to pbzip2 via pipe from TAR into the compressed file "myfile.tar.bz2". It will use the autodetected # of processors (or 2 processors if autodetect not supported) with the default file block size of 900k and default BWT block size of 900k. TAR is collecting all of the files from the "directory_to_compress/" directory and passing the data to pbzip2 as it works.
The program would report something like:
===================================================================
Parallel BZIP2 v1.0.5 - by: Jeff Gilchrist [http://compression.ca]
[Jan. 08, 2009] (uses libbzip2 by Julian Seward)
# CPUs: 2
BWT Block Size: 900k
File Block Size: 900k
-------------------------------------------
File #: 1 of 1
Input Name:
Output Name:
Compressing data...
-------------------------------------------
Wall Clock: 0.176441 seconds
===================================================================
Example 5: pbzip2 -dv myfile.tar.bz2
This example will decompress the file "myfile.tar.bz2" into the decompressed file "myfile.tar". It will use the autodetected # of processors (or 2 processors if autodetect not supported). The switches -b, -r, and -1..-9 are not valid for decompression.
The program would report something like:
===================================================================
Parallel BZIP2 v1.0.5 - by: Jeff Gilchrist [http://compression.ca]
[Jan. 08, 2009] (uses libbzip2 by Julian Seward)
# CPUs: 2
-------------------------------------------
File #: 1 of 1
Input Name: myfile.tar.bz2
Output Name: myfile.tar
BWT Block Size: 900k
Input Size: 3236549 bytes
Decompressing data...
Output Size: 7428687 bytes
-------------------------------------------
Wall Clock: 1.154000 seconds
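pbzip2 also works on the decompression side of a pipe, so an archive can be unpacked without creating an intermediate .tar file (the file name is a placeholder):
pbzip2 -dc myfile.tar.bz2 | tar xf -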
refer:
http://compression.ca/pbzip2/
----------------------------------------
Linux Setting processor affinity for a certain task or process
by nixcraft
When you are using SMP (Symmetric MultiProcessing) you might want to override the kernel's process scheduling and bind a certain process to a specific CPU(s).
But what is CPU affinity?
CPU affinity is nothing but a scheduler property that "bonds" a process to a given set of CPUs on the SMP system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. Note that the Linux scheduler also supports natural CPU affinity:
The scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. Therefore, forcing a specific CPU affinity is useful only in certain applications. For example, applications such as Oracle (ERP apps) are licensed per number of CPUs per instance, and you can bind Oracle to specific CPUs to avoid licensing problems. This is really useful on large servers having 4 or 8 CPUs.
Setting processor affinity for a certain task or process using taskset command
taskset is used to set or retrieve the CPU affinity of a running process given its PID, or to launch a new COMMAND with a given CPU affinity. However, taskset is not installed by default. You need to install the schedutils (Linux scheduler utilities) package.
Install schedutils
Debian Linux:
# apt-get install schedutils
Red Hat Enterprise Linux:
# up2date schedutils
OR
# rpm -ivh schedutils*
Under the latest versions of Debian / Ubuntu Linux, taskset is installed by default as part of the util-linux package.
The CPU affinity is represented as a bitmask, with the lowest order bit corresponding to the first logical CPU and the highest order bit corresponding to the last logical CPU. For example:
* 0x00000001 is processor #0 (1st processor)
* 0x00000003 is processors #0 and #1
* 0x00000004 is processor #2 (3rd processor)
To set the processor affinity of process 13545 to processor #0 (1st processor), type the following command:
# taskset 0x00000001 -p 13545
If you find a bitmask hard to use, then you can specify a numerical list of processors instead of a bitmask using -c flag:
# taskset -c 1 -p 13545
# taskset -c 3,4 -p 13545
Where,
* -p : Operate on an existing PID and not launch a new task (default is to launch a new task)
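Two more forms can be handy here (13545 is just the example PID from above, and /usr/bin/someapp is a placeholder):
# taskset -c 0,1 /usr/bin/someapp
launches a new task bound to CPUs 0 and 1, while
# taskset -p 13545
simply retrieves and prints the current affinity mask of an existing PID.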
---------------------------
Linux audit files to see who made changes to a file
by Vivek Gite
This is one of the key questions many new sys admins ask:
How do I audit file events such as read / write etc? How can I use audit to see who changed a file in Linux?
The answer is to use the 2.6 kernel's audit system. The modern Linux kernel (2.6.x) comes with the auditd daemon, which is responsible for writing audit records to the disk. During startup, the rules in /etc/audit.rules are read by this daemon. You can open the /etc/audit.rules file and make changes such as setting the audit log file location and other options. The default file is good enough to get started with auditd.
In order to use audit facility you need to use following utilities
=> auditctl - a command to assist controlling the kernel's audit system. You can get status, and add or delete rules in the kernel audit system. Setting a watch on a file is accomplished using this command.
=> ausearch - a command that can query the audit daemon logs for events based on different search criteria.
=> aureport - a tool that produces summary reports of the audit system logs.
Note that all of the following instructions were tested on CentOS 4.x, Fedora Core, and RHEL 4/5 Linux.
Task: install audit package
The audit package contains the user space utilities for storing and searching the audit records generated by the audit subsystem in the Linux 2.6 kernel. CentOS / Red Hat and Fedora Core include the audit rpm package. Use the yum or up2date command to install the package:
# yum install audit
or
# up2date install audit
Auto start auditd service on boot
# ntsysv
OR
# chkconfig auditd on
Now start service:
# /etc/init.d/auditd start
How do I set a watch on a file for auditing?
Let us say you would like to audit the /etc/passwd file. You need to type a command as follows:
# auditctl -w /etc/passwd -p war -k password-file
Where,
* -w /etc/passwd : Insert a watch for the file system object at given path i.e. watch file called /etc/passwd
* -p war : Set permissions filter for a file system watch. It can be r for read, w for write, x for execute, a for append.
* -k password-file : Set a filter key on a /etc/passwd file (watch). The password-file is a filterkey (string of text that can be up to 31 bytes long). It can uniquely identify the audit records produced by the watch. You need to use password-file string or phrase while searching audit logs.
In short, you are monitoring (read: watching) the /etc/passwd file for anyone (including syscalls) who may perform a write, append, or read operation on the file.
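As a quick sanity check, you can list the rules and watches currently loaded into the kernel:
# auditctl -l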
Wait for some time or as a normal user run command as follows:
$ grep 'something' /etc/passwd
$ vi /etc/passwd
Following are more examples:
File System audit rules
Add a watch on "/etc/shadow" with the arbitrary filterkey "shadow-file" that generates records for "reads, writes, executes, and appends" on "shadow"
# auditctl -w /etc/shadow -k shadow-file -p rwxa
syscall audit rule
The next rule suppresses auditing for mount syscall exits
# auditctl -a exit,never -S mount
File system audit rule
Add a watch "tmp" with a NULL filterkey that generates records "executes" on "/tmp" (good for a webserver)
# auditctl -w /tmp -p e -k webserver-watch-tmp
syscall audit rule using pid
To see all syscalls made by a program called sshd (pid - 1005):
# auditctl -a entry,always -S all -F pid=1005
How do I find out who changed or accessed a file /etc/passwd?
Use ausearch command as follows:
# ausearch -f /etc/passwd
OR
# ausearch -f /etc/passwd | less
OR
# ausearch -f /etc/passwd -i | less
Where,
* -f /etc/passwd : Only search for this file
* -i : Interpret numeric entities into text. For example, uid is converted to account name.
Output:
----
type=PATH msg=audit(03/16/2007 14:52:59.985:55) : name=/etc/passwd flags=follow,open inode=23087346 dev=08:02 mode=file,644 ouid=root ogid=root rdev=00:00
type=CWD msg=audit(03/16/2007 14:52:59.985:55) : cwd=/webroot/home/lighttpd
type=FS_INODE msg=audit(03/16/2007 14:52:59.985:55) : inode=23087346 inode_uid=root inode_gid=root inode_dev=08:02 inode_rdev=00:00
type=FS_WATCH msg=audit(03/16/2007 14:52:59.985:55) : watch_inode=23087346 watch=passwd filterkey=password-file perm=read,write,append perm_mask=read
type=SYSCALL msg=audit(03/16/2007 14:52:59.985:55) : arch=x86_64 syscall=open success=yes exit=3 a0=7fbffffcb4 a1=0 a2=2 a3=6171d0 items=1 pid=12551 auid=unknown(4294967295) uid=lighttpd gid=lighttpd euid=lighttpd suid=lighttpd fsuid=lighttpd egid=lighttpd sgid=lighttpd fsgid=lighttpd comm=grep exe=/bin/grep
Let us try to understand output
* audit(03/16/2007 14:52:59.985:55) : Audit log time
* uid=lighttpd gid=lighttpd : User ids in numerical format. By passing the -i option to the command you can convert most of the numeric data to a human readable format. In our example, the user lighttpd used the grep command to open the file.
* exe="/bin/grep" : Command grep used to access /etc/passwd file
* perm_mask=read : File was open for read operation
So from the log files you can clearly see who read the file using grep or who made changes to it using the vi/vim text editor. The log provides tons of other information. You need to read the man pages and documentation to understand the raw log format.
Other useful examples
Search for events with date and time stamps. If the date is omitted, today is assumed. If the time is omitted, now is assumed. Use 24-hour clock time rather than AM or PM to specify time. An example date is 10/24/05. An example of time is 18:00:00.
# ausearch -ts today -k password-file
# ausearch -ts 3/12/07 -k password-file
Search for an event matching the given executable name using the -x option. For example, find out who has accessed /etc/passwd using the rm command:
# ausearch -ts today -k password-file -x rm
# ausearch -ts 3/12/07 -k password-file -x rm
Search for an event with the given user name (UID). For example, find out if user vivek (uid 506) tried to open /etc/passwd:
# ausearch -ts today -k password-file -x rm -ui 506
# ausearch -k password-file -ui 506
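The aureport tool mentioned earlier can summarize the same logs; for example:
# aureport --summary
# aureport -f -i | less
The first prints an overall summary report, and the second prints a per-file report with numeric fields interpreted.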
refer:
http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html
Sunday, December 27, 2009
file system related commands
-bash-2.05b# dumpe2fs -h /dev/ubd/0
dumpe2fs 1.35 (28-Feb-2004)
Filesystem volume name:
Last mounted on:
Filesystem UUID: 47ce1382-4487-40db-949a-ce0b22d70cd0
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal filetype needs_recovery sparse_super
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 384768
Block count: 768256
Reserved block count: 38412
Free blocks: 226824
Free inodes: 248837
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16032
Inode blocks per group: 501
Filesystem created: Tue Jul 27 12:59:32 2004
Last mount time: Sun Dec 27 18:55:42 2009
Last write time: Sun Dec 27 18:55:42 2009
Mount count: 5
Maximum mount count: 20
Last checked: Sat Feb 12 17:56:04 2005
Check interval: 15552000 (6 months)
Next check after: Thu Aug 11 18:56:04 2005
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 20cb2afa-b1ca-4f4d-974f-e54d63b4e3ff
Journal backup: inode blocks
The basic recovery process
In this section we will go step-by-step through the data recovery process and describe the tools, and their options, in detail. We start by listing a directory below.
[abe@abe-laptop test]$ ls -al
total 27
drwxrwxr-x 2 abe abe 4096 2008-03-29 17:48 .
drwx------ 71 abe abe 4096 2008-03-29 17:47 ..
-rwxr--r-- 1 abe abe 42736 2008-03-29 17:47 weimaraner1.jpg
In the listing above we can see that there is a file named weimaraner1.jpg in the test directory. This is a picture of my dog. I don't want to delete it. I like my dog.
[abe@abe-laptop test]$ rm -f *
Here we can see I am deleting it. Whoops! Sorry buddy. Let's gather some basic information about the system so we can begin the recovery process.
[abe@abe-laptop test]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 71G 14G 53G 21% /
/dev/sda1 99M 19M 76M 20% /boot
tmpfs 1007M 12K 1007M 1% /dev/shm
/dev/sdb1 887M 152M 735M 18% /media/PUBLIC
Here we see that the full path to the test directory (which is /home/abe/test) is part of the / filesystem, represented by the device file /dev/sda2.
[abe@abe-laptop test]$ su -
Password:
[root@abe-laptop ~]# debugfs /dev/sda2
Using su to gain root access, we can start the debugfs program giving it the target of /dev/sda2. The debugfs program is an interactive file system debugger that is installed by default with most common Linux distributions. This program is used to manually examine and change the state of a filesystem. In our situation, we're going to use this program to determine the inode which stored information about the deleted file and to what block group the deleted file belonged.
debugfs 1.40.4 (31-Dec-2007)
debugfs: cd /home/abe/test
debugfs: ls -d
1835327 (12) . 65538 (4084) .. <1835328> (4072) weimaraner1.jpg
After debugfs starts, we cd into /home/abe/test and run the ls -d command. This command shows us all deleted entries in the current directory. The output shows us that we have one deleted entry and that its inode number is 1835328 -- that is, the number between the angular brackets.
debugfs: imap <1835328>
Inode 1835328 is part of block group 56
located at block 1835019, offset 0x0f80
The next command we want to run is imap, giving it the inode number above so we can determine to which block group the file belonged. We see by the output that it belonged to block group 56.
debugfs: stats
[...lots of output...]
Blocks per group: 32768
[...lots of output...]
debugfs: q
Running the stats command will generate a lot of output. The only data we are interested in from this list, however, is the number of blocks per group. In this case, and most cases, it’s 32768. Now we have enough data to be able to determine the specific set of blocks in which the data resided. We're done with debugfs now, so we type q to quit.
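Putting the two numbers together: block group 56 with 32768 blocks per group means the deleted file's data lived somewhere in blocks 56 * 32768 = 1835008 through 57 * 32768 - 1 = 1867775. That block range is the region you would extract and search with a carving tool such as dd or foremost.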
refer:
http://www.securityfocus.com/infocus/1902
debugfs: dump <2048262> /home/jake/recovery.file
Especially if you can't unmount the file system containing the deleted data, debugfs is a less comfortable, but usable, alternative if it is already installed on your system. (If you have to install it, you can use the more comfortable e2undel as well.) Just try
/sbin/debugfs device
Replace device by your file system, e.g. /dev/hda1 for the first partition on your first IDE drive. At the "debugfs:" prompt, enter the command
lsdel
After some time, you will be presented with a list of deleted files. You must identify the file you want to recover by its owner (2nd column), size (4th column), and deletion date. When found, you can write the data of the file via
dump <inode_number> filename
The inode_number is printed in the 1st column of the "lsdel" output. The file filename should reside on a different file system than the one you opened with debugfs. This might be another partition, a RAM disk or even a floppy disk.
Repeat the "dump" command for all files that you want to recover; then quit debugfs by entering "q".
refer:http://e2undel.sourceforge.net/recovery-howto.html
Disable ext3 boot-time check with tune2fs
by Ryan
on October 26, 2008
The ext3 file system forces an fsck once it has been mounted a certain number of times. By default this maximum mount count is usually set between 20-30. On many systems such as laptops which can be rebooted quite often this can quickly become a problem. To turn off this checking you can use the tune2fs command.
The tune2fs command utility operates exclusively on ext2/ext3 file systems.
To run these commands you must run the command as root or use sudo. You must also make sure that your filesystem is unmounted before making any changes. If you are doing this on your root partition the best solution is to use a LiveCD.
You can run tune2fs on the ext3 partition with the '-l' option to view what your current and maximum mount counts are set to.
tune2fs -l /dev/sda1
...
Mount count: 2
Maximum mount count: 25
...
To turn off this check, set the maximum count to 0 with the '-c' option.
# tune2fs -c 0 /dev/sda1
If you do not want to completely disable the file system checking, you can also increase the maximum count.
# tune2fs -c 100 /dev/sda1
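tune2fs also has a time-based counterpart, the '-i' option, which controls the check interval shown by dumpe2fs; for example, setting it to 0 disables the time-based check as well:
# tune2fs -i 0 /dev/sda1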
-------
debugfs: params
Open mode: read-only
Filesystem in use: /dev/ubd/0
recover deleted files in ext3 FS
Download the source code from: http://ext3grep.googlecode.com/files/ext3grep-0.9.0.tar.gz or you can download them through svn access. Follow the steps below for the installation:
mkdir ext3grep
svn checkout http://ext3grep.googlecode.com/svn/trunk/ ext3grep
cd ext3grep
./configure --prefix=/opt/ext3grep # make sure that it does not get installed on the affected partition
make
make install
The Basics of the ext3 File system:
Let's take a look at the basics of the ext3 file system before using ext3grep. Ext3 is an ext2 file system with the journaling option. Journaling is nothing but keeping track of transactions, so that in case of a crash, the file system may be recovered to a previous consistent state. All transaction information is passed to the journaling block device (JBD) layer, which is independent of the ext3 file system.
The ext3 partition consists of a set of groups which are created during disk formatting. Each group consists of a super block, a group descriptor, a block bitmap, an i-node bitmap, an i-node table and data blocks. A simple layout can be specified as follows:
,---------+---------+---------+---------+---------+---------,
| Super | FS | Block | Inode | Inode | Data |
| block | desc. | bitmap | bitmap | table | blocks |
`---------+---------+---------+---------+---------+---------'
You can get the total number of groups in the particular partition using the following command:
./ext3grep /dev/hda2 --superblock | grep 'Number of groups'
Number of groups: 24
Each group consists of a set of fixed size blocks which could be of 4096, 2048 or 1024 bytes in size.
Some of the basic terminology associated with the ext3 file system are:
Superblock:
Superblock is a header that tells the kernel about the layout of the file system. It contains information about the block size, block-count and several such details. The first superblock is the one that is used when the file system is mounted.
To get information related to the blocks per group, use the command:
/opt/ext3grep/bin/ext3grep /dev/hda2 --superblock | grep 'blocks per group'
Number of blocks per group: 32768
To get the block size details from the superblock, use the command:
/opt/ext3grep/bin/ext3grep /dev/hda5 --superblock|grep size
Block size: 4096
Fragment size: 4096
You can get a complete list of the superblock details using the command:
/opt/ext3grep/bin/ext3grep /dev/hda5 --superblock
Group Descriptor:
The next block is the group descriptor which stores information of each group. Within each group descriptor, is a pointer to the table of i-nodes and the allocation bitmaps for the i-nodes and data blocks.
Allocation Bitmap:
An allocation bitmap is a list of bits describing the block and the i-nodes which are used so that the allocation of files can be done efficiently.
I-nodes:
Each file is associated with one i-node, which contains various information about the file. The data of the file is not stored in the i-node itself; rather, the i-node points to the location of the data on the disk.
I-nodes are stored in the i-node tables. The command: df -i will give you the total number of i-nodes in the partition and the command ls -i filename will give you the i-node number of the respective file.
df -i | grep /dev/hda5
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/hda5 18233952 33671 18200281 1% /
-------------------------------------------------
ll -i ext3grep
inode no permission owner group size in bytes date filename
6350788 -rwxr-xr-x 1 root root 2765388 Oct 5 23:49 ext3grep
Directories:
In the ext3 file system, each directory is a file. This directory uses an i-node and this i-node contains all the information about the contents of the directory. Each file has a list of directory entries and each entry associates one file name with one i-node number. You can get the directory i-node information using the command:
ll -id bin
6350787 drwxr-xr-x 2 root root 4096 Oct 5 23:49 bin
Superblock Recovery:
Sometimes the superblock gets corrupted and all the data information of that particular group is lost. In this case we can recover the superblock using the alternate superblock backup.
First, list the backup superblock
dumpe2fs -h /dev/hda5
Primary superblock at 0, Group descriptors at 1-5
Backup superblock at 32768, Group descriptors at 32769-32773
Backup superblock at 98304, Group descriptors at 98305-98309
Backup superblock at 163840, Group descriptors at 163841-163845
Backup superblock at 229376, Group descriptors at 229377-229381
Backup superblock at 294912, Group descriptors at 294913-294917
Next, find the position of the backup superblock in 1024-byte units (the unit expected by the sb= mount option).
Usually the block size of ext3 will be 4096 bytes, unless defined manually during file system creation, so:
position = backup superblock block * (block size / 1024)
32768*4=131072
Now, mount the file system using an alternate superblock.
mount -o sb=131072 /dev/hda5 /main
ext3grep is a simple tool that can aid anyone who has accidentally deleted a file on an ext3 file system, only to later realize that they needed it.
Some important commands for the partition
Find the number of the group to which a particular i-node belongs.
The group to which an i-node belongs can be calculated as follows (inodes_per_group is reported by the --superblock output described above):
group = (inode_number - 1) / inodes_per_group
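For example, assuming 16032 i-nodes per group (the figure from the dumpe2fs output earlier in this post), i-node 272 would fall in group (272 - 1) / 16032 = 0, using integer division. That matches the --inode-to-block output shown below, since block 191 lies in the first group.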
To find the block to which the i-node belongs, use the command:
/opt/ext3grep/bin/ext3grep /dev/hda2 --inode-to-block 272
Inode 272 resides in block 191 at offset 0x780.
To find the journal i-node of the drive:
/opt/ext3grep/bin/ext3grep /dev/hda2 --superblock | grep 'Inode number of journal file'
Inode number of journal file: 8
The Recovery Process
In the recovery process the first thing to do is to list the files of the particular disk. You can use the command:
/opt/ext3grep/bin/ext3grep /dev/hda2 --dump-names
Before working on the recovery process make sure that you have unmounted the partition.
To Recover all files:
The following command will recover all the files to a new directory, RESTORED_FILES, in the current working directory. The current working directory should be on a different drive or partition.
/opt/ext3grep/bin/ext3grep /dev/hda2 --restore-all
After this, you will have a copy of all the files in the directory RESTORED_FILES .
To Recover a Single File:
If you want to recover a single file, first find the i-node corresponding to the directory that contains that file. For example, suppose I accidentally lost a file named backup.sql which was in /home2. First I need to find its i-node:
ll -id /home2/
2 drwxr-xr-x 5 root root 4096 Aug 27 09:21 /home2/
Here the first entry '2' is the i-node of /home2. Now I can use ext3grep to list the contents of /home2.
/opt/ext3grep/bin/ext3grep /dev/hda2 --ls --inode 2
The first block of the directory is 683. Inode 2 is directory “”.
Directory block 683:
.-- File type in dir_entry (r=regular file, d=directory, l=symlink)
| .-- D: Deleted ; R: Reallocated
Index Next | I-node | Deletion time Mode File name
==========+==========+----------------data-from-inode------+-----------+=========
0 1 d 2 drwxr-xr-x .
1 2 d 2 drwxr-xr-x ..
2 3 d 11 drwx------ lost+found
3 4 d 144001 drwxr-xr-x testfol
4 6 r 13 rrw-r--r-- aba.txt
5 6 d 112001 D 1219344156 Thu Aug 21 14:42:36 2008 drwxr-xr-x db
6 end d 176001 drwxr-xr-x log
7 end r 12 D 1219843315 Wed Aug 27 09:21:55 2008 rrw-r--r-- backup.sql
Here, we see that the file backup.sql is already deleted. I can recover it using ext3grep through two methods.
Recovery using the file name:
You can recover the file by providing the path of the file to the ext3grep tool. In my case /home2 was added as a separate partition. So I should give the path of the file as simply backup.sql, since it is in root directory of that partition.
umount /home2
/opt/ext3grep/bin/ext3grep /dev/hda2 --restore-file backup.sql
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from
1217936328 = Tue Aug 5 07:38:48 2008
Number of descriptors in journal: 1315; min / max sequence numbers: 203 / 680
Loading hda2.ext3grep.stage2... done
Restoring backup.sql
Ensure that the file has been recovered to the folder “RESTORED_FILES”
ll -d RESTORED_FILES/backup.sql
-rw-r--r-- 1 root root 49152 Dec 26 2006 RESTORED_FILES/backup.sql
Recovering using the i-node information:
You can recover the file also by using the i-node information of the file. The i-node number can be obtained using the command:
/opt/ext3grep/bin/ext3grep /dev/hda2 --ls --inode 2
------------------------------------
7 end r 12 D 1219843315 Wed Aug 27 09:21:55 2008 rrw-r--r-- backup.sql
Here the i-node number is 12 and you can restore the file by issuing the following command:
/opt/ext3grep/bin/ext3grep /dev/hda2 --restore-inode 12
Loading journal descriptors... sorting... done
The oldest i-node block that is still in the journal, appears to be from
1217936328 = Tue Aug 5 07:38:48 2008
Number of descriptors in journal: 1315; min / max sequence numbers: 203 / 680
Restoring inode.12
mv RESTORED_FILES/inode.12 backup.sql
ll -h backup.sql
-rw-r--r-- 1 root root 48K Dec 26 2006 backup.sql
To Recover files based on time:
Sometimes there can be a conflict where the ext3grep tool detects a lot of old files that were removed but have the same name. In this case you have to use the "--after" option. In addition, you will also have to provide a Unix time stamp to recover the file. The Unix time stamp can be obtained from the following link: http://www.onlineconversion.com/unix_time.htm.
For example, if I would like to recover all the files that were deleted after Wed Aug 27 05:20:00 2008, the command used should be as follows:
/opt/ext3grep/bin/ext3grep /dev/hda2 --restore-all --after=1219828800
Only show/process deleted entries if they are deleted on or after Wed Aug 27 05:20:00 2008.
Number of groups: 23
Minimum / maximum journal block: 689 / 17091
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from
1217936328 = Tue Aug 5 07:38:48 2008
Number of descriptors in journal: 1315; min / max sequence numbers: 203 / 680
Writing output to directory RESTORED_FILES/
Loading hda2.ext3grep.stage2... done
Restoring aba.txt
Restoring backup.sql
You can also use the '--before' option to restore files deleted before a given date.
/opt/ext3grep/bin/ext3grep /dev/hda2 --restore-all --before=1219828800
You can recover files between a set of dates by combining both of the above options. For example, in order to recover files deleted between 12/12/2007 and 12/9/2008, I need to use a command as follows:
/opt/ext3grep/bin/ext3grep /dev/hda2 --restore-all --after=1197417600 --before=1228780800
To List the Correct hard links
A recovery of the files can cause a lot of hard link related issues. To find out the hard linked files, you can use the command:
/opt/ext3grep/bin/ext3grep /dev/hda2 --show-hardlinks
After this, remove the unwanted hard linked files which are duplicates.
To List the Deleted files.
You can use the following command to list the deleted files.
/opt/ext3grep/bin/ext3grep /dev/hda2 --deleted
Reference
bobcares.com
http://www.xs4all.nl/~carlo17/howto/undelete_ext3.html
Friday, December 25, 2009
iptables command
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent \
--set
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent \
--update --seconds 60 --hitcount 4 -j DROP
The --state flag takes a comma-separated list of connection states as an argument; by using "--state NEW" as we did, we make sure that only new connections are managed by the module.
The --set parameter in the first line will make sure that the IP address of the host which initiated the connection will be added to the "recent list", where it can be tested and used again in the future i.e. in our second rule.
The second rule is where the magic actually happens. The --update flag tests whether the IP address is in the list of recent connections; in our case each new connection on port 22 will be in the list because we used the --set flag to add it in the preceding rule.
Once that's done the --seconds flag is used to make sure that the IP address is only going to match if the last connection was within the timeframe given. The --hitcount flag works in a similar way - matching only if the given count of connection attempts is greater than or equal to the number given.
Together the second line will DROP an incoming connection if:
* The IP address which initiated the connection has previously been added to the list and
* The IP address has sent a packet in the past 60 seconds and
* The IP address has sent 4 or more packets in total.
You can adjust the numbers yourself to limit connections further, so the following example will drop incoming connections from hosts which make 2 or more connection attempts on port 22 within ten minutes:
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent \
--set
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent \
--update --seconds 600 --hitcount 2 -j DROP
If you wish to test these rules you can script a number of connection attempts from an external host with the netcat package.
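For example, a quick test loop from another machine might look like this (server.example.com is a placeholder):
for i in 1 2 3 4 5; do nc -z -w 2 server.example.com 22; done
Once the hitcount is reached, the remaining attempts should be dropped instead of connecting.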
refer:http://www.debian-administration.org/articles/187
refer:http://kevin.vanzonneveld.net/techblog/article/block_brute_force_attacks_with_iptables/
restrict port 80 usage for each ip to not more than 20/min
/sbin/iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
/sbin/iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --rttl --name HTTP -j DROP
or
iptables -I INPUT -p tcp --dport 80 -m state --state NEW -m limit --limit 3/min -j ACCEPT
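You can inspect the source addresses currently tracked by the recent module by reading its proc file; the path depends on the kernel version:
cat /proc/net/ipt_recent/HTTP (older kernels)
cat /proc/net/xt_recent/HTTP (newer kernels)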
refer:http://www.dd-wrt.com/wiki/index.php/Iptables_command
use of cut command
cut -d ":" -f1,7 /etc/passwd # cuts fields 1 and 7 from /etc/passwd
cut -d ":" -f 1,6- /etc/passwd # cuts fields 1, 6 to the end from /etc/passwd
The default delimiter is TAB. If space is used as a delimiter, be sure to put it in quotes (-d " ").
Note: Another way to specify blank (or other shell-sensitive character) is to use \ -- the following example prints the second field of every line in the file /etc/passwd
% cut -f2 -d\ /etc/passwd | more
refer:http://www.softpanorama.org/Tools/cut.shtml
Thursday, December 24, 2009
kernel recompile
1. Go to http://www.kernel.org and download the latest stable kernel
2. Change user to root
su
3. Copy the downloaded kernel to your /usr/src directory:
cp linux-2.4.19.tar.gz /usr/src/
4. Uncompress the kernel.
tar -zxvf linux-2.4.19.tar.gz
5. Change to the linux-2.4.19 directory
cd linux-2.4.19
6. Make mrproper (This will erase any .config file) This cleans out the configuration files and any object files an older version might have.
make mrproper
The next step is optional, depending on if you want to keep your old configuration or base your new kernel on your old configuration and add the new options found in the new kernel.
OPTIONAL: Copy over the old configuration file. (assumes it is an i686)
cp -p /usr/src/linux-2.4.19/configs/kernel-2.4.18-i686.config .config
7. Complete configuration by one of these four options:
make oldconfig
This will ask you if you want to add in the new options from the kernel by selecting y/n/m.
make xconfig
(uses a GUI configuration) or
make menuconfig
(uses a terminal configuration based on curses) or
make config
You would need to edit the .config in order to select what options you want and then run make config to make the configuration file.
xconfig and menuconfig have a help option which is nice if you are unsure of what option you are turning on/off. make config and make oldconfig DO NOT have this help menu option.
8. Make the dependencies, which ensures all things, like include files, are in place.
make dep
9. Make your bzImage
make bzImage
10. Make your modules
make modules
11. Copy the image over to /boot.
cp /usr/src/linux-2.4.19/arch/i386/boot/bzImage /boot/vmlinuz-2.4.19
12. Install the modules
make modules_install
13. Copy the new System.map over to /boot
cp /usr/src/linux-2.4.19/System.map /boot/System.map-2.4.19
14. Change back to the /usr/src directory
cd ..
15. At this point (for Red Hat) I remove the linux-2.4 symlink
rm linux-2.4
16. Then I make a new symlink to my new kernel directory.
ln -s linux-2.4.19 linux-2.4
17. Add the new kernel in the configuration file. For example, in grub.conf add:
title Red Hat Linux (2.4.19)
root (hd0,1)
kernel /boot/vmlinuz-2.4.19 ro root=/dev/hda2 hdd=ide-scsi
If your /boot is on its own partition, please remove the /boot part of the kernel location. Your last line in grub should read (if your /boot is on its own partition):
kernel /vmlinuz-2.4.19 ro root=/dev/hda2 hdd=ide-scsi
18. Edit grub.conf
vi /etc/grub.conf
19. Add the new kernel in the configuration file
Example: grub.conf
title Red Hat Linux (2.4.19)
root (hd0,1)
kernel /boot/vmlinuz-2.4.19 ro root=/dev/hda2 hdd=ide-scsi
Title line: This is the title that Grub will show on the splash screen. It will say
Red Hat Linux (2.4.19)
Root line: This is essentially for grub's benefit. It is where grub is installed. (Since I am dual booting Windows 2000 this is hd0,1, not hd0,0.)
Kernel line: This is where the kernel image is located and what device root is (/dev/hda2). The hdd=ide-scsi is for SCSI emulation (in my case I need it for my IDE CDRW, ATAPI Zip Drive, and USB SmartMedia reader).
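One extra step that may be needed on Red Hat: if the driver for your root file system (for example ext3 or a SCSI controller) is built as a module, you will also need an initial ramdisk, e.g.:
mkinitrd /boot/initrd-2.4.19.img 2.4.19
and then an extra line such as initrd /boot/initrd-2.4.19.img in the grub.conf stanza above.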
20. Reboot.
reboot
reference: http://www.justlinux.com/nhf/Compiling_Kernels/20_Steps_to_a_New_Kernel_with_Grub.html
Sunday, December 20, 2009
LVM commands
DASD = Direct Access Storage Device
Creating a striped set of DASDs for Linux on System z: the following steps create a striped volume with stripe size 64k on the DASDs /dev/dasdv1 and /dev/dasdw1. We assume that the DASDs used have already been formatted and partitioned.
1. Create "physical volumes" with the command pvcreate
pserver10:~ # pvcreate /dev/dasd/56c5 /dev/dasd/56c6
pvcreate -- physical volume "/dev/dasdv1" successfully created
pvcreate -- physical volume "/dev/dasdw1" successfully created
2. Create a "volume group" with the command vgcreate
pserver10:~ # vgcreate myvolgroup /dev/dasdv1 /dev/dasdw1
vgcreate -- INFO: using default physical extent size 4MB
vgcreate -- INFO: maximum logical volume size is 255.99 Gigabyte
vgcreate -- doing automatic backup of volume group "myvolgroup"
vgcreate -- volume group "myvolgroup" successfully created and activated
3. Get the maximum number of extents with the command vgdisplay
pserver10:~ # vgdisplay myvolgroup
--- Volume group ---
VG Name myvolgroup
VG Access read/write
VG Status available/resizable
VG # 0
MAX LV 256
Cur LV 0
Open LV 0
MAX LV Size 255.99GB
Max PV 256
Cur PV 2
Act PV 2
VG Size 13.74GB
PE Size 4MB
Total PE 3518
Alloc PE / Size 0 / 0
Free PE / Size 3518 / 13.74GB
VG UUID hNoJPC-N3a0-g7md-dK64-PwJ7-T1De-Y0jF7V
4. Create the "logical volume" with the command lvcreate
lvcreate --name mylvolume --stripes 2 --stripesize 64 --extents 3518 myvolgroup
lvcreate -- doing automatic backup of "myvolgroup"
lvcreate -- logical volume "/dev/myvolgroup/mylvolume" successfully created
5. Create a file system on the created logical volume
pserver10:~ # mke2fs -j /dev/myvolgroup/mylvolume
mke2fs 1.28 (31-Aug-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1802240 inodes, 3602432 blocks
180121 blocks (5.00%) reserved for the super user
First data block=0
110 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
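To start using the new logical volume, mount it somewhere (the mount point below is arbitrary):
pserver10:~ # mkdir /mnt/mylvolume
pserver10:~ # mount /dev/myvolgroup/mylvolume /mnt/mylvolume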
If you want to learn more about the commands described here, please have a look at the man pages or the complete LVM-HOWTO (http://tldp.org/HOWTO/LVM-HOWTO/).
Note that the first step of the LVM-HOWTO (http://tldp.org/HOWTO/LVM-HOWTO/):
1. Set the partition system id to 0x8e on /dev/sdc1 and /dev/sde1.
is obsolete for DASD devices on Linux on System z (but valid for SCSI disks)
To reduce an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you want to reduce): Replace /dev/VolGroup00/LogVol01 with your swap logical volume.
1.
Disable swapping for the associated logical volume:
# swapoff -v /dev/VolGroup00/LogVol01
2.
Reduce the LVM2 logical volume by 512 MB:
# lvm lvreduce /dev/VolGroup00/LogVol01 -L -512M
3.
Format the new swap space:
# mkswap /dev/VolGroup00/LogVol01
4.
Enable swap on the reduced logical volume:
# swapon -va
5.
Test that the logical volume has been reduced properly:
# cat /proc/swaps or # free
---------------------------------
9.1. Initializing disks or disk partitions
Before you can use a disk or disk partition as a physical volume you will have to initialize it:
For entire disks:
*
Run pvcreate on the disk:
# pvcreate /dev/hdb
This creates a volume group descriptor at the start of the disk.
*
If you get an error that LVM can't initialize a disk with a partition table on it, first make sure that the disk you are operating on is the correct one. If you are very sure that it is, run the following:
Warning DANGEROUS
The following commands will destroy the partition table on the disk being operated on. Be very sure it is the correct disk.
# dd if=/dev/zero of=/dev/diskname bs=1k count=1
# blockdev --rereadpt /dev/diskname
For partitions:
*
Set the partition type to 0x8e using fdisk or some other similar program.
*
Run pvcreate on the partition:
# pvcreate /dev/hdb1
This creates a volume group descriptor at the start of the /dev/hdb1 partition.
9.2. Creating a volume group
Use the 'vgcreate' program:
# vgcreate my_volume_group /dev/hda1 /dev/hdb1
NOTE: If you are using devfs it is essential to use the full devfs name of the device rather than the symlinked name in /dev. so the above would be:
# vgcreate my_volume_group /dev/ide/host0/bus0/target0/lun0/part1 \
/dev/ide/host0/bus0/target1/lun0/part1
You can also specify the extent size with this command if the default of 32MB is not suitable for you with the '-s' switch. In addition you can put some limits on the number of physical or logical volumes the volume can have.
9.3. Activating a volume group
After rebooting the system or running vgchange -an, you will not be able to access your VGs and LVs. To reactivate the volume group, run:
# vgchange -a y my_volume_group
9.4. Removing a volume group
Make sure that no logical volumes are present in the volume group, see later section for how to do this.
Deactivate the volume group:
# vgchange -a n my_volume_group
Now you actually remove the volume group:
# vgremove my_volume_group
9.5. Adding physical volumes to a volume group
Use 'vgextend' to add an initialized physical volume to an existing volume group.
# vgextend my_volume_group /dev/hdc1
^^^^^^^^^ new physical volume
9.6. Removing physical volumes from a volume group
Make sure that the physical volume isn't used by any logical volumes by using the 'pvdisplay' command:
# pvdisplay /dev/hda1
--- Physical volume ---
PV Name /dev/hda1
VG Name myvg
PV Size 1.95 GB / NOT usable 4 MB [LVM: 122 KB]
PV# 1
PV Status available
Allocatable yes (but full)
Cur LV 1
PE Size (KByte) 4096
Total PE 499
Free PE 0
Allocated PE 499
PV UUID Sd44tK-9IRw-SrMC-MOkn-76iP-iftz-OVSen7
If the physical volume is still used you will have to migrate the data to another physical volume.
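The usual way to do that migration is pvmove, which moves all allocated extents off the named physical volume onto free extents elsewhere in the same volume group:
# pvmove /dev/hda1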
Then use 'vgreduce' to remove the physical volume:
# vgreduce my_volume_group /dev/hda1
9.7. Creating a logical volume
Decide which physical volumes you want the logical volume to be allocated on, use 'vgdisplay' and 'pvdisplay' to help you decide.
To create a 1500MB linear LV named 'testlv' and its block device special '/dev/testvg/testlv':
# lvcreate -L1500 -ntestlv testvg
To create a 100 LE large logical volume with 2 stripes and stripesize 4 KB.
# lvcreate -i2 -I4 -l100 -nanothertestlv testvg
If you want to create an LV that uses the entire VG, use vgdisplay to find the "Total PE" size, then use that when running lvcreate.
# vgdisplay testvg | grep "Total PE"
Total PE 10230
# lvcreate -l 10230 testvg -n mylv
This will create an LV called mylv filling the testvg VG.
9.8. Removing a logical volume
A logical volume must be closed before it can be removed:
# umount /dev/myvg/homevol
# lvremove /dev/myvg/homevol
lvremove -- do you really want to remove "/dev/myvg/homevol"? [y/n]: y
lvremove -- doing automatic backup of volume group "myvg"
lvremove -- logical volume "/dev/myvg/homevol" successfully removed
9.9. Extending a logical volume
To extend a logical volume you simply tell the lvextend command how much you want to increase the size. You can specify how much to grow the volume, or how large you want it to grow to:
# lvextend -L12G /dev/myvg/homevol
lvextend -- extending logical volume "/dev/myvg/homevol" to 12 GB
lvextend -- doing automatic backup of volume group "myvg"
lvextend -- logical volume "/dev/myvg/homevol" successfully extended
will extend /dev/myvg/homevol to 12 Gigabytes.
# lvextend -L+1G /dev/myvg/homevol
lvextend -- extending logical volume "/dev/myvg/homevol" to 13 GB
lvextend -- doing automatic backup of volume group "myvg"
lvextend -- logical volume "/dev/myvg/homevol" successfully extended
will add another gigabyte to /dev/myvg/homevol.
After you have extended the logical volume it is necessary to increase the file system size to match. How you do this depends on the file system you are using.
By default, most file system resizing tools will increase the size of the file system to be the size of the underlying logical volume so you don't need to worry about specifying the same size for each of the two commands.
1.
ext2
Unless you have patched your kernel with the ext2online patch it is necessary to unmount the file system before resizing it.
# umount /dev/myvg/homevol
# resize2fs /dev/myvg/homevol
# mount /dev/myvg/homevol /home
If you don't have e2fsprogs 1.19 or later, you can download the ext2resize command from ext2resize.sourceforge.net and use that:
# umount /dev/myvg/homevol
# ext2resize /dev/myvg/homevol
# mount /dev/myvg/homevol /home
For ext2 there is an easier way. LVM ships with a utility called e2fsadm which does the lvextend and resize2fs for you (it can also do file system shrinking, see the next section) so the single command
# e2fsadm -L+1G /dev/myvg/homevol
is equivalent to the two commands:
# lvextend -L+1G /dev/myvg/homevol
# resize2fs /dev/myvg/homevol
Note:
You will still need to unmount the file system before running e2fsadm.
2.
reiserfs
Reiserfs file systems can be resized when mounted or unmounted as you prefer:
*
Online:
# resize_reiserfs -f /dev/myvg/homevol
*
Offline:
# umount /dev/myvg/homevol
# resize_reiserfs /dev/myvg/homevol
# mount -treiserfs /dev/myvg/homevol /home
3.
xfs
XFS file systems must be mounted to be resized and the mount-point is specified rather than the device name.
# xfs_growfs /home
9.10. Reducing a logical volume
Logical volumes can be reduced in size as well as increased. However, it is very important to remember to reduce the size of the file system or whatever is residing in the volume before shrinking the volume itself, otherwise you risk losing data.
1.
ext2
If you are using ext2 as the file system then you can use the e2fsadm command mentioned earlier to take care of both the file system and volume resizing as follows:
# umount /home
# e2fsadm -L-1G /dev/myvg/homevol
# mount /home
If you prefer to do this manually you must know the new size of the volume in blocks and use the following commands:
# umount /home
# resize2fs /dev/myvg/homevol 524288
# lvreduce -L-1G /dev/myvg/homevol
# mount /home
2.
reiserfs
Reiserfs seems to prefer to be unmounted when shrinking
# umount /home
# resize_reiserfs -s-1G /dev/myvg/homevol
# lvreduce -L-1G /dev/myvg/homevol
# mount -treiserfs /dev/myvg/homevol /home
3.
xfs
There is no way to shrink XFS file systems.
Saturday, December 19, 2009
monitoring commands
#1: top - Process Activity Command
The top program provides a dynamic real-time view of a running system, i.e. actual process activity. By default, it displays the most CPU-intensive tasks running on the server and updates the list every five seconds.
Commonly Used Hot Keys
The top command provides several useful hot keys:
Hot Key Usage
t Displays summary information off and on.
m Displays memory information off and on.
A Sorts the display by top consumers of various system resources. Useful for quick identification of performance-hungry tasks on a system.
f Enters an interactive configuration screen for top. Helpful for setting up top for a specific task.
o Enables you to interactively select the ordering within top.
r Issues renice command.
k Issues kill command.
z Turn on or off color/mono
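top can also be run non-interactively, which is handy for capturing a snapshot from a script or cron job:
# top -b -n 1 | head -20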
#2: vmstat - System Activity, Hardware and System Information
The command vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.
# vmstat 3
Display Memory Utilization Slabinfo
# vmstat -m
Get Information About Active / Inactive Memory Pages
# vmstat -a
#3: w - Find Out Who Is Logged on And What They Are Doing
w command displays information about the users currently on the machine, and their processes.
# w username
# w vivek
#4: uptime - Tell How Long The System Has Been Running
The uptime command can be used to see how long the server has been running. The current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
# uptime
Output:
18:02:41 up 41 days, 23:42, 1 user, load average: 0.00, 0.00, 0.00
A load value of 1 can be considered optimal. The acceptable load changes from system to system: for a single CPU system a load of 1 - 3, and for SMP systems 6 - 10, might be acceptable.
#5: ps - Displays The Processes
ps command will report a snapshot of the current processes. To select all processes use the -A or -e option:
# ps -A
Sample Outputs:
PID TTY TIME CMD
1 ? 00:00:02 init
2 ? 00:00:02 migration/0
3 ? 00:00:01 ksoftirqd/0
4 ? 00:00:00 watchdog/0
5 ? 00:00:00 migration/1
6 ? 00:00:15 ksoftirqd/1
....
.....
4881 ? 00:53:28 java
4885 tty1 00:00:00 mingetty
4886 tty2 00:00:00 mingetty
4887 tty3 00:00:00 mingetty
4888 tty4 00:00:00 mingetty
4891 tty5 00:00:00 mingetty
4892 tty6 00:00:00 mingetty
4893 ttyS1 00:00:00 agetty
12853 ? 00:00:00 cifsoplockd
12854 ? 00:00:00 cifsdnotifyd
14231 ? 00:10:34 lighttpd
14232 ? 00:00:00 php-cgi
54981 pts/0 00:00:00 vim
55465 ? 00:00:00 php-cgi
55546 ? 00:00:00 bind9-snmp-stat
55704 pts/1 00:00:00 ps
ps is similar to top, but it shows a one-time snapshot of the processes rather than a continuously updating view, and it can provide more information.
Show Long Format Output
# ps -Al
To turn on extra full mode (it will show command line arguments passed to process):
# ps -AlF
To See Threads (LWP and NLWP)
# ps -eLf
To See Threads After Processes
# ps -AlLm
Print All Processes On The Server
# ps ax
# ps axu
Print A Process Tree
# ps -ejH
# ps axjf
# pstree
Print Security Information
# ps -eo euser,ruser,suser,fuser,f,comm,label
# ps axZ
# ps -eM
See Every Process Running As User Vivek
# ps -U vivek -u vivek u
Set Output In a User-Defined Format
# ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
# ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
# ps -eopid,tt,user,fname,tmout,f,wchan
Display Only The Process IDs of Lighttpd
# ps -C lighttpd -o pid=
OR
# pgrep lighttpd
OR
# pgrep -u vivek php-cgi
Display The Name of PID 55977
# ps -p 55977 -o comm=
Find Out The Top 10 Memory Consuming Processes
# ps auxf | sort -nr -k 4 | head -10
Find Out The Top 10 CPU Consuming Processes
# ps auxf | sort -nr -k 3 | head -10
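On systems with a reasonably recent procps, ps can also sort its own output instead of piping through sort (a sketch; head -11 keeps the header line plus ten processes):
# ps aux --sort=-%mem | head -11
# ps aux --sort=-%cpu | head -11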
#6: free - Memory Usage
The command free displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.
# free
Sample Output:
total used free shared buffers cached
Mem: 12302896 9739664 2563232 0 523124 5154740
-/+ buffers/cache: 4061800 8241096
Swap: 1052248 0 1052248
=> Related:
1. Linux Find Out Virtual Memory PAGESIZE
2. Linux Limit CPU Usage Per Process
3. How much RAM does my Ubuntu / Fedora Linux desktop PC have?
#7: iostat - Average CPU Load, Disk Activity
The iostat command reports Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions and network filesystems (NFS).
# iostat
Sample Outputs:
Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 06/26/2009
avg-cpu: %user %nice %system %iowait %steal %idle
3.50 0.09 0.51 0.03 0.00 95.86
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 22.04 31.88 512.03 16193351 260102868
sda1 0.00 0.00 0.00 2166 180
sda2 22.04 31.87 512.03 16189010 260102688
sda3 0.00 0.00 0.00 1615 0
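iostat can also report extended per-device statistics at an interval, which is often more useful than a single averaged report; for example, five reports at two-second intervals:
# iostat -x 2 5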
=> Related: Linux Track NFS Directory / Disk I/O Stats
#8: sar - Collect and Report System Activity
The sar command is used to collect, report, and save system activity information. To see the network counters, enter:
# sar -n DEV | more
To display the network counters from the 24th:
# sar -n DEV -f /var/log/sa/sa24 | more
You can also display real time usage using sar:
# sar 4 5
Sample Outputs:
Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 06/26/2009
06:45:12 PM CPU %user %nice %system %iowait %steal %idle
06:45:16 PM all 2.00 0.00 0.22 0.00 0.00 97.78
06:45:20 PM all 2.07 0.00 0.38 0.03 0.00 97.52
06:45:24 PM all 0.94 0.00 0.28 0.00 0.00 98.78
06:45:28 PM all 1.56 0.00 0.22 0.00 0.00 98.22
06:45:32 PM all 3.53 0.00 0.25 0.03 0.00 96.19
Average: all 2.02 0.00 0.27 0.01 0.00 97.70
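When reading a saved data file, sar can also be restricted to a time window with -s and -e; for example, to report activity from the 24th between 08:00 and 12:00 (a minimal sketch):
# sar -f /var/log/sa/sa24 -s 08:00:00 -e 12:00:00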
=> Related: How to collect Linux system utilization data into a file
#9: mpstat - Multiprocessor Usage
The mpstat command displays activities for each available processor, processor 0 being the first one. Run mpstat -P ALL to display average CPU utilization per processor:
# mpstat -P ALL
Sample Output:
Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 06/26/2009
06:48:11 PM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
06:48:11 PM all 3.50 0.09 0.34 0.03 0.01 0.17 0.00 95.86 1218.04
06:48:11 PM 0 3.44 0.08 0.31 0.02 0.00 0.12 0.00 96.04 1000.31
06:48:11 PM 1 3.10 0.08 0.32 0.09 0.02 0.11 0.00 96.28 34.93
06:48:11 PM 2 4.16 0.11 0.36 0.02 0.00 0.11 0.00 95.25 0.00
06:48:11 PM 3 3.77 0.11 0.38 0.03 0.01 0.24 0.00 95.46 44.80
06:48:11 PM 4 2.96 0.07 0.29 0.04 0.02 0.10 0.00 96.52 25.91
06:48:11 PM 5 3.26 0.08 0.28 0.03 0.01 0.10 0.00 96.23 14.98
06:48:11 PM 6 4.00 0.10 0.34 0.01 0.00 0.13 0.00 95.42 3.75
06:48:11 PM 7 3.30 0.11 0.39 0.03 0.01 0.46 0.00 95.69 76.89
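Like sar, mpstat accepts an interval and a count for live per-CPU sampling; for example, five samples at two-second intervals:
# mpstat -P ALL 2 5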
=> Related: Linux display each multiple SMP CPU processors utilization individually.
#10: pmap - Process Memory Usage
The pmap command reports the memory map of a process. Use this command to find out the causes of memory bottlenecks.
# pmap -d PID
To display process memory information for pid # 47394, enter:
# pmap -d 47394
Sample Outputs:
47394: /usr/bin/php-cgi
Address Kbytes Mode Offset Device Mapping
0000000000400000 2584 r-x-- 0000000000000000 008:00002 php-cgi
0000000000886000 140 rw--- 0000000000286000 008:00002 php-cgi
* mapped: 933712K total amount of memory mapped to files
* writeable/private: 4304K the amount of private address space
* shared: 768000K the amount of address space this process is sharing with others
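pmap's extended format (-x) adds RSS and dirty-page columns and prints totals on its last line, which is a quick way to compare resident versus mapped size (using the same example PID):
# pmap -x 47394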
#11 and #12: netstat and ss - Network Statistics
The netstat command displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. The ss command is used to dump socket statistics and shows information similar to netstat. See the following resources about the ss and netstat commands:
* ss: Display Linux TCP / UDP Network and Socket Information
* Get Detailed Information About Particular IP address Connections Using netstat Command
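As a quick starting point, both tools can list listening TCP/UDP sockets together with the owning process (a minimal example):
# netstat -tulpn
# ss -tulpn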
#13: iptraf - Real-time Network Statistics
The iptraf command is an interactive, colorful IP LAN monitor. It is an ncurses-based IP LAN monitor that generates various network statistics including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others. It can provide the following info in an easy-to-read format:
* Network traffic statistics by TCP connection
* IP traffic statistics by network interface
* Network traffic statistics by protocol
* Network traffic statistics by TCP/UDP port and by packet size
* Network traffic statistics by Layer2 address
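iptraf is normally driven from its menu, but it can also be started directly on a given interface (a sketch; eth0 is an assumption, substitute your interface name):
# iptraf -i eth0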
#14: tcpdump - Detailed Network Traffic Analysis
tcpdump is a simple command that dumps traffic on a network. However, you need a good understanding of the TCP/IP protocol to use this tool effectively. For example, to display traffic info about DNS, enter:
# tcpdump -i eth1 'udp port 53'
To display all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets, enter:
# tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
To display all FTP sessions to 202.54.1.5, enter:
# tcpdump -i eth1 'dst 202.54.1.5 and (port 21 or port 20)'
To display all HTTP sessions to 192.168.1.5:
# tcpdump -ni eth0 'dst 192.168.1.5 and tcp and port http'
To capture traffic to a file that can later be examined in detail with Wireshark, enter:
# tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80
#15: strace - System Calls
Trace system calls and signals. This is useful for debugging web server and other server problems. Attach strace to a running process to see what it is doing.
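A minimal example of attaching strace to a running process (the PID is just an example; -c prints a summary of system call counts instead of every call):
# strace -p 2767
# strace -c -p 2767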
#16: /Proc file system - Various Kernel Statistics
The /proc file system provides detailed information about various hardware devices and other Linux kernel information. See the Linux kernel /proc documentation for further details. Common /proc examples:
# cat /proc/cpuinfo
# cat /proc/meminfo
# cat /proc/zoneinfo
# cat /proc/mounts
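A couple of other commonly consulted entries:
# cat /proc/loadavg
# cat /proc/interrupts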
#17: Nagios - Server And Network Monitoring
Nagios is a popular open source computer system and network monitoring application. You can easily monitor all your hosts, network equipment and services. It can send alerts when things go wrong and again when they get better. FAN is "Fully Automated Nagios"; its goal is to provide a Nagios installation including most tools provided by the Nagios community. FAN provides a CD-ROM image in the standard ISO format, making it easy to install a Nagios server. In addition, a wide range of tools is included in the distribution to improve the user experience around Nagios.
#18: Cacti - Web-based Monitoring Tool
Cacti is a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices. It can provide data about network, CPU, memory, logged in users, Apache, DNS servers and much more. See how to install and configure Cacti network graphing tool under CentOS / RHEL.
#19: KDE System Guard - Real-time Systems Reporting and Graphing
KSysguard is a network-enabled task and system monitor application for the KDE desktop. This tool can be run over an ssh session. It provides lots of features, such as a client/server architecture that enables monitoring of local and remote hosts. The graphical front end uses so-called sensors to retrieve the information it displays. A sensor can return simple values or more complex information like tables. For each type of information, one or more displays are provided. Displays are organized in worksheets that can be saved and loaded independently from each other. So, KSysguard is not only a simple task manager but also a very powerful tool to control large server farms.
Bonus: Additional Tools
A few more tools:
* nmap - scan your server for open ports.
* lsof - list open files, network connections and much more.
* ntop web based tool - ntop is the best tool to see network usage in a way similar to what the top command does for processes, i.e. it is network traffic monitoring software. You can see network status and protocol-wise distribution of traffic for UDP, TCP, DNS, HTTP and other protocols.
* Conky - Another good monitoring tool for the X Window System. It is highly configurable and is able to monitor many system variables including the status of the CPU, memory, swap space, disk storage, temperatures, processes, network interfaces, battery power, system messages, e-mail inboxes etc.
* GKrellM - It can be used to monitor the status of CPUs, main memory, hard disks, network interfaces, local and remote mailboxes, and many other things.
* vnstat - vnStat is a console-based network traffic monitor. It keeps a log of hourly, daily and monthly network traffic for the selected interface(s).
* htop - htop is an enhanced version of top, the interactive process viewer, which can display the list of processes in a tree form.
* mtr - mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.
wtmp+btmp
lastb - Shows failed login attempts. This command requires the file /var/log/btmp to exist in order to work. Type "touch /var/log/btmp" to begin logging to this file.
wtmp: records all logins and logouts. Its format is exactly like utmp except that a null user name indicates a logout on the associated terminal.
The /var/log/wtmp is a file on Unix-like systems that keeps track of all logins and logouts to the system. It is defined in the Filesystem Hierarchy Standard 2.3.
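Both files are binary and are read with dedicated tools rather than cat (a minimal example; -f selects the file explicitly):
# last -f /var/log/wtmp | head
# lastb -f /var/log/btmp | head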
Friday, December 18, 2009
plesk webmail access issue
Webmail is not working for my domain. http://webmail.domain.com shows domain default page from plesk.
It's because webmail support is not enabled in the mail preferences.
Go through “Domains–>domain.com–>Mail–>Preference” and enable webmail for the domain. It should fix the issue.
http://instacarma.com/blog/technical/how-to-redirect-webmail-domain-tld-to-domain-tldwebmail/
-----------------------------
Issue :
webmail.domain.tld does not work.
Fix :
1. Create an ‘A’ record for the sub-domain ‘webmail’ in the DNS zone file. It should look like :
domain.com. IN A XX.XX.XXX.XXX
localhost.domain.com. IN A 127.0.0.1
domain.com. IN MX 0 domain.com.
mail IN CNAME domain.com.
www IN CNAME domain.com.
ftp IN A XX.XX.XXX.XXX
cpanel IN A XX.XX.XXX.XXX
whm IN A XX.XX.XXX.XXX
webmail IN A XX.XX.XXX.XXX
webdisk IN A XX.XX.XXX.XXX
2. Put the following code inside the .htaccess file in the sub-domain folder (a virtual sub-domain which doesn't have an entry in httpd.conf):
RewriteEngine on
RewriteCond %{HTTP_HOST} ^webmail\.domain\.com$ [OR]
RewriteCond %{HTTP_HOST} ^www\.webmail\.domain\.com$
RewriteRule ^.*$ "http://domain.com/webmail" [R=301,L]
-------------------
http://kb.parallels.com/en/950
Sometimes when I access http://webmail.domain.tld I get an empty page without any errors, sometimes it's loaded fine. When I access http://webmail.domain.tld/horde/test.php I get error: "Fatal error: session_start(): Failed to initialize storage module: user (path: /tmp) in /usr/share/psa-horde/test.php on line 14"
Article ID: 950 | Last Review: Mar 24, 2009 | Author: Bezborodova Anastasiya
Last updated by: Bezborodova Anastasiya
APPLIES TO:
- Plesk 8.x for Linux/Unix
Resolution
Change option "session.save_path" in /etc/php.ini (/usr/local/psa/apache/conf/php.ini for FreeBSD) and restart apache to apply the new configuration:
;session.save_path = /tmp
session.save_path = /var/lib/php/session
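If the new session directory does not already exist, something along these lines is needed before the restart (a sketch; the apache group and the init script name are typical RHEL-style assumptions, adjust for your system):
# mkdir -p /var/lib/php/session
# chown root:apache /var/lib/php/session && chmod 770 /var/lib/php/session
# /etc/init.d/httpd restart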
------------------------------
refer:
http://1uthavi.adadaa.com/2009/11/04/wildcard-dns-webmail-atmail/
http://forum.parallels.com/showthread.php?t=91870
http://rackerhacker.com/2007/08/10/using-wildcard-subdomains-in-plesk/
http://kb.parallels.com/en/1380
http://forum.parallels.com/showthread.php?t=74996
http://www.mailenable.com/kb/Content/Article.asp?ID=me020501
http://www.bodhost.com/web-hosting/linux-plesk-vps/
http://knowledgelayer.softlayer.com/questions/378/Common+Problems+with+Horde+Webmail+using+Plesk
http://kb.parallels.com/en/940
http://forum.parallels.com/tags.php?tag=webmail&prl_f=208
http://forum.parallels.com/showthread.php?t=89455
Understanding UNIX / Linux filesystem Inodes
-----------------------------------
The inode (index node) is a fundamental concept in the Linux and UNIX filesystem. Each object in the filesystem is represented by an inode. But what are the objects? Let us try to understand it in simple words. Each and every file under Linux (and UNIX) has the following attributes:
=> File type (executable, block special etc)
=> Permissions (read, write etc)
=> Owner
=> Group
=> File Size
=> File access, change and modification time (remember that UNIX or Linux never stores the file creation time; this is a favorite question in UNIX/Linux sysadmin job interviews)
=> File deletion time
=> Number of links (soft/hard)
=> Extended attributes such as append-only or immutable (no one can delete the file, including the root user)
=> Access Control List (ACLs)
All of the above information is stored in an inode. In short, the inode identifies the file and its attributes (as above). Each inode is identified by a unique inode number within the file system. The inode is also known as the index number.
inode definition
An inode is a data structure on a traditional Unix-style file system such as UFS or ext3. An inode stores basic information about a regular file, directory, or other file system object.
How do I see file inode number?
You can use the ls -i command to see the inode number of a file:
$ ls -i /etc/passwd
Sample Output
32820 /etc/passwd
You can also use the stat command to find out the inode number and its attributes:
$ stat /etc/passwd
Output:
File: `/etc/passwd'
Size: 1988 Blocks: 8 IO Block: 4096 regular file
Device: 341h/833d Inode: 32820 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2005-11-10 01:26:01.000000000 +0530
Modify: 2005-10-27 13:26:56.000000000 +0530
Change: 2005-10-27 13:26:56.000000000 +0530
Inode application
Many commands used by system administrators in UNIX / Linux operating systems accept inode numbers to designate a file. Let us see a practical application of the inode number. Type the following commands:
$ cd /tmp
$ touch \"la*
$ ls -l
Now try to remove the file "la*
You can't: to remove files created with control characters, characters that cannot be typed on a keyboard, or special characters such as ?, * and ^, you have to use the inode number.
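A minimal example of deleting such a file by inode number (the inode value 782263 is hypothetical; take the real one from ls -il):
$ ls -il
$ find . -maxdepth 1 -inum 782263 -exec rm -i {} \;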
inode value increase
-------------------------------------------------
Trouble Shooting for high load php driven site
July 22nd, 2009
Hello, everyone,
Time passes so fast; it has been over 2 years since the last blog post. Now we are back and will use this blog regularly. It's a good way to communicate with each other, as you, LiteSpeed users, may also wonder what we are doing.
Our daily life at LiteSpeed is very busy. Troubleshooting problematic servers is routine: if the problem is LiteSpeed related, we usually fix it in a day or two and have a pre-release build for the particular client; many times it is not a LiteSpeed bug, but configuration/tuning related.
We would like to post such cases here and hopefully benefit others. Back to today's story, one client reported urgently:
> Server load is very high.
> Server configuration: Opteron 8350, 16 cores, 8 GB RAM, 300 GB SA-SCSI 15,000 RPM.
> Looking at the top command output: CPU idle is 80% and MySQL CPU usage is very low (10%) because the database is InnoDB and very well optimized.
> But the load average is too high, and in the LiteSpeed admin console (Realtime Stats) I can see too many processes in EAWaitQ (300-500).
Logged in to that server and did the following steps:
- top shows the server load is 75 at the moment.
- Check what PHP is busy with:
- ps -ef | grep php
- Find a PHP process with a long CPU time, e.g. PID 2767:
- strace -p 2767
- From the output, it is spending most of its time cleaning up session files.
- Look at the PHP session files under /tmp/sess_*; there are too many to be listed by ls directly.
- find . | grep sess_ | wc returns a count > 190K.
- Check df; there is enough space left on the file system.
- Check the phpinfo output; session auto start is off, no problem.
- Move the session path to tmpfs:
create a directory /dev/shm/phpsess
modify .htaccess under public_html/ of the problematic client, adding the PHP session save path /dev/shm/phpsess (see the sketch after these steps)
- In a few minutes, got session write error.
- Hit the limit on the number of files that can be created under one directory of tmpfs.
- Use 2 directory levels for the PHP session files.
- In a few minutes, got an error again: “No space left on device”.
- df -k shows only 1% of the file system in use.
- df -i shows inode usage at 100%; all inodes are used up.
- Increase the inode limit:
mount -o remount,nr_inodes=1G /dev/shm
modify the mount entry in /etc/fstab so the change persists across reboots (see the sketch after these steps).
- Server load is around 3, no more errors.
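For reference, the session-path change and the persistent mount from the steps above might look roughly like this (a sketch: 'user' and the document root are placeholders, the php_value directive only takes effect where the web server passes .htaccess PHP settings through to PHP, and the nr_inodes value mirrors the remount command above):
# mkdir /dev/shm/phpsess && chown user:user /dev/shm/phpsess
# echo 'php_value session.save_path "/dev/shm/phpsess"' >> /home/user/public_html/.htaccess
and in /etc/fstab:
tmpfs   /dev/shm   tmpfs   defaults,nr_inodes=1G   0 0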
Wednesday, December 16, 2009
ftp access in browser
For example, if my User name was jess12 and my password was bosox67, the FTP browser syntax would be:
ftp://jess12:bosox67@ftp.xyz.com
In some cases, the user name includes a domain name such as jess12@xyz.com. In these situations, you would type:
ftp://jess12@xyz.com:bosox67@ftp.xyz.com
mysql schema
There are several ways to see the names and values of system variables:
* To see the values that a server will use based on its compiled-in defaults and any option files that it reads, use this command:
mysqld --verbose --help
* To see the values that a server will use based on its compiled-in defaults, ignoring the settings in any option files, use this command:
mysqld --no-defaults --verbose --help
* To see the current values used by a running server, use the SHOW VARIABLES statement.
The information schema is an ANSI-standard set of read-only views which provide information about all of the tables, views, columns, and procedures in a database. It can be used as a source of the information which some databases make available through non-standard commands, such as the SHOW command of MySQL and the DESCRIBE command of Oracle.
----------------------------------------------------------------------
INFORMATION_SCHEMA provides access to database metadata.
Metadata is data about the data, such as the name of a database or table, the data type of a column, or access privileges. Other terms that sometimes are used for this information are data dictionary and system catalog.
INFORMATION_SCHEMA is the information database, the place that stores information about all the other databases that the MySQL server maintains. Inside INFORMATION_SCHEMA there are several read-only tables. They are actually views, not base tables, so there are no files associated with them.
In effect, we have a database named INFORMATION_SCHEMA, although the server does not create a database directory with that name. It is possible to select INFORMATION_SCHEMA as the default database with a USE statement, but it is possible only to read the contents of tables. You cannot insert into them, update them, or delete from them.
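For example, to list the tables of a given database together with their storage engines (a minimal sketch; 'dbname' is a placeholder):
SELECT TABLE_NAME, ENGINE, TABLE_ROWS
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbname';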
--------------------------------------------------------
18.2.2. Stored Routines and MySQL Privileges
Beginning with MySQL 5.0.3, the grant system takes stored routines into account as follows:
* The CREATE ROUTINE privilege is needed to create stored routines.
* The ALTER ROUTINE privilege is needed to alter or drop stored routines. This privilege is granted automatically to the creator of a routine if necessary, and dropped from the creator when the routine is dropped.
* The EXECUTE privilege is required to execute stored routines. However, this privilege is granted automatically to the creator of a routine if necessary (and dropped from the creator when the routine is dropped). Also, the default SQL SECURITY characteristic for a routine is DEFINER, which enables users who have access to the database with which the routine is associated to execute the routine.
* If the automatic_sp_privileges system variable is 0, the EXECUTE and ALTER ROUTINE privileges are not automatically granted to and dropped from the routine creator.
* The creator of a routine is the account used to execute the CREATE statement for it. This might not be the same as the account named as the DEFINER in the routine definition.
The server manipulates the mysql.proc table in response to statements that create, alter, or drop stored routines. Manual manipulation of this table is not supported.
---------------------------------------------------
22.4.15: Is there a way to view all stored procedures and stored functions in a given database?
Yes. For a database named dbname, use this query on the INFORMATION_SCHEMA.ROUTINES table:
SELECT ROUTINE_TYPE, ROUTINE_NAME
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_SCHEMA='dbname';
For more information, see Section 19.14, “The INFORMATION_SCHEMA ROUTINES Table”.
-----------------------------------------
http://www.futhark.ch/mysql/114.html
**********************************
Implementing the Stored Routine Debugger
Our Stored Routine Debugger first of all needs its own database, where it keeps all its runtime information, the debugging output and of course the stored routines used to implement the debugger itself.
CREATE DATABASE `srdb` DEFAULT CHARACTER SET utf8 COLLATE utf8_bin;
It's also good practice to give it its own user account, so nobody else can tamper with the debugger's internals. The debugger of course needs full data manipulation rights (SELECT, INSERT, UPDATE and DELETE) on all of its tables as well as EXECUTE rights to call its own stored routines. We also provide it with the CREATE ROUTINE and ALTER ROUTINE rights, so it's easy for us to update the debugger's code by just logging in as the srdb user and re-creating its stored routines with additional functionality.
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE, CREATE ROUTINE, ALTER ROUTINE
ON srdb.* TO 'srdb' IDENTIFIED BY 'srdb_password';
Any user you want to allow to work with the debugger just needs EXECUTE rights on the srdb database and nothing else. We will encapsulate all of the debugger functionality in stored routines and give those routines that need to be called by users the SQL SECURITY of the DEFINER (which will be the above created user srdb).
GRANT EXECUTE ON srdb.* TO 'debugger_user';
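As an illustration of the SQL SECURITY DEFINER characteristic discussed above, a debugger routine might be declared roughly like this (a sketch with hypothetical names; srdb.debug_output is an assumed logging table, not part of the article):
DELIMITER //
CREATE PROCEDURE srdb.debug_log(IN msg VARCHAR(255))
SQL SECURITY DEFINER
BEGIN
  -- runs with the privileges of the definer (the srdb user), so callers only need EXECUTE
  INSERT INTO srdb.debug_output (logged_at, message) VALUES (NOW(), msg);
END //
DELIMITER ;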
----------------------------