Monday, October 27, 2008

Moving a volume group to another system


It is quite easy to move a whole volume group to another system if, for example, a user department acquires a new server. To do this we use the vgexport and vgimport commands.

vgexport/vgimport is not strictly necessary to move drives from one system to another. It is an administrative policy tool that prevents access to the volumes during the time it takes to move them.

Unmount the file system

First, make sure that no users are accessing files on the active volume, then unmount it:
# umount /mnt/design/users

Mark the volume group inactive

Marking the volume group inactive removes it from the kernel and prevents any further activity on it.
# vgchange -an design
vgchange -- volume group "design" successfully deactivated


Export the volume group

It is now necessary to export the volume group. This prevents it from being accessed on the "old" host system and prepares it to be removed.
# vgexport design
vgexport -- volume group "design" successfully exported


When the machine is next shut down, the disk can be unplugged and then connected to its new machine.

Import the volume group


When plugged into the new system, the disk becomes /dev/sdb, so an initial pvscan shows:
# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sdb1" is in EXPORTED VG "design" [996 MB / 996 MB free]
pvscan -- inactive PV "/dev/sdb2" is in EXPORTED VG "design" [996 MB / 244 MB free]
pvscan -- total: 2 [1.95 GB] / in use: 2 [1.95 GB] / in no VG: 0 [0]


We can now import the volume group on the new system and then mount the file system.

If you are importing on an LVM 2 system, run:
# vgimport design
Volume group "design" successfully imported

If you are importing on an LVM 1 system, add the PVs that need to be imported:
# vgimport design /dev/sdb1 /dev/sdb2
vgimport -- doing automatic backup of volume group "design"
vgimport -- volume group "design" successfully imported and activated

Activate the volume group

You must activate the volume group before you can access it.
# vgchange -ay design

Mount the file system

# mkdir -p /mnt/design/users
# mount /dev/design/users /mnt/design/users


The file system is now available for use.
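The whole procedure above can be sketched as two small shell functions, one per machine. This is a roadmap only, not a drop-in script; the volume group "design", the logical volume "users", and the mount point are taken from the example above, and both sides require root:

```shell
#!/bin/sh
# Sketch of the move, using the VG "design" and LV "users" from the
# example above. Run export_side on the old machine, import_side on
# the new one, checking each command's result before continuing.

export_side() {
    umount /mnt/design/users    # make sure nobody is using the LV
    vgchange -an design         # deactivate the volume group
    vgexport design             # mark it exported / off-limits
    # shut down, then move the disks to the new machine
}

import_side() {
    pvscan                      # the PVs should show up as "EXPORTED"
    vgimport design             # LVM 2; on LVM 1, list the PVs too
    vgchange -ay design         # activate the volume group
    mkdir -p /mnt/design/users
    mount /dev/design/users /mnt/design/users
}
```

Run the commands one at a time rather than as a batch; each step's output tells you whether it is safe to continue.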

Saturday, October 25, 2008

Recover Corrupted Partition From A Bad Superblock

How can I recover a corrupted ext3 partition with a bad superblock to get my data back? I'm getting the following error:

/dev/sda2: Input/output error
mount: /dev/sda2: can't read superblock


How do I fix this error?

A. The Linux ext2/ext3 filesystem stores backup copies of the superblock at several locations, so it is often possible to recover data from a partition whose primary superblock is corrupted.

WARNING! Make sure the file system is UNMOUNTED.


If your system still gives you a terminal, type the following commands; otherwise boot the Linux system from a rescue disk (boot from the first CD/DVD and, at the boot: prompt, type the command linux rescue).

Mount partition using alternate superblock

Find out the backup superblock locations for /dev/sda2:

# dumpe2fs /dev/sda2 | grep superblock

Sample output:
Primary superblock at 0, Group descriptors at 1-6
Backup superblock at 32768, Group descriptors at 32769-32774
Backup superblock at 98304, Group descriptors at 98305-98310
Backup superblock at 163840, Group descriptors at 163841-163846
Backup superblock at 229376, Group descriptors at 229377-229382
Backup superblock at 294912, Group descriptors at 294913-294918
Backup superblock at 819200, Group descriptors at 819201-819206
Backup superblock at 884736, Group descriptors at 884737-884742
Backup superblock at 1605632, Group descriptors at 1605633-1605638
Backup superblock at 2654208, Group descriptors at 2654209-2654214
Backup superblock at 4096000, Group descriptors at 4096001-4096006
Backup superblock at 7962624, Group descriptors at 7962625-7962630
Backup superblock at 11239424, Group descriptors at 11239425-11239430
Backup superblock at 20480000, Group descriptors at 20480001-20480006
Backup superblock at 23887872, Group descriptors at 23887873-23887878


Now check and repair the file system using the alternate superblock at 32768:

# fsck -b 32768 /dev/sda2


Sample output:

fsck 1.40.2 (12-Jul-2007)
e2fsck 1.40.2 (12-Jul-2007)
/dev/sda2 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong for group #241 (32254, counted=32253).
Fix? yes

Free blocks count wrong for group #362 (32254, counted=32248).
Fix? yes

Free blocks count wrong for group #368 (32254, counted=27774).
Fix? yes
..........
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda2: 59586/30539776 files (0.6% non-contiguous), 3604682/61059048 blocks


Now try to mount the file system:

# mount /dev/sda2 /mnt


You can also mount the partition using a backup superblock directly via the sb mount option. Note that mount counts this value in 1 KB units, so for a file system with 4 KB blocks, logical block 32768 is passed as sb=131072:

# mount -o sb={alternative-superblock} /dev/device /mnt
# mount -o sb=131072 /dev/sda2 /mnt


Try to browse and access file system:

# cd /mnt
# mkdir test
# ls -l
# cp file /path/to/safe/location


You should always keep backups of all important data, including configuration files.
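The recovery steps above can be wrapped into a small helper that picks the first backup superblock reported by dumpe2fs. This is a sketch only; the device argument is an assumption, and the filesystem on it must be unmounted:

```shell
#!/bin/sh
# Sketch: repair an ext2/ext3 filesystem using its first backup
# superblock. The filesystem on $1 must be UNMOUNTED.

repair_with_backup_sb() {
    dev="$1"
    # Parse the first "Backup superblock at NNNNN," line from dumpe2fs
    sb=$(dumpe2fs "$dev" 2>/dev/null \
        | awk '/Backup superblock at/ {sub(",", "", $4); print $4; exit}')
    [ -n "$sb" ] || { echo "no backup superblock found on $dev" >&2; return 1; }
    echo "trying backup superblock at $sb"
    fsck -b "$sb" "$dev"
}
```

Usage: repair_with_backup_sb /dev/sda2 (as root). If the first backup is also damaged, try the later ones from the dumpe2fs listing by hand.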

Creating a swap partition

For the swap area, allow at least twice the machine's memory size, e.g. +512M.
The default partition type is Linux native; to change a Linux partition to a
swap partition with fdisk:

l # list the known partition types (82 = Linux swap)
t # change a partition's type
1 # select partition 1
Hex code: 82 # set the type to Linux swap
w # write the partition table and exit

Create and enable the new swap partition using the commands listed below
(the size in blocks is optional; mkswap uses the whole partition by default).

# mkswap -c /dev/hda2

# swapon /dev/hda2

To enable all swap partitions:

# swapon -a

Check swap space usage with /usr/bin/top or /usr/bin/free, and list the
active swap partitions with /sbin/swapon as follows:

$ /sbin/swapon -s

Filename    Type       Size     Used  Priority
/dev/hda5   partition  2048248  0     -1
/dev/hda6   partition  2048248  0     -2

Finally add the new swap partition to /etc/fstab:

/dev/hda14 / ext2 defaults 1 1
/dev/hda1 /boot ext2 defaults 1 2
/dev/hda13 /home ext2 defaults 1 2
/dev/cdrom /mnt/cdrom iso9660 noauto,owner,ro 0 0
/dev/hda5 swap swap defaults 0 0
/dev/hda6 swap swap defaults 0 0
/dev/fd0 /mnt/floppy ext2 noauto,owner 0 0
none /proc proc defaults 0 0
none /dev/pts devpts gid=5,mode=620 0 0
#
nodea:/fsa /fsa nfs rw,soft,bg 0 0
nodea:/fsb /fsb nfs ro,soft,bg 0 0

N.B. Under RH7.x, file systems in /etc/fstab are listed by label rather
than by partition number; in the above example, the /dev/hda1 entry for
the /boot file system would appear as LABEL=/boot.
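Put together, the steps above amount to a short sequence. The function below is a sketch; /dev/hda2 (or whatever you pass in) is assumed to already exist and have partition type 82, and the command must run as root:

```shell
#!/bin/sh
# Sketch: initialize, enable, and persist a new swap partition.
# Assumes the partition already exists with type 82 (Linux swap).

add_swap_partition() {
    dev="$1"
    mkswap -c "$dev"                                  # -c checks for bad blocks
    swapon "$dev"                                     # enable it now
    echo "$dev swap swap defaults 0 0" >> /etc/fstab  # enable it at boot
    swapon -s                                         # verify it is active
}
```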

10 boot time parameters you should know about the Linux kernel

The Linux kernel accepts boot time parameters as it starts up. These are used to tell the kernel about hardware it cannot probe, or to override values it would otherwise detect. You need boot time parameters to:

* Troubleshoot the system
* Supply hardware parameters that the kernel cannot determine on its own
* Force the kernel to override default hardware parameters in order to increase performance
* Perform password and other recovery operations

The kernel command line syntax

name=value1,value2,value3…

Where,
name : the keyword name, for example init, ro, or root

Ten common Boot time parameters

init


This sets the initial command to be executed by the kernel. The default is /sbin/init, which is the parent of all processes.
To boot the system without a password, pass /bin/bash or /bin/sh as the argument to init:
init=/bin/bash

single

The most common argument that is passed to the init process is the word 'single', which instructs init to boot the computer in single user mode and not launch all the usual daemons.

root=/dev/device

This argument tells the kernel which device (hard disk, floppy disk) to use as the root filesystem while booting. For example, the following boot parameter uses /dev/sda1 as the root file system:
root=/dev/sda1

If you copy the entire partition from /dev/sda1 to /dev/sdb1, then use:
root=/dev/sdb1

ro

This argument tells the kernel to mount the root file system as read-only. This is done so that fsck can check and repair the file system; note that you should never run fsck on a file system mounted read/write.

rw

This argument tells the kernel to mount the root file system in read/write mode.

panic=SECONDS


Specify kernel behavior on panic. By default the kernel will not reboot after a panic; this option causes a kernel reboot after the given number of seconds. For example, the following boot parameter forces Linux to reboot 10 seconds after a panic:
panic=10

maxcpus=NUMBER

Specify the maximum number of processors that an SMP kernel should use. For example, if you have four CPUs and would like to use only two of them, pass 2 to maxcpus (useful for testing the performance of different software and configurations).
maxcpus=2

debug

Enable kernel debugging. This option is useful for kernel hackers and developers who wish to troubleshoot problems.

selinux=0|1


Disable or enable SELinux at boot time.
Value 0 : disable SELinux
Value 1 : enable SELinux

raid=noautodetect

This argument controls the assembly of RAID arrays at boot time. When md is compiled into the kernel (not as a module), partitions of type 0xfd are scanned and automatically assembled into RAID arrays; this autodetection can be suppressed with the kernel parameter "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0 superblock can be autodetected and run at boot time.

mem=MEMORY_SIZE

This is a classic parameter. It forces the kernel to use a specific amount of memory, either when the kernel cannot see the whole system memory on its own or for testing. For example:
mem=1024M

The kernel command line is a null-terminated string, currently up to 255 characters long plus the final null. A string that is too long will be automatically truncated by the kernel; a boot loader may allow a longer command line to be passed, to permit future kernels to extend this limit (H. Peter Anvin).
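On a running Linux system you can read back the command line the kernel actually booted with:

```shell
# Contents vary per system, e.g. "ro root=/dev/sda1 quiet"
cat /proc/cmdline
```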

Other parameters

initrd=/boot/initrd.img

Load an initial ramdisk. The boot process loads the kernel and an initial ramdisk; the kernel then converts the initrd into a "normal" ramdisk, which is mounted read-write as the root device, and /linuxrc is executed. Afterwards the "real" root file system is mounted, the initrd file system is moved over to /initrd, and the usual boot sequence (e.g. invocation of /sbin/init) is performed. An initrd is used to provide and load additional modules (device drivers); for example, SCSI or RAID drivers are often loaded via an initrd.

hdX=noprobe

Do not probe for the hdX drive. For example, to disable the hdb hard disk:
hdb=noprobe

Even if you disable hdb in the BIOS, Linux will still detect it; passing hdb=noprobe is the only way to keep the kernel from probing it.

ether=irq,iobase,[ARG1,ARG2],name


Where,
ether : configure Ethernet devices

For example, the following boot argument forces probing for a second Ethernet card (NIC), as the default is to probe for only one (irq=0,iobase=0 means autodetect them):
ether=0,0,eth1

How do I enter these parameters?

You need to enter all of these parameters at the GRUB or LILO boot prompt. For example, if you are using GRUB as the boot loader, press 'e' at the GRUB menu to edit the command before booting.

1) Select the second line (the kernel line)
2) Press 'e' again to edit the selected command
3) Append any of the above parameters

See an example of "recovering grub boot loader password" for more information. Another option is to put the above parameters in the grub.conf or lilo.conf file itself.

See the complete list of Linux kernel parameters in the /usr/src/linux/Documentation/kernel-parameters.txt file.
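For example, a grub.conf entry carrying some of the parameters above might look like the following; the kernel version, device names, and title are illustrative only:

```
title Linux (single user, SELinux off)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.9 ro root=/dev/sda1 single selinux=0 panic=10
        initrd /boot/initrd-2.6.9.img
```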

Install Debian Etch on a Software Raid 1 with SATA disks

New server, S-ATA disks, and a hardware RAID controller that is not really supported. So we decided to use a software RAID 1 for the two disks. Here are the steps to create the RAID:

1) Boot the Debian Etch installer.

2) When the installation reaches "Partition method", choose "Manual".

3) In the following menu, scroll to your first disk and hit Enter: the partitioner asks whether you want to create an empty partition table. Say "Yes". (Hint: this will erase your existing data, if any.)

4) Back in the disk overview, scroll down to the line with "FREE SPACE" and hit Enter.

5) Create a partition of the size you need, and note its size and whether it is primary or logical.

6) In the "Partition settings" menu, go to "Use as" and hit Enter.

7) Change the type to "physical volume for RAID".

8) Finish this partition with "Done setting up the partition".

9) Create other partitions on the same disk, if you like.

10) Now repeat all of these steps on the second disk.

11) After this, you should have at least two disks with the same partition scheme, and all partitions (besides swap) should be marked for RAID use.

12) Now look at the top of the partitioner menu; there is a new line: "Configure software RAID". Go into this menu.

13) Answer "Yes" when asked whether to write the changes.

14) Now pick "Create MD device".

15) Use RAID1 and give the number of active and spare devices (2 and 0 in our case).

16) In the following menu, select the same device number on the first and second disk, then Continue.

17) Repeat this step for every pair of devices until you are done, then use "Finish" from the multidisk configuration options.

18) You are back in the partitioner menu, and now you see one or more new partitions named "Software RAID Device". You can use these partitions like any normal partition and continue installing your system.
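Once the installed system boots, the state of the arrays can be checked from a shell. This is a sketch; the md device name is an assumption (the installer usually starts numbering at /dev/md0):

```shell
#!/bin/sh
# Sketch: quick health check for software RAID arrays.

check_raid() {
    cat /proc/mdstat          # healthy RAID1 members show as [UU]
    mdadm --detail /dev/md0   # state, sync progress, member disks
}
```

A degraded mirror shows as [U_] or [_U] in /proc/mdstat, which is the first thing to look for after a disk swap.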

How to compile and install a new Linux kernel

Configure, build, and install

Be especially cautious when messing around with the kernel. Back up all of your files, and have a working bootable recovery floppy disk or CD-ROM nearby. Learn how to install a kernel on a system that doesn't matter first. You've been warned. This is obviously a very short guide; use it only in conjunction with a more thorough guide such as The Linux Kernel HOWTO.

1. Download the latest kernel from kernel.org

The kernel comes as a 20 to 30 MB tar.gz or tar.bz2 file. It decompresses to about 200 MB, and the later compilation will need additional space.

Example:
wget http://www.kernel.org/pub/linux/kernel/v2.4/linux-2.4.19.tar.gz
tar zxvf linux-2.4.19.tar.gz
cd linux-2.4.19


2. Configure the kernel options


This is where you select all the features you want to compile into the kernel (e.g. SCSI support, sound support, networking, etc.)

make menuconfig

* There are different ways to configure what you want compiled into the kernel; if you have an existing configuration from an older kernel, copy the old .config file to the top level of your source and use make oldconfig instead of menuconfig. This oldconfig process will carry over your previous settings, and prompt you if there are new features not covered by your earlier .config file. This is the best way to 'upgrade' your kernel, especially among relatively close version numbers. Another possibility is make xconfig for a graphical version of menuconfig, if you are running X.

3. Make dependencies

After saving your configuration above (it is stored in the ".config" file) you have to build the dependencies for your chosen configuration. This takes about 5 minutes on a 500 MHz system.

make dep


4. Make the kernel

You can now compile the actual kernel. This can take about 15 minutes to complete on a 500 MHz system.

make bzImage

The resulting kernel file is "arch/i386/boot/bzImage".

5. Make the modules

Modules are parts of the kernel that are loaded on the fly, as they are needed. They are stored in individual files (e.g. ext3.o). The more modules you have, the longer this will take to compile:

make modules

6. Install the modules

This will copy all the modules to a new directory, "/lib/modules/a.b.c", where a.b.c is the kernel version.

make modules_install

* In case you want to re-compile...
If you want to re-configure the kernel from scratch and re-compile it, you must also issue a couple of "make" commands that clean up intermediate files. Note that "make mrproper" deletes your .config file. The complete process is:

make mrproper
make menuconfig
make dep
make clean
make bzImage
make modules
make modules_install


* Installing and booting the new kernel

For the remainder of this discussion, I will assume that you have LILO installed on your boot sector. Throughout this process, always have a working bootable recovery floppy disk, and make backups of any files you modify or replace. A good trick is to name all new files with -a.b.c (kernel version suffix) instead of overwriting files with the same name, although this is not shown in the example that follows.
On most Linux systems, the kernels are stored in the /boot directory. Copy your new kernel to that location and give it a unique name.

Example:
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.19


There is also a file called "System.map" that must be copied to the same boot directory:
cp System.map /boot
Now you are ready to tell LILO about your new kernel. Edit "/etc/lilo.conf" as per your specific needs. Typically, your new entry in the file will look like this:

image = /boot/vmlinuz-2.4.19
label = "Linux 2.4.19"


Make sure the image line points to your new kernel. It is recommended that you keep your previous kernel entry in the file; this way, if the new kernel fails to boot, you can still select the old kernel at the LILO prompt.
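For reference, a fuller lilo.conf that keeps the previous kernel as a fallback might look like this; the boot device, root partition, labels, and the old kernel's filename are illustrative:

```
boot = /dev/hda
prompt
timeout = 50
default = linux-2.4.19

image = /boot/vmlinuz-2.4.19
        label = linux-2.4.19
        root = /dev/hda1
        read-only

image = /boot/vmlinuz-old
        label = linux-old
        root = /dev/hda1
        read-only
```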
Tell lilo to read the changes and modify your boot sector:

lilo -v


Read the output carefully to make sure the kernel files have been found and the changes have been made. You can now reboot.

Summary of important files created during kernel build:
.config (kernel configuration options, for future reference)
arch/i386/boot/bzImage (actual kernel, copy to /boot/vmlinuz-a.b.c)
System.map (map file, copy to /boot/System.map)
/lib/modules/a.b.c (kernel modules)