Install a minimal system and configure it with RAID-1: a small RAID-1 partition for the base system and a large RAID-1 partition for the LVM stuff. If you're installing on fresh disks, partition them as Linux raid autodetect (type FD in fdisk).
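As a sketch, the partitioning could be scripted with a recent sfdisk; the device names and the 512M size for the base-system partition are illustrative, adjust them to your disks:

```shell
# Partition 1: small, for the base system; partition 2: the rest of the disk.
# Both get type fd (Linux raid autodetect). Run the same layout on both disks.
sfdisk /dev/hda <<EOF
,512M,fd
,,fd
EOF
sfdisk /dev/hdb <<EOF
,512M,fd
,,fd
EOF
```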
The disk layout will look as follows, where md0 and md1 are RAID-1 devices:
         +--------+----------------------+
Disk 1   |  md0   |         md1          |
         +--------+----------------------+

         +--------+----------------------+
Disk 2   |  md0   |         md1          |
         +--------+----------------------+
The large RAID-1 partition isn't started by the installer, and assembling it can't be done on the running system (you'll see "invalid argument" or "device busy" errors). So boot from a Knoppix CD or similar, and type:
# mdadm --assemble /dev/md1 /dev/hda2 /dev/hdb2
# mdadm --detail /dev/md1
It should show "State : dirty, no-errors", or with later versions of mdadm, just "clean". Then reboot from the hard disk again and type:
# mdadm --detail /dev/md1
It should now show "State : clean, resyncing".
LVM looks like this:
     +-------------------------------+
PV   |              md1              |   <-- All hardware devices
     +-------------------------------+
VG   |                               |   <-- All hardware devices together
     +-------+--------+--------------+
LV   |  2G   |  1.5G  |     10G      |   <-- The volume group cut up
     +-------+--------+--------------+
First create a physical volume, check the result:
# pvcreate /dev/md1
# pvdisplay
Then create a volume group and check the result:
# vgcreate vg0 /dev/md1
# vgdisplay
Now create a logical volume of 3G and call it 'vps1':
# lvcreate -L3G -nvps1 vg0
# lvdisplay
Now the device /dev/vg0/vps1 can be formatted using mkfs.ext3 and the like, used as a block device for a Xen guest, or put to some other use.
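For example, putting an ext3 file system on it and mounting it could look like this (the mount point is illustrative):

```shell
# Create an ext3 file system on the new logical volume and mount it
mkfs.ext3 /dev/vg0/vps1
mkdir -p /mnt/vps1
mount /dev/vg0/vps1 /mnt/vps1
```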
If you are short of disk space, just use lvextend and resize2fs to add some space. If you're using a 2.6 kernel and ext3, this can be done with a running kernel and mounted file system.
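Growing the volume from above could look like this (a sketch, assuming vg0 still has free extents; with ext3 on a 2.6 kernel the resize2fs step works on the mounted file system):

```shell
# Add 1G to the logical volume
lvextend -L +1G /dev/vg0/vps1
# Grow the file system to fill the new size; online-safe with ext3 on 2.6
resize2fs /dev/vg0/vps1
```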
If a logical volume is too big, you can shrink it, too. However, you'll first need to unmount the file system. Then, if the logical volume contains a partition table (as with a Xen guest disk), the partition inside it must become visible as a device:
# kpartx -a /dev/vg0/lv0
Now the partition can be shrunk:
# resize2fs /dev/mapper/lv0p1 2G
Finally the logical volume can be shrunk:
# lvresize -L 2G /dev/vg0/lv0
I'm running with two SATA disks. Check your chipset with lspci, then look up whether your kernel supports hotplugging or only warmplugging for it. I'm using the Intel ICH7 chipset, which uses the ata_piix driver rather than ahci. That automatically means no hotplugging; in the case of the ICH7, warmplugging works (for some other chipsets, neither is supported).
The steps to take are as follows.
First check the attached drives:
$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3160812AS      Rev: 3.AD
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3160812AS      Rev: 3.AD
  Type:   Direct-Access                    ANSI SCSI revision: 05
Note the results. In a minute, we will remove and add the first or second disk, and we want to see whether it correctly comes back.
Now check which physical drive is actually scsi0 or scsi1 using dmesg. Look for the lines:
sd 0:0:0:0: Attached scsi disk sda
sd 1:0:0:0: Attached scsi disk sdb
Apparently, they're attached nicely in the expected sequence. Note this and continue.
To remove the first drive from the RAID array, first set it to 'failed' and then remove it. Since I'm running three RAID 1 partitions on two drives, I need to remove one partition from each array.
# mdadm --fail /dev/md0 /dev/sdb1
# mdadm --fail /dev/md1 /dev/sdb2
# mdadm --fail /dev/md2 /dev/sdb3
# mdadm --remove /dev/md0 /dev/sdb1
# mdadm --remove /dev/md1 /dev/sdb2
# mdadm --remove /dev/md2 /dev/sdb3
Check the results for each array with mdadm --detail /dev/mdX.
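The per-array check can be done in one go (md0 through md2, as in this setup):

```shell
# Show the detailed state of each RAID array in turn
for md in md0 md1 md2; do
    mdadm --detail /dev/$md
done
```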
Now, warm-unplug the second disk:
# echo 1 > "/sys/class/scsi_device/1:0:0:0/device/delete"
If your bash shell says "cannot overwrite existing file" (the noclobber option is set), then do the following:
# "/sys/class/scsi_device/1:0:0:0/device/" # echo 1 >| delete
The drive can now be replaced physically. After that, warm-plug the second disk, which is device 0 on host 1 in my case.
# echo 0 0 0 > /sys/class/scsi_host/host1/scan
See if it came back:
# cat /proc/scsi/scsi
The result of the above command should be the same as when we started. Add the disk back in the raid arrays, one after another:
# mdadm /dev/md0 --add /dev/sdb1 \ && mdadm /dev/md1 --add /dev/sdb2 \ && mdadm /dev/md2 --add /dev/sdb3 &
Monitor the process with something like:
# watch mdadm --detail /dev/md0