RAID 1 on Linux - now with LVM




install mdadm
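
On Debian and Ubuntu that is probably just:

apt-get install mdadm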

fdisk the devices, make a partition on each, and set the partition type to FD (Linux raid autodetect)
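
For example, assuming the two mirror halves are /dev/sdb and /dev/sdc (hypothetical names - adjust to your hardware), the fdisk session on each looks roughly like:

fdisk /dev/sdb
# at the fdisk prompt:
#   n  - new partition (accepting the defaults gives one whole-disk partition)
#   t  - change the partition type, enter fd (Linux raid autodetect)
#   w  - write the table and exit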

Reboot so the kernel re-reads the partition tables (not strictly needed, but check that the devices ended up where you thought).

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/device1partitionX /dev/device2partitionX
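
With the hypothetical devices from above, that would be:

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1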

Check the status of the raid before you create the file system

watch cat /proc/mdstat

(i.e. make sure the "rebuild" finishes before doing anything more)

Pepper in a good config so you can survive a reboot!

(probably make backups of existing configs first!)
run mkconf > /etc/mdadm.conf
(on Debian and Ubuntu):
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
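
The backup beforehand could be as simple as:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak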

If you are using LVM, you might want to double-check the UUIDs.
Compare using:
blkid
ls -al /dev/disk/by-uuid
and vol_id

Some versions of Ubuntu and Debian have issues with UUIDs being wrong.

After the raid is OK (verify with a reboot!), create your file system (or skip ahead to the next step to do LVM):

mkfs.ext3 -m0 /dev/md0
-m0 because the superuser should already have the 5% reserve on the root drive :-)
(if you forget and want to reset it later, use tune2fs - probably not something to try with LVM on top, but I have not tested it)
tune2fs -m 0 /dev/md0

------- (or better yet, create LVM on top to allow you to grow storage): -----------

pvcreate /dev/md0
vgcreate vg0 /dev/md0
Note: check the -s switch to vgcreate if using LVM1 - you should not be this late in the game though; use LVM2!

Check with:
vgdisplay vg0
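
The lines to eyeball are "VG Size" and "Free  PE / Size"; assuming the usual vgdisplay output format, a quick filter is:

vgdisplay vg0 | grep -i -E 'VG Size|Free'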

Now create a logical volume to make the size of partition you want:

lvcreate -L xG -n lv0 vg0 (leave room for snapshots!)
Where x is the size in GB you want available to the new partition out of the big pool of space - for the full size, see vgdisplay above -
and lv0 will be the name of the "partition" in the pool of space made up of the raid set(s).

If you just want to use all the space (but you probably don't, if you want to create snapshots!):
lvcreate -l 100%FREE -n lv0 vg0
Some older versions of lvcreate don't have that option, so you need to use the PE count from the vgdisplay vg0 output, as shown below.
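
In that case, something like this (the PE count 25599 is made up - use your own Total PE from vgdisplay):

lvcreate -l 25599 -n lv0 vg0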


Check the new volume's properties to make sure you used the space the way you wanted with the -L flag.

lvdisplay /dev/vg0/lv0

No need to reserve space for root on another storage mountpoint eh?

mkfs.ext3 -m0 /dev/vg0/lv0

Now mount up and have fun!

mkdir /funspace
mount -t ext3 /dev/vg0/lv0 /funspace/

Probably better to use UUIDs, so that if the order of drives changes, things will just work:

(ls -al /dev/disk/by-uuid/) will give the symlinks to the real or pseudo devices (raid et al.).

Restart udev to rescan; the vol_id command can also be used.
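
For example, a hypothetical /etc/fstab entry (the UUID is made up - take the real one from blkid or the by-uuid listing above):

UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789 /funspace ext3 defaults 0 2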

mdadm --detail --scan > /etc/mdadm.conf
Prepend the output with:
DEVICE partitions
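
The end result should look something like this (the ARRAY line is illustrative - yours will have a real UUID):

DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx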

Also, probably add lines for email notification etc. - see the mdadm.conf man page for details.
- Ubuntu / Debian probably uses /etc/mdadm/mdadm.conf

Later, you can add additional raid group storage (e.g. /dev/md1, /dev/md2) and add it to the pool of storage via the following steps:

fdisk the newly added disks, make one big partition on each (set type Linux raid autodetect, id=fd), and then do the above steps again...

mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/device1partitionX /dev/device2partitionX

Followed by turning it into a physical volume and adding it to the existing volume group:

pvcreate /dev/md1
vgextend vg0 /dev/md1

You should now have room to grow the logical volume and extend the filesystem on it. If you are using ext3, the tool to look for is:

resize2fs
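
The glue step is growing the logical volume before resizing the filesystem; a minimal sketch, assuming you want to hand all of the new free space to lv0 (use -L +xG instead to grow by a fixed amount):

lvextend -l +100%FREE /dev/vg0/lv0
resize2fs /dev/vg0/lv0

(resize2fs with no size argument grows the filesystem to fill the volume; depending on kernel and ext3 features this can be done online, otherwise unmount first.)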

Here are better writeups than I have -

http://www.debian-administration.org/articles/424

http://michael-prokop.at/blog/2006/08/01/ext3-online-resizing/

To simply move into a new lvm partition and redo your old setup, try this:

cp -R -p /* /newfs
(mind that you exclude /newfs itself and pseudo-filesystems like /proc and /sys - easiest done from a rescue/live environment)
chroot /newfs
change /etc/fstab
change /boot/grub/menu.lst
(reinstall grub on the new root device)
grub-install /dev/newdevice

Oh, it might be good to email on failure using this:

mdadm --monitor --mail=sysadmin --delay=300 /dev/md0 &
For servers without a working local SMTP setup, try this:

the file for sending email is here: smtpmail.py

(credits for the script (smtpmail.py) go to Kevlaur in the Ubuntu forums: http://ubuntuforums.org/member.php?u=293606)

mdadm --monitor --scan --oneshot --test --program /home/kevin/smtpmail.py
After testing that you got the email, run this, and also add it to rc.local or a similar user-startup script so it runs at boot:
mdadm --monitor --scan --daemonise --delay 120 --program /root/smtpmail.py


On Debian and Ubuntu, add this line to /etc/mdadm/mdadm.conf:
PROGRAM /root/smtpmail.py
or Exim will eat the email if it only delivers locally!

You probably have not gotten this far, so this won't be of use to most folks who skim howtos:

Keep track of all devices and serial numbers (hell, put the hardware configs in version control!):

hwinfo > post-raid-hwinfo

Note: hwinfo is not useful after the device has gone off-line due to failing hardware, so get it while it works!

You should be able to find the serial numbers for drives in /dev/disk/by-id/blahhhhhhh(S#), with the serial at the end of the string. Not sure if all drives show up this way, but Seagate currently does with SATA drives.
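
For example (the model and serial below are made up):

ls -l /dev/disk/by-id/
# ata-ST3500418AS_9VM4XXXX -> ../../sda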


For Slackware 13.0 64-bit, here is a Slackware package version of hwinfo:
hwinfo-13.57-x86_64-2mch.tgz
Here is the stuff to build it again if need be:
hwinfo.tgz

Of course you can deduce which drive failed if you only have a single failure, but better to be safe!



