====== RAID ======

The tool we use to manage RAID is called mdadm. Some useful commands are:

  * Show information about a RAID device: <code>sudo mdadm --detail /dev/md</code>
  * Show the RAID state, according to the kernel: <code>cat /proc/mdstat</code>

===== General RAID Operations =====

==== Drive needs to be re-added ====

<code>sudo mdadm --manage /dev/md0 --re-add /dev/</code>

==== Add a new Drive ====

Add the new disk to the collection and grow the array:

<code>
mdadm --add /dev/md0 /dev/sd[X]
mdadm --grow /dev/md0 --raid-devices=4   # if there were three disks in the array before, and you've just added the fourth
</code>

==== Start the Array ====

<code>
mdadm --assemble --scan
mdadm -R /dev/md0
</code>

==== Swap a Drive ====

To swap out a disk in the array for a new one, run:

<code>mdadm /dev/md0 --add /dev/ --fail /dev/ --remove /dev/</code>

==== Adding an Array to mdadm.conf ====

<code>mdadm --examine --scan >> /etc/mdadm.conf</code>

==== Permanently remove a drive from an array ====

<code>
mdadm /dev/md1 --remove
mdadm --zero-superblock
</code>

===== Optimizing the Filesystem =====

Note: this only applies to levels that stripe, such as RAID 0, 5 and 6. To ensure maximum speed from the array, do the following:

  * Use ''dumpe2fs'' to obtain the current 'RAID stride' and 'RAID stripe width'
  * Run ''sudo mdadm --detail'' and note the 'Chunk Size'
  * See https://gryzli.info/2015/02/26/calculating-filesystem-stride_size-and-stripe_width-for-best-performance-under-raid/
  * Stride size = [RAID chunk size] / [Filesystem block size]
  * Stripe width = [Stride size] * [Number of data-bearing disks]

<code>sudo tune2fs -E stride=128,stripe_width=512 /dev/mapper/zoidberg-stuff</code>
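A minimal sketch of the arithmetic behind the ''tune2fs'' line above, using hypothetical values (512 KiB chunk, 4 KiB filesystem block, four data-bearing disks) that reproduce the stride=128, stripe_width=512 example; read the real numbers from your own array and filesystem:

<code bash>
#!/bin/bash
# Sketch of the stride/stripe-width arithmetic. The values below are
# assumptions for illustration: take the real chunk size from
# 'mdadm --detail' and the block size from 'dumpe2fs -h'.
CHUNK_KB=512    # 'Chunk Size' reported by mdadm --detail, in KiB
BLOCK_KB=4      # filesystem block size from dumpe2fs (4096 bytes = 4 KiB)
DATA_DISKS=4    # data-bearing disks only, e.g. a 5-disk RAID5 has 4

STRIDE=$((CHUNK_KB / BLOCK_KB))          # 512 / 4 = 128
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))    # 128 * 4 = 512

# Print the resulting command rather than running it
echo "tune2fs -E stride=$STRIDE,stripe_width=$STRIPE_WIDTH <device>"
</code>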
===== Adding a Drive to a RAID Array with a GPT Partition =====

  * We need a BIOS boot partition and a RAID partition:

<code>
$ parted /dev/sdx
(parted) mklabel gpt
(parted) mkpart primary 1049kB 3146kB
(parted) set 1 bios_grub on
(parted) mkpart primary 3146kB 2000GB
(parted) print free
Model: ATA WDC WD20EARX-00Z (scsi)
Disk /dev/sdd: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
        17.4kB  1049kB  1031kB               Free Space
 1      1049kB  3146kB  2097kB               primary  bios_grub
 2      3146kB  2000GB  2000GB               primary  raid
        2000GB  2000GB  73.2kB               Free Space
</code>

  * Next you need to install GRUB to the disk:

<code>
$ grub-install --recheck /dev/sdx
Installation finished. No error reported.
</code>

  * Finally we need to add the drive to the array:

<code>
$ sudo mdadm /dev/md0 --manage --add /dev/sdx2
mdadm: added /dev/sdx2
$ sudo mdadm --detail --misc /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Mar 31 13:09:17 2012
     Raid Level : raid5
     Array Size : 3906776064 (3725.79 GiB 4000.54 GB)
  Used Dev Size : 1953388032 (1862.90 GiB 2000.27 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun May 13 03:40:38 2012
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : zoidberg:0  (local to host zoidberg)
           UUID : a69274c7:b1b48b1f:7ef9d7c9:87f3b729
         Events : 58

    Number   Major   Minor   RaidDevice State
       3       8       34        0      active sync   /dev/sdc2
       1       8        2        1      active sync   /dev/sda2
       2       8       18        2      active sync   /dev/sdb2

       4       8       50        -      spare   /dev/sdx2
</code>

  * At this point the disk is a hot spare. To expand the array onto it you'll need to run the following:

<code>
mdadm --grow /dev/md0 --raid-devices=4   # if there were three disks in the array before and you now want to use four
</code>

===== GRUB =====

==== Reinstall GRUB with RAID and LVM ====

This is a guide to reinstalling GRUB on an LVM and RAID setup. My example includes a raid5 of 3 disks: /dev/sda /dev/sdb /dev/sdc. Let's start :)

  * Run through the live disk install until you reach "Detect Disks"
  * Mount the root filesystem: <code>mount /dev/mapper/-root /mnt</code>
  * If you have separate boot and usr partitions you will also need to do: <code>
mount /dev/mapper/-boot /mnt/boot
mount /dev/mapper/-usr /mnt/usr
</code>
  * Bind-mount the important filesystems: <code>for i in /dev /dev/pts /proc /sys; do mount --bind $i /mnt$i; done</code>
  * Chroot into your now-mounted install: <code>chroot /mnt /bin/bash</code>
  * Ensure the assembled arrays match your old mdadm.conf: <code>
grep ARRAY /etc/mdadm/mdadm.conf
mdadm --assemble --scan
</code>
  * Ensure all your drives are listed in /boot/grub/device.map: <code>
(hd0)   /dev/disk/by-id/ata-
(hd1)   /dev/disk/by-id/ata-
(hd2)   /dev/disk/by-id/ata-
</code>
  * Run update-grub to generate a new grub.cfg: <code>update-grub</code>
  * Run grub-install on each disk to reinstall GRUB onto them. This should ensure that any of the disks can boot the machine: <code>
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/sdc
</code>
  * Finally, just to be nice, unmount all the filesystems: <code>for i in /mnt/sys /mnt/proc /mnt/dev/pts /mnt/dev /mnt/boot /mnt/usr /mnt; do sudo umount $i; done</code>
  * Cross your fingers and reboot: <code>reboot</code>

==== Extra Stuff ====

=== Debugging ===

  * Call grub-install with ''--debug''
  * Call grub-mkimage with ''--verbose''
  * Call grub-mkdevicemap with ''--verbose''
  * Call grub-mkconfig by running it via bash like so: ''/bin/bash -x grub-mkconfig''

=== Building and installing your own GRUB image ===

  * This is the process grub-install should be doing for you: <code>
grub-mkimage -O i386-pc --output=/boot/grub/core.img --prefix="(-root)/boot/grub" \
    biosdisk ext2 mdraid raid raid5rec lvm
</code>
  * The names at the end are modules found in ''/usr/lib/grub/i386-pc/''
  * To install the image on /dev/sda run: <code>grub-setup --verbose --directory=/boot/grub --device-map=/boot/grub/device.map /dev/sda</code>

==== Speed up rebuild ====

  * Raise the minimum sync speed, so the rebuild keeps its pace even while the array is in use: <code>sudo sysctl -w dev.raid.speed_limit_min=50000</code>
  * Increase the read-ahead: <code>sudo blockdev --setra 65536 /dev/md1</code>
  * Increase the stripe cache to 32768 entries (memory used is page size x number of disks x entries): <code>sudo bash -c 'echo 32768 > /sys/block/md1/md/stripe_cache_size'</code>

A combined sketch of these tunables appears after the Bugs section below.

==== Bugs ====

  * When resizing some arrays you may get: <code>
[ 4780.580972] md/raid:md0: reshape: not enough stripes.  Needed 512
[ 4780.597961] md: couldn't update array info. -28
</code> in which case run: <code>echo 600 > /sys/block/md0/md/stripe_cache_size</code>
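Putting the "Speed up rebuild" tunables together: a minimal sketch, assuming a hypothetical array name (md0 by default, pass another as the first argument), root privileges, and a raid4/5/6 array (other levels have no stripe cache):

<code bash>
#!/bin/bash
# Sketch: apply the rebuild tunables from "Speed up rebuild" to one array.
# Assumes the md device name (default: md0) is passed as the first argument
# and that the script runs as root.
set -e
MD=${1:-md0}

# Guarantee a minimum rebuild speed (KiB/s) even while the array is busy
sysctl -w dev.raid.speed_limit_min=50000

# Larger read-ahead on the array device (in 512-byte sectors)
blockdev --setra 65536 "/dev/$MD"

# Bigger stripe cache; memory used = page size * nr_disks * entries
echo 32768 > "/sys/block/$MD/md/stripe_cache_size"

# Show the resync progress
cat /proc/mdstat
</code>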