====== Raid ======

The tool we use to manage RAID is mdadm. Some useful commands are:

  * Show information about a RAID device:

  sudo mdadm --detail /dev/md<device number>

  * Show the RAID state according to the kernel:

  cat /proc/mdstat
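The mdstat output can also be checked for trouble from a script. A minimal sketch, using illustrative sample lines rather than a live system (in real use, read /proc/mdstat instead):

```shell
# Sketch: detect a degraded array from mdstat-style output.
# The sample below mimics /proc/mdstat output for a raid5.
mdstat_sample='md0 : active raid5 sdc2[0] sda2[1] sdb2[2]
      3906776064 blocks level 5, 512k chunk, algorithm 2 [3/2] [UU_]'

# An underscore inside the [...] status brackets marks a missing member.
if printf '%s\n' "$mdstat_sample" | grep -q '\[[U_]*_[U_]*\]'; then
    echo "degraded"
fi
```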

===== General Raid Operations =====

==== Drive needs to be re-added ====

  sudo mdadm --manage /dev/md0 --re-add /dev/<disk>

==== Add new Drive ====

Add the new disk to the array, then grow the array onto it:

  mdadm --add /dev/md0 /dev/sd[X]
  mdadm --grow /dev/md0 --raid-devices=4 # if there were three disks in the array before, and you've just added the fourth

==== Start the Array ====

  mdadm --assemble --scan
  mdadm -R /dev/md0

==== Swap a Drive ====

To swap out a disk in the array for a new one, run:

  mdadm /dev/md0 --add /dev/<new> --fail /dev/<old> --remove /dev/<old>

==== Adding an Array to mdadm.conf ====

  mdadm --examine --scan >> /etc/mdadm.conf
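For reference, each appended line looks roughly like the following; the device path and name are illustrative (the UUID shown is the one from the example array later on this page):

<code>
ARRAY /dev/md0 metadata=1.2 name=<hostname>:0 UUID=a69274c7:b1b48b1f:7ef9d7c9:87f3b729
</code>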

==== Permanently remove a drive from an array ====

  mdadm /dev/md1 --remove <raid element>
  mdadm --zero-superblock <raid element>

===== Optimizing the Filesystem =====

Note: this only applies to RAID levels that stripe, such as RAID 0, 5, and 6.

To ensure maximum speeds from the array, do the following.

  * Run the following to obtain the current 'RAID stride' and 'RAID stripe width':

  dumpe2fs <filesystem device>

  * Run the following command and note the 'Chunk Size':

  sudo mdadm --detail <raid device>

  * Calculate the new values (see https://gryzli.info/2015/02/26/calculating-filesystem-stride_size-and-stripe_width-for-best-performance-under-raid/):
    * Stride size = [RAID chunk size] / [Filesystem block size]
    * Stripe width = [Stride size] * [Number of data-bearing disks]

  sudo tune2fs -E stride=128,stripe_width=512 /dev/mapper/zoidberg-stuff
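The arithmetic can be sketched in shell; the chunk size, block size, and disk count below are assumed values that reproduce the stride=128,stripe_width=512 in the tune2fs call above:

```shell
# Hypothetical values: 512 KiB RAID chunk, 4 KiB filesystem block,
# four data-bearing disks (adjust for your own array).
chunk_kib=512
block_kib=4
data_disks=4

stride=$((chunk_kib / block_kib))        # RAID chunk / filesystem block
stripe_width=$((stride * data_disks))    # stride * data-bearing disks

echo "stride=$stride stripe_width=$stripe_width"   # stride=128 stripe_width=512
```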

===== Adding a Drive to a Raid Array with a GPT Partition =====

  * We need a BIOS boot partition and a RAID partition:

<code>
$ parted /dev/sdx
(parted) mklabel gpt
(parted) mkpart primary 1049kB 3146kB
(parted) set 1 bios_grub on
(parted) mkpart primary 3146kB 2000GB
(parted) set 2 raid on
(parted) print free
Model: ATA WDC WD20EARX-00Z (scsi)
Disk /dev/sdx: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  3146kB  2097kB               primary  bios_grub
 2      3146kB  2000GB  2000GB               primary  raid
        2000GB  2000GB  73.2kB  Free Space
</code>

  * Next you need to install grub to the disk.

<code>
$ grub-install --recheck /dev/sdx
Installation finished. No error reported.
</code>

  * Finally we need to add the drive to the array.

<code>
$ sudo mdadm /dev/md0 --manage --add /dev/sdx2
mdadm: added /dev/sdx2
$ sudo mdadm --detail --misc /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Mar 31 13:09:17 2012
     Raid Level : raid5
     Array Size : 3906776064 (3725.79 GiB 4000.54 GB)
  Used Dev Size : 1953388032 (1862.90 GiB 2000.27 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun May 13 03:40:38 2012
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : zoidberg: (local to host zoidberg)
           UUID : a69274c7:b1b48b1f:7ef9d7c9:87f3b729
         Events : 58

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sdc2
       1       8        2        1      active sync   /dev/sda2
       2       8       18        2      active sync   /dev/sdb2

       3       8       50        -      spare   /dev/sdx2
</code>

  * At this point the disk is a hot spare. To expand the array onto it, you'll need to run the following.

<code>mdadm --grow /dev/md0 --raid-devices=4 # if there were three disks in the array before and you now want to use four</code>

===== GRUB =====

==== Reinstall Grub with raid and lvm ====

This is a guide to reinstalling grub on an lvm and raid setup. My example uses a raid5 of 3 disks:

  /dev/sda
  /dev/sdb
  /dev/sdc

Let's start :)

  * Run through the live disk install until you reach "Detect Disks"
  * Mount the root filesystem

  mount /dev/mapper/<hostname>-root /mnt

  * If you have separate boot and usr partitions you will also need to do

  mount /dev/mapper/<hostname>-boot /mnt/boot
  mount /dev/mapper/<hostname>-usr /mnt/usr

  * Bind mount the important filesystems

  for i in /dev /dev/pts /proc /sys; do mount --bind $i /mnt$i; done

  * Chroot into your now mounted install

  chroot /mnt /bin/bash

  * Ensure the assembled array matches your old mdadm.conf

  grep ARRAY /etc/mdadm/mdadm.conf
  mdadm --assemble --scan

  * Ensure all your drives are listed in /boot/grub/device.map

  (hd0)  /dev/disk/by-id/ata-<somedisk>
  (hd1)  /dev/disk/by-id/ata-<somedisk>
  (hd2)  /dev/disk/by-id/ata-<somedisk>

  * Run update-grub to generate a new grub.cfg

  update-grub

  * Run grub-install on each disk to reinstall grub onto them. This should ensure that any of the disks can boot the machine

  grub-install /dev/sda
  grub-install /dev/sdb
  grub-install /dev/sdc

  * Finally, just to be nice, exit the chroot and unmount all the filesystems

  exit
  for i in /sys /proc /dev/pts /dev /boot /usr ""; do sudo umount /mnt$i; done

  * Cross your fingers and reboot

  reboot

==== Extra Stuff ====

=== Debugging ===

  * Call grub-install with --debug
  * Call grub-mkimage with --verbose
  * Call grub-mkdevicemap with --verbose
  * Call grub-mkconfig via bash like so: ''/bin/bash -x grub-mkconfig''

=== Building and installing your own grub image ===

  * This is the process grub-install should be doing for you.

  grub-mkimage -O i386-pc --output=/boot/grub/core.img --prefix="(<hostname>-root)/boot/grub" \
  biosdisk ext2 mdraid raid raid5rec lvm

  * The names at the end are modules found in ''/usr/lib/grub/i386-pc/''

  * To install the image on /dev/sda run

  grub-setup --verbose --directory=/boot/grub --device-map=/boot/grub/device.map /dev/sda

==== Speed up rebuild ====

  * Increase the target sync speed when the array is in use.<code>sudo sysctl -w dev.raid.speed_limit_min=50000</code>

  * Increase the read ahead.<code>sudo blockdev --setra 65536 /dev/md1</code>

  * Increase the stripe cache size (the value is in pages, so the memory used is roughly size x 4 KiB x number of disks).<code>sudo bash -c 'echo 32768 > /sys/block/md1/md/stripe_cache_size'</code>
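As a sketch of the memory cost of that last setting, using hypothetical values of 32768 cache entries, 4 KiB pages, and a four-disk array:

```shell
# Hypothetical values: stripe_cache_size entries, 4 KiB pages, 4 member disks.
cache_entries=32768
page_kib=4
disks=4

# Approximate memory used by the stripe cache, in MiB.
cache_mib=$(( cache_entries * page_kib * disks / 1024 ))
echo "stripe cache uses about ${cache_mib} MiB"
```

Worth doing before raising the value on a small-memory machine.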

==== Bugs ====

  * When resizing some arrays you may get <code>[ 4780.580972] md/raid:md0: reshape: not enough stripes.  Needed 512
[ 4780.597961] md: couldn't update array info. -28</code> in which case run <code>echo 600 > /sys/block/md0/md/stripe_cache_size</code>