Rebuilding Software RAID

Rebuilding an array currently requires us to manually recreate the partition table on the fresh hard drive. First we need to know which drive is still active and which one is the replacement. To see this, run the following command:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0]
      104320 blocks [2/1] [U_]

md1 : active raid1 sdb2[0]
      1052160 blocks [2/1] [U_]

md2 : active raid1 sdb3[0]
      243039232 blocks [2/1] [U_]

unused devices: <none>

To read this, look at the first managed disk, md0. The [2/1] and [U_] fields tell us that only one of the two mirrors is present; the missing member is shown as an underscore. If you are adding back a partition on a drive that is not empty, you may need to keep track of which physical drive each partition lives on. For the current purposes we will assume that we are installing a fresh, unpartitioned drive.
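The degraded arrays can also be picked out mechanically by looking for an underscore in the status field. Here is a sketch run against the sample output above; on a live system you would read /proc/mdstat directly instead of the here-document:

```shell
# Print the name of each array whose status field contains a "_"
# (a missing mirror). The here-document stands in for /proc/mdstat.
awk '/blocks/ && /_/ { print prev, "is degraded" } { prev = $1 }' <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb1[0]
      104320 blocks [2/1] [U_]
md1 : active raid1 sdb2[0]
      1052160 blocks [2/1] [U_]
md2 : active raid1 sdb3[0]
      243039232 blocks [2/1] [U_]
unused devices: <none>
EOF
```

Against the sample output this reports md0, md1 and md2 as degraded.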

We need to see how the current drive is partitioned, so we can duplicate the same partition table on the new drive:

# fdisk -l /dev/sdb

Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdb2              14         144     1052257+  fd  Linux raid autodetect
/dev/sdb3             145       30401   243039352+  fd  Linux raid autodetect

... and just to verify that /dev/sda is blank ...

# fdisk -l /dev/sda

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

Now we need to edit the partition table on /dev/sda to match exactly what we see on /dev/sdb.

For the cylinder start and end values, just refer to the existing partition table: the fdisk -l output above lists "Start" and "End" for each partition. Copy these exactly as they appear and the new partitions will be created with the same layout.
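If you would rather not retype the values by hand, the table can usually be cloned in one step with sfdisk. This is a sketch, not part of the original procedure; double-check the device names first, because running it against the wrong disk destroys that disk's partition table:

```shell
# Dump the partition table of the healthy drive (sdb) and write the
# same layout to the new drive (sda). DESTRUCTIVE on /dev/sda.
sfdisk -d /dev/sdb | sfdisk /dev/sda
```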

We need to mark the boot partition as bootable, or this drive won't be much use if the other one dies.

Next, set the partition type to 'fd' (the hex code for a Linux RAID autodetect partition) on all of the partitions.

Let's look at the partition table we created; it should be identical to the one above from the existing hard drive.

If it looks good, use the w command to write the table to disk and exit.
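Taken together, the fdisk session can be fed non-interactively, roughly like this. This is a sketch only: the cylinder numbers are the ones from the /dev/sdb table above, the answer sequence assumes an older util-linux fdisk whose prompts may differ from yours, and it is DESTRUCTIVE on /dev/sda if actually run. Work through it interactively first if you are unsure:

```shell
# n = new primary partition (number, first cylinder, last cylinder),
# a = toggle bootable flag, t = set type to fd (Linux raid autodetect),
# p = print for verification, w = write and exit.
fdisk /dev/sda <<'EOF'
n
p
1
1
13
n
p
2
14
144
n
p
3
145
30401
a
1
t
1
fd
t
2
fd
t
3
fd
p
w
EOF
```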

I usually run 'cat /proc/mdstat' again at this point so I can see the partitions and compare against it as I add the newly created partitions back into the RAID array.

Now we add each of the partitions back into the managed disks, one at a time. I run 'cat /proc/mdstat' again after each addition to make sure it worked.
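The additions themselves are done with mdadm. This sketch matches the device names above (sda is the fresh drive, and each partition goes back into the array it mirrors):

```shell
# Add each new partition back into its mirror, one at a time,
# checking /proc/mdstat between additions.
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda3
cat /proc/mdstat   # each sda partition should appear and a resync should start
```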

The last step is to update/refresh the grub configuration on both drives. These steps are run against the whole drive (e.g. /dev/sda, not /dev/sda1) for each member of the RAID array:

NOTE: the device name changes, but the grub values (hd0) do not. This ensures that the drives are detected properly in a failure situation.
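With legacy grub, the reinstall looks roughly like this. It is a sketch assuming /boot is the first partition on each drive; note how the device line changes per drive while root and setup keep using (hd0), exactly as the note above describes:

```shell
# Reinstall the boot loader on both drives so either one can boot
# alone. Only the "device" line names a different disk each time;
# (hd0) stays the same throughout.
grub --batch <<'EOF'
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```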

And that is it: the array is rebuilding, as you can see. In this case the largest partition should take about an hour and a half to rebuild, while the smaller ones finish almost as fast as you can type the commands.
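To follow the resync without retyping the command, a standard trick (not part of the original write-up) is:

```shell
# Refresh the rebuild status every five seconds; Ctrl-C to stop.
watch -n 5 cat /proc/mdstat
```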
