

Breaking Arrays, Moving Data, LVM Good for Something

Who knew LVM would be good for something? Well, maybe. I’ll know for sure sometime tomorrow, or late tonight. If it works, it’ll be great; if it doesn’t, I’ll be damn glad I backed up these drives.

Yeah, so back to LVM. I always wondered whether creating an LVM volume on top of an MD RAID volume was a good idea, or whether it was just adding extra overhead. An EXT4 partition can be extended without the help of LVM, and so can an MD RAID device. So why add the extra layer?
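(For the record, growing things without LVM looks roughly like this; the device name here is a placeholder, not from my actual setup.)

mdadm --grow /dev/mdX --size=max   # grow the array to use all available member space
resize2fs /dev/mdX                 # then grow the EXT4 file-system sitting on it to match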

pvmove

That’s why.

Breaking Arrays, Making Arrays

Wanting to avoid the “blow it away and restore from backup” strategy, especially since WD Caviar Greens are so damn slow compared to just about everything else, I decided the best course of action would be to split the existing unresizable md array and create a new second one. Something like….

mdadm /dev/md0 --fail /dev/sdb1                                        # mark the member as failed
mdadm /dev/md0 --remove /dev/sdb1                                      # pull it out of the old array
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing   # new degraded mirror on the freed disk

The end result: two degraded but fully functional md arrays, one still hosting the data volume group with my home logical volume, and one with a big empty disk.
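A quick sanity check at this point, just to confirm both arrays are up and each is running with one missing member:

cat /proc/mdstat           # both arrays should be listed, each degraded
mdadm --detail /dev/md0
mdadm --detail /dev/md1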

The trick now is to move the data.

LVM Really is Good for Something

The question of how to move the data stumped me for a bit. I could create a new volume group (VG), or at least a new logical volume in the same data VG I already had, format it, and rsync the data across. Of course, then I would have to edit at least my /etc/fstab to get things pointed at the right place. The alternative that came up as I was digging through the LVM documentation is a nifty command called pvmove(8), which moves the physical extents of a logical volume from one physical volume in a volume group to another (or onto multiple physical volumes, if needed). Moreover, as best as I can interpret the docs, it does this in a way that’s safe to run with the system online.

All told, for my system, the process looked something like this…

vgextend data /dev/md1     # add the new array to the existing volume group as a second PV
pvmove /dev/md0 /dev/md1   # migrate every extent off the old array onto the new one
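Two pvmove details worth knowing before kicking this off (both straight from pvmove(8)): it reports its progress as a percentage while it runs, and an interrupted move can be picked up again later.

pvmove            # with no arguments, restarts any unfinished pvmove
pvmove --abort    # or abort the move in progress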

Now it’s back to the waiting game. It’ll be 5 or 6 hours before the pvmove is complete; then I have to tear down the md0 RAID array and add the /dev/sdd device that’s left in md0 to md1, which will mean another 6 hours or so of re-syncing. After that, I’ll reboot and make sure md1 becomes md0 and everything is found properly. Then it should hopefully be a short task of expanding the logical volume from 1.5TB to 2TB, and then the EXT4 file-system inside of it. If not, well, I’ll be damn glad I made that 6-hour-long backup, won’t I?
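For my own reference, the remaining steps should look roughly like this. The partition (/dev/sdd1) and logical volume (data/home) names are my best guess at what I’ll actually be typing; double-check against /proc/mdstat and lvs before running any of it.

vgreduce data /dev/md0                # drop the now-empty old PV from the volume group
pvremove /dev/md0                     # clear its LVM label
mdadm --stop /dev/md0                 # tear down the old array
mdadm --zero-superblock /dev/sdd1     # wipe the md metadata off the leftover disk
mdadm /dev/md1 --add /dev/sdd1        # add it to the new array and wait out the re-sync
lvextend -l +100%FREE /dev/data/home  # grow the logical volume into the extra space
resize2fs /dev/data/home              # then grow the EXT4 file-system to match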

MD, RAID10, ARRRRRrrrrrrgggghhhh!!!!

Normally the complexity of doing something in Linux doesn’t bother me. Arcane and convoluted commands don’t scare me; they never really have. They just take some getting used to. The problem I have is when the command, or the underlying system, is only half implemented.

My current project has been replacing a pair of 1.5TB WD Caviar Greens with 2TB Hitachi 5k3000s. Yes, I see the irony in replacing WD drives with drives made by a company that just sold its drive division to WD. On the upside, 500GB more per drive nets me enough room to back up the rest of the computers on the network and still have as much free space as I had before, which was running low anyway; oh, and the Hitachis are faster too.

Replacing the drives in the RAID array has gone smoothly enough using the following procedure:

  1. Fail the disk to remove using mdadm /dev/md0 --fail /dev/sdX#
  2. Remove the disk from the array using mdadm /dev/md0 --remove /dev/sdX#
  3. Power down the machine (hot swap is coming in a future upgrade)
  4. Swap the physical drives
  5. Bring the machine back up
  6. Add the new drive to the array using mdadm /dev/md0 --add /dev/sdX#
  7. Let it re-sync.
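For that last step, keeping an eye on the re-sync is just a matter of watching /proc/mdstat; something like this is all I use:

watch -n 60 cat /proc/mdstat   # shows re-sync progress and a finish estimate, refreshed every minute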

I’ve done this for both 1.5TB Greens: one that was failing, and one that’s now going to become a proper backup target.

Now that I have two 2TB drives in there, I want to use them, and that means growing the md array to the full size of the new disks. So far as I can tell, that should be a simple…

mdadm -G /dev/md0 --size=max

…but apparently that’s not the case if the array is configured as RAID10. RAID10 gives the performance of RAID0 with the redundancy of being able to lose a disk, which IMO is perfect for slow 5K RPM disks. MD even has a nice feature where a RAID10 array can be created in a partial 2-disk configuration and then extended to the full 4+ disk configuration later. In the “partial” mode, it behaves exactly like a RAID1 array.
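(Creating one of those partial arrays is nothing exotic; this is roughly how mine was set up in the first place, with the device names as placeholders:)

mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sdb1 /dev/sdd1   # 2-disk RAID10, effectively a mirror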

Which brings me to the meat of this rant. I can resize a RAID1 array, and I can convert a RAID1 array to RAID5, RAID6, or even RAID0. However, mdadm can’t resize a RAID10 array, even one that’s running in what amounts to RAID1 mode, nor convert it to RAID1 or any other RAID level, for that matter.

Sigh…

Now it’s off to back up the damn thing, kill it, rebuild it, and restore everything…. At least I’ll know if my backup procedure works.
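In case anyone’s curious (or I need to remind myself later), the nuke-and-pave path looks roughly like this. Device, VG, and LV names are the ones from my setup as best I remember them; treat it as a sketch, not a script.

umount /home                                                              # get everything off the array first
vgchange -an data                                                         # deactivate the volume group sitting on it
mdadm --stop /dev/md0                                                     # kill it
mdadm --zero-superblock /dev/sdb1 /dev/sdd1                               # wipe the old metadata
mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sdb1 /dev/sdd1   # rebuild it at the new size
pvcreate /dev/md0                                                         # put LVM back on top
vgcreate data /dev/md0
lvcreate -l 100%FREE -n home data
mkfs.ext4 /dev/data/home
# ...then mount it and restore everything from the backup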