Tag: LVM

Finally Done with this RAID project…

Three days of screwing around and as far as I can tell I’ve successfully moved all my data from one array to another while keeping the machine and data online the whole time–other than the few minutes of reboots to remove and replace hardware. Not bad for a SOHO file & web devel server.

Expanding things…

After the move was completed and the array was re-syncing, I expanded the LVM logical volume and the file system inside of it. Expanding the LV was simple and fast using lvextend (8) and the following command…

root@host:# lvextend -l +100%FREE data/home /dev/md#

It only takes a couple of seconds to allocate the new extents and return, and that's done.
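
A quick check of the volume group's free extents before, and the LV's size after, doesn't hurt; a small sketch using the VG and LV names from this setup:

root@host:# vgs data          # free extents available in the volume group
root@host:# lvs data/home     # the logical volume's new size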

Expanding the ext4 FS takes a bit longer, but as long as it’s being extended and not contracted, it can be done while the FS is online and mounted.

root@host:# resize2fs /dev/data/home

Adding -p to resize2fs (8), which prints progress information as the resize runs, would be handy; I didn't try it, so I can't say how well it works for an online grow.
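
If I do try it next time, it would presumably look like the following (untested here; as I read the man page, -p mostly reports progress for the offline resize phases):

root@host:# resize2fs -p /dev/data/home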

/dev/md127, wtf?

Rebooting to remove the old, now archival, hard drive and bringing the system back up raised an interesting head scratcher. The MD device that should have come up as md0 came up as md127. The /etc/mdadm.conf file looked correct, but the device wasn't being put where it should have been in /dev. The fix seems to be rebuilding the initramfs…

root@host:# update-initramfs -u

With that done and a reboot, the md device shows up as md0 like it's supposed to.
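
For completeness: as I understand it, the initramfs carries its own copy of mdadm.conf, so even when the on-disk file looks right, the ARRAY lines need to be present (or regenerated) before rebuilding it. Roughly, and noting that the config may live at /etc/mdadm/mdadm.conf on Debian-derived systems:

root@host:# mdadm --detail --scan >> /etc/mdadm.conf
root@host:# update-initramfs -u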

Additionally, md apparently now supports assigning arrays descriptive names, like "hostname:1", and the array shows up under /dev/md/ by that name in addition to the regular /dev/md* device.
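
For what it's worth, the name can be set when an array is created; a rough sketch with made-up device names, not something I ran here:

root@host:# mdadm --create /dev/md/1 --name=1 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1   # stored as "hostname:1" with 1.x metadata, and linked under /dev/md/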

A note on the Hitachi 5k3000s

I went with these drives based on this post on the Backblaze blog. Well, with the caveat that I’m still leery of 3TB drives, so I’m using 2TB drives. I don’t even think that the board or the SATA controllers I have in this box support >2TB drives (though the update later this month will).

More interestingly, the Hitachi 5k3000s are 512-byte sector drives, not 4K-sector "Advanced Format" drives. While 4K sectors do have advantages on large drives in terms of ensuring data integrity, they also become somewhat fun to partition and align around. Partitions have to be aligned to 4K boundaries (fdisk, like Windows' partitioning tools, aligns to 1MiB, i.e. 2048 512-byte sectors, when DOS compatibility mode is disabled); on top of that, you have to be careful where the MD device places its metadata, as that can shift the FS alignment as well. And for that matter, I have no idea what kind of overhead LVM adds in terms of alignment. In short, 4K drives are, IMO, still something of a mess, and probably will be for some time.
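
For what it's worth, checking where each layer actually starts is quick enough; a rough sketch, with placeholder device names:

root@host:# fdisk -l /dev/sdX                      # partition start sectors; 2048 = 1MiB aligned
root@host:# mdadm -E /dev/sdX1 | grep -i offset    # data offset of the md 1.x metadata, in sectors
root@host:# pvs -o +pe_start /dev/md#              # where LVM starts laying out physical extents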

One nice thing is that I'm seeing about 2x the performance from these Hitachi drives compared to the WD Greens, even though I believe the Greens were properly aligned. Benchmarks show the drives I have can do 140MB/s on the outer tracks. I don't get that yet, but I'm hopeful that a new system with a faster CPU (Xeon E3-1220) and more modern SATA controllers (not the ancient SiI3114 on this Tyan Thunder K8W) will get me closer to it.
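
For a rough sequential-read sanity check, not a real benchmark, something like this works (device name is a placeholder):

root@host:# hdparm -t /dev/sdX    # buffered sequential read timing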

The Brass Tacks: What I learned

  • mdadm's support for RAID 10 is lacking compared to all the other levels
  • LVM, especially pvmove (8), was more useful than just having resizable volumes.
  • Planning storage, especially for growth, is a pain in the rear when you can't just throw a ton of disks at it

Breaking Arrays, Moving Data, LVM Good for something

Who knew LVM would be good for something? Well, maybe; I'll know for sure sometime tomorrow, or late tonight. If it works, it'll be great; if it doesn't, I'll be damn glad I backed up these drives.

Yeah, so back to LVM. I always wondered whether creating an LVM volume on top of an MD RAID volume was a good idea, or whether it was just adding extra overhead. An ext4 partition can be extended without the help of LVM, and so can an MD RAID device. So why add the extra layer?

pvmove

That’s why.

Breaking Arrays, Making Arrays

Wanting to avoid the “blow it away and restore from backup” strategy, especially since WD Caviar Greens are so damn slow compared to just about everything else, I decided the best course of action would be to split the existing unresizable md array and create a new second one. Something like….

mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing

The end result: two degraded but fully functional md arrays, one still hosting the data volume group with my home logical volume, and one with a big empty disk.
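
At that point both arrays can be sanity-checked; something like this, with the exact output varying:

cat /proc/mdstat          # both arrays listed, each running with one member and one missing
mdadm --detail /dev/md1   # the new array should show up as degraded but clean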

The trick now is to move the data.

LVM Really is Good for Something

The question of how to move the data stumped me for a bit. I could create a new volume group (VG), or at least a new logical volume in the same data VG I already had, format it, and rsync the data across. Of course, then I would have to edit at least my /etc/fstab to get things pointed to the right place. The alternative that came up as I was digging through the LVM documentation is a nifty function called pvmove (8) that will move the physical extents of an LVM physical volume from one drive to another within a volume group (or onto multiple drives in the volume group if needed). Moreover, as best as I can interpret the docs, it does this in a way that's safe to do with the system online.

All told, for my system, the process looked something like this…

pvcreate /dev/md1    # initialize the new array as an LVM physical volume first, if vgextend doesn't do it implicitly
vgextend data /dev/md1
pvmove /dev/md0 /dev/md1

Now it's back to the waiting game. It'll be 5 or 6 hours before the pvmove is complete, then I have to tear down the md0 raid array and add the /dev/sdd device that's left in md0 to md1. That will necessitate a 6-hour-or-so re-sync. After which, I'll reboot, make sure md1 becomes md0 and everything is found properly. Then it should hopefully be a short task of expanding the logical volume from 1.5TB to 2TB and then the ext4 file system inside of it. If not, well, I'll be damn glad I made that 6-hour-long backup, won't I?
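
For reference, the tear-down and re-add I have in mind would look roughly like this. It's a sketch, not a tested recipe, and /dev/sdd1 is my guess at the partition name for the member left behind in md0:

vgreduce data /dev/md0              # drop the now-empty physical volume from the volume group
pvremove /dev/md0                   # clear the LVM label off it
mdadm --stop /dev/md0               # tear down the old array
mdadm --zero-superblock /dev/sdd1   # wipe the old md metadata from the leftover member
mdadm /dev/md1 --add /dev/sdd1      # add it to the new array and kick off the re-sync
cat /proc/mdstat                    # watch the re-sync progress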