
Category: Computers

Breaking Arrays, Moving Data, LVM Good for something

Who knew LVM would be good for something. Well, maybe; I’ll know for sure sometime tomorrow, or late tonight. If it works, it’ll be great; if it doesn’t, I’ll be damn glad I backed up these drives.

Yeah, so back to LVM. I always wondered whether creating an LVM volume on top of an MD RAID volume was a good idea, or whether it was just adding extra overhead. An EXT4 partition can be extended without the help of LVM, and so can an MD RAID device. So why add the extra layer in there?

pvmove

That’s why.

Breaking Arrays, Making Arrays

Wanting to avoid the “blow it away and restore from backup” strategy, especially since WD Caviar Greens are so damn slow compared to just about everything else, I decided the best course of action would be to split the existing unresizable md array and create a new second one. Something like….

mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing

The end result: two degraded but fully functional md arrays, one still hosting the data volume group with my home logical volume, and one with a big empty disk.

The trick now is to move the data.

LVM Really is Good for Something

The question of how to move the data stumped me for a bit. I could create a new volume group (VG), or at least a new logical volume in the data VG I already had, format it, and rsync the data across. Of course, then I’d have to edit at least my /etc/fstab to get things pointed at the right place. The alternative that came up as I was digging through the LVM documentation is a nifty tool called pvmove(8), which moves the physical extents of a logical volume from one physical volume to another within a volume group (or to multiple physical volumes in the volume group if needed). Moreover, as best as I can interpret the docs, it does this in a way that’s safe to do with the system online.

All told, for my system, the process looked something like this…

vgextend data /dev/md1
pvmove /dev/md0 /dev/md1

Now it’s back to the waiting game. It’ll be 5 or 6 hours before the pvmove is complete; then I have to tear down the md0 RAID array and add the /dev/sdd device that’s left in it to md1. That will necessitate another 6-or-so-hour re-sync. After that, I’ll reboot and make sure md1 becomes md0 and everything is found properly. Then it should hopefully be a short task of expanding the logical volume from 1.5TB to 2TB, and then the EXT4 file system inside of it. If not, well, I’ll be damn glad I made that 6-hour-long backup, won’t I?
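
For the record, the remaining steps should look roughly like this. Consider it a sketch rather than a tested recipe; the data volume group and home logical volume are as described above, and /dev/sdd1 assumes the usual single-partition layout on that leftover disk.

# pull the old array out of the volume group, then tear it down
vgreduce data /dev/md0
pvremove /dev/md0
mdadm --stop /dev/md0
# wipe the old superblock and move the leftover disk into the new array
mdadm --zero-superblock /dev/sdd1
mdadm /dev/md1 --add /dev/sdd1
# grow the logical volume into the new space, then the file system inside it
lvextend -l +100%FREE /dev/data/home
resize2fs /dev/data/home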

MD, RAID10, ARRRRRrrrrrrgggghhhh!!!!

Normally the complexity of doing something in Linux doesn’t bother me. Arcane and convoluted commands don’t scare me; they never really have, they just take some getting used to. The problem I have is when the command, or the underlying system, is only half implemented.

My current project has been replacing a pair of 1.5TB WD Caviar Greens with 2TB Hitachi 5K3000s. Yes, I see the irony in replacing WD drives with drives made by a company that just sold its drive division to WD. On the upside, the extra 500GB per drive nets me enough space to back up the rest of the computers on the network and still have as much free space as I had before, which was running down anyway; oh, and the Hitachis are faster too.

Replacing the drives in the RAID array has gone smoothly enough using the following procedure:

  1. Fail the disk to be removed using mdadm /dev/md0 --fail /dev/sdX#
  2. Remove the disk from the array using mdadm /dev/md0 --remove /dev/sdX#
  3. Power down the machine (hot swap is coming in a future upgrade)
  4. Swap the physical drives
  5. Bring the machine back up
  6. Add the new drive to the array using mdadm /dev/md0 --add /dev/sdX#
  7. Let it re-sync (progress can be watched as shown below).
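
For step 7, watching the re-sync progress is easy enough with either of the usual suspects:

cat /proc/mdstat
mdadm --detail /dev/md0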

I’ve done this for two of the 1.5TB Greens: one that was failing, and one that’s now going to become a proper backup target.

Now that I have two 2TB drives in there, I want to use them, and that means extending the md array to the full size of the new drives. So far as I can tell, that should be a simple…

mdadm -G /dev/md0 --size=max

…but apparently that’s not the case if the array is configured as RAID10. RAID10 gives the performance of RAID0 with the redundancy of being able to lose a disk, which IMO is perfect for slow 5400 RPM disks. MD even has a nice feature where a RAID10 array can be created in a partial 2-disk configuration, then extended to the full 4+ disk configuration later. In that partial mode, it behaves exactly like a RAID1 array.

Which brings me to the meat of this rant. I can resize a RAID1 array; I can convert a RAID1 array to RAID5, 6, or even 0. However, mdadm can’t resize a RAID10 array, even one that’s running in what amounts to RAID1 mode, or convert it to RAID1 or any other RAID level for that matter.

Sigh…

Now it’s off to back up the damn thing, kill it, rebuild it, and restore everything…. At least I’ll find out whether my backup procedure works.
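
For anyone following along, the nuke-and-pave route should look roughly like this. It’s only a sketch: the partition names are from my box, the 2-disk RAID10 create leans on MD’s partial-array support mentioned above, and it assumes rebuilding the same data/home LVM stack I use elsewhere.

# stop the old array and wipe the superblocks (destructive!)
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb1 /dev/sdd1
# recreate it as a 2-disk RAID10
mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sdb1 /dev/sdd1
# rebuild the LVM stack and file system on top, then restore from backup
pvcreate /dev/md0
vgcreate data /dev/md0
lvcreate -l 100%FREE -n home data
mkfs.ext4 /dev/data/home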

Wordpress is a Pig and Dreamhost’s VPSes aren’t configured for it.

I’ve been fighting with this for quite some time. I moved to a VPS over a year ago in hopes of a more stable Dreamhost experience, and for a while it was. Then, about 9 months ago, my site started crashing out of the blue. I’d be chugging along just fine and then, ā€œblam!ā€, site down. I started aggressively caching things with WP Super Cache, then W3 Total Cache. It helped a little, but ultimately things just got more and more unstable. About 2 months ago I gave up on using phpMyAdmin when I needed to do SQL stuff, simply because it was an instacrash for my VPS. About 3 weeks ago, I’d had enough and decided it was time to seriously track down the problem.

To make a long story short, Wordpress is a massive memory hog. I’m pushing 30MB on average before I even start loading plugins. That’s not a lot if you have an 8GB server dedicated to nothing but pushing Wordpress, but 30MB is 5-10% of a small VPS. I’ve gone through all the Wordpress tuning guides I can find. I’ve manually cleaned up the database. Nothing really helps. Of course, if the server were configured for the load it has, the problem would be considerably smaller.

Which brings us back to Dreamhost’s VPSes. They say the configuration is designed to scale with the RAM that’s allocated to the VPS. Sure, maybe if you’re serving static HTML pages. In that case, the 69 concurrent clients configured on a 400MB server would get ~6MB apiece, which is just barely enough for Apache to serve static HTML. Even then it doesn’t really work out, since there’s non-Apache overhead too. In fact, now that I think about it, by default under full load, Apache is configured in such a way that it can easily exceed a VPS’s memory allotment just serving static content. 😮

Then comes mod_fcgi. By default, it’s configured to allow 20 instances per process class (I’ll come back to this), and the Apache default is 1000.

What are process classes, and why are they important? Processes in a class are spawned by the same executable and share a common virtual host and identity. For example, if my virtual host for cult-of-tech spawns a CGI process, that process can’t be used by another virtual host on my server. Now here’s the kicker: when Wordpress gets going and everything is loaded, that 30MB+ of Wordpress, plus all the PHP overhead, plus whatever space you allot for caching (XCache, APC), is how big each fcgi instance will be. In my case, that means each php.cgi instance is 60-70MB. On a 400MB server, once you spawn 5-6 PHP processes you’ve used up the entirety of your VPS’s memory and, again, blam!
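
If you want to see what your own instances weigh, ps will tell you. Something like this works (php.cgi being the process name on my setup; adjust for yours):

# RSS is the resident size of each process, in KB
ps -C php.cgi -o pid,rss,args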

The real kicker, though, is Dreamhost’s overly aggressive memory manager on their VPSes. Instead of killing off individual processes, or for that matter special-casing Apache and just restarting it if it’s running, the watchdog simply kills off the VPS. Well, it may do more, because it can take 10 minutes for the VPS to come back up unless you manually reboot it.

Interestingly enough, the answer to all of this is not simply to throw money at it. In the process of troubleshooting I temporarily pushed my memory limits up, and even at 600MB or 800MB the config would still allow enough processes to crash the server. For that matter, my development server, which has a ton of RAM available, can comfortably do many of the things that were causing my VPS to crash without exceeding a 200MB memory footprint.

Simply put, there’s no reason a lightly trafficked Wordpress site should require more than 300MB, maybe 400MB, and certainly not 600MB, just to stay upright. At least not with a properly configured server behind it.

The moral of the story is:

  • Wordpress is a memory pig, and they need to seriously consider a couple of releases focusing entirely on performance and lowering the memory footprint.
  • Dreamhost’s configuration for Apache and mod_fcgi on their VPSes is overly generous for small servers and needs to be curtailed to more reasonable numbers.
  • Dreamhost’s VPS memory watchdog is aggressive and naive, and will take down a server in a way that’s hard to recover from quickly, to ensure it doesn’t use more resources than the client is paying for.

And what am I doing about this?

I’ve curtailed my Apache and mod_fcgi configs to more reasonable settings.

I’ve set mod_fcgi’s MaxProcesses directive to floor((400 - typical_process_size) / typical_process_size) and my Apache MaxClients to floor(((typical_process_size * 2) - 20) / 5). I won’t be any more specific than that, because what will actually work while still being performant varies based on site, software, traffic, caching, and number of virtual hosts.
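
As a worked example, plugging in the 70MB per-instance figure from above on a 400MB VPS (your numbers will differ):

echo $(( (400 - 70) / 70 ))      # MaxProcesses: 4
echo $(( (70 * 2 - 20) / 5 ))    # MaxClients: 24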

Windows’ Registry Editor Needs an Address Bar

I don’t use the registry editor all that much, but when I do, I’m almost always going to some specific spot, not browsing around the tree. I should be able to type HKLM\blah\blah\blah and have the registry editor open up to that part of the hive right away. Need I say more?

Arrrgh, I’m just going to complain…

…because I’m too fed up to do anything else about it.

For the past several months I’ve had no end of fun with my VPS. This site, and more importantly my photography site, pointsinfocus.com, have been experiencing random downtime. Worse yet, I can almost always bring down the VPS simply by making a post on Points in Focus.

The thing is, I can’t find any damn reason this should be happening. I’ve been monitoring memory usage on my development server, running identical plugins and software, and I never exceed my VPS’s memory limits; I don’t even get close. Even running top on the VPS nets nothing intelligible, always showing at least several tens of MB of memory free, and then, blam, I get disconnected as the VPS’s processes are killed and the VPS becomes inaccessible. By the time that happens, there’s no good reason why, so far as I can tell. The VPS never exceeds the allocated/available RAM, and CPU load is negligible as well.

I get the feeling that something, even though I can’t figure out what, is exceeding some instantaneous limit and triggering the process killer. But given the tools available, and the fact that after the processes are killed there’s no way to get at /proc/PID/status to see the high-water mark either, it’s kind of frustrating to try and solve. Of course, even increasing the VPS’s available RAM doesn’t seem to help.
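
If you can catch a suspect process while it’s still alive, the kernel’s high-water mark is right there in /proc. Something like this, with 1234 standing in for the PID in question:

# VmHWM is the peak resident set size recorded for the process
while sleep 5; do grep VmHWM /proc/1234/status; done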

You know, I get the whole idea of VPSes and memory limits, but seriously, if you’re going to kill people’s processes because they hit their limits, the least you could do is provide them with some information about what triggered the kill. Better yet would be to make the limit ā€œsoftā€ enough that they could actually see what’s going on with the tools available.

Sigh….

Someday I’ll have to open a ticket with Dreamhost about this, but not today…

Wordpress: Add Page Stats to the Admin Bar

I find it handy, especially when developing and tuning templates and plugins, to get some info about how the page performed.

Prior to Wordpress 3, I’d define a constant in the wp-settings.php file on my development server and have my theme check whether it was set; if it was, the theme would insert a small fixed-position div containing memory, DB query, and page render statistics. It worked, but it wasn’t very elegant.

Wordpress 3 introduced the concept of the admin bar. Available to all logged-in users, the admin bar provides a convenient way to handle and display some simple page stats.

Currently the plugin displays 3 stats: max memory used, number of DB queries reported by $wpdb, and page render time. It appears for all logged-in users.

Download: cot-admin-bar-stats.zip

Magicka

I’m not a big gamer, and when I game, I tend to spend way too much time playing, then get totally burned out. That’s likely what’s going to happen again, this time with Magicka from Paradox Interactive. Then I’ll scream and cry for a week while my fingers recover.

What is Magicka?

Well, in short, it’s a fantasy RPG that doesn’t take itself, or the whole genre, too seriously. It starts in a Harry Potteresque castle, with you, the obvious n00b wizard, being given a quest by Vlad (who’s not a vampire, or so he insists) to save some kingdom.

Quick now, run along and get to your saving the world party, before the other wizards eat all the cheese.

The voice acting is a good bit over the top, in a language best described as being one bork-bork-bork away from Swedish Chef.

The game’s core mechanic is a multi-element casting system that’s almost one too many borks past complicated. You have 8 elements: earth, water, cold, fire, electricity, life, shield, and necromancy. Each type has its own ā€œstyleā€ of attack, from electricity’s chain lightning to earth’s hurling of boulders. Some types cancel each other out when combined; others combine into a proton beam of death that would toast even the puffiest Stay Puft man.

With 5 elemental slots in a spell, there’s plenty to play with and learn. Moreover, in many cases strength is increased by using more elements. For example, 1 fire makes a nice little flamethrower, but use 5 and you’ve got Mount Vesuvius coming out of your staff.

That’s the fun part.

The not-so-fun part is trying to cast them in combat. Movement is Diablo-style point and click, quite a departure if you’re used to WASD-style games at this point. Elements are conjured with q, w, e, r, a, s, d, and f. Right-clicking casts a directional spell, shift+right-click casts an area-effect spell, middle-click casts on yourself, shift+left-click swings your sword, but shift+left-click with a spell ready casts it on your sword.

Confused yet?

But wait there’s more.

The keys that summon spells can be remapped, but not the buttons that cast them. Have a Logitech mouse with the middle button set to switch between clicking and free-wheeling? Better remap that shit, or you’ll never be able to heal yourself.

Equipment: thar be dragons here.

Wizards are equipped with a sword and a staff, which is probably why they drown in anything more than ankle-deep water. There’s no inventory, though, which makes the whole drowning-in-more-than-ankle-deep-water thing even more bizarre (must be the velour robes). The real mystery is why the guy in Minecraft, who can hold almost 2000 cubic meters of iron, can swim without problems but can’t cast a spell to save his life, while a wizard with no inventory whatsoever sinks faster than the rock dropped in the lake next to him.

No wizard is properly equipped without a sword, and Magicka draws on the best of the best. Excalibur makes an early appearance. Though, since you’re clearly not King Arthur, you’ll have to settle for smashing people with the boulder it’s stuck in instead of slicing them to bits. Perhaps that’s not a bad tradeoff, actually.

This isn’t a game about loot. Though there’s an inventory key, pressing it brings up a more detailed description of what you’re holding. There’s no money or loot, and the spells you learn are listed under your tiny little health bar. Unfortunately, if you haven’t learned a spell in-game, you can’t use it, even if you’ve memorized the keystrokes. This is a bit of a bother in the arena, where it would be real handy to have haste before you get smashed into oblivion.

The lack of any inventory is actually slightly maddening when weapons and staffs provide standard RPG-style buffs and special abilities, but you can’t put them away without swapping them for one on the ground. That awesome staff of healing that keeps you and your shield healed, while awesome, also heals the guy attacking you, and anybody within a good bit of range as well, and you can’t put it away for a while to fight. (Not that you should, since clearly you should be acting as a ranged character in this case. Which brings us to another ā€œnot that you couldā€, since fights become little more than insane free-for-alls.)

The one good thing is that the developers have provided an arena of sorts where you can hone your combat prowess against hordes of angry mobs.

In short, combat boils down to spamming the left, right, and middle mouse buttons while flailing on the element keys, hoping the whole time you don’t light yourself on fire instead of laying a perimeter of landmines in front of the volcano defensive wall, all the while running away like the underpowered pansy you really are compared to a forest troll or dragon.

With all the game has going for it (including an appearance by Knights Who Say Ni summoning shrubberies like druids), it also has a huge swath of glitches, bugs, and poor design decisions.

The save system in The Legend of Zelda: Twilight Princess killed that game for me. Not being able to save at any point is quite possibly the single worst design decision a developer can make, and I can’t think of anything worse in terms of laziness. Yeah, Magicka does it that way.

Levels are interspersed with checkpoints where you resume when you die, but otherwise you must complete a whole level in a sitting. Fortunately, levels don’t take very long to complete, so it’s not that big of a deal for the time-sensitive gamer.

Compounding matters is the apparent lack of any form of enemy scaling, in either number or difficulty. A single-player game appears to have the same number of head-ripping-off trolls as a multiplayer game.

If you watch David ā€œXā€â€™s gameplay video on YouTube, he and his friend easily handle Jormungandr. In single player I get pwned in 30 seconds. No ability to receive heals or raises makes for a rather interesting, if one-sided, fight.

Judging by my playthrough so far, I’m going to say single player is harder than multiplayer by a large margin. It probably doesn’t help that the controls keep prompting me to light myself on fire.

There’s a point where killing yourself in new and interesting ways is amusing, and there’s a point where you just want to move on. Right now, I’m quickly approaching the latter.

Magicka is yet another digital-download-only game. That’s both a good thing and a bad thing in my opinion. The good: you don’t have to go buy a box. The bad: you have to wait for it to download; and if you don’t have Steam installed, you have to wait for that to download, and then you have to wait for the damn patches to download.

I WANT TO PLAY MY DAMN GAME ALREADY!!!!!!!

All told, I spent 4 hours (yes, I’ll admit my internet connection is not as awesome as it should be) waiting for the bloody game to download and install before I could get into playing it.

One final note: Paradox is apparently not known for releasing stable, well-behaved games at launch. Magicka is full of its share of glitches and bugs; so much so that it was unplayable for many on release. It’s still buggy, and there are still problems, but Paradox appears to be trying to patch them as quickly as they can. I haven’t run into any game-ending bugs, but I have run into a few rather annoying glitches.

So be warned, this game could be a buggy mess, or it may not be.

That said, Steam has just informed me that I’ve already played a total of 117 minutes, and I only got the game before going to bed last night/this morning. For $9.99, it’s well on its way to being a better buy than a movie ticket, IMO.

Buy Magicka from Amazon

Logic Lost: Used Game Sales Hurt Developers

Ars Technica is reporting on a Penny Arcade article that’s reporting on a blog post by ComputerAndVideoGames.com on how buying used games hurts game developers.

Okay, first things first: I ain’t got no dog in this horse race. I’m not a gamer; I’m not in their market to start with. Well, that’s not completely true. I do play some games, though at this point they’re becoming rather antiquated ones, and new game releases that I have any interest in are few and far between.

What bothers me, and why I even bring this whole thing up, is that apparently somewhere along the line some large portion of the population stopped participating in the logical world and started off on some other path.

The argument that game developers are making is this:

  1. Games cost money to make and to support for online multiplayer play.
  2. That money comes from sales of the game.
  3. There are no secondary income sources in the gaming industry, like concerts; game releases are like a movie’s theater release with no DVD to follow.
  4. Ergo, when people sell games, they hurt the developer, because the money that trades hands doesn’t support the developer’s ability to keep the game’s servers online. (I assume nobody gives a shit about single-player games anymore, now that we have the magical internet.)

The problem with this is that 4 does not follow from 3. When the seller relinquishes their game disc, they also relinquish the ability to play the game. That’s what all the DRM that’s supposed to keep people from pirating games is for: you don’t get to keep a playable copy when you give the media away. Because of all that fun DRM, there’s no increase in the number of players. As far as the game companies know, the original user’s IP address changed (yes, I know it’s more complicated than that, with accounts and whatnot).

The point is, from the developer’s perspective, the costs to run the server for some period of time into the future were paid when the game was originally sold. There’s no difference in costs between:

  1. The original player playing online for 3 years.
  2. The original player playing online for 1 year, then selling the game to another buyer who plays online for 2 years.
  3. The original player playing online for 1 year, then selling to someone else who plays online for 1 year, who then sells to a third party who plays online for 1 year.

The point is, so long as the play time online is the same, the costs are the same.

The failure in reasoning is really maddening.

However, what might be the most telling part of all this is what it says about the way developers budget for multiplayer capacity. What the developers are really admitting is that they are, in the long term, overselling their servers. That is, they don’t really expect anybody to play the game for 3 years; they expect people to play it for 6 months and then go play the next game they release.

Of course, this problem is mostly of their own making as well. Since multiplayer play is now almost exclusively handled through a developer/publisher-owned server, ostensibly for piracy reasons, the cost of running those servers falls exclusively on the shoulders of the developers.

By comparison, games that have stand-alone servers shift the burden of hosting multiplayer games onto the community. If the community is big enough to support continued access some years down the road, the community, not the developer/publisher, will find a way to foot the bill for those multiplayer games.

By tightening the noose around how people can play online, they cut their own necks when it comes to who foots the bill for online play.

Broken Features are Worse Than No Features

When it comes to software, providing a feature or function that’s broken is worse than not providing the feature at all.

Let me weave the story of broken functionality from this weekend.

Google’s Apps for Domains provides 2 ways to reset your administrative password. Either you have an alternative address configured, and a check-box checked, allowing Apps for Domains to reset your password using that address; or, if you missed that configuration option, you can go through an alternative process that involves creating a DNS record with a value provided by Google to verify that you own the domain.

From what I’ve gathered, the functionality is supposed to work like this: you create a CNAME record with a name provided by Google and point it at google.com. Google’s system is then supposed to look up that CNAME record and verify that it points to Google’s domain. If it does, the instructions for resetting your password are sent to the email address you provide when you go through the recovery process.
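
Incidentally, you can at least verify the record yourself from a shell before blaming Google. With dig, for instance (the record name here is a placeholder for the string Google hands you):

dig +short some-google-token.example.com CNAME
# should print: google.com.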

The problem is, the system is broken right now.

What should happen and what actually happened are two different things. In reality, I’ve received 2 automated responses from Google indicating that the system couldn’t verify the CNAME record. After searching the Google Apps support forums, it turns out the CNAME method is currently broken, and Google is aware of the issue but hasn’t bothered updating anything on their site to note it, or to temporarily disable the function.

Since the functionality was present, and failed in the same way one would expect it to fail if you had simply configured something wrong, the result is spinning your wheels with no results. If Google had disabled the functionality, or provided a link to their support request system in the resulting email, I could have at least opened a ticket after the first failure instead of going through the process again, double-checking everything.

The least they could have done was provide a link to their support ticketing system. The worst possible thing you can do to a customer is make it look like they’re spinning their wheels. In this case, at a minimum, the auto-reply email for a failure should include a way to open a ticket on the issue instead of just sending you back through the same process with a generic error.

The moral? If you’re a developer and you realize that some functionality you’re providing isn’t functional, disable it until you’ve fixed it. If that means you have to deal with more support tickets for a while, so be it. This is even more important in a customer-service situation like resetting a password.

Needless to say, if you use Google Apps for Domains and your administrator account doesn’t have a secondary email address that can be used for resetting the password, set one up post-haste. On top of that, it might not be a bad idea to create a second administrator account with a long, random password that’s stored in a safe place. With a second admin, you could log in and reset the primary admin account’s password as well.
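
Generating that long random password is a one-liner; openssl is what I’d reach for, but any decent generator works:

# 24 random bytes, base64 encoded: a 32-character password
openssl rand -base64 24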

Win 7 and Old Hardware

Windows 7 never ceases to amaze me. I recently brought up a Win 7 Home pro box using some hardware that’s pretty archaic by modern standards. While not the most stellar performer, it surfs the web well enough, and considering that was its intended mission, I’d say it’s been successful.

Actual specs are:

  • Intel P4 2.4GHz Northwood Core (overclocked to 2.6GHz)
  • Asus P4P800 (865PE chipset) motherboard
    • 3COM 3C940 Giga-e LOM
    • On board ADI AD1985 Audio
    • On board VIA 6309 firewire controller
    • SATA via ICH5R
    • 2 UDMA 133 ports via VIA 6410
  • 1.5GB of DDR RAM in dual channel mode (2x 512MB sticks, 2x 256MB sticks)
  • nVidia Geforce FX 5700LE
  • 500GB Western Digital SATA Caviar drive

I was utterly impressed that not only did everything work, but everything was detected and worked right out of the box, even installing to the SATA controller, which was always a problem for Windows XP. Though I guess I really shouldn’t be so surprised, given 9 years of OS development.

Nonetheless, the real concern was performance, of which the machine scores a blistering (okay, not really) 3.2 on the Windows Experience Index, with the CPU being the limiting factor.

I don’t know which I found more surprising: the fact that all the old hardware worked under the new OS, or that the system is just as usable, with more features and better security, as XP SP3 was on the same hardware.