On virtualization, my basement, and Garageband

March 5th, 2008 — 12:27am

In my house there are four people, and four computers, some fixed in place and some laptops, and I think it’s the case that all the necessary technology exists to make this situation a great deal less awkward and fiddly than it is today. One should expect to be able to migrate an active session, including running applications and data, from the iMac upstairs to the laptop over there; the family’s data should be centrally and commonly accessible, with a home directory available everywhere. So where do we stand?

The good news is that the computing industry has, in many ways, caught up with where IBM was 30 years ago, and so we have cheap and ubiquitous virtualization. But we haven’t generally reached the understanding they had: that it’s far more useful to think of an “operating system” as a hosting environment for applications than in its literal, original sense as the interface layer between hardware and applications. In theory, virtualization renders that distinction obvious and transparent, but it’s obscured by the fact that we’re still running home computers that are conceptually the same as they were 25 years ago, and, whether Windows or Mac, we’re still bound to the notion of the hardware as significant; it takes about a minute spent with Windows to recognize that it’s obsessively about the hardware and your interactions with it — little USB icons, and hard drive icons, and a constant need to care about the components in the ugly box on the desk.

Until fairly recently it still felt obscenely profligate to indulge the idea of “virtual appliances” — applications bundled in a virtual machine, pre-configured and ready to run in a private copy of the operating system — at least, obscene to those of us who’d spent formative years struggling to shoe-horn applications into shared servers that, even if not overtaxed in physical resources, were inevitably rendered a mess by the intricate configuration management needed to keep the myriad applications and configurations from stepping on one another. Ten years ago, it was perfectly reasonable for a half-dozen web developers to work concurrently on a single desktop-grade machine with a half-gig of RAM, given some mildly fancy footwork with “virtual host” configurations in DNS, and in Apache, and in Tomcat… it was never pretty, but in the best cases it managed to work. So it’s been hard to adjust to the notion that the overhead of even a lightweight OS distribution, replicated for each application, could ever be less than gross inefficiency. But the distributions get lighter (see Ubuntu JeOS, “Just Enough OS”) and, more to the point, the machines have grown so massive, so quickly, that it’s a false economy to quibble about the cost of partitioning a server’s applications into virtualized appliances. Solaris’s Zones, which provide the maximally lightweight implementation of this notion by virtualizing the OS around a common kernel, rather than virtualizing the hardware stack, make the economics plain — a typical machine can host hundreds if not thousands of zones at trivial incremental cost. So it’s a lazy or shortsighted administrator indeed, at this point, who resorts to spending time figuring out how to make applications coexist, given ample solutions for isolating them in clean OS instances, from hardware virtualization (Sun’s LDOMs, IBM’s LPARs) to software hypervisors (VMware/Xen/KVM/etc/etc/etc) to OS virtualization (Solaris zones, Linux virtual servers). (Thus it’s all the more ironic that the worst cases I’ve seen, in the last few years, of Unix servers with configuration management nightmares, with over a decade’s accumulated cruft of configured applications interdependent on ancient versions of tools nobody remembers installing, are inevitably AIX installations on IBM p-series hardware, which supports hardware virtualization and thus could have avoided the problem years before a Linux/x86 machine had a comparable solution.)

At any rate, there’s no mystery as to what we can expect to see in the next few years — desktop-grade computers with more cores than we know what to do with, enough RAM to cache an HD movie, and virtualization tools that approximate VMware ESX’s all-out stance. So how does all this help my kid and his iMac? Well, first: why wouldn’t any interactive session be likely to occur in a VM, given technology that can hot-migrate a running VM from one host to another? On a gigabit network, transferring an entire running VM image from upstairs to downstairs still shouldn’t take more than a few minutes; and once 10G Ethernet becomes commonplace (and how long could that take — a few years at most) the wait would cease to matter. So freeze your Garageband VM session upstairs, and retrieve it downstairs, on the laptop; close the laptop and take it to the coffee shop. From that view, the traditional approach of switching users, as in Windows and OS X, is a symptom of the familiar historical configuration management problem — why should I and my son share the same Applications folder, just because we both sit at the same terminal? Why should my tools, and my entire OS configuration, not float from box to box?

Of course, given a dozen cores and a dozen GB of RAM, a single machine could, in raw horsepower, serve even a very large family using thin clients of some kind or other. But this is complementary to the notion of portable VMs floating around the house, not contradictory — in normal use, everything could run on the basement 16-core monster, and only migrate to the laptop when heading over to the library.

Anyhow, the biggest barrier I can think of to reaching this point in the next half-decade is, bizarrely, the simple fact of Apple’s restrictions on virtualizing Mac OS X, a problem purely of licensing rather than technology. If one is willing to inflict Linux or Solaris on one’s family, such scenarios are probably reachable soon, but as long as OS X only runs on native hardware, the floating-VM notion will have to wait for Apple to catch up.

An OpenID, via WordPress/phpMyId, on Dreamhost

December 10th, 2007 — 12:35am

Returning to this venue, after long hiatus: hi!

Today’s topic: So You Want To Get You One Of Them OpenIDs. And, you’re the sort of rugged DIY nerdo who hosts his own sites; and, you’re the sort of cheapskate who uses Dreamhost to do it. And, in a further creepy emulation of me, you run your own WordPress, *and* you have stumbled through just enough text on OpenID to be stymied enough to google for a page very much like this one. In which case: hi!

The specific case I’m addressing is that you host your own WordPress, perhaps named similarly to http://andy.boyko.net/, and you’d prefer, for whatever misguided reason, to use that same fine URI as your OpenID, and that furthermore you’re not afraid of 15 minutes of fiddling for its own sake. This will *not* help, in any way that I can discern, if you want to allow people visiting your WordPress installation to log in with their own OpenIDs in order to comment. I gather there are WordPress plugins to help you achieve that; I surely haven’t tried them yet (because, really, my focus at this moment is on making life easier for me, not any of you) but I imagine those plugins would work in tandem with what you do in the following steps.

If you’re not using Dreamhost, maybe you just want a general explanation of what to do, such as those provided by Sam Ruby or Simon Willison. This explanation is mostly lifted from their work, modulo the Dreamhost-isms.

But so we’ll assume this:

  • you’re a Dreamhost user, running your own WordPress instance under its own domain or subdomain (e.g. ‘myblog.com’, ‘andy.boyko.net’)
  • you understand Dreamhost’s Panel sufficiently to add a new domain
  • you have shell access
  • you only care about a one-person solution
  • you’re not afraid

Do these things:

  • In the Dreamhost panel, create a new subdomain on your blog’s domain to host the phpMyID tool, which is the secret sauce here: it provides you with an “OpenID provider”, if I get the jargon right. Note that you won’t use the name of this subdomain directly, though it will appear buried in HTML tags on your site. Given a WordPress instance at andy.boyko.net, I chose to create the subdomain ‘openid.andy.boyko.net’ for this purpose, though its name needn’t relate to your WordPress’s URL. I’ll refer to this new OpenID provider subdomain as openid.yourblog.domain.
  • Get a copy of phpMyID (version 0.7 at this writing; newer versions may invalidate some of this instruction) and unpack the .tar.gz file into your home directory, resulting in ~/phpMyID-0.7/
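
    On the shell, that amounts to roughly the following (assuming the tarball has already been downloaded to your home directory; adjust the filename if you grab a newer release):

    cd ~
    tar xzf phpMyID-0.7.tar.gz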
  • You need three files from the unpacked phpMyID package in the newly created directory for the OpenID subdomain, ~/openid.yourblog.domain/:

    cp ~/phpMyID-0.7/MyID.php ~/openid.yourblog.domain/
    cp ~/phpMyID-0.7/MyID.config.php ~/openid.yourblog.domain/index.php
    cp ~/phpMyID-0.7/htaccess ~/openid.yourblog.domain/.htaccess
  • Edit the .htaccess file, and uncomment the first of the three provided solutions — since PHP runs as a CGI on Dreamhost, you need mod_rewrite trickery to overcome some problem or other. Just accept it (or see the sketch just below for the gist).
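
    The underlying problem, as I understand it, is that PHP running as a CGI never sees the HTTP Authorization header, so phpMyID’s digest authentication would fail without help; the mod_rewrite solution smuggles the header back in through an environment variable. The shipped .htaccess is authoritative, but the lines you uncomment should look something along these lines:

    RewriteEngine on
    RewriteCond %{HTTP:Authorization} !^$
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]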
  • Follow the phpMyID README’s configuration steps: create an MD5 hash of your password, and update index.php accordingly with your chosen username and the resulting password hash. (Create the hash as instructed in the README, through ‘openssl md5‘.)
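
    If I’m remembering the README correctly, the hash is a standard HTTP digest hash over the username:realm:password triple; in any case it’s a one-liner with openssl, something like this (substitute your own values, and believe the README over me if it says otherwise):

    echo -n 'yourname:yourrealm:yourpassword' | openssl md5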
  • To preclude a baffling HTTP redirection loop later, trust the impossibly wise Sam Ruby, and add this line:

    'idp_url' => 'http://openid.yourblog.domain/',

    to the $GLOBALS['profile'] hash, along with the username, password, and realm.
  • Follow the README’s advice and test the installation of phpMyID, which at this point should be substantially complete, by visiting http://openid.yourblog.domain/ and logging in. Apparently, it is not unreasonable to be confident that this is OK despite not being SSL-encrypted, because of the use of digest authentication. Go with that. Prove that you’re able to log yourself in with the password you provided. You’re now done fiddling with the installation of phpMyID, and you can leave this new subdomain alone.
  • Well, before you leave it alone, take one more peek at the index.php configuration: because you’re a savvy self-starting soul who realizes the implications of the $GLOBALS['sreg'] array, you might as well populate it with as much boilerplate personal info (e.g. full name, nickname, location) as you’re comfortable automatically transmitting to various Web-two-dot-zero entrepreneur types; minimal testing suggests those sites will helpfully pull that data in for you when you establish a new account after having logged in via OpenID.
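
    For what it’s worth, the interesting part of index.php ends up looking roughly like the sketch below. The key names here are from memory and meant purely as illustration (the MyID.config.php you copied is the real reference); the sreg field names themselves come from the OpenID Simple Registration spec:

    // username, password hash, and realm for digest auth, plus the idp_url Sam Ruby suggests
    $GLOBALS['profile'] = array(
        'auth_username' => 'yourname',
        'auth_password' => 'the-md5-hash-from-the-openssl-step',
        'auth_realm'    => 'yourrealm',
        'idp_url'       => 'http://openid.yourblog.domain/',
    );

    // optional Simple Registration data handed to relying parties when you log in
    $GLOBALS['sreg'] = array(
        'nickname' => 'yourname',
        'fullname' => 'Your Name',
        'country'  => 'US',
    );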
  • Now, head over to your WordPress, and bolt this new OpenID provider into it, by editing your chosen template (via ‘Presentation’/’Theme Editor’). Specifically, crack open the ‘header’ and, right before the closing </head> tag, insert this voodoo boilerplate:

    <link rel="openid.server" href="http://openid.yourblog.domain/">
    <link rel="openid.delegate" href="http://openid.yourblog.domain/">

    The wise Mr. Ruby suggests that, by adding the idp_url config above, the second (visibly redundant) line becomes unnecessary, but I’m too lazy to even bother eliding it. Note that there’s apparently a WordPress plugin that achieves the same one-or-two-line patch without you having to hand-tweak the HTML head, which might be preferable, but I haven’t investigated.

Anyhow, upon saving that change to your header, you should find that by simply providing the URI for your WordPress installation to the various ‘Web-two-dot-zero’ sites that offer an OpenID login option, those sites will do the right thing, reading the “link rel=” tag and as a result contacting your new minimal phpMyID-based OpenID provider. And, apparently, this is all OK.
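
If you want to double-check the wiring before trying it against a real site, one quick sanity test is to fetch your blog’s front page and confirm that the link tags are actually being served, e.g.:

    curl -s http://yourblog.domain/ | grep openid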

Good luck.
