On virtualization, my basement, and GarageBand

In my house there are four people, and four computers, some fixed in place and some laptops, and I think it’s the case that all the necessary technology exists to make this situation a great deal less awkward and fiddly than it is today. One should expect to be able to migrate an active session, including running applications and data, from the iMac upstairs to the laptop over there; the family’s data should be centrally and commonly accessible, with a home directory available everywhere. So where do we stand?

The good news is that the computing industry has, in many ways, caught up with where IBM was 30 years ago, and so we have cheap and ubiquitous virtualization. But we haven’t generally reached the understanding they had: that it’s far more useful to think of an “operating system” as a hosting environment for applications than in its literal, original sense, as the interface layer between hardware and applications. In theory, virtualization renders that distinction obvious and transparent, but it’s obscured by the fact that we’re still running home computers that are conceptually the same as they were 25 years ago. Whether Windows or Mac, we’re still bound to the notion of the hardware as significant; it takes about a minute spent with Windows to recognize that it’s obsessively about the hardware and your interactions with it: little USB icons, hard drive icons, and a constant need to care about the components in the ugly box on the desk.

Until fairly recently it still felt obscenely profligate to indulge the idea of “virtual appliances” — applications bundled in a virtual machine, pre-configured and ready to run in a private copy of the operating system — at least, obscene to those of us who’d spent formative years struggling to shoe-horn applications into shared servers that, even if not overtaxed in physical resources, were inevitably rendered a mess by the intricate configuration management needed to keep the myriad applications from stepping on one another. Ten years ago, it was perfectly reasonable for a half-dozen web developers to work concurrently on a single desktop-grade machine with a half-gig of RAM, given some mildly fancy footwork with “virtual host” configurations in DNS, in Apache, and in Tomcat… it was never pretty, but in the best cases it managed to work. So it’s been hard to adjust to the notion that the overhead of even a lightweight OS distribution, replicated for each application, could ever be anything other than gross inefficiency. But the distributions get lighter (see Ubuntu JeOS, “Just Enough OS”), and more to the point the machines have grown so massive, so quickly, that it’s a false economy to quibble about the cost of partitioning a server’s applications into virtualized appliances. Solaris Zones, which provide the maximally lightweight implementation of this notion by virtualizing the OS around a common kernel rather than virtualizing the hardware stack, make the economics plain: a typical machine can host hundreds if not thousands of zones at trivial incremental cost.
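The economics are easy to sketch in back-of-the-envelope form. The overhead figures below are my own illustrative assumptions (a sparse zone’s incremental footprint is commonly cited in the tens of megabytes; a full guest OS costs far more), not measurements:

```python
# Rough economics of OS-level virtualization (zones) vs. full VMs.
# All overhead numbers here are illustrative assumptions, not benchmarks.

ram_gb = 32                # a plausible mid-range server
zone_overhead_mb = 40      # assumed incremental cost of one sparse zone
vm_overhead_mb = 512       # assumed cost of a full guest OS per VM

ram_mb = ram_gb * 1024
max_zones = ram_mb // zone_overhead_mb
max_vms = ram_mb // vm_overhead_mb

print(f"{ram_gb} GB of RAM: ~{max_zones} zones vs. ~{max_vms} full VMs")
```

Even with generous error bars on the assumed overheads, the order-of-magnitude gap is the point: partitioning by zones costs roughly a tenth of what full hardware virtualization does per instance.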
So it’s a lazy or shortsighted administrator indeed, at this point, who resorts to spending time figuring out how to make applications coexist, given ample solutions for isolating them in clean OS instances, from hardware virtualization (Sun’s LDoms, IBM’s LPARs) to software hypervisors (VMware, Xen, KVM, etc.) to OS virtualization (Solaris Zones, Linux virtual servers). (Thus it’s all the more ironic that the worst cases I’ve seen, in the last few years, of Unix servers with configuration-management nightmares, carrying over a decade’s accumulated cruft of configured applications interdependent on ancient versions of tools nobody remembers installing, are inevitably AIX instances on IBM p-series hardware, which supports hardware virtualization and thus could have avoided the problem years before a Linux/x86 machine had a comparable solution.)

At any rate, there’s no mystery as to what we can expect to see in the next few years: desktop-grade computers with more cores than we know what to do with, enough RAM to cache an HD movie, and virtualization tools that approximate VMware ESX’s all-out stance. So how does this all help my kid and his iMac? Well, first: why wouldn’t any interactive session occur in a VM, given technology that can hot-migrate a running VM from one host to another? On a gigabit network, transferring an entire running VM image from upstairs to downstairs shouldn’t take more than a few minutes; and after 10G Ethernet becomes commonplace (and how long could that take? a few years at most) the wait would cease to matter. So freeze your GarageBand VM session upstairs, and retrieve it downstairs, on the laptop; close the laptop and take it to the coffee shop. From that view, the traditional approach of switching users, as in Windows and OS X, is a symptom of the familiar historical configuration-management problem: why should I and my son share the same Applications folder, just because we both sit at the same terminal? Why should my tools, and my entire OS configuration, not float from box to box?
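The transfer-time claim is easy to sanity-check. Assuming, for illustration, a 4 GB frozen memory image and a link that delivers 70% of its nominal rate (both numbers are my assumptions, not measurements):

```python
# Rough time to move a frozen VM memory image across a home network.
# Image size and link efficiency are illustrative assumptions.

image_gb = 4          # assumed RAM footprint of the frozen session
efficiency = 0.7      # assumed usable fraction of nominal bandwidth

def transfer_seconds(link_gbps: float) -> float:
    """Seconds to push the image over a link of the given nominal rate."""
    usable_gbps = link_gbps * efficiency
    return image_gb * 8 / usable_gbps   # GB -> gigabits, then divide by rate

print(f"1 GbE:  {transfer_seconds(1):.0f} s")   # ~46 s
print(f"10 GbE: {transfer_seconds(10):.1f} s")  # ~4.6 s
```

Under these assumptions a gigabit link moves the session in well under a minute, and 10G Ethernet makes the wait short enough to stop noticing.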

Of course, given a dozen cores and a dozen GB of RAM, a single machine could, in raw horsepower, serve even a very large family using thin clients of some kind or other. But this is complementary to the notion of portable VMs floating around the house, not contradictory — in normal use, everything could run on the basement 16-core monster, and only migrate to the laptop when heading over to the library.

Anyhow, the biggest barrier I can think of to reaching this point in the next half-decade is, bafflingly, the simple fact of Apple’s restrictions on virtualizing Mac OS X, a problem purely of licensing rather than technology. If one were willing to inflict Linux or Solaris on one’s family, such scenarios are probably reachable soon; but as long as OS X runs only on native hardware, the floating-VM notion will have to wait for Apple to catch up.
