Utility Computing – It's management, stupid.

All this talk of computing as a fungible utility is very nifty, but it wouldn't be the first 'sounds great, adds nothing' technology to come down the pike (repeat after me – "I mostly just need XML and HTTP."). So as I embark on my new job of helping to figure out BEA's utility computing strategy, I want to make sure that there is some value in "them thar hills". Which means that before I worry about what utility computing is or what it should do, I first want to know what the problem is.

Our customer base is cost-obsessed. This of course is a change for someone who started his career in the free-flowing '90s, but on balance I like it: it means companies are demanding good value for their money, which will help the economy. But it also means that customers are looking to cut costs wherever they can. When it comes to software, most of their cost comes long after the software is developed. From what we can tell (trustworthy studies are thin on the ground and anecdotal evidence shouldn't be trusted too far), most of the money our customers spend on their apps is spent on people, followed closely by hardware. Much to our eternal embarrassment, software seems to be a tiny portion of total system costs (or perhaps we should be proud that we offer so much value for so little money?). So when our customers are looking to save money, they want to do it by reducing their people and hardware costs.

Saving On People

From what we can tell, most of those very expensive people seem to be spending most of their time deploying, maintaining, sizing and versioning their applications (i.e. managing them), so it would seem obvious that the way to reduce people costs is to create applications that are better able to take care of themselves. Normally this requires making data centers and the software they host simpler. But the irony is that applications are getting orders of magnitude more complex rather than simpler. Take a simple web application: just to deploy it one generally needs a database and an application server. Oh, and some router configurations and don't forget the firewall configuration and the security configuration and then there is the user directory and of course the staging system and what about the caching appliances and…. You get the idea.

To make things even more interesting, all sorts of specialized servers are now available to provide custom-fitted functionality. There are portal servers, integration servers, data virtualization servers, enterprise service buses, lions, tigers, bears, etc. Oh, and don't forget to throw in the 'new and improved' (read: old and boring) Service Oriented Architecture hoo-haa with its loosely coupled interfaces and ever more complex application entanglements. Long gone are the days when a single application was a jar file or executable. Today an 'application' is really a compound entity consisting of numerous independently deployed but interconnected components. Think of it as application spaghetti. Far from simplifying the job of all those people in the data center, we are making their lives significantly more complex and increasing their management challenges. This doesn't sound like a recipe for reducing people costs to me.
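
To put the 'compound entity' point in concrete terms, here is a minimal sketch, in Python, of what that spaghetti looks like as a data structure. All of the component names are invented for illustration; the point is simply that every node in this graph is something somebody has to deploy, configure and keep running.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str                          # e.g. "application server", "database", "firewall rule"
    depends_on: list = field(default_factory=list)

# The "simple" web application from above, spelled out as a dependency graph.
user_directory = Component("corp-ldap", "user directory")
database       = Component("orders-db", "database")
cache          = Component("edge-cache", "caching appliance")
firewall       = Component("dmz-rule-42", "firewall rule")
app_server     = Component("orders-app", "application server",
                           depends_on=[database, user_directory, cache, firewall])

def deployment_footprint(root, seen=None):
    """Walk the graph and collect every piece an admin has to deploy and manage."""
    seen = set() if seen is None else seen
    if root.name in seen:
        return seen
    seen.add(root.name)
    for dep in root.depends_on:
        deployment_footprint(dep, seen)
    return seen

print(len(deployment_footprint(app_server)), "separately managed pieces")  # -> 5
```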

Saving On Hardware

It turns out that many of our customers are running a lot of the machines in their data centers well below capacity. This matters because hardware is expensive. Sure, a bottom-of-the-line 1U box from Dell might go for $2,000, but figure on spending at least another $1,000/year to have that box sitting in the data center. Furthermore, most of our customers aren't buying bottom-of-the-line boxes; they are buying bigger, multi-processor behemoths that can easily cost $20,000 or more. The thought of all those boxes sitting around mostly doing nothing is giving our customers heartburn.
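
To see why those idle boxes give people heartburn, here is a quick back-of-the-envelope calculation using the numbers above. The utilization figure is an assumption I'm plugging in purely for illustration, not a measurement:

```python
# Back-of-the-envelope math on idle capacity, using the figures above.
# The 15% utilization number is an assumption picked purely for illustration.

box_price        = 2_000    # bottom-of-the-line 1U box
hosting_per_year = 1_000    # keeping it racked, powered and cooled in the data center
years            = 3
utilization      = 0.15     # assumed average utilization of the box

total_cost = box_price + hosting_per_year * years     # $5,000 over three years
cost_per_box_of_work = total_cost / utilization       # ~$33,333 per fully used box-equivalent

print(f"Three-year cost of the box: ${total_cost:,}")
print(f"Effective cost per box-worth of work actually done: ${cost_per_box_of_work:,.0f}")
```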

The reason all those boxes are idle is that modern operating systems have focused more on kitchen-sink design than on fine-grained control of process behavior. The end result is that running two applications on the same box is the technological equivalent of a game of chicken: you wait to see which application will either crash the whole box (thus taking out both applications) or suck up all available resources (thus choking out the other application) first. Admins 'solved' the problem by putting each application on its own box. While this did take care of the reliability issues, it did so at a very high cost in unused capacity.
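
Here's a toy sketch of that game of chicken, with made-up numbers: two applications drawing from one shared pool with no per-application limit. It isn't how any real OS allocator works, but it captures why admins end up buying a separate box per application.

```python
# A toy model of the game of chicken: two applications share one box's memory
# and nothing caps what any single application can take. All numbers are made up.

BOX_MEMORY_MB = 4_096
used = {"well-behaved-app": 512, "leaky-app": 512}

def allocate(app, mb):
    """Grant memory as long as the *box* has room -- there is no per-app limit."""
    if sum(used.values()) + mb > BOX_MEMORY_MB:
        return False                 # whoever asks after the box is full loses
    used[app] += mb
    return True

# The leaky application grabs memory until the box is exhausted...
while allocate("leaky-app", 256):
    pass

# ...and now the well-behaved application is starved through no fault of its own.
print(allocate("well-behaved-app", 128))   # -> False
print(used)
```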

Rather than try to add fine-grained process control to OSes, which is probably a bad idea anyway given their hopelessly "feature rich" designs, the industry is moving towards hardware virtualization. The idea is that, using some fairly nifty software tricks, it is possible to make a single CPU look like multiple CPUs. But the key is that each virtual CPU is separate from all of its virtual brethren. At least in theory this means that an application can toast itself and its OS inside of a virtual CPU without causing any damage to the other virtual CPU instances. I suspect things aren't quite so clean in practice, since virtualization packages like Xen and VMware do rely on underlying operating system functionality that is shared between virtual instances, and so leave open the possibility of behavior that could crash or cripple multiple instances. But things do seem to work pretty well in practice, mostly because the number of shared access points between instances is small, so it becomes feasible to focus on improving their robustness. Besides, the situation is only likely to improve as folks like Intel and AMD release their hardware-based virtualization technologies. I expect that eventually all programs will run in their own single-instance OSes hosted in their own virtual CPUs, but that's another story.
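
For contrast with the sketch above, here is the same toy scenario with the box carved into virtual slices that each get a fixed share of the hardware. This is a conceptual illustration of the isolation idea, not how Xen or VMware actually implement it.

```python
# The same toy scenario, but the box is carved into virtual slices with fixed
# quotas. A runaway application can only exhaust its own slice.

class VirtualSlice:
    def __init__(self, name, quota_mb):
        self.name = name
        self.quota_mb = quota_mb
        self.used_mb = 0

    def allocate(self, mb):
        """Grant memory only within this slice's quota; neighbors are unaffected."""
        if self.used_mb + mb > self.quota_mb:
            return False
        self.used_mb += mb
        return True

slice_a = VirtualSlice("leaky-app", quota_mb=2_048)
slice_b = VirtualSlice("well-behaved-app", quota_mb=2_048)

while slice_a.allocate(256):       # the leaky application fills its own slice...
    pass

print(slice_b.allocate(128))       # -> True: ...and its neighbor never notices
```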

So the good news is that virtualization will likely provide a robust solution to the problem of process isolation and resource management. The bad news is that virtualization will make managing applications more expensive. Today, if an admin wants to increase or decrease the computing resources available to some part of an application, they go into the data center and configure a machine. This means that sizing is done in large chunks and fairly infrequently. With virtualization it suddenly becomes possible to slice and dice machines into arbitrarily sized pieces and to move things around more or less at will, all without setting foot in the data center. This means admins will be expected to manage their resources to the limit, which requires some way of keeping track of all the parts of all the applications on all the machines in all of the geographically distributed data centers. In other words, while virtualization may save a lot of money on hardware, it's going to do damage to the people budget because it makes managing applications a lot more expensive.
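
To give a feel for the bookkeeping this creates, here is a sketch of the kind of inventory an admin would need to keep current once machines are sliced and diced in software. Every name and number below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    datacenter: str
    host: str
    vm: str
    cpu_share: float    # fraction of the physical box
    memory_mb: int

# Component -> where it currently runs. With virtualization this map changes
# constantly, and it has to stay correct across every data center.
inventory = {
    "orders-app": Placement("us-east", "rack12-host03", "vm-7", 0.25, 1_024),
    "orders-db":  Placement("us-east", "rack14-host01", "vm-2", 0.50, 4_096),
    "edge-cache": Placement("eu-west", "rack02-host09", "vm-4", 0.10, 512),
}

def resize(component, cpu_share, memory_mb):
    """Cheap to do in software -- which is exactly why it will happen all the time."""
    p = inventory[component]
    inventory[component] = Placement(p.datacenter, p.host, p.vm, cpu_share, memory_mb)

resize("orders-app", 0.40, 2_048)   # no trip to the data center, but one more change to track
print(inventory["orders-app"])
```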

It's Management, Stupid

So it would appear that the most expensive part of running a data center is paying people to manage applications, followed by hardware costs, and while virtualization will help with hardware costs, it will do so by increasing management costs. Unless I'm missing something, it would seem that all cost-saving roads lead to application management, which would therefore be the most reasonable place to start helping out customers. The problem we need to solve, then, is: how can we minimize the money companies have to spend on managing their applications?
