I really hate upgrading servers. In order to upgrade, the server usually has to stop serving for a little while, and as the name “server” implies, this isn’t really desirable. In addition, there’s often too much art and not enough science in an upgrade. “Oh dear,” you hear someone say at three in the morning. “The test machine didn’t do that. We’re going to have to change the plan a little.”
Upgrading servers is painful. Plus, what you end up with, after a few upgrades, is a server that you probably don’t know how to rebuild if needed — after all, it just grew that way over time! One of my personal philosophies is to avoid doing things that are painful and have poor results (which is incidentally also why I rarely play sports).
So, here are three reasons why you can often avoid upgrading servers in the cloud:
1. Virtual machines are disposable in the cloud.
Yes, just like paper plates, Styrofoam cups and plastic forks, except without the environmental consequences. You snap your fingers, and one appears. You give it the “thumbs down” gesture, and it disappears back into the dust from whence it came. Your data needs to remain persistent, but the computing power (CPU and RAM) can be created and destroyed as needed.
2. There’s always a spare machine around when you need one in the cloud.
Most clouds have a characteristic called “rapid elasticity,” meaning that you can scale your usage up or down almost instantly and there will be resources available. This means that you can “float” additional virtual machines for a while. Plus, most cloud services are billed in short time intervals, such as hourly. Even if your budget is very tight, you can probably afford to run two copies of your application for a few hours or days.
3. You can automate creation and deletion of these disposable, spare machines in the cloud.
Invest some time creating an automated server build script that uses the cloud’s Application Programming Interface (API) to provision a server on the cloud, install the latest “production-ready” version of your application and run some tests against it. This is probably just an extension of existing automated build scripts that you have.
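As a sketch of what such a build script might look like, here's a minimal version in Python. The `Cloud` class is a hypothetical in-memory stand-in for a real provider's API client (boto3, libcloud, or whatever your cloud offers); the method names `provision` and `destroy` are assumptions for illustration, not a real API.

```python
# Hypothetical cloud API client, simulated in memory for illustration.
# A real script would call your provider's actual provisioning API.
class Cloud:
    def __init__(self):
        self._next_id = 0
        self.servers = {}

    def provision(self, image, app_version):
        """Create a server from a base image with the app version installed."""
        self._next_id += 1
        server_id = f"srv-{self._next_id}"
        self.servers[server_id] = {"image": image, "app_version": app_version}
        return server_id

    def destroy(self, server_id):
        """Release the server back into the dust from whence it came."""
        del self.servers[server_id]


def smoke_test(cloud, server_id):
    # Real version: hit the app's health-check URL, run integration tests.
    return cloud.servers[server_id]["app_version"] is not None


def build_server(cloud, app_version):
    """Provision a fresh server, install the app, and verify it works."""
    server_id = cloud.provision(image="base-os-image", app_version=app_version)
    if not smoke_test(cloud, server_id):
        cloud.destroy(server_id)  # a failed build leaves nothing behind
        raise RuntimeError("new server failed its smoke test")
    return server_id
```

The useful property is that `build_server` is the *only* way servers get created, so every server's configuration is documented by the script itself.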
With the ingredients above, the next time you need to upgrade your application you can simply create a new instance of the application. Use your build script to provision a new server that has the correct OS configurations, middleware software and the latest version of your application. Test the new server, then substitute the new server for the old one. If the new one doesn’t work, put the old one back into place; if the new one works, toss the old server in the virtual landfill. You get a fresh server each time, and you know exactly how it was created and that there are no surprises lurking there, waiting until three a.m. on the next upgrade cycle.
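The create, test, swap, and rollback cycle above can be sketched as follows. Everything here is a hypothetical in-memory stand-in: in production, `router` would be a load balancer or DNS record deciding which server receives traffic, and `provision`/`destroy` would be calls to your cloud's API.

```python
# In-memory stand-ins for cloud provisioning and traffic routing.
servers = {}

def provision(app_version):
    server_id = f"srv-{len(servers) + 1}"
    servers[server_id] = {"app_version": app_version, "healthy": True}
    return server_id

def destroy(server_id):
    servers.pop(server_id, None)

def works(server_id):
    # Real version: run your test suite against the new server.
    return servers[server_id]["healthy"]

def upgrade_by_replacement(router, new_version):
    """Build a new server, swap it in, and roll back if it misbehaves."""
    old = router.get("active")
    new = provision(new_version)
    router["active"] = new          # substitute the new server for the old
    if not works(new):
        router["active"] = old      # put the old one back into place
        destroy(new)
        return old
    if old is not None:
        destroy(old)                # toss the old server in the virtual landfill
    return new
```

Note that the old server keeps running until the new one has proven itself, so a failed upgrade costs you a few minutes of extra hosting, not an outage.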
Most applications do have persistent data, so you need a way to maintain data consistency when creating new instances of the application. For most applications this is solvable with the right architecture (keeping the data separate from the application) and a data replication strategy. In some cases you might still have to upgrade the database server in place, if it’s not feasible to create a new instance of it, but if you’re using a Platform-as-a-Service (PaaS) database, you may be able to take a snapshot of the old production database and create a new database from that snapshot.
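The snapshot-then-restore approach for the database tier can be sketched like this. The `take_snapshot` and `restore_from_snapshot` functions are hypothetical in-memory stand-ins; on a real PaaS database they would map to the provider's "create snapshot" and "restore instance from snapshot" operations.

```python
# In-memory stand-ins for a PaaS database service's snapshot operations.
databases = {}
snapshots = {}

def take_snapshot(db_id, snap_id):
    """Record a point-in-time copy of the database's contents."""
    snapshots[snap_id] = dict(databases[db_id])

def restore_from_snapshot(snap_id, new_db_id):
    """Create a brand-new database instance from a snapshot."""
    databases[new_db_id] = dict(snapshots[snap_id])
    return new_db_id
```

The old production database keeps serving while the snapshot is taken, so any writes that arrive after the snapshot still need to be replicated to (or replayed against) the new instance before you cut traffic over.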
There’s nothing especially revolutionary about not upgrading servers; this is how we’ve treated code for a long time. The difference is that, with cloud technologies, we can now treat the infrastructure (such as the virtual machine hosting the application) as if it’s part of the code.