For those of you who have been under a rock for the last several years, there is a buzz-phrase floating around—cloud computing. If you haven’t been paying attention, it is time to wake up.
While I could spend an entire blog post, if not several, on a definition of cloud computing, I will use the term here in one specific sense: companies moving servers out of their own buildings or network operations centers and onto virtual servers running in the cloud.
While there are a number of companies providing virtual servers, the most visible is Amazon, with its Amazon Web Services (AWS). I will be talking about AWS in this post, as it is the service with which I am most familiar. It seems like every month, AWS rolls out new options and services. Just recently, Amazon announced that you can now run the Windows operating system, along with SQL Server, on AWS.
Amazon also announced a service level agreement (SLA) of 99.5%. The SLA is important: it is a guarantee of service uptime, and if Amazon doesn't meet it, you get money back. As any of you know, you have to be able to count on your data center, and 99.5% is a pretty good level of coverage.
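To put that number in perspective, it's worth working out how much downtime a 99.5% uptime guarantee still allows. A quick back-of-the-envelope calculation (plain Python, nothing AWS-specific):

```python
# Downtime permitted under a 99.5% uptime SLA.
UPTIME = 0.995

hours_per_month = 30 * 24    # 720 hours in a 30-day month
hours_per_year = 365 * 24    # 8760 hours in a year

downtime_month = (1 - UPTIME) * hours_per_month
downtime_year = (1 - UPTIME) * hours_per_year

print(f"Allowed downtime: {downtime_month:.1f} h/month, {downtime_year:.1f} h/year")
# -> Allowed downtime: 3.6 h/month, 43.8 h/year
```

So roughly three and a half hours a month. Whether that is acceptable depends entirely on your application.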
Beyond that, one of the new features Amazon will be implementing during the next year is the use of regions and availability zones. Regions are distinctly different areas of a country (or completely separate countries); availability zones are designed to be insulated from failures in other availability zones and provide inexpensive, low-latency network connectivity to other availability zones in the same region. What does this mean? It will soon be very easy to deploy a set of servers in different areas and/or regions so that your data and servers are spread out and not vulnerable to a single point of failure.
I am not going to go into any detail about how virtual servers work. That’s not the point of this post. I am going to concentrate on what you can do with virtual servers.
Once you are properly set up, you can bring new servers online in just a few minutes. Got a spike in your app usage due to Digg coverage or a big sale? Bring X number of servers online to handle the load, and then after it passes, shut down the now-unneeded servers. Simple, cheap and fast.
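The scaling decision itself is simple arithmetic. Here is a minimal sketch of the reasoning; the traffic numbers and per-server capacity are made-up illustrations, and the actual launch of the extra instances would go through the EC2 API:

```python
import math

def servers_needed(requests_per_sec, per_server_capacity, headroom=1.25):
    """Instances to run for the current load, with 25% headroom by default."""
    return max(1, math.ceil(requests_per_sec * headroom / per_server_capacity))

# Normal day: 200 req/s, each server handles 150 req/s.
baseline = servers_needed(200, 150)

# Digg front page: load spikes to 3000 req/s; launch the difference,
# then terminate the extras once the spike passes.
spike = servers_needed(3000, 150)
extra = spike - baseline
print(f"baseline={baseline}, spike={spike}, launch {extra} more instances")
```

With a traditional rack, the spike number is what you would have had to buy up front; with virtual servers, you pay for it only while the spike lasts.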
In a traditional setting where the company maintains its own servers, it either has to order servers far in advance and have them sit idle in the racks at the data center, or risk not having enough compute power in place. When this is taken into account, the savings a cloud-based approach offers a company can really begin to add up.
A simple way to begin using the cloud would be to use the S3 storage service to store the backups from your current databases. In one simple step you have off-site storage of effectively unlimited capacity, available anytime you need it. Or use the Amazon Simple Queue Service (SQS) to build a network of servers that operate on a queue of tasks in parallel. While you could develop this setup internally, it would take a lot of time, energy, and money. With SQS, you can bypass all that and be up and running in days, if not hours.
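The queue-of-workers pattern is easy to sketch. The example below uses Python's standard-library queue and threads purely to illustrate the idea on one machine; in the SQS version, each worker would be a separate server polling the hosted queue instead of a local one:

```python
import queue
import threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    # Each worker pulls tasks until the queue is drained, then exits.
    while True:
        try:
            n = tasks.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append(n * n)  # stand-in for real work
        tasks.task_done()

for n in range(10):
    tasks.put(n)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # all ten tasks processed, in whatever order
```

The attraction of SQS is that the hard parts of this, durability, visibility timeouts, and delivery across machines, are handled for you, so adding capacity really does mean just starting more workers.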
These are just the obvious things that spring to mind. I am sure over the next few years we will see many innovative applications and uses that I can’t imagine right now.
Are there limitations? Of course. Currently, RAM is limited to 15 gigs with four cores of CPU power. Amazon offers SAN-like persistent storage called Elastic Block Store (EBS). I have seen no data regarding the actual throughput of EBS, so I don't know how suitable it would be for a very high-end system. For many systems, however, it would be perfectly adequate.
Overall, I think the flexibility a system such as AWS provides will enable a new generation of companies that are quicker to market, more agile in making changes, and lower in overhead. All good things in this economy!
I am certain that at the MySQL Users Conference next year, there will be many talks on how to leverage cloud computing with MySQL. I look forward to that.