Master MySQL in the Amazon Cloud

Keep Offsite Backups

Replication makes for a great high-availability solution that will cover you should your primary database fail. But what if Amazon suffers a sitewide failure that affects networking or EBS in general? These scenarios can and do happen.

To guard against this, take advantage of Amazon's global network of data centers by placing your objects and instances in more than one AWS Region. With RDS you can enable the Multi-AZ option, which automatically replicates your MySQL data to a standby in a second availability zone. Or you can build your own MySQL master in your main availability zone and a replica slave in another region.
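Here's a rough sketch of both options using boto3, the AWS SDK for Python. The instance identifiers, regions, instance classes, and credentials are placeholders for illustration, not values from this article:

    import boto3

    # Enable Multi-AZ on a new RDS MySQL instance in your primary region.
    rds_primary = boto3.client("rds", region_name="us-east-1")
    rds_primary.create_db_instance(
        DBInstanceIdentifier="app-mysql-primary",
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me",   # placeholder credential
        MultiAZ=True,                     # automatic standby in a second AZ
    )

    # Or build a cross-region read replica as your own disaster-recovery copy.
    rds_dr = boto3.client("rds", region_name="us-west-2")
    rds_dr.create_db_instance_read_replica(
        DBInstanceIdentifier="app-mysql-dr-replica",
        SourceDBInstanceIdentifier=(
            "arn:aws:rds:us-east-1:123456789012:db:app-mysql-primary"
        ),
        SourceRegion="us-east-1",         # lets RDS handle the cross-region copy
    )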

Further, consider a scenario where a court action or subpoena hits Amazon. In the discovery phase, an overly broad net could sweep your servers into the mess, interrupting your business.

In either case, you'll want a last-ditch insurance policy for restoring your application. The scripting involved varies with the complexity of your environment. For just a couple of servers, build an image configured the way you like it, then snapshot it and keep it as your gold-standard server. When an instance spins up from that image, a user data script you supply is called to handle final steps or additional configuration.
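A minimal sketch of that gold-image flow with boto3 looks like this; the AMI name, instance IDs, and the bootstrap script are hypothetical placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # 1. Snapshot your configured server as a reusable gold-standard image.
    image = ec2.create_image(InstanceId="i-0123456789abcdef0",
                             Name="gold-mysql-web-v1")

    # 2. Later, launch a fresh copy and let a user data script finish setup.
    user_data = """#!/bin/bash
    # runs once at first boot: pull latest config, start services, etc.
    /usr/local/bin/bootstrap.sh
    """
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,   # boto3 base64-encodes this for you
    )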

For more complex environments, Scalr or RightScale can provide a templating solution for your automation needs. For even more sophisticated environments, or for operations teams ready to embrace configuration management to the fullest, Chef and Puppet may be options for you. With your automation scripts built, you can deploy a new server either in Amazon or at another cloud provider, then deploy your code and configurations. As a final step you'll want to restore your data; with an offsite backup, you'll have that base covered.
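That last step can be as simple as pulling a dump from offsite storage and loading it into the freshly built server. Here's one way it might look in Python, assuming the backup lives in S3; the bucket, key, hosts, and credentials are placeholders:

    import subprocess
    import boto3

    s3 = boto3.client("s3")
    s3.download_file("my-offsite-backups", "nightly/app-db.sql.gz",
                     "/tmp/app-db.sql.gz")

    # Stream the dump into the new database server.
    subprocess.run(
        "gunzip -c /tmp/app-db.sql.gz | "
        "mysql -h new-db-host -u admin -p'change-me' appdb",
        shell=True,
        check=True,
    )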

Cache Aggressively

Caching is crucial in the cloud, and you can ensure high performance through four modes of caching: caching in the browser, caching objects, caching queries, and caching database data.

Caching in the Browser

Everything in a page that the browser fetches can be cached: images, HTML objects, code, and so forth. Although many browsers perform a lot of caching by default, this is often not sufficient, because pages are returned to the browser with a cache-control header that is ultimately controlled by the Web server itself. This is good news for the operations team, as it is another switch or dial that can be turned to speed things up.

Typically, cache control should be set with a maximum age of one week. This becomes the default for objects sent to the browser. They can still be expired manually in code and application logic as necessary.
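The header can also be set from application code. Here's a small sketch using Flask as an assumed framework (the Web server or a CDN can set the same header); the route and file are made up for illustration:

    from flask import Flask, Response

    app = Flask(__name__)
    ONE_WEEK = 7 * 24 * 60 * 60  # 604,800 seconds

    @app.route("/banner.css")
    def banner_css():
        body = open("static/banner.css").read()
        # Tell the browser it may keep this object for a week.
        return Response(body, mimetype="text/css",
                        headers={"Cache-Control": f"public, max-age={ONE_WEEK}"})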

Caching Objects

Object caches are a great addition to the caching layer at the Web server tier. Languages such as PHP and Ruby include libraries for using memcache, a popular object cache that stores key-value pairs. Whenever data is requested from the database, the object cache is checked first. If the data is there, it can be returned as much as 100 times faster -- no network round trip to the database box and no complex database queries to run.

If the data is not available in the object cache, a cache miss happens, and a request to the database will follow. The results returned will then be placed in the object cache for future use.
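That read path is the classic cache-aside pattern. Here's a compact sketch in Python using pymemcache and PyMySQL; the hostnames, table, and query are placeholders:

    import json
    import pymysql
    from pymemcache.client.base import Client

    cache = Client(("cache-host", 11211))
    db = pymysql.connect(host="db-host", user="app",
                         password="change-me", database="appdb")

    def get_user(user_id):
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:                 # cache hit: no database round trip
            return json.loads(cached)

        # cache miss: go to MySQL, then populate the cache for future reads
        with db.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SELECT id, name, email FROM users WHERE id = %s",
                        (user_id,))
            row = cur.fetchone()

        if row is not None:
            cache.set(key, json.dumps(row), expire=300)
        return row

The same client code works against an ElastiCache endpoint; only the host you point it at changes.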

In AWS, a managed service called ElastiCache is available. It is built on memcache but does not require individual configuration, monitoring, or tuning. Simply point your application at this object cache -- you're off and running. Scalability is built in as a bonus.

Caching Queries

Oh, those queries, requiring heaps of attention to get them just right and as much attention to keep them working properly! That's why the query cache should be another tool in your performance arsenal. Be sure it is enabled in MySQL and configured with enough memory so that useful queries aren't evicted before their useful life has ended.

The query cache keeps a query's text and its result set together in one place. If the database finds an exact match for a query it has already run in the past, it sends the cached result back to the Web server without doing any real work at all.
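The sizing itself (query_cache_type, query_cache_size) normally lives in my.cnf, but you can check how things are going from a quick script. This sketch assumes a MySQL version that still ships the query cache (it was removed in MySQL 8.0), and the connection details are placeholders:

    import pymysql

    db = pymysql.connect(host="db-host", user="admin", password="change-me")
    with db.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'query_cache%'")   # enabled? how big?
        print(cur.fetchall())
        cur.execute("SHOW STATUS LIKE 'Qcache%'")           # hits, lowmem prunes
        print(cur.fetchall())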

Caching Database Data

The MySQL database has a very important setting controlling the buffer cache -- for InnoDB, the buffer pool, sized with innodb_buffer_pool_size -- which caches index and table data. Data that has already been cached incurs only logical reads, so you avoid a ton of physical I/O against your data files. As we've mentioned, because of the challenges of EBS virtualized storage, this is doubly important in the Amazon AWS environment.
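To see whether the buffer pool is large enough, compare physical reads against logical read requests; a low ratio means most reads never touch disk. A quick sketch, again with placeholder connection details:

    import pymysql

    db = pymysql.connect(host="db-host", user="admin", password="change-me")
    with db.cursor() as cur:
        cur.execute("SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS pool_mb")
        print(cur.fetchone())
        cur.execute(
            "SHOW GLOBAL STATUS WHERE Variable_name IN "
            "('Innodb_buffer_pool_reads', 'Innodb_buffer_pool_read_requests')"
        )
        # reads = physical I/O from disk, read_requests = logical reads
        print(cur.fetchall())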

Scaling the Database Tier

The Web server tier can be set up with autoscaling in AWS because the data on each server does not change dynamically. Sure, some files and images get uploaded or new code deployed, but that happens only intermittently.

Scaling the database tier, however, typically requires some finesse and careful execution, as it poses specific challenges. The most widely used approach is to maintain a single master database that receives all changes -- the INSERT, UPDATE, and DELETE statements -- from the application. All SELECT activity from the application is sent to one or more slaves. These slaves can be reimaged regularly, so you always have an up-to-date MySQL slave image ready. When you need to add read capacity, simply spin up the latest image; the instance starts, MySQL starts, and the new slave connects to the single master just as the others do.
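One way the image's startup logic might wire a new slave to the master is shown below. The hosts, replication user, and binlog coordinates are placeholders and would come from your own backup or snapshot metadata:

    import pymysql

    slave = pymysql.connect(host="new-slave-host", user="admin",
                            password="change-me")
    with slave.cursor() as cur:
        # point the fresh slave at the single master and start replicating
        cur.execute(
            "CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='repl', "
            "MASTER_PASSWORD='change-me', MASTER_LOG_FILE='mysql-bin.000123', "
            "MASTER_LOG_POS=4"
        )
        cur.execute("START SLAVE")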

If capacity on the master database is waning, you can scale that server vertically. Set up a new, larger EC2 instance in AWS, then set it to replicate off the master. Point all of your existing slaves at the new master, then fail the application over to the new, larger instance. Voilà, you've scaled vertically with zero downtime.
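The repointing step looks much like the slave setup above, repeated for each existing slave once the larger instance has caught up. A rough sketch, with hypothetical hosts and coordinates that must match the new master's binlog position:

    import pymysql

    NEW_MASTER = "bigger-master-host"

    for host in ["slave-1", "slave-2"]:
        conn = pymysql.connect(host=host, user="admin", password="change-me")
        with conn.cursor() as cur:
            cur.execute("STOP SLAVE")
            cur.execute(
                f"CHANGE MASTER TO MASTER_HOST='{NEW_MASTER}', "
                "MASTER_USER='repl', MASTER_PASSWORD='change-me', "
                "MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4"
            )
            cur.execute("START SLAVE")
        conn.close()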

Actionable Advice for Databases on AWS

Now that you have a handle on the AWS environment and some of the hurdles you will encounter along the way, here is a targeted list of considerations for your database migration to the cloud, in particular around performance and disaster recovery.
