Ever since Amazon launched its Simple Storage Service (S3) in 2006, people have been using it to prop up Websites hosted with other service providers.
Initially S3 was used to host large files, such as movies, for which S3 is both cheaper and faster than most Web hosting providers. Hosting other files, such as images, was initially problematic because of latency (that is, a delay between a browser requesting a file and receiving it). This was addressed when S3 added more data centers worldwide and better load balancing.
However, until now it’s been impossible to host entire Websites at S3 because there was no way to define root and error documents within an S3 ‘bucket’ (a bucket being S3’s name for an individual storage container).
In other words, there was no way to configure S3 to serve index.html when visitors were directed to an S3 bucket, and no way to serve something like error.html if something went wrong.
This has now changed. Root and error documents can now be defined for each bucket, although there are a handful of important caveats.
Firstly, because S3 is just dumb storage, hosting an entire site is only possible for entirely static content; that is, nothing more than HTML files, images, and so on. Anybody wanting to use PHP or similar will need Amazon’s Elastic Compute Cloud (EC2) or a standard hosting provider. That means you couldn’t host something like a WordPress blog at S3, for example.
Secondly, it’s impossible to host a root domain at S3 because S3 can only be accessed via CNAME redirection in DNS records (S3 has no static endpoint IP address, so it can’t be configured for the A-record). In other words, hosting www.example.com is possible, but not example.com. A separate hosting service configured for the DNS A-record would need to direct visitors arriving at example.com to www.example.com. (Incidentally, many experts consider adding www as a CNAME record to be bad practice, although it works fine.)
Setting up complete Website hosting with S3 is easy. Start by visiting the S3 control panel and creating a new bucket in your AWS account, named after the Web address that will point to it. For example, if I wanted visitors to www.keirthomas.com to reach a bucket, I’d name it www.keirthomas.com.
Then upload all your Website’s files using the Upload button on the S3 console, or via a separate client application, if you use one. Don’t forget to set all the files as publicly accessible, which you can do from the Upload window.
Next, right-click the new bucket, listed on the left of the console, and select Properties. In the new panel at the bottom of the window, click the Website tab and ensure Enabled is checked, before typing the filenames of your index and error documents. Note that it’s not possible to specify separate documents for individual 4xx errors, such as 404 or 403; all errors will be directed to the same error page. Click the Save button when done.
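Behind the scenes, the console applies a website configuration to the bucket. As a rough sketch, the equivalent configuration document accepted by S3’s API looks something like this, assuming index.html and error.html as the filenames:

```xml
<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <!-- Served when a visitor requests the bucket root -->
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <!-- Served for any 4xx error -->
  <ErrorDocument>
    <Key>error.html</Key>
  </ErrorDocument>
</WebsiteConfiguration>
```

You never need to write this by hand if you use the console, but it’s useful to know what the Website tab is actually setting.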
Make a note of the address listed alongside Endpoint, beneath the index and error document filenames in the same panel. Now head over to your domain registrar’s configuration panel and create a new CNAME record for www, specifying the S3 endpoint address (remove the http:// prefix and any trailing slash).
How CNAME configuration is done varies from provider to provider. You’ll probably need to delete the A-record for www, too.
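The details vary, but with a provider that exposes raw DNS records the result would resemble this hypothetical BIND-style zone fragment (the endpoint shown is illustrative; use the exact one from your bucket’s Properties panel):

```
; www points at the S3 website endpoint via CNAME
www    IN    CNAME    www.example.com.s3-website-us-east-1.amazonaws.com.
```

Note the trailing dot on the endpoint, which zone-file syntax requires for a fully qualified name; Web-based DNS panels usually add it for you.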
And that’s it! Once the DNS changes have propagated, which might take a few hours, visitors to your Website will be directed straight to the S3 bucket containing your site and will see the index.html file.
As an additional step, it’s a good idea to point the A-record address (that is, the address without the www prefix) to a simple hosting service where you can set up an automated redirect to the www address.
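One minimal way to do that, assuming the fallback host runs Apache and honors .htaccess files, is a single redirect rule at the root of the bare-domain site:

```
# Hypothetical .htaccess for example.com: send everything to www
Redirect permanent / http://www.example.com/
```

Most hosting providers also offer a point-and-click redirect option that achieves the same thing.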
If yours is a high-traffic site, it might also make sense to use Amazon’s CloudFront service to ensure traffic is served from the nearest geographical server, which will avoid any latency issues. This will add to the costs, though.
So how does using S3 compare on cost with a standard provider? I use Dreamhost’s basic Web hosting package for the sites I run, which costs me $119 per year and offers unlimited bandwidth and storage (although I share a server with others, which can limit the speed at which my site is served).
However, unlike S3, the Dreamhost price also includes PHP, databases, and various useful extras, such as one-click installs of popular site software.
One static site I run from Dreamhost sees around 300 visitors a day and offers a 2MB file for download. From the Dreamhost configuration panel, I can see that the site burns through around 350MB of bandwidth per day, even on weekends.
A quick rough calculation shows that’s about 10.65GB per month, which if served via S3 would cost me absolutely nothing under the Amazon Web Services Free Usage Tier, which allows up to 15GB of data transfer per month. I might still incur a few dollars across the year because of GET requests: the Free Usage Tier allows 20,000 GET requests per month, and my 300 visitors a day typically visit many pages and therefore download many images, each of which incurs a separate GET.
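For anyone wanting to repeat the sums with their own figures, here’s a rough sketch of the arithmetic in Python. The 10-GETs-per-visit figure is a made-up assumption for illustration; check your own server logs for a real number.

```python
# Back-of-the-envelope S3 free-tier check for a small static site.
MB_PER_DAY = 350        # bandwidth observed in the hosting panel
VISITORS_PER_DAY = 300
GETS_PER_VISIT = 10     # hypothetical: pages + images fetched per visit

DAYS_PER_MONTH = 365 / 12

monthly_gb = MB_PER_DAY * DAYS_PER_MONTH / 1000
monthly_gets = VISITORS_PER_DAY * DAYS_PER_MONTH * GETS_PER_VISIT

print(f"Transfer: {monthly_gb:.2f} GB/month (free tier: 15 GB)")
print(f"GETs:     {monthly_gets:,.0f}/month (free tier: 20,000)")
# → Transfer: 10.65 GB/month (free tier: 15 GB)
# → GETs:     91,250/month (free tier: 20,000)
```

Under those assumptions the bandwidth fits comfortably inside the free tier, but the GET count blows well past it, which is where the few dollars a year come from.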
While a comparison with Dreamhost is simply unfair because Dreamhost provides so much more than simple storage, the S3 price is very compelling if you’re paying a per-GB or per-TB fee with your existing provider. Plus, S3 will always be fast, and you’ll never run out of storage space.
Keir Thomas has been making known his opinion about computing matters since the last century, and more recently has written several best-selling books. You can learn more about him at http://keirthomas.com. His Twitter feed is @keirthomas.