Cloud CIO: The Next Generation Cloud Offering


Page 2 of 3

First, Open Compute sets a new benchmark for operating a data center. Unlike, say, Google and Microsoft, which have also implemented extremely efficient data centers but kept the details proprietary, Facebook has chosen to share its design.

Essentially, Facebook has rethought every element of a data center in order to make the overall aggregation of components and operations as efficient as possible. In announcing the project, Facebook noted that its data center implementing the Open Compute design uses 38% less energy to do the same amount of work as an older data center, while costing 24% less. As already noted, the data center uses no air conditioning; this is because it's sited in Oregon, where outside air temperatures obviate the need for cooling. Additional measures include using a specialized energy distribution system, custom-designing servers that have no unnecessary components, and eliminating a central UPS. By treating its data center as an integrated whole, rather than an assembled collection of standardized components, Facebook has made it far more efficient. Its announcement stated that the data center operates at a PUE of 1.07, significantly better than its previous 1.5.
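To put those PUE figures in perspective: PUE (power usage effectiveness) is total facility energy divided by the energy delivered to IT equipment, so anything above 1.0 is overhead spent on cooling, power distribution, and the like. A minimal Python sketch of the arithmetic (the function name is my own):

```python
def overhead_fraction(pue: float) -> float:
    """Fraction of total facility energy spent on non-IT overhead.

    PUE = total facility energy / IT equipment energy, so for each
    unit of total energy, (pue - 1) / pue goes to overhead.
    """
    return (pue - 1.0) / pue

old_pue, new_pue = 1.5, 1.07  # figures from Facebook's announcement

# For a fixed IT load, total energy scales with PUE, so the PUE
# improvement alone cuts total energy use by (1.5 - 1.07) / 1.5,
# roughly 29%; the announced 38% savings presumably also reflects
# the server redesign and other measures.
energy_reduction = (old_pue - new_pue) / old_pue

print(f"overhead at PUE {old_pue}: {overhead_fraction(old_pue):.1%}")
print(f"overhead at PUE {new_pue}: {overhead_fraction(new_pue):.1%}")
print(f"energy reduction from PUE alone: {energy_reduction:.1%}")
```

In other words, at a PUE of 1.5 a third of the facility's energy never reaches a server; at 1.07, only about 6.5% does not.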

Facebook's announcement reinforces our perspective that the nature of IT operations is going to change: IT will increasingly be a service provider, but not necessarily an asset owner. How can one justify owning and operating a data center unless it achieves the kind of efficiency now possible? The coming role for infrastructure operations will be selecting among physical infrastructure providers and implementing operational oversight and monitoring. There will undoubtedly be many challenges in pursuing this strategy, including security, bandwidth sufficiency, and SLA monitoring; however, failing to pursue a low-cost infrastructure strategy is inappropriate when options are available.

CloudFoundry, for its part, arrives at an interesting time for software efforts within IT organizations as they seek to adopt cloud computing. While there has been significant use of Amazon Web Services and other cloud providers, much of what has been implemented is what we refer to as "Hosting 2.0." By this we mean that developers are using virtual machines within the cloud infrastructure, but designing and implementing applications the same way as when physical machines provided the computing environment. Specifically, applications are installed as though they will run in a persistent virtual machine, with any configuration or application topology changes implemented manually, and with no redundancy to protect availability in the event of resource failure.

The problem is that all the assumptions underlying that approach are inappropriate in a cloud computing environment. To achieve the agility and elasticity that cloud computing promises, applications must be designed and operated differently. Our conclusion, though, is that many software engineers will struggle to acquire the skills needed to develop these types of applications.

Consequently, I have begun to conclude that for many organizations, a PaaS software infrastructure will be critical to enable "true" cloud application development. Furthermore, that PaaS infrastructure has to support the languages and application design patterns that developers know and are comfortable with. Only with a framework that abstracts the details of achieving scalable elasticity, automatic database replication, integration with other platform services, and so on, will many organizations be able to obtain the promise of cloud computing.

This is why the CloudFoundry initiative looks so promising. It supports "traditional" Java development as well as later-generation languages such as Ruby. It envisions easy access to platform services such as queues. And it promises to help organizations avoid lock-in by enabling transparent deployment to any number of public clouds as well as internal clouds.
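The deployment model gives a flavor of that abstraction. A Cloud Foundry application manifest declares what the developer cares about (instance count, memory, bound services) and leaves VM provisioning, routing, and failover to the platform. A minimal sketch, with hypothetical application and service names:

```yaml
---
applications:
- name: orders-app          # hypothetical application name
  memory: 512M              # per-instance memory limit
  instances: 3              # platform keeps three copies running
  path: target/orders.war   # a "traditional" Java artifact
  services:
  - orders-queue            # bound platform service, e.g. a message queue
```

Nothing here names a machine or an IP address; scaling the app is a matter of changing `instances`, which is exactly the kind of plumbing the column argues mainstream developers should not have to build themselves.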

This appears to me to be an exactly on-target approach to what is required: Specialized software engineers take care of the complicated plumbing bits, while mainstream application developers focus on solving business problems, secure in the knowledge that what they develop will automatically obtain the benefits promised by cloud computing.
