Just Over the Horizon, Private Clouds

Private cloud computing is so young that it's hard to talk about -- it doesn't yet fully exist, and is found only in skeletal, experimental form. Many CEOs, CFOs, and COOs are rightly skeptical about how much of their company's most important possession -- its data -- should take up residence alongside other firms' operations on a shared server. In the multitenant cloud, who knows? Your fiercest competitor might be occupying the same server as you, grateful for any slop-over of your data.

We'll take a look at why some corporate enterprise data centers, both large and small, will move toward becoming more cloudlike. The users of these internal, or private, clouds, as opposed to the users of the publicly accessible Amazon Elastic Compute Cloud (EC2), Google App Engine, and Microsoft Azure, will not be members of the general public. They will be the employees, business partners, and customers of the business, each of whom will be able to use the internal cloud based on the role he or she plays in the business.

Management Strategies for the Cloud Revolution
InformationWeek, which tries to be out front in addressing the interests of business computing professionals, first aired the concept of private clouds as a cover story on April 13, 2009, after hearing about the idea in background interviews over the preceding months. In July, Rackspace announced that it would reserve dedicated servers in its public cloud for those customers seeking to do "private" cloud computing. In August, Amazon Web Services announced that it would offer specially protected facilities within its EC2 public cloud as the Amazon Virtual Private Cloud.

These developments set off a debate inside InformationWeek and among cloud proponents and critics throughout the business world. John Foley, editor of the Plug into the Cloud feature of InformationWeek.com, asked the question: How can a public cloud supplier suddenly claim to offer private cloud services? Weren't shared, multitenant facilities awkward to redefine as "private"? Some observers think a public cloud can offer secure private facilities, but most sensible observers (and most CEOs) share Foley's doubt. How good is a public cloud supplier at protecting "private" operations within its facilities? In fact, there are already some protections in place in the public cloud. There is no slop-over of one customer's data into another's in the multitenant public cloud; if there were, the virtual machines running those operations would experience corrupted instructions and screech to a halt. Still, what if an intruder gains access to the physical server on which your virtual machine is running? Who is responsible if damage is done to the privacy of your customers' identity information through no fault of your company's?

There are no clear answers to these questions yet, although no one assumes that the company that owns the data is somehow absolved of responsibility just because it's moved it into the cloud. What security specialists refer to as the trust boundary, the layer of protections around data that only trusted parties may cross, has moved outside the perimeter of the corporation along with the data, but no one is sure where it has moved to. The question is, what share of responsibility for a lapse in data security would a well-managed cloud data center bear compared to that of the data's owner?

There are good reasons why CEOs don't trust the idea of sending their company's data into the public cloud. For one thing, they are responsible for guaranteeing the privacy and security of the handling of the data. Once it's sent into the cloud, no one inside the company can be completely sure where it's physically located anymore: on which server, which disk array, or maybe even which data center. If something untoward happens at a loosely administered site, it probably will not be an adequate defense to say, "We didn't know our data was there." In fact, Greg Shipley, chief technology officer for the Chicago-based information security consultancy Neohapsis, wrote in Navigating the Storm, a report by InformationWeek Analytics, "Cloud computing provides ... an unsettling level of uncertainty about where our data goes, how it gets there and how protected it will be over time."

Because of these concerns, the security of the cloud is the first question raised in survey after survey whenever business leaders are asked about their plans for cloud computing. And that response is frequently followed by the conclusion that they'd prefer to first implement cloud computing on their own company premises in a "private cloud."

On the face of it, this is a contradiction. By our earlier definition, cloud computing describes a new business model for distributing external computing power to end users on a pay-as-you-go basis, giving the end user a degree of programmatic control over cloud resources and allowing new economies of scale to assert themselves. The idea of achieving competitive economies of scale seems to trip up the notion of a private cloud. With a limited number of users, how will the private cloud achieve the economies of scale that an EC2 or Azure does?
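The economies-of-scale question comes down largely to utilization: a server that sits mostly idle still incurs power, space, and amortized hardware costs, so the cost of each hour of useful work falls as more tenants share the machine. The toy model below illustrates the arithmetic; the dollar figures and utilization rates are hypothetical assumptions, not numbers from any vendor.

```python
# Toy utilization model: cost per hour of *useful* work rises as servers
# sit idle, since a server costs roughly the same whether busy or not.
# All figures below are illustrative assumptions, not vendor data.

def cost_per_used_hour(hourly_cost_per_server: float, utilization: float) -> float:
    """Cost of one hour of useful work when the server is busy only
    `utilization` fraction of the time (0 < utilization <= 1)."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return hourly_cost_per_server / utilization

# A lightly loaded single-tenant data center vs. a public cloud that
# pools many tenants' workloads onto the same hardware.
private = cost_per_used_hour(0.50, 0.15)   # assume 15% average utilization
public = cost_per_used_hour(0.50, 0.60)    # assume 60% via multitenant pooling

print(f"private: ${private:.2f} per useful hour")   # $3.33
print(f"public:  ${public:.2f} per useful hour")    # $0.83
```

Under these assumed numbers, the multiplexed public cloud delivers useful compute at a quarter of the single-tenant cost, which is the scale advantage a private cloud with few users struggles to match.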

Nevertheless, I think many private enterprises are already seriously considering the private cloud. Until they understand cloud computing from the inside out, these enterprises won't risk data that's critical to the business.

If the on-premises private cloud offers both augmented computing power and guarantees of data protection, then it is likely to be pressed into service. Its owners will have made a conscious trade-off, accepting weaker economies of scale in exchange for guaranteed data security. A private cloud doesn't have to compete with EC2 or Azure to justify its existence; it merely needs to be cheaper than the data center architecture that preceded it. If it is, the private cloud's advocates will have a firm business case for building it out.

Hardware Choices for the Private Cloud

Part of the argument for adopting public cloud computing is that companies pay only for what they use, without an up-front outlay in capital expense. But that argument can also be turned on its head and used for the private cloud. An IT manager could say, "We're making the capital investment anyway. We have 100 servers that will need a hardware refresh later this year. Why not use this purchase as the first step toward converting our data center into something resembling those external clouds?" The benefits of private clouds will flow out of such decision making.
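The IT manager's argument above is ultimately a break-even calculation: annualize the cost of the owned hardware refresh and compare it against renting equivalent capacity by the hour. A minimal sketch of that comparison, with entirely made-up prices, lifetimes, and rates, follows.

```python
# Sketch of the capital-refresh argument with hypothetical numbers.
# Neither the prices nor the rates come from any real vendor.

def annual_owned_cost(servers, unit_price, lifetime_years, yearly_opex_per_server):
    """Annualized cost of buying a server fleet and running it:
    purchase price spread over its useful life, plus yearly operating cost."""
    capex_per_year = servers * unit_price / lifetime_years
    opex_per_year = servers * yearly_opex_per_server
    return capex_per_year + opex_per_year

def annual_rented_cost(servers, hourly_rate, hours_per_year=24 * 365):
    """Cost of renting the same capacity by the hour, running around the clock."""
    return servers * hourly_rate * hours_per_year

# The 100-server refresh from the text, priced with assumed figures.
owned = annual_owned_cost(100, 4000, 4, 900)   # $190,000 per year
rented = annual_rented_cost(100, 0.25)         # $219,000 per year

print(f"owned:  ${owned:,.0f}/yr")
print(f"rented: ${rented:,.0f}/yr")
```

Run around the clock, the owned fleet wins under these assumptions; pay-as-you-go wins only when capacity is needed intermittently. The business case for the private cloud rests on exactly this kind of comparison against the architecture it replaces, not on beating EC2's posted prices.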

Google is building its own servers because the configurations of servers in the marketplace so far do not meet the cost/benefit requirements of its cloud architecture. If Google, Yahoo!, and others continue to publish information on their data centers, the data center managers at companies will figure out how to approximate a similar hardware makeup. Indeed, Dell is rapidly shifting gears from being a personal computing and business computing supplier to becoming a cloud supplier as well. As I was working on a report at the 2009 Cloud Computing Conference & Expo, Barton George, Dell's newly appointed cloud computing evangelist, poked his head through the door to tell me that Dell is in the process of discovering the best designs for cloud servers to produce for private cloud builders.

Dell's staff is practiced at managing the construction and delivery of personal computers and business servers. Why not turn those skills toward becoming a cloud hardware supplier? In doing so, it will be turning a cherished business practice upside down. Dell lets a buyer self-configure the computer she wants on the Dell Web site. Then, Dell builds and delivers that computer in a highly competitive way. To become a cloud supplier, it will have to figure out in advance what makes a good cloud server, concentrate on getting the best deals on parts for those types of servers, and then, upon a customer order, quickly deliver thousands of identical units. Forrest Norrod, general manager of Dell's Data Center Solutions, said his business unit has supplied enough types of servers to Amazon, Microsoft Azure, and other cloud data centers to have derived a handful of types that are favored by cloud builders.

Cisco Systems, a new entrant in the blade server market, is a primary supplier to the NASA Nebula cloud under construction in Mountain View, California, and would doubtless like to see its highly virtualizable Unified Computing System used to build additional clouds. HP and IBM plan to supply cloud builders as well, although IBM's deepest wish is to find a new mass market into which to sell its own Power processor, not the rival x86 processors built by Intel and AMD that currently dominate public cloud construction. Whether IBM will be able to convince customers to use its processor remains to be seen, but it has succeeded in the past at extending its product lines into successive technology evolutions of business. At the very least, expect the Power processors to appear in a Big Blue version of the public cloud still to come. Sun Microsystems would also like to see its hardware incorporated into cloud data centers, but its UltraSPARC server line is now owned by Oracle, and the uncertainty associated with that acquisition will temporarily stall cloud construction with UltraSPARC parts. Nevertheless, it's only a matter of time before "cloud"-flavored servers find their way into mainstream catalogs and well-known distribution channels, such as those of Dell, HP, Cisco Systems, and IBM.

It remains unlikely that CIOs and IT managers will start building a private cloud as a tentative or experimental project inside the company; few have the capital to waste on half measures. Instead, as the idea of cloud computing takes hold, small, medium-sized, and large enterprises will start recasting their data centers as cloud clusters. The example of public clouds and the economies of scale that flow from them will prove compelling.
