Just Over the Horizon, Private Clouds

The adoption of private cloud computing is so young that it's hard to talk about -- it's something that doesn't yet exist fully, but is found only in skeletal, experimental form. Many CEOs, CFOs, and COOs are rightly skeptical about how much of their company's most important possession -- its data -- should take up residence alongside other firms' operations on a shared server. In the multitenant cloud, who knows? Your fiercest competitor might be occupying the same server as you and be grateful for any slop-over of your data.

We'll take a look at why some corporate enterprise data centers, both large and small, will move toward becoming more cloudlike. The users of these internal, or private, clouds, as opposed to the users of the publicly accessible Amazon Elastic Compute Cloud (EC2), Google App Engine, and Microsoft Azure, will not be members of the general public. They will be the employees, business partners, and customers of the business, each of whom will be able to use the internal cloud based on the role he or she plays in the business.

InformationWeek, which tries to be out front in addressing the interests of business computing professionals, first aired the concept of private clouds as a cover story on April 13, 2009, after hearing about the idea in background interviews over the preceding months. In July, Rackspace announced that it would reserve dedicated servers in its public cloud for those customers seeking to do "private" cloud computing. In August, Amazon Web Services announced that it would offer specially protected facilities within its EC2 public cloud as the Amazon Virtual Private Cloud.

These developments set off a debate inside InformationWeek and among cloud proponents and critics throughout the business world. John Foley, editor of the Plug into the Cloud feature of InformationWeek.com, asked the question: How can a public cloud supplier suddenly claim to offer private cloud services? Weren't shared, multitenant facilities awkward to redefine as "private"? Some observers think that a public cloud can offer secure private facilities, but any sensible observer (and most CEOs) would share Foley's skepticism. How good is a public cloud supplier at protecting "private" operations within its facilities? In fact, there are already some protections in place in the public cloud. There is no slop-over of one customer's data into another's in the multitenant public cloud. If there were, the virtual machines running those operations would experience corrupted instructions and screech to a halt. Still, what if an intruder gains access to the physical server on which your virtual machine is running? Who is responsible if damage is done to the privacy of your customers' identity information through no fault of your company's?

There are no clear answers to these questions yet, although no one assumes that the company that owns the data is somehow absolved of responsibility just because it's moved it into the cloud. What security specialists refer to as the trust boundary, the layer of protections around data that only trusted parties may cross, has moved outside the perimeter of the corporation along with the data, but no one is sure where it has moved to. The question is, what share of responsibility for a lapse in data security would a well-managed cloud data center bear compared to that of the data's owner?

There are good reasons why CEOs don't trust the idea of sending their company's data into the public cloud. For one thing, they are responsible for guaranteeing the privacy and security of the handling of the data. Once it's sent into the cloud, no one inside the company can be completely sure where it's physically located anymore -- on which server, which disk array, or maybe even which data center. If something untoward happens at a loosely administered site, it probably will not be an adequate defense to say, "We didn't know our data was there." In fact, Greg Shipley, chief technology officer for the Chicago-based information security consultancy Neohapsis, wrote in Navigating the Storm, a report by InformationWeek Analytics, "Cloud computing provides . . . an unsettling level of uncertainty about where our data goes, how it gets there and how protected it will be over time."

Because of these concerns, the security of the cloud is the first question raised in survey after survey whenever business leaders are asked about their plans for cloud computing. And that response is frequently followed by the conclusion that they'd prefer to first implement cloud computing on their own company premises in a "private cloud."

On the face of it, this is a contradiction. By our earlier definition, cloud computing involves a new business model for distributing external computing power to end users on a pay-as-you-go basis, giving the end user a degree of programmatic control over cloud resources and allowing new economies of scale to assert themselves. At first glance, the need for competitive economies of scale seems to trip up the notion of a private cloud. With a limited number of users, how will the private cloud achieve the economies of scale that an EC2 or Azure does?

Nevertheless, I think many private enterprises are already seriously considering the private cloud. Until they understand cloud computing from the inside out, these enterprises won't risk data that's critical to the business.

If the on-premises private cloud offers both augmented computing power and guarantees of data protection, then it is likely to be pressed into service. Its owners will have made a conscious trade-off between guaranteed data security in the cloud and economies of scale. A private cloud doesn't have to compete with EC2 or Azure to justify its existence. It merely needs to be cheaper than the architecture in the data center that preceded it. If it is, the private cloud's advocates will have a firm business case for building it out.

Hardware Choices for the Private Cloud

Part of the argument for adopting public cloud computing is that companies pay only for what they use, without an up-front outlay in capital expense. But that argument can also be turned on its head and used for the private cloud. An IT manager could say, "We're making the capital investment anyway. We have 100 servers that will need a hardware refresh later this year. Why not use this purchase as the first step toward converting our data center into something resembling those external clouds?" The benefits of private clouds will flow out of such decision making.

Google is building its own servers because the configurations of servers in the marketplace so far do not meet the cost/benefit requirements of its cloud architecture. If Google, Yahoo!, and others continue to publish information on their data centers, the data center managers at companies will figure out how to approximate a similar hardware makeup. Indeed, Dell is rapidly shifting gears from being a personal computing and business computing supplier to becoming a cloud supplier as well. As I was working on a report at the 2009 Cloud Computing Conference & Expo, Barton George, Dell's newly appointed cloud computing evangelist, poked his head through the door to tell me that Dell is in the process of discovering the best designs for cloud servers to produce for private cloud builders.

Dell's staff is practiced at managing the construction and delivery of personal computers and business servers. Why not turn those skills toward becoming a cloud hardware supplier? In doing so, it will be turning a cherished business practice upside down. Dell lets a buyer self-configure the computer she wants on the Dell Web site. Then, Dell builds and delivers that computer in a highly competitive way. To become a cloud supplier, it will have to figure out in advance what makes a good cloud server, concentrate on getting the best deals on parts for those types of servers, and then, upon a customer order, quickly deliver thousands of identical units. Forrest Norrod, general manager of Dell's Data Center Solutions, said his business unit has supplied enough types of servers to Amazon, Microsoft Azure, and other cloud data centers to have derived a handful of types that are favored by cloud builders.

Cisco Systems, a new entrant in the blade server market, is a primary supplier to the NASA Nebula cloud under construction in Mountain View, California, and would doubtless like to see its highly virtualizable Unified Computing System used to build additional clouds. HP and IBM hope to see their hardware used the same way, although IBM's deepest wish is to find a new mass market into which to sell its own Power processor, not the rival x86 chips from Intel and AMD that currently dominate public cloud construction. Whether IBM will be able to convince customers to use its processor remains to be seen, but it has succeeded in the past at extending its product lines into successive evolutions of business technology. At the very least, expect the Power processors to appear in a Big Blue version of the public cloud still to come. Sun Microsystems also would like to see its hardware incorporated into cloud data centers, but its UltraSPARC server line is now owned by Oracle. The uncertainty associated with that acquisition will temporarily stall cloud construction with UltraSPARC parts. Nevertheless, it won't be long before "cloud"-flavored servers find their way into mainstream catalogs and well-known distribution channels, such as those of Dell, HP, Cisco Systems, and IBM.

It remains unlikely that CIOs and IT managers will start building a private cloud as a tentative or experimental project inside the company; few have the capital to waste on half measures. Instead, as the idea of cloud computing takes hold, small, medium-sized, and large enterprises will start recasting their data centers as cloud clusters. The example of public clouds and the economies of scale that flow from them will prove compelling.

This doesn't mean that stalwart Unix servers and IBM mainframes will be pushed onto a forklift and carted away, replaced by sets of, say, $2,400 x86 servers. On the contrary, proprietary Unix servers and mainframes run many business applications that can't be easily converted to the x86 instruction set. For many years to come, applications in COBOL, FORTRAN, RPG, Smalltalk, and other languages, written in-house years ago or customized from products that often no longer exist, will still be running in the corporate data center. But there are some applications running on legacy systems that can be converted to the x86 architecture and run in the internal cloud, and many new applications will take the x86 architecture as their target. Private clouds may never achieve the economies of scale of the big public clouds, but they don't have to. They only need to be cheaper to operate than legacy systems.

The process is already well under way. While Unix and the mainframe remain a presence, the fastest-growing operating systems in corporate data centers are Windows Server and Linux, both designed for x86 systems. Consolidating more applications onto one server through virtualization, thereby reducing the total number of servers, can be done on any of the architectures named, but the most vigorous activity is in virtualizing x86 servers. VMware, the market leader, grew from a start-up to $2 billion in revenues in 10 years. VMware, Citrix Systems, and now Microsoft produce virtualization products for the x86 servers, with open source products Xen and KVM available as well. It's possible to cluster such machines together and run them as a pooled resource from one management console, a first step toward the private cloud.

The Steps Leading to the Private Cloud

But why would a company want to build its own private cloud? Like the public cloud, the private cloud would be built out of cost-effective PC parts. It would be run as a pool of servers functioning something like a single giant computer through a layer of virtual machine management software. Workloads can be spread around the pool so that the load is balanced across the available servers. If more capacity is needed for a particular workload, the private cloud, like the public cloud, would be elastic. The workload can be moved to where that capacity already exists, or more hardware can be brought on line to add capacity. Once a peak load has passed, any server that isn't needed can be shut down to save energy.
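
To make that elasticity concrete, here is a minimal scheduling sketch in Python. Every name in it -- the Host and Workload classes, the capacity figures, the place() function -- is hypothetical, invented only to illustrate the pattern of balancing work across a pool, powering on spare capacity for a peak, and shutting idle servers down afterward; it is not the interface of any particular cloud product.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    capacity: int            # arbitrary capacity units per server
    powered_on: bool = False
    used: int = 0

    def free(self) -> int:
        return self.capacity - self.used if self.powered_on else 0

@dataclass
class Workload:
    name: str
    demand: int              # capacity units this workload needs

def place(pool, job):
    """Put the job on the least-loaded powered-on host; power on a spare if the pool is full."""
    candidates = [h for h in pool if h.free() >= job.demand]
    if not candidates:                                   # elastic step: bring capacity online
        spare = next(h for h in pool if not h.powered_on)
        spare.powered_on = True
        candidates = [spare]
    target = max(candidates, key=lambda h: h.free())     # simple load balancing
    target.used += job.demand
    return target

def shed_idle(pool):
    """Once the peak has passed, shut down any server carrying no work to save energy."""
    for h in pool:
        if h.powered_on and h.used == 0:
            h.powered_on = False

pool = [Host("srv01", 100, powered_on=True),
        Host("srv02", 100, powered_on=True),
        Host("srv03", 100)]                              # srv03 held in reserve
for job in (Workload("web", 60), Workload("reports", 70), Workload("batch", 50)):
    print(job.name, "->", place(pool, job).name)
shed_idle(pool)
```

In a production pool the virtualization management layer, not hand-rolled code, would make these decisions, but the trade is the same: spread the load, add capacity only when a peak demands it, and power down whatever the trough no longer needs.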

Furthermore, the end users of the private cloud can self-provision any kind of computer -- a virtual machine to run in the cloud -- that they wish. The private cloud can measure their use of the virtual machine and bill their department for hours of use based on the operating costs for the type of system they chose. This self-provisioning and chargeback system is already available through the major virtualization software vendors as what's called a "lab manager." That product was aimed at a group of users who are likely to be keenly interested in self-provisioning -- the software developers who need different types of software environments in which to test-drive their code. After they know that their code will run as intended, they turn it over to a second group of potential private cloud users, the quality assurance managers. These managers want to test the code for the load it can carry -- how many concurrent users, how many transactions at one time? They want to make sure that it does the work intended and will work with other pieces of software that must depend on its output.
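
As a rough illustration of what that chargeback arithmetic amounts to, here is a minimal sketch in Python. The virtual machine types, hourly rates, and usage records are all invented for the example; a real lab-manager product meters far more than hours -- storage, I/O, and software licenses among them.

```python
HOURLY_RATES = {"small": 0.08, "medium": 0.16, "large": 0.32}   # dollars per VM-hour, illustrative

usage = [
    # (department, vm_type, hours) -- sample metering records
    ("development", "small", 120),
    ("development", "large", 40),
    ("qa", "medium", 200),
]

def chargeback(records):
    """Roll metered virtual machine hours up into a per-department bill."""
    bills = {}
    for dept, vm_type, hours in records:
        bills[dept] = bills.get(dept, 0.0) + hours * HOURLY_RATES[vm_type]
    return bills

for dept, amount in chargeback(usage).items():
    print(f"{dept}: ${amount:,.2f}")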

Software development, testing, and quality assurance are a major expense in most companies' IT budgets. If the private cloud can have an impact on that expense, then there is an economic justification to support its implementation. But beyond the software professionals, there are many other potential internal users of this new resource. Frequently, line managers and business analysts, who understand the transactions and business processes that drive the company, lack the means of analyzing those processes from the data that they produce. If they could get that analysis quickly, for time periods they chose to define -- say, the weeks around a surge in a seasonal product -- they would be able to design new business processes and services based on the results.

By giving priority to such work, the private cloud could apportion resources in a more elastic manner than its predecessor data center filled with legacy systems. The many separate parts of the traditional data center had their own work to do; few were available for reassignment on a temporary basis. Or the private cloud could monitor the company's Web site, and when it's in danger of being overloaded, assign more resources to it rather than lose potential customers through turned-away or timed-out visitors.

Once a portion of the data center has been "pooled" and starts to be managed in a cloudlike manner, its example may bring more advocates to the fore, arguing that they too should have access to cloud-style resources. It might sound as if the private cloud is a prospect that remains far off in the future, but virtualization of the data center, as noted in the previous chapter, is already well under way. Such virtualization lays the groundwork for the move to a private cloud.

As cloud computing grows in importance in the economy, top management will ask if it is possible to achieve internally the economies that they're reading about in public clouds. Those that have built up skills in x86 servers and built out pools of virtualized servers will be able to answer yes, it is.

The next step would be to acquire the layer of virtualization management software to overlay the pool, provide monitoring and management tools, and gain automated ways of balancing load and migrating virtual machines around the pool.

VMware is leading the field with its vSphere 4 infrastructure package and vCenter management tools. In fact, vCenter can provide a view of the virtualized servers as a pooled resource, as if they were one giant computer, and manage them as a unit, although there is a limit to the number of physical servers one vCenter management console can cover. Citrix Systems' XenSource unit, the Virtual Iron part of Oracle, and Microsoft's System Center Virtual Machine Manager product can do many of these things as well.

A manager using vSphere 4 and vCenter can track which virtual machines are running, what jobs they're doing, and how much of their host server's capacity is being used. By moving virtual machines from physical server to physical server, the data center manager can balance the workload, shift virtual machines to servers that have spare capacity, and shut down servers that aren't needed in order to save energy.

Moving to a private cloud may not necessarily be a goal at many business data centers. But many of the fundamental trends driving efficient computing will point them in that direction anyway. Those who have built out an x86 data center and organized it as a virtualized pool will be well positioned to complete the migration to a private cloud. The better the economics of the cloud portion of the data center look internally, the more likely it is that the rest of the data center will be converted into the private cloud.

There's a second set of economics pushing the corporate data center toward a private cloud as well. Whether the CEO, the CFO, or the CIO likes it or not, there is going to be an explosion of computer power and sophisticated services on the Web, both in large public clouds and among smaller entrepreneurial providers of services that run in the public cloud. They will have much in common in that they will follow the standards of Web services, distribute their wares over the Internet, and keep their cost of operation as low as possible.

Even if top management in enterprises can live with higher costs in its own data centers, and there are good data security reasons why it will, that still leaves the problem of coordinating everything that could be done for the company by new and increasingly specialized business services in the external cloud.

Such services already exist, but they remain at an early stage of development compared to their potential. If you're dealing with a new customer and he places a large order with your firm, your order capture system goes out on the Web and checks his credit rating before you begin to process the order. If a $500,000 order comes in from a recognized customer in good standing, but the address is different from the one you normally ship to, the order fulfillment system automatically goes out on the Web, enlists an address-checking system to see whether the customer has a facility at the address listed, and collects data on whether the customer might expect the type of goods ordered to be shipped to that location. These services save businesses valuable time and money by automatically performing tasks that would take well-paid staff members hours to complete. Another example is online freight handling services, which can now take your order to ship goods between two points; consult their own directories of carriers, tolls, and current energy prices; and deliver a quote in seconds that proves valid no matter where in the country you're seeking to make a delivery. They will find the lowest-cost carrier with the attributes that you're seeking -- shipment tracking, confirmed delivery, reliable on-time delivery -- in a manner that surpasses what your company's shipping department could do with its years of experience.
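
A schematic sketch of that kind of order check, written in Python with stubbed-in services, shows how little glue code the pattern needs. The service names, data fields, and credit cutoff are invented for illustration; in a real integration the stubs would be calls to the vendors' published APIs.

```python
def check_credit(customer: str) -> int:
    """Stub standing in for an external credit-rating service."""
    return {"Acme Corp": 710, "Unknown LLC": 540}.get(customer, 500)

def verify_address(customer: str, address: str) -> bool:
    """Stub standing in for an external address-verification service."""
    return address.endswith("Industrial Way")             # placeholder for a real lookup

def validate_order(order: dict) -> bool:
    """Accept only if credit is sound and any unfamiliar ship-to address checks out."""
    if check_credit(order["customer"]) < 600:              # illustrative cutoff
        return False
    if order["ship_to"] not in order["known_addresses"]:
        return verify_address(order["customer"], order["ship_to"])
    return True

order = {"customer": "Acme Corp",
         "amount": 500_000,
         "ship_to": "12 Industrial Way",
         "known_addresses": ["400 Main St"]}
print(validate_order(order))                               # True: good credit, new address verified
```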

On every front, online information systems are dealing with masses of information to yield competitive results. To ignore such services is to put your business in peril, and indeed few businesses are ignoring them. The next generation may cede key elements of programmatic control to customers, allowing them to plug in more variables, change the destination of an order en route, fulfill other special requirements, and invoke partnerships and business relationships that work for them, ratcheting up the value of such services.

The alignment of the internal data center with external resources will become an increasingly important competitive factor, and many managers already sense it.

They've also seen a precedent. At one time, corporations built out high-performance proprietary networks to link headquarters to manufacturing and divisions at different locations. Two of these networks, Digital Equipment's DECnet and IBM's Systems Network Architecture (SNA), looked like solid investments for many years. But the growth of the Internet, at first a phenomenon that the corporation could ignore, began to take on a new meaning. The Internet could handle e-mail and file transfer for any company that was equipped to send things over a Transmission Control Protocol/Internet Protocol (TCP/IP) network. As the Internet became the default connection between universities, government agencies, and some companies, the cost of not having a TCP/IP network internally went up and up. At the same time, a vigorous debate ensued over whether TCP/IP was good enough for the needs of the modern enterprise beyond e-mail.

As previously mentioned, TCP/IP, the protocol on which the Internet is based, had been designed to survive a nuclear attack. It was a network of networks. If a segment of the network were to go down, the other segments would automatically route around it. It made for what critics labeled a "chatty" protocol. A router would map a good route for a particular message, then call up the next router on that route. "Are you there?" it would ask, and it would get a ping back, "Yes, I'm up and running." The sender would ping again, "Are you ready?" and the router on the next leg of the route would answer, "Yes, I'm ready." The message would be sent. The sender would then ask, "Did you receive the message?" and would get back a response of either "Yes, I did," or "No, send again."
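
For readers who want to see that rhythm in code rather than in conversation, here is a minimal Python sketch of the same ask-and-acknowledge pattern at the application level, over an ordinary TCP socket. The address, port, and messages are placeholders, and in real TCP the handshake and acknowledgments happen invisibly inside the protocol stack between the two endpoints.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9009          # placeholder address for the demo
ready = threading.Event()

def receiver():
    """Accept one connection and acknowledge whatever arrives -- the 'Yes, I did' leg."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # "Yes, I'm up and running"
        conn, _ = srv.accept()           # TCP's own handshake has already happened here
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"ACK " + data)

threading.Thread(target=receiver, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))            # "Are you there?" / "Yes, I'm ready"
    cli.sendall(b"order #42")            # the message itself
    print(cli.recv(1024).decode())       # "Did you receive it?" answered with an ACK
```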

Neither DECnet nor IBM's SNA would have tolerated such chitchat. It wasn't efficient, according to their designers. And perhaps TCP/IP is a bit of a Chatty Kathy or Gabby Hayes. But what made it hard to resist was the fact that it worked in so many cases. It was sufficient for much enterprise networking, as companies discovered once they began relying on internal TCP/IP networks, called intranets, to carry traffic derived from the Internet. These intranets turned out to be "good enough" even when their performance lagged that of the proprietary networks. And the messages got through with high reliability. They might on rare occasions arrive minutes or even hours later than the sender intended, when a router outage concentrated traffic on the nearby routers that many other routers had drafted as the way around the outage. Instead of maintaining an expensive proprietary network across the country, a company could let its internal TCP/IP network originate the message, then let the Internet serve as its external connection to other facilities, partners, suppliers, and customers.

If there was still resistance to conversion, it faded at the mention of the price. The Internet was free, and the TCP/IP protocol used inside the company was freely available, built into various versions of Unix and Linux and even Microsoft's Windows Server. When internal operations are aligned with those of the external world -- and the cost is the lowest available -- the decision on what to do next becomes inevitable. A similar alignment will occur between external cloud data centers and the internal cloud.

To prepare for that day, it's important to start expanding x86 administration skills and x86 virtualization skills rather than sitting out this early phase of cloud computing. There are immediate benefits to starting to reorient your computing infrastructure around the concept of the private cloud. This is an evolutionary, not revolutionary, process that will occur over many years.

I can hear the voices saying, don't go down the route of the private cloud: it will destroy your security mechanisms; it will drag down your performance in your most trusted systems; it will lead to disarray. I think instead that those who can't move in this direction will find that they are increasingly at a competitive disadvantage. Whether you're ready for it or not, cloud computing is coming to the rest of the world, and those who don't know how to adapt are going to find themselves in the path of those who do and who are getting stronger.

The private data center will remain private, that necessary place of isolation from the outside world where data is safe and someone always knows where it is. The private cloud in that data center is as much behind the firewall and able to implement defenses in depth as any other part of the data center.

The day will come when the virtual machines running on x86 servers will have a defensive watchman guarding the hypervisor, that new layer of software that is so close to all the operations of the server. The watchman will know the patterns of the server and will be looking for the specific things an intruder might do that would vary those patterns, blowing the whistle at the first untoward movement that it spots. In response, an automated manager will halt the virtual machine's processing, register what point it was at with the business logic and the data, then erase the virtual machine. A new virtual machine will then be constructed and loaded with the application instructions and data and pick up where its predecessor left off.

If the intruder is still out there, he may find a way to insinuate himself again, but the watchman will be ready. The more extreme advocates of security say that this process can be pushed to a more logical conclusion, where the virtual machine is arbitrarily stopped, killed, and deleted from the system every 30 minutes, whether it needs to be or not. A new one spun up from a constantly checked master on a secure server will be a known, clean entity. Such a practice would be so discouraging for a skilled hacker -- who needs, say, 29.5 minutes to steal an ID, find a password, await authentication, and then try to figure out a position from which to steal data -- that it would provide a level of defense in depth exceeding anything devised before. Such a watchman is just starting to appear from start-up network security vendors; the hypervisor firewall with intruder detection already exists as a leading-edge product. Only the periodic kill-off mechanism still needs to be built into virtual machine management.
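
As a sketch of how such a periodic kill-off might be orchestrated, consider the following Python loop. Every function name in it is hypothetical, standing in for the calls a real implementation would make to a hypervisor's management API; it only illustrates the cycle of checkpointing, destroying, and respawning a virtual machine from a known-clean master image.

```python
import time

KILL_INTERVAL = 30 * 60                  # 30 minutes, per the scheme described above

def checkpoint_state(vm_id):
    """Hypothetical hook: record where the business logic and data stand."""
    print(f"checkpointing {vm_id}")
    return {"vm": vm_id, "resume_token": "opaque-checkpoint"}

def destroy_vm(vm_id):
    """Hypothetical hook: erase the running virtual machine."""
    print(f"destroying {vm_id}")

def spawn_from_clean_master(state):
    """Hypothetical hook: build a fresh VM from a verified master image and hand it the saved state."""
    new_id = state["vm"] + "-respawn"
    print(f"spawning {new_id} from clean master")
    return new_id

def watchman_cycle(vm_id, intrusion_suspected=lambda: False):
    """Kill and respawn the VM on a timer, or immediately if the watchman raises an alarm."""
    started = time.monotonic()
    while True:                          # runs until interrupted; a sketch, not a service
        expired = time.monotonic() - started >= KILL_INTERVAL
        if expired or intrusion_suspected():
            state = checkpoint_state(vm_id)
            destroy_vm(vm_id)
            vm_id = spawn_from_clean_master(state)
            started = time.monotonic()
        time.sleep(5)                    # polling interval for the demo
```

The value of the scheme lies less in the code than in the guarantee it buys: an intruder's foothold is worth, at most, one interval.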

As the desire for private clouds builds, the technology convergence that has produced cloud computing will be given new management tools and new security tools to perfect its workings. We are at the beginning of that stage, not its end. Guaranteeing the secure operation of virtual machines running in the private enterprise data center-and in the public cloud-will enable the two sites to coordinate their operations. And that's ultimately what the private cloud leads to: a federated operation of private and public sites that further enhances the economies of scale captured in cloud computing.

This article was excerpted from the book Management Strategies for the Cloud Revolution by Charles Babcock (McGraw-Hill, 2010).
