Just Over the Horizon, Private Clouds

They've also seen a precedent. At one time, corporations built out high-performance proprietary networks to link headquarters to manufacturing and divisions at different locations. Two of these networks, Digital Equipment's DECnet and IBM's Systems Network Architecture (SNA), looked like solid investments for many years. But the growth of the Internet, at first a phenomenon that the corporation could ignore, began to take on a new meaning. The Internet could handle e-mail and file transfer for any company that was equipped to send things over a Transmission Control Protocol/Internet Protocol (TCP/IP) network. As the Internet became the default connection between universities, government agencies, and some companies, the cost of not having a TCP/IP network internally went up and up. At the same time, a vigorous debate ensued over whether TCP/IP was good enough for the needs of the modern enterprise beyond e-mail.

As previously mentioned, TCP/IP, the protocol on which the Internet is based, had been designed to survive a nuclear attack. It was a network of networks. If a segment of the network went down, the other segments would automatically route around it. It also made for what critics labeled a "chatty" protocol. The routers along the way simply forwarded packets over the best available route; the chatter took place between the machines at the two ends of the connection. "Are you there?" the sender would ask, and it would get a reply, "Yes, I'm up and running." The sender would ask again, "Are you ready?" and the receiver would answer, "Yes, I'm ready." The message would be sent. The sender would then ask, "Did you receive the message?" and would get back either "Yes, I did," or "No, send again."
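
In code, that exchange boils down to a stop-and-wait pattern: send, wait for an acknowledgment, resend if it doesn't come. The toy sketch below simulates the pattern; it is not real TCP (the operating system's network stack handles the actual handshake and retransmission), and the loss rate and retry limit are illustrative assumptions.

```python
import random

LOSS_RATE = 0.3    # illustrative assumption: 30% of transmissions vanish en route
MAX_RETRIES = 5

def unreliable_send(message: str) -> bool:
    """Simulate a link that silently drops some messages; True means an ACK came back."""
    return random.random() > LOSS_RATE

def send_with_ack(message: str) -> bool:
    """Stop-and-wait: transmit, wait for an acknowledgment, retransmit on silence."""
    for attempt in range(1, MAX_RETRIES + 1):
        if unreliable_send(message):
            print(f"attempt {attempt}: ACK received")
            return True
        print(f"attempt {attempt}: no ACK, resending")
    return False    # give up after MAX_RETRIES tries

if __name__ == "__main__":
    send_with_ack("Are you there?")
```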

Neither DECnet nor IBM's SNA would have tolerated such chitchat; it wasn't efficient, according to their designers. And perhaps TCP/IP is a bit of a Chatty Kathy or a Gabby Hayes. But what made it hard to resist was that it worked in so many cases. It was sufficient for much of enterprise networking, as companies discovered when they began relying on internal TCP/IP networks, called intranets, to carry Internet-style traffic. These intranets turned out to be "good enough" even when their performance lagged that of the proprietary networks, and the messages got through with high reliability. On rare occasions they might arrive minutes or even hours later than the sender intended, when a router outage concentrated traffic on the neighboring routers that the rest of the network drafted as the way around the outage. Instead of maintaining an expensive proprietary network across the country, a company could let its internal TCP/IP network originate the message, then let the Internet serve as its external connection to other facilities, partners, suppliers, and customers.

If there was still resistance to conversion, it faded at the mention of the price. The Internet was free, and the TCP/IP protocol used inside the company was freely available, built into various versions of Unix and Linux and even Microsoft's Windows Server. When internal operations align with those of the external world, and the cost is the lowest available, the decision on what to do next becomes inevitable. A similar alignment will occur between external cloud data centers and the internal cloud.

To prepare for that day, it's important to start building x86 administration and virtualization skills rather than sitting out this early phase of cloud computing. There are immediate benefits to beginning to reorient your computing infrastructure around the concept of the private cloud. This is an evolutionary process, not a revolutionary one, and it will play out over many years.

I can hear the voices saying: don't go down the route of the private cloud; it will destroy your security mechanisms, drag down the performance of your most trusted systems, and lead to disarray. I think instead that those who can't move in this direction will find themselves at a growing competitive disadvantage. Whether you're ready for it or not, cloud computing is coming to the rest of the world, and those who don't know how to adapt will find themselves in the path of those who do, and who are getting stronger.

The private data center will remain private, that necessary place of isolation from the outside world where data is safe and someone always knows where it is. The private cloud in that data center sits as firmly behind the firewall, and can implement the same defense in depth, as any other part of the data center.

The day will come when the virtual machines running on x86 servers will have a defensive watchman guarding the hypervisor, that new layer of software that sits so close to all the operations of the server. The watchman will know the server's normal patterns and will look for the specific things an intruder might do that would vary those patterns, blowing the whistle at the first untoward movement it spots. In response, an automated manager will halt the virtual machine's processing, register where it stood in the business logic and the data, and then erase the virtual machine. A new virtual machine will then be constructed, loaded with the application instructions and data, and will pick up where its predecessor left off.
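
A minimal sketch of that halt-checkpoint-erase-respawn cycle might look like the following. Everything in it is hypothetical: FakeHypervisor stands in for whatever API a real virtual machine manager would expose, and the anomaly test stands in for the watchman's pattern matching.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Checkpoint:
    """Where the VM stood: a point in the business logic plus its data."""
    step: int
    data: dict

class FakeHypervisor:
    """Hypothetical in-memory stand-in for a real VM manager's API."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.vms = {}                    # vm_id -> Checkpoint

    def spawn(self, state: Checkpoint) -> str:
        vm_id = f"vm-{next(self._ids)}"
        self.vms[vm_id] = state          # fresh VM loaded with app logic + data
        return vm_id

    def checkpoint(self, vm_id: str) -> Checkpoint:
        return self.vms[vm_id]           # record where the work stood

    def destroy(self, vm_id: str) -> None:
        del self.vms[vm_id]              # erase the suspect machine

def watchman_step(hv: FakeHypervisor, vm_id: str, is_anomalous) -> str:
    """On the first untoward movement: halt, checkpoint, erase, respawn."""
    if is_anomalous(vm_id):
        state = hv.checkpoint(vm_id)     # register where it was
        hv.destroy(vm_id)
        vm_id = hv.spawn(state)          # successor picks up where it left off
    return vm_id

if __name__ == "__main__":
    hv = FakeHypervisor()
    vm = hv.spawn(Checkpoint(step=0, data={}))
    vm = watchman_step(hv, vm, lambda _: True)   # simulate a detected intrusion
    print("running as", vm)                      # vm-2: a new, clean machine
```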

If the intruder is still out there, he may find a way to insinuate himself again, but the watchman will be ready. The more extreme advocates of security say that this process can be pushed to its logical conclusion: the virtual machine is arbitrarily stopped, killed, and deleted from the system every 30 minutes, whether it needs to be or not. A new one, spun up from a constantly checked master on a secure server, will be a known, clean entity. Such a practice would be so discouraging to a skilled hacker (who needs, say, 29.5 minutes to steal an ID, find a password, await authentication, and then work out a position from which to steal data) that it would amount to a level of defense in depth exceeding any devised before. Such a watchman is just starting to appear from start-up network security vendors; the hypervisor firewall with intrusion detection already exists as a leading-edge product. Only the periodic kill-off mechanism still needs to be built into virtual machine management.
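
The periodic kill-off could be layered onto the same hypothetical interface: recycle the machine on a fixed schedule whether or not anything looks wrong. The sketch below reuses the FakeHypervisor above; the 30-minute lifetime comes from the text, and the rest is assumption.

```python
import time

RECYCLE_INTERVAL = 30 * 60      # seconds; the arbitrary 30-minute lifetime

def recycle_forever(hv, vm_id: str):
    """Kill and respawn the VM on a fixed schedule, needed or not.

    A fresh copy spun up from a verified master is a known, clean entity;
    an intruder mid-attack loses his foothold at every cycle. Here spawn()
    is assumed to build each successor from the clean master image.
    """
    while True:
        time.sleep(RECYCLE_INTERVAL)
        state = hv.checkpoint(vm_id)    # carry the work forward...
        hv.destroy(vm_id)               # ...but not the machine itself
        vm_id = hv.spawn(state)
```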

As the desire for private clouds builds, the technology convergence that has produced cloud computing will gain new management tools and new security tools to perfect its workings. We are at the beginning of that stage, not its end. Guaranteeing the secure operation of virtual machines running in the private enterprise data center, and in the public cloud, will enable the two sites to coordinate their operations. And that's ultimately what the private cloud leads to: a federated operation of private and public sites that further enhances the economies of scale captured in cloud computing.

This article was excerpted from the book Management Strategies for the Cloud Revolution by Charles Babcock (McGraw-Hill, 2010).

