Scaling and Provisioning Virtual Servers
While virtual servers have proven a boon in the data center, virtualization alone doesn't address the challenge of incrementally adding server capacity and automatically distributing load across those servers. As a result, the responsiveness and availability of a heavily used Web application, such as Microsoft SharePoint, can deteriorate when the virtual machine it runs on runs out of capacity. Next-generation application delivery controllers (ADCs) not only address this challenge; they interoperate with virtualization tools to provide greater control and even make it possible to deploy server resources automatically based on real-time demand.
Virtualization does not change the reality that a given physical server has a fixed performance capacity. Because virtual machines (VMs) share resources, a spike in any one virtual server's utilization can adversely affect every other virtual server running on the same hardware. For example, if a virtual server running a database application experiences a spike in queries, any other virtual server on the same hardware may be unable to deliver adequate performance because of the increased processor load.
Perhaps the most frequently misunderstood aspect of virtualization with respect to quality of service management is the hypervisor's lack of application awareness. While virtualization management tools are able to monitor and control the operating systems they host, the same is not true for the applications running on those guest operating systems. Virtualization environments are blind to failures or bottlenecks at the application layer, which means that, although virtualization infrastructure may consider a guest machine to be healthy according to operating system metrics, the applications running on that server may be unresponsive.
Scaling an application without having to change it requires server load balancing: an advanced ADC intelligently distributes end-user requests across multiple servers, so that from the end user's perspective there is only one server.
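The idea can be sketched in a few lines: clients connect to a single virtual address, and the balancer picks a real server for each request. The server names and the least-connections policy below are illustrative assumptions, not any particular vendor's implementation.

```python
class LoadBalancer:
    """Minimal sketch of the server load balancing an ADC performs."""

    def __init__(self, servers):
        # active connection count per real server
        self.connections = {server: 0 for server in servers}

    def pick_server(self):
        # least-connections policy: choose the real server
        # currently handling the fewest requests
        return min(self.connections, key=self.connections.get)

    def handle_request(self):
        server = self.pick_server()
        self.connections[server] += 1
        return server

    def finish_request(self, server):
        self.connections[server] -= 1


lb = LoadBalancer(["web1", "web2", "web3"])
first = lb.handle_request()   # all servers idle, so one is chosen
second = lb.handle_request()  # a different idle server is chosen
```

Real ADCs support several such policies (round-robin, weighted, response-time based); least-connections is shown here only because it is the simplest to express.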
Advanced ADCs with virtualization-aware management capabilities can spin up and shut down virtual machines automatically. If load increases, additional servers can be brought online; when it subsides, those servers can be turned off, freeing resources for others. The virtualization-aware ADC communicates with the server virtualization platform, such as VMware's vSphere, to monitor VM resource utilization, power up VMs when application load requires additional resources, power down unneeded VM instances during periods of low utilization, and power physical machines on and off to save energy.
IT administrators can ensure there will always be optimum use of hardware resources by intelligently distributing traffic load among multiple, diverse server resources. Hot spots are eliminated by effectively managing the distribution of work across compute resources, and the need to overprovision to handle load spikes is removed. The fiscal impact is in capital expense (fewer servers), and operational expenses (reduced power, cooling, management and administration).
The virtualization-aware ADC communicates through the hypervisor API to monitor VM resource utilization. This provides the ADC with real-time information about the virtual server instances, such as memory and CPU utilization. Combined with the ADC's application awareness, the ADC can load balance virtualized applications.
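A rough sketch of that monitoring loop follows. The `HypervisorAPI` stand-in and its `get_stats` method are hypothetical; real platforms such as vSphere expose comparable per-VM metrics through their own SDKs.

```python
OVERLOAD_CPU = 0.85  # assumed utilization ceiling for receiving traffic


def eligible_servers(hypervisor, vm_names):
    """Return the VMs healthy enough to receive new requests."""
    eligible = []
    for name in vm_names:
        stats = hypervisor.get_stats(name)  # e.g. {"cpu": 0.40}
        if stats["cpu"] < OVERLOAD_CPU:
            eligible.append(name)
    return eligible


class FakeHypervisor:
    """Stand-in for a real hypervisor SDK, for demonstration only."""

    def __init__(self, stats):
        self.stats = stats

    def get_stats(self, name):
        return self.stats[name]


hv = FakeHypervisor({"vm1": {"cpu": 0.30}, "vm2": {"cpu": 0.95}})
healthy = eligible_servers(hv, ["vm1", "vm2"])  # only vm1 qualifies
```

Combining this real-time view with the ADC's own application-layer health checks is what lets it avoid servers the hypervisor considers healthy but whose applications are struggling.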
The ADC directs user requests to the best available server by shifting traffic loads away from slow-to-respond servers and by routing around down servers, highly utilized VMs or crashed applications. The availability, scalability and performance of the virtualized server environment can be further improved if the ADC can proactively modify the virtual environment based on the needs of the applications and users. This can be accomplished via an intelligent ADC control interface.
A virtualization-aware ADC control interface enables the administrator to create threshold conditions related to application performance and server responses. By combining these boundary conditions with two-way communication through the hypervisor API, the ADC can trigger the hypervisor to respond automatically to application-centric events, such as load spikes.
For example, consider the hosting environment for a Web site that sells flowers. As Mother's Day approaches, traffic volume increases significantly, and the site needs more VM resources than are normally provisioned for it. Load balancing alone will not relieve the overloaded servers.
An intelligent control that has been set up to recognize the overload condition will trigger automatically and, via the intelligent platform management interface (IPMI), physically turn on additional real servers. The intelligent control then tells the hypervisor to spin up additional virtual server resources, which the ADC can then load balance to handle the load spike. This on-demand provisioning of additional VMs provides high availability and improved application performance to handle the additional load.
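The trigger flow just described can be sketched as follows. The threshold value, host and VM names, and the `ipmi`/`hypervisor` objects are all illustrative assumptions standing in for real management interfaces (IPMI for physical power control, the hypervisor SDK for VM lifecycle).

```python
SPIKE_THRESHOLD = 1000  # assumed requests/sec that defines an overload


def handle_load_spike(current_rps, ipmi, hypervisor, pool):
    """Provision extra capacity when request rate exceeds the threshold."""
    if current_rps <= SPIKE_THRESHOLD:
        return pool  # no action needed
    ipmi.power_on("standby-host-1")            # wake a physical server via IPMI
    new_vm = hypervisor.start_vm("web-extra")  # spin up a VM on that host
    return pool + [new_vm]                     # ADC adds it to the balancing pool


class FakeIPMI:
    """Stand-in for an IPMI power-control interface."""

    def __init__(self):
        self.powered = []

    def power_on(self, host):
        self.powered.append(host)


class FakeHypervisor:
    """Stand-in for a hypervisor SDK."""

    def start_vm(self, name):
        return name


pool = handle_load_spike(2500, FakeIPMI(), FakeHypervisor(), ["web1", "web2"])
```

In a production deployment the trigger would also apply hysteresis or a cool-down period so that a brief spike does not cause servers to flap on and off.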
To achieve greater energy efficiency, the intelligent control mechanism can be used to set up triggers for the reverse sequence: when a server falls below a certain usage threshold, stop sending it any new traffic; when its load reaches zero, tell the hypervisor to migrate and consolidate its VMs onto other hardware; and finally, power the server down until it is needed once again.
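That drain-and-consolidate sequence amounts to a small state machine. The threshold and action names below are illustrative assumptions, not a real management API.

```python
IDLE_THRESHOLD = 0.10  # assumed utilization below which a server is drained


def drain_step(server, utilization, active_connections, actions):
    """Advance one step of the drain / consolidate / power-down sequence."""
    if utilization >= IDLE_THRESHOLD:
        return  # server is busy enough; leave it in rotation
    actions.append(("quiesce", server))  # stop sending it new traffic
    if active_connections == 0:
        actions.append(("migrate_vms", server))  # consolidate VMs elsewhere
        actions.append(("power_off", server))    # sleep until needed again


log = []
drain_step("host3", 0.05, 0, log)  # idle and fully drained: all three steps fire
```

Note the ordering matters: traffic must be quiesced and existing connections drained before VMs are migrated, or in-flight requests would be dropped.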
The next step in identifying opportunities to further reduce servers, and reduce operational costs, is to identify what tasks hardware can do more efficiently than software.
Compression and SSL encryption are requirements for many applications. Mobile users, connecting via high-latency networks, benefit from compressed data delivery. Transmitting any kind of sensitive information over insecure networks such as the Internet requires deploying SSL and HTTPS encryption. Both compression and encryption place a heavy burden on server CPUs, whether physical or virtual.
Removing load associated with these requirements is simple and transparent when leveraging dedicated hardware within advanced ADCs. The ADC identifies when it's necessary to provide these capabilities, and then uses specially designed high-performance hardware to do the work.
The business impact of using hardware to offload servers is significant. A typical server can manage hundreds of SSL transactions each second. By comparison, an ADC with hardware-based SSL acceleration can perform 14,000 SSL transactions/sec and offload the requirement to perform any encryption from the servers behind it. The number of servers necessary to support application users can be reduced, as the servers no longer have to process SSL security and encryption.
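The arithmetic behind that claim is worth making explicit. The per-server figure of 500 transactions/sec below is an assumption standing in for "hundreds"; the 14,000 figure comes from the text, and the peak demand is hypothetical.

```python
import math

server_tps = 500      # assumed SSL transactions/sec a typical server can handle
adc_tps = 14_000      # hardware-accelerated ADC throughput (figure from the text)
required_tps = 10_000  # hypothetical peak SSL demand for the application

# Servers needed just to terminate SSL without an ADC:
servers_for_ssl = math.ceil(required_tps / server_tps)  # 20 servers

# A single hardware-accelerated ADC absorbs that entire load:
adcs_needed = math.ceil(required_tps / adc_tps)  # 1 ADC
```

Under these assumptions, offloading SSL to the ADC frees the equivalent of 20 servers' worth of encryption work, which is where the capital and operational savings come from.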
A critical component to achieving the goals of virtualization is ensuring that the distribution of work is balanced across many servers, each possibly having different capacity. Virtualization-aware ADCs provide high application availability and load balancing for virtualized data center environments. They enable IT administrators to leverage new and existing servers to make the most cost-efficient and effective traffic distribution possible.
Virtualization and application delivery technologies are here to stay and together can help transform the economics of the data center for years to come.
Coyote Point Systems Inc. is a leader in application delivery, acceleration and load balancing solutions that enable IT personnel to have greater control over their Web and application servers. Coyote Point's Equalizer, Envoy and VLB products provide the industry's foremost combination of performance, affordability and ease of use, offering 24/7 server high-availability, optimized server and WAN performance, flexible scalability and secure application and network access.
Read more about data centers in Network World's Data Center section.