Guide to Storage Resource Management

How to save millions through storage optimization

By Michele Hope

While it's undoubtedly invaluable, technology often offers only part of the solution to storage optimization. "If you don't know how to drive and you're driving a broken car, buying a new car will not fix your problem," says Ashish Nadkarni, principal consultant with GlassHouse Technologies, an enterprise-storage consulting firm.

Although many enterprises have undertaken storage-tiering and data-classification initiatives, pinpointing exactly how much money they've saved as a result is difficult, Nadkarni says. Cost-cutting efforts can also be undermined when a storage array or RAID level is not matched optimally to the application, he says.

Mark Diamond, CEO at storage consulting firm Contoural, puts the issue another way.

This isn't about buying new stuff to optimize your storage, he says. Instead, it's about determining whether the data you've created is stored in the right place. This discussion goes beyond the basic concept of using inexpensive disk to store data, and delves into how the disk is configured, especially when it comes to replication and mirroring.

"We typically see that 60% of the data is overprotected and overspent, while 10% of the data is underprotected -- and therefore not in compliance with SLAs [service-level agreements]," Diamond says. "Often, we can dramatically change the cost structure of how customers store data and their SLAs, using the same disk but just configuring it differently for each class of data."

One case in point is a recent analysis Contoural performed for a large manufacturer that used three storage tiers. After assessing the different types of data and their need for replication, the Contoural team recommended a more detailed, six-tiered storage environment. The company's estimated savings are pegged at more than $8 million over the next three years. This includes the ability to defer further Tier 1 storage-hardware acquisitions for as long as two years.

Optimization technologies, such as virtualization and deduplication, are excellent and probably can save an organization thousands of dollars, Diamond says. But if you take the bigger picture of optimizing, not just storage but the data residing on it, "you can save millions," he says.

Determining the value of investing in SRM software

By Mike Karp

I have a heavy prejudice in favor of storage resource management in general, and standards-based (which is to say, SMI-S) SRM in particular. If you can't discover it, you can't monitor it, and if you can't monitor what's out there… well, good luck with the management.

As is the case with all tools that contribute to automating IT, the value of an investment in SRM software increases with the complexity of the systems the software helps to manage. Highly complex environments benefit greatly from automation; less complex environments extract proportionately less benefit.

It doesn't take deep insight to understand why it works this way. Consider such fundamental "gotta haves" as:

• Asset discovery
• Asset management
• Capacity management
• Chargeback
• Configuration management
• Migration
• Event management and alerts
• Performance management
• Policy management
• Quota management
• Removable media management

Every time you automate one of these, you decrease the chance of operator-induced error. Additionally, things happen more quickly and more accurately, and the results are more repeatable.

Complex IT situations such as grids, on-demand or utility computing - in fact, any kind of dynamic system - invariably win when automation is introduced.

A problem lies within one of my earlier statements, however: "less complex environments extract proportionately less benefit." While I'm convinced this is true, no one can tell us what that proportion might be. I have my own suspicions, of course, as to what the "value decay" might be for an SRM investment. I think the value scales or decays more-or-less in accordance with what our networking brethren know as Metcalfe's Law: the value of a network grows in proportion to the square of the number of computers connected to it.

Replace "power of the network" with "value of SRM" and you may have a useful metric for defining how much value SRM will have at your site.

Whether or not you like my little law, you have to admit that it does raise at least one question worthy of note, namely: if value diminishes as environments get smaller, how small does an IT installation have to be before SRM is simply not worth the effort?

The answer to this question has, I think, little to do with mathematics or physics, and a great deal to do with human vision: if you can see all the lights, dials and whirligigs on all your machines from where you sit, SRM is likely to provide poor payback. In other words, if you have only a few servers that are supported by a simple infrastructure, you are probably better off spending your money on almost anything other than SRM. On the other hand, if you can't see the front bezels and indicator lights of all your assets from where you sit, SRM may have something to offer.

At least as far as storage is concerned, this guidance also offers a fairly accurate definition of where to draw the line between the "S" and the "M" in SMB. If the vendors understood this, they would waste less of their time and yours with misdirected marketing campaigns.

Virtualization drives advances in storage management

By Deni Connor

The convergence of server and storage management is slowly taking place as enterprises look for more automated ways of managing their data center assets. Spurring the trend toward convergence - which remains somewhat hampered by a lack of available tools - is virtualization technology.

"If you don't believe servers and storage have converged, all you need to do is take a look at server and storage virtualization," says Greg Schulz, senior analyst for Storage IO. "Let's go way back - originally servers and storage were managed together, then they were separated, and now they are being put back together again."

Virtualization technology lets administrators divide physical servers into logical virtual machines that can support different operating systems and applications, and pool physical storage devices into logical volumes.

Businesses are looking to virtualization so they can consolidate server and storage resources; run multiple workloads on a single machine more efficiently; and dynamically provision resources as application and business needs shift.

Server virtualization software from companies such as VMware, SWsoft, Virtual Iron and XenSource has been adopted by leagues of users; according to IDC, more than three-quarters of companies with 500 or more employees use virtual servers, and 45% of all new servers purchased in 2006 were virtualized.

However, storage virtualization deployments are less mainstream. IDC reports 49% of companies are evaluating storage virtualization, while 34% have implemented virtualization software or hardware. Enterprises that want to pool resources from multiple heterogeneous arrays are finding adoption difficult because mature software for the job is still scarce.

When businesses deploy software for creating and managing virtual servers, the virtual machines typically get storage capacity from shared storage networks. Most link to Fibre Channel and IP storage-area networks (SAN) or network-attached storage devices - not direct server-attached storage. According to host bus adapter vendor Emulex, at least 70% of VMware ESX Server users get their storage from SANs.

Managing and provisioning compute power for these virtual and dynamic environments is often a manual process. Software such as VMware's VMotion and VirtualCenter Distributed Resource Scheduler can move running virtual machines among physical servers, but it falls short of adequately managing storage resources for capacity-hungry virtualized applications.

"VirtualCenter's Distributed Resource Scheduler takes care of monitoring utilization and reallocating [compute] resources as needed to provide an as-fast-as-possible use scenario," says Eric Kuzmack, IT architect at Gannett in Silver Springs, Md., which has dozens of servers partitioned into hundreds of virtual machines with VMware ESX Server.

To move storage resources from one virtual machine to another, Kuzmack uses VMotion and Distributed Resource Scheduler. But allocating additional storage capacity requires manual intervention by Kuzmack, who has to use multiple tools: one for monitoring storage capacity, another for monitoring and reporting on the links between application performance and storage resources, and a third for provisioning storage.

"Right now the VMware VirtualCenter tools don't have insight into the allocation of free space in storage," Kuzmack says. "We monitor that disk space with operating system tools. Provisioning storage is done manually when we create a new virtual server farm. As long as I have enough storage provisioned for the server farm, I don't have to do anything on the storage system."

The matter of aligning compute and storage resources is complicated further by the number of IT staff it takes to make changes to the storage and server infrastructure.

For example, adding compute and storage resources to a business-critical database involves coordination of several people: a database administrator who makes the request for more capacity and compute power; operations staff who install the additional servers; a storage administrator who approves the allocation and provisions the storage; and a server administrator who provisions the server with the correct configuration.

"When you want to provision a set of resources for an application, you need to not just provision storage, you need to provision servers, networking [connections] and do all that in conjunction with each other," says Patrick Eitenbickler, director of marketing for HP StorageWorks. "All these [forms of management] need to work in concert with each other. HP is going to build those linkages and bridges between server, storage, virtualization and automation software to avoid that."

Some storage vendors have adopted the concept of thin provisioning to solve customers' need for dynamically allocated storage.

With thin provisioning, a single pool of storage large enough to handle the growth requirements of applications is set aside. The capacity allocated to applications can exceed 100% of the pool's physical capacity, because no application consumes its full allocation at any one time; unwritten capacity remains in the pool for other applications to draw on.

The use of thin provisioning eliminates over-provisioning of storage, in which storage capacity is pre-allocated to applications but never used. Among the companies using thin provisioning in their products are 3Par, Network Appliance, Compellent, LeftHand Networks, DataCore and EqualLogic.
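
As a rough sketch of the idea, the toy model below (with invented capacity figures) shows how promised allocations can exceed the physical pool while consumed capacity stays well under it.

```python
# A toy model of thin provisioning with hypothetical numbers: applications
# are *allocated* more than the physical pool holds, but capacity is drawn
# from the pool only as data is actually written.

POOL_TB = 100.0  # physical capacity in the shared pool

allocations = {"crm": 40.0, "erp": 50.0, "mail": 30.0}  # promised TB
consumed = {"crm": 12.0, "erp": 20.0, "mail": 8.0}      # actually written TB

allocated = sum(allocations.values())
used = sum(consumed.values())

print(f"Allocated: {allocated} TB ({allocated / POOL_TB:.0%} of pool)")  # >100%
print(f"Consumed:  {used} TB; {POOL_TB - used} TB still free in the pool")
```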

Gary Berger, vice president of technology solutions for Banc of America Securities Prime Brokerage in New York City, uses 3Par's thin provisioning to allocate storage for his IBM BladeCenter server environment.

"In early 2005, we had a very fragmented environment [with lots of silos] everywhere," Berger says. "We spent most of our time catching up and sending disks around the country to recreate disk allocations and whole allocations, which provided the difference between what was useable and what was allocated. Virtualization helps us distribute workloads across many different physical resources."

Berger has two mirrored data centers with a consolidated SAN infrastructure. He is using 3Par's "chunklet" technology, which breaks disk allocation into 256MB groups to distribute workloads across many different disks in his system.

"When we need to do capacity upgrades, we can simply add new disk magazines to the system and rebalance our allocation across those disks again to get more efficiencies," Berger says.

Berger also uses the 3Par software and hardware to boot his servers from the SAN.

"Being able to export a [group of disk volumes] to a blade server gives us a tremendous amount of capability because we can easily recover from simple hardware failures," he says.

How does SRM work?

By Deni Connor

Storage resource management (SRM) software collects information on the heterogeneous resources - operating systems, host computers and storage-area network (SAN) devices such as Fibre Channel switches and storage arrays - on shared storage networks.

Information is collected to help increase utilization, to help with storage provisioning and to improve performance of the SAN, IP SAN or network-attached storage devices.

According to Gartner, SRM packages should contain:

* A repository for the resources that are discovered;
* The ability to plan capacity and manage it;
* Performance, event and quota management;
* SAN design, provisioning and the automation of workflow;
* Root-cause analysis, change and configuration management; and
* Reporting and chargeback.

Once storage resources are discovered, they need to be stored in a database so that the state of the SAN can be assessed and the data can feed historical and future trend analysis. Data is stored in the repository by size, creation date and owner if it is a file, and by capacity and performance if it is a storage system.
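
As a rough illustration of what such a repository might record, here is a minimal sketch; the field names and record types are assumptions, not any vendor's actual schema.

```python
# Minimal sketch of repository records for discovered resources.
from dataclasses import dataclass
from datetime import date

@dataclass
class FileRecord:            # files tracked by size, creation date and owner
    path: str
    size_bytes: int
    created: date
    owner: str

@dataclass
class ArrayRecord:           # storage systems tracked by capacity and performance
    name: str
    capacity_gb: int
    iops: int                # observed performance metric

repository = [
    FileRecord("/data/orders.db", 42_000_000, date(2007, 1, 15), "dba"),
    ArrayRecord("array01", 10_240, 25_000),
]
```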

Capacity management includes the ability to identify use of resources and to reclaim unused capacity if necessary. The software should also let the user determine when it is necessary to acquire more disk space or improve performance. It should predict storage utilization by business unit, application, user, server or department.
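
A back-of-the-envelope sketch of that kind of prediction: fit a straight line to monthly usage samples and extrapolate to the point where capacity runs out. The usage figures are invented for illustration.

```python
# Toy capacity forecast: least-squares slope over monthly usage samples,
# then extrapolate from the latest sample to exhaustion of the capacity.

def months_until_full(usage_gb: list[float], capacity_gb: float) -> float:
    n = len(usage_gb)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(usage_gb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_gb)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return (capacity_gb - usage_gb[-1]) / slope if slope > 0 else float("inf")

print(months_until_full([500, 540, 585, 650, 700], capacity_gb=1000))  # ~5.9
```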

In managing performance, SRM software should look at the relationships between applications, servers, host bus adapters, switches and storage arrays, and let users monitor and diagnose performance problems and bottlenecks caused by different resources in the SAN or by configuration changes.
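
One simple way to picture those relationships is as a path from each application through its server, HBA and switch to an array; the sketch below (with hypothetical names) shows how a congested component can be traced back to the applications it affects.

```python
# Toy topology map: application -> I/O path (server, HBA, switch, array).

paths = {
    "billing": ["server01", "hba01", "switch-a", "array01"],
    "mail":    ["server02", "hba02", "switch-a", "array01"],
    "reports": ["server03", "hba03", "switch-b", "array02"],
}

def affected_by(component: str) -> list[str]:
    """Which applications share the congested component?"""
    return [app for app, path in paths.items() if component in path]

print(affected_by("switch-a"))   # -> ['billing', 'mail']
```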

Quota management lets the IT administrator set disk limits by user, department, group or business unit and monitor disks for out-of-space conditions. Within quota management, rules can be created that restrict the types of files users can save.
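
A minimal sketch of how such quota and file-type rules might be evaluated; the limits and the blocked-extension list are hypothetical.

```python
# Toy quota check: per-group disk limits plus a file-type rule.

QUOTA_GB = {"engineering": 500, "marketing": 200}
BLOCKED_EXTENSIONS = {".mp3", ".avi"}   # file-type rule

def may_write(group: str, used_gb: float, size_gb: float, filename: str) -> bool:
    if any(filename.endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return False                     # rejected by file-type rule
    return used_gb + size_gb <= QUOTA_GB.get(group, 0)

print(may_write("marketing", used_gb=195.0, size_gb=3.0, filename="q3.ppt"))   # True
print(may_write("marketing", used_gb=195.0, size_gb=3.0, filename="demo.avi")) # False
```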

Event management is the recognition of triggers or alerts that may signal out-of-space conditions or performance problems. It is important that the event management function integrate with systems or network management packages.
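
In skeletal form, event management reduces to checks like the one below: compare a metric against a threshold and hand any alert to the integrated management hook. The threshold and names are hypothetical.

```python
# Toy threshold-driven alerting; `notify` stands in for whatever
# systems- or network-management integration receives the event.

ALERT_THRESHOLD = 0.90   # alert when a volume is 90% full

def check(volume: str, used_gb: float, capacity_gb: float, notify) -> None:
    utilization = used_gb / capacity_gb
    if utilization >= ALERT_THRESHOLD:
        notify(f"ALERT {volume}: {utilization:.0%} full (out-of-space risk)")

check("vol7", used_gb=465, capacity_gb=500, notify=print)
```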

SRM packages should also include tools that make it easier for the IT administrator to provision more storage when the quota management and capacity management pieces indicate it is necessary. Provisioning includes the ability to assign storage volumes to host computers and applications, and to change or delete those assignments. SRM should also include tools that allow SAN design verification. Automating the workflow of commonly occurring operations is also an important feature of an SRM package because it replaces error-prone manual processes with repeatable, automated ones.
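
A skeletal sketch of one automated provisioning step: verify that a volume has room, record the host-to-volume assignment, and fail loudly otherwise. The names and the simplified capacity check are assumptions, not any product's workflow.

```python
# Toy provisioning workflow: pick a volume with enough free capacity
# and record the host-to-volume assignment.

volumes = {"vol1": 800, "vol2": 120}          # free GB per volume
assignments: dict[str, str] = {}              # host -> volume

def provision(host: str, needed_gb: int) -> str:
    for vol, free in volumes.items():
        if free >= needed_gb:                 # design-verification step, simplified
            volumes[vol] -= needed_gb
            assignments[host] = vol
            return vol
    raise RuntimeError("no volume has enough free capacity")

print(provision("dbserver01", 200))           # -> 'vol1'
```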

Some packages - like those from Onaro and Akorri, which focus on performance and change management - offer only a subset of SRM functions. Others such as Monosphere and TeraCloud focus on capacity planning. Change management software allows IT to manage changes to the SAN, identify unplanned changes and create alerts when changes are made. This capability, according to Gartner, may include the ability to take snapshots before changes are made. If errors occur as the result of the changes, the state of the system can be rolled back.

Similarly, root-cause analysis allows IT to identify the source of a problem and avoid haphazard troubleshooting.

Finally, an SRM package must have three other capabilities - reporting, chargeback and a central management console. Reporting is necessary so the IT administrator and upper management can assess the effectiveness of the SAN. Chargeback is a recommended function in that it allows IT to charge departments, business units or groups of users for their use of storage resources.
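
As a toy illustration of chargeback, the sketch below bills each department for the gigabytes it consumes at a flat monthly rate; the rate and usage figures are invented.

```python
# Toy chargeback report: flat per-GB monthly rate applied to each department.

RATE_PER_GB_MONTH = 0.25   # dollars

usage_gb = {"finance": 1200, "engineering": 3400, "marketing": 600}

for dept, gb in usage_gb.items():
    print(f"{dept:12} {gb:>5} GB -> ${gb * RATE_PER_GB_MONTH:,.2f}/month")
```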

In addition, each SRM package should have the ability to manage storage resources from a Web-based management console, scalability to adapt to large or small environments and integration with systems and network management packages.

SRM packages should span a variety of operating systems - Windows, Linux and Unix servers - as well as a wide variety of storage systems and applications such as messaging and databases.
