
2009's Hottest Tech Trends

Network Access Control: After the Shakeout

Network access control has been a hot topic for the past couple of years.

Epic standards battles pitted Cisco against Microsoft, each with its own terminology and approach. And who could forget the Trusted Computing Group, which acted as a wild card with its own architecture?

Then there was the horde of third-party vendors offering to handle a company's NAC needs if it didn't want to wait for Cisco and Microsoft to deliver on their promises.

Last year was a turning point for NAC, however. The standards battles appear to have been resolved, and everything looks like it's falling into place. Customers apparently decided to wait for Microsoft to deliver its NAC products - and that left many third-party vendors out in the cold. A lot of them went under, including Caymas Systems and Lockdown Networks.

And because Network Access Protection (NAP, Microsoft's version of NAC) comes with Vista and Windows Server 2008, deciding to go with Microsoft has become a no-brainer for many customers. NAP represents a clear choice, rather than a technology that requires extensive research, RFPs, product tests and evaluations, and so forth.

NAP even proved itself in a recent product evaluation Forrester Research performed to determine which NAC tools would solve real-world deployment problems. Microsoft came in first, followed by Cisco and Juniper Networks.

This year, the questions for customers will be: Where do we deploy NAC, and how many NAC features do we turn on? Most customers today are using NAC just to control guest access. That's important, but the technology can do more. On the pre-admission side, it can scan a user's device, determine whether it is clear of viruses, check that patches are up to date and quarantine the device if security conditions aren't met. On the post-admission side, it can make sure that a clean machine stays that way, and that users access only those parts of the network they are authorized to use.

These are important functions that every IT exec should be implementing.
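
For a concrete picture of how those pre-admission checks fit together, here is a minimal, purely illustrative Python sketch. The device attributes, policy threshold and function names are invented; a real NAC or NAP deployment gathers this posture data through an agent and enforces the decision at the switch, VPN gateway or DHCP server, not in application code.

```python
# Illustrative pre-admission posture check for a NAC policy decision.
# All attributes and thresholds here are hypothetical examples.

from dataclasses import dataclass

@dataclass
class DevicePosture:
    antivirus_running: bool
    signatures_current: bool
    patch_level: int          # installed patch baseline (hypothetical)

REQUIRED_PATCH_LEVEL = 42     # hypothetical policy value

def admission_decision(posture: DevicePosture) -> str:
    """Return 'admit' if the device meets policy, else 'quarantine'."""
    if not (posture.antivirus_running and posture.signatures_current):
        return "quarantine"   # remediate antivirus before granting access
    if posture.patch_level < REQUIRED_PATCH_LEVEL:
        return "quarantine"   # e.g., send to a patch-remediation VLAN
    return "admit"

if __name__ == "__main__":
    guest = DevicePosture(antivirus_running=True,
                          signatures_current=False,
                          patch_level=40)
    print(admission_decision(guest))   # -> quarantine
```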

10 Gigabit Ethernet: A Switch in Time

In 2001, when 10 Gigabit Ethernet switches were introduced, the average per-port cost was $39,000, according to IDC.

Today, a 10G Ethernet port costs less than $4,000, which makes 10G Ethernet switches affordable for the enterprise wiring closet or data center.

With ongoing data-center server consolidation, not to mention the needs of service providers and high-volume Web sites, standards groups and vendors are hard at work on 40 Gigabit Ethernet and even 100 Gigabit Ethernet. For now, however, 10G Ethernet is the industry standard, and customers are flocking to 10G Ethernet switches. Switch-based 10G Ethernet port shipments grew by 140% in 2007, Infonetics Research reports. Worldwide revenue for 10G Ethernet services and equipment will hit nearly $9.5 billion by year-end, a 30% increase from last year, the firm predicts.

If your Fast Ethernet boxes are becoming stressed, this might be the time to move to 10G Ethernet. Per-port prices are coming down and feature sets are going up. A recent Network World test of seven 10G Ethernet switches found these products offer not only powerful packet-pushing capabilities but also 802.1X authentication, enhanced multicast support, protection against denial-of-service attacks and IPv6 support. The test demonstrated that these switches have extensive management and security features, which are just as important as how many packets they can move per second.
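
To put the speed difference in perspective, here is a back-of-the-envelope calculation in Python. The 2TB nightly-backup workload is an invented example, not a figure from the test, and the numbers assume idealized line rate with no protocol overhead.

```python
# Rough comparison: how long moving 2 TB takes at different link speeds.
DATA_TB = 2
BITS = DATA_TB * 8 * 10**12   # terabytes -> bits (decimal units)

for name, gbps in [("Fast Ethernet (0.1Gbps)", 0.1),
                   ("Gigabit Ethernet (1Gbps)", 1),
                   ("10G Ethernet (10Gbps)", 10)]:
    seconds = BITS / (gbps * 10**9)
    print(f"{name}: {seconds / 3600:.1f} hours")
# -> roughly 44 hours, 4.4 hours and 0.4 hours respectively
```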

Virtualization: Beyond the Server Farm

By now, you've most likely implemented some level of x86 server virtualization. So, the question of the moment is this: Does data-center virtualization on x86 boxes represent the end of your virtualization efforts or just the beginning?

What about storage virtualization? What about desktop virtualization? What about application virtualization? What about virtualizing all your data-center hardware, including Unix boxes and mainframes?

Those are the key, long-term virtualization questions facing IT executives. Once you've started down the road to decoupling the underlying technology infrastructure from the services you're providing to the business, doesn't it make sense to extend that strategy across the enterprise?

If you're inclined to agree, the next logical step would be storage virtualization, because you're dealing with another technology residing in the data center. The advantages of creating a virtual storage pool include lower-cost data migration, easier storage-resource management, common replication services and the ability to maximize and extend your storage resources.
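
As a rough mental model of what a virtual storage pool does, consider the sketch below. The class, array names and placement policy are invented for illustration; real storage virtualization lives in arrays and appliances, not in application code like this.

```python
# Conceptual sketch of a virtual storage pool: logical volumes are carved
# out of capacity aggregated from heterogeneous arrays, and callers never
# see which physical array actually holds their data.

class VirtualStoragePool:
    def __init__(self):
        self.backends = {}        # array name -> free capacity in GB
        self.volumes = {}         # volume name -> (array, size_gb)

    def add_backend(self, name: str, capacity_gb: int) -> None:
        self.backends[name] = capacity_gb

    def provision(self, volume: str, size_gb: int) -> str:
        # Place the volume on whichever array has the most free space.
        array = max(self.backends, key=self.backends.get)
        if self.backends[array] < size_gb:
            raise RuntimeError("pool exhausted")
        self.backends[array] -= size_gb
        self.volumes[volume] = (array, size_gb)
        return array

pool = VirtualStoragePool()
pool.add_backend("legacy-san", 500)
pool.add_backend("new-array", 2000)
print(pool.provision("crm-data", 300))   # placement is the pool's problem
```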

Client virtualization, which comes in several forms, also offers real benefits. In the hosted virtual-desktop model, applications run on a server and users work on thin-client machines. This would be ideal, for example, in a call center.

In another version of desktop virtualization, a single physical machine is partitioned into multiple virtual machines. Here, separate business and personal zones could be created on mobile workers' laptops for security and compliance.

Or multiple operating systems could be run on a single PC. This scenario would apply to engineers, for example, who might run a specific Unix- or Linux-based technical application but use Windows for e-mail and other basic applications.
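
As a small illustration of the multiple-OS scenario, the sketch below uses the libvirt Python bindings to list the guests defined on a workstation and boot one of them. The connection URI and the "linux-dev" VM name are assumptions, and nothing here is tied to any particular vendor's desktop-virtualization product.

```python
# Assumes libvirt-python is installed and a guest named "linux-dev" has
# already been defined on the local hypervisor.

import libvirt

conn = libvirt.open("qemu:///system")    # connect to the local hypervisor

for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name()}: {state}")

dev_vm = conn.lookupByName("linux-dev")  # the engineer's Unix/Linux guest
if not dev_vm.isActive():
    dev_vm.create()                      # boot it alongside the host OS

conn.close()
```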

Most companies today are in the first stage of virtualization, says Gartner analyst George Weiss. This means they're consolidating and virtualizing servers as cost-cutting measures, typically with a single vendor.

The next phase would be using virtualization technology for the dynamic allocation of resources across servers. And the final phase, which won't occur for several more years, is heterogeneous virtualization, the ability to move workloads dynamically across hardware platforms.
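
The dynamic-allocation phase can be pictured with a short sketch: live-migrating a guest from a busy host to an idle one using the libvirt Python bindings. The host and VM names are invented, and this only illustrates moving workloads between like hypervisors, not the heterogeneous, cross-platform case the final phase describes.

```python
# Live-migrate a running guest from one host to another (names invented).

import libvirt

src = libvirt.open("qemu+ssh://busy-host/system")
dst = libvirt.open("qemu+ssh://idle-host/system")

dom = src.lookupByName("web-frontend")
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```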
