Guide to Enterprise Wireless LAN Systems

Wireless networks: The burning questions

What impact will 802.11n have? Which security threats are scariest? What of wireless VoIP?

By John Cox, Network World, 06/11/07

Wireless networks might be mainstream across enterprise networks, but that doesn't mean they're no-brainers. Here, we've raised and attempted to answer some of the thornier questions you might still be dealing with. 

What impact will 802.11n have?

A surprising number of wireless LAN vendors have recently announced enterprise access points based on the draft IEEE 802.11n standard, promising throughput of 100Mbps to 200Mbps per frequency band, or from three to six times that of today's 11g and 11a nets.

Whether network managers opt for the draft 11n products, certified interoperable by the Wi-Fi Alliance, or wait for the final IEEE ratification in late 2008 or early 2009, they could face any of these four issues: overloading part of the wired infrastructure; overloading existing, older wireless LAN switches; forcing an upgrade to higher-powered Power-over-Ethernet; and repositioning and rewiring some number of existing wireless access points.

Most of the new access points will come with one or even two Gigabit Ethernet ports. "We're mostly '100 meg' to our buildings," says Michael Dickson, network analyst at University of Massachusetts at Amherst. "[For 11n,] we'll need gigabit switches in the closet with 10-gigabit uplinks. That's a definite cost, almost a necessary cost for 11n."
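The back-of-the-envelope math behind that uplink worry can be sketched in a few lines of Python; the per-AP throughput figure here is an assumption for illustration, not a vendor measurement:

```python
def uplink_saturated(num_aps, per_ap_mbps, uplink_mbps):
    """True if the aggregate worst-case AP throughput exceeds the uplink."""
    return num_aps * per_ap_mbps > uplink_mbps

# One 11n AP pushing ~150Mbps of real throughput already exceeds a
# 100Mbps Fast Ethernet uplink; a Gigabit uplink carries several APs.
print(uplink_saturated(1, 150, 100))    # True: one AP swamps Fast Ethernet
print(uplink_saturated(6, 150, 1000))   # False: six APs fit in a Gigabit uplink
```

The same arithmetic, rolled up across a closet of access switches, is what drives the 10-gigabit uplink requirement Dickson describes.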

"11n adds an incentive to go to 'gigE' [in the wired infrastructure]," says Craig Mathias, principal with Farpoint Group.

A related cable-plant issue, given the capacity of 11n, is whether to upgrade the Ethernet wall jacks at all, a decision that hinges on whether the wireless infrastructure becomes the principal means of network access.

If existing wireless LAN controllers lack the network capacity, processing power and memory to handle the increased traffic, they'll have to be replaced, especially if the vendor has a purely centralized architecture in which every packet runs from each access point to the controller. Vendors have been upgrading their controllers over the past year with 11n in mind, sometimes also offloading the packet-switching functions to the access points, creating a distributed data plane.

"With this kind of distributed data plane, there's no bottleneck at the controller," says Mathias. "If you have Meru or Extricom, you have centralized data and control planes. But if you design the box to handle whatever is thrown at it, it's not a problem."

Benchmarking wireless performance to verify such things as workloads and traffic conditions is likely to become much more important for 11n nets. To do this, enterprises or systems integrators will use complex performance-testing tools, such as those from VeriWave and Azimuth Systems, which previously had been used mainly by radio chip makers and equipment manufacturers. "This will be a big thing down the road," Mathias predicts.

The Power over Ethernet (PoE) issue may catch some users by surprise. "The PoE infrastructure may have its upper limits tested by 11n deployments [that are] used to their maximum capabilities," says Chris Silva, analyst at Forrester Research.

PoE lets you run just one cable between switch and access point, instead of two, potentially a big cost saving. But the 11n access points draw more electricity than the 15.4 watts maximum provided by power injectors based on the IEEE 802.3af standard. That will at least double with a new standard, 802.3at, now being finalized. At least one vendor, Trapeze, has created new code that can let its just-announced 11n access point make use of existing PoE injectors, but there are tradeoffs in terms of performance.
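The power-budget check is simple arithmetic, sketched below with the 15.4-watt 802.3af figure from the text; the draft-802.3at per-port figure and the AP's draw are assumptions, since 802.3at was not final:

```python
AF_WATTS = 15.4   # IEEE 802.3af maximum delivered per port
AT_WATTS = 30.0   # rough working figure for draft 802.3at (not yet final)

def fits_power_budget(ap_draw_watts, per_port_watts):
    """Can a single PoE port power an access point with this draw?"""
    return ap_draw_watts <= per_port_watts

# A hypothetical dual-radio 11n AP drawing 18W:
print(fits_power_budget(18.0, AF_WATTS))  # False: more than .af can supply
print(fits_power_budget(18.0, AT_WATTS))  # True under the draft .at figure
```

This is the gap that workarounds like Trapeze's reduced-power mode are meant to bridge until 802.3at gear ships.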

"The promise of 11n is more than simply going faster," says Phil Belanger, managing director for Novarum. "The increased range of 11n will make it more practical to deploy large systems using the 5-GHz band, which has many more channels than the 2.4-GHz and has not been used very much to date. That, in turn, will enable much higher capacity wireless LANs. For many enterprises, a wireless network that delivers hundreds of megabits of capacity everywhere will be good enough to be the only network."

Which security threats are scariest?

We've identified three, but we'll treat one of them (denial of service) as its own question.

The other two threats are emblematic of two very different human dynamics: one springs from the increasing cunning of attackers, the other from the continuing ignorance of users and even IT professionals about the nature of wireless threats.

In 2006, researchers identified problems with wireless interface device drivers that could be exploited in various ways by attackers. Drivers function at the level of the operating system kernel, where malicious code potentially has access to all parts of the system.

Typically, these driver vulnerabilities involve manipulating the lengths of specific pieces of information contained in the wireless management frames, causing a buffer overflow where a malicious payload can be executed, according to Andrew Lockhart, security analyst with Network Chemistry.

"A driver will process these data elements whether or not [the adapter is] associated with an access point. So the combination of simply having a powered-on wireless card with a vulnerable driver can leave a user open to attack," he says.
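Python can't overflow a buffer the way a kernel driver can, but a short sketch shows the kind of length-field validation Lockhart is describing. The layout follows the standard 802.11 information-element format (1-byte ID, 1-byte length, data); the example bytes are invented:

```python
def parse_information_elements(frame_body: bytes):
    """Parse the information elements (IEs) of an 802.11 management frame.

    Each IE is a 1-byte ID, a 1-byte length, then `length` bytes of data.
    A driver that trusts the attacker-controlled length field and copies
    that many bytes into a fixed-size buffer can overflow; the check here
    rejects a length that runs past the end of the frame.
    """
    elements = []
    i = 0
    while i + 2 <= len(frame_body):
        ie_id, ie_len = frame_body[i], frame_body[i + 1]
        if i + 2 + ie_len > len(frame_body):
            raise ValueError("IE length field exceeds frame body")
        elements.append((ie_id, frame_body[i + 2 : i + 2 + ie_len]))
        i += 2 + ie_len
    return elements

# A well-formed SSID element (ID 0, length 4, "lab1"):
print(parse_information_elements(bytes([0, 4]) + b"lab1"))  # [(0, b'lab1')]
```

The vulnerable drivers of 2006 skipped exactly this kind of bounds check before copying IE data.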

The obvious solution is to replace the vulnerable drivers. But that is an ad hoc process. "In the Windows world, most wireless drivers are part of a third-party software package, so they don't get updated with a Windows update, which makes it troublesome to eliminate the problem, and it will likely be a problem for a while," he says.

Attackers are becoming smarter about what and how they attack, increasingly using evasion tactics to sidestep or confuse wireless intrusion detection/prevention applications (IDS/IPS). The long-term solution is smarter IDS/IPS systems that can more comprehensively monitor and analyze wireless traffic and behaviors. But researchers, such as those at Dartmouth College's Project MAP (for measure, analyze and protect), are only in the early stages of such work.

The second wireless threat stems from the fact that many mobile users don't seem to be getting any smarter about wireless security.

"The biggest threat is people who use open Wi-Fi access points and don't use encryption or VPNs," says David Kotz, Dartmouth professor of computer science and one of the lead Project MAP researchers. "They trust some random hot spot operator or open access point somewhere with their personal or professional data. People are careless."

That's putting it diplomatically.

Security consultant Winn Schwartau likes to tell how his then-12-year-old son used a Windows-based Palm Treo to wirelessly eavesdrop on business executives using laptops or PDAs on an airport or other public Wi-Fi net. The boy routinely collected username/password combinations for corporate nets. "My son had passwords to 40 of the Fortune 100 [nets]," he says.

The key vulnerability was that these users, even if they used an encrypted VPN tunnel to access the corporate net, repeatedly used an unencrypted wireless link to access Internet mail or other Web sites in the clear, allowing the younger Schwartau to collect enough information to access a user's Web mail account. He then used it to send the user an e-mail from the user's own account. "I can then infect that machine [with malicious code], and have access to your VPN account," Schwartau says.

The inverse of this problem is allowing personal mobile devices, which have been exposed to the Internet in the wild, to connect to corporate nets. "Normal security standards and procedures are often ignored when users are allowed to connect their own devices," says Lora Mellies, information security officer at Hartsfield-Jackson Atlanta International Airport. "For instance, there may be no scheme to regularly back up the information, no firewall or antivirus protection installed, and no use of encryption for confidentiality or [of] tokens/certificates for strong authentication."

"No one can define the perimeter [of the corporate net] anymore," says Schwartau. "The rule is: 'Thou shalt connect nowhere except to the corporate network; once you're there, you can do whatever you want, but we'll be watching you.'"

This threat will only get worse as the number of ill-trained mobile users grows, along with the ballooning amount of sensitive or proprietary corporate data on their mobile devices.

What of wireless VoIP?

Judging from the market, where enterprises vote with their dollars, the answer so far is, "Generally, no," at least for large-scale deployments.

There are exceptions, though rare, and they tend to prove the rule. One of the most often cited is Osaka Gas, in Japan. The utility used Meru Networks' WLAN infrastructure to support 6,000 mobile phones that were equipped with cellular and Wi-Fi network interfaces. The price tag for the whole project: $10 million.

Wireless VoIP market forecast:

                                                      2007           2012
  VoIP access points for enterprises                  $442 million   $1.75 billion
  VoIP wireless LAN switch and mobility controllers   $500 million   $2.7 billion
  VoIP over Wi-Fi handsets (Wi-Fi only)               $93 million    $600 million

The reluctance to embrace large-scale wireless VoIP isn't surprising. Enterprisewide wireline VoIP deployments have only fairly recently found traction, and many of these have been angst-ridden. To be fair, often the angst is created by specific issues or problems at a given enterprise site.

But using a wireless connection in place of a wire adds lots of complexities, solutions to which are only slowly maturing. Access points have to be pervasively distributed to support voice traffic, while radio interference can easily affect voice quality or call sessions. Wireless eavesdropping on unsecured VoIP sessions is another worry for enterprise managers.

And it's difficult to pinpoint savings, says Forrester's Chris Silva. "Wireless VoIP has been positioned as a way to replace cellular minutes of use," he says. "But corporate IT doesn't have a good handle on what they're actually spending on this: It's often just expensed. So it's hard to make a case for savings and hard therefore to make a case for investing in VoIP over WLAN."

Over the course of three months, we tested WLAN switches and access points from Aruba Wireless Networks, Chantry Networks (now Siemens), Cisco and Colubris Networks in terms of audio quality, QoS enforcement, roaming capabilities and system features.

Among the findings:

* With QoS enforcement turned on, and with only voice traffic on the net, calls nearly matched toll-quality audio.

* With even a small amount of data traffic, dropped calls became common and audio quality was poor, even with QoS still enabled.

* Roaming from one access point to another either failed or took so long, from 0.5 to 10 seconds, that calls dropped.

Those findings reflect some of the experience at Dartmouth College, which embraced a limited VoIP deployment on its pervasive Aruba-based campus wireless LAN four years ago. Initially, some college staff used the wearable mobile VoIP phone from Vocera. There were some problems with roaming, according to David Bucciero, Dartmouth director of technical services, who despite these teething pains is one who says wireless VoIP is worth the hassle.

More recently, the college has added just under 100 Cisco 7920 wireless VoIP handsets, which "were flawless," though latency was an issue early in the deployment, says Bucciero. Reducing those delays has been an ongoing tuning process, working closely with both Aruba and Cisco, the wireline net vendor for the college.

Things have changed in two years, including the advent of the 802.11e QoS standard, augmented by continued proprietary QoS tweaks, and faster handoffs between access points.

But the real change has been the growing interest in, and products for, shifting call sessions automatically between cellular and Wi-Fi nets. At the enterprise level, this convergence entails an IP PBX, usually a Session Initiation Protocol (SIP) server, the WLAN infrastructure, new specialized servers from start-ups like Divitas and established players like Siemens, and accompanying client code running on so-called dual-mode handsets, which have both a cellular and a Wi-Fi radio.

Dartmouth is doing exactly this, running a pilot test with the Nokia E61i, a dual-mode mobile phone recently introduced in the United States as part of its convergence partnership with Cisco. The handsets use SIP to talk to the Cisco CallManager IP PBX.

"Cellular and Wi-Fi convergence is the real pull for VoIP over wireless LANs," says Farpoint's Mathias. "Once that [convergence] happens, then we can converge dialing directories, voice mail, other services, and have one phone that works everywhere."


A growing number of companies are moving beyond or even ignoring mobile e-mail in favor of mobilizing line-of-business applications.

"When you start rolling out these applications over a wider expanse, the questions become 'how can I lower costs of existing operations' or 'how can I provide new opportunities to grow revenue,'" says Bob Egan, chief analyst with TowerGroup, a Needham, Mass., consulting company. "These questions force you into thinking in a strategic mode versus an ad hoc mode."

In a 2006 TechRepublic survey, 370 U.S. IT and business professionals said they were targeting the following applications for mobilization (respondents could pick more than one answer): intranet access (chosen by 23%), field service/data entry/data collection (21%), personal information management (19%), customer relationship management or sales force automation (16%), supply chain management (12%), and ERP (nearly 10%).

The justification for making these applications mobile is increased worker productivity and efficiency, which was cited as "extremely significant" by 35% of the same respondents. The two other top justifications ("extremely significant") were reduced costs, cited by nearly 30%, and improved data collection and accuracy, cited by 28%. In all three cases, larger percentages cited these justifications as "significant."

Successfully exploiting such applications and achieving these goals requires changes in such diverse areas as employee and manager responsibilities and accountability, network access and authentication, mobile device management, end user and wireless networking tech support, and security and data-protection policies and enforcement.

"If you don't actively manage [mobile] workforce issues, including human resources and psychological issues as well as technology, you don't get the full value," says John Girard, vice president for Gartner. "In the end, the most important parts are the human parts: How do you monitor work, how do you assign responsibility, and do you understand what your team is doing?"

To make this possible, Gartner recommends consolidating an array of mobile provisioning, management and security functions (such as vulnerability assessment, security configuration, standard software image control, security and performance monitoring), shifting routine functions from the security group to the operations group, and forging joint policy development between those groups. One goal of this approach is to minimize the number of individual software products that target subsets of mobility issues but can't share information and aren't part of a strategic mobility plan.

"If you have different policies for different platforms [desktops, notebooks, smartphones], how do you maintain consistency?" Girard asks. "Most companies have a software distribution plan that works well for the desktop but less well for notebooks, and even less well for smartphones." Or a well-developed method for backing up desktop PCs may ignore mobile devices completely, despite the growing amount of corporate data on them and the greater likelihood of loss, theft or hacks.

"[Organizational changes] are all about controlling the flow of the company's intellectual property, how to provision and protect the data on the net and on the devices, and all the responsibilities that go along with that," says TowerGroup's Bob Egan.

Mobility becomes a system, or a system of systems that has to be viewed and treated as a whole. "With more and more users being mobile every day, we are paying a lot of attention not only to the uptime but also to the health of the system," says Daver Malik, telecom engineer at Hartsfield-Jackson Atlanta International Airport. "Careful watch on the system usage, capacity and trends is kept so as to prevent any undue disruption to the users."

One related aspect in preventing undue user disruption is tech support and the enterprise help desk. "Very few companies do a good job in supporting mobile workers," says Jack Gold, principal of J. Gold Associates. "Their support infrastructure today is for desktop support: You can't send a technician into the field to fix a [mobile] problem." The tech support team needs new training, new tools, new policies and procedures to be able to effectively and quickly respond to mobility problems.

One emerging alternative is to outsource some or all of these tasks to a new breed of managed services supplier. One example is Movero Technology, an Austin company that handles all aspects of cellular-based device and application deployments for an enterprise.

How do I get control of mobility costs?

Get a grip.

There are lots of costs in mobility: wireless and wired infrastructures; cellular voice and data plans, including roaming charges; the usage patterns of those plans; mobile device purchases; applications; software for device management; training; tech support.

"Viewing this from a strategic perspective means these costs become more visible," says TowerGroup's Egan. A strategic mobility plan for the enterprise uncovers, identifies and quantifies the true costs of the typical piecemeal approach to enterprise mobility, and creates the possibility for systematically controlling and minimizing them, he says.

This can be a shock to organizations that have handled mobility in an ad hoc way, Egan says. "Viewed from a strategic viewpoint, costs become more visible, so it seems like they're much greater," he says. "But the ad hoc approach to mobility hid the real costs, and those costs are much greater in my view than they are for a strategic approach."

A strategic plan can also make more visible the potential benefits of mobility, in terms of saving money or increasing revenues, an essential element in evaluating the needed investments.

Egan says one of his biggest surprises was talking with auto rental giant Avis, which was one of the first to have employees equipped with wireless handhelds, to meet customers in the parking lot as they returned their automobiles. "I said 'what a great thing for customer service,'" Egan says. "The Avis guy started laughing." The real benefit of the system was that it let Avis make an instant, on-the-spot decision about whether to keep the car for servicing, which costs money, or send it to auction. It was about where not to spend Avis' cash.

With a strategic plan, centralized and standardized device and software purchases are possible, a key element in rationalizing and reducing mobility costs. At the same time, changes in network infrastructure and in business processes can be budgeted and planned for. A mobile deployment can be frustrating and investments wasted if, say, an increase in data or transactions overwhelms back-end systems.

"Utilize your fixed infrastructure to its maximum potential to support the expanding wireless/mobile environment," says Hartsfield-Jackson Airport's Malik. "A carefully developed plan for the fixed portion of the network (for example fiber) that is capable of supporting future expansions both in terms of size and technology is the key component of controlling the cost related to such expansions, as and when they happen."

Acquisition costs have to be managed for mobility just as they are for corporate desktops. "It's very important to know the costs and ownership implications of everything you buy [for a mobile deployment]," says Gartner's Girard. "Figure out what platforms you're willing to support, and provide business groups and users the incentives for adopting those."

Girard recommends a thorough inventory of the relevant tools, systems and services you already have, including software licenses. "Where have you already spent money?" he says. "Then apply Occam's Razor, simplify. Ask yourself, 'How do I reach fewer products, both to reduce complexity and reduce costs?'"

A hidden element in cost calculations, according to Venture Development Corp. (VDC), is the impact of downtime and tech support if the mobile device, or some other part of the mobile system, fails. In an October 2006 report, VDC estimated that the failure rates of some consumer-grade mobile devices can exceed 20% per month. "In fact, the overall cost of downtime/lost productivity can represent up to 30% of the TCO (total cost of ownership) of a mobile device," according to the report.
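The arithmetic behind VDC's 30% figure is straightforward; here is an illustrative sketch in which the split among device, support and downtime costs is invented for the example, not taken from the report:

```python
def downtime_share(device_cost, support_cost, downtime_cost):
    """Fraction of a device's total cost of ownership due to downtime."""
    return downtime_cost / (device_cost + support_cost + downtime_cost)

# A hypothetical $600 consumer-grade device, $400 of support, and
# $430 of downtime/lost productivity over its life:
share = downtime_share(600, 400, 430)
print(round(share, 2))  # 0.3 -- downtime is ~30% of TCO
```

Run the same numbers with a pricier but more reliable semi-rugged device and the downtime share, and often the total, shrinks, which is VDC's argument for the higher up-front cost.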

VDC notes that device vendors are introducing new features and technologies to boost the durability and ruggedness of laptops and other handhelds. This class includes the semi-rugged laptops, which can endure a lot more rough handling and accidents than their consumer-grade cousins, even though they can't match the military-grade devices designed for the harshest conditions. The higher initial capital cost for such devices is worth it, because the company avoids the much higher costs of downtime due to equipment failures.

A strategic plan makes it possible to negotiate more aggressively with wireless carriers, refining cellular data plans tuned for various groups of users, minimizing overage charges in terms of rates and shared minutes or megabytes, and keeping international roaming charges in check, says consultant Jack Gold.

What can I do about denial-of-service attacks?

Not much.

There are two kinds of DoS attacks emerging. One uses radio waves to jam a wireless LAN (WLAN) access point or network access card. The other, more sophisticated, manipulates the 802.11 protocols to accomplish the same thing: blocking a radio from sending or receiving.

A good example of jamming, though it's unintentional, is caused by the microwave trucks used by TV stations covering the Boston Red Sox home games at Fenway Park. In some cases, the tightly focused beams are not a problem for the baseball park's unlicensed band 802.11 WLAN because they're aimed away from the park to one of several towers. But in one case, the beam shot across the park, bounced off a bank of newly installed metal bleachers, and reflected back into the park, wiping out the WLAN.

Red Sox IT Director Steve Conley says he could stand right next to a WLAN access point with a wireless notebook and still not be able to connect to it.

Few homemade or commercial jammers come with the power of these commercial microwave systems. But for short distances, they don't need a lot. Products available include a $400 pocket-sized jammer that can disrupt three frequencies, including 2.4 GHz, up to 90 feet. It's advertised as a way to disable "spy cameras" running on wireless links. Another palm-sized model with a range of about 30 feet costs about $290.

There's even the Wi-Fi Hog project, complete with its own philosophical justification for "liberating" public wireless nets from the concept of shared use. The Hog, mounted on a notebook PC, uses selective jamming to lock out other clients from an access point and stake an exclusive claim on its use.

But a recent article on the Web site of the Instrumentation, Systems and Automation Society, a nonprofit professional group focusing on industrial automation, puts the jamming threat into perspective. The article, by Richard Caro, chief executive of CMS Associates, lays out several reasons why jamming is not as easy to pull off effectively as some claim and others fear.

(Caro mentions that the tactic of battlefield radio jamming by German forces in World War II led to the invention of frequency-hopping spread spectrum communications as a countermeasure, an innovation patented by Austrian-born Hollywood actress Hedy Lamarr and her associate George Antheil.)

"Interference is definitely an issue," says Farpoint Group's Craig Mathias. "We were able to construct some bad interference scenarios and show their impact. It was quite interesting to see how much damage could be done."

"You're toast," says Winn Schwartau, of The Security Awareness Company, who wrote about the threat in his 2000 book CyberShock.

Currently, there's no real countermeasure for a deliberate, focused jamming attack, except to quickly detect it, with a tool like Cognio Spectrum Analyzer, which Cisco is offering as part of its wireless LAN management tool set. Once it's located, you can use "crowbar remediation, to beat the crap out of it," says Mathias.

Less amenable to crowbars is the second type of DoS attack, the abuse of the 802.11 media access control (MAC) layer protocols by creating changes in drivers or firmware. "It causes the network card to misbehave with respect to the MAC protocols," says David Kotz, professor of computer science at Dartmouth College, where this is one of the areas under study by Kotz's MAP Project (for measure, analyze, and protect), a joint effort with Aruba Networks. "Because the card isn't being 'fair' in following the rules, it makes the net unusable to others."

One example would be to send de-authentication frames to a specific client, or broadcast them to all the clients, of a given access point. Obediently, the clients will disconnect from the access point. "Now most of them re-authenticate right away," Kotz says. "But if the attack repeats, you're getting these interruptions on your [Wi-Fi] phone or video stream."
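Detecting this kind of deauthentication flood mostly comes down to rate-counting. A minimal sketch, with an assumed threshold and window rather than any real IDS product's settings:

```python
from collections import deque

class DeauthFloodDetector:
    """Flag a likely deauthentication flood: an unusually high rate of
    deauth frames inside a sliding time window. Threshold and window
    here are illustrative, not taken from any particular IDS."""

    def __init__(self, threshold=10, window_seconds=1.0):
        self.threshold = threshold
        self.window = window_seconds
        self.times = deque()

    def observe(self, timestamp):
        """Record one observed deauth frame; True if a flood is suspected."""
        self.times.append(timestamp)
        # Drop frames that have aged out of the sliding window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.threshold

detector = DeauthFloodDetector()
# Ten deauth frames inside one second look like an attack:
flood = [detector.observe(t / 10) for t in range(10)]
print(flood[-1])  # True
```

A wireless IDS applies the same idea per client and per access point, since legitimate deauthentications do occur, just not in repeated bursts.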

For now, the response is the same as for jamming attacks: detect the problem as quickly as possible, find the offender as quickly as possible, and send in "police with guns," says Kotz.

"But fundamentally, the long-term solution is to fix the protocol itself," he says.

Network World Lab Alliance member David Newman outlines some tuning parameters in another Network World article on that topic.

How do I get my wired LANs and my wireless LANs to play nicely together? 

Former Network World senior editor Phil Hochmuth explores the tricks of the trade in his recent article on that point. 

Five questions you need to ask WLAN vendors before buying

By Craig Mathias, Network World, 10/1/07

If you're working on an RFP for the purchase of an enterprise-class wireless LAN (WLAN) system, or even just beginning to think about a deployment of more than a few access points, here are five questions to ask potential vendors:

Radio is a notoriously difficult environment. While the principles of radio transmission are not too different from those applied to wire, radio performance can vary from moment to moment. So, when you're estimating workloads and response windows, it's important to ask a vendor how they provide assurance that the solution they'll be proposing will meet your performance objectives. Be as quantitative as possible on the requirements side, and then carefully examine the proposed solution. Get a guarantee of performance; that's always a requirement in RFPs I prepare, and the list of specific performance requirements gets stapled to the purchase order: perform or refund.

It's easy to specify a solution that meets only current needs. But the demands on networks only grow over time, along with the number of users and applications. It's thus critical to ask how the solution will scale to a potentially much larger size in response. For example: will more access points (APs) be required, or will a wholesale swapout of the controller be required? Can the management appliance handle multiple sites, and not just multiple controllers? Can I use wireless mesh connections to link APs if I need more capacity but don't have the time (or the budget) to install more wire?

Traffic mix can almost never be estimated accurately in advance. And with more time-bounded traffic – especially voice, but also streaming video in some venues – becoming part of the mix, support for traffic prioritization is critical. Provisioning for it is complicated, though, by the statistical variability inherent in radio communications. What does the proposed solution do to improve the reliability and quality of time-critical communications?

Sure, you can install a WLAN yourself, but it's ultimately much cheaper (especially when performance guarantees are part of the deal) to have the vendor, VAR or integrator do it. Professional installers have seen it all, and can work around installation challenges that would otherwise tie up resources better suited to other tasks. Similarly, get a support contract with a guarantee of response times when critical failures occur (they won't happen very often, but angry users are zero fun).

Despite rapid growth, the wireless-LAN business is ferociously competitive. Get several bids; that's obvious. But go through a best-and-final bidding round with vendors that qualify in every dimension – design, solution, installation, and support. You'll be surprised how hard they'll work to win your business.

Finally, formal RFPs aren't always required, but they're always a good idea if for no other reason than they'll help organize your thoughts, clarify your objectives, and get buy-in from the guys with the money.


Top trends in the WLAN world

Changes in standards, convergence and product architectures drive the market

By Craig Mathias, Network World, 10/1/07

Most observers of the wireless-LAN (WLAN) industry might have assumed that, more than 20 years into the evolution of the technology and the products based on it, we'd be, well, done at this point.

Let's see, we've got radios, protocols, a set of standards, lots of vendors, and lots of demand. We've got traditional office applications, a broad range of verticals, telemetry, voice, and more. We've got broad deployments across enterprises of all forms, and in public-space/metro and residential installations as well. In short, WLANs have seen phenomenal success, with more on the way.


But the idea of being "done" after a mere 20 years or so is rather abstract and theoretical in a field where the rate of innovation has never been greater – innovation driven by new technologies, market demand, and the fact that we still don't have a clear "best" way to build wireless LAN systems.


We started with the so-called "thick" access point (AP), the idea being that we would deploy these cellular-style and hand off client connections as they roamed within an area of coverage. It soon became pretty clear, though, that a lot of the functions common to each AP could be centralized, and the switched WLAN was born. But I'm getting a little ahead of myself here.


It's difficult to pick the top five trends in a space where innovation influences every element, but I think there are in fact five key trends in enterprise-class wireless LANs today:


  1. 802.11n - .11n is the latest in a long line of standards produced by the IEEE 802.11 Working Group. 802.11, along with Wi-Fi, has been a key reason for the success of WLANs, just as 802.3 was for Ethernet. 802.11n is, however, perhaps the most important WLAN standard since the original of 1997. It's all about performance, moving throughput from today's 54 Mbps of 802.11a and .11g to as much as 300-600 Mbps, depending upon implementation. Sure, those are raw speeds, but we're seeing over 100 Mbps at Layer 7 today with early production products. More importantly, we're seeing better rate-vs.-range performance, corresponding to improved reliability. Capacity is the name of the game as WLANs become the primary and even default vehicle for almost all clients in the enterprise going forward. And, with the Wi-Fi Alliance now testing for interoperability, it's not too early to jump on the .11n train.


  2. Unified networks – The traditional model for WLAN installations has been the overlay – quite literally, overbuilding a WLAN on top of an area already provisioned with wire. We thus end up with two networks with common users but separate management, security, and operations. We're just now entering the era of unified networks, where the boundaries between wire and wireless won't be quite so defined. A common wired infrastructure with unified management is on the way, replacing today's separate networks, which intersect with, well, the LAN at only a few points. Wireless will become the primary access for data, voice, and almost all client traffic, and wire will be used for stationary devices, interconnect of wireless components, and backhaul.


  3. VoFi – That's voice over IP over Wi-Fi. I've done enough experiments with VoFi to now be satisfied that a properly provisioned WLAN infrastructure can in fact quite easily support a large number of high-quality voice connections. VoFi handsets are widely available, but the really exciting part is in trend #4.


  4. Convergence – There are already well over 100 cellular handsets on the market today that have built-in Wi-Fi. And many of those can be programmed to dynamically hand off voice and data traffic between these two radios. This is called fixed/mobile convergence, and it promises a future with a single subscriber unit that can quite literally do it all – voice and access to applications, in the office, in the home, and on the road.


  5. Architecture – Which brings me back to the point I started with, to wit: there is no trend in architecture. We still have traditional APs, and thin APs, as well as ultra-thin APs, "fit" APs, meshed APs, APs that can directly forward traffic without going through a central controller or switch, and APs that are ganged into Wi-Fi arrays designed to provide maximum capacity in a given area. The arguments over architecture won't be settled anytime soon, but it's a good bet that the tool needed for a particular job is going to be available, and often from multiple sources.
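The raw-rate jump described in trend #1 is easy to quantify. Here is a quick sketch, using only the figures cited above; note that real Layer-7 throughput sits well below the raw PHY rate (the column reports just over 100 Mbps with early 802.11n products):

```python
# Illustrative arithmetic only, using the rates cited in the article.
LEGACY_RAW_MBPS = 54        # 802.11a/g maximum PHY rate
N_RAW_MBPS = (300, 600)     # 802.11n PHY rates, depending on implementation

for raw in N_RAW_MBPS:
    speedup = raw / LEGACY_RAW_MBPS
    print(f"{raw} Mbps raw is {speedup:.1f}x the 802.11a/g rate")
```

Even the low end of the 802.11n range is more than a five-fold increase in raw signaling rate over 802.11a/g.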


Innovation, as I said, is key to the future of WLANs as they race toward ubiquity. But we already have excellent solutions meeting the needs of enterprise users, and the trends noted above bode well for the future of enterprise wireless access.


Outlining the basics of WLAN communications

By Craig Mathias, Network World, 10/1/07

Wired LANs are built from a reasonably small set of common components – switches, routers, and gateways. It's easy to assemble the right set of these components for a given solution and deploy that solution with reasonable assurance that performance objectives will be met. Wireless LANs follow a similar philosophy, but with a somewhat greater degree of variance, primarily relating to where in the network particular functions are located and how traffic moves through the wireless LAN itself.


At the edge of a wireless LAN are access points (APs), which function as bridges between a wired network and (typically mobile) client devices. The functionality inherent in a specific AP can range from little more than a radio and an antenna to the ability to route traffic across subnets. APs are typically interconnected via Ethernet cabling, but can also relay data among themselves over the air using wireless mesh techniques. This capability can be useful in difficult-to-wire or temporary installations, and is now also being used as a reliability mechanism.


The key differentiator in the design of APs is how much local intelligence they possess. Before the introduction of the wireless LAN switch in 2001, APs were atomic network elements, separately provisioned and managed. The WLAN switch centralized common functions, like security and management, into a single location that resembled an Ethernet switch. The switch is less common today and has largely evolved into the wireless LAN controller, which uses intermediary switches for interconnection and PoE but again centralizes common traffic-flow functions. Controllers are usually designed to work together to allow scalability across large geographies and for fault tolerance. Some can also take on management functions.


With so much architectural variability, it's often useful when analyzing WLAN architectures to think in terms of three planes describing key functions, as follows:


  • The data plane describes how traffic flows from the AP to other nodes in the network. Some APs must be connected directly to a wireless switch, while others can communicate with a wireless switch or controller over an IP connection – Layer-2 vs. Layer-3, respectively. An interesting approach today is the ability of some APs to forward traffic directly to a destination without going through a centralized controller, which some vendors claim will yield meaningfully higher performance.


  • The control plane is responsible for the real-time control of APs, which can include when a particular AP transmits or receives and which client node will receive attention next. This function can be distributed, whereby each AP makes its own decisions, or centralized, where a controller handles this task. Security can also be centralized or distributed in each AP. The decision of how much control to locate where will be the key architectural differentiator going forward.


  • The management plane handles configuration, monitoring, reporting, exception handling, and other functions common to network operations. All enterprise-class WLAN systems are designed around centralized management, as having to independently manage each AP would become unwieldy beyond a small number of nodes. This function is often resident in a server, or, increasingly, in an appliance capable of managing even a very large distributed WLAN installation.
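The three-plane split above can be sketched as a toy model. All class and method names here are invented for illustration and do not correspond to any vendor's API; the point is only to show where each function lives:

```python
from dataclasses import dataclass, field

@dataclass
class AccessPoint:
    """Edge device bridging wireless clients to the network."""
    name: str
    forwards_locally: bool = False  # distributed vs. centralized data plane

    def forward(self, frame: str) -> str:
        # Data plane: either bridge the frame straight toward its
        # destination, or tunnel it through the central controller.
        if self.forwards_locally:
            return f"{self.name}: forwarded {frame!r} directly"
        return f"{self.name}: tunneled {frame!r} to controller"

@dataclass
class Controller:
    """Control plane: real-time coordination of APs (channels, scheduling)."""
    channel_plan: dict = field(default_factory=dict)

    def assign_channel(self, ap: AccessPoint, channel: int) -> None:
        self.channel_plan[ap.name] = channel

@dataclass
class ManagementServer:
    """Management plane: configuration, monitoring, reporting."""
    configs: dict = field(default_factory=dict)

    def configure(self, ap: AccessPoint, ssid: str) -> None:
        self.configs[ap.name] = {"ssid": ssid}
```

Flipping `forwards_locally` models the distributed-data-plane option noted under the data plane, where an AP bridges traffic directly rather than tunneling every packet through the controller.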


Farpoint Group believes that the future of enterprise-class WLAN systems lies in distributed data, centralized management, and both centralized and distributed control. It's now clear that a purely "thin" or "thick" AP cannot be a universal solution, and the ability of an AP to adapt to changing conditions, perhaps even switching to mesh mode when a wired backhaul link fails, is the key to flexible, reliable, mission-critical enterprise WLAN installations. But the degree of architectural variability is in fact likely to increase before any convergence takes place; it's still fairly early, after all, in the history of this technology.
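The adaptive behavior just described, an AP failing over to mesh backhaul when its wired uplink dies, can be sketched as a simple selection policy. The function and its inputs are hypothetical, for illustration only:

```python
WIRED, MESH = "wired", "mesh"

def select_backhaul(wired_link_up: bool, mesh_neighbors: int) -> str:
    """Prefer the wired uplink; fail over to over-the-air mesh
    backhaul only when the wire is down and a neighbor AP is in range."""
    if wired_link_up:
        return WIRED
    if mesh_neighbors > 0:
        return MESH
    raise RuntimeError("AP isolated: no wired uplink and no mesh neighbors")
```

A real AP would also rate-limit flapping between the two modes, but the core decision is this simple preference ordering.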


There is thus no single enterprise Wi-Fi architecture today that can claim the title of best. Part of the reason for this is that it's very difficult (and often very expensive) to characterize performance and do comparative benchmarking analysis, especially considering the impact of radio artifacts (fading, interference, etc.) and the high degree of variability in traffic loads, duty cycles, volumes, and mixes. This situation is slowly improving, however, with the development of new performance-analysis tools, and over the next few years we'll gradually see enough success stories to at least generalize on the best approach for a good variety of applications.


