High-Performance Networking Unleashed



- 8 -

Switches

by Frank C. Pappas and Emil Rensing

One of the most amazing, and most aggravating, aspects of networking in the late 1990s is the rapid pace at which the underlying technologies consistently manage to evolve. In other industries, core technologies experience revolutionary growth every 5 to 15 years (or more), but even the most conservative projections show the underpinnings of the information technology industry growing by leaps and bounds every 18 months, sometimes even more frequently! Most recently, companies have become locked in a number of heated battles over which vendors will provide the hardware and software that will lead the industry into the twenty-first century. These companies have been releasing, at breakneck speed, attractive yet bloated browsers, plug-and-play desktop operating systems, and proprietary high-speed modem designs, in addition to wiring various cities with cutting-edge communications media, including ATM (Asynchronous Transfer Mode), ADSL (Asymmetric Digital Subscriber Line), and more. The race to provide fast, robust, and cost-effective data-sharing solutions is on, and everyone--from vendors to corporate clientele--is playing for keeps.

The one disheartening factor associated with this tremendous growth is that the budgets of most corporate IT managers and network engineers continue to shrink. If you think about it from the technologically illiterate standpoint of most executives, unless system problems become chronic and network reliability begins to suffer noticeably, there's obviously no need to invest tens of thousands of additional dollars for infrastructure build-outs. After all, if it ain't broke, why fix it? Rarely, if ever, are any long-term plans or proactive procedures developed for identifying upcoming network issues and resolving them in a timely and low-cost manner.

Unfortunately, this hands-off attitude leaves the network engineer in a rather precarious position. Financially, your hands are tied: You'll be lucky to get even enough funding to maintain your current network infrastructure, let alone make the serious improvements to cabling, hardware, and software necessary to support your ever-growing user community. So how do you stave off latency, device time-outs, intermittent connections, and crotchety users, all while not spending thousands of dollars to physically increase bandwidth, add new servers, or hire a staff of thousands? The solution can be described in two words: Ethernet switching.

Why Switch to Switching?

As executive management becomes increasingly tight-fisted when it comes to providing financial assistance for the support, maintenance, and expansion of network resources, you'll frequently be confronted by a number of different problems that will require innovative or unorthodox solutions--often your only route when funding is scarce. Two factors will significantly affect your network. You'll need to pay special attention to both of them, or you're liable to suddenly find yourself between a rock and a hard place.

First, computers, workstations and servers alike, simply aren't what they used to be. No longer are desktop machines simply dumb terminals or systems that are only slightly more intelligent than the average toaster. Today's desktops, workstations, and network servers have been rebuilt bigger, faster, and stronger than their predecessors, with the memory and processing capacity to crush many of the server-class machines that were in wide use only five years ago. This has resulted in newer, more demanding roles for these machines, roles that require the systems to be constantly transmitting across your network to access remote files, surf the Internet, use shared devices, and so on.

As if this weren't enough of a challenge all by itself, overall network growth will, if left unchecked, work to cripple your network infrastructure. As your network grows in terms of nodes, users, and services provided, sooner or later you'll be faced with the mother of all network problems: insufficient bandwidth. Performance will undoubtedly degrade, users will gripe about slow network response times, client/server applications will grind to a halt, and cross-network connections will be few and far between. This is not good for your network or for your hopes for a promotion. You've got two options at this point--work for a company that doesn't care how much money you spend and can afford long spans of network downtime, or adopt Ethernet switching.

To Bridge or Not to Bridge

Using bridges to separate multiple network segments has long been seen as an excellent method of reducing cross-segment traffic and realizing modest performance increases in growing or mature networks. Although bridging allows network engineers to subdivide saturated networks into more manageable mini-networks, a bridge solution is only effective up to the point at which the initial saturation problems recur. This generally happens when large numbers of users, on any of your segments, begin to demand significant numbers of intersegment connections. This is when the bridge's inability to provide simultaneous cross-segment connections starts to hobble the effectiveness of the solution. Thankfully, one of the strengths of switches is precisely their support for multiple simultaneous cross-segment communications, so there is a route out of your misery. Despite all this, it is important to recognize that in certain situations, bridges offer a better solution than switches--don't reject bridges out of hand. If your subnets will require little, if any, cross-segment contact, a bridge may just do the trick.

In an Ideal World

In an ideal world, you'd never be faced with critical network congestion. You'd always have plenty of time, staff, and financial support to isolate and neutralize even the most nascent of problems. You say that your company has just acquired your major competitor and that you have to integrate an additional 12,000 nodes into your currently overburdened token ring network? Just scrap the damn thing and install FDDI. You've just taken over three new floors in your building and are hiring a bunch of new employees? Heck, why not just rip out your old wiring, run some CAT5, and sing the praises of Fast Ethernet? Pardon the sarcasm, but on what planet are you currently living?

Alas, we operate in anything but an ideal world. Fast Ethernet, ATM, FDDI, and ADSL solutions (among others) are simply too expensive and resource-intensive to be used as fast-response weapons in your fight against network congestion. Although your IS team is probably quite versatile, the task of designing, implementing, and supporting a network based on these new technologies requires a good deal of time and effort, is anything but a straightforward procedure, and no doubt will require significant outlays of cash for staff training. Not to mention, of course, transition downtime, the inevitable glitches associated with large-scale installations of new technology, as well as any of a host of unanticipated issues that will need immediate attention. Maybe, just maybe, you can pull it off successfully. But if this is your only contingency plan, you can start looking for a new job soon.

Another avenue that is frequently explored when trying to alleviate network congestion centers on the installation of bridges and routers to segregate intersegment traffic. Although this is a valid solution that can often yield at least marginal results, configuring bridges and routers to provide optimal performance takes high degrees of skill, patience, and lots of network traffic analysis. Again, this option should be exercised only if time and money are on your side.

Switching to the Rescue

In reality, you can't simply dump your entire infrastructure every time problems begin to crop up with the current iteration of your network. Your boss won't stand for it and most likely won't (or can't) pay for new hardware and software. Your company certainly can't afford one transition after another, along with the downtime and other headaches, every time you want to take the easy way out of a networking fiasco. In many instances, Ethernet switching has emerged as the de facto solution for dealing with these types of network congestion issues, often saving network administrators time, money, and frustration in the process.

The redesign of networking infrastructure to integrate Ethernet segment switches into a traditional, nonswitched networking environment can yield surprising results, not only in terms of increased overall network performance but also in light of the low price/performance trade-offs that will be required to implement switching solutions, as opposed to FDDI, Fast Ethernet, or other high-speed networking technologies. The benefits of Ethernet switching are many: It is relatively inexpensive compared to other options; it can reap tangible improvements in network performance, regaining lost bandwidth and allowing for full duplex (20Mbps) networking; it can be implemented in a proportionately shorter period of time than FDDI, Fast Ethernet, or other technologies; and it allows you to retain your investments in current network infrastructure. All in all, this sounds like a pretty good option when the demon of clogged networks is staring you in the face.

How do switches work? In a very broad sense, Ethernet switches function by helping you break down greater, traffic-intensive networks into smaller, more controllable subnetworks. Instead of each device constantly vying for attention on a single saturated segment of 10Mbps Ethernet, switches allow single devices (or groups of devices) to "own" their own dedicated 10Mbps segments connected directly to the high-speed switch, which then facilitates intersegment communication. Although this sounds a lot like a bridge, there are some important distinctions that make switches much more dynamic and useful pieces of hardware.

Switches themselves are hardware devices not entirely different in appearance from routers, hubs, and bridges. However, three important factors separate switches from their networking brethren: overall speed (switches are much faster), forwarding methodology or electronic logic (switches are smarter), and higher port counts. In contrast to bridges and routers, which traditionally utilize less effective and more expensive microprocessor-and-software methods, switches direct data frames across the various segments in a faster and more efficient manner through an extensive reliance on on-board logic in the form of Application-Specific Integrated Circuits (ASICs).

Like bridges, switches subdivide larger networks and prevent the unnecessary flow of network traffic from one segment to another; in the case of cross-segment traffic, switches direct the frames only across the segments containing the source and destination hosts. In a traditional nonswitched Ethernet situation, each time a particular device transmits, or "talks," on the network, every other device must wait for the wire to become free, which is how collisions are avoided under the CSMA/CD rules of the IEEE's 802.3 specification. Although this ensures the integrity of your data, it does nothing to increase overall network speed. Switches help to ensure additional network access opportunities for attached devices (increasing speed and reducing latency) by restricting data flows to local segments unless frames are destined for a host located on another segment. In such a case, the switch examines the destination address and forwards the requisite frames only across the destination segment, leaving all other segments attached to that switch free of that particular transmission and (theoretically) able to carry local-segment traffic. Rather than being a passive connection between multiple segments, the switch works to ensure that network traffic burdens the fewest number of segments possible.
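To make that forwarding logic concrete, here is a minimal sketch in Python of the decision a switch makes for each incoming frame. It is purely illustrative: real switches implement this in ASIC hardware, and the port numbers, MAC addresses, and table shown here are hypothetical, not any vendor's actual implementation.

# Minimal sketch of the forwarding decision described above (illustrative only;
# real switches do this in ASIC hardware, not software).

def forward(frame_dst_mac, src_port, mac_table, all_ports):
    """Return the set of ports a frame should be sent out on."""
    dst_port = mac_table.get(frame_dst_mac)
    if dst_port is None:
        # Unknown destination: flood to every segment except the one it came from.
        return {p for p in all_ports if p != src_port}
    if dst_port == src_port:
        # Source and destination share a segment: filter the frame entirely,
        # keeping the traffic local and every other segment free.
        return set()
    # Known destination on another segment: forward only across that segment.
    return {dst_port}

# Hypothetical example: hosts A and B share port 1, host C sits alone on port 3.
mac_table = {"AA:AA:AA:AA:AA:AA": 1, "BB:BB:BB:BB:BB:BB": 1, "CC:CC:CC:CC:CC:CC": 3}
print(forward("CC:CC:CC:CC:CC:CC", src_port=1, mac_table=mac_table, all_ports={1, 2, 3, 4}))  # {3}
print(forward("BB:BB:BB:BB:BB:BB", src_port=1, mac_table=mac_table, all_ports={1, 2, 3, 4}))  # set()

Notice that a frame whose source and destination share a segment never leaves that segment, which is exactly what keeps the remaining segments free for their own local traffic.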

Switch Properties

By now you should be convinced of the important role that switches can play as part of your Ethernet network. If not, you may want to reread the chapter up to this point. Switches may save your job when your network starts down the inevitable road toward total collapse.

If you've whole-heartedly embraced this chapter's recommendations and are ready to go switch-shopping, a question presents itself: How do you pick a switch that will suit your needs? You'll need to spend a little time getting to know switches, some of their more important features, and how they do that which they do so well. Once you've got that information under your belt, you should be in a fairly good position to make authoritative choices about switch purchases.

Static Switching Versus Dynamic Switching

If you've gotten to the point where your network is extremely congested and you've called vendors in for demonstrations and quotes, be extraordinarily wary if their solutions depend on static switches. Although the devices that you evaluate during the course of re-engineering your network may or may not be explicitly referred to as static switches, take a good look at the functionality of the particular piece or pieces of hardware. If the products perform in a fashion that appears to make them nothing more than glorified hubs, chances are that you really don't want to invest in that type of switch. After all, the point of this whole operation is to segment and intelligently control intersegment traffic, thus reducing congestion. Static switches just don't hit the mark.

On the other end of the spectrum are the products that you do want to consider seriously: dynamic switches. Dynamic switches not only pay special attention to the forwarding of packets to their proper destination, but also maintain a table that associates individual nodes with the specific ports to which they are connected. This information, updated each time a particular machine transmits across the network, or perhaps at operator-defined intervals, keeps the switch's information as to node/port combinations up to date, allowing the switch to quickly direct frames across the proper segments, rather than across all segments on the switch.


NOTE: Dynamic switches will continue to save you huge amounts of time and energy long after you first integrate them into your network. Because dynamic switches update their forwarding tables every time devices broadcast across the network, you can rearrange your network, switching workstations from port to port to port, until your network is configured in the manner that suits you best, or you're blue in the face, whichever comes first! The tables will be updated automatically and your network won't go down!
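The learning behavior described above, including the note's point about re-patching workstations, can be sketched in a few lines of Python. This is an illustrative model only; table sizes, aging intervals, and update policies vary from vendor to vendor, and the five-minute aging value below is simply a common default, not a standard.

# Sketch of the address-learning step a dynamic switch performs on every frame
# it sees (illustrative assumption; vendors differ on aging and table size).
import time

mac_table = {}           # MAC address -> (port, last_seen timestamp)
AGING_SECONDS = 300      # a common default aging interval; actual values vary by vendor

def learn(src_mac, src_port):
    """Record (or refresh) which port a source address was last heard on."""
    mac_table[src_mac] = (src_port, time.time())

def lookup(dst_mac):
    """Return the port for a destination, forgetting entries that have aged out."""
    entry = mac_table.get(dst_mac)
    if entry is None:
        return None
    port, last_seen = entry
    if time.time() - last_seen > AGING_SECONDS:
        del mac_table[dst_mac]   # stale: the node may have moved to another port
        return None
    return port

# Moving a workstation to a new port simply causes the next frame it sends
# to overwrite the old entry -- no manual reconfiguration needed.
learn("AA:AA:AA:AA:AA:AA", src_port=2)
learn("AA:AA:AA:AA:AA:AA", src_port=5)   # workstation re-patched to port 5
print(lookup("AA:AA:AA:AA:AA:AA"))       # 5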

Segment Switching Versus Port Switching

There's a great ongoing debate about whether segment switching or port switching provides the optimum solution for resolving network congestion crises. It all boils down to a question of cash on hand: If you've got the cash, go with port switching; if not, then segment switching will be the order of the day. What's great about the segment-versus-port debate is that, for a change, you win either way.

Segment switches are able to handle the traffic from an entire network segment on each port, allowing you to connect a higher number of workstations or segments with fewer switches/physical ports. The great aspect of segment switches is that they are also capable of handling a single workstation on each port (in essence, a segment with one node). This will allow the network engineer to prearrange machines requiring only intermittent network access along the same segment, sharing one (relatively) low-traffic 10Mbps pipe. At the same time, high-end machines, such as network and database servers, optical drives, and other devices can be connected with a one device/one port scheme, allowing these high-bandwidth and critical devices their own dedicated path to the greater network without having to compete with someone's Internet game for network access. Because of the inevitable cost controls that you encounter on a daily basis, segment switching is the preferred and most readily implemented solution because it requires little in the way of additional expenditures for hardware, additional cabling, and so on.

Port switches (also referred to as switching hubs) are designed to accommodate a single device on each physical port. This is a network manager's dream--each workstation, server, and random device would have its own dedicated, 10Mbps path to the rest of the network. However, implementing a port-switching solution demands a good deal of capital for additional wiring (cable runs are needed from each device directly to the switch) and enough switches to provide the requisite number of physical ports. Additionally, as your network grows, you'll be faced with significantly increased expansion costs because you'll need new cable runs and possibly entirely new switches every few months. Again, if you've got lots of cash, this is a great option; you'll have quite the impressive network. However, whatever route you choose, you'll certainly end up with a much better network than you had prior to implementing switching.

Cut-Through Switching

Although switches by themselves will provide impressive gains in your overall network performance, there will occasionally be certain situations in which you will want (or need) to squeeze just a little more juice out of the system. Instead of looking at your boss and screaming in despair, an excellent alternative is to implement a cut-through switching solution.

Cut-through switching helps speed network communication by forwarding packets much sooner than traditional switching configurations allow. This is achieved by forwarding packets toward their destination before they have been received in their entirety, sending them on as soon as the switch is able to determine the destination address. Although this generally reduces network latency, pure cut-through switching can allow bad packets to eat up available bandwidth. To prevent this, configure your switch to allow a marginally longer delay between the receipt and forwarding of packets. Ideally, the switch should buffer the first 64 bytes of each frame; because collision fragments are shorter than 64 bytes, this is enough to eliminate the most common packet errors before the frame is forwarded across the appropriate segment to the destination host. This slightly increases network latency, though it still provides faster forwarding than store-and-forward switching. Unfortunately, if yours is an extraordinarily busy network, the benefits of cut-through switching will be less noticeable, and will reach their limits much sooner than in a less intensive environment.
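The differences among pure cut-through, the 64-byte buffering variant (often called fragment-free), and store-and-forward switching come down to how much of a frame the switch waits for before it begins forwarding. The following Python sketch is an illustrative comparison built on that assumption, not a description of any particular product.

# Sketch of the trade-off described above (illustrative, not a vendor
# implementation): how many bytes each mode waits for before forwarding.

DEST_MAC_BYTES = 6       # destination address is the first 6 bytes of an Ethernet frame
RUNT_THRESHOLD = 64      # minimum legal Ethernet frame size; collision fragments are shorter

def bytes_buffered_before_forwarding(mode, frame_length):
    if mode == "cut-through":
        # Forward as soon as the destination address has arrived.
        return DEST_MAC_BYTES
    if mode == "fragment-free":
        # Buffer the first 64 bytes so collision fragments (runts) are dropped
        # instead of being propagated onto the destination segment.
        return min(RUNT_THRESHOLD, frame_length)
    if mode == "store-and-forward":
        # Wait for the whole frame (see the next section).
        return frame_length
    raise ValueError(mode)

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, bytes_buffered_before_forwarding(mode, frame_length=1518))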

Store and Forward

Store-and-forward switching devices, as the archnemesis of cut-through switches, take an entirely different approach. It's very much like the tortoise and the hare, with store-and-forward devices playing the slower, yet more dependable, role of the two.

Instead of the faster send-it-as-soon-as-you-can rule used by cut-through devices, store-and-forward devices wait until the entire packet has been received by the switch, only then sending it on to its destination. This lets the switch verify the packet's CRC and eliminate the possibility of other transmission errors, allowing for highly reliable data transmission across your network. Although this doesn't strictly increase network performance, it does eliminate the retransmissions that damaged packets would otherwise force, freeing network resources and providing an associated speed increase.
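A rough Python sketch of that receive-verify-forward sequence follows. Here zlib.crc32 merely stands in for the Ethernet frame check sequence; a real switch computes the FCS in hardware and the exact bit-ordering details differ, so treat this as an illustration of the idea rather than a faithful FCS implementation.

# Sketch of the store-and-forward check described above. zlib.crc32 stands in
# for the Ethernet frame check sequence (FCS); details of the real FCS differ.
import zlib

def build_frame(payload: bytes) -> bytes:
    """Append a 4-byte checksum, as the sending adapter would."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def forward_if_valid(frame: bytes):
    """Receive the entire frame, verify the checksum, and only then forward it."""
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != received_crc:
        return None          # damaged frame is dropped rather than wasting bandwidth downstream
    return payload           # would be queued for the destination port here

good = build_frame(b"hello, segment 3")
bad = good[:-1] + bytes([good[-1] ^ 0xFF])   # corrupt the last byte in transit
print(forward_if_valid(good) is not None)    # True  -> forwarded
print(forward_if_valid(bad) is not None)     # False -> dropped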

Other Switching Issues

As with just about any area of networking technology, there are a number of additional issues that must be considered when implementing a switching solution for your network. These topics go above and beyond the simple selection of a basic switch and instead take a holistic view of networking in order to create a more powerful and more efficient finished package. The following section briefly covers high-speed network interfaces, competing solutions for high-speed Ethernet, network management issues, and virtual network options that can work in concert to make your network perform well above your expectations.

High-Speed Interfaces

When building (or rebuilding) a network with high-speed switching as the centerpiece of the endeavor, it's important to upgrade and to standardize as many of your network connections as you can, based on your particular hardware and financial constraints. Three areas need to be addressed: servers, workstations and attached devices, and interswitch connections. In each of these areas, it is important to pair the high-speed host adapter with the associated high-speed cabling, such as CAT5.

Network Servers

Because the NIC on a server is one of the prime areas where bottlenecks occur, it's important to install a high-speed interface on the server to alleviate NIC congestion. Because your network is only as fast as its slowest part, it's up to you to ensure that easily upgraded items, such as host adapters, do not pose significant performance barriers in and of themselves. This will allow for fast data transfers to and from the server.

Workstations (and Other Network Devices)

As the second factor in the network equation, the workstations and other devices attached to your network can also make or break its performance, based on the type and speed of the network adapters installed throughout your workstation community. High-speed interfaces are important because they allow for faster connections and data transfers with switches and servers, and will free up the network for other devices in a much shorter time frame. Additionally, your network, switches notwithstanding, will experience degraded performance if there is a significant differential among the interface speeds of your various network devices. The reason is that a fast port usually can't begin transmitting until it has received the whole transmission from a slow port, and a slow port can never utilize the full bandwidth provided by a fast port. In either case, the faster port is at the mercy of the slower one, and the extra bandwidth goes to waste.
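A little arithmetic shows why the mismatch matters. The snippet below computes the time needed to clock a maximum-size Ethernet frame onto the wire at 10Mbps and at 100Mbps; the frame size and link speeds are illustrative assumptions.

# Back-of-the-envelope arithmetic for the speed-mismatch point above
# (frame size and link speeds are illustrative assumptions).

def serialization_ms(frame_bytes, link_mbps):
    """Time to clock one frame onto the wire, in milliseconds."""
    return frame_bytes * 8 / (link_mbps * 1_000_000) * 1000

frame = 1518  # a maximum-size Ethernet frame
print(f"10 Mbps port:  {serialization_ms(frame, 10):.2f} ms per frame")   # about 1.21 ms
print(f"100 Mbps port: {serialization_ms(frame, 100):.2f} ms per frame")  # about 0.12 ms
# A 100Mbps server port feeding a 10Mbps workstation port therefore spends
# roughly 90% of each frame time idle, waiting for the slower port to drain.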

Interswitch Connections

Just as it is important to provide high-speed interfaces to servers and workstations, it is equally important to ensure that your switches, when interconnected, can communicate with one another at similarly fast rates. After all, your network redesign is for naught if your switches themselves become the very bottleneck that you have been trying to avoid!

Streamlining interswitch connections is generally not the most urgent issue that you'll encounter when integrating switches into your network, simply because the majority of installations will require only a single switch, or will have switches that themselves are supporting completely isolated segments. However, when switches need to be interconnected, in effect creating a miniature high-speed backbone between each other, it's important to provide the fastest interfaces and cabling possible to ensure plentiful bandwidth and low-latency network access.

High-Speed Ethernet Options

Two of the more recent developments in Ethernet networking deserve a paragraph or two when discussing switched Ethernetworks. Both technologies can afford significant increases in overall network performance with only modest expenditures in labor and hardware necessary to implement them.

Full Duplex (20Mbps) Ethernet

Designing a switched Ethernet network that combines port and segment switching with full duplex (20Mbps) Ethernet operation will increase overall network throughput and greatly decrease network latency, providing a much more responsive network for your user community. Full duplex Ethernetworks are implemented by disabling the collision detection procedures involved in the traditional (half duplex) Ethernet CSMA/CD scheme, allowing a device to send and receive at 10Mbps simultaneously. Full duplex Ethernetworks function across existing cable infrastructure, allowing you to retain a good portion of your current investment in network technology. Of course, there's always a catch. Although you can use your current wiring, you will need to purchase newer, high-speed network interface cards (NICs) for PCs, network and other servers, and any other devices attached to your network.

100Mbps Fast Ethernet

Another of the amazing technologies that have been increasing the speed of network communications is Fast Ethernet, a newer system that allows Ethernet-based traffic to cross your network at speeds at or near 100Mbps, well beyond the speed of traditional Ethernet (10Mbps) or Token Ring (4 or 16Mbps) networks. Fast Ethernet will operate across existing CAT3 cable (via the 100BASE-T4 implementation, which uses four wire pairs) or CAT5 cable (via 100BASE-TX), although CAT5 with 100BASE-TX is preferred because its higher signal-to-noise ratio ensures a more reliable level of communication.

As we've already discussed, one of the places where network congestion most frequently appears is at the network server. Although switched networks are the first step in lessening this bottleneck, supplementing your switch with high-speed Ethernet can provide greater performance gains than can be realized by either technology by itself. If you're up to the task of rewiring your entire office building, Fast Ethernet is the least expensive and most easily implemented solution available for providing 100Mbps networking throughout your organization. ATM, FDDI, and ADSL are nice, but without a lot of time for training, a large budget, and a whole lot of patience, they're not likely to be deployed in your company any time soon!

Network Management

When evaluating your switching solutions, be sure that your manufacturer supports (or hopefully provides) some type of SNMP-compliant management tools so that you can easily and effectively monitor and troubleshoot your switches. Although network management resources will vary to some degree from vendor to vendor, make sure that your particular switch can readily supply you with performance, error, and other related information so that you can easily spot and address network trouble.

Virtual Networks

If proactively monitoring your network, deploying speed-saving cut-through switches, and installing high-speed interfaces throughout your network still hasn't delivered the fantasy performance levels that you've been pining for, you're not out of options just yet. Another step you can take will require a lot more time and effort, but if you're working in a particularly large and unwieldy switched environment, then perhaps virtual networking is the right solution for your particular needs.

The reality of virtual networking is not far off from what you're probably envisioning right now. Virtual networking is the creation of multiple logical networks out of a single physical network or grouping of segments connected to a single switching device. This can be accomplished by configuring the switch to allow certain workstations or segments access only to specifically delineated segments, thereby denying access to all other segments connected to that particular switch.

Virtual networking can also be used as a management and security tool, enabling an administrator to group segments together into logical networks based on department (legal, production, customer service, and so on), physical location in the building, or simply to further filter excess traffic from busier parts of the network. What's more, virtual networking can be implemented in such a way as to allow only packets from certain predefined hosts access to restricted segments. In this way, the flow of data from potentially hostile machines can be eliminated long before those machines pose a threat to corporate data or the performance of your network.
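Conceptually, a virtual network is just one more lookup the switch performs before it forwards a frame. The Python sketch below models that idea with a port-to-group table and an optional allowed-host list; the group names, port assignments, and addresses are hypothetical, and real products express this through their own configuration interfaces rather than code.

# Sketch of the virtual-network filtering described above. Port groupings,
# group names, and the allowed-host list are illustrative assumptions only.

port_vlan = {1: "legal", 2: "legal", 3: "production", 4: "production", 5: "customer-service"}

# Optional per-group restriction: only these source MACs may reach the segment.
allowed_hosts = {"legal": {"AA:AA:AA:AA:AA:01", "AA:AA:AA:AA:AA:02"}}

def may_forward(src_mac, src_port, dst_port):
    """Permit forwarding only within one logical network, and only for trusted hosts."""
    src_vlan, dst_vlan = port_vlan[src_port], port_vlan[dst_port]
    if src_vlan != dst_vlan:
        return False          # different virtual networks never see each other's traffic
    allowed = allowed_hosts.get(dst_vlan)
    if allowed is not None and src_mac not in allowed:
        return False          # unknown host: filtered before it reaches the restricted segment
    return True

print(may_forward("AA:AA:AA:AA:AA:01", src_port=1, dst_port=2))  # True  (same group, trusted host)
print(may_forward("BB:BB:BB:BB:BB:99", src_port=3, dst_port=1))  # False (crosses virtual networks)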

Summary

As local area networks become increasingly complex and crowded beasts, the limits, financial and otherwise, that are imposed on technology support teams require that these troubleshooters approach network problems in new and often unconventional ways. Rather than scrapping your current network in favor of the revolutionary and often enticing technologies that are constantly flooding the market from a variety of vendors, one of the best ways to address network performance issues is to redesign your current network to include Ethernet switches.

By including Ethernet switches as part of your greater network, you'll gain a wide variety of benefits, including decreased latency, faster file transfers, fewer collisions and other transmission errors, and significantly easier management of the greater network. Your users will love you for providing a much more user-friendly network environment, your bosses will be thrilled that you've managed to keep the network together and responsive with a minimum of cash, threats, and ultimatums, and you'll be happy to avoid the gripes, groans, and midnight pages from the network operations center that tend to go hand in hand with slow-performing networks.

