by Frank C. Pappas and Emil Rensing
The reality of networking in the 1990s is that everyone--from management and local users to the most distant of Web surfers--is demanding more and more from rapidly aging legacy networks. Not to mention, of course, the hell that the poor souls tasked with keeping everything online and in tip-top condition must endure on a daily basis. It's almost a certainty that these ravenous bandwidth hogs will expect nothing less than miraculous improvements in network speed, access, and reliability, with only the most conservative of investments set aside for the upgrade of existing network components. Never mind the silly notions that may have been dancing (like sugarplums) in your head about actually expanding your network's processing and bandwidth capacity in order to accommodate the ever-increasing demands coming from your user community.
So what's a network engineer to do?
In an ideal world, every network engineer would be able to report to work on the first day of a new job and design a complete network from scratch, tailored specifically to meet--and beat--the many unique and demanding challenges that the particular computing environment has to offer. Of course, coupled with this carte blanche, every engineer would also be given ample money to purchase the best hardware (PCs, servers, hubs), software (network and client operating systems, client/server applications), and other networking essentials (cabling, external connectivity, and so on). And (considering we're fantasizing here) let's not forget a generous amount of time thrown in to build out the data center, run cable, and tweak, configure, and pray over every aspect of the network until there was nary a packet lost anywhere in the system.
Unfortunately, any veteran of the network world knows very well that this is hardly ever the case. The first days (sometimes weeks) with a new employer are most often spent analyzing the current network topology, services, and technology resources--at the same time that you're trying to get a handle on office politics, assess the skills of coworkers, and generally make a good first impression with the management team. Granted, little of this is particularly helpful in your quest to improve the company's network. Projects most often have to be accomplished in short spans of time and within the confines of ridiculously hobbled budgets, meaning that you'll have to be no less than a miracle worker in order to satisfy--let alone impress--the hand that feeds you.
Even for a synchronized and highly talented corporate infrastructure team, the threat of network death due to substandard or failing hardware and software is most certainly the single-largest drain on resources, leaving little in the way of spare time to plan--or build--for the future. Keeping the company's current systems intact and functional is, properly, the lifeblood of networking teams, especially because any degradation in network performance will most certainly be noticed by the users and bumped upstairs to management, often resulting in agonizing meetings with non-technical managers whose role--or so it would seem--is to know as little about the technology involved as possible, yet still have the final word in all serious decisions.
Consequently, the meager funding that is allocated to these groups is usually directed at stopgap or "band-aid" upgrades (incremental memory upgrades, an extra processor or two, and so on) intended to hold things together only as long as it takes for the next relatively minor advancement to present itself. It is quite frustrating when you realize that you'll rarely find companies eager to set aside the requisite thousands or possibly millions of dollars needed to properly deploy the latest in high-speed gadgetry and telecommunications gear, especially when the network has been neglected to the point of obsolescence. Granted, sooner or later significant cash outlays and large-scale network reengineering will become the only viable options for many of the battered and decaying networks deployed today throughout the world, so network engineers are in for some fun times in the years ahead. But take heart, fellow network fanatics, because despite the overwhelming tasks that lie ahead, there is a light at the end of the tunnel.
As impossible as it may seem, there are many productive avenues that you can explore in order to improve the general performance of your network. Keep in mind, however, that because of the heterogeneous and often baffling array of network configurations and user demands that has cropped up over the years, these options (or certain combinations) will yield markedly different results for each specific network in which they are implemented. There is no single fix that can account for every hardware, software, and (mis)configuration variable, so it's up to you to do a little brainstorming with your team before you proceed with anything drastic that may significantly impact your corporate networking environment.
Identifying all of the problems affecting your network at any given time is anything but a simple task. To be completely honest, it can occasionally seem easy, but you should not necessarily stop once a cursory troubleshooting episode turns up one or more obvious problems. Remember that networks--both literally and figuratively--are constructed from layer upon layer of heterogeneous hardware and software combinations, and the problems most readily noticed, while important, may be camouflaging more serious issues buried deeper within your network.
From time to time, the problems you encounter may indeed be straightforward and resolved with relative ease. If you have a simple problem such as a lack of available ports to which to connect your users' PCs, and all other aspects of the network seem to be performing at acceptable levels, you may be able to get away with nothing more than running some additional cable and perhaps the installation/upgrade of a hub or two. Unfortunately, most network problems are far more insidious, requiring a significant amount of investigation and analysis before any serious plans-of-attack can be drafted or reinforcements deployed.
The next step in "Magical Network Expansion Land" requires you to identify all the aspects of an ideal network that would make you, your users, and management happy, hopefully all at the same time. While this can be a nerve-wracking, hair-pulling experience, the ease of administration and general level of user satisfaction that can be achieved from a "golden" network is often well worth the effort.
Does this include the installation of new (or additional) cabling in order to increase internal bandwidth, add available network drops, and decrease latency? Perhaps your company has a few HAV (high asset value) or mission-critical resources to which your network must provide access, such as color laser printers, RAID arrays, optical drives, or news feeds. In some cases, security products such as firewalls can prevent users from accessing certain types of data, especially Internet-related resources such as streaming audio and video, America Online, and UseNet newsgroups. Is a proxy server, then, the right solution for your troubles?
Once you get a handle on all the macro-level services and characteristics of your "golden" network configuration, you'll need to get approval (and support) from your superiors in order to transform your whims--um, carefully considered recommendations--into technical reality.
It is to be expected that you are at this very instant daydreaming about the latest and greatest in AlphaServers, SparcStations, fiber optics, and the like. As nice as it would be to have such high-tech toys complementing and supporting your network, chances are that your boss will never fork over that much cash, especially for products and services that are not necessarily critical to the daily operations of your company.
In this business, the only way you get anywhere near the amount of money you need is to architect a plan that succinctly addresses the needs (read: wants) of your company as perceived by your user community and goes slightly beyond--say, 10 to 15 percent of the total cost--for additional capacity.
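The 10-to-15-percent headroom rule above is simple arithmetic, but it's worth writing down once so nobody fudges it in a meeting. A throwaway sketch (the function name and default figure are our own, not anything from a budgeting standard):

```python
def padded_budget(base_cost, headroom=0.15):
    """Proposal figure: the cost that covers perceived needs, plus a
    fractional cushion of additional capacity (10-15% is typical)."""
    if not 0 <= headroom <= 1:
        raise ValueError("headroom should be a fraction, e.g. 0.10-0.15")
    return base_cost * (1 + headroom)
```

For a $200,000 plan with a 10 percent cushion, `padded_budget(200000, 0.10)` yields $220,000 to put in front of management.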
At this stage of the game, you've (hopefully) channeled much time, effort, and energy into the framing of a suitable plan for improving and/or expanding your network. Theoretically, at least, you've identified the major problems that you want to resolve, have drawn up specifications for your "golden" network configuration, and have received tacit approval and a blank purchase order from your boss. You're almost ready to begin implementing the many great upgrades that are sure to satisfy everyone involved. There's only one problem: Each user still needs to be able to access the network during the transition.
This is another one of the logistical nightmares that can sometimes catch network engineers and systems administrators unaware. It's a catch-22 of sorts: The essential services that allow the company to conduct business are in need of a little TLC, but these same services cannot be taken offline because they happen to be, well, essential services.
Certainly there are some companies that are less affected by this than others. Take, for example, a site like ABC's new flagship online presence, abc.com. If a particular piece of their internal network is taken down to accommodate the installation of new hardware or software, chances are that a good portion of their critical business resources--publishing systems, photograph databases--will go down as well, bringing production to a halt.
Because this is generally a bad thing, Internet administrators have to make sure that these upgrades are as transparent as possible, occurring at odd hours or on isolated segments of the network that can more easily or frequently be powered down. Of course, even in non-minute-by-minute organizations such as law firms, consulting houses, and universities, the reliability of the network, while not critical, is still quite important. Companies simply do not spend the amount of money necessary to install or upgrade advanced networks, only to find that the network is down for maintenance more than it is available for use.
Let me illustrate my ramblings with an example from my dark past. Back during the heyday of PoliticsNow, we suffered day after day with a quirky and altogether troublesome publishing system. This software failed countless times each day, but to our dismay it was the relative best of only a few products with the scalability and functionality that we required to produce the site. As our vendor's programmers--conveniently located in the Netherlands, Lake Titicaca, or some such place--churned out updates to both the server and client software, the tech team was constantly under the gun to evaluate and then install each supposed fix. Unfortunately, upgrading certain parts of the system required that the server-side database and publishing systems be disabled, or necessitated a seemingly unending series of complete reinstalls and subsequent reconfigurations of user workstations. The nature of the news business--especially the breaking kind--means that for every second you keep a user offline, you seriously and negatively impact the capacity of the company to do its work.
While it's not the most convenient of things for the technology staff, it is important to remember that the users come first. A partial or complete network outage can be scheduled ahead of time (with the appropriate approval from management--important!) and provides an excellent opportunity for small, easily implemented upgrades with minimal fuss.
A second, more risky option is to take the entire network down late at night, generally when network utilization is at a minimum. While this is often an excellent strategy for accomplishing larger-scale changes to your network (because you'll have more time and no immediate pressure to bring the system back online), it does have some rather unpleasant possible side effects.
While I don't want to harp on America Online, their troubles with such an issue illustrate my point quite nicely. At regular intervals, the technology people at America Online conduct what is known as a bounce, which is basically a complete shutdown of the system that occurs around 4 a.m. Eastern time. The bounce is a quick way to get everyone offline, thus opening a window for modem installations, software upgrades, or any of a number of possible alterations needed to keep a network of AOL's size running at peak efficiency.
Unfortunately, when AOL's systems went offline, so too did a system that monitors certain interactions with ANS, an AOL subsidiary that provides the majority of AOL's connectivity. Once the physical upgrades were completed and the servers restarted--all well within the standard span of a bounce--all hell broke loose because no one could access the service, thanks to errors on the AOL-to-ANS connection that would normally have been isolated and corrected by that non-running process.
While this wasn't directly caused by hands-on problems encountered by the AOL staff during the upgrade itself, it took the staff some 19 hours--an eternity during such a crisis--to get things fixed. I don't know if heads rolled in Reston because of this little problem, but rest assured that there are many bosses who'd go postal if their companies were damaged on a similar scale.
So what's the lesson here? Don't take anything for granted! While you may know almost everything there is to know about your network and its services, sometimes familiarity can breed carelessness. Be prepared for the unexpected, and don't be overly ambitious. Break large upgrades into easily achievable phases that can be implemented over the span of a few days rather than in marathon 38-hour weekend shifts. Know how your actions will affect not only your user pool but the rest of the network as well. And most importantly, know how (and be prepared) to undo your changes in case you reboot and find that your system has mysteriously become crippled or is trying to use Microsoft Bob for the shell.
Ask almost anyone who uses an office network about 10base-T, and you're sure to get a number of very interesting and humorous responses. As with many technologies, some terms and phrases become fashionable, and you'll hear 10base-T bandied about quite often, especially in meetings where the participants want to be part of the "in" crowd. Unfortunately, knowing that 10base-T exists doesn't confer any other special knowledge on these people, who can become dangerous if not quickly cut off. But what is 10base-T, you ask?
10base-T is one of a number of designations used to differentiate between the main types of networks defined by the 802.3 specification issued by the IEEE (Institute of Electrical and Electronics Engineers). The others are referred to as 10base-5, 10base-2, and, in the 100-megabit genre, 100base-T4, 100base-TX, and 100base-FX, with each type providing varying levels of performance at different price points. In a moment, we'll take a closer look at each of the IEEE-specified network types, but first let's talk about which network is right for you.
Unlike some of the other decisions that you'll have to make when building out your corporate network, you'll have some guidance when it comes to picking the network scheme. This is because each of the specifications offers distinct functionality, giving each type its own niche. If your plans call for extremely high-speed networking, then 100base-T is your choice. If you only need to connect 10 PCs in a small office, you can choose from any of the options. As we've discussed, once you draw up your network requirements, the path to the proper scheme will pop into view.
According to the infinite wisdom of the IEEE, 10base-5 networks rely on a traditional 50-ohm thicknet (coaxial) cable to connect network devices. You can have a maximum distance of approximately 500 meters for the network bus, which must be terminated at either end with resistors similar in impedance to the cable itself.
Each device that is connected to the thicknet is attached with a transceiver, which in turn is itself connected to the backbone via a special tap that penetrates the coaxial shielding and connects to the internal wire.
10base-2 networks are commonly referred to as thinnet networks because--strangely enough--they use a thinner, double-shielded variation of the traditional thicknet coaxial cable. Because the thinner cable attenuates the signal more quickly, the maximum run length is significantly reduced from the 10base-5 specification, down to approximately 185 meters. While the IEEE specification does not support this, certain vendors have been known to release products capable of supporting much greater lengths, even up to 300 meters or so.
You'll connect all of your network devices via a combination of coaxial BNC connectors and host adapters, which makes setting up the network a bit easier than with 10base-5. However, you'll only be able to connect up to 30 separate devices per segment, due to the limitations of the cable itself.
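Before ordering parts for a thinnet segment, it's worth sanity-checking the plan against the limits just described. A quick sketch in Python (the constants are the figures quoted in this chapter; the function name is our own):

```python
# 10base-2 (thinnet) limits quoted in the text: roughly 185 m per
# segment, 30 attached devices, terminators at both ends of the bus.
THINNET_MAX_RUN_M = 185
THINNET_MAX_DEVICES = 30

def check_thinnet_segment(run_length_m, device_count, terminated_both_ends=True):
    """Return a list of problems with a proposed segment (empty = OK)."""
    problems = []
    if run_length_m > THINNET_MAX_RUN_M:
        problems.append(f"run of {run_length_m} m exceeds {THINNET_MAX_RUN_M} m limit")
    if device_count > THINNET_MAX_DEVICES:
        problems.append(f"{device_count} devices exceeds {THINNET_MAX_DEVICES}-device limit")
    if not terminated_both_ends:
        problems.append("bus must be terminated with matching resistors at both ends")
    return problems
```

A 150-meter run serving a dozen machines passes cleanly; a 200-meter, 35-device, unterminated plan trips all three checks.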
10base-T is by far the most widely installed network type, thanks to ease-of-use, reduced maintenance and troubleshooting time, and other important benefits that have been introduced as answers to the complaints and pleas of network administrators and engineers.
The performance characteristics of 10base-T networks are many and varied. First and foremost, 10base-T networks rely on twisted-pair wiring to link network devices. Twisted pair is extraordinarily cheap, comes in a variety of pleasing colors and patterns, and is small and easy to work with--which is quite handy when running cable in confined spaces and around tight corners. Generally you'll terminate the cable with 8-wire RJ-45 jacks, although you have some flexibility here if there is a pressing need or desire to be different.
What's more, while 10base-T networks allow for cable runs of only 100 meters between host and hub, you'll benefit in the end because you can connect a virtually unlimited number of devices to this type of network.
As the need for faster network communications becomes more and more common, you may find yourself looking to technologies that offer significant performance over 10base-T. Sure, 10Mbps switching is a great building block for better network performance, but chances are you will find that some of your nodes or possibly an entire subnet (or two) will need more speed. If those systems are based around a PCI bus and have Pentium, Pentium Pro, or Pentium II processors, you are a prime candidate for faster 100Mbps Fast Ethernet networking.
There are currently three primary specifications for Fast Ethernet: 100base-T4, 100base-TX, and 100base-FX.
NOTE: Fast Ethernet is not the only 100Mbps topology available. Hewlett-Packard and a variety of other network hardware manufacturers offer a line of products that conforms to the 100VG AnyLAN specification. One of the major advantages of the 100VG technology (VG stands for Voice Grade) is a feature called Demand Priority. 100VG sends packets over four pairs of Unshielded Twisted-Pair cabling, and Demand Priority offers a way for certain network applications to receive priority on the network.
If you need Fast Ethernet performance on a limited basis and do not want to upgrade your network cabling, or if you simply want to test Fast Ethernet, 100base-T4 might be for you. It can use the same type of LAN cable that you have in your current 10base-T network, whether Category 3, 4, or 5. You will have to upgrade the NICs in your systems and some of your hubs (depending on your specific network architecture), but you will not have to re-cable your entire network as you would with other standards. Why would you ever want a standard that requires costly re-cabling, then? Well, 100base-T4 attains its speed by using all four pairs of wire in the cable (hence the "4" in the name) at each node. Because LAN cables contain only four pairs, every pair must be intact and properly terminated from end to end--which is not always the case, especially in older installations.
Similar in performance to 100base-T4 and in specification to 10base-T, 100base-TX LANs demand high-quality Category 5 cable throughout your infrastructure (though they use only two of the cable's four pairs). In many cases, this requirement will force a large investment to upgrade your existing cable plant. Beyond that, there is no significant advantage to 100base-TX over 100base-T4 except for the reduced headaches of having a significantly more reliable infrastructure to push data through.
100base-FX offers the same network speed with enhanced performance characteristics, at the cost of an even larger investment in your cabling infrastructure. 100base-FX relies on fiber optic cable, which does a few things for you. It virtually eliminates the electromagnetic interference that can degrade network performance. Additionally, 100base-FX extends the maximum cable run to over a mile, a significant increase over 10base-T and the other Fast Ethernet standards, which have much shorter distance limitations.
As with 100base-TX, the fiber optic requirement will place 100base-FX LANs out of the reach of most mere mortals. It is not uncommon for upgrades to fiber optic infrastructure to cost more than two or three times as much as an upgrade to a different category of network cabling.
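The media types covered so far boil down to three numbers: speed, cable, and maximum run. A sketch that collects the figures quoted in this chapter and filters them against your requirements might look like the following (the 100base-FX distance is an approximation of the "over a mile" figure above; verify every number against current vendor specifications before designing around it):

```python
# 802.3 media types from this chapter, reduced to the parameters that
# usually drive the choice. Figures are the ones quoted in the text.
MEDIA = {
    "10base-5":   {"mbps": 10,  "cable": "thicknet coax",          "max_run_m": 500},
    "10base-2":   {"mbps": 10,  "cable": "thinnet coax",           "max_run_m": 185},
    "10base-T":   {"mbps": 10,  "cable": "Cat 3/4/5 UTP",          "max_run_m": 100},
    "100base-T4": {"mbps": 100, "cable": "Cat 3/4/5 UTP, 4 pairs", "max_run_m": 100},
    "100base-TX": {"mbps": 100, "cable": "Cat 5 UTP",              "max_run_m": 100},
    "100base-FX": {"mbps": 100, "cable": "fiber optic",            "max_run_m": 1600},
}

def candidates(min_mbps, longest_run_m):
    """Media types that meet a minimum speed and the longest planned drop."""
    return sorted(name for name, spec in MEDIA.items()
                  if spec["mbps"] >= min_mbps and spec["max_run_m"] >= longest_run_m)
```

For example, needing 100Mbps with drops under 90 meters leaves the three Fast Ethernet flavors in play; needing any speed over a 400-meter run leaves only thicknet and fiber.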
As with just about everything else that we've covered in this book, the different network topologies--how your physical cabling and hardware are distributed throughout your offices to support your network--each have fervent and protective camps of supporters who will do their best to convince you why Token Ring is better than star, or that everything should be wireless. Because cable plays such an important role in the design of both LANs and wide area networks (WANs), there will be endless numbers of people who'll want to throw in their two cents' worth. Watch out for them. Seriously.
While you may not be able to separate the truth from the bottomless pit of marketing hype (at first, anyway), you should be able to get a clue as to how valid any advice is by evaluating the person who is advocating one particular technology over another. If the person with whom you're talking is a network engineer, you're probably getting very good technical advice that may or may not be the solution to your particular network needs. If, on the other hand, you've been cornered by a network supply vendor--and are without either taser or cattle prod--keep in mind that you're probably getting valid but highly biased information. After all, while the vendor probably has a good understanding of his product line, remember that his first duty is to move products out of the showroom and into your operations center, so be careful before embracing the solutions offered by overly interested parties.
What this all boils down to is that no one can give you a complete solution to building a network, nor should you accept one even if a reasonable one is proffered. After all, the whole reason that you're reading this book is that you've been given the task of upgrading or installing a new network. No one is as familiar with your company's needs, financial resources, and eccentricities as you happen to be, so it is up to you to marry that knowledge to an understanding of the available technology and make your bosses proud! What we're doing is providing the most important background information that you'll need to build an impressive network, essentially empowering you to combine technologies until your requirements are satisfied. Ready for more?
There are three central types of topologies that you're likely to read about: linear, ring, and star, though only the latter two are widely used these days. Each topology varies in the hardware and (sometimes) software required, the services supported, and the maximum performance levels that can be achieved, assuming an optimally configured environment.
A linear topology, sometimes referred to as a bus topology, is a network in which all networked devices tap directly into one central run of cable, most often called the backbone. In this situation, every device is attached to the backbone by a transceiver that facilitates both inbound and outbound communication with the greater network (see Figure 23.1). Additionally, in order to decrease the possibility of signal loss or mangled data transfers, linear network backbones must be terminated at both ends using pairs of resistors that are equivalent to the impedance of the particular type of backbone cabling.
FIGURE 23.1. An example of bus topology.
If you happen to be familiar with the early-1980s Intellivision game Snafu, you're already familiar with the physical layout of a ring network topology. A ring network is a variation on the bus design, with the end of the cable run attaching itself to the beginning of the backbone in order to form a complete, unbroken ring.
Keep in mind, however, that Figure 23.2 is only a conceptual representation of a ring network; rings aren't really constructed in a circle. The function of a ring is achieved through the use of MAUs (Multistation Access Units), hardware to which other devices are connected in order to gain access to the network itself. Within each MAU is a cable ring that allows data to pass by every node in the network during transmission, thus achieving the ring concept.
FIGURE 23.2. An example of a ring topology.
Because both ring and bus topologies are unwieldy to install, maintain, and troubleshoot, it wasn't long before the star topology (see Figure 23.3) found its way into the mainstream network computing world. If you recall how you (probably) used to draw pictures of the sun in grade school, you'll begin to get an idea of the theory behind star networks. Imagine the heart of the sun as your hub, with the rays representing actual lengths of network cable that run out (depending on the type of cable used) and connect to individual PCs, Macs, network printers, and so forth.
The reasons for the star topology's popularity are twofold. First, it is significantly easier and less expensive to install and maintain, especially because it can mimic the physical distribution of other office wiring (telephony, intercom). Second, it has greater expansion potential than the other topologies: you cannot run out of physical space on the backbone as with linear networks, nor do you have to worry about breaking into an already huge ring in order to accommodate a new device.
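One trade-off worth making concrete: a star actually consumes more raw cable than a bus, since every station gets its own home run to the hub instead of sharing one backbone. A rough sketch for comparing the two from a floor plan (the functions and all distances are illustrative, not from any standard):

```python
# Rough cable-length comparison for the same office. A bus snakes one
# backbone past every station; a star home-runs each drop to the hub.
def bus_cable_m(station_spacing_m, stations):
    """One backbone run past every station, spaced evenly."""
    return station_spacing_m * (stations - 1)

def star_cable_m(drop_lengths_m):
    """One home run from the hub to each station."""
    return sum(drop_lengths_m)
```

Eleven stations 10 meters apart need a 100-meter bus, while three star drops of 20, 35, and 50 meters already total 105 meters of UTP. The star still wins in practice: UTP is cheap, and each drop fails independently instead of taking down the whole backbone.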
FIGURE 23.3. An example of a star topology.
After you've selected the overall topology that will best serve the needs of your budding network, the next step is to decide what type of cable will be most appropriate as the foundation for your network. You've got two main choices--fiber or copper--and your pick will determine how well your network will eventually handle the many services and other traffic that your organization requires.
Choosing between fiber and copper cabling is not the hardest thing in the world. In fact, you'll have much more trouble actually getting the cable installed than you will picking between the two. Chances are that the decision will be made for you, simply because fiber is prohibitively expensive and offers performance characteristics that will most often exceed all but the most demanding of network performance requirements.
There are three main criteria against which you should evaluate your need (or ability) to use copper or fiber cable when building your network: overall cost, the performance you require (both in run length and in the number of attached devices), and your environment's exposure to electromagnetic interference.
You'll have to do some soul-searching to determine which characteristics are the most important to your project and then plan accordingly.
If you can convince your company to go the extra mile and invest in a fiber-based network, you're going to be out a ton of cash in the short term. However, fiber soon makes up for this with higher performance specifications, including support for greater numbers of attached devices as well as much greater flexibility in the overall length of cable runs. You'll also reap an additional--and extremely enticing--benefit from fiber: Because fiber optics are just that, optical, relying on light rather than electricity to transmit data, fiber networks are not susceptible to ambient EM (electromagnetic) energy radiating from other network devices, power lines, or transmissions in the UHF (ultrahigh frequency), VHF (very high frequency), or other bands.
NOTE: It is important to understand that fiber and copper are not mutually exclusive terms. An excellent strategy for a dynamic and flexible network includes a fiber backbone that is connected to each individual device via less-expensive copper wire. This will give you a high-capacity backbone, yet still allow you to easily and affordably attach an almost endless number of network devices. Remember that the majority of a fiber network's price tag comes from attaching individual devices to the cable, not from the cable itself.
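The hybrid strategy in the note lends itself to a back-of-the-envelope cost model: fiber between wiring closets, copper from closet to desktop. Every unit price below is a made-up placeholder--substitute real vendor quotes before showing this to anyone who controls a budget:

```python
# Hypothetical cost model for a fiber backbone with copper (UTP) drops.
# All prices are placeholders, not quotes from any real vendor.
def hybrid_cost(backbone_m, drops, avg_drop_m,
                fiber_per_m=4.0, fiber_attach=300.0,   # fiber cable + per-closet attach
                utp_per_m=0.25, utp_attach=40.0,       # UTP cable + per-desktop NIC/jack
                closets=2):
    """Total cost: fiber backbone plus per-desktop copper horizontal runs."""
    backbone = backbone_m * fiber_per_m + closets * fiber_attach
    horizontal = drops * (avg_drop_m * utp_per_m + utp_attach)
    return backbone + horizontal
```

Note how the model reflects the point in the note above: the fiber attach cost dominates, so keeping fiber terminations down to a couple of closets (rather than one per desktop) is where the savings come from.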
With the debut of the earliest Ethernet LANs back in the 1980s, the standard types of cable available at the time were anything but convenient. Unfortunately, the implementation of coaxial-based Ethernet didn't do much to help the situation, because coax is both quite expensive and difficult to work with due to rigidity and overall bulk, requiring obscene amounts of wall or duct space to install more than a few runs.
However, despite its limitations, coaxial cabling does have some interesting benefits, including a transmission capacity of 10Mbps over a maximum run length of nearly 500 meters! While you'll still find networks in use today that rely on coaxial cable for their inter-device connectivity, their numbers are steadily decreasing, thanks to newer, more network-friendly technologies released in recent years.
Twisted-pair wiring is everywhere. You'd be hard-pressed to find a network environment in existence today--at least in professional settings--that doesn't have some form of twisted pair supporting at least a portion of the network. Twisted-pair wiring consists--coincidentally--of two insulated conducting wires twisted around each other in a helix. These strands are most often encased in a thin plastic sheath and are referred to as UTP (unshielded twisted-pair) because the plastic does not protect against ambient radiation or other signals that may interfere with data transmission across your network. The second form of twisted-pair wiring, STP (shielded twisted-pair), is shielded much in the same way as standard coaxial cable.
In practical situations, twisted pair is an excellent choice for most networked environments, due primarily to its low cost, lack of rigidity, and excellent transmission quality.
The next area to examine in your quest to upgrade your existing network is your client and server hardware. The computer systems connected to your network are often overlooked as a component of network performance. Certainly, applications will run at reduced speed and efficiency on slow hardware, and the same holds true for a slow machine's ability to push data through your network. In other words, your collective group of clients and servers, if poorly implemented, can drastically reduce network performance. Unfortunately, the inverse does not hold: ultra-fast systems with high-speed network interfaces using homogenized client access protocols across your enterprise LAN will not greatly improve performance--at least not above what is expected from a network functioning within normal operating parameters.
There are many things to consider when optimizing your networked computer systems. Like most of what you have already read about topology, the focus of your enhancement strategy will be on hardware. We have already talked about how you can improve your network topology. Most of that discussion focused on upgrading your cabling, re-segmenting your LAN, and adding faster hubs, routers, and switches. This section will be no different; hardware is what you want to examine first when trying to "speed up" your systems. Faster processors, more memory and, of course, faster LAN adapters--as well as how those systems are built and configured--are the most crucial components when discussing networking performance of your hardware.
As usual, software is a close second in the high-performance networking game. You do not want to run more protocols on your network than are absolutely required to satisfy the needs of your users. Think about who needs access to what, and how to minimize the protocols and services required to get there, just as you focused on minimizing routes in the previous section.
Finally, when upgrading your network, you have a third consideration: the users. Many, many organizations do not place a high enough value on what a user must do to function on the network. Upgrades to your overall network operating system can greatly affect a user's ability simply to log on to the network, and that could wreak havoc come Monday morning when the same staff that has trouble using America Online shows up and does not recognize the new login procedure.
The first step in your hardware upgrade plan is to examine your collection of existing hardware. Do not look at your computer systems in a macroscopic sense; look at the specific hardware that makes up each of your systems. Inventory all of the internal components, such as individual SIMMs, processors, disk drives, video adapters, sound cards, network interfaces, backup devices, input devices, monitors--everything! Do not leave anything out.
Think about your systems not as a 486 DX4 100MHz system with 16MB of RAM, but as a mid-size tower case with an Intel 486 DX4 100MHz processor and 16MB of RAM made up of 2x 4MB SIMMs and 4x 2MB SIMMs. Next, build a second list of the systems that you need to assemble to fulfill the needs of your employees. (We will discuss how to do that in the next section.) Then redistribute your hardware as appropriately and as evenly as you can across your network, supplementing to reach a common level of hardware but not wasting money on something that will be useless in six months when your boss finally kicks in the cash needed to purchase brand-new systems.
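One way to keep the component-level inventory described above is as structured records rather than one-line summaries, so that SIMMs and other parts can be tallied and pooled for redistribution. A minimal sketch--the machine names and parts are purely illustrative:

```python
# A component-level inventory: each system is a record of parts,
# not a one-line summary. All names and parts here are hypothetical.
inventory = {
    "accounting-01": {
        "case": "mid-size tower",
        "cpu": "Intel 486 DX4 100MHz",
        "simms": [4, 4, 2, 2, 2, 2],   # MB per SIMM: 2x 4MB + 4x 2MB = 16MB
        "nic": "ISA 10base-T",
    },
    "reception-01": {
        "case": "desktop",
        "cpu": "Intel 486 DX 33MHz",
        "simms": [1, 1, 1, 1],          # 4x 1MB = 4MB
        "nic": "ISA 10base-2",
    },
}

# Tally RAM per machine, then list the pool of SIMMs you could redistribute.
for name, parts in sorted(inventory.items()):
    print(name, sum(parts["simms"]), "MB RAM")

all_simms = sorted(m for parts in inventory.values() for m in parts["simms"])
print("SIMM pool (MB):", all_simms)
```

From a pool like this it becomes obvious which machines can be brought up to a common memory level without buying anything new.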
As you are doing this, you must always be thinking about the future. You do not want to spend money upgrading 68030-based Macintosh systems to 68040; you want to go straight to the PowerPC, if you can. You do not want to spend the money to upgrade the processor in your 486 DX 33MHz systems to 486 DX4 100MHz systems, if you will be able to buy Pentium 150MHz systems in six months. Of course, it is nearly impossible to predict what types of hardware will be available in six months to a year; your investment in Category 5 Ethernet cabling and interface adapters will be useless once the cost of fiber drops to $0.02 a meter. When will that be? Who knows?!
Planning, planning, and planning! Oh, and money. As important as planning is to upgrading your network infrastructure, it is equally important to upgrading your client and server hardware. Careful analysis of the tasks that need to be performed in your computing environment, and your ability to match hardware, software, and services to those tasks, are of the utmost importance.
Remember to match your computer system upgrade strategy to the overall plan you have already established for your network. Consider all of your network users. What do they need to do? What services do you need to provide for them? Who needs the fastest machines? If you run a call center, chances are you will not need to upgrade your operators' systems much past a Pentium 100MHz. That should be plenty of horsepower to run your custom Delphi or Visual Basic applications. Putting them on the path to a 200MHz Pentium Pro might be fun, but it is probably overkill. If you have software developers running on 486 DX2 80MHz systems or artists using Power Macintosh 7100s, you will probably want to get those folks onto Pentium Pro or Silicon Graphics UNIX workstations ASAP! If you have a 486 DX 66MHz server running Linux that 25 to 50 users access for Internet mail, news, and intranet Web services, adding more drive space, RAM, and faster network adapters for each segment of your LAN will probably be sufficient.
Stop upgrading systems that don't need to be upgraded! Let the administrative assistants, managers, and vice presidents have the older but adequate systems. Get your development staff, financial analysts, and graphic artists some machines with real power; they are probably running the most time-intensive applications in your organization. If your business consultants get paid as much as mine, you don't want them to have to wait 60 seconds each time they hit your SQL server for data, do you? You want these people working--not waiting--and a new $3,000 piece of hardware might save you $10,000 over time, especially if your employees have to wait for machines to be fixed or to finish recalculating a spreadsheet.
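The $3,000-versus-$10,000 arithmetic above is easy to put on paper. The figures in the sketch below--the hourly rate, the number of stalls per day, and the delay per stall--are assumptions you would replace with your own numbers:

```python
# Back-of-the-envelope cost of making an expensive consultant wait on a
# slow SQL server. Every figure here is an illustrative assumption.
hourly_rate = 125.0          # what the consultant costs per hour
waits_per_day = 40           # queries that stall during a working day
seconds_per_wait = 60        # delay per query on the old hardware
work_days_per_year = 250

wasted_hours = waits_per_day * seconds_per_wait / 3600.0 * work_days_per_year
wasted_dollars = wasted_hours * hourly_rate

print("Hours lost per year: %.0f" % wasted_hours)
print("Cost per year: $%.0f" % wasted_dollars)
```

With these (hypothetical) numbers the waiting alone costs well over the price of a new machine in a single year--which is exactly the comparison you want in hand when you ask management for hardware money.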
If you are anything like most organizations, you probably are not starting from scratch. You might have a room full of shiny new Pentium or PowerPC-based computers, or you might have a bunch of dusty old 486/33MHz, 486/50MHz, or 68040-based computers. In either situation, you need to be thinking about the future. Of course, client software configuration will be of more interest to the owners of new systems in their endeavor to improve their existing network. Nevertheless, both groups will need to think about where they want to be in six months to a year with their hardware, even though it is nearly impossible to predict what will be available so far in advance.
No! Are all of your users doing real-time 3D modeling? Are they all compiling multi-megabyte applications? Are they all doing massive financial calculations? Chances are, they aren't. If anything, your users are probably playing a networked game of Quake and wishing that when they killed Mike, he died in a timely fashion.
TIP: Quake runs best on most LANs when using the IPX protocol, which is also the protocol of choice on LANs that primarily access Novell servers. Running a network game over TCP/IP, however, will make games more difficult to detect--especially if you use a port that is different from the default!
For the most part, your hardware decisions should be based entirely on what a particular member of your staff does for your organization. Most administrative assistant-level employees should be content with systems that can run your word processor, spreadsheet, scheduling, and contact management software. In most cases, 486 DX2 80MHz systems with sufficient memory will do the trick--even if your network clients run Windows 95. Of course, a Pentium 100 will be even better, but a high-end 486 will usually get you by, and it might be more cost-effective to upgrade your existing supply of 486 DX 33, 50, and 66MHz systems to DX2 80 or DX4 100MHz systems than to buy all new Pentium systems--especially if you do not think you will be able to upgrade much more in the next 12 to 18 months. You will have to do extensive testing. Which office suite you use, which workgroup scheduler, which additional applications--all of it affects what your clients will need. A free upgrade to the latest version of your office suite--though seemingly a bargain--might end up costing you thousands of dollars in additional CPU processing power needed to run it, especially if the next version runs as a Java applet!
Table 23.1 shows recommended general guidelines for equipment distribution. This might not be where your organization is today; however, it might be where you want to think about being a year from now--an admirable and possibly daunting goal, but one that might be worth sacrificing for.
| Job Title | Optimal System |
|---|---|
| Administrative Assistant | Standard Macintosh or standard PC |
| Sales Force, Marketers, and the like | Standard Macintosh or standard PC, optional portable system |
| Developer, Artist, and the like | High-end Macintosh or high-end PC |
| Analyst | High-end Macintosh or high-end PC |
| Management, Executives | Standard Macintosh or standard PC, optionally only a portable system |
| Chairman of the Board | Whatever the heck they want, if you want to keep your job! |
When delegating hardware to clients, it is important to keep in mind how these system classes are defined:
| System Class | Configuration |
|---|---|
| Standard Macintosh | 33 MHz 68040 processor, 16 MB of RAM, 500 MB hard disk, 15" monitor |
| Standard PC | 66 MHz 486 processor, 16 MB of RAM, 500 MB hard disk, 15" monitor |
| High-End Macintosh | 60 MHz PowerPC processor, 24 MB of RAM, 700 MB hard disk, 17" monitor |
| High-End PC | 100 MHz 486 processor, 32 MB of RAM, 1.0 GB hard disk, 17" monitor |
| Standard Notebook | 33 MHz 486 processor, 16 MB of RAM, 500 MB hard disk, 9" display |
| High-End Notebook | 50 MHz 486 processor, 16 MB of RAM, 500 MB hard disk, 10" display |
Also, do not forget that you will need to supply your on-call staff with systems at home. In most cases, these can be any old things that you happen to have lying around. Depending on the employee, it might be more fun to give him a pile of parts that could be assembled into a computer and say, "Here. Put it together." These systems do not need to be the best--they will already be hindered by their slower than normal connection to your network, but they should be sufficient. Remember, the employee's inability to repair servers remotely might be directly tied to what hardware he has at home. An old 386 with a 14.4Kbps modem might be fine for a UNIX administrator dialing in to a UNIX host, but not for a Windows NT administrator expecting to make a Windows NT 4.0 dial-up networking connection.
Well, that depends on whom you ask. Most competent Information Services professionals would say, "No! Of course not! What kind of ridiculous question is that?" However, many internal computing groups within large organizations feel that a flexible, powerful, and easy-to-use personal computer might put them out of a job. Of course, there is no right answer, so it is probably wise to allow users (when possible) to make their own platform decisions. After all, they are the people who have to perform the real work on your LAN, and you want them to bother you as infrequently as possible--make it easy for them!
Most popular network operating systems can accommodate Macintosh clients with little or no modification. Windows NT, for example, has an excellent fileserver service for Macintosh. It can make available the same shared volumes that your Intel-based PC clients connect to using native Macintosh AppleShare networking. Microsoft also provides an extension to AppleShare that will allow authentication using Microsoft's secure protocols.
Although AppleTalk, Apple's network protocol, can be a bit "chatty" on your network, the increased activity makes for extremely easy user configuration and operation. No Ethernet or IP address numbers to remember, no frame types to choose, no bindings to configure. Simply plug in your network cable, name your workstation, and away you go! Most AppleTalk networks can function without routers of any kind. Simply plug your 10base-T or 100base-T Ethernet adapters into capable hubs, and you will be ready to go!
Software incompatibility is not really an issue any more. Most standard word processor, spreadsheet, and database file formats can be read by similar or competing Macintosh and Windows applications. Other file formats like images, digitized audio, and video can also be used among the different platforms. The only problem you might encounter is a Windows PC's inability to read a floppy disk formatted for a Macintosh, which is not really a problem if you are sharing common network space or if you use a PC-formatted floppy disk. Most modern Macintosh computers can effortlessly read PC floppy disks.
Getting back to improving existing hardware for your network, it is important to remember that an investment in a Macintosh is, in most cases, wiser than an investment in a comparable PC. That is because most Macintosh computers can be upgraded with far less effort and expense than a Windows-based PC. For example, you may currently have some non-PowerPC-based Macintosh systems that use Motorola 68040 processors. If the upgrade path you want to take your organization on requires that all of your Macintosh users have PowerPC systems, you may not need to buy brand-new systems. PowerPC upgrades are available for many Macintosh models, either directly from Apple or from third-party manufacturers such as DayStar Digital and Newer Technology.
The other major upgrade you might want to invest in, depending on the type of Macintosh you already own and type of network you are building, is the type of network adapter that is in your Macintosh. Similar to Windows-based PCs, the Macintosh has networking options to fit every price range. From inexpensive peer-to-peer connections running at serial speeds to the uncommon 100base-T or fiber optic transceiver running at more than 100Mbps, the Macintosh can handle most any type of networking infrastructure.
The Macintosh is also an Internet or intranet-ready workstation. The newest upgrade to the MacOS provides full TCP/IP network support across your Internet or intranet. It supports BootP, DHCP, and RARP IP Address allocation, and can support PPP connections as well.
For most organizations, the decision to use UNIX-based clients on the LAN is probably one of necessity rather than desire. While applications like word processors and spreadsheets run well on Intel- or Motorola-based systems, certain development tools and graphics packages simply run better on UNIX systems. Most UNIX systems require specialized hardware, and the flavor of UNIX you are running will dictate where you have to go for hardware upgrades.
NOTE: Chapter 20, "UNIX/Linux," has more in-depth information about many of the different versions of UNIX available and where to get information about hardware and software.
If you are running Linux, NetBSD, or SCO-based UNIX systems, however, you can use much of the same hardware as you use in your Intel-based PCs. It is important to remember that the requirements for running UNIX on Intel-based PCs vary greatly from the requirements for clients running operating systems from Microsoft or IBM. Generally, you will find that the requirements to run UNIX are much more modest than the requirements to run Windows 95 or Windows NT. If UNIX systems are that important to your organization, you should include them in your general guidelines for equipment distribution; and if you have specialized UNIX hardware, get in touch with your sales representative about upgrade plans and stay close to that issue--it could be a very expensive one!
Your decision to run UNIX clients carries along another important factor: the presence of TCP/IP on your network. UNIX has very fast and efficient networking built right in to the operating system kernel, which is one of its many advantages. While most modern networks readily support the integration of TCP/IP among the clients, there may be some Internet- or intranet-phobic system administrators out there who do not want that protocol on the LAN. Alternatives are available in some cases, so check with your hardware and software vendors.
As technology advances and the price of hardware drops, the range of technology that is available to users increases at a breakneck pace. While it can be extraordinarily difficult to even begin to craft a long-term upgrade strategy for your network without worrying too much about advancements in current technology, jumping too early on brand-new technology such as Network Computers (NCs) from Sun Microsystems and Oracle or Pocket PCs from Casio or Philips might prove to be a very expensive experiment. The promise of new and exciting features from new and exciting products can prove too much for some Information Services professionals to resist. On a personal level, purchasing a U.S. Robotics PalmPilot might be a great investment, but the decision to purchase a PalmPilot for each member of your 1,000-employee sales force might prove devastating if the 6-month-old product fails and you can no longer receive support or software from the vendor.
Be careful in your decision to include new technology. Don't exclude new technologies out-of-hand, but be sure to take a good, hard look at any infant technology before jumping in feet-first.
Apple was one of the first to enter the PDA market with its Newton MessagePad. This small, hand-held device has a touch-sensitive screen on which users can write notes and enter data into address books, calendars, digital checkbooks, and games. The initial Newtons were slow, had poor handwriting recognition, and suffered many other shortcomings, but the ever-persistent Apple kept at it, improving the software and hardware. Apple's latest Newton, the MessagePad 2000, is laden with features, including a back-lit screen, wireless keyboard, and significantly improved handwriting recognition. The PDA market has also seen entries from U.S. Robotics and Sharp. In its brief life, the U.S. Robotics PalmPilot has seen tremendous initial sales, along with a stream of applications from many third-party software vendors.
Companies such as Casio and Philips have small, highly portable digital assistants as well. Commonly referred to as Pocket PCs, the Casio Cassiopeia and Philips Velo 1 are both among the first generation of portable systems to run Windows CE, a version of the Microsoft Windows operating system designed for low-powered hardware. Not as reliant on handwriting recognition technology as the PDAs from Apple and U.S. Robotics, the Pocket PC might be just what the doctor ordered to spur increased sales in the PDA market.
As far as network connectivity is concerned, most PDA-type devices do not plug directly in to the network. They plug in to a standard desktop or notebook PC that becomes a host for file synchronization and data transfer of contacts, appointments, and e-mail messages.
Oracle and Sun Microsystems have joined forces in an attempt to define the future of the modern network. Their diskless computer, which plugs in to a regular television and a high-speed network, was designed to make client administration simple and error-free. All applications and data live on a centralized server, making your infrastructure investment even more important! All access to network resources is controlled from a central location, making the Network Computer a logical choice for the client of the future.
Unfortunately, at this point in its development, Network Computer is too slow and too lean to be considered for any serious computing tasks. Very few third-party software developers are taking a serious interest in the Network Computer, and the availability of out-of-the-box software might be another weak area for it. Part of the speed issue concerning the Network Computer will undoubtedly be resolved once the speed of Java is improved.
Hot on the heels of Sun and Oracle is industry giant Microsoft. In his never-ending quest to dominate the market, Bill Gates has committed Microsoft to the development of the Zero Administration PC--in essence, a system running a Windows 95-like operating system that can support most or all current and future applications with the same ease of administration as the Network Computer. With the added functionality of self-diagnosing hardware that can notify a user or administrator of potential problems before they become too serious, the Zero Administration PC might just be the future of network computing.
From a networking perspective, a client can only be as useful as the services offered on the network. A client can only function on a network as fast as the services can handle the transactions required by the client. In this sense, the server plays an extremely important role in the performance of your network.
Depending on the types of services you require on your network, you may or may not need more than one server. Many types of services cannot coexist very effectively on the same machine. For example, if you are running Windows NT as your network operating system, you do not want to run filesharing and SQL database services on the same physical hardware. Those services have very specific and different optimal hardware configuration needs--needs that vary greatly based on traffic, the amount of data involved, and the distance the data has to travel. For the most part, server configuration is not something you want to skimp on. It is very different from client configuration; you probably do not want to play trial-and-error to find an optimal configuration for your servers. Rather, you want to consult with software and hardware vendors to get systems built to your exact needs, with the appropriate number of disk drives, RAID arrays, RAM, and processing power. That is your safest and wisest bet, especially when running such dramatically different services on your servers.
On the other hand, you may have no choice but to determine your own optimal configuration for servers that run operating systems and services that are freely available. UNIX operating systems such as Linux are freely available and offer general guidelines for what hardware configurations will match what service and usage specifications, although those guidelines will vary greatly. As with Intel-based UNIX clients, Intel-based UNIX servers will generally have more modest hardware configurations to run similar services, as compared to network operating systems like Novell NetWare and Windows NT.
In all cases of computer hardware attached to your LAN, you want the fastest possible connection, regardless of the platform, service, and protocol. We have already discussed how simply resegmenting your network may dramatically improve performance of data transfers. We have also discussed how upgrading your cabling can enhance performance. To make the best use of that investment, you need to make sure that the interfaces between the network and your hardware are as fast as they can possibly be, and that can only be done through research and--you guessed it--trial and error.
The research phase is simple. Look at your hardware. Look at what type of LAN you have now. Look at what type of LAN adapter you have now. Think about how you are upgrading your infrastructure and match your components. Head to the Web, which is always an invaluable resource when preparing for upgrades and build-outs. Look at LAN adapters from vendors like 3Com, SMC, Hewlett-Packard, Digital Equipment Corporation, and Novell, and match up your needs with their product lines and availability. Get adapters that can "talk" with the data bus on your PC as fast as the bus can spew data. Make sure that your adapters can interact with your network as fast as your cable and servers will respond. Go as fast as you can possibly go! You should also consider data regarding vendors' services and past performance. How fast have they put out drivers for past upgrades to your operating systems? How long does it take to get an adapter repaired? How long have they been in the network hardware business? What is their dedication to the product line?
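When you reach the trial-and-error phase, even a crude timed bulk transfer will expose gross differences between adapter and driver combinations. The sketch below pushes a fixed payload through a TCP socket and reports throughput; run against loopback (as written, so it is self-contained) it exercises only the protocol stack, so for a real adapter test you would run the receiving end on a peer host across the wire:

```python
import socket
import threading
import time

# Payload pushed across the connection; bump it up for longer, steadier runs.
PAYLOAD = b"x" * (4 * 1024 * 1024)

def sink(server_sock, result):
    """Accept one connection and count every byte received."""
    conn, _ = server_sock.accept()
    total = 0
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        total += len(chunk)
    conn.close()
    result.append(total)

# For a real adapter test, run the sink on a peer host; loopback is used
# here only to keep the sketch self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

received = []
t = threading.Thread(target=sink, args=(server, received))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
start = time.perf_counter()
client.sendall(PAYLOAD)
client.close()                          # signals end-of-stream to the sink
t.join()
elapsed = time.perf_counter() - start
server.close()

mbps = len(PAYLOAD) * 8 / elapsed / 1e6
print("Moved %d bytes in %.3fs: %.1f Mbps" % (len(PAYLOAD), elapsed, mbps))
```

Run the same transfer with each candidate adapter installed in the same host and the relative numbers--not the absolute ones--tell you which combination moves data fastest on your cable plant.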
If you haven't already plowed through Chapter 8, "Switches," and are experiencing a high degree of network congestion, it's a good idea to flip back to the earlier part of this book and give Chapter 8 a once-over before you move on. Ethernet switching, which is covered extensively in that chapter, can provide one of the most powerful and cost-effective solutions for reducing network problems such as lost connections, slow or corrupt data transmissions, and other connectivity issues by effectively managing the flow of data across the many subnets of your greater network.
Switching solutions address many of the problems typically cited by network engineers when trying to re-architect anything but the smallest of LANs: they are a relatively inexpensive technology (although high-end, high-price models are available); the benefits derived from switching are usually quite dramatic, including a reclamation of total network bandwidth; full-duplex (20Mbps aggregate) networking becomes a possibility; and switching is faster, easier, and cheaper than a complete network redesign based on FDDI, Fast Ethernet, or other high-speed options, because switches allow you to retain the bulk of your current network infrastructure.
The network performance gains that are associated with switching solutions come from the manner in which switches are able to segregate various segments of your network into subnets, Ethernet segments that support the communication of only a small fraction of the total number of workstations connected to your network. By splitting the greater network into subnets, switches are able to free up valuable bandwidth for vital network communication because the machines--now isolated on their various subnets--talk only on their local segments. Of course, if a particular packet is destined for a host on a different subnet, the switch will route it to the proper subnet, generally with only minimal delay. In the end, switching becomes a prime candidate for network rebuilds because it has such a great price/performance ratio.
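The segregation described above is, at heart, a forwarding table in action: the switch learns which port each station lives on, sends frames for known destinations out only that port, and floods only frames whose destination it has not yet learned. A toy sketch of that logic (simple station names stand in for MAC addresses):

```python
class LearningSwitch:
    """Toy model of how a switch confines traffic to the right segment."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}            # station address -> port it was learned on

    def handle_frame(self, src, dst, in_port):
        """Return the list of ports the frame is forwarded out."""
        self.table[src] = in_port  # learn (or refresh) the sender's port
        if dst in self.table:
            out = self.table[dst]
            # Destination on the same segment: no forwarding needed at all.
            return [] if out == in_port else [out]
        # Unknown destination: flood every port except the one it came in on.
        return [p for p in range(self.num_ports) if p != in_port]

switch = LearningSwitch(num_ports=4)
print(switch.handle_frame("A", "B", in_port=0))  # B unknown: flood -> [1, 2, 3]
print(switch.handle_frame("B", "A", in_port=2))  # A learned: forward -> [0]
print(switch.handle_frame("A", "B", in_port=0))  # B learned: forward -> [2]
```

Once the table is populated, traffic between two stations on the same segment never touches the rest of the network--which is exactly where the reclaimed bandwidth comes from.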
Once you get some systems into your testing lab (which you should obviously have built from your experiences in piecing together client and server hardware), run them through their paces. Test your operating systems and protocols on each adapter and see which ones perform the best. Chances are that you will have a very difficult time finding a clear speed demon in a pack of well-engineered adapters. More than likely, you will base a large portion of your LAN adapter decision on the ease of installation and configuration in a host computer. That, coupled with a vendor's dedication to a product line, quality of support, timeliness of repair, and your overall upgrade plan, will allow you to make an intelligent purchasing decision.
"Less is More" is the name of the game when delving into the ever-exciting world of communication protocols. When you boot your PC, do you start the Microsoft Windows for Workgroups NetBEUI client, then load a series of drivers for IPX support to your Novell NetWare server, and then load your TCP/IP stack just before firing up Windows 3.1? Do you also have Macintosh clients and servers that are AppleTalking on the same physical network? This is a bad situation that, without question, is undoubtedly creating unnecessary and significant traffic with which your network must cope. Why do you need so many protocols on your network? The answer, generally, is that you don't.
Finding a lowest-common-denominator protocol--a single protocol that can satisfy the communications needs of all the services running on your network--can be time-consuming and frustrating, but the end result will be well worth it: a streamlined configuration that (you hope) will provide a significant boost in network speed. Eliminating excess chatter on your network will result in fewer packet collisions, less waiting time for user requests, and faster data transfers. If you have any questions as to which protocol is currently the most widely used by the widest range of services, the answer is easy to give in a word: the Internet. OK, so it was two words, but the explosion in popularity, functionality, and desirability of the Internet has led to a flood of network services that need to work across large distances and among various hardware combinations. From that explosion, TCP/IP has emerged as clearly the most widely used protocol across the widest range of services and platforms.
So why not take advantage of this where you can? Fewer protocols for the same number of services equals less network traffic.
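Picking that lowest common denominator is really a small covering problem: list which protocols each service can speak, then choose the fewest protocols that together cover every service. A greedy sketch--the service-to-protocol table below is illustrative, not a statement about any particular product:

```python
# Which protocols each service on a hypothetical LAN can run over.
service_protocols = {
    "file sharing":    {"TCP/IP", "IPX", "NetBEUI"},
    "printing":        {"TCP/IP", "IPX"},
    "intranet web":    {"TCP/IP"},
    "mail":            {"TCP/IP"},
    "legacy database": {"IPX"},
}

# Greedily pick the protocol that covers the most still-uncovered services.
uncovered = set(service_protocols)
chosen = []
while uncovered:
    best = max(
        {p for protos in service_protocols.values() for p in protos},
        key=lambda p: sum(1 for s in uncovered if p in service_protocols[s]),
    )
    chosen.append(best)
    uncovered -= {s for s in uncovered if best in service_protocols[s]}

print("Run only:", chosen)
```

With this table the answer is TCP/IP everywhere plus IPX for the one legacy service--and the exercise makes it obvious which single service is keeping the extra protocol on your wire.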
Do not forget the reason you exist. Whether you are vice president of information systems and technology, a SQL Server administrator, the senior network engineer, or the person who answers the phone at the help desk, your goal is servicing your users. You need to make things flexible, easy, and powerful for the power user and the newbie alike. Do not restrict things to the point where working is difficult. Do not, for example, insist on moving printers from a Novell print server to a Windows NT print server when the affected users need only run IPX and log in to a Novell directory tree for print services. That is not good for them, and it is not good for the traffic on your LAN!
Remember to be nice when dealing with users, especially the "know-it-alls." Be informative and listen carefully; some "know-it-alls" might actually be able to help you! Ninety percent of the large-scale problems you have with your users can probably be avoided entirely by informing, instructing, and documenting changes ahead of time and getting those instructions into the hands of your users. Do not assume that they will pick up on even the most minor change in the operation of the network, even if things look similar on the screen.
With users screaming, crying, and (sometimes) becoming physically aggressive in response to less-than-impressive network performance, it is becoming increasingly necessary for businesses to upgrade and optimize their internal networks on a much more frequent basis than ever before. Unfortunately, taking your network to the next level of performance usually requires significant time, effort, and funding, which are not necessarily available or forthcoming from management, even in the most desperate of times. This means, unfortunately, that network engineers and systems administrators have to do more with less, and somehow try to address as many problems as they can in the most efficient and cost-effective manner available. This can involve incremental upgrades to aging and over-burdened server hardware and software, client systems, and other infrastructure items. In many instances, small but strategic upgrades will be able to resolve the most urgent (or visible) of network issues, allowing more time to address the underlying issues that prevent your network from achieving "golden" status.
Occasionally, of course, miracles do happen. Every so often there will be a little cash left in the corporate pot at the end of the fiscal year, which means that you can plan and implement large-scale infrastructure improvements targeted to alleviate current problems while simultaneously providing for additional performance and capacity in the future.
Whatever the case, remember that there are rarely any quick fixes that will make all of your network troubles magically disappear. With a significant amount of analysis and planning, you'll be able to determine the requirements for your optimal network and craft a package of carefully selected hardware, software, and cable systems that makes every person in your organization--from the boss right down to the secretaries--happy with the services your network provides.
Now let's get busy!
© Copyright, Macmillan Computer Publishing. All rights reserved.