High-Performance Networking Unleashed



Chapter 25

Internetworking Operating Systems

Frank C. Pappas and Emil Rensing

There once was a time when networks--as we've grown to know and love them--simply didn't exist. Imagine that: a time when all of the hardware for which you were responsible was under your control, generally under lock and key in a central computing facility and away from the tinkering and meddling hands of those pesky users. Of course, even if the users did manage to get their hands on a terminal, chances were that nothing bad would happen, simply because the systems themselves were too darn complicated and confusing. This was nothing short of a technical nirvana for network managers across the country, a workplace in which upgrades stayed upgraded, fixes stayed fixed, and where the only screw-ups were due to vendor error--or so they'd have management believe. Unfortunately, a number of factors--mostly out of the network administrator's control--began to conspire to shatter this computing paradise and replace it with an open networking environment.

A Brief History

The first step that contributed to the demise of this ideal computing atmosphere occurred, ironically enough, with the introduction of the earliest forms of networked computing back in the 1960s. The situation was anything but ideal for the users or, truth be told, for the system administrators who had to manage the users and the hardware. After all, the systems in use at the time were not particularly powerful, quite restricted in their functionality, and fairly difficult to use. Add to this the fact that there were only a few ways of interfacing with the mainframe, with the most common--dumb terminals--being a scarce and often busy resource. All of this made users and administrators yearn for something more, both in ease of use and overall performance, though (thankfully) it didn't sour them on the general concept of networked computing.

The second factor that prompted a serious re-thinking of the networking concept flowed from a wholly different set of motivations. For years, corporate IT managers had to deal with a significantly unpleasant challenge on a daily basis: proprietary systems. In the early days, prior to the development and widespread acceptance of the OSI open-standards reference model, nearly all computing and network systems were proprietary. Proprietary systems are designed in such a way as to allow communications only with similar hardware and software combinations; Digital Equipment Corporation's early hardware, for example, could communicate only with other DEC equipment, not with products developed by Hewlett-Packard, IBM, or other vendors.

Proprietary systems caused two main headaches for IT managers. First, computing infrastructure expansions were limited by the product lines of a particular company's main vendor. If your company relied on Big Blue for your hardware needs, you pretty much had to depend on IBM for any and all add-on products in order to handle new services, increase capacity, or accomplish anything other than what your current systems already facilitated. Unfortunately, these closed systems often crippled the technology manager's ability to quickly and appropriately address performance or usability issues relating to corporate computing systems. Granted, technology companies weren't stupid--there was a reason that OSI took so long to take hold. Second, whenever you wanted to add on to a proprietary system, you generally needed to call in a vendor's representative or authorized consultant who--for a charming fee--would help you re-architect your system in order to accommodate the new and improved hardware. Because this had a significantly negative impact on the corporate pocketbook, executives looked for another avenue for designing and implementing technology upgrades. Most often, these financial concerns were allayed by building out a complete and usually overly powerful system which, as a major capital investment, was designed to meet the company's computing needs for more than a few years. Because hardware and software were (and still are) developing at a break-neck pace, it was generally felt to be a better idea to completely replace all IT infrastructure with new, exciting, and more powerful products once the current systems reached obsolescence. However, while these two factors were huge influences in the transition to an open networking environment, one final trend was necessary to sound the death knell of centralized computing: the PC.

Introduced in 1981, the PC was the final influence that convinced everyone--from vendors and developers to management and users--of the need for an efficient, relatively cost-effective method for connecting multiple computers, printers, and other devices together in order to facilitate local or wide area communication. The debut of the PC introduced a new world of possibilities, including higher processing power and significantly greater usability than had been found in the terminals of the time. However, these same PCs didn't quite match up to the power of the mainframes connected to the other end of the dumb terminals, so complaints of inadequate memory, insufficient processing power, and overall frustration soon began to filter up from the user community. The first attempt to remedy this situation was more of a stop-gap measure than anything else. PCs became front-ends for the muscle-machines themselves, becoming, in essence, high-tech and rather expensive dumb terminals. While this was a step up in the world, it too lacked the ground-shaking results that had been expected from the revolutionary technology.

As the months and years passed, the many pieces of the PC puzzle that had for so long hobbled its role in the corporate computing arena finally achieved a break point of power, price, and performance, signaling that the PC was finally a serious player in the networking game. Nearly every aspect of the PC had experienced tremendous advances, from memory and clock speed to storage, bus width, and graphics capacity. What's more, not only had the hardware itself grown beyond its modest beginnings, but the array of third-party applications and operating systems (network and otherwise) had also matured to such an extent as to be ready for prime-time network operations. And, thanks to the development of the Ethernet/IEEE 802.3 and Token Ring/IEEE 802.5 standards, the entire groundwork was seemingly in place for the birth of the modern network.

Can't We All Just Get Along?--Internetworking Protocols

Choosing the right hardware for your network is one of the most important considerations that you'll make when drawing up your network "game plan." Unfortunately, even after all those long hours analyzing and evaluating cable, processors, and architectures, there's still a good amount of work left to be done before your network can achieve greatness--or even function, for that matter. Hardware is only 50 percent of the network equation. While it is all well and good that your hardware and cable are top-notch, how in the heck are all those computers, printers, and storage devices going to communicate with one another? As we've discussed, in the earliest days they simply couldn't. DEC spoke DEC, HP understood HP, and so on. The power of modern networking comes from the fact that nearly any hardware from any manufacturer can--given enough time and patience--be made to communicate efficiently with any other piece or pieces of hardware. But how did we manage to get from forced homogeneity to the flexibility of heterogeneous networks? Three simple letters: OSI.

Open Systems Interconnection

In order to move beyond the closed, proprietary systems that had been frustrating technology managers on and off for years, the International Organization for Standardization (ISO) released in the late 1970s a framework for computer communication that, if adopted, would allow systems using this framework to communicate with one another. This standard, referred to as the Open Systems Interconnection (OSI) Reference Model, segregates network communication into seven distinct layers, with each layer responsible for handling specific steps in the process of cross-network data transmission. Of course, there's much, much more to the OSI model than we can realistically talk about in the next few pages, but you'll get a general idea as to the role of OSI, why it is important, and how it functions in the overall scheme of networked computing. If you're just itching to delve deeper into OSI, a trip to your local bookstore will reward you with a number of fine titles dealing strictly with OSI, its history, and its specifications.

The bottom-most layer of the OSI model is known as the physical layer. The physical layer is responsible for communicating directly with the transmission media (ISDN, twisted pair, and so on), handling the actual encoding/decoding of the data, and determining and accounting for specific connectors, line voltage issues, and so on.

Moving up through the layers, we next encounter the data link layer. The data link layer is the communications conscience of the OSI model, in that its job is to maintain the integrity of the transmission between various nodes. It accomplishes this by providing error control (via CRC) for all data transmissions, as well as helping to ensure that source and destination nodes are clearly identifiable by appending source and destination addresses to each frame.
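
The CRC check is easy to illustrate. The following Python sketch uses a toy frame layout invented for illustration (not any real 802.x format): the sender appends a 32-bit CRC over the frame body, and the receiver recomputes it to detect corruption.

```python
import zlib

def frame_with_crc(src: bytes, dst: bytes, payload: bytes) -> bytes:
    """Build a toy frame: destination + source + payload + 32-bit CRC trailer."""
    body = dst + src + payload
    crc = zlib.crc32(body).to_bytes(4, "big")   # checksum over the whole body
    return body + crc

def frame_is_intact(frame: bytes) -> bool:
    """Receiver side: recompute the CRC and compare it with the trailer."""
    body, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == trailer

frame = frame_with_crc(b"\x00\x01", b"\x00\x02", b"hello")
assert frame_is_intact(frame)                   # an undamaged frame passes
damaged = frame[:5] + b"X" + frame[6:]          # flip one payload byte
assert not frame_is_intact(damaged)             # ...and the CRC catches it
```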

The network layer is the traffic cop of the OSI model, making sure that all data has a clear and efficient path to travel between the sending and receiving nodes. This is achieved by managing the transmission of packets across the network through a combination of switching, routing, and flow control.

The transport layer is charged with delivering messages handed down from the upper layers of the OSI model. While it works in a fashion quite reminiscent of the data link layer, the transport layer's main function is to make sure that outbound transmissions are segmented (into packets) in such a manner as to be readily interpreted at the destination address.

As the fifth (from bottom to top) layer of the OSI scheme, the session layer manages inter-process communication between various hosts. This includes name resolution, inter-host synchronization, or any other variable necessary to control the general progression of the communication.

Sixth in line, the presentation layer acts as the interpreter for network communication. The presentation layer prepares the data for transmission by using one (or more) of a number of resources, including compression, encryption, or a complete translation of the data into a form more suitable for the currently implemented communications methods.

Finally, the application layer, as the highest of the OSI levels, is tasked with providing the front-end of the computing experience for the user. The application layer is responsible for everything that the user will see, hear, and feel in the course of the networking process--everything from sending and receiving electronic mail, establishing Telnet or FTP sessions, to managing remote network resources.
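
To make the layered model concrete, here is a minimal Python sketch of encapsulation: as data descends the stack, each layer wraps it in its own header, and the receiving stack strips the headers in the reverse order. The string "headers" are purely illustrative stand-ins.

```python
# Hypothetical layer headers, just to make the nesting visible.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def encapsulate(payload: str) -> str:
    """Sender side: each layer wraps the data handed down from the layer above."""
    for layer in LAYERS:
        payload = f"[{layer}-hdr]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    """Receiver side: strip the headers in the opposite order."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}-hdr]"
        assert frame.startswith(prefix), f"malformed {layer} header"
        frame = frame[len(prefix):]
    return frame

wire = encapsulate("GET /index.html")   # what actually crosses the medium
assert decapsulate(wire) == "GET /index.html"
```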

Behind Every Great Network Is a Protocol...

Now that you have a better idea of how and why various combinations of hardware and software are able to communicate--thanks to OSI and the variety of other modern-day networking methods--the next important step is to take a look at the various protocols that serve as the non-physical foundation for network communication. Each protocol will offer its own unique combination of strengths and weaknesses, so it is important that you have a clear understanding of the services that are (or will be) provided by your network so that you can optimize the type and number of protocols running simultaneously on your network.


NOTE: While it is possible to have NetBEUI, TCP/IP, AppleTalk, or a variety of other protocols all bound to the same NIC, it probably isn't doing any favors for your overall level of network performance! Remember, determine the services that your network must provide to fulfill its role for your organization, then find one or two common-denominator protocols that will support the breadth of your needs. You'll save hours of troubleshooting, have fewer support headaches, and will enjoy significantly increased network response and performance.

AppleTalk

Introduced more than a decade ago as Apple's first contribution to the field of networked computing, AppleTalk was designed using the same "computing for the masses" philosophy that had been so completely successful (at least initially) for Apple's Macintosh line of computer systems. It was easy to implement, featured relatively simple administrative requirements, and in general caused fewer headaches for network administrators than did the other network protocols popular at the time. Fortunately, the designers at Apple chose to conform to the OSI open-standards model, which has made it much easier to administer, troubleshoot, and use networks running AppleTalk as their primary protocol.

What has come to be known as Phase 1, Apple's first implementation of the AppleTalk networking protocol, sports a variety of features that make it distinctly different from--and less flexible than--later versions of AppleTalk. Phase 1 of AppleTalk supports two separate protocols, one proprietary and one open. The LocalTalk Link Access Protocol (LLAP) is a proprietary serial communications protocol that enables network communications at the less-than-fantastic speed of up to 230Kbps. Additionally, Phase 1 supports the EtherTalk Link Access Protocol (ELAP), Apple's implementation of the IEEE's 802.3 Ethernet frame, supporting the much more familiar and impressive rate of 10Mbps. At the time, Phase 1 did much to increase the power, ease, and flexibility surrounding the installation and maintenance of local and wide area networks. However, as the weeks and months passed, certain services and features that were not supported by AppleTalk Phase 1 required Apple to return to the drawing board.

While AppleTalk soon gathered a substantial following, there were a number of advances in networking technology that necessitated a newer, more feature-laden version of AppleTalk. Apple, of course, was more than happy to oblige; new systems meant (with any luck) increased profitability. After a few years of research and development, Apple in mid-1989 released a new and improved version of the AppleTalk networking standard, generally referred to as Phase 2.

Phase 2 went above and beyond the achievements of Phase 1 by including support for a number of important technological advances, most notably Token Ring networks. Apple achieved the advances for Phase 2 through the implementation of the TokenTalk Link Access Protocol (TLAP), essentially as described by the IEEE's 802.5 frame standard. TLAP met with mixed emotions in the networking community, for while it could support a low-end speed of 4Mbps or an Ethernet-crushing maximum of 16Mbps, the infrastructure needed to support Token Ring networks was administratively and financially more demanding than either LocalTalk or traditional Ethernet networks. Additionally, Apple used the opportunity to introduce augmented variations on the standard ELAP and TLAP frame standards, based on a number of specifications included in the IEEE's 802.2 Logical Link Control (LLC) header, adding increased reliability and efficiency to AppleTalk's bag of tricks.

VINES

The Virtual Networking System (VINES), courtesy of Banyan, is one of a number of other networking environments that have competed for market-share alongside AppleTalk. Unlike many of the other systems that perform similar functions, VINES diverges in a number of regards, with the most interesting being that VINES is based on the UNIX operating system. At the time of its introduction, VINES was a fairly revolutionary entrant into the networking arena. It was multi-tasking, robust, flexible, and quite powerful--just about everything you'd ever want in a computer system. However, while the VINES server is dependent on UNIX, clients are available for a variety of the more popular desktop operating systems, supporting Macintosh, DOS, OS/2, and others.

Of course, Banyan didn't want to be left behind in the proprietary protocol field. Just as Apple decided to supplement the IEEE-sanctioned protocols with LocalTalk, Banyan decided to go hog-wild and include a significant number of additional (and proprietary) protocols as part of their networking environment. Fortunately for the rest of us, however, Banyan did stick quite closely to the seven layers of the OSI model, so despite the proprietary nature of many of their protocols, it is still quite easy to understand the function and importance of each piece of the protocol stack.

Based partly on its UNIX lineage and partly on the ambition of its designers, VINES can tackle just about any networking job that you'd care to throw at it. Thanks to a particularly flexible physical layer supplemented by a robust data link layer, VINES is capable of supporting an incredible variety of networking hardware. You'll find that VINES will support both LAN and WAN connections, including everything from High-Level Data Link Control (HDLC) and Link Access Procedure-Balanced (LAPB) to the more familiar implementations along the IEEE 802.x standards, including Ethernet, Token Ring networks, and so on.

Of course, the lower layers are useless if not complemented by equally powerful higher layers. VINES' upper layers are quite the dichotomy, insofar as they can be readily separated into two distinct categories, proprietary and open. Banyan chose to implement a number of protocols that mimicked--sometimes superbly, sometimes not--the publicly available protocols that are part of TCP/IP. VINES' network layer includes specific protocols for address resolution (VARP), a proprietary flavor of IP (VIP), and of TCP (VICP), among others. Of course, also supported by VINES' network layer are all the protocols that you've come to know and love--TCP/IP, ICMP, ARP, and so on. This proprietary/open mix works its way up through the entire OSI model, featuring a combination of VINES-only implementations alongside the more common DOS, Macintosh, and OS/2 protocols.

Token Ring/SNA

Token Ring and SNA networking have survived and prospered by the good graces of Big Blue. Throughout their development in the 1970s and 1980s, IBM was one of the staunch supporters of the IEEE's 802.5 Token Ring network specification, and this support blended with IBM's Systems Network Architecture (SNA) scheme to achieve a remarkable synergy in the networked computing arena.

Token Ring networks are designed in such a way as to create a continuous loop for data transmission. This is most often not in the form of a physical loop, but rather a closed electrical circuit that travels in and out of every device that is attached to the network at a given time. Unlike traditional Ethernet networks that rely on the collision-detection routines of CSMA/CD, the designers of Token Ring decided to avoid the possibility of collisions entirely by implementing a game of network "hot potato." It works like this: Station one receives the token, giving it the opportunity to send data across the network. Assuming that station one has data, it will encode the data onto the token, add destination and other information, and send the token (with the new data) ahead to the destination. Once the destination machine receives the token, it will strip off the data and return the token to the originating station, which will strip any remaining information away from the token and release it back into the loop, providing subsequent stations the opportunity to transmit their information.
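
The hot-potato rule can be sketched in a few lines of Python. This toy simulation ignores real-world details such as frame formats, the active monitor, and priority bits; the station names and pending data are invented. It shows only the core rule: a station may transmit when, and only when, it holds the token.

```python
from collections import deque

stations = deque(["A", "B", "C", "D"])    # the ring, in wiring order
outbox = {"A": "print job", "C": "mail"}  # hypothetical pending data per station

for _ in range(len(stations)):
    holder = stations[0]                  # station currently holding the token
    if holder in outbox:
        data = outbox.pop(holder)
        print(f"{holder} transmits {data!r}; the destination strips the data, "
              f"the token returns to {holder}, and {holder} releases it")
    stations.rotate(-1)                   # the token passes to the next station
```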

The attraction (for some, anyway) of Token Ring networks comes from its unique routing methods, which diverge rather significantly from other architectures, especially Ethernet/IEEE 802.3. Token Ring networks use the Source Routing Protocol (IEEE 802.1) that allows the originating station to determine the optimum route to its designated recipient. This is facilitated by one of two separate methodologies, specifically the All Routes Explorer (ARE) and the Spanning Tree Explorer (STE), which rely on the broadcast transmission of multiple TEST and XID frames. ARE, which is the preferred method utilized by SNA, broadcasts its frame, along with a destination address, to all of the rings of the network, accumulating routing information along the way. Once the frame reaches the destination address, it is returned to the sending machine--complete with all the collected routing information--allowing the host to select a path to the destination. STE works in a similar fashion. STE will send a single TEST or XID frame to each ring, where the destination host will respond with a data set including all available routes known between source and destination. The originating station will then select a route and retransmit the route to the destination, allowing both sides of the connection to be aware of the intended route.
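A rough sketch of the All Routes Explorer idea follows, under two simplifying assumptions (both hypothetical; real source-route bridging records routing information in fields of the frame itself): the internetwork is modeled as rings joined by one-way bridges, and bridges drop explorer copies that would loop. Each surviving copy accumulates the route it took, and the source picks from the routes that reach the destination.

```python
from collections import deque

# Toy topology: each ring lists (bridge, next_ring) hops. Names are invented.
topology = {
    "ring1": [("bridgeA", "ring2"), ("bridgeB", "ring3")],
    "ring2": [("bridgeC", "ring4")],
    "ring3": [("bridgeD", "ring4")],
    "ring4": [],
}

def all_routes(src: str, dst: str):
    """Flood explorer frames; each copy carries the path it has traversed."""
    routes, queue = [], deque([(src, [src])])
    while queue:
        ring, path = queue.popleft()
        if ring == dst:
            routes.append(path)            # this copy reached the destination
            continue
        for bridge, nxt in topology[ring]:
            if nxt not in path:            # bridges drop frames that loop
                queue.append((nxt, path + [bridge, nxt]))
    return routes

# The source hears every discovered route and selects one (here, the shortest).
print(min(all_routes("ring1", "ring4"), key=len))
```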

The SNA architecture is quite similar to the seven layers of the OSI reference model. The minor differences in layer names and functionality stem from the fact that SNA predates the ISO's open-standards initiative; its release in 1974 came nearly a decade before OSI. However, despite this divergence, there remains a nearly one-to-one relationship between the OSI and SNA layers, though some of the functions delegated to specific OSI layers are somewhat shifted in the SNA stack.

NetWare

NetWare is probably the most popular network operating system (NOS) available anywhere in the world. It controls the lion's share of the networking market with an impressive (even staggering) number of installations, despite the recent increase in popularity of Microsoft's Windows NT product line. This strength has come, in part, from a great marketing and promotional scheme. More to the point, the popularity of the NetWare NOS has been earned as a result of the strong and versatile suite of protocols that serve as the foundation for Novell's networking flagship.

Over the years, Novell has extensively rebuilt NetWare in response to the changing needs of the networking community. In its earlier 286 incarnation, NetWare used a variety of protocols to enable network communication. These included the NetWare Core Protocol (NCP), the heart of the NetWare product, which controlled file services; the Sequenced Packet Exchange (SPX) protocol, which facilitated application-level communication; and the Routing Information Protocol (RIP), the Internetwork Packet Exchange (IPX) protocol, and the Service Advertising Protocol (SAP) for the routing of data. As time progressed and Novell found the need to support additional functionality and services, new protocols were integrated into the system in order to keep up with the increasing pace of internetworking.

NetWare 3 built upon the earlier 286-based version, adding increased capacity for workstations, increasing the customization options for the NetWare file services, and expanding the overall number of communications methods supported by Novell's products. One of the strengths of the NetWare 3.x products centers around the Open Data-Link Interface (ODI), which substitutes for the physical and data link layers as defined by OSI. The ODI facilitates multi-protocol communication between LAN and WAN hardware and other adapters, allowing one or more protocols to be bound to the same host adapter. The ODI has greatly contributed to the success of NetWare, due primarily to its ability to integrate a wide variety of open and proprietary network and transport layer protocols into the NetWare environment, including TCP/IP, AppleTalk, and IPX/SPX, among others. NetWare 4 goes just a bit further, supplementing some mildly enhanced protocols with additional core functionality, robustness, and overall performance.

TCP/IP

Although NetWare is probably the most widely implemented proprietary network operating system, there can be no doubt as to the king of the protocol hill: TCP/IP. Not only is TCP/IP probably the best-known of all networking protocols, it is also the most widely implemented, for the simple reason that it is a truly open standard. The benefits of TCP/IP are many: Everyone has access to the protocols themselves; documentation is easily obtainable; it is eminently flexible; and is supported by many legacy products as well as almost every new product developed in recent years. Additionally, because it is the heart and soul of the Internet, the future of TCP/IP is anything but dim.

The TCP/IP stack (also known as the DoD protocols) follows the OSI seven-layer model in function, though not strictly in form. Where OSI divides the stack into seven distinct layers, TCP/IP subdivides into only four, though the functionality of these four layers extends across the full range of the OSI model. The TCP/IP network access layer serves double duty, filling in for both the physical and data link layers of OSI. Moving up, the Internet layer is the TCP/IP equivalent of the OSI network layer, with OSI's transport layer replaced by the TCP/IP host-to-host layer. Finally, the higher-level functions of the OSI model are combined into the TCP/IP process/application layer.
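
If it helps to see the correspondence spelled out, here it is as a simple Python mapping (the layer names follow this chapter's terminology):

```python
# Rough correspondence between the four TCP/IP layers and the seven OSI layers.
TCPIP_TO_OSI = {
    "network access":      ["physical", "data link"],
    "internet":            ["network"],
    "host-to-host":        ["transport"],
    "process/application": ["session", "presentation", "application"],
}

for tcpip_layer, osi_layers in TCPIP_TO_OSI.items():
    print(f"{tcpip_layer:>20} covers {', '.join(osi_layers)}")
```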

The strengths of TCP/IP are many and varied. As we've discussed, it is extraordinarily flexible, supporting a variety of network interfaces, including Ethernet, ARCNET, FDDI, and broadband networking options. A number of dynamic routing protocols are an integral part of TCP/IP as well, beginning with the Xerox-inspired Routing Information Protocol (RIP) and, later, the Open Shortest Path First (OSPF) protocol, which incorporates load balancing and other optimization functions into the process of data routing.

Moving to the heart of the matter, the two host-to-host protocols that keep the TCP/IP process moving along are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). TCP is the protocol used for transmissions that require highly reliable connections, including electronic mail services (SMTP), file transfer (FTP), and so on. TCP works in a streaming environment, breaking each data stream into segments of up to 65K octets. These segments are then tracked and transmitted, allowing the stack to provide data-integrity services such as flow and error control. The downside to TCP is that there is a tangible tradeoff for the increased reliability of the connection: overhead. All TCP transmissions require a header whose length must be a minimum of 20 octets, which--depending on what you're sending across the network--may or may not be a reasonable amount of extra work for your machines.

UDP, on the other hand, is a perfect choice for certain applications that do not require extreme degrees of reliability. UDP sacrifices flow control, error detection and handling, and other functions to achieve a header size of just 8 octets, which allows for significantly improved network performance for certain functions.
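
The overhead difference is easy to quantify. This back-of-the-envelope Python calculation counts only the minimum TCP or UDP header plus a 20-octet IP header for each message, ignoring acknowledgments, header options, and link-layer framing (so the real TCP cost would be higher still):

```python
# Back-of-the-envelope overhead comparison for 1,000 ten-byte messages.
IP_HDR, TCP_HDR, UDP_HDR, MSGS, PAYLOAD = 20, 20, 8, 1000, 10

tcp_bytes = MSGS * (IP_HDR + TCP_HDR + PAYLOAD)   # 50,000 bytes minimum
udp_bytes = MSGS * (IP_HDR + UDP_HDR + PAYLOAD)   # 38,000 bytes
print(f"TCP: {tcp_bytes:,} bytes  UDP: {udp_bytes:,} bytes "
      f"({tcp_bytes - udp_bytes:,} bytes saved by UDP)")
```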

As we move into the higher TCP/IP layers, we find that the process/application layer of the TCP/IP stack controls a number of protocols essential to the proper functioning of TCP/IP-based networks. These include the Simple Mail Transfer Protocol (SMTP) for electronic messaging; the File Transfer Protocol (FTP); the foundation of the World Wide Web, the Hypertext Transfer Protocol (HTTP); and network management via the Simple Network Management Protocol (SNMP). These protocols enable the feature-by-feature functionality that allows users and administrators to interact with the lower layers and obtain the information that they need.
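
As a small example of an application-layer protocol riding on the layers below it, the following Python sketch speaks just enough HTTP to fetch a status line from a web server. The host name is a stand-in for any reachable server, and the result naturally depends on having network connectivity:

```python
import socket

host = "www.example.com"            # stand-in for any reachable web server
with socket.create_connection((host, 80)) as s:
    # The application-layer conversation: a minimal HTTP/1.0 request.
    s.sendall(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii"))
    response = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:               # server closed the connection (HTTP/1.0)
            break
        response += chunk
print(response.split(b"\r\n")[0])   # status line, e.g. b'HTTP/1.0 200 OK'
```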

Bridging the Gap--Internetworking Operating Systems

Before you can set up an effective network composed of multiple operating systems, there are a number of issues you'll need to consider, including whether your internetwork will be a local (LAN) or wide area (WAN) network, how the networks will be physically connected, and how you will enable communication across multiple operating systems and hardware types. Let's take a quick look at some of the more important topics involved when preparing to connect multiple networks into one grand network. There are literally thousands of pages written on the subject, but the single most important factor involved will be an intimate understanding of your network, the role that it plays within your organization, and the goals that you have set for yourself in your internetworking activities.

Whether your internetwork will be a LAN or a WAN is a choice that is made for you, due simply to the nature of the two beasts. LANs, by definition, are local; WANs are widely separated. This will also determine, in part, how you will physically connect networks of different types. No two networks are alike. In your particular case, the unique combination of hardware, applications, and operating systems that comprise your network will necessitate certain design considerations, including bridges, hubs, routers, gateways, and other similar equipment.

To give you a leg up on some of the most essential tools in the network engineer's arsenal, let's talk briefly about the role of each. The simplest device, the repeater, is used to extend a cable run beyond the maximum length specified in the IEEE standards. For example, a run of twisted-pair cannot exceed 100m from hub to host. If you need to extend that run, you can insert a repeater at the end of the first 100m of cable, which will regenerate the signal and prevent it from degrading as it covers the remaining distance on the far side of the repeater.
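
As a trivial illustration of the 100m rule, here is a quick Python calculation of how many repeaters a given run would need. (This considers only segment length; the IEEE standards also limit how many repeaters may sit between any two hosts, which this sketch ignores.)

```python
import math

MAX_SEGMENT_M = 100                      # twisted-pair hub-to-host limit

def repeaters_needed(run_m: float) -> int:
    """Repeaters required to cover run_m metres of twisted pair."""
    return max(0, math.ceil(run_m / MAX_SEGMENT_M) - 1)

print(repeaters_needed(250))             # a 250m run needs 2 repeaters
```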

A bridge is a device that is used to segment various portions of a greater network into more easily controlled regions, much like the phone company divides the national network into subsets of area codes. The bridge is a useful tool when connecting a variety of networks that need to be in frequent contact with one another.

A Bridge/Router (Brouter) is a somewhat more advanced device, generally due to enhanced software, that not only segments networks into more easily controlled subsections, but also manages the overall flow of data across the network. The brouter accomplishes this based on a number of possible factors, including internal routing algorithms, specifics of the higher-layer protocols, or other factors as defined by the administrator.

These devices can all be used to make the connection between networks. But what are your options for implementing internetworking? While there are a number of options, the three most prevalent offer you the best chance for an easy and cost-effective implementation.

Your first option is to install multiple host adapters in one server, with each adapter handling a specific type of information provided by the upper levels of the network operating system. One adapter could be speaking to an Ethernet network while the other communicates strictly with a Token Ring (802.5) network. This approach allows you to maintain a minimum of server hardware while still supporting multiple network types.

A second option would be to install a hardware bridge between two networks. While this has been proven to work in many cases, it is far and away one of the most difficult solutions that you can implement. Unfortunately, there are many incompatibilities between the Ethernet and Token Ring standards that must be accounted for, such as frame size, addressing methodologies, and routing procedures. These can be handled with hardware and software configurations, but it can prove to be a constant headache trying to accommodate such internetworks.

Far and away the best option, as mentioned previously, is to find a common-denominator protocol that will facilitate the spectrum of services your network needs to provide for your users. While this used to be a problem when you were limited to the less-than-compatible offerings of LocalTalk, VIP, or IPX, TCP/IP--helped along by the open-standards thinking behind OSI--has proven to be an excellent LCD protocol, thanks to its popularity, ease of use, and huge base of installed servers. While it is not always possible to find one single protocol that will satisfy your internetworking needs, the concept itself is still quite valid. If you are able to reduce the number of protocols running on your network to the bare minimum, you'll reap endless rewards in troubleshooting, support, and general performance. If, for some reason, you want to have five or six protocols bound to your adapter at any given time, perhaps you should seek counseling. If not, spend some quality time analyzing your network, clients, services, and other eccentricities until you come up with a combination, say NetBEUI and AppleTalk, that does the trick!
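
The common-denominator exercise boils down to a set intersection. In this hypothetical inventory (the platform names and protocol lists are invented for illustration), TCP/IP falls out as the one protocol everything can speak:

```python
# Hypothetical inventory: which protocols each platform on the LAN can speak.
support = {
    "NetWare server":  {"IPX/SPX", "TCP/IP", "AppleTalk"},
    "UNIX hosts":      {"TCP/IP"},
    "Windows clients": {"NetBEUI", "IPX/SPX", "TCP/IP"},
    "Macintoshes":     {"AppleTalk", "TCP/IP"},
}

common = set.intersection(*support.values())
print("Common-denominator protocol(s):", common or "none -- a gateway is needed")
```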

Operating Systems

The key to having various network operating systems on various hardware platforms successfully interoperate lies solely in their ability to communicate. For the most part, "data" is cross-platform, provided the application software used to manipulate the data can access and understand the type of file in question. The hard part, however, lies in extending the networking software--itself an extension to your operating system--so that it can speak the protocols of the other systems on your network.

How far you can go in internetworking any group of operating systems depends entirely on the vendor of your network software. Certain operating systems simply do not support the additional protocols needed to participate in a multi-protocol environment, or the operating system is not popular enough to have third-party developers enhancing its functionality. Whatever the case, when you need two or more network operating systems to communicate, you must either have them speak each other's protocols or (optimally) have each operating system speak the same protocol. In many enterprise-level operations, this can be a very intimidating and daunting task--especially if you're running older systems that require extremely proprietary communication. In smaller-scale operations, however, internetworking various operating systems can be a rather easy task, especially if you have planned your infrastructure carefully.

As you may have already guessed, the easiest and possibly the most efficient way to have different operating systems internetwork is to have them all speak the same protocol, as opposed to having each network operating system speak all of the protocols you are using. From reading past chapters, it is probably already clear to you that TCP/IP is the most widely implemented protocol and the one available on the widest range of operating systems--and why you would take this approach. This lowest-common-denominator approach to internetwork communication will also aid you in your never-ending quest to optimize network performance. Sure, it may be possible to have every operating system you run speak every communication protocol you support, but that approach increases network traffic significantly and degrades the performance of the systems that have to load four or five different protocols.

You may think that building an intranet for your organization might be more trouble than it is worth. However, if you already run TCP/IP on your network you are more than halfway there. The additional benefit you get from having all of your operating systems talking to one another--along with the performance benefit of having fewer protocols in use--will more than make up for any initial difficulties in implementing this strategy.

MacOS AppleTalk

AppleTalk, released in 1985 by Apple Computer, was the network architecture used by the Apple Macintosh product line. AppleTalk was a simple-to-use and simple-to-configure peer-to-peer networking add-on to the Apple Macintosh Operating System that provided an excellent set of networking features. The hardware was all plug-and-play and the configuration was simple, making AppleTalk an excellent workgroup-sized Network Operating System. A revision of the AppleTalk architecture, known as AppleTalk Phase 2, was released in 1989 and added increased functionality and support for new standards.

The physical and data link layers of the AppleTalk protocol support a variety of networking hardware. The older revision of AppleTalk, which became known as AppleTalk Phase 1, supported EtherTalk, Apple's name for Ethernet, and LocalTalk, Apple's proprietary data link protocol. LocalTalk was a 230Kbps connection that used the standard serial ports found on all Macintosh systems, meaning that if you had a serial port on your Macintosh and the ability to run the AppleTalk software, you could participate in a small workgroup networking environment. AppleTalk Phase 2 added support for larger-scale internetworks and for several IEEE standards--specifically 802.2, 802.3, and 802.5 (Token Ring). Additionally, the ability to connect to IBM SNA and Digital Equipment Corporation DECnet architectures became available from third-party vendors.

There are many vendors of Macintosh networking hardware. Companies like Dayna and Farallon have excellent network adapters and interconnectivity hardware, such as hubs. Apple, however, is the primary vendor of internetworking software that enhances Macintosh connectivity to other systems. Products providing TCP/IP services, remote access, and X.25 and OSI protocols are among Apple's featured internetworking products.

The latest revisions of Apple's MacOS provide integrated TCP/IP networking support. The protocol stack that runs on the Macintosh workstation implements IP, UDP, TCP, ARP, RARP, ICMP, BOOTP, RIP, and DNS. The Macintosh will also allow for remote TCP/IP network services over standard PPP or SLIP dial-up connections. This native implementation of TCP/IP allows for easy internetworking and makes the MacOS an easy operating system to integrate into most internetworking environments. Apple is currently planning to implement native AppleShare network services to run over TCP/IP. This will allow Macintosh systems to participate in a network environment while running only the TCP/IP protocol, making the MacOS an extremely flexible and highly desirable network operating system.

There are currently several products from Apple and third-party vendors that allow for connectivity to other network operating systems, as well as enhancing current network services. The Apple IP Gateway, for example, allows users running Apple Remote Access, LocalTalk, or any standard AppleTalk interface to connect with an Ethernet-based TCP/IP network. In its basic configuration, the Apple IP Gateway allows a network of Macintosh computers running AppleTalk to connect to an IP network. When used with the Apple Internet Router software, it can provide TCP/IP access to any Macintosh system served by the same router. Although freeware and shareware PPP and SLIP clients are readily available for most Macintosh computers, the Apple IP Gateway, when used in conjunction with Apple's Remote Access Server, can provide dial-in access to AppleTalk networks as well as TCP/IP networks.

While we are on the subject of remote access, the AppleTalk Remote Access Server and Client products allow Macintosh computers to communicate with another Macintosh computer or an entire network of Macintosh computers. Using standard dial-up, ISDN, X.25, or even cellular connections, the Apple Remote Access family of products provides a simple, easy, and efficient way to telecommute.

Apple also provides options for SNA connectivity. Using the SNA*ps Gateway, SNA*ps 3270, or SNA*ps 5250 will turn your Macintosh computer into an integrated 3270, 5250, advanced program-to-program, or advanced peer-to-peer networking gateway. The software is built into a NuBus-based Token Ring, SDLC, or coaxial network adapter, thus freeing the system's main processor to run other applications. The 3270 software delivers full-function 3270 display-terminal emulation, which allows any Macintosh system to communicate with IBM mainframes. The emulation software will work with a variety of network interfaces. Optionally, the emulation software may connect to an SNA*ps Gateway over an AppleTalk network. Similarly, the SNA*ps 5250 product will allow access to IBM AS/400 systems.

Apple also provides a connectivity product for X.25-based network connections. MacX.25 is a software product that provides all of the necessary software to link a Macintosh to an X.25-based network. MacX.25 may also be used in conjunction with other Apple products, like the Apple Internet Router, to internetwork AppleTalk networks over wide area X.25 networks. MacPAD is the MacX.25 component that allows other Macintosh systems to use a system running the MacX.25 server component as a gateway. MacX.25 also includes an OSI protocol stack: MacOSI Transport, a companion to either MacX.25 or TCP/IP that allows OSI services to operate over X.25 or TCP/IP internetworks.

Microsoft Windows

Microsoft has learned a great deal about networking from its past ventures. The OS/2 LAN Manager product that it developed for IBM gave Microsoft the experience needed to create networking products for its own operating systems, as well as the foundation to build its own highly advanced, extremely scalable network server product. In addition to Microsoft's own efforts to build networking services into its operating systems, the enormous popularity of those operating systems has led to the availability of many, many third-party networking products, making Microsoft operating systems an excellent choice for workstations and servers of all calibers in any network operating environment.

For the most part, software vendors have done an excellent job of extending the MS-DOS and Microsoft Windows operating systems by providing support for many network protocols and services. SNA gateways, IPX/SPX clients, and TCP/IP services are all available for the 16-bit Microsoft operating systems. Windows for Workgroups is a special version of the Microsoft Windows operating system that has integrated peer-to-peer networking, optional TCP/IP, and remote access clients as well. Windows for Workgroups filled the void of not having a truly powerful Windows-based network operating system on the PC until Microsoft was ready to ship its Windows 95 product.

Internetworking the 16-bit Microsoft operating systems is not really a challenge, at least not as far as the networking itself is concerned. The same rules apply to planning the internetwork connections as with any other group of operating systems, and finding the protocols and services to run is not difficult. The real work comes in finding the vendor who publishes the client access software or the manufacturer who builds the appropriate network adapter. In addition, because networking is not native to the 16-bit Microsoft operating systems, you will have a bit more freedom in establishing your standards and practices.

Windows 95 takes things a bit further by implementing networking with the same ease of use and configuration as the rest of the operating system. Offering Plug and Play configuration for most network interfaces, and an easy way to install and configure protocols and services, Windows 95 networking truly accentuates Microsoft's overhaul of the Windows computing environment. Further enhancing the networking environment of Windows 95 is the standard inclusion of many protocols from a variety of vendors. Protocols such as TCP/IP, IPX/SPX, and even NetBEUI are included in the Windows 95 distribution, and the peer-to-peer file sharing services provided by Windows 95 can run over most of these protocols. Windows 95 also provides a very open architecture for adding new and improved protocols, as well as legacy ones, to the operating system. Third-party products that allow participation in AppleTalk environments, as well as DECnet networks, are already available.

Windows NT is the next generation of the Microsoft Windows operating system family. Distributed in workstation and server versions, Windows NT is intended for a more advanced type of user and a professional computing environment. Providing all of the networking features of Windows 95 with a bit more power, control, and performance, the network-centric Windows NT is poised to take on the current heavy hitters in the network operating system market. Again, the open architecture of the Windows NT operating system allows for future expansion and additions to the operating system.

Recognizing that connectivity to legacy systems is an extremely important feature for a network operating system, Microsoft publishes a product that allows Windows NT to act as an SNA gateway to IBM mainframe systems. It can allow an entire network, running almost any protocol that can "see" the gateway server, to connect to multiple SNA hosts. The Windows NT Server product also provides an integrated multi-port remote access server that allows Windows clients access to a single server or an entire network. The remote access server and client products also support the SLIP and PPP dial-up networking protocols.

As you can see, from the earliest days the Microsoft family of operating systems has lent itself to the successful implementation of various network client services and protocols, making these systems a powerful and flexible addition to any internetwork.

Novell NetWare

Novell NetWare is a network operating system that is designed to integrate heterogeneous hardware and protocols into one cohesive network. Although the primary focus of NetWare is its server product, which provides file-sharing services, Novell offers a number of workstation client services for DOS and Windows systems as well as for Macintosh, UNIX, and OS/2.

NetWare's Open Data-Link Interface (ODI) is what allows network adapters to communicate with the higher protocol layers of NetWare's architecture. ODI allows multiple adapters in the same workstation or server to interact with multiple protocols and network frame types, whether exclusive to a single adapter or shared among several. NetWare's options at the OSI network and transport layers include TCP/IP, AppleTalk, and IPX/SPX, Novell's own protocol suite derived from Xerox's XNS protocol.
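
Conceptually, ODI acts as a dispatcher between an adapter and several bound protocol stacks. This toy Python sketch (the frame-type names are invented, and real ODI works in terms of MLIDs and the Link Support Layer) shows only the core idea: each incoming frame is routed to whichever stack registered for its frame type.

```python
# Toy dispatcher in the spirit of ODI: several protocol stacks bound to one
# adapter, with each incoming frame handed to the stack that claimed its type.
bound_stacks = {}                      # frame type -> protocol handler

def bind(frame_type, handler):
    """Bind a protocol stack to a frame type on this adapter."""
    bound_stacks[frame_type] = handler

bind("ETHERNET_II/0x0800", lambda f: print("IP stack received", f))
bind("ETHERNET_802.2/IPX", lambda f: print("IPX stack received", f))

def on_frame(frame_type, frame):
    """Called by the adapter driver for every frame taken off the wire."""
    handler = bound_stacks.get(frame_type)
    if handler:
        handler(frame)                 # hand off to the bound stack
    # frame types nobody has bound are simply dropped in this sketch

on_frame("ETHERNET_II/0x0800", b"ip datagram")
on_frame("ETHERNET_802.2/IPX", b"ipx packet")
```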


NOTE: IPX/SPX stands for Internetwork Packet Exchange/Sequenced Packet Exchange. It is an extremely fast protocol that requires few system resources. In many ways, it is as simple to use as AppleTalk, but it may require more configuration.

The architecture of the NetWare file server is divided into modular components that can be loaded and unloaded without having to take the server down. The NetWare Loadable Module (NLM) architecture can be used to implement functionality not found in the standard NetWare server--anything from new network services to new programming interfaces--thus extending what your NetWare server can do.

Recent advances to Novell NetWare have continued to build on Novell's foundation and dedication to multi-platform internetworking. The release of NetWare Directory Services (NDS) for Novell NetWare provides a system to manage all of the resources on your network. Everything from users to printers to servers to applications to shared volumes can be tracked using NDS. NDS is also revolutionary in its approach to setting the permissions of resources on the network. Based on rules instead of exceptions, NDS presents network security management in a straightforward, easy-to-use way unlike anything else available today. NDS also changes the paradigm for network authentication. By logging into the Directory Tree as opposed to a specific server, users can find and use any of the resources that they have permission to use without having to authenticate themselves to other servers, printers, or any other resources.

Much of the internetworkability of the NetWare server can be implemented using NLMs on the server. The NetWare server is primarily a file server. It serves volumes of files so that clients can mount them as drives on their own systems. This provides an extremely easy way for people to use the network to share files across hardware platforms. This is not to say that the NetWare server is merely a file server: there are many NLMs available that provide services such as FTP, SNA gateway, and even Web-server services--many of which ship with Novell's IntranetWare.

While the promise of homogeneous client connectivity seems like nirvana to some extent, Novell has faced many obstacles in reaching toward that goal. Novell provides client access services for many platforms, including DOS, Windows, Windows 95, Windows NT, OS/2, MacOS, and even UNIX. The more current Novell clients use an architecture similar to the Novell server's NLM architecture. Virtual Loadable Modules (VLMs) are the client-side equivalent of the NetWare Loadable Module. Like their server-side counterparts, VLMs can be dynamically loaded and unloaded as they are required. This is an extremely important feature in your quest to optimize client performance: loading only the features of your network client that you need can greatly enhance client performance. In some cases, however, Novell has met with much difficulty. Its Macintosh client for Novell Directory Services does not rely on Apple's native AppleTalk implementation. This "add-on" approach to the MacOS NDS client is often fraught with system conflicts, making Macintosh systems running the Novell Directory Services client very unstable. This, however, affects only NDS clients; the AppleTalk and AppleShare NLMs are excellent in terms of performance and reliability. Novell also has an IPX/SPX client for the MacOS that allows Macintosh clients to access certain resources over an IPX/SPX connection. The UNIX client also tends to be very unstable on certain systems--the direct result, of course, of having so many variants of UNIX to support.

UNIX

Internetworking UNIX systems fits in quite nicely with what we have been discussing. UNIX systems support a variety of network hardware and network protocols; more to the point, UNIX has TCP/IP networking integrated very closely into the operating system kernel. It would be extremely foolish not to leverage TCP/IP when implementing UNIX systems in your networking environment. For the most part, UNIX systems have the fastest implementations of TCP/IP, making them extremely powerful and flexible systems to have on your network.

If your organization has a very well-built intranet, your investment in UNIX workstations and servers can only enhance it. UNIX from most vendors has the core intranet services built in, which can make it a powerful asset on your network. Because many of these services are built in or freely available, you should have no problem sharing files with other network operating systems, provided they have the necessary client access tools, such as a Web browser or FTP client.

If you do need to run additional protocols on a UNIX system, you have that option. SNA gateways, as well as IPX/SPX gateways and AppleTalk, are all available for many UNIX systems. However, because there are so many flavors of UNIX from so many different vendors, you should probably contact the vendor directly for complete, comprehensive information.

Implementation

When it comes right down to it, the best way to ensure your ability to successfully internetwork your computing environment is to plan carefully. Like most aspects of networking, planning and always focusing on the big picture will help tremendously in the immediate and distant future. The internetworking aspect of your overall network implementation plan is just one small component that should be integrated as you see fit. You may not need internetworking services at the launch of your network, but you may in the future. Therefore, you should always have your internetworking plan in the back of your mind.

With today's modern network operating systems, the task of implementing an internetwork can be quite easy. The recent infusion of TCP/IP networking products for most network operating systems can be used to your advantage.

Planning Your Internetwork

The following steps may or may not find their way into your plan. You will need to decide which of them--or what additional steps--belong in yours.

Define the Requirements for Your Internetwork

Not all internetworks are created equal, partly because they will not be used equally. Deciding to internetwork a group of different operating systems can open a whole new world of services and applications to your network, or it can just be a big waste of time and money. The first and most important step in building your internetwork is deciding whether you need to build one in the first place! You need to make sure that the additional connectivity will provide useful services for your users. If there is nothing to be gained from connecting groups of Windows-based PCs to a group of UNIX workstations, then do not do it! However, if establishing an SNA gateway from your mainframe to your existing PC and Macintosh network will allow you to remove dedicated terminals throughout your organization, resulting in cheaper cabling maintenance and lower hardware costs, you might want to consider it.

Oh, and always remember to plan for the future. Upgrading and scaling are very important in networking, as in most aspects of computing--as you are already well aware!

Develop a Network Management Strategy and Implementation Plan

Don't just start loading protocols on clients and routers; think about how you can optimize the protocols on your LAN. Remember your network optimization plan--this step should complement that plan, not counteract it!

Test Your Management System, Network Applications, and Interconnectivity

Before you begin rolling out your new features to your users, make sure you can manage the additional resources efficiently. While centralized network management features are built into most modern network operating systems, others will require third-party management tools. Make sure they work before you begin rolling out new features to your enterprise-wide computing environment.

The interconnectivity of your internetwork implementation plan is also of the utmost importance. Give yourself enough time to plan and implement your interconnectivity, and use standard analysis tools (as we will discuss later) to make sure that data is being sent appropriately. Getting your data packets sent and received by the appropriate piece of network hardware may take some time and tweaking, but it is the core of what your internetwork will become. Make sure it is functioning correctly and efficiently.

Make sure the network applications function correctly. Just because your gateway is up and your protocols are routing, does not mean your client applications for accessing these new resources will work correctly. Do not leave it to chance or blind faith, and do not leave it to the last minute to test.

Begin Training Support Staff

When users have problems, questions, or simply general concerns about their workstations or the network, they will call your support staff, not you. You need to make sure that the staff are appropriately equipped to handle any additional situations that may arise from the implementation of your new internetwork connectivity.

Additionally, the operating procedures you have already established for your network should be updated and enhanced to account for the new architecture. Just as you need to make sure that your support staff can handle the new technology, you need to make sure that the quality of your overall operating procedures remains consistently high.

Begin Training Users

Do not forget about your users! Adding new services will do no good if your users cannot access them. Even the most elementary procedures for accessing services can baffle many users, and it is your responsibility as a network administrator to make the employees using your LAN as efficient as they can be. NDS makes your job as a network manager easier, but someone unfamiliar with NDS on a UNIX workstation may need additional information. Of course, this does not mean that you have to teach all of your users how to use simple applications. It does mean, however, that you are responsible for making sure that services that can enhance productivity are actually being used.

Summary

The job of internetworking operating systems is anything but an easy one. There are an endless number of variables that must be taken into consideration before implementing even the most basic of internetworks: servers, clients, hardware, software, protocols, and many, many more. It's a daunting task, for certain. If done correctly, the creation of a multi-protocol network can be an impressive and incredibly useful feat. If implemented poorly, however, you may soon find yourself staring head-on at a pink slip!

Over the course of developing your internetwork, you'll be required to spec out the required hardware and software configurations, request bids from vendors, hire (and train!) network engineers and administrators, create fail-safe and backup plans, and negotiate contracts for support, connectivity, and other services.

Of course, the important thing to remember (and we continue to emphasize!) is that book learning only goes so far in designing internetworks, or in any technical arena for that matter. It takes a certain combination of experience, seasoning, technical aptitude, some reading, and a little luck before you'll be able to bring all the pieces together in the right order. It's a lot like fortune-telling. Anybody can take a crack at it, but it takes a special knack to do it well. Good luck!

