High-Performance Networking Unleashed



- 18 -

ATM

by Arthur Cooper

Like Frame Relay, Asynchronous Transfer Mode (ATM) is a technology that grew from a need to improve local and wide area communications. It grew out of the old Integrated Services Digital Network (ISDN) package, as did Frame Relay when it appeared on the scene. The main differences between ATM and Frame Relay are the structure and speed arrangements of the two technologies. As you recall from the discussion of Frame Relay in Chapter 17, Frame Relay was originally designed to be used on the narrowband D channels of ISDN circuits.

Because these circuits run at rates of only 16Kbps, the idea was scrapped rather quickly. Today, the various vendors and providers of Frame Relay service provide whatever amount of bandwidth the customer needs in order to operate. With ATM, the strategy was very different. When the CCITT started looking at ATM technology, the original intent was for ATM to be the actual transport method for moving data over broadband ISDN circuits from one point to another. The first U.S. carriers to really bring forward any offerings of ATM service were Sprint and AT&T. Both of these carriers started out by offering ATM service at T-3 and higher speeds.


NOTE: T-3 (or DS3) runs at approximately 45Mbps.

The problem in the early days of ATM (1990-1993) was the fact that very few, if any, of the users out there had equipment capable of interfacing with ATM switches and circuits.

Any migrations to ATM in those early days of the technology would have been costly to potential users. As any network manager will tell you, the most important thing associated with network add-ons, changes, or upgrades is their cost. For this reason, ATM did not seem to be catching on as well as originally predicted.

Remember, ATM was originally conceived as a scheme for transporting broadband ISDN traffic. In the early 1980s, the ISDN standards were being developed. Unfortunately, there was never a solid agreement made between any of the large players in the ISDN market at an early point in ISDN's conception period. As a result, everybody in the business went off on their own and developed ISDN switches, phones, interface cards, and devices. The problem with this was that just calling something ISDN-compatible or ISDN-capable did not necessarily mean that it would operate or function with something else calling itself ISDN-compatible or ISDN-capable.

Just because you had an ISDN phone made by GTE, it did not mean that this phone would work with your AT&T System 85 switch, even though both of these devices were supposedly ISDN-compatible or ISDN-capable. This caused a lot of confusion in the marketplace. At the time, the U.S. Air Force purchased several ISDN systems, and the result was an absolute nightmare because the phone switches and the phone devices themselves were not compatible with each other. ATM has never had to deal with all of this confusion and frustration. When ATM began to come into its own as a new technology, the ATM Forum was formed in order to avoid the fiasco that ensued following ISDN's introduction to the world.

The ATM Forum became a very important part of completing the puzzle as far as development of ATM was concerned. An e-mail list provided constant, efficient updates of the notes and minutes of any ATM Forum discussions, panels, or meetings. I believe that, as a result of the ATM Forum and its continued functioning, ATM will move forward with more momentum than other technologies have in the past. Once the issues of cost and availability have been properly addressed and handled, ATM will no doubt become a formidable foe to Frame Relay technology.

The next section compares Frame Relay and ATM, because Frame Relay really is the number-one wide area network (WAN) scheme being chosen for use today. ATM will no doubt surpass Frame Relay eventually, but for right now, it is important to look at ATM and Frame Relay on a comparative basis.

Comparing ATM to Frame Relay

As you recall from Chapter 17, Frame Relay is a new slant on the old X.25 technology. X.25 had reached the point where it had long outlived its usefulness as a WAN technology, so Frame Relay developed out of a need for an improved transmission scheme. ATM is really the next step as far as WAN and Local Area Network (LAN) technologies are concerned. X.25 and Frame Relay were simply WAN technologies, but ATM can bridge the gap between LAN and WAN.

ATM is a true border-crossing technology, as it provides for LAN transmission schemes as well as WAN transmission schemes. In essence, ATM is an all-encompassing technology that allows network managers to own a one-stop shop as far as network topology and transmission are concerned. Frame Relay does not do this. Frame Relay is a great method of connecting various LANs together. However, Frame Relay is not a technology that can be used between the various users, terminals, and equipment on a LAN.

ATM can do this and more. With ATM interface cards in user Personal Computers (PCs) and terminal equipment on a LAN, there can be a seamless flow of data through the LAN architecture and out into the WAN architecture as well (see Figure 18.1).

In other words, data that originates in a user's PC can flow through the ATM LAN topology to an ATM switch/router point on the same topological arrangement. From this ATM switch or router, the data can then be sent along either a Permanent Virtual Circuit (PVC) or a Switched Virtual Circuit (SVC) and arrive at a distant location on the ATM backbone.

FIGURE 18.1. ATM LAN/WAN basic topology.

So what's the bottom line? ATM will do away with the distinction between a LAN and a WAN. By integrating LAN and WAN resources into one network, ATM is able to provide for the networking needs of everything from telephones to computer networks. Frame Relay is not suitable for doing anything in a LAN environment. Comparing ATM to Frame Relay is not really fair. It's sort of like trying to compare apples and bananas. The following section discusses some of the reasons why a network manager would want to use ATM technology.

Why Use ATM?

One of the first things any network manager needs to do when deciding on a network topology is to look at all of the pros and cons of each technology being considered. One of the first pros of ATM and a major reason why you would want to switch to it is that it obliterates the wall between WAN and LAN. This means that ATM networks are truly one-technology networks. Most of the enterprise networks in operation today are hodgepodges of different technologies. There may be some AppleTalk technology on some of the network's LANs, Windows NT on others, and perhaps Novell or something else on the remaining LANs.

When all kinds of different technologies are being used on each or all of the network's LANs, it means that the WAN portion of the network will be responsible for providing some commonality between the LANs on the network. This is where things can get a bit trickier. If the WAN portion of the network is using X.25, then most likely the speed of the WAN itself will play a big part in network congestion and problems. On the other hand, if the network's WAN is on a Frame Relay network, there may be no congestion problems right now. In the future, as Frame Relay networks become more crowded (depending on the vendor, the network used, and so on), there could be difficulties in meeting the needs of the WAN in terms of committed information rates (CIR).


NOTE: For a better explanation of Frame Relay CIR, see Chapter 17, "Frame Relay."

The point of this entire discussion is that WAN technologies prior to ATM can hardly compare with the speed of ATM as a WAN solution. Wouldn't it be easier to have a single technology that is capable of carrying your enterprise network's traffic from all points to all others? Think of the ease of upgrade. ATM technology is completely scalable. In other words, as a network's capacity and requirements for more systems or bandwidth increase, additional ATM nodes running at higher speeds can simply be added to the existing network. This can all be done without having to redesign the existing network itself.

On present LAN topologies, whenever a hub or switch is added to the network, it's almost always done to accommodate the addition of new users, printers, or file servers. The beauty of ATM is the fact that when you add new switches, you increase the network capacity. When hubs and switches are added in present LANs, they do allow for more users and servers to run, but guess what? They do not add to the bandwidth capacity of the network as ATM does. In fact, they actually force all of the users, including the ones being added, to share whatever bandwidth resources the network already had. ATM's ease of expansion provides more bandwidth capacity as well as the ability to add more users, servers, or printers, and so on.

This ability to expand easily means that two very important things, time and money, are saved by using ATM. In terms of sheer speed alone, ATM can only improve the capabilities of any corporate network it is used on. On top of that, whenever you decide to upgrade a network built on shared-media LANs, the existing network will need to be redesigned. When you are using ATM technology as your backbone, the network design remains the same, regardless of the size of the network.

Think of the time and effort that you can save by using ATM. Suppose that you are the administrator of a LAN/WAN setup used by a major corporation. Imagine that you have six sites with LANs and all of them are connected together by a WAN consisting of Plain Old Telephone Service (POTS) lines. You have speeds of 56Kbps on these POTS lines. Each LAN is using Ethernet technology behind their routers. Now, at one location, you are subnetting the LAN into smaller chunks of users (see Figure 18.2).

In Figure 18.2, five of the six LANs are depicted as simple, connected circles on the diagram. It is the sixth LAN that you are interested in looking at. LAN 6 has a second router tied to the back end of it, so the LAN is served by two routers. Let us assume there are other users tied off of these two routers, and that they are considered users of LAN 6. Suppose all of the Ethernet hubs on the back of router 1 are full and that the hubs on the back of router 2 (the subnetted portion of the LAN) are also full. Here's where the agony comes into play for any LAN administrator. One day, management will inform you there is a need to place five more users on the back of router 2. At this point, two things will happen. You will need to put a new hub on the back of router 2, and all of the bandwidth being used by router 2 to get over to router 1 will now be shared by these five new users.

FIGURE 18.2. Example LAN/WAN (six LANs with POTS WAN).

What just happened? By placing new hubs onto an existing LAN, that medium is shared by all. In other words, the users on this new hub will have to compete with every other user on all of the other hubs connected to router 2. Everyone just lost a little chunk of bandwidth on the Ethernet backbone of router 2. With ATM in the LAN, this is not the case. An ATM switch used on a LAN arrangement will provide each of the LAN's users with a switched connection. It's not a broadcast technology like Ethernet.
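
To put some rough numbers on this, here is a back-of-the-envelope sketch in Python. The figures are hypothetical (a classic 10Mbps shared Ethernet segment with 20 existing users behind router 2), but they show how every added station dilutes everyone's share of a broadcast medium.

    # Rough illustration of shared-media bandwidth dilution (figures are hypothetical).
    SEGMENT_BANDWIDTH_MBPS = 10.0      # shared Ethernet segment behind router 2

    def per_user_share(active_users):
        """Average bandwidth available to each user on a shared broadcast segment."""
        return SEGMENT_BANDWIDTH_MBPS / active_users

    before = per_user_share(20)        # 20 existing users -> 0.50 Mbps each
    after = per_user_share(20 + 5)     # add 5 more users  -> 0.40 Mbps each
    print(f"before: {before:.2f} Mbps, after: {after:.2f} Mbps")

    # On a switched ATM LAN, each station gets its own connection to the switch,
    # so adding five stations does not dilute the bandwidth of the existing ones.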

As the signal leaves each user's PC and is sent to the ATM interface device on the LAN, the connection is separate from each other user's connection. So, looking back at this little example, all of the users on router 2 will begin to experience longer delays when using the LAN. An ATM LAN would not have created this new delay as a result of adding five users (see Figure 18.3).

Suppose that instead of standard routers on LAN 6, the subnetted LAN 6 has an ATM LAN covering the areas previously covered by routers 1 and 2. If you needed to place five new users onto this LAN, they would get their own direct connection into the ATM switch, hub, or bridging device. Further, when they are connecting with someone, either on the same LAN or across the WAN, they are given an actual individual connection across the ATM medium. They are not in a situation where they are sharing a broadcast medium as they would be if they were on an Ethernet LAN. As you can see, this would have no impact on the speed and service of either the new users or the existing users. All will get their own connection-based path through the ATM medium.

FIGURE 18.3. Example LAN/WAN (LAN 6 improved with ATM).

The next section discusses the inner workings of ATM and how it passes data from end to end based on connection-oriented services.

How ATM Works

ATM is not as big a mystery as some would have you believe. When ATM was first announced as the end-all, be-all of data networking WAN technologies, many people were afraid of the enormous costs associated with implementing it. Instead of trying to figure out what all the fuss was about, people decided ATM was beyond reach as a result of its enormous cost. If they had taken the time to learn how ATM works and how it is able to attain the enormous speeds with which it can transfer data, they might have thought twice before deciding ATM was beyond reach.

In a basic sense, ATM was conceived along the same lines as Frame Relay. It depends on clean, crisp data connections between service points on the network. Like Frame Relay, ATM does not involve itself in the business of verifying any errors in the data it is transferring from place to place. Also, like Frame Relay, ATM simply relies on the sophistication of the user equipment and endpoint devices on the network when it comes to error control and retransmission.

ATM uses fixed-size packets when transferring information. These packets are known as cells in the ATM world. Each ATM cell is actually a small packet of data exactly 53 bytes long. Within the 53 bytes, 5 bytes are used for address/header/descriptor information. More details on these 5 bytes are presented a little later. The remaining 48 bytes within the cell are used for the actual data, or information, being transferred through the ATM system. ATM is very adept at transferring voice, data, and video. The connections between two endpoints on an ATM network may use either a Switched Virtual Circuit (SVC) or a Permanent Virtual Circuit (PVC).
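
As a rough sketch of the arithmetic involved, the following Python fragment assembles a 53-byte cell from a 5-byte header and a 48-byte payload. The header contents here are just placeholders; the real fields are described in the sections that follow.

    HEADER_LEN = 5                       # address/header/descriptor information
    PAYLOAD_LEN = 48                     # user data (voice, data, or video)
    CELL_LEN = HEADER_LEN + PAYLOAD_LEN  # always 53 bytes

    def build_cell(header: bytes, payload: bytes) -> bytes:
        """Assemble one fixed-size ATM cell; short payloads are padded to 48 bytes."""
        if len(header) != HEADER_LEN:
            raise ValueError("ATM cell header must be exactly 5 bytes")
        if len(payload) > PAYLOAD_LEN:
            raise ValueError("payloads larger than 48 bytes must be segmented first")
        return header + payload.ljust(PAYLOAD_LEN, b"\x00")

    cell = build_cell(b"\x00" * 5, b"hello, ATM")
    assert len(cell) == CELL_LEN         # 53 bytes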

As you may recall from the previous chapter, Frame Relay also uses virtual circuits when transferring data from place to place. There is a big difference in the way ATM does the actual connecting of the virtual circuits. Whether the circuits are SVCs or PVCs, ATM will have pre-established the actual circuits ahead of time. Not only does this make it easy, but it is a major time-saver as well. The main benefits derived here are the processor and processing time factors, as both of these are greatly improved at the User-to-Network Interface (UNI) and the Network-to-Network Interface (NNI).

In the old X.25 days (as well as with Frame Relay), all of the connections were put up at subscription time with PVCs. This was an arduous process, and it consumed network time, which is valuable time. Likewise, if SVCs were going to be used, these connections would be made at the time of connection between points on the network. ATM does neither. It has a predetermined, mapped network to work with. All of the circuit information needed by ATM is preprogrammed before any data transference takes place. The ideal situation with ATM is when it is used on the Synchronous Optical Network (SONET). The reasons for this are, of course, speed and the fact that SONET also uses predestined addressing based on where the actual fiber connections physically exist. ATM fits well on OC-3 SONET carriers, and it provides a preordained address scheme for the network it carries.

How does this addressing scheme operate, and how does ATM know to get the data where it belongs within the network? As you recall, ATM has a header that is exactly 5 bytes long. This header provides the information needed by ATM at the UNI as well as at the NNI (see Figure 18.4).

FIGURE 18.4. ATM UNI and NNI cell headers.

UNI Cell Header

At the UNI, one byte (8 bits) of the header provides for 256 unique paths, called virtual paths at the UNI. Within the header, the byte that describes these 256 paths is called the Virtual Path Identifier (VPI). Next, there are two bytes (16 bits), the Virtual Channel Identifier (VCI), used to define up to 65,536 virtual circuits on each of these virtual paths. The diagram in Figure 18.4 shows the order in which the bits are loaded into the header, in hierarchical order from top to bottom. You'll notice that some other fields are also used at the UNI level. One section of 4 bits is known as the Generic Flow Control (GFC) information field. There is also a 1-bit field called Cell Loss Priority (CLP), and 3 bits are reserved as the Payload Type field. At the NNI, however, this header information is definitely much different than it is at the UNI interface point.
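
To make the bit layout concrete, here is a small Python sketch that packs these UNI fields into the 5-byte header in the top-to-bottom order just described: GFC (4 bits), VPI (8 bits), VCI (16 bits), Payload Type (3 bits), CLP (1 bit), and the HEC byte, which is covered later and left at zero here. The field values are made up for illustration.

    def pack_uni_header(gfc, vpi, vci, pt, clp, hec=0):
        """Pack a 5-byte UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8)."""
        assert 0 <= gfc < 2**4
        assert 0 <= vpi < 2**8           # 256 virtual paths at the UNI
        assert 0 <= vci < 2**16          # 65,536 virtual circuits per path
        assert 0 <= pt < 2**3 and clp in (0, 1)
        word = (gfc << 36) | (vpi << 28) | (vci << 12) | (pt << 9) | (clp << 8) | hec
        return word.to_bytes(5, "big")

    header = pack_uni_header(gfc=0, vpi=42, vci=1001, pt=0, clp=0)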

NNI Cell Header

Reviewing Figure 18.4, you will notice that at the NNI, things are a little different. The VPI section of an NNI header (12 bits) will define 4,096 distinct paths. These VPIs will not refer to physical addresses as was the case in the UNI cell header structure. Rather, the VPI in an ATM NNI cell refers to network virtual paths between ATM switches on the ATM backbone network. The VCI is, once again, 16 bits long just as it was in the UNI cell structure. Once again, this means you have the capability of placing 65,536 virtual circuits on each virtual path. Also, there is a 3-bit Payload Type field and the 1-bit CLP field as was the case in the UNI cell header.
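
The NNI version differs only in those first two bytes: the GFC field is gone and the VPI widens to 12 bits. A matching sketch, with the same caveats as the UNI example above:

    def pack_nni_header(vpi, vci, pt, clp, hec=0):
        """Pack a 5-byte NNI cell header: VPI(12) VCI(16) PT(3) CLP(1) HEC(8) -- no GFC."""
        assert 0 <= vpi < 2**12          # 4,096 virtual paths between switches
        assert 0 <= vci < 2**16
        assert 0 <= pt < 2**3 and clp in (0, 1)
        word = (vpi << 28) | (vci << 12) | (pt << 9) | (clp << 8) | hec
        return word.to_bytes(5, "big")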

When comparing the two structures in Figure 18.4, you'll notice that the "bottom" 3 bytes of the cell headers are identical. It's the top 2 bytes of the cell that look different. This is, of course, due to the difference in the VPI (8 bits at UNI, 12 bits at NNI). The HEC field will be discussed in the section "The HEC: What Does It Do?"

The next section discusses the actual virtual circuits these ATM cells will follow. For the purposes of the following discussion, let's combine SVCs and PVCs and refer to the circuits in an ATM network simply as Virtual Connections (VCs). Each of these VCs will do the payload transference in the ATM network. Voice, data, and video can all be carried on ATM networks, but not over the same connection; each type of traffic requires its own independent and unique VC.

What Keeps the VCs Straight in an ATM Network?

As traffic moves through an ATM network, it will no doubt traverse several network switches, hubs, or routers. As this traffic moves through the network, it will do so on one of these virtual connections, the VC referred to earlier. VCs, by nature of the header VPI and VCI information, are able to move through network nodes with ease, as each one of them will have separate and unique numbers contained within their headers. As you recall, ATM circuits are preordained before data is actually transmitted over them. At each node, buffers are assigned on the transmit and receive sides of the VCs passing through.

Therefore, there are certain characteristics we know all VCs will have. First, the trunk the VC resides on will be either a PVC or an SVC. Second, a VPI and VCI will be assigned at various nodes throughout the network in addition to the VPI/VCI assigned at the User-to-Network Interface (UNI) points. The third item is the cell speed, or rate, that will be accepted over the VC in question. These three items are what make each VC unique within an ATM network. This is how the network can keep all of these cells (riding the VCs) straight.
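
One way to picture how a node keeps its VCs straight is as a lookup table, built when the PVC or SVC is established, that maps an incoming port and VPI/VCI pair to an outgoing port and VPI/VCI pair. The following Python sketch is purely illustrative; the port numbers and identifier values are made up.

    # Hypothetical per-node connection table, filled in when the circuit is established.
    # Key:   (input port, incoming VPI, incoming VCI)
    # Value: (output port, outgoing VPI, outgoing VCI)
    connection_table = {
        (1, 42, 1001): (3, 7, 88),
        (2, 10, 5):    (3, 7, 89),
    }

    def switch_cell(in_port, vpi, vci):
        """Forward a cell by rewriting its VPI/VCI on the way out the chosen port."""
        out_port, out_vpi, out_vci = connection_table[(in_port, vpi, vci)]
        return out_port, out_vpi, out_vci

    print(switch_cell(1, 42, 1001))      # -> (3, 7, 88)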

ATM Rates and the "Leaky Bucket"

As you recall from Chapter 17, there was a so-called "leaky bucket" algorithm that allowed users to pass a certain amount of data through the Frame Relay network at any given time. ATM is no different. The only difference in an ATM network is the terminology used. Rather than calling it a Committed Information Rate (CIR) as in Frame Relay, we call it a Sustained/Sustainable Cell Rate (SCR) in ATM. ATM uses a timed buffer just as Frame Relay does.


NOTE: For a more detailed explanation of Frame Relay CIR, timed buffers, and leaky buckets, see Chapter 17.

There is a major difference, though. In ATM, the actual switching of the cells is done in hardware rather than in software. Because each cell is exactly 53 bytes in length, the buffers are much more efficient than in Frame Relay networks. The buffers can "know" exactly the length of each cell passing through. This allows for a more efficient leaky bucket to operate within ATM network nodes and equipment.

Remember the CLP bit shown in Figure 18.4? Well, that CLP bit does the same thing that Frame Relay Discard Eligibility (DE) bits do when a Frame Relay CIR rate has been surpassed. Frame Relay nodes would mark the DE bit in their packets when the packets had exceeded the CIR of the circuit. ATM does the same thing, but it uses the CLP bit. As you may have guessed, this happens when the SCR rate mentioned earlier has been surpassed.
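
A minimal sketch of the idea follows, assuming cells arrive with timestamps and that the policer simply tags nonconforming cells rather than dropping them. Real ATM policing (the Generic Cell Rate Algorithm) is considerably more involved; the rates and bucket size here are invented for illustration.

    class LeakyBucket:
        """Toy policer: set a cell's CLP bit when the sustained cell rate is exceeded."""
        def __init__(self, scr_cells_per_sec, burst_cells):
            self.drain_rate = scr_cells_per_sec   # how fast the bucket empties
            self.capacity = burst_cells           # how much burst is tolerated
            self.level = 0.0
            self.last_time = 0.0

        def admit(self, arrival_time):
            # Drain the bucket for the time that has passed, then account for this cell.
            elapsed = arrival_time - self.last_time
            self.level = max(0.0, self.level - elapsed * self.drain_rate)
            self.last_time = arrival_time
            if self.level + 1 > self.capacity:
                return 1                          # CLP = 1: over the SCR, discard-eligible
            self.level += 1
            return 0                              # CLP = 0: conforming cell

    bucket = LeakyBucket(scr_cells_per_sec=1000, burst_cells=50)
    clp_bits = [bucket.admit(t / 10000.0) for t in range(200)]   # cells every 0.1 ms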

The next section discusses the Header Error Control (HEC) byte contained in all ATM cell headers.

The HEC: What Does It Do?

Within each of the cell headers outlined in Figure 18.4, there was a byte (8 bits) called an HEC field. This field has a very important job within ATM networks. As cells are passing through the nodes on an ATM network, there is a little bit of poking around done as far as errors are concerned. Yes, it is true that ATM does not do any error checking per se. However, as each node "sees" cells passing through the network, if there are slight errors on the cells (usually involving only 1 bit), the nodes can use the HEC field and its associated 8 bits as a means to perform a bit of forward error correction.

Remember, this error correction can only work when there is a slight error in the cells passing through. It is rarely done, but the capability does exist. ATM does do a little tinkering when it comes to error control. This little bit of tinkering, coupled with the quality of today's high-speed transmission lines and ATM's speed, makes for an extremely stable means of transferring data.

This HEC field has a second use as well. As cells traverse the various nodes and points within an ATM network, the nodes themselves can use the HEC field as a good gauge of how well cells are "flying" through their ports. As long as the switches, or nodes, can constantly be allowed to monitor the HEC fields of cells passing through them (and they can), this can provide an excellent means of synchronization. Once again, the leaky bucket will apply here, but you can see where a steady flow of good HEC bytes in each cell could provide an awesome means of timing and synchronization for an ATM network.
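
For the curious, the HEC is commonly described as an 8-bit CRC computed over the first four header bytes, using the generator polynomial x^8 + x^2 + x + 1, with the result XORed with the fixed pattern 0x55 before it is placed in the fifth byte. A sketch of that calculation in Python follows; treat it as illustrative rather than a conformance-tested implementation.

    def compute_hec(header4: bytes) -> int:
        """CRC-8 over the first 4 header bytes (poly x^8 + x^2 + x + 1), coset 0x55."""
        crc = 0
        for byte in header4:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc ^ 0x55

    hec = compute_hec(b"\x00\x2a\x3e\x90")   # value the fifth header byte would carry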

The next section covers the actual payload section of ATM cells. After all, it is the data or payload of an ATM cell that must be reliably transmitted from place to place.

What Is This Thing Called the AAL?

Even though ATM is an extremely fast, reliable, and stable means of transmitting data from place to place, there is still going to be a certain amount of cell failure from point to point on the network. It simply can't be helped. This is why the ATM Forum mentioned at the beginning of the chapter has been so important in ATM's development cycle. The ATM Forum defined a layer of ATM transmission known as the ATM Adaptation Layer (AAL). This layer is where the rectification of bad, lost, or damaged cells and other errors takes place.

It's actually a software and/or firmware layer. It comprises two stages, or levels, of operation. The first stage is called the Convergence Sublayer (CS), and the second stage is called the Segmentation And Reassembly (SAR) sublayer. The CS is the part that defines traffic types such as data, voice, or video. The CS also takes care of error control and the sequence and size of the information contained in the cells themselves. This is the first part of the process. After the CS has done its job, the SAR comes along and converts this raw data into 48-byte-long pieces of data. These are the chunks of data that will make up each cell's payload section during transmission through the network.
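
A much-simplified sketch of the SAR step: pad the converged data out to a multiple of 48 bytes, then slice it into 48-byte payloads, each of which is paired with a header in the next step. Real AALs also add their own sequence numbers, length fields, and trailers, all of which are omitted here.

    def segment(data: bytes) -> list:
        """Slice raw data into 48-byte payload chunks, padding the final chunk."""
        padded = data.ljust(-(-len(data) // 48) * 48, b"\x00")   # round up to a 48-byte multiple
        return [padded[i:i + 48] for i in range(0, len(padded), 48)]

    payloads = segment(b"A" * 200)           # 200 bytes -> 5 payloads of 48 bytes each
    assert all(len(p) == 48 for p in payloads)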

The final part of the process is where the SAR earns its paycheck. The 5-byte header must be attached to each of these chunks the SAR just created. It is the job of the SAR to make sure that all of the information in the header is correct, in the right place, and properly attached to the 48-byte payload. After this has been done, a perfect, 53-byte cell has been created that is ready for transmission. As if we did not have enough acronyms to worry about, the ATM Forum saw fit to create five unique and independent AAL service structures. They are called AAL1, AAL2, AAL3, AAL4, and AAL5.

There's a reason why the ATM Forum came up with five adaptation layers. Each one is designed for different types of payloads. The AAL layers are put into place at the transmit node on an ATM network. It is the function of the receive node to simply receive the cells, break them down properly, and then hand them off to whatever process they belong to.

The following sections look at each of the five AALs and the types of traffic they handle.

AAL1

AAL1 is the basic service structure that handles voice traffic on ATM networks. You may think this is an easy task, but it does require a bit of work on AAL1's part. In order to keep voice coherent from end to end, AAL1 must ensure that each cell is transmitted in perfect sequence. If any of the cells are out of order, the people on each end would sound to one another as though they were speaking a foreign language.

Further, if the cells are sent too close together, the humans would not be able to distinguish the beginning and ending of some of the words being spoken. AAL1 is ready for this natural pausing humans do when speaking. It ensures that the sequence numbers assigned to the cells it creates identify what parts of the cell(s) include voice traffic and what parts of the cell(s) include nothing due to silence. For this reason, many people refer to AAL1 and its ATM services as "streaming mode" ATM.

AAL2

This service structure concerns itself with video transmission service through ATM networks. Like AAL1, AAL2 must do sequencing and synchronization of the cells created by its process. However, there is also an added gimmick. There are error-checking codes called Cyclic Redundancy Checks (CRCs) used in the transmission of video. This is due to the large amount of information concerning each pixel of a video transmission.

AAL2 also does some labeling of each of the cells being transmitted. This cell labeling helps the end video device know where the information for each of the refreshed screens (or frames) of video starts and ends. This is extremely important in video transmissions. Each cell is either the first, a middle, or the last cell of each frame's transmission. The amount of pixel information transmitted during a video transmission through a network depends on how much movement is actually taking place. AAL2 can provide variable bandwidth on demand when handling video transmissions by using this labeling method. If there is little motion being made by the video's subject, then the number of cells being transmitted will be smaller than when the video's subject is moving around a lot.
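
Purely to illustrate the labeling idea described above, here is a toy Python sketch that tags each cell of a single video frame as the first, a middle, or the last cell. The label names are invented for readability and are not the actual AAL2 field encodings.

    def label_frame_cells(num_cells):
        """Tag each cell of one video frame as FIRST, MIDDLE, or LAST (illustrative only)."""
        labels = []
        for i in range(num_cells):
            if i == 0:
                labels.append("FIRST")
            elif i == num_cells - 1:
                labels.append("LAST")
            else:
                labels.append("MIDDLE")
        return labels

    # A frame with little motion needs fewer cells than one where the subject moves a lot.
    print(label_frame_cells(3))     # ['FIRST', 'MIDDLE', 'LAST']
    print(label_frame_cells(12))    # more cells when there is more motion to describe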

AAL3/AAL5

The reason I have grouped AAL3 and AAL5 together is that they are both designed to be used with Connection-Oriented Protocols (COP). A COP is a situation where an actual connection must be established between the sending and receiving members of a data transmission. A good example of this is the Internet, which uses TCP/IP. Whenever someone goes out on the World Wide Web and attaches to a certain resource or home page, he has put up that "connection" prior to any traffic being passed back and forth between the Web resource and himself. Hence, the term connection-oriented applies here.

Both of these AAL services will perform error control, sequencing, and identification within the cells as they are created for transmission. All of this information will be placed within the cell's structure, as the connection-oriented data process using these cells will need all of this information in order to pass data back and forth over the ATM network. AAL5 requires less overhead than AAL3, and relies on the fact that ATM is being used on clear, digital transmission facilities. For that reason alone, most ATM networks passing connection-oriented data will use the AAL5 structure when creating cells for transmission.

AAL4

This service is a lot different from AAL3 and AAL5. AAL4 is for connectionless transmissions of data. Naturally, this type of service would not work for either voice or video. Voice and video are indeed dependent on a distinct connection from end to end. Whether the connection is over an SVC or a PVC, that connection is in place. AAL4 does not rely on that connection. Instead, it just "releases" the data into the network and relies on the network to somehow get the data to its destination. There is actually a 10-bit field called a Multiplex Identifier (MI) that is used by AAL4 in order to get the data to its rightful destination.

ATM: Putting It All Together

Now that you know all about cells and such, imagine a situation from one LAN user to another through an ATM network. Assuming a non-perfect world, imagine a fictitious user (User A) on an Ethernet LAN in Chicago and a fictitious user (User B) on a Token Ring network in Colorado Springs. If the two LANs are being tied together by an ATM Wide Area Network (WAN), what will generally take place is shown in Figure 18.5.

FIGURE 18.5. User A and User B example.

User A in Chicago will send out the data destined for User B in Colorado Springs. Say it's a picture/graphics file of some sort. Perhaps User A will compose an e-mail message on the LAN in Chicago using Microsoft Mail. He will then attach the picture/graphics file to the e-mail message. After Microsoft Mail has determined that this e-mail is not addressed to anyone on its LAN, it will send the e-mail message along to the router or domain controller in charge of its LAN's inward and outbound traffic.

This was all done over the Ethernet. Once the router receives the message from User A's PC, it will then recognize (through the use of its routing tables) that the message needs to be placed out on the WAN. Remember, this fictitious WAN is ATM-based. The router would send its data to an ATM "box" of some type that would contain the software/firmware/hardware needed to get the router's data into ATM-acceptable chunks. This box may be a Channel Service Unit (CSU) or it may be an actual ATM switch with built-in CSU functions. Either way, this ATM box will then convert the data into 53-byte cells.

These cells will be "zapped" out onto the ATM Virtual Connection (VC) predetermined between User A's location to User B's location.


NOTE: Remember, ATM uses predefined virtual circuits between each location on its network.

After these cells arrive (rather rapidly) at User B's ATM box, CSU, or switch, they will all be broken down again into a form of data User B's router can recognize. After the ATM interface equipment has done that, it will pass this data over to the router controlling inward and outward traffic on User B's LAN. At that point, this router will place the information out on the Token Ring LAN in Colorado Springs that User B is a part of. The LAN will deliver the e-mail to User B's PC, where he can then open the message and view the attached picture/graphics file. That's all there is to it. Although this example may seem simplistic, it is a good example of where ATM technology will fit into the picture.

Summary

Many organizations have LANs in place that will meet their needs for years to come. Simply replacing these LANs with ATM LANs in order to keep up with technology makes absolutely no sense. However, tying these LANs together with ATM technology makes a lot of sense. Hopefully, as new organizations begin to create their own LANs in different locations, they will base the topology of their LANs on ATM technology. This makes for a very easy transition to ATM wide area service between newly formed LANs in an organization's structure. ATM technology will no doubt be as commonplace as the telephone in a few years, but for now, it still remains the "technophile's" network of choice and the manager's last choice when considering cost. ATM will progress a lot further than ISDN because it is being handled and "groomed" much better than ISDN ever was. The ATM Forum is seeing to that. All we can do is take a wait-and-see attitude.

ATM will no doubt be the LAN and WAN technology of the future. It only makes sense to use fixed-length data cells and to carry these cells from one user to another. After all, what is the main purpose of any network? We know that the effective and efficient transfer of data from one point to another can be accomplished by using one of many technologies. However, we also know that ATM is without a doubt the most effective and efficient way to do this. Once the costs of ATM interface boards for PCs, ATM switches, bridges, hubs, and routers have reached an affordable level, network managers will jump on the ATM bandwagon with both feet.

As you recall from the information given in this chapter, data from one user's PC will basically travel the entire network path to the other user's PC with very little interaction from software processes. Hardware and firmware processes do most of the processing and routing of data in ATM networks, and this is what makes the movement of data so fast and efficient. As the line between what constitutes a LAN or a WAN becomes blurred, it is ATM that will bridge the gap and provide network managers with a one-technology solution to all of their network traffic. This will happen very soon, and many vendors are poised to begin selling ATM hardware and equipment. ATM will become the network topology of choice.



