High-Performance Networking Unleashed



- 30 -

Looking to the Future

by Mark Sportack

Looking into the future is always a risky proposition. Fortune tellers make a living by predicting the future for those gullible enough to believe them. The good ones can surmise enough about their customer, based on appearance, mannerisms, and even their statements, to make open-ended but educated guesses about that customer's future. If all else fails, they can always claim that only time will tell if they were right and, as long as there is hope for a tomorrow, there will always be hope that their predictions will still materialize. The future of high-performance networks is no exception.

Trade publications abound with speculation from "experts" about what the future holds. Nevertheless, there are only two certainties in networking. The first is constant change. The second is that constant change will be driven by the customer's constantly changing requirements. Given this, it seems prudent for the techno-fortune teller to study the customer's present and recent past for clues about the future.

These clues can be found in the trends that have emerged in the networked computing industry during the past few years. Specific topics that warrant investigation are the recent evolutionary history of networked clients, distributed servers, and their application software and operating systems. The networking impacts of these trends can be identified and used as a basis for extrapolating future expected changes. This exercise creates a "window" for looking into the future of high-performance networking.

Net Trends

The first networked computing trend that can't be ignored is the Internet phenomenon. Commercialization of the Internet has proven to be a virtual Pandora's box. The millions of new Net users, both personal and corporate, have permanently altered the Internet in ways that are only now being understood.

The Internet, and its underlying protocol, the Internet Protocol (IP), were originally designed to facilitate research by interconnecting academic sites with government and private research centers. They were not designed to interconnect millions of highly dispersed individual users, nor were they prepared for the explosive growth that accompanied their commercialization.

This influx of new users threatened both the Internet and IP. The Internet found itself struggling to add capacity fast enough to meet demand, while the Internet Engineering Task Force (IETF) scrambled to develop ways to make the existing supply of IP addresses last longer and add longer-term support for expanded network layer functionality.

More importantly, commercialization of the Internet has generated a flurry of interesting new technologies, and new applications of some old technologies, that can only increase demands for bandwidth.

The Internet

The Internet itself suffered from commercialization's severe growing pains. It wasn't engineered for the massive increases in usage volumes that it experienced. Internet service providers found the booming business challenging, if not impossible, to keep up with. Consequently, the World Wide Web quickly turned into the World Wide Wait.

The Internet hype affected more than technologically proficient individuals. Even corporate America succumbed to the lure of the Internet. The Internet was viewed by many companies as a new, passive channel for reaching a focused and desirable market segment. Marketers in these companies scrambled to establish a presence on the World Wide Web (WWW or Web), lest they appear to be "behind the times."

Numerous lessons had to be learned about using this new electronic channel. Providing too little information, or presenting it in an uninspired manner, could turn off potential customers. Providing too much information could be overwhelming and result in time-consuming access. Failure to keep the site updated could also reduce the effectiveness of the site.

As companies struggled to learn how to take advantage of the commercial Net, a greater internal issue arose: Is having a Web site worth the cost? This generated an even greater frenzy to either directly generate revenues from Web sites or, at the very least, attribute some portion of sales to it. Unfortunately, if the Web site is just an electronic billboard, this can be almost impossible.

Expanding the functionality of a corporate Internet presence beyond passive advertising was, and still is, fraught with other risks for both customers and companies. For example, using a Web site as an active marketing and/or sales tool requires some degree of integration with existing systems and applications. Database and software vendors are doing their best to enable this integration by supplying the tools necessary for accessing databases from a browser.

Unfortunately, the data contained in, and used by, these systems is usually extremely sensitive and proprietary. Companies would have to make a "hole" in their corporate network's perimeter defenses to enable any advanced marketing or customer care functionality.

Customers, too, are subjected to an increased risk just by using the Internet for commerce. To conduct any transaction, or even query an account for status, requires them to transmit enough personal information, such as Social Security Numbers or Personal Identification Numbers (PINs), to uniquely identify themselves. Plus, if they are making purchases, they may have to provide a credit card number. This sensitive personal information must traverse an unsecured morass of networking facilities.

The "net" (pardon the intentional double entendre) effect has been an increased emphasis on network perimeter defenses, like firewalls, with sacrificial machines placed outside the firewall to proxitize inbound requests for data. In the future, these physical firewalls and proxy servers may go "virtual" as their functionality becomes embedded in the network layer protocols and mechanisms. Widespread use of X.509 certificates, in conjunction with network and host layer authentication, could enable customers to directly access and perform limited manipulation of their account data. Expect this to occur later rather than sooner.

Internet Protocol (IP)

The Internet Protocol is currently at revision level 4 (IPv4). It is rapidly approaching functional obsolescence on several fronts. This protocol was originally developed nearly two decades ago as a means of interconnecting the emerging Internet's computers. The total projected growth of this original Internet was optimistically estimated in the millions of computers. Thus, IP's 32-bit addressing scheme was deemed excessive: It is mathematically capable of supporting over four billion possible addresses. Therefore, address classifications could afford to have gaps between them measured in orders of magnitude. Other inefficient practices, too, helped to squander much of IPv4's theoretical scalability.
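For a concrete feel for these numbers, the following minimal Python sketch (Python and the listing itself are illustrative additions, not part of the original text) computes the size of the 32-bit address space and the per-network sizes of the old address classes, whose order-of-magnitude gaps are described above:

# A minimal sketch of IPv4's 32-bit address space and classful network sizes.
TOTAL_ADDRESSES = 2 ** 32        # just over four billion addresses
print("IPv4 address space:", format(TOTAL_ADDRESSES, ","))

# Hosts per network under the original classful scheme (before subnetting):
classes = {"Class A": 2 ** 24, "Class B": 2 ** 16, "Class C": 2 ** 8}
for name, hosts in sorted(classes.items()):
    print(name + ":", format(hosts, ","), "addresses per network")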

Beyond the address limitations, routing issues are also driving the development and deployment of a new IP protocol. IPv4 is also hampered by its two-level addressing hierarchy and its address classes. Its addressing hierarchy consists of only a network portion and a host portion. Even with provisions for subnetting, this simply does not allow construction of efficient address hierarchies that can be aggregated by routers on the scale that today's, much less tomorrow's, global Internet requires.
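To illustrate the kind of aggregation in question, here is a minimal Python sketch (using the standard ipaddress module; the prefixes are arbitrary examples) that collapses four contiguous /24 networks into a single /22 routing entry, exactly the hierarchy that classful addressing cannot express efficiently:

# A minimal sketch of route aggregation with Python's ipaddress module.
import ipaddress

# Four contiguous /24 networks (illustrative prefixes)...
nets = [ipaddress.ip_network("192.0.%d.0/24" % i) for i in range(4)]

# ...collapse into a single /22 that a router could advertise instead:
aggregated = list(ipaddress.collapse_addresses(nets))
print(aggregated)    # [IPv4Network('192.0.0.0/22')]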

Obsolescence also threatens the current IP from a functional perspective. Increasingly complex applications continuously impose new, unforeseen network performance requirements that traditional IP cannot deliver efficiently. Mobile users, individual users, streaming video and audio broadcasts, and the need for improved security are straining the limits of IPv4.

The next generation of IP, commonly known as IPng (Internet Protocol Next Generation) but more correctly identified as Internet Protocol Version 6 (IPv6), resolves all of these issues. It will offer a vastly expanded addressing scheme to support the continued expansion of the Internet, and an improved ability to aggregate routes on a large scale.

IPv6 also contains support for numerous other features such as real-time audio and/or video transmissions, host mobility, end-to-end security through network layer encryption and authentication, as well as auto-configuration and auto-reconfiguration.

Despite the potential benefits of IPv6, the migration from IPv4 is not risk-free. The extension of the address length from 32 to 128 bits automatically limits interoperability between IPv4 and IPv6. IPv4-only nodes cannot interoperate with IPv6-only nodes because the address architectures are not directly compatible.
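A minimal Python sketch, again using the standard ipaddress module, makes the size difference and the so-called IPv4-mapped transition form visible. Note that the mapped notation is only a representation; it does not by itself let an IPv4-only node talk to an IPv6-only node:

# A minimal sketch of the 32-bit versus 128-bit address formats.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")                # 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")              # 128-bit IPv6 address
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")    # IPv4-mapped IPv6 form

print(v4.max_prefixlen, v6.max_prefixlen)   # 32 128
print(mapped.ipv4_mapped)                   # 192.0.2.1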

Most companies' IP networks will be impossible to "flash cut" to IPv6. The number of users, devices, and applications that use IP makes this a logistical impossibility for all but the smallest companies. Instead, the usual processes of acquisition and maintenance will probably result in IPv6-capable devices being deployed in a highly decentralized, piecemeal fashion. It is equally likely that some individuals and/or organizations within companies will want to take advantage of IPv6 functionality before the network infrastructure is ready for it. This can result in widespread breakdowns in network connectivity that severely disrupt service and potentially cripple operations.

Given that operators of large, distributed IP networks can expect lengthy migrations from IPv4 to IPv6, this address incompatibility has the potential to severely disrupt service across the network if the conversion is not carefully planned and implemented!

This issue requires the attention of system architects, application developers, and anyone else who uses an IP network, now! IPv6 products won't be out for some time yet, but they are coming. Getting up to speed on IPv6 features and migration issues now will afford the opportunity to integrate one or more of IPv6's many powerful new features into future application development efforts.

More importantly, because IPv4 is not directly compatible with IPv6, an understanding of the differences and the various transition mechanisms will facilitate the planning of a graceful migration of clients, systems, and applications without experiencing service outages.

Intranets

As companies were experimenting with using the Internet as a marketing tool, a funny thing happened to their internal IP Wide Area Networks. They began turning into "intranets." Users, once accustomed to gathering the hyperconnected information available on the Internet's World Wide Web, began to experiment with developing and publishing their own Web content. This content, usually resident behind firewalls or other perimeter defense mechanisms, could not be accessed from outside their company's domain. Therefore, it couldn't really be considered part of the World Wide Web. For that matter, their private WAN couldn't be considered part of the Internet. The new hybrid was dubbed "intranet."

Intranets underwent a transition that brought them from their nascent days as a techno-toy/Internet access mechanism to an undeniable business tool. This transition was facilitated by the increasing availability of Web-based tools. Software manufacturers realized that the real money to be made in Internet technologies was held by companies, not individual users. Consequently, they targeted this market aggressively, and developed everything from graphical user interface tools for the creation and management of sites and their content to new programming languages specifically designed for the Web environment. Database companies, too, provided APIs for their products, as well as middleware that would enable users to extract data from their databases using a browser.

In the future, the browser will become the ubiquitous, business-oriented presentation layer for virtually all applications on a company's intranet. This is due, in large part, to the ability to develop an application that can run on every client workstation without customization or modification. Rather than the user interface of custom-developed application software being developed for specific physical platforms, the browser becomes the client's logical presentation layer. Additionally, this paradigm can even off-load some of the server workload to the clients. Small but repetitive tasks can be made into "applets" that are downloaded to, and executed at, the client workstation, rather than consuming a server's relatively more expensive CPU cycles.

Extranets

The next subtrend in the ongoing Internet saga is likely to be the evolution of extranets. An extranet is an extension of the internal intranet (that is, access only to specific, required business systems) to external business partners. In other words, a limited trust is defined and established between the networked computing resources of two or more companies. This arrangement is a hybridized compromise between using firewalls and the unsecured Internet for internetworking, and directly connecting the two companies' WANs.

Any companies with interlocking business processes, such as joint ventures, partnerships, customers and suppliers, and so on, may benefit from an extranet. Properly designed and implemented, extranets offer maximum performance internetworking between the networked computing resources of different companies, with limited risk to both parties. The key words in the preceding sentence are: "properly designed and implemented."

Given the open nature of IP, extending connectivity to other IP WANs can be an extremely risky proposition. Even using a point-to-point private line to interconnect two or more IP wide area networks creates one big IP WAN, unless access is carefully restricted at each end of the interconnecting transmission facility, using something more stringent than just router access lists. Access can be inadvertently extended (or surreptitiously gained) to all the other IP domains, and their networked computing assets, that each business partner may have access to. This can have disastrous results. However, the potential benefits can be significant enough to warrant the risks, especially because the risks can be minimized through careful planning and network management.

In the not-too-distant future, extranets will emerge as the preferred interconnectivity vehicle between business partners. In the longer term, the distinctions between the Internet, intranets, and extranets are likely to be almost completely erased by technological advance. Improvements in network layer authentication, certification, and (to a lesser extent) encryption will probably give companies the ability to tear down their firewalls and other physical perimeter defenses of their WANs. In their place will be logically defined intranets and extranets. When this happens, we will have come full circle. The various nets will have reintegrated into a single, ubiquitous Net with logical, not physical, subdivisions.

Hyperconnectivity

Yet another Internet-related trend that will have further impacts on networking in the future is hyperconnectivity. Hyperconnectivity is the enabling technology that was responsible for the success of the World Wide Web by making the Internet accessible to the masses. Hyperconnectivity uses predefined hyperlinks to specific data.

Hyperlinks are the electronic cross-referencing of networked information. Hyperlinked documents "fetch" additional information for the user with a single mouse click. With this technology, a user can transparently navigate through a complex sequence of hosts searching for information without knowing anything about the host.

Although this is the very essence of hyperconnectivity's value, it is also its Achilles' heel. The user is almost completely insulated from any appreciation of the network intensity of each request. Files are accessed in exactly the same manner, regardless of whether they reside locally or half a continent or half a world away. The only clue users receive about the network intensity of any given request is the amount of time required to fulfill it.

Network intensity consists of two components: the amount of bandwidth required, and the total number of switches, routers, hubs, and transmission facilities that must support the requested transmission. Given that the Web is multimedia-capable, these requests can be for anything from another "page" created in the Hypertext Markup Language (HTML) that contains nothing but more hyperlinks, to data files, executable applets, high-resolution true-color graphics files, audio, or even video clips. In the hyperconnected World Wide Web (WWW), the user simply points at an active, or "hot," spot on the browser's screen, and the data is retrieved automatically and somewhat anonymously without any regard for network intensity.
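The point can be made concrete with a minimal Python sketch (standard library only; the URL is a placeholder chosen for illustration) in which the only things the "user" ever observes are the elapsed time and the byte count, no matter how many networks carried the request:

# A minimal sketch: fetch one hyperlinked resource and report the only
# two things the user ever perceives, the delay and the payload size.
import time
import urllib.request

URL = "http://www.example.com/"    # placeholder address for illustration

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    payload = response.read()
elapsed = time.perf_counter() - start

print("Retrieved", len(payload), "bytes in", round(elapsed, 2), "seconds")
# The hops, links, and devices that carried the request, in other words
# its network intensity, remain completely invisible to the user.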

This has already proved problematic for many companies whose WANs bogged down under the sudden traffic increase. The problem will soon get worse. Hyperconnectivity will soon transcend the Web and become the accessibility technology of the late 90s. Like the mouse a decade earlier, it has helped the masses overcome a technological barrier. It has already expanded the ranks of Internet/intranet users, driving up the demand for connectivity and bandwidth in the process.

In the future, hyperconnectivity will become the operational model for the User Interface (UI) of client operating systems. This is as dangerous as it is powerful. Having a "browser" as the operating system's UI means that a communications program is no longer merely a nice feature. It is the interface to all applications and data, regardless of where they reside. In fact, the contents of a user's hard drive and the contents of everything else connected to his network are presented to the user as if the sum total were resident locally.

This means that the network intensity of any given request will be almost completely shielded from the user by this UI. Such hyperconnected operating systems will almost certainly be accompanied by a significant increase in demand for LAN, and possibly even WAN, bandwidth.

Agents

The most recent branch of artificial intelligence (AI), agent software, joins the ranks of Internet-related trends that can drive future network performance requirements. Agents are small, autonomous programs that the user can customize to perform very specific tasks.

In a Web environment, an agent can be used to automate routine information retrieval. For example, an agent can automatically fetch headlines from your electronic news service. Some types of agents are even capable of self-learning. These agents can recognize their user's patterns of responses, such as always deleting political stories without reading them, and learn to not fetch things that will only be deleted.
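As a rough illustration of that learning behavior, the following minimal Python sketch describes a hypothetical agent; the names, threshold, and categories are invented for this example and do not correspond to any real agent product:

# A minimal sketch of a self-learning news agent. All names and the
# threshold below are illustrative assumptions, not a real agent API.
from collections import Counter

deleted_unread = Counter()     # category -> times deleted without reading
DELETE_THRESHOLD = 5           # assumed cutoff for "always deletes"

def record_deletion(category):
    deleted_unread[category] += 1

def should_fetch(category):
    # Stop fetching categories the user has repeatedly deleted unread.
    return deleted_unread[category] < DELETE_THRESHOLD

for _ in range(5):
    record_deletion("politics")    # user keeps deleting political stories

print(should_fetch("politics"))    # False: the agent stops fetching them
print(should_fetch("sports"))      # True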

Artificial intelligence, in the form of agent software, may have finally found its niche in the marketplace. Almost every existing means of information distribution in companies requires the sender to identify the recipients and "push" information through the infrastructure to the recipients. The single exception is the Web. The Web relies on hyperlinks that are established by site/content owners in anticipation of someone needing them. Users must find these links to information and use them to "pull" information through the networked computing infrastructure.

This "pull" paradigm has been used relatively successfully to date, but is not scaleable. Finding and retrieving information will become much more difficult as the universe of available information expands. Evidence of this is found in the success of commercial search engines. These services have proven themselves invaluable in assisting the search for information.

Current engines, however, require the development and maintenance of a key word index. Inter-site search engines typically launch a self-replicating, mobile process that explores links and looks for content. This process, commonly referred to as a "spider," forwards everything it finds back to the originating host. This host stores all content until the discovery process finishes. Then, the engine's cataloging facility develops its index. This process is extremely disk, CPU, and network intensive, all at the same time. Consequently, it is impractical to keep the index up to date in anything near real time. This limits the search engine's timeliness, usefulness, and scalability.
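A much-simplified Python sketch of that crawl-then-index cycle follows. The "crawled" pages are simulated in memory (the URLs and text are invented), but the shape of the process, collecting everything first and then building the key word index, is the same one that makes real spiders so resource-intensive:

# A minimal sketch of the spider-and-index process. The pages are held in
# memory here; a real spider would fetch them across the network, and the
# host would store them all until the crawl finished.
from collections import defaultdict

crawled_pages = {
    "http://www.example.com/a": "high performance networking trends",
    "http://www.example.com/b": "networking protocols and performance",
}

# The key word index is built only after all content has been collected.
index = defaultdict(set)
for url, text in crawled_pages.items():
    for word in text.lower().split():
        index[word].add(url)

print(sorted(index["performance"]))   # both URLs are returned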

An intelligent agent can be used for highly individual information discovery. The result of such a search is much more up to date and focused. This use of agents would consume less bandwidth, CPU cycles, and disk space on the search engine's host. Without an automated and individual search tool, people will be hard pressed to maintain their information discovery and retrieval tasks in addition to performing normal work responsibilities.

Regardless of how they are used, agents will leverage a user's time and increase both the efficiency and effectiveness of Web usage. As the volume of Web-based information grows, an agent may well become essential.

Though agents are not limited to Internet/intranet applications, the way they are used for these applications will determine what kind of impact they will have on networks. Properly used, they will save time and network/computing resources. Unfortunately, the automation that agents provide can easily be misused. Even modest misuses can cause increased consumption of bandwidth across both the LAN and WAN, even when their users/owners are not at work.

Multimedia Communications

Multimedia communications, as described in Chapter 27, include voice and/or video transmissions. These technologies have long been viewed suspiciously by management as having dubious business value (that is, more "toy" than "tool"). After all, why use a networked PC to emulate a walkie-talkie when there's already a telephone on every desk? Similarly, the quarter-screen "talking head" video of desktop videoconferencing over conventional LANs and WANs is slow and jerky. Worse, it doesn't really capture the body language and other subtle non-verbal communications that can be gleaned from face-to-face interactions. So there remains little compelling reason to invest in an expensive "toy."

Part of the reason behind the limited capabilities of current generation multimedia communications technologies has to do with the networks that must support them. Neither today's LANs and WANs nor their protocols are well-suited to transporting the time-sensitive data of a multimedia communications session. In such a session, packets that are delivered late, or even out of sequence, are simply discarded.

Today's network protocols tend to be much better at guaranteeing the integrity of each packet's payload, and more efficient at transporting bulk data than they are at guaranteeing timeliness of delivery. Error checking mechanisms ensure that, when an errored packet is discovered, it is discarded and the originating host notified of the need to retransmit.

Many of today's more common network protocols also use a flexible packet size. The larger the stream of data, the better the payload-to-overhead ratio can be, as the protocol simply expands the total size of the packet. This is wonderfully efficient at transporting conventional application data, but can wreak havoc on a time-sensitive application. Therefore, today's networks are 180 degrees out of phase with the performance requirements of multimedia communications, forcing them to share the network with packets of indeterminate size, and therefore indeterminate transmission durations.
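The contrast can be sketched with Python's standard socket module: TCP (SOCK_STREAM) buys integrity through retransmission, while UDP (SOCK_DGRAM) simply hands datagrams to the network, which is closer to what a time-sensitive media stream wants. The sequence number, frame size, address, and port below are invented for illustration:

# A minimal sketch of the integrity-versus-timeliness trade-off.
import socket

# Reliable, connection-oriented transport: damaged or lost data is
# retransmitted, so delivery is correct but not necessarily timely.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Unreliable datagram transport: nothing is retransmitted, so late or
# missing packets are simply gone, which suits real-time media.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# A media sender might prepend a sequence number so the receiver can
# discard anything that arrives late or out of order:
seq = 42
frame = b"\x00" * 160                        # illustrative audio frame
datagram = seq.to_bytes(4, "big") + frame    # fixed-size header + payload
# udp_sock.sendto(datagram, ("198.51.100.7", 5004))   # hypothetical receiver

tcp_sock.close()
udp_sock.close()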

Today's networking hardware is also ill-suited to multimedia communications for two basic reasons: access method and usable bandwidth. Contemporary LANs, in particular, force networked devices either to wait their turn to transmit (as in token passing) or to compete for access in a broadcast environment (as in contention-based schemes). Neither form of shared access is conducive to the latency requirements of multimedia communications.

The second reason that today's networks are ill-prepared for multimedia communications is that these applications can be extremely bandwidth intensive. These sessions, in addition to the "normal" workload of the LAN and/or WAN, can degrade the network's performance substantially.

To be fair, there are numerous other non-network-related factors that also impede the acceptance of multimedia communications technologies. Some of these other reasons include primitive drivers, lack of adequate desktop computing power, and the proprietary vendor product "bundles" that prevent communications with other brands. The net effect of all these factors is that multimedia communications technologies tend to be perceived as expensive "toys," rather than serious and legitimate business tools.

In the future, all the factors that are currently hindering the acceptance of multimedia communications will be removed. When this happens, it will be essential to already have a high-performance network in place that will be equally capable of satisfying the low latency requirements of multimedia communications and the data integrity requirements of traditional networked applications.

Higher performance LAN technologies will be needed that can provide the same high data rates to all connected devices, without forcing them to share, or compete for, available bandwidth. Simultaneously, they will lower network latency to support time-sensitive applications. This combination of high data rates and low latency will allow voice, data, and video communications to share the same infrastructure.

Extremely High-Performance Networks

Extremely high-performance, highly specialized "networks" will also appear in the not-too-distant future. Currently, networking computers requires several physical layers, as indicated in Figure 30.1.

FIGURE 30.1. Connecting computers to the LAN requires a sequence of physical layers, beginning with the CPU, memory, and I/O of each computer plus the LAN.

Of all the layers depicted in Figure 30.1, the LAN layer results in the greatest performance degradation. For example, a computer with a PCI bus is capable of raw I/O at just over 1Gbps. Connections to the LAN constrict this to 100Mbps or even 10Mbps, depending upon the LAN technology. The computer's network interface card (NIC) is responsible for providing the protocol conversion between PCI and the LAN, as well as buffering the PCI bus down to LAN performance levels.
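A minimal worked example (nominal raw bit rates, ignoring protocol and bus overhead) shows the scale of the mismatch, here as the time needed to move 100 megabytes of data at each rate:

# A minimal sketch of the bus-versus-LAN mismatch described above.
DATA_BITS = 100 * 8 * 10 ** 6      # 100 megabytes expressed in bits

rates_bps = {
    "PCI bus (~1Gbps)": 10 ** 9,
    "100Mbps LAN": 10 ** 8,
    "10Mbps LAN": 10 ** 7,
}

for name, rate in rates_bps.items():
    print(name + ":", DATA_BITS / rate, "seconds")
# PCI bus (~1Gbps): 0.8 seconds
# 100Mbps LAN: 8.0 seconds
# 10Mbps LAN: 80.0 seconds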

In the short run, this performance bottleneck will be addressed through numerous attempts at increasing the clock rate of LAN technologies. Today's 100Mbps LANs will become adequate only for general desktop connectivity. The more bandwidth-hungry servers will absolutely require several times this quantity. Consequently, LAN technologies with raw bit rates of approximately 1Gbps will emerge.

Ultimately, however, this paradigm will be made obsolete by implementing a switched connection directly at the I/O layer through extension of the computer's bus. This is known as I/O switching. I/O switching transfers responsibility for protocol translation and speed buffering to the switching hub. This relieves the computer of a substantial burden, thereby freeing it up to perform more germane duties. Figure 30.2 illustrates how I/O switching can expedite I/O by partially eliminating the LAN.

FIGURE 30.2. Connecting computers to the LAN using I/O switching is done by extending the computer's bus to the switching hub.

Later, switched connections directly to computer memory banks (called Direct Memory Access or DMA by one hardware manufacturer) will begin to proliferate in computer clusters and possibly even for inter-host communications within "server farms." Both I/O switching and DMA are highly specialized, high-bandwidth mechanisms that require very close physical proximity of the hosts they interconnect. Unlike I/O switching, DMA will likely only be used for inter-host communications in clustered computers.

Both of these technologies are subject to stringent distance limitations; therefore, they will be limited to niche roles in larger high-performance networks. They both, however, point out the fact that today's LAN and WAN technologies are already woefully mismatched against the computers they interconnect. And those computers will continue to experience increases in both speed and power at a fairly rapid pace for the foreseeable future.

Vision of the Future

These trends, and numerous others not mentioned here, demonstrate that the future of networks will be filled with at least as much radical change as they have experienced in the past decade. The changes will be necessary if networks are to keep pace with the rest of the information technologies.

These examples should also show that there is no single "killer app" that will determine the shape and substance of future networks. Unfortunately, that would be too easy. Instead, there will be numerous "minor" applications that will continuously push the development of ever higher performance networks.

The problem is that there will be several different definitions of the word performance. Multimedia applications will require very low-latency networks and guaranteed levels of service, whereas traditional application types will continue to place an emphasis on data integrity. Networks must become flexible enough to accommodate the many different types of applications, regardless of how conflicting their network performance requirements may be. Thus, they must simultaneously become capable of satisfying very dissimilar network performance requirements.

Today, the typical company maintains multiple communications infrastructures: one for voice, and at least one (possibly more) for data. In the future, there will only be a single, multimedia communications network infrastructure.

The multiple existing communications infrastructures will consolidate slowly, but steadily. Numerous upgrades to the LAN/WAN infrastructure will need to occur. WAN transmission facilities will eventually transition over to switched facilities. WAN routers, too, will eventually be replaced with a relatively small, high-bandwidth switch that functions as the multimedia communications vehicle at the premises edge. A premises edge vehicle is a device that interconnects the LAN with the WAN. The routing function will be driven to the edges of the network: the clients and servers.

LANs will continue to increase their switched port densities until there are no more "shared" LAN segments. They will complete their evolution into high-bandwidth switching hubs (or mini-PBXs, depending upon your perspective) when they embrace a network protocol that was designed as a high-speed, low-latency, port-switched protocol. These switching hubs will be distributed throughout the building. Installing them in telephone closets will minimize the distance to the desktop, thereby permitting higher bandwidth over most of the existing premises wiring.

This single multimedia infrastructure will contain familiar elements of today's LANs and PBXs. The PBX will become a software-driven premises edge switch responsible for "off-network" communications, that is, phone calls to destinations outside of the corporate network. Switching hubs will remain in the telephone closets and provide multimedia communications service to the users. These distributed switches will provide connectivity to a combination of telephony-capable computers. They will also interconnect with the PBX for "off-network" voice-only communications.

The users' stations will undergo a similar transformation. Currently, the user station features separate devices for access to the two communications infrastructures. These devices, the PC and telephone, will merge. They will integrate into a single multimedia information appliance. This information appliance will provide the user with an integrated platform for manipulating and sharing all types of information. Traditional desktop computing will be augmented by telephony applications, and both will enjoy a single connection from the information appliance into a common multiservice network.

These desktop telephony applications (for example, call management, voice mail management, and audio and video communications) will be available through a hyperconnected graphical interface on the PC or from a networked server. This represents the maturation of "multimedia computing" into a legitimate business tool. These applications will also drive the incremental integration of computing and telephony until a single infrastructure satisfies user requirements for both.

Having a physical networking infrastructure that is ready to meet the future expected challenges is not enough. These physical preparations must be complemented by the evolution of network protocols. As previously demonstrated, network protocols must be capable of simultaneously accommodating time-sensitive applications and legacy bulk data, queries, and transactions. These protocols also must be capable of scaling to higher transmission rates, and support migration away from routing in favor of switching across the WAN.

The emerging technologies and trends described in this chapter, as well as numerous others, all promise to make the future of high-performance networking even more exciting and dynamic than its past. The new demands that will be placed on networks are such that they can't be satisfied by following the historic precedent of simply increasing the network's clock rate. This might be adequate for some of the applications, and might even be necessary to accommodate the increased aggregate traffic load, but it is not a panacea.

The networks of the future must also provide native support for new features and functionality. These networks will be expected to provide Quality of Service (QoS) guarantees for time-sensitive traffic, network layer authentication and encryption for sensitive applications and data, as well as the continued ability to transport conventional data. As the future unfolds, networks that are not flexible and extensible and cannot be easily adapted to satisfy ever-changing customer performance requirements will quickly founder.



