Keynote for Internet'99 conference, Moscow, October 25-28

Challenges of the next generation networks

Christian Huitema, Telcordia Technologies Inc.

Abstract:

Next generation networks

In 1998, we started to hear a lot of talk about "next generation networks" (NGN). To many, NGN represent a fundamental redefinition of the telecommunication industry, a revolution brought about by the merging of voice and data, transmission and computing. In the end, this new technology may mean that communication companies are no longer even called "communication companies," but come to be viewed as a new category of service company that has not really existed before. The NGN would carry voice and data over the same infrastructure, unleashing the power of modern computing on the old communication services.

Figure 1. Number of addresses in the DNS

The first rationale behind the NGN movement is indeed the growth of data communication services, first and foremost the growth of the Internet. One way to assess that growth is to count the number of addresses in use in the Internet. Figure 1 is drawn using data from Telcordia's Netsizer project [1], complemented by historic data before 1997 and projections after 1999. It shows the steady increase in the network's size, as it reaches an ever larger portion of the world's population. To keep up with the growing demand, Internet providers keep increasing the capacity of their networks, at rates as high as 200% or 300% per year. In fact, Internet traffic has grown so much that, according to many estimates, its volume now equals or exceeds the volume of voice traffic, at least in North America. These estimates are, however, seldom backed by actual measurements. Indeed, different analysts have written that the volume of data equaled that of voice in 1997, in 1998 and then again in 1999, which is not quite compatible with a growth rate of 300%. One of the reasons for the fuzzy estimates is that it is hard to measure the actual volume of data traffic. It is however very clear that the installed transmission capacity of the Internet networks is on a par with that of telephone networks, and that it is growing much faster. It is also clear that there is a very high demand for high speed connections to businesses and to the home, as evidenced by the explosion of e-commerce and by the investments in cable modems and digital subscriber loops.
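
To see why those successive crossover dates are hard to reconcile, a simple back-of-the-envelope calculation helps; the growth rates below are illustrative assumptions taken from the figures quoted above, not measurements.

    # Illustrative arithmetic: if data traffic had equaled voice traffic in
    # 1997 and then kept growing at 300% per year (quadrupling), it could not
    # "equal" voice again in 1999. All figures are assumptions for illustration.
    data = 1.0   # data traffic, normalized to the voice traffic of 1997
    voice = 1.0  # voice traffic, assumed to grow only about 10% per year

    for year in (1998, 1999):
        data *= 4.0    # +300% per year
        voice *= 1.1   # +10% per year
        print(f"{year}: data/voice ratio = {data / voice:.1f}")

    # By 1999 data would already be more than ten times voice, so the
    # crossover can only have happened once.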

If data traffic is today on a par with voice traffic, and if its volume keeps doubling every year, then within a few years voice traffic will only account for a very small fraction of the networks' capacity. This means that new communication networks are built for data services, not voice. In the short term, these new networks are developed in parallel with the voice network. But we should also realize that in the long term, when voice requirements are only a fraction of the data networks' capacity, it will not make economic sense to maintain a parallel architecture for voice, and that voice traffic will simply be carried as an application over the data network. Experts may debate the exact timing of this transition. There is little reason, today, to discard the voice networks that already exist. But when new networks are being built, it makes sense to plan immediately for integration of services under the NGN architecture. The more modern packet switching technology is in fact less expensive to deploy than the classic architecture of time division switches and multiplexed transmissions, and it enables NGN services. This is the reason why it is more and more often adopted by competitive carriers. It should also provide immediate benefits to countries that need to develop their infrastructure, allowing them to jump directly to the most modern technologies. But, before we reap the benefits of the NGN transition, we must face several challenges.

The quality of service challenge

Integrating voice in a data network requires that the voice stream be carried as a succession of data packets. Data networks were not designed specifically to carry voice. The Internet routers don't make any special attempt to guarantee that a specific voice call will receive a regular and sufficient amount of service. In fact, the Internet routers don't even try to identify individual calls or connections. They simply route the packets as fast as they can, under what is known as the "best effort" service. As a result, individual packets experience different transmission delays and, in situations of congestion, may sometimes be lost. In the absence of special treatment, the variable delays would distort the voice, and the packet losses would result in intermittent bursts of noise.

Figure 2. Available modem speed

A first solution is to compensate for network deficiencies with computing power and software. The Real-time Transport Protocol, RTP [2], defines how timestamps and sequence numbers can be used to reconstitute a continuous signal even if packets incur variable delays. Missing packets can be reconstructed by using either interpolation or some form of forward error correction. Congestion can be eased by the use of compression. Supplementing networks with "intelligence at the edge" is in fact a fundamental principle of the Internet. It is this first set of solutions that allowed the early deployment of voice over IP.
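
As an illustration of the "intelligence at the edge" principle, the following sketch shows how a receiver might use RTP-style sequence numbers and timestamps to reorder packets and schedule their playout at a fixed offset. The field names and the 50 millisecond offset are simplifying assumptions, not part of RFC 1889.

    import heapq

    PLAYOUT_DELAY_MS = 50  # assumed jitter compensation margin

    class PlayoutBuffer:
        """A minimal playout buffer: reorder packets by sequence number and
        play each one at its timestamp plus a fixed playout delay."""
        def __init__(self):
            self._heap = []  # entries: (sequence number, timestamp, payload)

        def receive(self, seq, timestamp_ms, payload):
            heapq.heappush(self._heap, (seq, timestamp_ms, payload))

        def due_packets(self, now_ms):
            """Return packets whose playout time has arrived, in sequence order."""
            due = []
            while self._heap and self._heap[0][1] + PLAYOUT_DELAY_MS <= now_ms:
                due.append(heapq.heappop(self._heap))
            return due

    # Packets arriving out of order are still played in order:
    buf = PlayoutBuffer()
    buf.receive(seq=2, timestamp_ms=20, payload=b"frame-2")
    buf.receive(seq=1, timestamp_ms=0, payload=b"frame-1")
    print([seq for seq, _, _ in buf.due_packets(now_ms=80)])   # [1, 2]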

There are, however, limits to what even the most powerful edge device can do. Jitter compensation cannot lower the network delays, and compression can only reduce the bandwidth requirement to a few kilobits per second. To develop a fully integrated network, one must make sure that the network provides a minimal quality of service to voice calls: one-way delays that do not exceed 100 milliseconds, and available bandwidth of at least 20 kilobit/s per call. Some hope that the bandwidth problem will be solved naturally, as the overall network capacity increases and more and more high speed connections get deployed. In fact, there are all kinds of historic trends that point in this direction. For example, if we plot on a log scale the modem speeds available in past years, we see a trend that can be approximated by a doubling every 1.9 years [3]. Similar trends could be shown for the available transmission speed of optical fibers, or for the number of packets that routers can switch per second. This is the "rising tide" hypothesis, in which the increase in overall capacity will progressively render all kinds of applications very common on the network. Voice connections, after all, only require a narrow bandwidth, lower than the requirement of data services such as image transmission. An Internet that allowed unfettered data transmission would have few problems transmitting voice.
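
The "rising tide" extrapolation is easy to make concrete. The sketch below projects access speeds under the doubling-every-1.9-years trend of [3]; the 56 kbit/s starting point and the horizon are illustrative assumptions.

    # Extrapolating the "rising tide": if access speeds double every 1.9 years,
    # a 56 kbit/s modem connection of 1999 is overtaken as follows.
    start_year, start_kbps, doubling_years = 1999, 56.0, 1.9

    for year in range(2001, 2011, 2):
        speed = start_kbps * 2 ** ((year - start_year) / doubling_years)
        print(f"{year}: about {speed:,.0f} kbit/s")

    # Even a modest extrapolation leaves the 20 kbit/s needed per voice call
    # far behind within a few years.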

Figure 3. TCP connection delays

We are probably slowly reaching that point, but there remains a need to avoid transient queues, queues that are merely inconvenient for data transmission but that would impair sound reception. An example of this effect is shown in Figure 3, which plots the connection delays observed during about 40,000 TCP connection attempts over the Internet in the past year. The connection delay is essentially the sum of a network round trip delay and a variable processing delay in the server. We observe that in a very large fraction of the cases, the delays are lower than 200 milliseconds, which would be entirely adequate for voice conversation. However, in a significant fraction of the cases, the delays are very long, and would have caused trouble in a voice call. (We also observe that 6% of the connection attempts lasted more than 2 seconds, which means that a packet was lost and a retransmission was necessary.) To reduce the variability of the delay, we need to deploy additional controls in the Internet. We have every reason to believe that this challenge will be met. The work on differentiated services is progressing very fast, and existing products already make it possible to assure that privileged traffic will not suffer from congestion.
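
A measurement of the kind plotted in Figure 3 can be approximated with a few lines of code: time how long it takes to complete a TCP handshake with a server. The target host below is a placeholder, and this simple probe does not separate network delay from server processing delay.

    import socket
    import time

    def tcp_connect_delay(host, port=80, timeout=3.0):
        """Return the TCP connection delay in milliseconds, or None on failure."""
        start = time.time()
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
            sock.close()
            return (time.time() - start) * 1000.0
        except OSError:
            return None

    # Probe a placeholder host a few times and report the spread of delays.
    samples = [tcp_connect_delay("www.example.com") for _ in range(5)]
    samples = [s for s in samples if s is not None]
    if samples:
        print(f"min {min(samples):.0f} ms, max {max(samples):.0f} ms")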

The management challenge

Today, society relies heavily on the telephone network. We take it for granted that the telephone network will be "always there," that in case of emergency we can immediately call the police, the fire department or an ambulance. On the other hand, few of us would trust our lives to the Internet. We have seen too many servers remain unreachable for arcane reasons, too many instances of network failures caused by electric power outages or the proverbial backhoe. Failures that would be brushed aside when they only affected a small number of adventurous users become problems for society if a whole city is left without communication. When a network grows in scale to become ubiquitous, when it grows in importance to become a central infrastructure, it must become extremely reliable.

In fact, one of the primary design goals of the Internet Protocol was reliability. Most Internet applications are performed end to end, which means that they will be available as long as the two ends of the connection are up, and as long as the network is capable of sending some data. The condition that the two ends be available is hardly a condition at all -- obviously, if your phone is broken, you will not be able to communicate through it. This architecture keeps the network simple, making it easy to correct failures automatically. Automatic routing protocols such as OSPF ensure that, when an element fails, new routes are computed within a very short delay. Of course, OSPF can only compute these routes if the network has been built with enough redundancy. As the network grows in importance, Internet Service Providers will have to ensure that their infrastructure is truly redundant, that no single failure can break the network. Many Internet backbones already have that characteristic, but there are still some problem areas. OSPF, for example, corrects errors very quickly, but not quite as fast as the repair procedures between SONET rings. The inter-network routing protocol, BGP, is notoriously slow and hard to parameterize. But these are known problems, problems that the Internet engineering community is used to solving.
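
The "no single failure" requirement can be checked mechanically on a topology description. The sketch below tests whether removing any one link disconnects a small network; the example topology is made up for illustration.

    def connected(nodes, links):
        """Check by graph traversal that every node is reachable from every other."""
        seen, frontier = set(), [next(iter(nodes))]
        while frontier:
            node = frontier.pop()
            if node in seen:
                continue
            seen.add(node)
            frontier.extend(b if a == node else a
                            for a, b in links if node in (a, b))
        return seen == set(nodes)

    def single_points_of_failure(nodes, links):
        """Return the links whose individual failure would partition the network."""
        return [link for link in links
                if not connected(nodes, [l for l in links if l != link])]

    # An illustrative topology: a ring of four routers plus one spur.
    nodes = {"A", "B", "C", "D", "E"}
    links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("D", "E")]
    print(single_points_of_failure(nodes, links))   # [('D', 'E')]: the spur link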

Reliability, however, is only one part of the management challenge. As a network grows, as the number of users grows, operation procedures can easily become a bottleneck. An Internet Service Provider that only serves a few thousand customers can rely on manual operation, letting staff organize the delivery of connections, changes of subscription parameters, or network maintenance. But the telephone industry has learned over the years that growing to a large size requires well specified procedures, automated with reliable operation support systems (OSS). These procedures enhance the network operation, increasing reliability and reducing service delays, but they also reduce the cost of operation. As a result, the productivity of the telephone companies is very high. Obviously, one would not want to lose this high productivity when moving to a next generation architecture. We will have to automate the operation of the Internet in much the same way that we automated the operation of the telephone network. In fact, we can probably recycle a large fraction of the telephony OSS into the next generation networks. For example, systems that support ordering and delivery of services, network configuration and maintenance, can easily be adapted to the NGN, and will enable network providers to reach a new scale. The management challenge is indeed important, but I have no doubt that it will be solved.

The transition challenge

More important, and indeed more challenging, is the need to ensure a smooth transition from a telephone infrastructure to a data network. There would be few customers for a new Internet telephone that provided a host of brand new services but could not interact with the existing network. During the first years of deployment, most NGN telephone calls will in fact either originate from or terminate into the plain old telephone system (POTS). Ensuring a smooth transition implies that such calls can be dialed using existing phone numbers, that the signalling does not take any longer than in the current network, and that services such as Caller-Id or Call-Waiting can be offered transparently across the interconnection. If we are successful, there will be a moment when almost half the phone calls in the world will go through a gateway between the POTS and the NGN. This means that we will have to manage very large gateways.

Figure 4. The Call Agent Architecture

In practice, the scaling requirement and the performance requirement can only be met if the NGN gateways have direct access to the telephony signalling network, the SS7 network. For this reason, we proposed the "call agent" architecture [4]. In this architecture, we find three types of elements: the media gateways, which connect the voice circuits of the telephone network to the packet network; the signalling gateways, which provide the attachment to the SS7 network; and the call agents, which control the gateways and run the call processing logic.

In short, the idea was to explode the functions that are currently performed by a telephone switch. The call agent corresponds to the switch's central processing unit, the gateways to the line cards, and the signalling gateways to the attachments to the SS7 network. There is no switching matrix in the exploded architecture, as this function is performed by the data network. To make this architecture work, we needed to define how the control information would be exchanged between the various elements. The most visible part of the work was the definition of the "Media Gateway Control Protocol" (MGCP) [5], through which the call agent can instruct the gateways to set up connections and to look for events such as the dialing of numbers. (The MGCP work triggered the work on Megaco in the IETF, and on H.248 in the ITU.) We also need to define transport protocols for carrying the signalling information between the signalling gateway and the call agent: the IETF is currently working on a protocol that would provide better performance than TCP for this application. Finally, we also need to adopt protocols for passing calls between the call agents. There are multiple choices, from adaptations of the ISDN signalling protocol ISUP to an extension of the multimedia conferencing protocol H.323, but the industry seems to be converging towards the Session Initiation Protocol, SIP, an IETF standard. Convergence on a single protocol will facilitate the interconnection of Internet Telephony services.
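
To give a flavour of how a call agent drives a gateway, the sketch below formats an MGCP CreateConnection command in the textual style of RFC 2705 [5]. The endpoint name, transaction identifier and parameter values are made-up examples for illustration, not a normative message.

    def mgcp_create_connection(transaction_id, endpoint, call_id, codec="PCMU"):
        """Build an MGCP CRCX (CreateConnection) command in RFC 2705 style.
        The parameter values used here are illustrative."""
        lines = [
            f"CRCX {transaction_id} {endpoint} MGCP 1.0",  # verb, transaction, endpoint
            f"C: {call_id}",                               # call identifier
            f"L: p:20, a:{codec}",                         # local options: 20 ms packets, codec
            "M: sendrecv",                                 # connection mode
        ]
        return "\r\n".join(lines) + "\r\n"

    # A hypothetical residential gateway endpoint:
    print(mgcp_create_connection(1234, "aaln/1@rgw.example.net", "A3C47F21456789F0"))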

The security challenge

The security challenge is in part caused by the layering of applications, including telephony, on top of a fully connected, general purpose data network. In the case of the PSTN, commands are exchanged through a separate signalling network, which in theory cannot be reached by ordinary users. This kind of separate design is not really appropriate in the NGN, since, by definition, all gateways must be able to carry voice and data over the Internet. Even if we sent network configuration commands over a separate network, we would only have the illusion of separation, since this separate network would have thousands of contact points with the Internet, one for each gateway.

If network elements such as gateways and call agents are accessible through the Internet, there is a risk of attacks. For example, malicious users could try to subvert the call signalling procedure in order to establish calls without having to pay. In fact, this kind of risk has been recognized since the inception of Internet telephony, and the protocols generally include protections through the use of IP security [6] or other cryptographic techniques. But network managers are still very concerned by the fact that Call Agents and Gateways are typically developed on top of standard computer platforms. Many of these platforms have been victims of attacks in the recent past. Hackers exploit design flaws or programming mistakes to gain control of the platform. This is especially scary in the case of the Call Agent, which could be used as an access ramp for launching attacks on the phone network. Many switches assume that the SS7 network is secure, and thus will execute any command that is received through that network. A hacker that controls the Call Agent could send messages through the SS7 gateways, and could do considerable damage to the infrastructure. Obviously, the operators of Call Agents should try to protect against such damage, for example by auditing the platforms and by using firewalls for additional protection.

In addition to traditional attacks against the gateways and the call agents, we must also deal with attacks against the voice channels. These attacks fall into several categories, such as denial of service attacks, monitoring attacks and voice insertion attacks. Denial of service attacks try to disrupt voice conversations, for example by creating spots of congestion on the network paths used by the voice packets. Protection against this kind of attack is, in fact, a variant of the "quality of service" problem. The same methods that isolate a guaranteed communication from the general traffic should also isolate it from denial of service attacks. However, these methods will only be effective if the routers that process the packets are robust, and cannot themselves be subverted.

Monitoring attacks are performed by obtaining a copy of the voice packets, in order to listen to the conversation. This copy can be obtained either by monitoring a communication channel, or by somehow instructing an intermediate router to duplicate packets. The protection against monitoring attacks is, of course, the encryption of the voice packets. This supposes that the partners in the communication can securely negotiate a session key at the beginning of the conversation. One should also observe that a variant of the monitoring attack is often performed by law enforcement agencies, and that there is a tension between the public's desire for privacy and the police's request for wiretaps. In fact, all measures that allow wiretapping by the police also weaken the network and make it more susceptible to illegal monitoring. For example, if packet duplication systems are installed in routers in order to comply with police requirements, there is a risk that the same systems will be taken over by hackers and used to listen to conversations.

The voice insertion attacks are, in a sense, specific to packet networks. They are achieved by sending to the gateways voice packets whose headers have been forged to appear as if they belonged to an existing conversation. It is very hard for gateways to check the source address of packets. In some call set up procedures, there is a desire to "open the circuit" as soon as possible, for example so as not to miss the first words of an announcement. This means that gateways may in some cases have to accept packets without even knowing the identity of their partners. Hackers could wait for the right moment, and insert an offensive message during the first seconds of a call. In addition, in the absence of cryptographic procedures, it is hard to authenticate the source of a packet. Hackers could forge the source address to overlay their own messages on existing conversations. Protection against these attacks requires that the partners in a conversation exchange keys before exchanging voice packets, and use these keys to authenticate their messages.
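
As a sketch of that last point, the partners could derive a shared key during call set up and attach a message authentication code to every voice packet, so that forged packets are simply discarded. The key exchange itself is out of scope here, and the key value below is a placeholder.

    import hmac
    import hashlib

    # Assumed: both partners obtained this session key during call set up,
    # for example through the signalling exchange; the value is a placeholder.
    SESSION_KEY = b"negotiated-session-key"
    TAG_SIZE = 32  # bytes in an HMAC-SHA256 tag

    def protect(payload):
        """Append an HMAC-SHA256 tag to a voice packet before sending it."""
        tag = hmac.new(SESSION_KEY, payload, hashlib.sha256).digest()
        return payload + tag

    def accept(packet):
        """Return the payload if the tag verifies, otherwise reject the packet."""
        payload, tag = packet[:-TAG_SIZE], packet[-TAG_SIZE:]
        expected = hmac.new(SESSION_KEY, payload, hashlib.sha256).digest()
        return payload if hmac.compare_digest(tag, expected) else None

    genuine = protect(b"voice frame")
    forged = b"offensive message" + b"\x00" * TAG_SIZE
    print(accept(genuine))   # b'voice frame'
    print(accept(forged))    # None -- the forged packet is discarded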

Achieving NGN security will require an extensive deployment of cryptography, to protect the network infrastructure and the signalling protocols, and to assure the privacy of communications. There is, however, some reason to be optimistic. Once proper technologies such as IP security are deployed, the NGN will be more robust than the current telephone network, and the privacy of communications will be better protected.

The economic challenge

The NGN architecture poses an economic challenge to the service providers. At the root of the problem is the constant drop in the price of bandwidth. We mentioned previously that the capacity of data networks was, in 1998 or 1999, about equal to the capacity of the phone network, and that the capacity of data networks doubles every year. But this progress in capacity is accompanied by a drop in prices. For example, a company like Worldcom announced in 1999 that the capacity of its data networks had more than doubled in the year, and that its data revenues had increased by 40%. Comparing these two numbers implies that the price per bit declined by about 40% in the year, and would thus be divided by more than two over a two year period.
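
The arithmetic behind that estimate can be made explicit. The exact capacity growth factor is not published, so the figure below is an assumption consistent with "more than doubled"; the revenue factor is the 40% increase quoted above.

    # Implied change in price per bit: revenue growth divided by traffic growth.
    capacity_growth = 2.3     # assumed: traffic "more than doubled" in the year
    revenue_growth = 1.40     # data revenues up 40% in the same year

    price_ratio = revenue_growth / capacity_growth
    print(f"price per bit after one year: {price_ratio:.2f} of its initial value")
    print(f"after two years at this rate: {price_ratio ** 2:.2f}")
    # With these assumptions the price per bit drops by roughly 40% per year,
    # and is divided by more than two over a two-year period.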

This drop in price is not a problem for data network providers, as the volume of business would be multiplied by 4 in the same period, while the price of the equipment would drop, following Moore's law. But it is definitely a problem for the companies that derive most of their revenues from the voice service. Today, with similar installed capacity, the voice revenues are about 9 times the data revenues, which implies that a bit of voice is resold for about 9 times the price of a bit of data. This situation cannot last, especially on an NGN where voice is carried as yet another data application. Figure 5 shows how the total revenues of the industry would evolve if the price of a voice bit converged towards the dropping price of a data bit over a period of 8 years. In this particular hypothesis, the data revenues grow very fast, but the voice revenues diminish even faster, resulting in an initial drop of revenues in the coming years. After 5 years, the voice revenues only represent a tiny fraction of the total revenues, which means that at this point voice communication would, essentially, be free.
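
A scenario of this kind is easy to reproduce in a few lines. The sketch below only illustrates the shape of the Figure 5 hypothesis; every parameter is an assumption chosen for illustration, not the actual data behind the figure.

    # An illustrative version of the Figure 5 scenario.
    data_rev, voice_rev = 1.0, 9.0        # today: voice revenues about 9x data revenues
    data_growth = 1.4                     # assumed yearly growth of data revenues
    data_price_drop = 0.7                 # assumed yearly drop in the price of a data bit
    # Voice volume is assumed constant; its price starts 9x higher than data and
    # converges towards the (dropping) data price over 8 years.
    voice_factor = (1 / 9.0) ** (1 / 8) * data_price_drop

    print("year  data  voice  total")
    for year in range(9):
        total = data_rev + voice_rev
        print(f"{year:4d}  {data_rev:4.1f}  {voice_rev:5.1f}  {total:5.1f}")
        data_rev *= data_growth
        voice_rev *= voice_factor
    # The total dips for the first few years, and after five years voice is
    # only a small fraction of the total revenues.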

We may or may not believe that the particular hypothesis of Figure 5 will occur in reality. But it is very clear that the industry will have to develop new services, in order to compensate for a potential shortfall in revenues. Wireless communication is one type of new service, but the NGN should open the gates to much more. In fact, the French Minitel and then the World Wide Web demonstrated that an open network is a powerful platform for these new services. About 20,000 services were deployed on the Minitel network in the years that followed its opening, and there are now more than 2,000,000 web servers, providing all kinds of services. This far exceeds the number of services, such as "call waiting" or "voice mail," that were developed in the "Intelligent Networking" architecture.

Communication services can be thought of as a rendezvous process, followed by a transmission of signals. The NGN will allow a reinvention of the rendezvous process. We could harness web applications such as instant messaging or chat rooms; we could integrate voice communication within business applications, learning systems and video games. The NGN also offers new transmission capabilities. Current telephones encode voice in a 3.3 kHz bandwidth; the increased network capacity will allow us to transmit high fidelity signals. The same bandwidth, combined with increased computation power in the handset, will allow for multimedia transmission, displaying for example a video image of your correspondent on your wireless handset. The transmission through a data network allows for easy interfacing with computers, and the invention of many sorts of new agents. In short, the possibilities are infinite.

The existing industry could benefit greatly from the NGN, if it provides these new services to the users. But the NGN architecture removes the automatic linkage between networks and applications. Services can be provided by third parties such as AOL, E-Bay or Amazon.Com. Services can be provided by enterprise servers, in much the same way that telephony services are provided today by PBXs. Services can also be implemented directly in the appliances, in much the same way that applications are developed today on personal computers. The networking industry will have to compete or cooperate with all these new players. It could provide back office services for the third party providers, outsourcing services to enterprises, and support services to appliances. However, this is still a big challenge.

Reinventing telecommunications

Challenges are, in essence, what motivates research. The next generation networks, in that respect, will be a great motivator. Some of the challenges, such as quality of service, are in fact already being resolved. The work is less advanced in others, such as the need for reliability and automated management, the design of architectures for the transition between a telephony and an NGN infrastructure, or the provision of security in the NGN. And, finally, there is an obvious opportunity to invent the telecommunication services of the 21st century. But the payback will be enormous. We have to reinvent telecommunications. Doing so, we should not only provide high bandwidth, smooth transition, managed networks, better security for everyone, and innovative services to those user populations that are already well equipped. We should seize the opportunity to immediately deploy the next generation networks in the countries where infrastructures are most needed. There is no need, there, to repeat a stepwise development from telegraphy to telephony, then to the Internet and finally to the NGN. Let's jump immediately into the future.

References:

  1. Telcordia's Netsizer project is available on the web at: http://www.netsizer.com/
  2. H. Schulzrinne, S. Casner, R. Frederick, V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications." RFC 1889, January 1996.
  3. C.A. Eldering, M.L. Sylla, J.A. Eisenach, "Is There a Moore's Law for Bandwidth?" IEEE Communications Magazine, October 1999, Vol. 37 No. 10.
  4. Christian Huitema, Jane Cameron, Petros Mouchtaris, Darek Smyk, "An Architecture for Residential Internet Telephony Service." IEEE Internet Computing, Volume 3, Number 3, May-June 1999.
  5. M. Arango, A. Dugan, I. Elliott, C. Huitema, S. Pickett, "Media Gateway Control Protocol (MGCP) Version 1.0." RFC 2705, October 1999.
  6. S. Kent, R. Atkinson, "Security Architecture for the Internet Protocol." RFC 2401, November 1998.