
Posted: 08/2000


Delivering QoS via Application-Based Routing
By Alex V. Petrov

IP has become the de facto platform for the long-heralded convergence of communications, computers and content.

At the forefront of this revolutionary transformation is ferocious competition in the voice and fax services markets, where VoIP technology has made deep inroads into the PSTN. Close optimization of IP networks for voice and fax traffic has enabled VoIP carriers to match PSTN quality, and on occasion even edge it out.

Continued success of the new carriers depends on a sustained capability to scale their networks up by 100 percent to 400 percent a year without deterioration in QoS. Although today's fragmented networks preclude true end-to-end QoS solutions, a lot can be accomplished by controlling service quality at least edge-to-edge (so-called system QoS).

Typically, the issue is addressed either by boosting bandwidth (backbone access and dedicated lines) or through QoS mechanisms such as the resource reservation protocol (RSVP), differentiated services (DiffServ) and multiprotocol label switching (MPLS).

Despite the indisputable technical merits of all these approaches, their use in today’s networks has been limited because of cost, end-to-end control and operational complexities. Besides, focus on bandwidth is more appropriate for circuit switching, whereas the problem at hand is IP’s “best-effort” delivery mechanism.

A natural but barely tapped QoS strategy is to exploit the benefits of statistical multiplexing, that is, to diversify the IP traffic mix, differentiate applications by their QoS requirements and send packets over IP routes that exhibit acceptable performance.

Although application-aware Layer 4 switching and policy-based routing are not new to the LAN/WAN environment, application-based routing at the edge would shift the emphasis from manipulating priorities and classes of service to real-time matching of QoS tolerances with performance of IP routes.


A VoIP network (see “VoIP Network for Integrated Services” diagram) includes edge devices and an IP transmission core. The customers’ traffic at the edge includes pulse code modulated (PCM) voice and fax diluted by TCP/IP traffic such as e-mail, unified messaging, file transfer, audio and video streaming and multimedia conferencing. SS7 messages transmitted through the IP cloud also may be on the application list.

Depending on the applications and customers on the edge of the network, a service provider may deploy VoIP gateways, gatekeepers, edge routers, CPE (both enterprise-grade and residential), application servers and other devices used to aggregate, process, packetize and route customer traffic.

Diagram: VoIP Network for Integrated Services

From the edge, composite IP traffic is terminated either on-net over IP backbones (private or operated by third parties) or off-net over PSTN (voice/fax rerouting) and SS7 network (signaling).

From the QoS standpoint, each application (and therefore each packet at the network edge) can be characterized by a specific set of tolerances on latency, packet loss and jitter. Particular tolerance values depend on the customer's expectations, hardware specifications and network topology.

For example, it may be postulated that for voice calls, edge-to-edge latency must be under 250 milliseconds (ms), packet loss must be under 5 percent, and jitter must be under 10 ms. If the fixed delay for selected VoIP gateways with 10 ms jitter buffers and (say) a G.729 codec is 100 ms, then any IP route with delay under 150 ms and packet loss under 5 percent will be acceptable.
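The budget arithmetic above can be sketched in a few lines. This is an illustrative calculation only, using the article's example numbers (250 ms voice budget, 100 ms fixed gateway delay, 5 percent loss tolerance); the function name and structure are assumptions, not any vendor's API.

```python
# Hypothetical sketch of the edge-to-edge latency budget for voice.
# Numbers follow the article's example: a 250 ms total budget less
# 100 ms of fixed delay (G.729 codec plus 10 ms jitter buffers)
# leaves 150 ms for the IP route itself.

VOICE_BUDGET_MS = 250   # edge-to-edge latency tolerance for voice
FIXED_DELAY_MS = 100    # codec + packetization + jitter-buffer delay
LOSS_TOLERANCE = 0.05   # maximum acceptable packet loss (5 percent)

def route_acceptable(route_delay_ms: float, route_loss: float) -> bool:
    """A route qualifies if its delay fits the remaining latency
    budget and its packet loss stays within tolerance."""
    remaining_budget = VOICE_BUDGET_MS - FIXED_DELAY_MS  # 150 ms
    return route_delay_ms < remaining_budget and route_loss < LOSS_TOLERANCE

print(route_acceptable(140, 0.02))  # True: fits the remaining budget
print(route_acceptable(160, 0.02))  # False: delay exceeds the budget
```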

On a relative scale, latency tolerances for messaging and store-and-forward faxes may be more relaxed, whereas those for business videoconferencing may be more stringent.

To maintain QoS for any given application, the edge device notes the TCP and user datagram protocol (UDP) ports of incoming packets, compares service-quality tolerances with the current transmission parameters of the available IP routes and sends the packets over a route that satisfies all the QoS tolerances.

If several acceptable routes are available, then the one with the least incremental cost is selected. (Although unusual, if a VoIP carrier negotiates preferential treatment of its traffic with a backbone access provider, that treatment may come with a multitiered, traffic-sensitive pricing scheme.)

If no routes match the tolerances, then the session is rerouted off-net, rejected or gracefully terminated.
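The selection logic just described can be sketched as follows: filter routes by the application's tolerances, pick the cheapest acceptable one, and signal an off-net fallback when none qualifies. The data structures and names here are illustrative assumptions, not a description of any shipping edge device.

```python
# Illustrative sketch of tolerance-based route selection with a
# least-cost tie-break, as described in the article. A None result
# means: reroute the session off-net, reject it or terminate it.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    delay_ms: float
    loss: float
    jitter_ms: float
    cost: float  # incremental cost of carrying the traffic

@dataclass
class Tolerances:
    max_delay_ms: float
    max_loss: float
    max_jitter_ms: float

def select_route(routes, tol):
    """Return the least-cost route meeting all tolerances, or None."""
    acceptable = [r for r in routes
                  if r.delay_ms < tol.max_delay_ms
                  and r.loss < tol.max_loss
                  and r.jitter_ms < tol.max_jitter_ms]
    if not acceptable:
        return None
    return min(acceptable, key=lambda r: r.cost)

# Voice tolerances from the article's example (150 ms route budget).
voice = Tolerances(max_delay_ms=150, max_loss=0.05, max_jitter_ms=10)
routes = [Route("backbone-A", 120, 0.01, 8, cost=1.0),
          Route("backbone-B", 130, 0.02, 6, cost=0.7)]
print(select_route(routes, voice).name)  # backbone-B: both qualify, B is cheaper
```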

In particular, with temporarily congested backbones, incoming voice calls may be rerouted over PSTN, whereas some message retrieval and file transfer still could terminate over IP.

In a way, fitting applications with routes is similar to the popular game of Tetris. In each case, the point is to minimize the mismatch between drop-in “blocks” and the “landscape.”

In fact, some Tetris tricks may be of interest to routing strategists. For instance, delay-insensitive packets may be deliberately buffered (within their latency brackets) and up to 5 percent of voice packets may be intentionally dropped to weather congestion.
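The two "Tetris tricks" above can be sketched as a simple per-packet decision during congestion: hold back delay-tolerant packets while they still have latency headroom, and shed voice packets only while cumulative loss stays under the 5 percent tolerance. The classification labels and thresholds below are illustrative assumptions.

```python
# Toy sketch of congestion weathering at the edge: buffer
# delay-insensitive packets within their latency brackets, and drop
# voice packets only while the running loss ratio stays under 5%.

def congestion_action(packet_class, latency_headroom_ms,
                      voice_sent, voice_dropped):
    """Decide what to do with one packet when the preferred
    route is congested: 'buffer', 'drop' or 'send'."""
    # Delay-insensitive traffic with headroom can wait out congestion.
    if packet_class in ("email", "file-transfer") and latency_headroom_ms > 50:
        return "buffer"
    # Voice may be shed while cumulative loss stays within tolerance.
    if packet_class == "voice":
        total = voice_sent + voice_dropped + 1
        if (voice_dropped + 1) / total <= 0.05:
            return "drop"
    return "send"  # otherwise transmit immediately

print(congestion_action("email", 200, 0, 0))    # buffer
print(congestion_action("voice", 10, 100, 0))   # drop (loss ratio ~1%)
```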

More fundamentally, by analyzing and predicting congestion patterns, edge devices would be able to reschedule sessions and avert premature termination.


To implement application-based routing at the edge, service providers have to partner with vendors and coordinate routing decisions with the QoS data. Such data can be derived with comprehensive IP monitoring tools such as NeTrue Communications Inc.'s NeTrueQoS.

For a medium-size VoIP carrier, a development of such scale could be a challenge. This is why it is encouraging to see that several internetworking startups, most notably Aplion Networks Inc., are building application awareness into the network edge and plan to unveil their products later this year.

As a natural next step, service providers may choose to integrate routing with other MIS modules. These may include generation of call detail records (CDRs), Internet protocol detail records (IPDRs), wholesale least-cost routing and application-based firewalls.

The major advantages of the outlined approach are moderate overhead, flexibility and potential for adaptive learning. The bandwidth overhead is small since only the aggregate parameters have to be measured and passed around.

Furthermore, the approach does not require full control over all the IP routes and detailed knowledge of the backbone status. That is, routing will be stable as long as the composite performance of some IP routes matches voice requirements and as long as the QoS updates are timely and accurate.

The Tetris analogy gives some basic insight into how congestion can be prevented or mitigated by stretching delivery parameters for diverse applications within their QoS brackets.

As for the implementation costs, they would probably be too network specific to draw a fair comparison with today’s QoS-enabled routers.

However, for carriers and ASPs that believe that progress in integrated circuitry and fiber optics continues unabated, and that PSTN/IP proliferation eventually will yield to the “IP everywhere” world, focusing on applications may make more sense than focusing on transmission technologies.

After all, the cost of future growth is one of the biggest factors in the game.

Alex V. Petrov is in charge of network planning and VoIP projects at DirectNet Telecommunications. He graduated magna cum laude with a degree in optical engineering from Bauman Moscow State Technical University and earned a Ph.D. in electrical engineering from Penn State University.
