Decoupling Echo Cancellation from DSPs in VoP Gateways

By Doug Morrissey and Francois Morel, Octasic, CommsDesign.com
September 18, 2003 (10:36 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030918S0025

Today, migrating voice traffic onto data-centric packet networks makes real business sense. It costs less, uses fewer network resources, and opens the way to new services and therefore revenue streams. But there are obstacles: voice service demands voice quality. Packet networks must maintain the carrier-grade voice quality that customers have come to expect from traditional circuit-switched networks.

A key network element affecting voice quality is the gateway, where circuit-switched time division multiplexed (TDM) streams become voice packets. Current implementations of voice-over-packet (VoP) gateways rely heavily on digital signal processors (DSPs) for basic telephony functions such as packet processing, compression, voice encoding/decoding, tone detection, and echo cancellation. General-purpose DSPs get the job done, but they come at a high price: limited scalability, density roadblocks, and steep manufacturing costs.

Is there a better gateway solution? Yes. An emerging option involves specialized co-processors that rework basic DSP architecture without compromising future requirements. By decoupling mature telephony functions such as echo cancellation from DSPs and offloading them onto dedicated semiconductors, carriers can implement VoP gateways at a lower per-channel cost while improving voice quality. The result is increased density, reduced power consumption, and a lower-cost product.

Cancellation Ins and Outs
Among key telephony functions, echo cancellation can be one of the most challenging. Designers count on robust echo cancellation to address divergence, double talk, clipping, and annoying changes in background noise. Early echo cancellers were designed to preserve voice quality in the face of long-distance delay on circuit-switched networks. But algorithms in today's echo cancellers must meet new voice quality challenges, including conversations in noisy environments and wireless calls, all for a far more demanding digital packetized network. Because of the delay intrinsic to packetizing voice, echo cancellers are now implemented for all voice channels in a packet network. In fact, echo cancellation is a primary cost factor in today's VoP solutions.

Echo cancellers rely on sophisticated yet robust algorithms; the right software is crucial for optimum performance. The design of the DSP architecture and the types of instructions available can greatly influence channel density and power consumption per channel.
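To make the discussion concrete, the following is a minimal sketch, in C, of the kind of normalized LMS (NLMS) adaptive filter at the core of most line echo cancellers. It is illustrative only: the tail length, step size, and function names are assumptions, and a carrier-grade canceller wraps double-talk detection, divergence control, comfort noise, and a non-linear processor around this loop.

```c
#include <stddef.h>

#define TAIL_TAPS 1024   /* ~128 ms tail at 8 kHz; illustrative value */

typedef struct {
    float  h[TAIL_TAPS];   /* adaptive filter coefficients (echo path model) */
    float  x[TAIL_TAPS];   /* history of far-end (reference) samples */
    size_t idx;            /* circular buffer write index */
} echo_canceller_t;

/* Process one 8 kHz sample pair: far-end reference and near-end (echo + speech).
 * Returns the echo-cancelled near-end sample. mu is the NLMS step size (0 < mu < 1). */
static float ec_process(echo_canceller_t *ec, float far, float near, float mu)
{
    /* Push the new far-end sample into the reference history. */
    ec->x[ec->idx] = far;

    /* Estimate the echo as the filter output and measure reference energy. */
    float est = 0.0f, energy = 1e-6f;   /* small floor avoids divide-by-zero */
    for (size_t i = 0; i < TAIL_TAPS; i++) {
        size_t j = (ec->idx + TAIL_TAPS - i) % TAIL_TAPS;
        est    += ec->h[i] * ec->x[j];
        energy += ec->x[j] * ec->x[j];
    }

    /* Error = near-end minus estimated echo; this is what goes to the packet side. */
    float err = near - est;

    /* NLMS update: adapt the echo path model toward the residual.
     * A real canceller would freeze adaptation during double talk. */
    float step = mu * err / energy;
    for (size_t i = 0; i < TAIL_TAPS; i++) {
        size_t j = (ec->idx + TAIL_TAPS - i) % TAIL_TAPS;
        ec->h[i] += step * ec->x[j];
    }

    ec->idx = (ec->idx + 1) % TAIL_TAPS;
    return err;
}
```

Even this stripped-down loop runs a multiply-accumulate over the full tail for every sample on every channel, which is why echo cancellation dominates the MIPS budget discussed below.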

To complicate matters, the G.168 recommendation describes echo cancellation requirements without addressing the complete voice-quality picture, including adaptive noise reduction. Accordingly, G.168 compliance doesn't ensure toll-quality voice, so vendors develop proprietary solutions.

Currently, echo cancellers are integrated alongside packetization/aggregation and compression engines into generic DSPs. The result is a generic DSP architecture that cannot be optimized for the processing functions required for echo cancellation. In fact, if manufacturers optimized generic DSP architecture specifically for echo cancellation, other processing functions (for example, codecs) would be penalized—as would the device cost. Even worse, the echo cancellation software would have to be engineered to work with the other algorithms running on the device. These piecemeal changes to generic DSP architecture would greatly complicate product modifications or improvements to meet specific customer requirements.

This is where co-processing can be an elegant solution. If echo cancellation is offloaded onto dedicated co-processors, designers can optimize the algorithm, as well as power consumption/dissipation and device area.

Why Co-Process?
In typical VoP gateways, echo cancellation, compression functions, and some amount of packetization share DSP software loads and hardware. By offloading MIPS-intensive echo cancellation to a dedicated, optimized co-processor, designers gain deployment flexibility and efficiency for a total gateway solution. For one thing, the architectural split enables designers to size up or down to meet system requirements. Designers also get to pick and choose, using best-of-breed components and algorithms for voice quality, power, density, and cost, while scaling to the highest level on each component. The same number of DSPs, for instance, can handle additional channels, enabling a greater number of low bit rate codecs and fax relays.

But the most compelling reason to integrate co-processors into VoP gateway architecture is this: well-implemented co-processors deliver remarkable rewards in voice quality, channel density and power consumption.

Here's why. Today, power is the limiting factor in boosting density for the MIPS-intensive algorithms needed for echo cancellation, codecs, and voice quality enhancement (VQE). The major power roadblocks are constraints on heat dissipation, air flow, and the raw cost of power. Traditionally, system power reductions have come with each new generation of silicon technology. But as the technology moves from the 0.13-micron process used for today's parts to the next-generation 90-nm process, the incremental return on power savings is far more limited. Therefore, reductions in power consumption must come from a shift in the fundamental architecture of the gateway, giving rise to co-processing.

A co-processor strategy also gets the most out of existing DSPs and standards. For instance, the standardized low bit rate codecs have been optimized for traditional DSP architectures, so it makes design and economic sense to keep codec processing on those DSPs. Echo cancellation and voice quality enhancement algorithms, however, are frequency-domain based, making them ideal candidates for optimization on vector DSP engines.
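As a hypothetical illustration of why frequency-domain voice quality processing suits vector engines, the sketch below applies an independent suppression gain to each FFT bin of a speech frame; every bin performs the same short multiply-accumulate work, which is exactly the pattern a vector DSP parallelizes. The gain rule, frame size, and names are assumptions for illustration, not any vendor's algorithm.

```c
#include <complex.h>

#define NBINS 129   /* bins for a 256-point FFT frame at 8 kHz; illustrative */

/* Apply a per-bin suppression gain to one FFT frame of near-end speech.
 * spectrum[] is the frame's FFT; noise_psd[] is a running noise power estimate.
 * Each bin is independent, so a vector engine can process many bins per cycle. */
static void vqe_apply_gain(float complex spectrum[NBINS],
                           const float noise_psd[NBINS])
{
    for (int k = 0; k < NBINS; k++) {
        float power = crealf(spectrum[k]) * crealf(spectrum[k]) +
                      cimagf(spectrum[k]) * cimagf(spectrum[k]);
        /* Simple spectral-subtraction style gain, floored to limit artifacts. */
        float gain = 1.0f - noise_psd[k] / (power + 1e-9f);
        if (gain < 0.1f)
            gain = 0.1f;
        spectrum[k] *= gain;
    }
}
```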

In terms of memory usage, low bit rate codecs require relatively little memory per channel. Echo cancellation and voice quality enhancement, on the other hand, require short bursts of more intense memory use. Exploiting these differences calls for an architecture that places the two kinds of processing on separate devices.

The Legacy VoP Architecture
To illustrate the benefits of the co-processor approach, let's compare a traditional VoP architecture with one in which the echo cancellation function is decoupled. We'll start with the legacy VoP approach.

Current DSP-centric architectures for high-density, carrier-grade voice gateways convert TDM voice streams into packets for ATM or IP transmission. Typically, TDM channels are delivered directly to the DSP devices through a time slot interchange (TSI), which maps channels to DSPs. In this architecture, each DSP provides all the processing required for a limited number of voice channels.


Figure 1: Traditional voice gateway architecture.
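One way to picture the TSI's role is a static table that steers each TDM time slot to a DSP and a channel on that DSP. The sketch below uses the 672-channel, four-DSP example from the next paragraph; the structure and names are hypothetical.

```c
#define NUM_CHANNELS 672   /* one DS-3 */
#define NUM_DSPS     4     /* legacy example: four DSPs share the load */

typedef struct {
    unsigned char dsp_id;       /* which DSP processes this time slot */
    unsigned char dsp_channel;  /* channel index within that DSP */
} tsi_map_entry_t;

/* Build a static time-slot-to-DSP map: the TSI steers each TDM time slot
 * to one DSP, which then runs every function for that channel. */
static void tsi_build_map(tsi_map_entry_t map[NUM_CHANNELS])
{
    const unsigned per_dsp = NUM_CHANNELS / NUM_DSPS;   /* 168 channels each */
    for (unsigned ts = 0; ts < NUM_CHANNELS; ts++) {
        map[ts].dsp_id      = (unsigned char)(ts / per_dsp);
        map[ts].dsp_channel = (unsigned char)(ts % per_dsp);
    }
}
```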

Using legacy DSP-centric gateway architecture, a carrier implementing a 672-channel DS-3 line requires a bank of four state-of-the-art DSPs in which a given number of MIPS is split between echo cancellation and compression functions. If the echo tail length increased, for instance, more processing power would go to echo cancellation. What's more, additional MIPS would be needed for standards-compliant acoustic quality control and noise reduction. While G.729A/B reliably defines compression, G.168 doesn't address all echo cancellation issues, so algorithms tend to be vendor-specific and variable in their power consumption.
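The MIPS split can be made concrete with a back-of-envelope helper like the one below. It is a sketch only: the per-function MIPS figures are caller-supplied placeholders, since, as noted above, echo cancellation loads are vendor-specific and no published figures are assumed here.

```c
/* Rough channel-capacity estimate for one DSP in the legacy architecture,
 * where each channel carries both its codec and its echo canceller.
 * All MIPS figures are caller-supplied placeholders, not vendor data. */
static unsigned channels_per_dsp(unsigned dsp_mips,
                                 unsigned codec_mips_per_ch,
                                 unsigned ec_mips_per_ch)
{
    unsigned per_channel = codec_mips_per_ch + ec_mips_per_ch;
    return per_channel ? dsp_mips / per_channel : 0;
}

/* With echo cancellation offloaded to a co-processor, the same DSP budget
 * covers more channels: channels_per_dsp(dsp_mips, codec_mips_per_ch, 0).
 * A longer echo tail raises ec_mips_per_ch and shrinks the legacy figure,
 * but leaves the co-processor-assisted figure untouched. */
```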

The Co-Processor Strategy
Developers looking for a cost-effective way to make gains in power consumption and density are rethinking processor architecture. General-purpose engines are not moving technology limits; they're simply re-arranging them. For instance, some engines increase generic processing capabilities at the expense of channels while others sacrifice processing functions to push channel density. With co-processing, designers can increase density and MIPS simultaneously while using less power (Figure 2).


Figure 2: Diagram of an optimized gateway architecture.

For instance, a dedicated echo cancellation co-processor enables a bank of three DSPs to handle 672 channels while cutting power consumption by one-third and board real estate by one-half, compared to a traditional solution. Because echo cancellation occurs outside the DSP, designers can select the algorithms best suited to their design challenges, including long echo tail lengths, noise reduction, acoustic quality control, and more. Algorithms are easy to change, upgrade, and deploy, so signalling, tone detection, audio conferencing, and buffer playback can be deployed as needed.
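Because the canceller now sits behind its own device interface, options such as tail length and noise reduction become explicit per-channel provisioning parameters. A hypothetical configuration block might look like the following; the field and function names are illustrative assumptions, not any particular co-processor's API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-channel provisioning block for an echo cancellation
 * co-processor; field names are illustrative only. */
typedef struct {
    uint16_t tail_length_ms;      /* echo tail to cover, e.g. 64 or 128 ms */
    bool     noise_reduction;     /* enable adaptive noise reduction */
    bool     acoustic_control;    /* enable acoustic quality control */
    bool     tone_disable_2100hz; /* bypass on modem/fax answer tone */
    uint8_t  comfort_noise_mode;  /* how residual background noise is matched */
} ec_channel_cfg_t;

/* Provisioning call exposed by the co-processor driver (hypothetical). */
int ec_open_channel(unsigned timeslot, const ec_channel_cfg_t *cfg);
```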

In an optimized architecture, DSPs provide the muscle needed for the G.711, G.723.1, and G.729A/B codecs and Group 3 fax relays. Carriers thus preserve their investments in DSPs and voice processing software, while well-developed, mature functions such as echo cancellation are offloaded onto co-processors.

There's another benefit to offloading echo cancellation. Specialized co-processors provide a value-added platform for significantly improved voice quality while meeting the same design and cost constraints as the legacy architecture. Because of the complexity of echo cancellation parameters, carrier-grade voice quality can be elusive and expensive. With the ability to implement high-quality specialized echo cancellation devices, communications designers can deliver a more robust product at a lower cost.

Software Matters
Migrating to the co-processor architecture requires changes to the system control software, but the changes are minimal and easily implemented (Figure 3).


Figure 3: Typical VoP software blocks.

The co-processor architecture affects only a small portion of the functional control software, which is where echo cancellation functions are managed. These routine functions include echo cancellation provisioning and control, modem detection, and queries on echo cancellation status and statistics. To migrate to a co-processor architecture, these functions must be shifted from the DSP interface/API to the co-processor interface/API.

Luckily, this is an easy task. That's because echo cancellation parameters are typically independent of other DSP features, enabling software engineers to easily extract and move these parameters to the new co-processor API.

To further simplify co-processor migration, some designers include an abstraction layer to shield the gateway's generic control software from the specific echo cancellation API in use. Migration then becomes a minor question of modifying generic functions in the abstraction layer.
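One way to build such a layer, sketched below with hypothetical names, is a table of function pointers: the gateway's generic control software calls the same entry points whether the canceller runs on the DSP or on the co-processor, and migration amounts to swapping the table.

```c
/* Hypothetical echo cancellation abstraction layer: the generic control
 * software sees only these entry points, regardless of which device
 * actually runs the canceller. */
typedef struct {
    int (*provision)(unsigned channel, unsigned tail_ms);
    int (*enable)(unsigned channel, int on);
    int (*detect_modem_tone)(unsigned channel);
    int (*get_stats)(unsigned channel, void *stats_out);
} ec_ops_t;

/* Bindings supplied by the DSP driver and the co-processor driver. */
extern const ec_ops_t ec_ops_dsp;      /* legacy: canceller inside the DSP */
extern const ec_ops_t ec_ops_coproc;   /* offloaded: dedicated co-processor */

/* Migration is then a one-line change of which table the gateway uses. */
static const ec_ops_t *ec = &ec_ops_coproc;

static int gateway_provision_channel(unsigned ch, unsigned tail_ms)
{
    return ec->provision(ch, tail_ms);
}
```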

The new co-processor strategy also affects the DSP bundled software. In fact, by shifting MIPS-intensive functions such as echo cancellation out of the DSP, co-processing reduces demand on the DSP-embedded software bundle. The shift frees DSP processing power for other functions such as compression/decompression and enables support for additional TDM time slots. On the other hand, engineers must exercise caution here, since the DSP software bundle may require changes before profiting from the newly available MIPS.

The Bottom Line
On a per-channel basis, co-processing echo cancellation can save designers half the space and power of top-of-the-line DSP solutions. Co-processing is also future-proof, since the architecture shifts only mature, specialized functions, and it gives designers greater design and implementation flexibility. Best of all, co-processing helps trim VoP costs while delivering the carrier-grade voice quality that keeps customers coming back.

About the Authors
Doug Morrissey is vice president and CTO of Octasic. Doug holds a BSc from the Rochester Institute of Technology and can be reached at doug.morrissey@octasic.com.

Francois Morel is the manager of software development at Octasic. Francois holds a BSc from the University of Sherbrooke and can be reached at francois.morel@octasic.com.
