Bridging chasm between DSP, RISC

EETimes
By Brian Murray, Vice President of Engineering, and Irving Gold, Vice President of Marketing, Massana Inc., Campbell, Calif., EE Times
April 11, 2000 (4:38 p.m. EST)
URL: http://www.eetimes.com/story/OEG20000411S0042

By now it should be clear to everyone that communications is the driver behind the phenomenal growth of the electronics industry, and bandwidth is a key ingredient in communications: the more, the better. Unfortunately, there is never enough bandwidth to go around; think of it as a precious commodity. Bandwidth is not infinite: its amount and allocation are governed by the Federal Communications Commission in the case of wireless communications and by the laws of physics in the case of wireline communications. So how do we squeeze the most communications capacity out of the bandwidth under our control? Answer: by applying modern digital techniques through a digital signal processor (DSP).

However, there is a continuing chasm in the communications industry, a "holy war," you might say, between the RISC processor camp and the DSP processor camp.

In order to do the intensive signal processing required in digital communications, the RISC providers are adding DSP extensions to their processors. Just look at the recent offerings by Lexra Inc. (Waltham, Mass.), ARM Ltd. (Cambridgeshire, U.K.), MIPS Technologies Inc. (Mountain View, Calif.) and others. On the flip side of the coin, the DSP providers are great at signal processing but poor at control and at managing process flow. So the DSP camp is adding control and bit-manipulation functions to its DSPs. Just look at the recent products from Texas Instruments Inc. (Dallas), Analog Devices Inc. (Norwood, Mass.) and Lucent Technologies (Murray Hill, N.J.) as examples.

What's wrong with this picture?

The paradigm is broken. The resulting products are convoluted: they're hybrid processors, optimized neither for RISC nor for DSP tasks. Furthermore, they consume significant power, have large die areas, are difficult to use and so forth.

Three options

For a moment, let's backtrack and zoom up to 40,000 feet for a look at the whole problem of embedded RISC/DSP. The options are threefold:

  • Implement DSP on a RISC processor. Problem: Communications techniques are mathematically intensive and demand far more signal-processing capability than a RISC can deliver efficiently. So implementing DSP on a RISC is not a good solution.

  • Use separate RISC and DSP processors. Problem: It results in a higher-cost solution and difficulties in integrating the RISC and the DSP. It also demands new and complicated expertise from the designer (DSP programming skills and a new set of development tools).

  • Use a hybrid RISC/DSP processor. Problem: It's an inefficient solution, resulting in higher cost. It also requires DSP language skills and learning a new development tool set.

How do we attain the best possible solution? There is a unique, innovative and differentiated approach: completely separate the control processes from the signal processes.

Each process would then be implemented by a specific engine optimized for exactly what it does best. The control processes are implemented on the RISC engine, and the intensive signal processes are implemented on a dedicated DSP engine.

The RISC/DSP engines would be loosely coupled, implementing the DSP as a coprocessor to the RISC, wherein the RISC offloads the signal processes to the DSP.
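
To make the offload concrete, here is a minimal sketch, in C, of how a RISC might hand a block of samples to a memory-mapped DSP coprocessor and wait for the result. The register addresses, command codes and polling protocol are illustrative assumptions for the sketch, not Massana's actual programming model.

```c
/*
 * Hypothetical memory-mapped interface to a DSP coprocessor.
 * Register addresses, command codes and the polling protocol are
 * illustrative assumptions, not a real device's programming model.
 */
#include <stdint.h>

#define DSP_BASE        0x40000000u
#define DSP_CMD         (*(volatile uint32_t *)(DSP_BASE + 0x00))
#define DSP_SRC_ADDR    (*(volatile uint32_t *)(DSP_BASE + 0x04))
#define DSP_DST_ADDR    (*(volatile uint32_t *)(DSP_BASE + 0x08))
#define DSP_LENGTH      (*(volatile uint32_t *)(DSP_BASE + 0x0C))
#define DSP_STATUS      (*(volatile uint32_t *)(DSP_BASE + 0x10))

#define DSP_CMD_FIR     0x01u   /* run the built-in FIR routine */
#define DSP_STATUS_BUSY 0x01u   /* coprocessor still processing */

/* RISC side: offload an FIR filter over a block of samples and wait. */
void run_fir_on_dsp(const int16_t *in, int16_t *out, uint32_t n)
{
    DSP_SRC_ADDR = (uint32_t)(uintptr_t)in;   /* where the samples live  */
    DSP_DST_ADDR = (uint32_t)(uintptr_t)out;  /* where the results go    */
    DSP_LENGTH   = n;                         /* number of samples       */
    DSP_CMD      = DSP_CMD_FIR;               /* kick off the DSP engine */

    /* The RISC is free to run control code here; this sketch simply polls. */
    while (DSP_STATUS & DSP_STATUS_BUSY)
        ;                                     /* wait for completion     */
}
```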

The approach fixes what's broken and has several immediate advantages. For example, the architecture and instruction set of each processing engine can be independently optimized for what it does best. In addition, the resulting power dissipation and die area are optimal because functions and features are not duplicated.

The architectures and instruction sets of the engines can also be tailored to the exact needs of the applications without carrying excess baggage. Furthermore, when the RISC/DSP are implemented as "cores," they can easily be used in systems-on-chip, also easing time-to-market pressures.

Most designers have worked for many years with RISCs. However, the whole concept of DSP is still considered by some to be black magic. The RISC/DSP chasm exists in the software arena too, not just in the hardware arena discussed above. For example, most RISC programs are written in C and compiled for whatever flavor of RISC processor is used in the project.

Writing in C and then compiling for the RISC is straightforward because of the extensive research that has produced very efficient C compilers over the years. Not so with DSPs. Because of the complexity of the mathematics involved and the parallel-processing nature of DSP, C compilers produce bloated and very inefficient code. Further, most DSPs are used in real-time applications where 80 percent or more of the DSP horsepower (DSP Mips) is consumed by the tight inner loops. All this means that most DSP code, even today, is still written in assembly language, and assembly language is anathema to most designers.
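
To see why the inner loop dominates, consider a textbook FIR multiply-accumulate loop written in portable C. A DSP executes each tap as a single-cycle MAC inside a zero-overhead hardware loop, whereas compiled C on a general-purpose core typically spends several instructions per tap on loads, the multiply, the add and the loop bookkeeping. The example below is only an illustration of the kind of loop in question, not code from any particular product.

```c
#include <stdint.h>

/*
 * A textbook FIR inner loop in portable C. On a DSP this multiply-
 * accumulate maps to a single-cycle MAC instruction inside a
 * zero-overhead hardware loop; generic compiled code rarely matches
 * that density, which is why such loops are hand-written in assembly.
 */
int32_t fir_sample(const int16_t *coeff, const int16_t *delay, int taps)
{
    int32_t acc = 0;
    for (int i = 0; i < taps; i++)
        acc += (int32_t)coeff[i] * (int32_t)delay[i];  /* the "80 percent" loop */
    return acc;
}
```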

One solution designed to bridge the chasm is Massana's FILU series of DSP coprocessor cores. By its very nature as a coprocessor, the FILU comes with a run-time library of preprogrammed, built-in DSP functions: the most common and useful routines, such as FFT, FIR, IIR, convolution and correlation, are provided in the library. How does this ease the life of developers? Simple. Developers can implement the key DSP functionality of their applications by cascading these built-in functions to cover the required algorithm, thereby avoiding the forbidding task of handcrafting the routines in assembly language.
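
As a rough illustration of what "cascading" library routines might look like, the sketch below chains a filtering step into a spectral analysis using hypothetical C wrappers (dsp_fir, dsp_fft, dsp_magnitude). These names and prototypes are stand-ins for whatever a coprocessor's run-time library actually exposes; they are not the FILU API.

```c
/*
 * Sketch of cascading run-time library routines to build an algorithm.
 * The prototypes below stand in for a hypothetical library header;
 * they are illustrative assumptions, not an actual product API.
 */
#include <stdint.h>

#define N 256

extern void dsp_fir(const int16_t *in, int16_t *out, int n,
                    const int16_t *coeff, int taps);
extern void dsp_fft(const int16_t *in, int16_t *re, int16_t *im, int n);
extern void dsp_magnitude(const int16_t *re, const int16_t *im,
                          int16_t *mag, int n);

/* Band-limit the input block, then compute its spectrum's magnitude. */
void analyze_block(const int16_t *samples, const int16_t *coeff, int taps,
                   int16_t *spectrum)
{
    int16_t filtered[N], re[N], im[N];

    dsp_fir(samples, filtered, N, coeff, taps);  /* built-in FIR routine */
    dsp_fft(filtered, re, im, N);                /* built-in FFT routine */
    dsp_magnitude(re, im, spectrum, N);          /* built-in magnitude   */
}
```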
