The 'what' and 'why' of transaction level modeling
Bryan Bowyer, Mentor Graphics
02/27/2006 9:00 AM EST, EE Times
Advances in both the physical properties of chips and in design tools allow us to build huge systems into “just a few” square millimeters. The problem is that modeling these systems at the register-transfer level (RTL) is labor intensive, and simulation runtimes are so long they have become impractical. If this is a problem today, just imagine trying to design, integrate and verify the even more massive systems we will build 10 years from now.
Transaction level models (TLMs) can help with design, integration and verification issues associated with large, complex systems. TLMs allow designers to model hardware at a higher level of abstraction, helping to smooth the integration process by providing fast simulation and simplifying the debugging process during integration.
Designers start with a variety of parts at different levels of abstraction, often including algorithmic models written in pure ANSI C++. These models are combined with a detailed specification of how they should be brought together into a system. Then the models are divided among several design teams for implementation into RTL. Other pieces — often the majority of the system — consist of existing blocks reused in the new design.
Algorithmic synthesis tools help RTL designers quickly implement new, original content for various blocks. This allows a fast path from a collection of algorithms to a set of verified RTL blocks that need to be integrated. But any errors or misunderstandings in the specifications for the system or for the IP blocks will still lead to a system that doesn’t work.
Transaction level models could be used to simplify integration and testing, but where do the models come from? Attempts to manually create TLMs in SystemC by adding hardware details to the pure ANSI C++ source are often as error-prone and time consuming as manually writing RTL.
While this effort is certainly justified for reusable blocks, someone still has to maintain these models. For the original signal-processing content, however, the best approach is for the algorithmic synthesis tool to simply generate the TLMs as part of the design and verification flow.
An added benefit of this approach is that system modeling and integration can now be used to refine each block in your system. Information gathered during integration is fed back into the algorithmic synthesis flow, allowing blocks to be re-optimized based on the system.