Can programmable processors really be smaller than hardwired logic?
We run into this question often in customer meetings. We license our v-MP6000UDX processor and run applications such as CNNs, computer vision algorithms, and video codecs on it. These algorithms are frequently implemented in hard-wired logic instead of running in software on a processor, as we do. Intuitively, you’d expect such a hard-wired approach to result in much smaller, lower-power implementations. However, we’ve seen many designs, and we’ve found that the reverse is often true: using our processor results in smaller, lower-power solutions than hard-wired designs. In this article we highlight some of the reasons why that can be the case.
Silicon reuse
The first reason a processor-based design can result in a smaller solution is that a processor reuses its silicon far more. In a hard-wired design, each function in an application becomes its own dedicated circuit. With a processor, each function becomes code that resides in memory and is executed on the same datapath, giving the processor virtually unlimited functionality. The more functionality an application needs, the more efficient a processor-based approach becomes compared with implementing everything in hard-wired logic. Try implementing all of Android in hardware, for instance; it’s simply impossible.
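As a rough back-of-the-envelope illustration of this crossover, consider how total area scales with the number of functions in each approach. The area figures below are purely hypothetical placeholders, not videantis data; the point is only the shape of the two curves.

```python
# Back-of-the-envelope area model: hard-wired vs. processor-based.
# All area numbers are hypothetical, chosen only to illustrate the trend.

PROCESSOR_AREA_MM2 = 1.0          # fixed cost: one programmable core
AREA_PER_HW_FUNCTION_MM2 = 0.2    # dedicated circuit per hard-wired function
CODE_MEM_PER_FUNCTION_MM2 = 0.01  # extra instruction memory per software function

def hardwired_area(num_functions: int) -> float:
    """Each function gets its own circuit, so area grows linearly with it."""
    return num_functions * AREA_PER_HW_FUNCTION_MM2

def processor_area(num_functions: int) -> float:
    """One core is reused; only the code memory grows with functionality."""
    return PROCESSOR_AREA_MM2 + num_functions * CODE_MEM_PER_FUNCTION_MM2

for n in (2, 5, 10, 50):
    print(f"{n:3d} functions: hard-wired {hardwired_area(n):5.2f} mm^2, "
          f"processor {processor_area(n):5.2f} mm^2")
```

With these example numbers, hard-wired logic wins for a handful of functions, but once the application grows past the crossover point, the single reused processor plus code memory is the smaller solution.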