Reduce Time to Market for FPGA-Based Communication and Datacenter Applications
Joe Mallet, Synopsys
EETimes (9/13/2017 03:50 PM EDT)
As FPGA-based designs grow larger and more complex, synthesis tools that deliver an automated flow are the obvious choice for producing optimized designs on schedule.
The FPGA market is changing: advances in power, performance, and cost are driving FPGA adoption in datacenter applications such as network switches, CPUs, and network acceleration.
The growing need for FPGAs in these applications is driven by their ability to meet the required processing throughput and latency. With ever-larger data sets to process, FPGAs are a good fit for the acceleration these workloads demand. While the flexibility of FPGAs is an advantage for designers, it also poses a challenge: in addition to the hardware, designers must implement the drivers, software, and application layers for these applications. They also need the best quality of results (QoR) for performance and area, accelerated runtimes, and deep debug capabilities to speed system design and get to market quickly.
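To illustrate the software side of that split, the host-side sketch below shows the rough shape of an application layer that hands packet batches to an FPGA datapath and waits for completions. The fpga_open, fpga_enqueue, and fpga_wait calls are hypothetical placeholders for whatever driver API a given board vendor supplies; they are stubbed here so the example compiles, and only the overall structure reflects the application-layer work the article describes.

/* Hypothetical host-side application layer for an FPGA packet-processing
 * offload. The fpga_* calls are placeholders (stubbed below) standing in
 * for a vendor driver API; real names and semantics will differ. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct { int id; } fpga_dev;          /* opaque device handle (stub) */

static fpga_dev *fpga_open(const char *path)  /* stub: open the accelerator */
{ static fpga_dev d = { 0 }; (void)path; return &d; }

static int fpga_enqueue(fpga_dev *d, const uint8_t *buf, size_t len)
{ (void)d; (void)buf; return (int)len; }      /* stub: submit one batch */

static int fpga_wait(fpga_dev *d)             /* stub: block until the batch completes */
{ (void)d; return 0; }

int main(void)
{
    fpga_dev *dev = fpga_open("/dev/fpga0");  /* hypothetical device node */
    uint8_t batch[1500] = { 0 };              /* one MTU-sized packet buffer */

    /* Application layer: hand batches of packets to the FPGA datapath and
     * wait for completions, keeping the CPU free for control-plane work. */
    for (int i = 0; i < 4; ++i) {
        fpga_enqueue(dev, batch, sizeof batch);
        fpga_wait(dev);
    }
    printf("processed 4 batches via FPGA offload (stubbed)\n");
    return 0;
}

In a real design, the stubs would be replaced by the vendor's driver calls, and the FPGA-side datapath would be developed and synthesized separately with the flow the article discusses.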
Read the full article at EETimes.