Neural Networks and the Future
The recent embedded neural network symposium held at Cadence wrapped up with a panel session. Chris Rowen was the moderator, and I think the panelists were Song Han, Ren Wu, Forrest Iandola, Kai Yu, and Jeff Bier (all of whom presented earlier). I didn't note down who said what, so I'll just report on some of the points that were made. Stuff in [square brackets] is my own commentary, not something any of the panelists said explicitly.
During the sessions, several speakers talked about how 8 bits (or even 4 bits or, in some cases, 2) are precise enough for inference, and 32-bit floating point isn't really needed. Yet the real-world deployments all seem to be sticking with GPUs. The panelists put this down to lack of experience: low-precision approaches are only just showing up in the literature now. Everyone is excited by how fast the field is moving, but the approaches actually being deployed change much more slowly. It only takes one highly visible success to move people, but going from 0 to 1 is really hard.
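[To make the precision argument concrete, here is a minimal sketch of symmetric, per-tensor post-training quantization, the simplest form of the 8-bit idea the speakers described. This is my own illustration, not anything the panelists presented; the function names and the toy layer shape are assumptions made purely for the example.]

```python
import numpy as np

# Illustrative sketch: map FP32 weights to INT8 with a single scale factor,
# then dequantize to see how small the reconstruction error is.

def quantize_int8(weights):
    """Symmetric per-tensor quantization of FP32 weights to INT8."""
    scale = np.abs(weights).max() / 127.0            # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values for comparison."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)  # toy weight matrix
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)
    print("max abs error:", np.abs(w - w_hat).max())  # roughly half the quantization step
```

[In practice, the per-tensor scale above would be refined per-channel, and activations would be quantized with calibration data, but even this crude version shows why 8 bits can be enough for inference.]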