How Will Deep Learning Change SoCs?
Junko Yoshida, EETimes
3/30/2015 00:00 AM EDT
MADISON, Wis. – Deep Learning is already changing the way computers see, hear and identify objects in the real world.
However, the bigger -- and perhaps more pertinent -- questions for the semiconductor industry are: Will "deep learning" ever migrate into smartphones, wearable devices, or the tiny computer vision SoCs used in highly automated cars? Has anybody come up with an SoC architecture optimized for neural networks? If so, what does it look like?
To read the full article, click here