How Will Deep Learning Change SoCs?
Junko Yoshida, EETimes
3/30/2015 00:00 AM EDT
MADISON, Wis. – Deep Learning is already changing the way computers see, hear and identify objects in the real world.
However, the bigger -- and perhaps more pertinent -- questions for the semiconductor industry are: Will “deep learning” ever migrate into smartphones, wearable devices, or the tiny computer vision SoCs used in highly automated cars? Has anybody come up with an SoC architecture optimized for neural networks? If so, what does it look like?
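Those questions ultimately come down to compute and power: convolutional networks spend nearly all of their cycles in multiply-accumulate (MAC) loops, and any neural-network-optimized SoC block exists to run millions of them per frame within a milliwatt-class budget. The minimal C sketch below illustrates that inner loop for an 8-bit feature map; the function name, argument layout, and kernel size are illustrative assumptions, not taken from the article.

```c
/*
 * Minimal sketch of the MAC inner loop that dominates CNN inference.
 * All names and sizes here are illustrative assumptions.
 */
#include <stdint.h>
#include <stddef.h>

/* One output activation of a 2-D convolution over an 8-bit feature map. */
int32_t conv_mac(const int8_t *patch,   /* K*K input window, row-major */
                 const int8_t *weights, /* K*K filter weights          */
                 size_t k)              /* kernel size, e.g. 3 or 5    */
{
    int32_t acc = 0;                    /* wide accumulator avoids overflow */
    for (size_t i = 0; i < k * k; ++i)
        acc += (int32_t)patch[i] * (int32_t)weights[i];
    return acc;                         /* scaling/activation applied later */
}
```

Whether such loops run on a CPU, a GPU, or a dedicated accelerator is precisely the architectural choice the article asks about.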
To read the full article, click here