Startup Claims AI Design Wins
Gyrfalcon's inference chip has broad targets
Rick Merritt, EETimes
1/25/2018 00:01 AM EST
SAN JOSE, Calif. — Startup Gyrfalcon is moving fast with a chip for inferencing on deep neural networks, but it faces an increasingly crowded market in AI silicon. A year after it got its first funding, the company is showing a working chip and claiming design wins in smartphones, security cameras and industrial automation equipment.
Data centers typically train deep neural networks and run inference tasks on them using banks of servers. Increasingly, client and embedded systems from cars to handsets are adopting accelerators to speed the inferencing jobs.
Apple, Google and Huawei are already shipping smartphones with inferencing blocks in their custom SoCs. Google and Microsoft built inference accelerators for their data centers.