Expedera Deep Learning Accelerator IP Achieves First Volume Production Shipments in Consumer Devices
Santa Clara, California — March 1, 2022 — Expedera, Inc., a leading provider of scalable Deep Learning Accelerator (DLA) semiconductor intellectual property (IP), today announced that a global consumer device maker is now in production with its Origin™ DLA solution.
Many consumer devices include video capabilities. However, at resolutions of 4K and up, much of the image processing must now be handled on the device rather than in the cloud. Functions such as low-light video denoising require that data be processed in real time, but at higher image resolutions it is no longer feasible to transfer the volume of data to and from the cloud fast enough. To meet the expanding need for advanced on-device image processing and other new deep learning applications, device manufacturers are adding highly efficient specialized accelerators such as Expedera's.
“I am delighted to announce the first shipping consumer product with Expedera IP,” said Da Chuang, founder and CEO of Expedera. “A key advantage of our DLA architecture is the capability to finely tune a solution to meet the unique design requirements of new and emerging customer applications. Our ability to adapt our IP to any device architecture and optimize for any design space enables customers to create extremely efficient solutions with industry-leading performance.”
In a recent Microprocessor Report, editor-in-chief Linley Gwennap noted, “Expedera’s Origin deep-learning accelerator provides industry-leading performance per watt for mobile, smart-home, and other camera-based devices. Its architecture is the most efficient at up to 18 TOPS per watt in 7nm, as measured on the test chip.”
Expedera takes a network-centric approach to AI acceleration, in which the architecture segments the neural network into packets, which are essentially command streams. These packets are then scheduled and executed by the hardware in a fast, efficient, and deterministic manner. This enables designs that reduce total memory requirements to the theoretical minimum and eliminate the memory bottlenecks that can limit application performance. Expedera's co-design approach additionally enables a simpler software stack, a system-aware design, and a more productive development experience. The platform supports popular AI frontends including TensorFlow, ONNX, Keras, MXNet, Darknet, Core ML, and Caffe2 through Apache TVM.
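The packet idea can be sketched in a few lines of toy code. This is purely illustrative and not Expedera's actual design: the `Packet` structure, the per-layer tiling, and the in-order execution loop are all assumptions chosen to show the general pattern of splitting a network into small, deterministically ordered units of work.

```python
# Illustrative sketch only (not Expedera's implementation): segment a
# network's layers into small "packets" of work, then execute them in a
# fixed, deterministic order.
from dataclasses import dataclass

@dataclass
class Packet:
    layer: str   # which layer this packet belongs to (hypothetical field)
    op: str      # operation the hardware would execute
    tile: int    # which slice of the layer's work this packet covers

def segment_network(layers, tiles_per_layer=2):
    """Split each layer into a fixed number of work packets."""
    packets = []
    for name, op in layers:
        for t in range(tiles_per_layer):
            packets.append(Packet(layer=name, op=op, tile=t))
    return packets

def execute(packets):
    """Run packets strictly in order; return the execution trace."""
    return [f"{p.layer}:{p.op}:tile{p.tile}" for p in packets]

network = [("conv1", "conv2d"), ("relu1", "relu")]
trace = execute(segment_network(network))
print(trace)
```

Because the packet order is fixed ahead of time, the execution trace is identical on every run, which is the property that lets memory needs be computed exactly rather than provisioned for the worst case.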
For more information on the Expedera Origin family of deep learning accelerators, visit our website at https://www.expedera.com/products-overview/
About Expedera
Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI-inference applications. Third-party silicon validated, Expedera’s solutions produce superior performance and are scalable to a wide range of applications from edge nodes and smartphones to automotive and data centers. Expedera’s Origin deep learning accelerator products are easily integrated, readily scalable, and can be customized to application requirements. The company is headquartered in Santa Clara, California. Visit expedera.com
Related Semiconductor IP
- Deep Learning Accelerator
- High performance-efficient deep learning accelerator for edge and end-point inference
Related News
- SiFive Announces First Open-Source RISC-V SoC Platform with NVIDIA Deep Learning Accelerator Technology
- Mentor's Catapult HLS Halves Chips&Media's Time to Deliver Deep Learning Hardware Accelerator IP
- Altek Licenses CEVA Imaging and Vision DSPs for Deep Learning in Mobile Devices
- Harvard Researchers Select Flex Logix's Embedded FPGA Technology to Design Deep Learning SoC