How audio development platforms can take advantage of accelerated ML processing
The Arm® Cortex®-M55 processor and Ethos-U55 microNPU (Neural Processing Unit) open up new machine learning (ML) opportunities for edge and endpoint devices. ML is becoming more common in audio applications in the form of voice user interfaces, voice identification and security, and natural language communication systems. As such, this hardware is best paired with a powerful and flexible audio development platform that can take advantage of accelerated ML processing.
The Arm Cortex-M55 processor is the most AI-capable Cortex-M processor, bringing endpoint AI to billions more devices. It features the Helium vector instruction set, which enables a significant increase in digital signal processing (DSP) and ML capability, improving throughput and maximizing the use of processor resources.
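Helium's benefit is easiest to see on the multiply-accumulate loops that dominate audio DSP and ML kernels. The portable C sketch below shows a Q15 (16-bit fixed-point) dot product, the kind of inner loop that Helium's vector multiply-accumulate instructions can process several lanes at a time; the function name here is illustrative, not an Arm API.

```c
#include <stdint.h>
#include <stddef.h>

/* Q15 fixed-point dot product: the inner loop of FIR filters and
 * fully connected ML layers. On Cortex-M55, Helium vector
 * multiply-accumulate instructions can replace this scalar loop,
 * processing multiple 16-bit lanes per operation. The scalar
 * version below is a portable reference for the same arithmetic.
 * (Illustrative helper, not an Arm or CMSIS-DSP function.) */
static int64_t dot_q15(const int16_t *a, const int16_t *b, size_t n)
{
    int64_t acc = 0;  /* wide accumulator avoids overflow */
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}
```

In production code on Cortex-M55 one would typically call the CMSIS-DSP library's `arm_dot_prod_q15`, which ships with Helium-optimized builds, rather than hand-writing the loop.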
The Arm Ethos-U55 is a new class of machine learning processor, designed specifically to accelerate ML inference in embedded and IoT devices. When combined with a Cortex-M55 processor, the Ethos-U55 delivers ML performance hundreds of times faster than existing Cortex-M-based systems.