Axelera AI Platform Accelerates Edge Application Deployment
By Maurizio Di Paolo Emilio, EETimes (November 21, 2023)
Conventional AI applications often require local devices to send data to a centralized cloud server for analysis and processing. While this approach works well in many scenarios, it has drawbacks, including latency, bandwidth consumption, and privacy and security concerns. Moving AI processing closer to where the data is generated, an approach known as bringing AI to the edge, addresses these issues by running computations locally on the device or near the data source.
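The cloud-versus-edge tradeoff can be sketched in a few lines of Python. The snippet below is purely illustrative and is not Axelera code: the endpoint URL, model file, and function names are hypothetical, and it assumes the `requests` and `onnxruntime` packages are installed. The point is simply that the cloud path ships the full input off the device, while the edge path keeps the data local and returns only the result.

```python
import numpy as np

# Hypothetical input: one preprocessed image tensor (NCHW, float32).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

def classify_in_cloud(frame, endpoint="https://example.com/infer"):
    """Cloud approach: send the raw data to a remote server and wait for the reply.
    Latency and bandwidth scale with the payload, and the data leaves the device."""
    import requests
    response = requests.post(endpoint, data=frame.tobytes(), timeout=5.0)
    return response.json()

def classify_at_edge(frame, model_path="model.onnx"):
    """Edge approach: run the same model locally, so only the small result
    ever needs to leave the device."""
    import onnxruntime as ort
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: frame})[0]
```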
In an interview with EE Times, Axelera AI co-founder and CEO Fabrizio Del Maffeo discussed the company's recent milestones, as well as the appointment of a former Arm executive to Axelera's board of directors.
To read the full article, click here