Paul Williamson on Edge AI, Llama 3.2 on Arm
By Nitin Dahad, EETimes (September 27, 2024)
EE Times caught up with Paul Williamson, senior VP and general manager of the IoT business for Arm, for an exclusive virtual interview after his keynote talk at the Edge Impulse Imagine conference in Mountain View, Calif., this week.
During the interview, Williamson provided an overview of his talk at the Edge Impulse event. He then discussed examples of edge AI and why the edge favors small language models (SLMs) trained for specific tasks. “The edge is increasingly about expert systems rather than large generic models,” Williamson said.
He helped answer the question, “How do we make edge AI real in real-world applications?” and discussed the significance of the other announcement Arm made this week: the company’s collaboration with Meta to enable Llama 3.2 LLMs on Arm CPUs.