Building security into an AI SoC using CPU features with extensions
Marco Ciaffi (Dover Microsystems), John Min (Andes Technology)
embedded.com (April 12, 2021)
With the rapid deployment of artificial intelligence (AI), the focus of AI system-on-chip (SoC) design has been on building smarter, faster, and cheaper devices rather than safer, more trusted, and more secure ones.
Before we look at how to build security into AI SoCs at the silicon level, consider what an AI system is. It comprises three elements:
- an inference engine that processes data, makes decisions, and sends commands;
- training data and a set of weights created during the machine learning phase;
- the physical device that carries out the commands.
For example, a Nest thermostat can set a user’s preferred temperature by analyzing and learning from the user’s behavior. Eventually, it can predict that the user likes to set the temperature 10 degrees cooler at night, and the inference engine will then send a command to the thermostat to lower the temperature at the same time every day.
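To make the inference-to-command path concrete, here is a minimal sketch in C of the thermostat behavior described above. It is purely illustrative: the names (learned_night_offset_f, set_target_temp, run_inference) are hypothetical and do not correspond to any actual Nest or vendor API; the "weight" is reduced to a single learned offset for clarity.

```c
/* Toy sketch of the inference-to-command path described above.
 * All identifiers are hypothetical illustrations, not a real device API. */
#include <stdbool.h>
#include <stdio.h>

/* A "weight" produced during the learning phase: how much cooler the
 * user prefers the room at night (degrees Fahrenheit). */
static const float learned_night_offset_f = 10.0f;

/* Stand-in for the physical device interface that carries out commands. */
static void set_target_temp(float temp_f)
{
    printf("thermostat: set target to %.1f F\n", temp_f);
}

/* Inference step: derive a command from the time of day and the learned weight. */
static void run_inference(int hour_of_day, float daytime_temp_f)
{
    bool is_night = (hour_of_day >= 22 || hour_of_day < 6);
    float target = is_night ? daytime_temp_f - learned_night_offset_f
                            : daytime_temp_f;
    set_target_temp(target);
}

int main(void)
{
    run_inference(23, 72.0f); /* night: commands 62.0 F */
    run_inference(14, 72.0f); /* day:   commands 72.0 F */
    return 0;
}
```

Even in this toy form, the three elements are visible: the decision logic (inference engine), the learned offset (weights), and the actuator call (physical device), which is why compromising any one of them compromises the system.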