Building security into an AI SoC using CPU features with extensions
Marco Ciaffi (Dover Microsystems), John Min (Andes Technology)
embedded.com (April 12, 2021)
With the rapid deployment of artificial intelligence (AI), the focus of AI system-on-chip (SoC) design has been on building smarter, faster, and cheaper devices rather than safer, trusted, and more secure ones, as noted by Jason Matheny, founding director of the Center for Security and Emerging Technology at Georgetown University.
Before we look at how to build security into AI SoCs at the silicon level, consider what an AI system is. It comprises three elements:
- an inference engine that processes data, makes decisions, and sends commands;
- training data and a set of weights created during the machine learning phase;
- the physical device that carries out the commands.
For example, a Nest thermostat learns a user’s preferred temperature by analyzing the user’s behavior. Eventually, it can predict that the user likes the temperature 10 degrees cooler at night, and the inference engine will then send a command to the thermostat to lower the temperature at the same time every day.
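To make the three elements concrete, here is a minimal C sketch, not taken from the article, that models the learned weights, the inference engine’s decision, and the command sent to the physical thermostat. The names (thermo_infer(), model_weights_t, thermostat_t) and the 10-degree night offset are hypothetical placeholders chosen to match the example above.

```c
/* Minimal sketch of the three AI-system elements described above:
 * learned weights, an inference engine, and the physical device.
 * All names here are illustrative, not from the article. */
#include <stdio.h>

/* Learned model: weights produced during the machine-learning phase. */
typedef struct {
    float night_offset_f;   /* e.g., 10 degrees cooler at night */
} model_weights_t;

/* Physical device: here, a thermostat setpoint actuator. */
typedef struct {
    float setpoint_f;
} thermostat_t;

/* Inference engine: processes data (time of day), makes a decision,
 * and sends a command to the device. */
static void thermo_infer(const model_weights_t *w, thermostat_t *dev,
                         int hour, float preferred_f)
{
    int night = (hour >= 22 || hour < 6);
    float target = night ? preferred_f - w->night_offset_f : preferred_f;
    dev->setpoint_f = target;               /* the "command" to the device */
    printf("hour %02d -> setpoint %.1f F\n", hour, target);
}

int main(void)
{
    model_weights_t w = { .night_offset_f = 10.0f };
    thermostat_t dev = { .setpoint_f = 72.0f };

    thermo_infer(&w, &dev, 14, 72.0f);      /* daytime: hold 72 F */
    thermo_infer(&w, &dev, 23, 72.0f);      /* night: drop to 62 F */
    return 0;
}
```

The point of the sketch is that the command path from inference engine to physical device is ordinary software and data, which is exactly the surface an attacker can target and the part that silicon-level security features must protect.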