Side-Channel Attacks Target Machine Learning (ML) Models
A team of North Carolina State University researchers recently published a paper that highlights the vulnerability of machine learning (ML) models to side-channel attacks. Specifically, the team used power-based side-channel attacks to extract the secret weights of a Binarized Neural Network (BNN) in a highly parallelized hardware implementation.
“Physical side-channel leaks in neural networks call for a new line of side-channel analysis research because it opens up a new avenue of designing countermeasures tailored for deep learning inference engines,” the researchers wrote. “On a SAKURA-X FPGA board, [our] experiments show that the first-order DPA attacks on [an] unprotected implementation can succeed with only 200 traces.”
According to Jeremy Hsu of IEEE Spectrum, the machine learning algorithms that let smart home devices and smart cars automatically recognize images or sounds, such as words or music, are among the artificial intelligence (AI) systems "most vulnerable" to such attacks.
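The article does not include the researchers' code, but the idea behind a first-order power-analysis attack can be illustrated with a short sketch. The example below is an assumption-laden toy, not the paper's implementation: it simulates Hamming-weight leakage from a BNN-style XNOR-popcount, then correlates every candidate weight vector's hypothesized popcount against the traces (a standard CPA-style first-order attack). All function names, the leakage model, and the simulated traces are illustrative.

```python
import numpy as np

def cpa_recover_weights(traces, inputs, n_weights):
    """Toy first-order power-analysis sketch against an XNOR-popcount.

    traces:    (n_traces, n_samples) measured/simulated power traces
    inputs:    (n_traces, n_weights) known {0,1} input bits per trace
    n_weights: number of secret binarized weight bits targeted at once

    For each candidate weight vector, hypothesize the popcount of
    XNOR(input, weights) and correlate it with every sample point
    (Hamming-weight leakage model). The candidate with the strongest
    positive correlation peak is returned.
    """
    t = traces - traces.mean(axis=0)          # center traces once
    best_w, best_peak = None, -np.inf
    for cand in range(2 ** n_weights):
        w = np.array([(cand >> j) & 1 for j in range(n_weights)])
        hyp = np.sum(1 - (inputs ^ w), axis=1).astype(float)  # popcount of XNOR
        hyp -= hyp.mean()
        # Pearson correlation between hypothesis and each sample point
        corr = (hyp @ t) / (np.linalg.norm(hyp) * np.linalg.norm(t, axis=0) + 1e-12)
        peak = corr.max()
        if peak > best_peak:
            best_w, best_peak = w.copy(), peak
    return best_w, best_peak

# Toy demonstration with 200 simulated traces (illustration only)
rng = np.random.default_rng(1)
n_traces, n_samples, n_weights = 200, 100, 4
secret = rng.integers(0, 2, n_weights)
inputs = rng.integers(0, 2, (n_traces, n_weights))
leak = np.sum(1 - (inputs ^ secret), axis=1)        # true XNOR popcount
traces = rng.normal(0.0, 1.0, (n_traces, n_samples))
traces[:, 37] += 0.4 * leak                         # leakage at one sample point
guess, peak = cpa_recover_weights(traces, inputs, n_weights)
print("secret:", secret, "recovered:", guess, "peak corr:", round(peak, 3))
```

Even in this toy setting, a few hundred traces are enough to separate the correct weight hypothesis from the wrong ones, which gives a feel for why the unprotected FPGA implementation in the paper could be broken with only 200 traces.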
To read the full article, click here
