Side-Channel Attacks Target Machine Learning (ML) Models
A team of North Carolina State University researchers recently published a paper highlighting the vulnerability of machine learning (ML) models to side-channel attacks. Specifically, the team used power-based side-channel attacks to extract the secret weights of a Binarized Neural Network (BNN) from a highly parallelized hardware implementation.
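For context on the target, a binarized neural network constrains its weights and activations to +1/-1, so each dot product reduces to an XNOR-and-popcount operation that maps naturally onto parallel hardware. The Python sketch below is a minimal, illustrative model of one such layer; the names and sizes are placeholders, not the researchers' actual design.

```python
import numpy as np

# Illustrative only: a single fully connected layer of a binarized neural
# network, where weights and activations are constrained to +1/-1. In hardware
# the +/-1 dot product reduces to XNOR-and-popcount, which is what enables a
# highly parallelized implementation. Sizes below are placeholder assumptions.

def binarized_layer(x, weights):
    """Return sign(weights @ x) with +/-1 inputs and +/-1 weights."""
    pre_activation = weights @ x                 # small-integer dot products
    return np.where(pre_activation >= 0, 1, -1)  # binarizing activation

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=16)                 # binarized input vector
weights = rng.choice([-1, 1], size=(8, 16))      # secret +/-1 weight matrix (the attack target)
print(binarized_layer(x, weights))
```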
“Physical side-channel leaks in neural networks call for a new line of side-channel analysis research because it opens up a new avenue of designing countermeasures tailored for deep learning inference engines,” the researchers wrote. “On a SAKURA-X FPGA board, [our] experiments show that the first-order DPA attacks on [an] unprotected implementation can succeed with only 200 traces.”
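To give a feel for what a first-order DPA (differential power analysis) attack involves, here is a hedged sketch of a difference-of-means attack recovering a single ±1 weight bit from simulated power traces. The trace and input arrays, the single-bit XNOR leakage model, and the decision rule are assumptions made for illustration; the paper's actual attack on the SAKURA-X board targets the parallel FPGA implementation and is not reproduced here.

```python
import numpy as np

# Illustrative only: a simplified first-order DPA (difference-of-means) attack
# that tries to recover one +/-1 weight bit of a binarized neuron from power
# traces. The leakage model and decision rule are assumptions for this sketch.

def first_order_dpa(traces, inputs):
    """Guess the secret weight bit whose XNOR prediction best splits the traces.

    traces : (n_traces, n_samples) array of power measurements
    inputs : (n_traces,) array of known 0/1 input bits

    Assumes a Hamming-weight-style leak, i.e. power is higher when the
    predicted XNOR output is 1, so only the correct guess produces a large
    positive difference of means.
    """
    best_guess, best_peak = None, -np.inf
    for weight_guess in (0, 1):                      # encode -1 as 0, +1 as 1
        prediction = 1 - (inputs ^ weight_guess)     # XNOR of input bit and guess
        mean_one = traces[prediction == 1].mean(axis=0)
        mean_zero = traces[prediction == 0].mean(axis=0)
        peak = np.max(mean_one - mean_zero)          # signed difference of means
        if peak > best_peak:
            best_guess, best_peak = weight_guess, peak
    return best_guess, best_peak

# Quick self-check with simulated traces: inject a leak proportional to the
# XNOR of the input and a secret weight bit, plus Gaussian noise.
rng = np.random.default_rng(0)
n_traces, n_samples = 200, 50                        # 200 traces, echoing the figure quoted above
secret_weight = 1
inputs = rng.integers(0, 2, n_traces)
leak = 1 - (inputs ^ secret_weight)
traces = rng.normal(0.0, 1.0, (n_traces, n_samples))
traces[:, 25] += 4.0 * leak                          # leaky sample point
print(first_order_dpa(traces, inputs))               # expected to recover weight_guess == 1
```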
According to Jeremy Hsu of IEEE Spectrum, the machine learning algorithms that enable smart home devices and smart cars to automatically recognize various types of images or sounds, such as words or music, are among the artificial intelligence (AI) systems “most vulnerable” to such attacks.