The backpropagation algorithm implemented on spiking neuromorphic hardware

By Alpha Renner 1,2, Forrest Sheldon 3,4, Anatoly Zlotnik 5, Louis Tao 6,7 & Andrew Sornborger 8
1 Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich 8057, Switzerland. 
2 Forschungszentrum Jülich, Jülich 52428, Germany. 
3 Physics of Condensed Matter & Complex Systems (T-4), Los Alamos National Laboratory, Los Alamos, NM 87545, USA. 
4 London Institute for Mathematical Sciences, Royal Institution, London W1S 4BS, UK. 
5 Applied Mathematics & Plasma Physics (T-5), Los Alamos National Laboratory, Los Alamos, NM 87545, USA. 
6 Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing 100871, China. 
7 Center for Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China. 
8 Information Sciences (CCS-3), Los Alamos National Laboratory, Los Alamos, NM 87545, USA.

The capabilities of natural neural systems have inspired both new generations of machine learning algorithms and neuromorphic, very large-scale integrated circuits capable of fast, low-power information processing. However, it has been argued that most modern machine learning algorithms are not neurophysiologically plausible. In particular, the workhorse of modern deep learning, the backpropagation algorithm, has proven difficult to translate to neuromorphic hardware. This study presents a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing, implemented on Intel’s Loihi neuromorphic research processor. We demonstrate a proof-of-principle three-layer circuit that learns to classify digits and clothing items from the MNIST and Fashion MNIST datasets. To our knowledge, this is the first work to show a spiking neural network (SNN) implementation of the exact backpropagation algorithm that is fully on-chip, without a computer in the loop. It is competitive in accuracy with off-chip-trained SNNs and achieves an energy-delay product suitable for edge computing. This implementation shows a path for using in-memory, massively parallel neuromorphic processors for low-power, low-latency implementation of modern deep learning applications.
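For readers less familiar with the algorithm the paper maps onto spiking hardware, the following is a minimal sketch of standard backpropagation in a three-layer (input, hidden, output) rate-based network. It is an illustration of the generic algorithm only, not of the paper's synfire-gated spiking circuit; the network sizes, toy data, activation function, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary classification problem (stand-in for a dataset like MNIST):
# 8-dimensional inputs, label depends on the sign of the first feature.
X = rng.normal(size=(20, 8))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

# Weights of an 8-16-1 network (three layers of units, two weight matrices).
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def loss(X, y):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    return np.mean((out - y) ** 2)

lr = 1.0
initial = loss(X, y)
for _ in range(500):
    # Forward pass: activations flow input -> hidden -> output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: the output error is propagated layer by layer
    # via the chain rule (this is the "exact" gradient signal that
    # the paper's spiking circuit computes on-chip).
    d_out = (out - y) * out * (1 - out)   # error at output pre-activations
    d_h = (d_out @ W2.T) * h * (1 - h)    # error at hidden pre-activations
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

final = loss(X, y)
print(initial, final)  # mean squared error before and after training
```

The point of the sketch is the structure of the backward pass: each layer's error term is the downstream error multiplied through the transposed weights and the local activation derivative, which is the computation that is hard to realize on neuromorphic substrates and that this work implements with spikes.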

