Equivalent-accuracy accelerated neural-network training using analogue memory
Nature. 2018 Jun;558(7708):60-67.
doi: 10.1038/s41586-018-0180-5. Epub 2018 Jun 6.
Stefano Ambrogio, Pritish Narayanan, Hsinyu Tsai, Robert M Shelby, Irem Boybat, Carmelo di Nolfo, Severin Sidler, Massimo Giordano, Martina Bodini, Nathan C P Farinha, Benjamin Killeen, Christina Cheng, Yassine Jaoudi, Geoffrey W Burr
- PMID: 29875487
- DOI: 10.1038/s41586-018-0180-5
Abstract
Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply-accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware-software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with 'polarity inversion' to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today's graphical processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers.
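The weight scheme the abstract describes can be made concrete with a small simulation. Below is a minimal NumPy sketch, an illustration rather than the authors' implementation: each weight is modelled as W = F*(G+ - G-) + (g+ - g-), where the phase-change-memory (PCM) pair holds the higher-significance, long-term part and a volatile capacitor pair takes the frequent, near-linear backpropagation updates, with periodic transfer and polarity inversion. The gain factor F = 3, the `AnalogueWeight` class, and the `bias` magnitude modelling up/down programming asymmetry are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
F = 3.0  # gain factor weighting the higher-significance (PCM) pair; illustrative value

class AnalogueWeight:
    """Toy model of analogue weights W = F*(Gp - Gm) + (gp - gm)."""

    def __init__(self, shape):
        self.G = np.zeros(shape)   # net PCM contribution, Gp - Gm (long-term storage)
        self.g = np.zeros(shape)   # net capacitor contribution, gp - gm (volatile)
        self.sign = 1.0            # polarity with which the capacitor pair contributes
        # Fixed per-device up/down programming asymmetry (hypothetical magnitude).
        self.bias = rng.normal(0.0, 0.05, shape)

    def value(self):
        return F * self.G + self.sign * self.g

    def update(self, dw):
        # Frequent training updates land on the near-linear capacitor pair.
        # The fixed asymmetry injects a systematic error of sign*bias*|dw|
        # into the weight value ...
        self.g += self.sign * dw + self.bias * np.abs(dw)

    def transfer(self):
        # ... but transferring the capacitor value into the PCM pair and then
        # inverting the polarity makes that error change sign every interval,
        # so it cancels to first order over successive transfers
        # ('polarity inversion'). The weight value is preserved by the transfer.
        self.G += self.sign * self.g / F
        self.g[:] = 0.0
        self.sign = -self.sign

# The multiply-accumulate that dominates training is a matrix product; on the
# chip it happens in place via Ohm's and Kirchhoff's laws, so no weight data
# moves between memory and processor.
w = AnalogueWeight((4, 3))
x = rng.normal(size=(2, 4))
y = x @ w.value()                               # forward pass: one parallel analogue read
w.update(rng.normal(scale=0.01, size=(4, 3)))   # in situ weight update
w.transfer()                                    # periodic transfer with polarity inversion
```

Under these assumptions, the error injected by `bias` alternates sign between transfer intervals, which is the first-order cancellation the abstract attributes to polarity inversion; the dynamic-range limitation is likewise eased because the slow, coarse PCM pair only has to absorb the accumulated value at each transfer, while the capacitor pair handles the many small updates.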
Comment in
- Two artificial synapses are better than one. Adam GC. Nature. 2018 Jun;558(7708):39-40. doi: 10.1038/d41586-018-05297-5. PMID: 29872204. No abstract available.
Similar articles
- van Doremaele ERW, Stevens T, Ringeling S, Spolaor S, Fattori M, van de Burgt Y. Sci Adv. 2024 Jul 12;10(28):eado8999. doi: 10.1126/sciadv.ado8999. Epub 2024 Jul 12. PMID: 38996020. Free PMC article.
- Cost-effective stochastic MAC circuits for deep neural networks. Sim H, Lee J. Neural Netw. 2019 Sep;117:152-162. doi: 10.1016/j.neunet.2019.04.017. Epub 2019 May 20. PMID: 31170575.
- In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory. Li Y, Xiao TP, Bennett CH, Isele E, Melianas A, Tao H, Marinella MJ, Salleo A, Fuller EJ, Talin AA. Front Neurosci. 2021 Apr 8;15:636127. doi: 10.3389/fnins.2021.636127. eCollection 2021. PMID: 33897351. Free PMC article.
- Haensch W, Raghunathan A, Roy K, Chakrabarti B, Phatak CM, Wang C, Guha S. Adv Mater. 2023 Sep;35(37):e2204944. doi: 10.1002/adma.202204944. Epub 2023 Mar 2. PMID: 36579797. Review.
- The prospects for analogue neural VLSI. Murray AF, Woodburn R. Int J Neural Syst. 1997 Oct-Dec;8(5-6):559-79. doi: 10.1142/s0129065797000525. PMID: 10065836. Review.
Cited by
- Xu Z, Tang B, Zhang X, Leong JF, Pan J, Hooda S, Zamburg E, Thean AV. Light Sci Appl. 2022 Oct 6;11(1):288. doi: 10.1038/s41377-022-00976-5. PMID: 36202804. Free PMC article.
- Temperature-resilient solid-state organic artificial synapses for neuromorphic computing. Melianas A, Quill TJ, LeCroy G, Tuchman Y, Loo HV, Keene ST, Giovannitti A, Lee HR, Maria IP, McCulloch I, Salleo A. Sci Adv. 2020 Jul 3;6(27):eabb2958. doi: 10.1126/sciadv.abb2958. Print 2020 Jul. PMID: 32937458. Free PMC article.
- Seo S, Kim B, Kim D, Park S, Kim TR, Park J, Jeong H, Park SO, Park T, Shin H, Kim MS, Choi YK, Choi S. Nat Commun. 2022 Oct 28;13(1):6431. doi: 10.1038/s41467-022-34178-9. PMID: 36307483. Free PMC article.
- Multi-state MRAM cells for hardware neuromorphic computing. Rzeszut P, Chęciński J, Brzozowski I, Ziętek S, Skowroński W, Stobiecki T. Sci Rep. 2022 May 3;12(1):7178. doi: 10.1038/s41598-022-11199-4. PMID: 35504980. Free PMC article.
- Giant nonvolatile resistive switching in a Mott oxide and ferroelectric hybrid. Salev P, Del Valle J, Kalcheim Y, Schuller IK. Proc Natl Acad Sci U S A. 2019 Apr 30;116(18):8798-8802. doi: 10.1073/pnas.1822138116. Epub 2019 Apr 11. PMID: 30975746. Free PMC article.