Resch, 2021 - Google Patents

Binary Neural Networks in Spintronic Memory

Document ID: 4437703802110379145
Author: Resch
Publication year: 2021

Snippet

Neural networks span a wide range of applications of industrial and commercial significance. Binary neural networks (BNN) are particularly effective in trading accuracy for performance, energy efficiency or hardware/software complexity. In this thesis, I demonstrate …

Continue reading at conservancy.umn.edu (PDF).
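
As context for the snippet: the arithmetic primitive that binary neural networks (and the in-memory accelerators listed under Similar Documents) reduce a dot product to is an XNOR followed by a population count over {-1, +1} operands. The following is a minimal, hypothetical NumPy sketch of that reduction; it is not code from the thesis, and the helper names (binarize, xnor_popcount_dot) are illustrative only.

```python
import numpy as np

def binarize(x):
    # Hypothetical helper: map real values to {-1, +1} by sign (zero maps to +1).
    return np.where(x >= 0, 1, -1)

def xnor_popcount_dot(a_bits, b_bits):
    # Hypothetical helper: dot product of two {-1, +1} vectors via XNOR + popcount,
    # the reduction BNN hardware typically implements instead of multiply-accumulate.
    a = a_bits > 0                         # encode +1 as True, -1 as False
    b = b_bits > 0
    matches = np.count_nonzero(~(a ^ b))   # XNOR: positions where the signs agree
    n = a_bits.size
    # Each match contributes +1 and each mismatch -1, so dot = matches - (n - matches).
    return 2 * matches - n

# Usage: the XNOR/popcount result equals the ordinary integer dot product.
rng = np.random.default_rng(0)
w, x = rng.standard_normal(128), rng.standard_normal(128)
wb, xb = binarize(w), binarize(x)
assert xnor_popcount_dot(wb, xb) == int(np.dot(wb, xb))
```

The closing assertion checks the standard identity (dot = 2 x sign-matches - n); in spintronic processing-in-memory designs, the XNOR and the population count are the steps moved into the memory array.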

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for programme control, e.g. control unit
    • G06F9/06: Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G06F9/30: Arrangements for executing machine-instructions, e.g. instruction decode
    • G06F9/30003: Arrangements for executing specific machine instructions
    • G06F9/30007: Arrangements for executing specific machine instructions to perform operations on data operands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/52: Multiplying; Dividing
    • G06F7/523: Multiplying only
    • G06F7/53: Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C11/00: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/02: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using magnetic elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computer systems based on biological models
    • G06N3/02: Computer systems based on biological models using neural network models
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C11/00: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/56: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50: Computer-aided design
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F2207/00: Indexing scheme relating to methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F2207/38: Indexing scheme relating to groups G06F7/38 - G06F7/575
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C15/00: Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
    • G11C15/04: Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C2211/00: Indexing scheme relating to digital stores characterized by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C2211/56: Indexing scheme relating to G11C11/56 and sub-groups for features not covered by these groups
    • G11C2211/564: Miscellaneous aspects
    • G11C2211/5641: Multilevel memory having cells with different number of storage levels
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00: Subject matter not provided for in other groups of this subclass

Similar Documents

    • Resch et al. (2019). PIMBALL: Binary neural networks in spintronic memory
    • Angizi et al. (2018). CMP-PIM: An energy-efficient comparator-based processing-in-memory neural network accelerator
    • Zabihi et al. (2018). In-memory processing on the spintronic CRAM: From hardware design to application mapping
    • Jhang et al. (2021). Challenges and trends of SRAM-based computing-in-memory for AI edge devices
    • Angizi et al. (2018). IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network
    • Umesh et al. (2019). A survey of spintronic architectures for processing-in-memory and neural networks
    • CN109766309B (2020-07-28). Spin-memory-computing integrated chip
    • Chang et al. (2019). PXNOR-BNN: In/with spin-orbit torque MRAM preset-XNOR operation-based binary neural networks
    • Luo et al. (2020). Accelerating deep neural network in-situ training with non-volatile and volatile memory based hybrid precision synapses
    • Angizi et al. (2019). Accelerating deep neural networks in processing-in-memory platforms: Analog or digital approach?
    • He et al. (2017). Exploring STT-MRAM based in-memory computing paradigm with application of image edge extraction
    • Angizi et al. (2018). DIMA: A depthwise CNN in-memory accelerator
    • Pham et al. (2022). STT-BNN: A novel STT-MRAM in-memory computing macro for binary neural networks
    • Angizi et al. (2019). ParaPIM: A parallel processing-in-memory accelerator for binary-weight deep neural networks
    • Yu et al. (2014). Energy efficient in-memory machine learning for data intensive image-processing by non-volatile domain-wall memory
    • Jain et al. (2018). Computing-in-memory with spintronics
    • Yan et al. (2019). iCELIA: A full-stack framework for STT-MRAM-based deep learning acceleration
    • CN114388021A (2022-04-22). Program-assisted ultra-low-power inference engine using external magnetic fields
    • Angizi et al. (2022). PISA: A binary-weight processing-in-sensor accelerator for edge image processing
    • Zhao et al. (2023). NAND-SPIN-based processing-in-MRAM architecture for convolutional neural network acceleration
    • CN114341981A (2022-04-12). Memory with artificial intelligence mode
    • Ollivier et al. (2022). CORUSCANT: Fast efficient processing-in-racetrack memories
    • CN114286977B (2024-08-16). Artificial intelligence accelerator
    • Song et al. (2019). ReBNN: In-situ acceleration of binarized neural networks in ReRAM using complementary resistive cell
    • Angizi et al. (2017). IMC: Energy-efficient in-memory convolver for accelerating binarized deep neural network