Angizi et al., 2018 - Google Patents

IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network

Angizi et al., 2018

Document ID
8212378218705846570
Author
Angizi S
He Z
Parveen F
Fan D
Publication year
2018
Publication venue
2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC)


Snippet

In this paper, we pave a novel way towards the concept of bit-wise In-Memory Convolution Engine (IMCE) that could implement the dominant convolution computation of Deep Convolutional Neural Networks (CNN) within memory. IMCE employs parallel computational …
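The bit-wise decomposition the snippet points to can be made concrete in software: a convolution reduces to sums of products (cf. the G06F7/5443 classification below), and a sum of products over unsigned M-bit operands splits into bitwise ANDs of bit slices, popcounts, and shifts, the kind of primitives a memory array can evaluate in parallel. The sketch below is a hypothetical Python analogue of that arithmetic, not IMCE's actual SOT-MRAM circuit; the function name and bit width are illustrative only.

    # Hypothetical software analogue of a bit-wise in-memory dot product.
    # Each operand is split into bit slices; a memory array would evaluate
    # the AND + popcount of two slices in parallel, and shifts recombine
    # the partial sums. Illustrative only; not IMCE's actual circuit.
    def bitwise_dot(xs, ws, bits=4):
        """Dot product of unsigned `bits`-bit vectors via bit-slice AND/popcount."""
        total = 0
        for i in range(bits):              # bit position in the inputs
            for j in range(bits):          # bit position in the weights
                # Count positions where both bit slices are 1 (AND + popcount).
                ones = sum(((x >> i) & 1) & ((w >> j) & 1) for x, w in zip(xs, ws))
                total += ones << (i + j)   # partial sum weighted by 2^(i+j)
        return total

    xs, ws = [3, 5, 2, 7], [1, 6, 4, 2]
    assert bitwise_dot(xs, ws) == sum(x * w for x, w in zip(xs, ws))  # 55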

Full text: par.nsf.gov (PDF)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/52: Multiplying; Dividing
    • G06F7/523: Multiplying only
    • G06F7/53: Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/544: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
    • G06F7/5443: Sum of products
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/50: Adding; Subtracting
    • G06F7/505: Adding; Subtracting in bit-parallel fashion, i.e. having a different digit-handling circuit for each denomination
    • G06F7/506: Adding; Subtracting in bit-parallel fashion, i.e. having a different digit-handling circuit for each denomination with simultaneous carry generation for, or propagation over, two or more stages
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computer systems based on biological models
    • G06N3/02: Computer systems based on biological models using neural network models
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/0635: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50: Computer-aided design
    • G06F17/5009: Computer-aided design using simulation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F2207/00: Indexing scheme relating to methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F2207/38: Indexing scheme relating to groups G06F7/38 - G06F7/575
    • G06F2207/3804: Details
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00: Subject matter not provided for in other groups of this subclass
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C11/00: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/02: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using magnetic elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for programme control, e.g. control unit
    • G06F9/06: Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G06F9/30: Arrangements for executing machine-instructions, e.g. instruction decode
    • G06F9/30003: Arrangements for executing specific machine instructions
    • G06F9/30007: Arrangements for executing specific machine instructions to perform operations on data operands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F15/00: Digital computers in general; Data processing equipment in general
    • G06F15/76: Architectures of general purpose stored programme computers

Similar Documents

Publication · Publication date · Title
Angizi et al. 2018 IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network
Angizi et al. 2018 CMP-PIM: an energy-efficient comparator-based processing-in-memory neural network accelerator
Fan et al. 2017 Energy efficient in-memory binary deep neural network accelerator with dual-mode SOT-MRAM
Bavikadi et al. 2020 A review of in-memory computing architectures for machine learning applications
Jhang et al. 2021 Challenges and trends of SRAM-based computing-in-memory for AI edge devices
Wang et al. 2019 A 28-nm compute SRAM with bit-serial logic/arithmetic operations for programmable in-memory vector computing
Wang et al. 2019 14.2 A compute SRAM with bit-serial integer/floating-point operations for programmable in-memory vector acceleration
Yin et al. 2019 Vesti: Energy-efficient in-memory computing accelerator for deep neural networks
Bojnordi et al. 2016 Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning
Kim et al. 2022 An overview of processing-in-memory circuits for artificial intelligence and machine learning
Umesh et al. 2019 A survey of spintronic architectures for processing-in-memory and neural networks
Resch et al. 2019 PIMBALL: Binary neural networks in spintronic memory
Yue et al. 2022 STICKER-IM: A 65 nm computing-in-memory NN processor using block-wise sparsity optimization and inter/intra-macro data reuse
US20180095930A1 (en) 2018-04-05 Field-Programmable Crossbar Array For Reconfigurable Computing
Jain et al. 2020 TiM-DNN: Ternary in-memory accelerator for deep neural networks
Angizi et al. 2019 Accelerating deep neural networks in processing-in-memory platforms: Analog or digital approach?
Agrawal et al. 2021 IMPULSE: A 65-nm digital compute-in-memory macro with fused weights and membrane potential for spike-based sequential learning tasks
Angizi et al. 2018 DIMA: a depthwise CNN in-memory accelerator
Angizi et al. 2019 ParaPIM: a parallel processing-in-memory accelerator for binary-weight deep neural networks
Wang et al. 2020 A new MRAM-based process in-memory accelerator for efficient neural network training with floating point precision
Ali et al. 2022 Compute-in-memory technologies and architectures for deep learning workloads
Roohi et al. 2019 Processing-in-memory acceleration of convolutional neural networks for energy-efficiency, and power-intermittency resilience
Samiee et al. 2019 Low-energy acceleration of binarized convolutional neural networks using a spin hall effect based logic-in-memory architecture
Angizi et al. 2017 IMC: energy-efficient in-memory convolver for accelerating binarized deep neural network
Agrawal et al. 2020 CASH-RAM: Enabling in-memory computations for edge inference using charge accumulation and sharing in standard 8T-SRAM arrays