patents.google.com

Angizi et al., 2019 - Google Patents

  • Tue Jan 01 2019
Accelerating deep neural networks in processing-in-memory platforms: Analog or digital approach?

Angizi et al., 2019

Document ID
13041337725728828934
Authors
Angizi S
He Z
Reis D
Hu X
Tsai W
Lin S
Fan D
Publication year
2019
Publication venue
2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)

External Links

Snippet

Nowadays, research topics on AI accelerator designs have attracted great interest, where accelerating Deep Neural Network (DNN) using Processing-in-Memory (PIM) platforms is an actively-explored direction with great potential. PIM platforms, which simultaneously aim to …

Continue reading at par.nsf.gov (PDF) (other versions)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/52 Multiplying; Dividing
    • G06F7/523 Multiplying only
    • G06F7/53 Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50 Computer-aided design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/76 Architectures of general purpose stored programme computers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/12 Computer systems based on biological models using genetic models
    • G06N3/126 Genetic algorithms, i.e. information processing using digital simulations of the genetic system
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/56 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/02 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using magnetic elements
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C7/00 Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/06 Sense amplifiers; Associated circuits, e.g. timing or triggering circuits
    • G11C7/067 Single-ended amplifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C15/00 Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
    • G11C15/04 Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for programme control, e.g. control unit
    • G06F9/06 Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00 Subject matter not provided for in other groups of this subclass
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C2211/00 Indexing scheme relating to digital stores characterized by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C2211/56 Indexing scheme relating to G11C11/56 and sub-groups for features not covered by these groups

Similar Documents

Publication | Publication Date | Title
Angizi et al. | 2019 | Accelerating deep neural networks in processing-in-memory platforms: Analog or digital approach?
US10877752B2 (en) | 2020-12-29 | Techniques for current-sensing circuit design for compute-in-memory
Jhang et al. | 2021 | Challenges and trends of SRAM-based computing-in-memory for AI edge devices
Agrawal et al. | 2019 | Xcel-RAM: Accelerating binary neural networks in high-throughput SRAM compute arrays
Angizi et al. | 2019 | MRIMA: An MRAM-based in-memory accelerator
US11061646B2 (en) | 2021-07-13 | Compute in memory circuits with multi-Vdd arrays and/or analog multipliers
Angizi et al. | 2018 | Cmp-pim: an energy-efficient comparator-based processing-in-memory neural network accelerator
Chi et al. | 2016 | Prime: A novel processing-in-memory architecture for neural network computation in reram-based main memory
US10748603B2 (en) | 2020-08-18 | In-memory multiply and accumulate with global charge-sharing
Bavikadi et al. | 2020 | A review of in-memory computing architectures for machine learning applications
US10346347B2 (en) | 2019-07-09 | Field-programmable crossbar array for reconfigurable computing
Angizi et al. | 2018 | IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network
Umesh et al. | 2019 | A survey of spintronic architectures for processing-in-memory and neural networks
Roohi et al. | 2019 | Apgan: Approximate gan for robust low energy learning from imprecise components
Kim et al. | 2022 | An overview of processing-in-memory circuits for artificial intelligence and machine learning
Luo et al. | 2020 | Accelerating deep neural network in-situ training with non-volatile and volatile memory based hybrid precision synapses
Jain et al. | 2020 | TiM-DNN: Ternary in-memory accelerator for deep neural networks
Zidan et al. | 2017 | Field-programmable crossbar array (FPCA) for reconfigurable computing
Angizi et al. | 2019 | Parapim: a parallel processing-in-memory accelerator for binary-weight deep neural networks
Angizi et al. | 2018 | Dima: a depthwise cnn in-memory accelerator
Ali et al. | 2022 | Compute-in-memory technologies and architectures for deep learning workloads
Wang et al. | 2020 | A new MRAM-based process in-memory accelerator for efficient neural network training with floating point precision
Shanbhag et al. | 2022 | Benchmarking in-memory computing architectures
Jain et al. | 2018 | Computing-in-memory with spintronics
Kang et al. | 2021 | S-FLASH: A NAND flash-based deep neural network accelerator exploiting bit-level sparsity