patents.google.com

Zhang et al., 2022 - Google Patents

PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference


Document ID: 489980844642874745
Authors: Yin S, Kim M, Saikia J, Kwon S, Myung S, Kim H, Kim S, Seo J, Seok M
Publication year: 2022
Publication venue: IEEE Journal of Solid-State Circuits

Snippet

This article presents a programmable in-memory computing accelerator (PIMCA) for low-precision (1–2 b) deep neural network (DNN) inference. The custom 10T1C bitcell in the in-memory computing (IMC) macro has four additional transistors and one capacitor to perform …
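The snippet describes a charge-domain IMC macro: each bitcell multiplies a 1-b weight by a 1-b activation (an XNOR for ±1-encoded values), per-cell capacitors share charge onto a column line whose voltage is proportional to the sum, and an ADC digitizes that voltage. A minimal behavioral sketch of that idea; the function name, the ideal charge sharing, and the ideal flash ADC are illustrative assumptions, not details taken from the article:

```python
def charge_domain_mac(weights, acts, adc_bits=5, vdd=1.0):
    """Behavioral sketch of one binary charge-domain IMC column (assumed model).

    weights, acts: lists of +1/-1 values (1-b operands, as in binarized DNNs).
    Each bitcell multiplies its operand pair (an XNOR for +/-1 encoding);
    per-cell capacitors then share charge, so the column voltage settles at
    the mean of the per-cell rail voltages; an ideal ADC digitizes it.
    """
    xnor = [w * a for w, a in zip(weights, acts)]  # per-bitcell 1-b product
    # Charge sharing: mean of per-cell voltages (0 V encodes -1, vdd encodes +1)
    v_col = vdd * (sum(xnor) / len(xnor) + 1) / 2
    # Ideal ADC with 2**adc_bits - 1 quantization steps
    levels = 2 ** adc_bits - 1
    code = round(v_col / vdd * levels)
    # Map the digital code back to a signed dot-product estimate
    return round((2 * code / levels - 1) * len(weights))

w = [1, -1, 1, 1, -1, 1, -1, 1]
a = [1, 1, -1, 1, -1, -1, -1, 1]
print(charge_domain_mac(w, a), sum(x * y for x, y in zip(w, a)))  # both 2
```

With enough ADC resolution relative to the column height the estimate matches the exact binary dot product; shrinking `adc_bits` models the quantization error a real design trades against ADC energy.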

Continue reading at ieeexplore.ieee.org

Classifications

    • G06F15/76 – Architectures of general purpose stored programme computers
    • G06F15/78 – Architectures of general purpose stored programme computers comprising a single central processing unit
    • G06F17/5009 – Computer-aided design using simulation
    • G06F17/5045 – Computer-aided design; circuit design
    • G06F7/53 – Multiplying only, in parallel-parallel fashion, i.e. both operands being entered in parallel
    • G06F9/30007 – Arrangements for executing specific machine instructions to perform operations on data operands
    • G06N3/06 – Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06F17/10 – Complex mathematical operations
    • G06F15/16 – Combinations of two or more digital computers each having at least an arithmetic unit, a programme unit and a register, e.g. for a simultaneous processing of several programmes
    • G06F1/00 – Details of data-processing equipment not covered by groups G06F3/00–G06F13/00, e.g. cooling, packaging or power supply specially adapted for computer application
    • G06F2217/78 – Power analysis and optimization (indexing scheme relating to computer-aided design [CAD])
    • G06F12/02 – Addressing or allocation; relocation (within memory systems or architectures)
    • G06N99/00 – Subject matter not provided for in other groups of this subclass
    • G11C11/41 – Digital stores using semiconductor transistors forming static cells with positive feedback, i.e. cells not needing refreshing or charge regeneration, e.g. bistable multivibrator or Schmitt trigger

Similar Documents

Publication, year, and title:

    • Zhang et al., 2022 – PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference
    • Yin et al., 2019 – Vesti: Energy-efficient in-memory computing accelerator for deep neural networks
    • Kim et al., 2021 – Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks
    • Wang et al., 2019 – A 28-nm compute SRAM with bit-serial logic/arithmetic operations for programmable in-memory vector computing
    • Peng et al., 2020 – DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training
    • Verma et al., 2019 – In-memory computing: Advances and prospects
    • Koppula et al., 2019 – EDEN: Enabling energy-efficient, high-performance deep neural network inference using approximate DRAM
    • Valavi et al., 2019 – A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute
    • Long et al., 2018 – ReRAM-based processing-in-memory architecture for recurrent neural network acceleration
    • Marinella et al., 2018 – Multiscale co-design analysis of energy, latency, area, and accuracy of a ReRAM analog neural training accelerator
    • Knag et al., 2020 – A 617-TOPS/W all-digital binary neural network accelerator in 10-nm FinFET CMOS
    • Kim et al., 2022 – An overview of processing-in-memory circuits for artificial intelligence and machine learning
    • Yue et al., 2022 – STICKER-IM: A 65 nm computing-in-memory NN processor using block-wise sparsity optimization and inter/intra-macro data reuse
    • Giacomin et al., 2018 – A robust digital RRAM-based convolutional block for low-power image processing and learning applications
    • Lu et al., 2021 – NeuroSim simulator for compute-in-memory hardware accelerator: Validation and benchmark
    • Ranjan et al., 2019 – X-MANN: A crossbar based architecture for memory augmented neural networks
    • Jia et al., 2018 – A microprocessor implemented in 65-nm CMOS with configurable and bit-scalable accelerator for programmable in-memory computing
    • Zidan et al., 2017 – Field-programmable crossbar array (FPCA) for reconfigurable computing
    • Sutradhar et al., 2021 – Look-up-table based processing-in-memory architecture with programmable precision-scaling for deep learning applications
    • Kim et al., 2023 – A 1-16b reconfigurable 80Kb 7T SRAM-based digital near-memory computing macro for processing neural networks
    • Long et al., 2018 – A ferroelectric FET based power-efficient architecture for data-intensive computing
    • Shanbhag et al., 2022 – Benchmarking in-memory computing architectures
    • Singh et al., 2021 – SRIF: Scalable and reliable integrate and fire circuit ADC for memristor-based CIM architectures
    • Kang et al., 2021 – S-FLASH: A NAND flash-based deep neural network accelerator exploiting bit-level sparsity
    • Klein et al., 2022 – ALPINE: Analog in-memory acceleration with tight processor integration for deep learning