Zhang et al., 2022
- Sat Jan 01 2022
Document ID
- 489980844642874745
Author
- Yin S
- Kim M
- Saikia J
- Kwon S
- Myung S
- Kim H
- Kim S
- Seo J
- Seok M
Publication year
- 2022
Publication venue
- IEEE Journal of Solid-State Circuits
Snippet
This article presents a programmable in-memory computing accelerator (PIMCA) for low-precision (1–2 b) deep neural network (DNN) inference. The custom 10T1C bitcell in the in-memory computing (IMC) macro has four additional transistors and one capacitor to perform …
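The snippet describes low-precision (1–2 b) multiply-accumulate operations carried out inside the IMC macro. As a loose illustration only (not drawn from the patent text), the Python sketch below emulates such a 1–2 b quantized matrix-vector product digitally; the quantization scheme, function names, and array shapes are assumptions.

```python
import numpy as np

# Hypothetical digital reference for the kind of low-precision (1-2 b) MAC an
# IMC macro accelerates; quantization scheme, names, and shapes are assumptions.

def quantize(x, bits):
    """Quantize a float array to signed integers with `bits` bits of precision."""
    if bits == 1:
        return np.where(x >= 0, 1, -1).astype(np.int8), float(np.mean(np.abs(x)))
    levels = 2 ** (bits - 1) - 1
    scale = float(np.max(np.abs(x))) / levels
    return np.clip(np.round(x / scale), -levels, levels).astype(np.int8), scale

def imc_matvec(weights, activations, w_bits=1, a_bits=2):
    """Emulate one matrix-vector product as integer MACs plus a rescale step."""
    w_q, w_scale = quantize(weights, w_bits)
    a_q, a_scale = quantize(activations, a_bits)
    # In the macro the accumulation would happen along bitlines (charge-domain,
    # via the bitcell capacitors); here it is simply an integer dot product.
    acc = w_q.astype(np.int32) @ a_q.astype(np.int32)
    return acc * (w_scale * a_scale)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 16))   # 4 output neurons, 16 inputs
    x = rng.standard_normal(16)
    print("quantized IMC result:", imc_matvec(W, x))
    print("float reference:     ", W @ x)
```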
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored programme computers
- G06F15/78—Architectures of general purpose stored programme computers comprising a single central processing unit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
- G06F17/5009—Computer-aided design using simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
- G06F17/5045—Circuit design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/52—Multiplying; Dividing
- G06F7/523—Multiplying only
- G06F7/53—Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
- G06F9/30—Arrangements for executing machine-instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/30007—Arrangements for executing specific machine instructions to perform operations on data operands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a programme unit and a register, e.g. for a simultaneous processing of several programmes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F1/00—Details of data-processing equipment not covered by groups G06F3/00 - G06F13/00, e.g. cooling, packaging or power supply specially adapted for computer application
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2217/00—Indexing scheme relating to computer aided design [CAD]
- G06F2217/78—Power analysis and optimization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
- G11C11/40—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
- G11C11/41—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming static cells with positive feedback, i.e. cells not needing refreshing or charge regeneration, e.g. bistable multivibrator or Schmitt trigger
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | 2022 | PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference |
Yin et al. | 2019 | Vesti: Energy-efficient in-memory computing accelerator for deep neural networks |
Kim et al. | 2021 | Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks |
Wang et al. | 2019 | A 28-nm compute SRAM with bit-serial logic/arithmetic operations for programmable in-memory vector computing |
Peng et al. | 2020 | DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training |
Verma et al. | 2019 | In-memory computing: Advances and prospects |
Koppula et al. | 2019 | EDEN: Enabling energy-efficient, high-performance deep neural network inference using approximate DRAM |
Valavi et al. | 2019 | A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute |
Long et al. | 2018 | ReRAM-based processing-in-memory architecture for recurrent neural network acceleration |
Marinella et al. | 2018 | Multiscale co-design analysis of energy, latency, area, and accuracy of a ReRAM analog neural training accelerator |
Knag et al. | 2020 | A 617-TOPS/W all-digital binary neural network accelerator in 10-nm FinFET CMOS |
Kim et al. | 2022 | An overview of processing-in-memory circuits for artificial intelligence and machine learning |
Yue et al. | 2022 | STICKER-IM: A 65 nm computing-in-memory NN processor using block-wise sparsity optimization and inter/intra-macro data reuse |
Giacomin et al. | 2018 | A robust digital RRAM-based convolutional block for low-power image processing and learning applications |
Lu et al. | 2021 | NeuroSim simulator for compute-in-memory hardware accelerator: Validation and benchmark |
Ranjan et al. | 2019 | X-mann: A crossbar based architecture for memory augmented neural networks |
Jia et al. | 2018 | A microprocessor implemented in 65nm CMOS with configurable and bit-scalable accelerator for programmable in-memory computing |
Zidan et al. | 2017 | Field-programmable crossbar array (FPCA) for reconfigurable computing |
Sutradhar et al. | 2021 | Look-up-table based processing-in-memory architecture with programmable precision-scaling for deep learning applications |
Kim et al. | 2023 | A 1-16b reconfigurable 80Kb 7T SRAM-based digital near-memory computing macro for processing neural networks |
Long et al. | 2018 | A ferroelectric FET based power-efficient architecture for data-intensive computing |
Shanbhag et al. | 2022 | Benchmarking in-memory computing architectures |
Singh et al. | 2021 | SRIF: Scalable and reliable integrate and fire circuit ADC for memristor-based CIM architectures |
Kang et al. | 2021 | S-FLASH: A NAND flash-based deep neural network accelerator exploiting bit-level sparsity |
Klein et al. | 2022 | ALPINE: Analog in-memory acceleration with tight processor integration for deep learning |