Angizi et al., 2018 - Google Patents
Document ID
- 8212378218705846570
Authors
- Angizi S
- He Z
- Parveen F
- Fan D
Publication year
- 2018
Publication venue
- 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC)
Snippet
In this paper, we pave a novel way towards the concept of bit-wise In-Memory Convolution Engine (IMCE) that could implement the dominant convolution computation of Deep Convolutional Neural Networks (CNN) within memory. IMCE employs parallel computational …
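The snippet is truncated, but the bit-wise in-memory computation it describes follows a well-known bit-slicing pattern: a multi-bit dot product is decomposed into bit-planes, each pair of planes is combined with AND plus popcount (the kind of primitive a processing-in-memory array evaluates on its bitlines), and the partial counts are recombined by shift-add. The sketch below is a minimal illustration of that general idea only, not the paper's circuit or dataflow; the function names (`bitplane`, `bitwise_dot`), the `abits`/`wbits` parameters, and the unsigned-operand assumption are all ours.

```python
# Minimal sketch of bit-slicing for a bit-wise in-memory dot product
# (illustrative only; not the paper's circuit or API). A k-bit dot
# product is decomposed into bit-planes; each pair of planes is
# combined with AND + popcount, then recombined with shift-add.

def bitplane(xs, bit):
    """Extract bit `bit` of every element as a list of 0/1."""
    return [(x >> bit) & 1 for x in xs]

def bitwise_dot(acts, wts, abits=4, wbits=4):
    """Dot product of unsigned `abits`-bit activations and `wbits`-bit
    weights using only AND and popcount per bit-plane pair."""
    total = 0
    for i in range(abits):
        a_plane = bitplane(acts, i)
        for j in range(wbits):
            w_plane = bitplane(wts, j)
            # AND of two bit-planes, then popcount: the in-memory primitive.
            ones = sum(a & w for a, w in zip(a_plane, w_plane))
            total += ones << (i + j)  # shift-add recombination
    return total

acts = [3, 5, 7, 2]
wts = [1, 4, 2, 6]
assert bitwise_dot(acts, wts) == sum(a * w for a, w in zip(acts, wts))
print(bitwise_dot(acts, wts))  # 49
```

The assertion checks the decomposition against an ordinary dot product: since each operand is a sum of its bits weighted by powers of two, the product expands into per-bit-plane AND/popcount terms scaled by 2^(i+j), which is what the nested loops accumulate.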
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/52—Multiplying; Dividing
- G06F7/523—Multiplying only
- G06F7/53—Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/544—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
- G06F7/5443—Sum of products
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/50—Adding; Subtracting
- G06F7/505—Adding; Subtracting in bit-parallel fashion, i.e. having a different digit-handling circuit for each denomination
- G06F7/506—Adding; Subtracting in bit-parallel fashion, i.e. having a different digit-handling circuit for each denomination with simultaneous carry generation for, or propagation over, two or more stages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/0635—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
- G06F17/5009—Computer-aided design using simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2207/00—Indexing scheme relating to methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F2207/38—Indexing scheme relating to groups G06F7/38 - G06F7/575
- G06F2207/3804—Details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/02—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using magnetic elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
- G06F9/30—Arrangements for executing machine-instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/30007—Arrangements for executing specific machine instructions to perform operations on data operands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored programme computers
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Angizi et al. | 2018 | IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network |
Angizi et al. | 2018 | CMP-PIM: An energy-efficient comparator-based processing-in-memory neural network accelerator |
Fan et al. | 2017 | Energy efficient in-memory binary deep neural network accelerator with dual-mode SOT-MRAM |
Bavikadi et al. | 2020 | A review of in-memory computing architectures for machine learning applications |
Jhang et al. | 2021 | Challenges and trends of SRAM-based computing-in-memory for AI edge devices |
Wang et al. | 2019 | A 28-nm compute SRAM with bit-serial logic/arithmetic operations for programmable in-memory vector computing |
Wang et al. | 2019 | 14.2 A compute SRAM with bit-serial integer/floating-point operations for programmable in-memory vector acceleration |
Yin et al. | 2019 | Vesti: Energy-efficient in-memory computing accelerator for deep neural networks |
Bojnordi et al. | 2016 | Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning |
Kim et al. | 2022 | An overview of processing-in-memory circuits for artificial intelligence and machine learning |
Umesh et al. | 2019 | A survey of spintronic architectures for processing-in-memory and neural networks |
Resch et al. | 2019 | PIMBALL: Binary neural networks in spintronic memory |
Yue et al. | 2022 | STICKER-IM: A 65 nm computing-in-memory NN processor using block-wise sparsity optimization and inter/intra-macro data reuse |
US20180095930A1 (en) | 2018-04-05 | Field-Programmable Crossbar Array For Reconfigurable Computing |
Jain et al. | 2020 | TiM-DNN: Ternary in-memory accelerator for deep neural networks |
Angizi et al. | 2019 | Accelerating deep neural networks in processing-in-memory platforms: Analog or digital approach? |
Agrawal et al. | 2021 | IMPULSE: A 65-nm digital compute-in-memory macro with fused weights and membrane potential for spike-based sequential learning tasks |
Angizi et al. | 2018 | DIMA: A depthwise CNN in-memory accelerator |
Angizi et al. | 2019 | ParaPIM: A parallel processing-in-memory accelerator for binary-weight deep neural networks |
Wang et al. | 2020 | A new MRAM-based process in-memory accelerator for efficient neural network training with floating point precision |
Ali et al. | 2022 | Compute-in-memory technologies and architectures for deep learning workloads |
Roohi et al. | 2019 | Processing-in-memory acceleration of convolutional neural networks for energy-efficiency and power-intermittency resilience |
Samiee et al. | 2019 | Low-energy acceleration of binarized convolutional neural networks using a spin hall effect based logic-in-memory architecture |
Angizi et al. | 2017 | IMC: Energy-efficient in-memory convolver for accelerating binarized deep neural network |
Agrawal et al. | 2020 | CASH-RAM: Enabling in-memory computations for edge inference using charge accumulation and sharing in standard 8T-SRAM arrays |