Angizi et al., 2019 - Google Patents
- Tue Jan 01 2019
Document ID
- 13041337725728828934
Authors
- He Z
- Reis D
- Hu X
- Tsai W
- Lin S
- Fan D
Publication year
- 2019
Publication venue
- 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)
Snippet
Nowadays, research topics on AI accelerator designs have attracted great interest, where accelerating Deep Neural Networks (DNNs) using Processing-in-Memory (PIM) platforms is an actively explored direction with great potential. PIM platforms, which simultaneously aim to …
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/52—Multiplying; Dividing
- G06F7/523—Multiplying only
- G06F7/53—Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored programme computers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/12—Computer systems based on biological models using genetic models
- G06N3/126—Genetic algorithms, i.e. information processing using digital simulations of the genetic system
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/56—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/02—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using magnetic elements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/06—Sense amplifiers; Associated circuits, e.g. timing or triggering circuits
- G11C7/067—Single-ended amplifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C15/00—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
- G11C15/04—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2211/00—Indexing scheme relating to digital stores characterized by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C2211/56—Indexing scheme relating to G11C11/56 and sub-groups for features not covered by these groups
Similar Documents
Publication | Publication Date | Title
---|---|---
Angizi et al. | 2019 | Accelerating deep neural networks in processing-in-memory platforms: Analog or digital approach? |
US10877752B2 (en) | 2020-12-29 | Techniques for current-sensing circuit design for compute-in-memory |
Jhang et al. | 2021 | Challenges and trends of SRAM-based computing-in-memory for AI edge devices |
Agrawal et al. | 2019 | Xcel-RAM: Accelerating binary neural networks in high-throughput SRAM compute arrays |
Angizi et al. | 2019 | MRIMA: An MRAM-based in-memory accelerator |
US11061646B2 (en) | 2021-07-13 | Compute in memory circuits with multi-Vdd arrays and/or analog multipliers |
Angizi et al. | 2018 | Cmp-pim: an energy-efficient comparator-based processing-in-memory neural network accelerator |
Chi et al. | 2016 | Prime: A novel processing-in-memory architecture for neural network computation in reram-based main memory |
US10748603B2 (en) | 2020-08-18 | In-memory multiply and accumulate with global charge-sharing |
Bavikadi et al. | 2020 | A review of in-memory computing architectures for machine learning applications |
US10346347B2 (en) | 2019-07-09 | Field-programmable crossbar array for reconfigurable computing |
Angizi et al. | 2018 | IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network |
Umesh et al. | 2019 | A survey of spintronic architectures for processing-in-memory and neural networks |
Roohi et al. | 2019 | Apgan: Approximate gan for robust low energy learning from imprecise components |
Kim et al. | 2022 | An overview of processing-in-memory circuits for artificial intelligence and machine learning |
Luo et al. | 2020 | Accelerating deep neural network in-situ training with non-volatile and volatile memory based hybrid precision synapses |
Jain et al. | 2020 | TiM-DNN: Ternary in-memory accelerator for deep neural networks |
Zidan et al. | 2017 | Field-programmable crossbar array (FPCA) for reconfigurable computing |
Angizi et al. | 2019 | Parapim: a parallel processing-in-memory accelerator for binary-weight deep neural networks |
Angizi et al. | 2018 | Dima: a depthwise cnn in-memory accelerator |
Ali et al. | 2022 | Compute-in-memory technologies and architectures for deep learning workloads |
Wang et al. | 2020 | A new MRAM-based process in-memory accelerator for efficient neural network training with floating point precision |
Shanbhag et al. | 2022 | Benchmarking in-memory computing architectures |
Jain et al. | 2018 | Computing-in-memory with spintronics |
Kang et al. | 2021 | S-FLASH: A NAND flash-based deep neural network accelerator exploiting bit-level sparsity |