Knag et al., 2020 - Google Patents
- Wed Jan 01 2020

Document ID: 3049563256452971351
Authors: Chen G; Sumbul H; Kumar R; Hsu S; Agarwal A; Kar M; Kim S; Anders M; Kaul H; Krishnamurthy R
Publication year: 2020
Publication venue: IEEE Journal of Solid-State Circuits
Snippet
A binary neural network (BNN) chip explores the limits of energy efficiency and computational density for an all-digital deep neural network (DNN) inference accelerator. The chip intersperses data storage and computation using computation near memory (CNM) …
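To make the snippet concrete: in a BNN, weights and activations are constrained to {-1, +1}, so the multiply-accumulate at the heart of DNN inference collapses into an XNOR followed by a population count — the cheap, all-digital primitive that compute-near-memory accelerators of this kind place next to each storage array. The sketch below is illustrative only (the vector length `N` and all names are chosen here, not taken from the patent):

```python
import random

N = 64  # vector length (bits per packed word); chosen arbitrarily here

def binary_dot(a_bits: int, b_bits: int, n: int = N) -> int:
    """Dot product of two {-1, +1} vectors bit-packed into n-bit integers
    (bit 1 encodes +1, bit 0 encodes -1)."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask   # bit i is 1 where the two vectors agree
    matches = bin(xnor).count("1")     # population count
    return 2 * matches - n             # each agreement adds +1, each disagreement -1

# Reference check against the unpacked {-1, +1} arithmetic:
a = [random.choice((-1, 1)) for _ in range(N)]
b = [random.choice((-1, 1)) for _ in range(N)]
pack = lambda v: sum((bit == 1) << i for i, bit in enumerate(v))
assert binary_dot(pack(a), pack(b)) == sum(x * y for x, y in zip(a, b))
```

The energy-efficiency claim in the snippet follows from this reduction: one XNOR-popcount over a packed word replaces dozens of full multipliers, and placing that logic beside the weight storage avoids moving operands across the chip.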
Classifications
All groups fall under G—PHYSICS; G06—COMPUTING; CALCULATING; COUNTING. The most specific code in each group:
- G06F7/53 — Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
- G06F7/544 — Methods or arrangements for performing computations using exclusively denominational number representation, using non-contact-making devices, for evaluating functions by calculation
- G06N3/0635 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
- G06F17/50 — Computer-aided design
- G06F15/8007 — Architectures of general purpose stored programme computers comprising an array of processing units with common control; single instruction multiple data [SIMD] multiprocessors
- G06F15/78 — Architectures of general purpose stored programme computers comprising a single central processing unit
- G06F17/10 — Complex mathematical operations
- G06N99/00 — Subject matter not provided for in other groups of this subclass
- G06K9/46 — Extraction of features or characteristics of the image
- G06K9/6268 — Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Knag et al. | 2020 | A 617-TOPS/W all-digital binary neural network accelerator in 10-nm FinFET CMOS |
Kim et al. | 2021 | Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks |
Jhang et al. | 2021 | Challenges and trends of SRAM-based computing-in-memory for AI edge devices |
Yu et al. | 2022 | A 65-nm 8T SRAM compute-in-memory macro with column ADCs for processing neural networks |
Kang et al. | 2018 | An in-memory VLSI architecture for convolutional neural networks |
Jiang et al. | 2020 | C3SRAM: An in-memory-computing SRAM macro based on robust capacitive coupling computing mechanism |
Giacomin et al. | 2018 | A robust digital RRAM-based convolutional block for low-power image processing and learning applications |
Lee et al. | 2021 | A charge-domain scalable-weight in-memory computing macro with dual-SRAM architecture for precision-scalable DNN accelerators |
Chen et al. | 2021 | Multiply accumulate operations in memristor crossbar arrays for analog computing |
Umesh et al. | 2019 | A survey of spintronic architectures for processing-in-memory and neural networks |
Ueyoshi et al. | 2018 | QUEST: Multi-purpose log-quantized DNN inference engine stacked on 96-MB 3-D SRAM using inductive coupling technology in 40-nm CMOS |
Du et al. | 2018 | An analog neural network computing engine using CMOS-compatible charge-trap-transistor (CTT) |
Seo et al. | 2022 | Digital versus analog artificial intelligence accelerators: Advances, trends, and emerging designs |
Kang et al. | 2020 | Deep in-memory architectures in SRAM: An analog approach to approximate computing |
Zhang et al. | 2022 | PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference |
Kim et al. | 2023 | A 1-16b reconfigurable 80Kb 7T SRAM-based digital near-memory computing macro for processing neural networks |
Lin et al. | 2022 | A review on SRAM-based computing in-memory: Circuits, functions, and applications |
Kim et al. | 2018 | Input-splitting of large neural networks for power-efficient accelerator with resistive crossbar memory array |
Nasrin et al. | 2021 | MF-Net: Compute-in-memory SRAM for multibit precision inference using memory-immersed data conversion and multiplication-free operators |
Kim et al. | 2022 | SOT-MRAM digital PIM architecture with extended parallelism in matrix multiplication |
Cheon et al. | 2023 | A 2941-TOPS/W charge-domain 10T SRAM compute-in-memory for ternary neural network |
Sie et al. | 2021 | MARS: Multimacro architecture SRAM CIM-based accelerator with co-designed compressed neural networks |
Nobari et al. | 2023 | FPGA-based implementation of deep neural network using stochastic computing |
Lu et al. | 2023 | An RRAM-based computing-in-memory architecture and its application in accelerating transformer inference |
Alam et al. | 2021 | Exact stochastic computing multiplication in memristive memory |