Knag et al., 2020 - Google Patents

A 617-TOPS/W all-digital binary neural network accelerator in 10-nm FinFET CMOS

Knag et al., 2020

Document ID
3049563256452971351
Author
Knag P
Chen G
Sumbul H
Kumar R
Hsu S
Agarwal A
Kar M
Kim S
Anders M
Kaul H
Krishnamurthy R
Publication year
2020
Publication venue
IEEE Journal of Solid-State Circuits

Snippet

A binary neural network (BNN) chip explores the limits of energy efficiency and computational density for an all-digital deep neural network (DNN) inference accelerator. The chip intersperses data storage and computation using computation near memory (CNM) …

Continue reading at ieeexplore.ieee.org (other versions)
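
The snippet describes an all-digital BNN inference accelerator that intersperses data storage and computation (computation near memory). As background, binarized inference reduces each multiply-accumulate to an XNOR followed by a popcount. The Python sketch below is only an illustrative behavioral model of that arithmetic under the standard BNN convention of {-1, +1} values encoded as bits; it is not the paper's compute-near-memory implementation, and every name in it is hypothetical.

```python
# Illustrative behavioral model of a binary-neural-network multiply-accumulate,
# the operation a BNN accelerator such as the one in the snippet performs at
# scale. Assumes the common BNN convention of {-1, +1} values encoded as
# {0, 1} bits; this is a hypothetical sketch, not the paper's circuit.
import numpy as np


def binarize(x):
    """Map real-valued inputs to bits: 1 encodes +1, 0 encodes -1."""
    return (x >= 0).astype(np.uint8)


def bnn_dot(a_bits, w_bits):
    """Binary dot product via XNOR + popcount.

    For {-1, +1} values encoded as bits, two elements multiply to +1 exactly
    when their bits agree (XNOR). With n elements and m agreeing positions,
    the dot product is m - (n - m) = 2*m - n.
    """
    n = a_bits.size
    xnor = 1 - (a_bits ^ w_bits)      # 1 where the bits agree
    matches = int(xnor.sum())         # popcount of the XNOR result
    return 2 * matches - n


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a_bits = binarize(rng.standard_normal(512))   # binarized activations
    w_bits = binarize(rng.standard_normal(512))   # binarized weights
    # Cross-check against an explicit dot product over the {-1, +1} values.
    ref = int(np.dot(np.where(a_bits, 1, -1), np.where(w_bits, 1, -1)))
    assert bnn_dot(a_bits, w_bits) == ref
    print("binary dot product:", bnn_dot(a_bits, w_bits))
```

In hardware, the XNOR and popcount steps map to simple gates and adder trees, which is the usual reason binary-precision designs can report very high TOPS/W figures such as the one in the title.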

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F 7/48 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F 7/52 Multiplying; Dividing
    • G06F 7/523 Multiplying only
    • G06F 7/53 Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F 7/48 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F 7/544 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computer systems based on biological models
    • G06N 3/02 Computer systems based on biological models using neural network models
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/0635 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/50 Computer-aided design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/76 Architectures of general purpose stored programme computers
    • G06F 15/80 Architectures of general purpose stored programme computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F 15/8007 Architectures of general purpose stored programme computers comprising an array of processing units with common control, e.g. single instruction multiple data processors single instruction multiple data [SIMD] multiprocessors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/76 Architectures of general purpose stored programme computers
    • G06F 15/78 Architectures of general purpose stored programme computers comprising a single central processing unit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 99/00 Subject matter not provided for in other groups of this subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/36 Image preprocessing, i.e. processing the image information without deciding about the identity of the image
    • G06K 9/46 Extraction of features or characteristics of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/62 Methods or arrangements for recognition using electronic means
    • G06K 9/6267 Classification techniques
    • G06K 9/6268 Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches

Similar Documents

Publication | Publication Date | Title
Knag et al. | 2020 | A 617-TOPS/W all-digital binary neural network accelerator in 10-nm FinFET CMOS
Kim et al. | 2021 | Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks
Jhang et al. | 2021 | Challenges and trends of SRAM-based computing-in-memory for AI edge devices
Yu et al. | 2022 | A 65-nm 8T SRAM compute-in-memory macro with column ADCs for processing neural networks
Kang et al. | 2018 | An in-memory VLSI architecture for convolutional neural networks
Jiang et al. | 2020 | C3SRAM: An in-memory-computing SRAM macro based on robust capacitive coupling computing mechanism
Giacomin et al. | 2018 | A robust digital RRAM-based convolutional block for low-power image processing and learning applications
Lee et al. | 2021 | A charge-domain scalable-weight in-memory computing macro with dual-SRAM architecture for precision-scalable DNN accelerators
Chen et al. | 2021 | Multiply accumulate operations in memristor crossbar arrays for analog computing
Umesh et al. | 2019 | A survey of spintronic architectures for processing-in-memory and neural networks
Ueyoshi et al. | 2018 | QUEST: Multi-purpose log-quantized DNN inference engine stacked on 96-MB 3-D SRAM using inductive coupling technology in 40-nm CMOS
Du et al. | 2018 | An analog neural network computing engine using CMOS-compatible charge-trap-transistor (CTT)
Seo et al. | 2022 | Digital versus analog artificial intelligence accelerators: Advances, trends, and emerging designs
Kang et al. | 2020 | Deep in-memory architectures in SRAM: An analog approach to approximate computing
Zhang et al. | 2022 | PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference
Kim et al. | 2023 | A 1-16b reconfigurable 80Kb 7T SRAM-based digital near-memory computing macro for processing neural networks
Lin et al. | 2022 | A review on SRAM-based computing in-memory: Circuits, functions, and applications
Kim et al. | 2018 | Input-splitting of large neural networks for power-efficient accelerator with resistive crossbar memory array
Nasrin et al. | 2021 | MF-Net: Compute-in-memory SRAM for multibit precision inference using memory-immersed data conversion and multiplication-free operators
Kim et al. | 2022 | SOT-MRAM digital PIM architecture with extended parallelism in matrix multiplication
Cheon et al. | 2023 | A 2941-TOPS/W charge-domain 10T SRAM compute-in-memory for ternary neural network
Sie et al. | 2021 | MARS: Multimacro architecture SRAM CIM-based accelerator with co-designed compressed neural networks
Nobari et al. | 2023 | FPGA-based implementation of deep neural network using stochastic computing
Lu et al. | 2023 | An RRAM-based computing-in-memory architecture and its application in accelerating transformer inference
Alam et al. | 2021 | Exact stochastic computing multiplication in memristive memory