Kim et al., 2022 - Google Patents
Document ID
- 4969503506706290698
Author
- Yu C
- Xie S
- Chen Y
- Kim J
- Kim B
- Kulkarni J
- Kim T
Publication year
- 2022
Publication venue
- IEEE Journal on Emerging and Selected Topics in Circuits and Systems
Snippet
Artificial intelligence (AI) and machine learning (ML) are revolutionizing many fields of study, such as visual recognition, natural language processing, autonomous vehicles, and prediction. Traditional von Neumann computing architecture with separated processing …
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/52—Multiplying; Dividing
- G06F7/523—Multiplying only
- G06F7/53—Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel

- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons

- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/50—Adding; Subtracting
- G06F7/505—Adding; Subtracting in bit-parallel fashion, i.e. having a different digit-handling circuit for each denomination

- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
- G11C11/40—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
- G11C11/41—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming static cells with positive feedback, i.e. cells not needing refreshing or charge regeneration, e.g. bistable multivibrator or Schmitt trigger
- G11C11/412—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming static cells with positive feedback, i.e. cells not needing refreshing or charge regeneration, e.g. bistable multivibrator or Schmitt trigger using field-effect transistors only

- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored programme computers
- G06F15/78—Architectures of general purpose stored programme computers comprising a single central processing unit

- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design

- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme

- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/12—Computer systems based on biological models using genetic models
- G06N3/126—Genetic algorithms, i.e. information processing using digital simulations of the genetic system

- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run

- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation

- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/56—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency

- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C15/00—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
- G11C15/04—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kim et al. | 2022 | An overview of processing-in-memory circuits for artificial intelligence and machine learning |
Agrawal et al. | 2019 | Xcel-RAM: Accelerating binary neural networks in high-throughput SRAM compute arrays |
Kim et al. | 2021 | Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks |
Jhang et al. | 2021 | Challenges and trends of SRAM-based computing-in-memory for AI edge devices |
Wang et al. | 2019 | A 28-nm compute SRAM with bit-serial logic/arithmetic operations for programmable in-memory vector computing |
US10346347B2 (en) | 2019-07-09 | Field-programmable crossbar array for reconfigurable computing |
Bavikadi et al. | 2020 | A review of in-memory computing architectures for machine learning applications |
Bojnordi et al. | 2016 | Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning |
Yin et al. | 2019 | Vesti: Energy-efficient in-memory computing accelerator for deep neural networks |
Knag et al. | 2020 | A 617-TOPS/W all-digital binary neural network accelerator in 10-nm FinFET CMOS |
Lee et al. | 2021 | A charge-domain scalable-weight in-memory computing macro with dual-SRAM architecture for precision-scalable DNN accelerators |
Chang et al. | 2019 | PXNOR-BNN: In/with spin-orbit torque MRAM preset-XNOR operation-based binary neural networks |
Yue et al. | 2022 | STICKER-IM: A 65 nm computing-in-memory NN processor using block-wise sparsity optimization and inter/intra-macro data reuse |
Seo et al. | 2022 | Digital versus analog artificial intelligence accelerators: Advances, trends, and emerging designs |
Ankit et al. | 2020 | Circuits and architectures for in-memory computing-based machine learning accelerators |
Zhang et al. | 2022 | PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference |
Kang et al. | 2020 | Deep in-memory architectures in SRAM: An analog approach to approximate computing |
Yavits et al. | 2021 | GIRAF: General purpose in-storage resistive associative framework |
Jaiswal et al. | 2020 | I-SRAM: Interleaved wordlines for vector Boolean operations using SRAMs |
Ali et al. | 2022 | Compute-in-memory technologies and architectures for deep learning workloads |
Heo et al. | 2022 | T-PIM: An energy-efficient processing-in-memory accelerator for end-to-end on-device training |
Shanbhag et al. | 2022 | Benchmarking in-memory computing architectures |
Agrawal et al. | 2020 | CASH-RAM: Enabling in-memory computations for edge inference using charge accumulation and sharing in standard 8T-SRAM arrays |
Angizi et al. | 2022 | PISA: A binary-weight processing-in-sensor accelerator for edge image processing |
Kang et al. | 2021 | S-FLASH: A NAND flash-based deep neural network accelerator exploiting bit-level sparsity |