Resch, 2021 - Google Patents
Document ID: 4437703802110379145
Author: Resch
Publication year: 2021
Snippet
Neural networks span a wide range of applications of industrial and commercial significance. Binary neural networks (BNN) are particularly effective in trading accuracy for performance, energy efficiency or hardware/software complexity. In this thesis, I demonstrate …
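The snippet's claim that BNNs trade accuracy for performance rests on their core primitive: with weights and activations constrained to {-1, +1} and packed as bits, a dot product collapses into an XNOR plus a population count, which is what makes them a natural fit for in-memory hardware. A minimal illustrative sketch (not taken from the thesis; the function name and bit encoding are assumptions for the example):

```python
# Illustrative BNN primitive: encode +1 as bit 1 and -1 as bit 0,
# with element 0 in the least significant bit. Then two {-1,+1}
# vectors agree exactly where the XNOR of their bit masks is 1.

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1,+1} vectors packed as bit masks."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask   # 1 where the signs agree
    matches = bin(xnor).count("1")     # popcount of agreements
    return 2 * matches - n             # agreements minus disagreements

# Example: a = [+1, -1, +1, +1] packs to 0b1101,
#          b = [+1, +1, -1, +1] packs to 0b1011.
# Signs agree at positions 0 and 3, so the dot product is 2*2 - 4 = 0.
```

Because the whole multiply-accumulate reduces to bitwise logic and a popcount, the operation maps directly onto the kinds of in-memory logic arrays the thesis and the related work below study.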
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
- G06F9/30—Arrangements for executing machine-instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/30007—Arrangements for executing specific machine instructions to perform operations on data operands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/52—Multiplying; Dividing
- G06F7/523—Multiplying only
- G06F7/53—Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/02—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using magnetic elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/56—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2207/00—Indexing scheme relating to methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F2207/38—Indexing scheme relating to groups G06F7/38 - G06F7/575
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C15/00—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
- G11C15/04—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2211/00—Indexing scheme relating to digital stores characterized by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C2211/56—Indexing scheme relating to G11C11/56 and sub-groups for features not covered by these groups
- G11C2211/564—Miscellaneous aspects
- G11C2211/5641—Multilevel memory having cells with different number of storage levels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
Similar Documents
Publication | Publication Date | Title
---|---|---
Resch et al. | 2019 | PIMBALL: Binary neural networks in spintronic memory |
Angizi et al. | 2018 | Cmp-pim: an energy-efficient comparator-based processing-in-memory neural network accelerator |
Zabihi et al. | 2018 | In-memory processing on the spintronic CRAM: From hardware design to application mapping |
Jhang et al. | 2021 | Challenges and trends of SRAM-based computing-in-memory for AI edge devices |
Angizi et al. | 2018 | IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network |
Umesh et al. | 2019 | A survey of spintronic architectures for processing-in-memory and neural networks |
CN109766309B (en) | 2020-07-28 | Spin-memory-computing integrated chip |
Chang et al. | 2019 | PXNOR-BNN: In/with spin-orbit torque MRAM preset-XNOR operation-based binary neural networks |
Luo et al. | 2020 | Accelerating deep neural network in-situ training with non-volatile and volatile memory based hybrid precision synapses |
Angizi et al. | 2019 | Accelerating deep neural networks in processing-in-memory platforms: Analog or digital approach? |
He et al. | 2017 | Exploring STT-MRAM based in-memory computing paradigm with application of image edge extraction |
Angizi et al. | 2018 | Dima: a depthwise cnn in-memory accelerator |
Pham et al. | 2022 | STT-BNN: A novel STT-MRAM in-memory computing macro for binary neural networks |
Angizi et al. | 2019 | Parapim: a parallel processing-in-memory accelerator for binary-weight deep neural networks |
Yu et al. | 2014 | Energy efficient in-memory machine learning for data intensive image-processing by non-volatile domain-wall memory |
Jain et al. | 2018 | Computing-in-memory with spintronics |
Yan et al. | 2019 | iCELIA: A full-stack framework for STT-MRAM-based deep learning acceleration |
CN114388021A (en) | 2022-04-22 | Program-Assisted Ultra-Low-Power Inference Engine Using External Magnetic Fields |
Angizi et al. | 2022 | Pisa: A binary-weight processing-in-sensor accelerator for edge image processing |
Zhao et al. | 2023 | NAND-SPIN-based processing-in-MRAM architecture for convolutional neural network acceleration |
CN114341981A (en) | 2022-04-12 | Memory with artificial intelligence mode |
Ollivier et al. | 2022 | CORUSCANT: Fast efficient processing-in-racetrack memories |
CN114286977B (en) | 2024-08-16 | Artificial Intelligence Accelerator |
Song et al. | 2019 | Rebnn: in-situ acceleration of binarized neural networks in reram using complementary resistive cell |
Angizi et al. | 2017 | Imc: energy-efficient in-memory convolver for accelerating binarized deep neural network |