An efficient automated parameter tuning framework for spiking neural networks

Kristofor D Carlson et al. Front Neurosci. 2014.

Abstract

As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation (0.35 h per generation on the GPU vs. 23.5 h per generation on the CPU). Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.

Keywords: GPU programming; STDP; evolutionary algorithms; parameter tuning; self-organizing receptive fields; spiking neural networks.


Figures

Figure 1

A simplified diagram of the NVIDIA CUDA GPU architecture (adapted from Nageswaran et al., 2009a,b). Our simulations used an NVIDIA Tesla M2090 GPU with 16 streaming multiprocessors (SMs), each made up of 32 scalar processors (SPs), and 6 GB of global memory.

Figure 2

(A) Flow chart for the execution of an evolutionary algorithm (EA). A population of individuals (μ) is first initialized and then evaluated. After evaluation, the most successful individuals are selected to reproduce via recombination and mutation, creating an offspring generation (λ). The offspring then become the parents of the next generation of the EA. This continues until a termination condition is reached. The light blue boxes denote operations carried out serially on the CPU, while the light brown box denotes operations carried out in parallel on the GPU. The operations inside the dotted gray box are described in greater detail in (B). (B) The automated parameter tuning framework consists of the CARLsim SNN simulator (light brown), the EO computational framework (light blue), and the Parameter Tuning Interface (PTI) (light green). The PTI passes tuning parameters (PN) to CARLsim for evaluation in parallel on the GPU. After evaluation, fitness values (FN) are passed from CARLsim back to EO via the PTI.
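
The loop in (A) and the division of labor in (B) can be summarized with a minimal Python sketch of a (μ + λ) evolutionary strategy with batched fitness evaluation. The evaluate_batch stub stands in for CARLsim simulating all parameter sets in parallel on the GPU; all names, operators, and constants here are illustrative assumptions, not the actual PTI/EO API.

    import random

    MU, LAMBDA, N_PARAMS, GENERATIONS = 10, 10, 7, 50

    def evaluate_batch(population):
        # Stand-in for CARLsim: in the real framework the parameter sets
        # (P1..PN) are simulated in parallel on the GPU and fitness values
        # (F1..FN) are returned through the PTI. Here: a toy fitness.
        return [-sum((p - 0.5) ** 2 for p in ind) for ind in population]

    def mutate(ind, sigma=0.1):
        # Gaussian mutation, clipped to an assumed parameter range [0, 1].
        return [min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in ind]

    # Initialize and evaluate the parent population (mu individuals).
    parents = [[random.random() for _ in range(N_PARAMS)] for _ in range(MU)]
    fitness = evaluate_batch(parents)

    for gen in range(GENERATIONS):
        # Select the fitter parents and produce lambda offspring via
        # recombination (uniform crossover) and mutation.
        ranked = [ind for _, ind in sorted(zip(fitness, parents), reverse=True)]
        offspring = []
        for _ in range(LAMBDA):
            a, b = random.sample(ranked[: MU // 2], 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            offspring.append(mutate(child))
        # Offspring are evaluated in one batch (the GPU step in the figure),
        # then survivors are chosen from parents + offspring.
        off_fitness = evaluate_batch(offspring)
        pool = list(zip(fitness + off_fitness, parents + offspring))
        pool.sort(key=lambda t: t[0], reverse=True)
        fitness = [t[0] for t in pool[:MU]]
        parents = [t[1] for t in pool[:MU]]

    print("best fitness:", max(fitness))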

Figure 3

Network architecture of the SNN tuned by the parameter tuning framework to produce V1 simple cell responses and SORFs. N represents the number of neurons in each group. E → E and E → I STDP curves are included to describe the plastic On(Off)Buffer → Exc and Exc → Inh connections. Tuned parameters are indicated with dashed arrows and boxes.

Figure 4

Plot of best and average fitness vs. generation number for the entire simulation run (287 generations, 4104-neuron SNNs, 10 parallel configurations). All values were normalized to the best fitness value. The error bars denote the standard deviation of the average fitness, plotted every 20 generations. Initially, the standard deviation of the average fitness is large as the EA explores the parameter space; over time, it decreases as the EA converges on better solutions.
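
As a hedged illustration of how curves like these can be derived from raw EA logs, assuming a generations × individuals fitness array (synthetic data below, not the authors' analysis code):

    import numpy as np

    # Hypothetical raw data: fitness of every individual in every generation
    # (cumsum gives a toy "fitness improves over time" trend).
    rng = np.random.default_rng(0)
    raw = rng.random((287, 10)).cumsum(axis=0)   # 287 generations, 10 configs

    best_per_gen = raw.max(axis=1)
    avg_per_gen = raw.mean(axis=1)
    std_per_gen = raw.std(axis=1)

    # Normalize everything to the best fitness found, as in the figure.
    overall_best = best_per_gen.max()
    best_norm = best_per_gen / overall_best
    avg_norm = avg_per_gen / overall_best
    # Error bars every 20 generations.
    bar_gens = np.arange(0, 287, 20)
    print(avg_norm[bar_gens], (std_per_gen / overall_best)[bar_gens])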

Figure 5

Plot of the firing rate responses of Exc group neurons vs. grating presentation orientation angle. The blue lines indicate the firing rates of neurons in the simulation, while the dotted red lines indicate idealized Gaussian tuning curves. Together, the four excitatory neurons cover the stimulus space of all possible presentation angles.
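
An idealized Gaussian tuning curve of the kind shown here has the form r(θ) = r_max · exp(−(θ − θ_pref)² / 2σ²). A minimal sketch, with illustrative parameter values (peak rate, width, and preferred angles are assumptions, not the paper's fitted values):

    import numpy as np

    def gaussian_tuning(theta, theta_pref, r_max=30.0, sigma=20.0):
        # Idealized orientation tuning curve: peak firing rate r_max (Hz)
        # at the preferred angle theta_pref, falling off with width sigma
        # (all angles in degrees).
        return r_max * np.exp(-((theta - theta_pref) ** 2) / (2 * sigma ** 2))

    angles = np.arange(0, 180, 4.5)        # e.g., 40 grating orientations
    prefs = [22.5, 67.5, 112.5, 157.5]     # four neurons tiling the space
    curves = np.array([gaussian_tuning(angles, p) for p in prefs])
    print(curves.shape)                    # (4, 40)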

Figure 6

Synaptic weights for the On(Off)Buffer → Exc connections of a high fitness SNN individual. (A) Initial weight values before training. (B) After training for approximately 100 simulated minutes with STDP and homeostasis, the synaptic weight patterns resemble Gabor filters. (C) Four example orientation grating patterns are shown.
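
For reference, a Gabor filter (which the learned weight patterns are said to resemble) is a sinusoidal grating windowed by a Gaussian envelope. A minimal sketch with assumed patch size and spatial parameters, not the authors' analysis code:

    import numpy as np

    def gabor(size=16, theta=0.0, wavelength=8.0, sigma=4.0, phase=0.0):
        # 2D Gabor: a sinusoid at orientation theta (radians) multiplied
        # by a circular Gaussian envelope of width sigma (pixels).
        half = size // 2
        y, x = np.mgrid[-half:half, -half:half]
        x_rot = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        carrier = np.cos(2 * np.pi * x_rot / wavelength + phase)
        return envelope * carrier

    weights = gabor(theta=np.pi / 4)  # oriented at 45 degrees
    print(weights.shape)              # (16, 16)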

Figure 7

The responses of the Exc group neurons (identified by their neuron id on the y-axis) were tested for all 40 grating orientations. One orientation was presented per second and the test ran for 40 s (x-axis). (A) Neuronal spike responses of 400 neurons trained with the highest fitness SNN parameters found using the parameter tuning framework. (B) Neuronal spike responses of 400 neurons trained using a single set of low fitness parameters. In both (A,B), the neurons were arranged so that neurons responding to similar orientations were grouped together; this accounts for the strong diagonal pattern in (A) and the very faint diagonal pattern in (B). Neuronal spike responses in (A) are sparse in that relatively few neurons code for any one orientation, while neuronal spike responses in (B) are not sparse. Additionally, many of the neuronal spike responses in (A) employ a wide range of firing rates to describe a subset of the orientation stimulus space, while spike responses in (B) have similar firing rates across all angles in all cases.
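
One common way to quantify the population sparseness described here is the Vinje–Gallant measure; a hedged sketch (this particular metric is an assumption for illustration, not necessarily the one used in the paper):

    import numpy as np

    def sparseness(rates):
        # Population sparseness (Vinje & Gallant, 2000): approaches 1 when
        # a response is carried by a single neuron, and 0 when all neurons
        # fire at identical rates.
        r = np.asarray(rates, dtype=float)
        n = r.size
        a = (r.sum() / n) ** 2 / (np.square(r).sum() / n)
        return (1 - a) / (1 - 1 / n)

    print(sparseness([0.0, 0.0, 0.0, 35.0]))     # 1.0: sparse, as in (A)
    print(sparseness([9.0, 10.0, 11.0, 10.0]))   # ~0.007: dense, as in (B)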

Figure 8

Plot of the target homeostatic firing rate parameters for the Exc and Inh groups, for high fitness SNNs shown in (A) and low fitness SNNs shown in (B). The Exc group homeostatic target firing rate is significantly more constrained (within the range 10–14 Hz) for the high fitness SNNs than the corresponding parameter for the low fitness SNNs. There were 128 high fitness SNNs and 2752 low fitness SNNs out of a total of 2880 individuals. Because EAs allow parent individuals to pass high value parameter values directly to their offspring, many offspring carry identical parameter values; this explains why 128 distinct points are not distinguishable in (A).
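
A target firing rate of this kind is typically enforced by scaling a neuron's incoming weights toward whatever keeps it near the target. A minimal, generic sketch of multiplicative homeostatic scaling (the exact rule used in CARLsim differs in detail; this form is an assumption for illustration):

    import numpy as np

    def homeostatic_scale(weights, avg_rate, target_rate, alpha=0.1):
        # Multiplicative synaptic scaling: if the neuron fires below its
        # homeostatic target rate, all incoming weights are scaled up;
        # above target, they are scaled down. alpha sets the adaptation
        # rate per update.
        error = (target_rate - avg_rate) / target_rate
        return weights * (1.0 + alpha * error)

    w = np.array([0.2, 0.5, 0.8])
    print(homeostatic_scale(w, avg_rate=6.0, target_rate=12.0))   # scaled up
    print(homeostatic_scale(w, avg_rate=18.0, target_rate=12.0))  # scaled down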

Figure 9

The time windows in which STDP occurs are often modeled as decaying exponentials, and each of the LTP and LTD windows can be characterized by a single decay constant. The degree to which the weight is increased during LTP or decreased during LTD is often called the LTP/LTD amplitude or magnitude. (A) Ratio of the STDP LTD/LTP decay constants for the Buffer to Exc group connections (blue) and the Exc to Inh group connections (red) for high fitness SNNs. (B) Ratio of the STDP LTD/LTP amplitudes for the Buffer to Exc group connections (blue) and the Exc to Inh group connections (red) for high fitness SNNs.
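
In this standard formulation, the weight change for a spike-time difference Δt = t_post − t_pre is Δw = A₊ · exp(−Δt/τ₊) for Δt > 0 (LTP) and Δw = −A₋ · exp(Δt/τ₋) for Δt < 0 (LTD). A minimal sketch of such a window, with illustrative constants (the amplitudes and decay constants below are assumptions, not the tuned values):

    import math

    def stdp_window(dt_ms, a_plus=0.002, a_minus=0.0015,
                    tau_plus=20.0, tau_minus=40.0):
        # Exponential STDP window. dt_ms = t_post - t_pre in milliseconds.
        # Pre-before-post (dt > 0) potentiates (LTP); post-before-pre
        # (dt < 0) depresses (LTD). The figure characterizes solutions by
        # the LTD/LTP ratios of the decay constants (tau_minus / tau_plus)
        # and of the amplitudes (a_minus / a_plus).
        if dt_ms > 0:
            return a_plus * math.exp(-dt_ms / tau_plus)
        elif dt_ms < 0:
            return -a_minus * math.exp(dt_ms / tau_minus)
        return 0.0

    print(stdp_window(10.0))    # LTP: positive weight change
    print(stdp_window(-10.0))   # LTD: negative weight change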

Figure 10

Population decoding of eight test presentation angles. The test presentation angle, θ, is shown above each population decoding figure. 100 simulation runs, each with identical parameter values but different training presentation orders, were conducted, and the firing rates of the Exc group neurons were recorded. The individual responses of each of the 400 neurons (4 Exc neurons × 100 runs) are shown with solid black arrows. These individual responses were summed to give a population vector (blue arrow) that was compared to the correct presentation angle (red arrow). Both the population vectors and the correct presentation angle vectors were normalized, while the component vectors were scaled down by a factor of 2 for display purposes (see text for details).
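
Population vector decoding typically sums unit vectors weighted by each neuron's firing rate; because orientation is periodic over 180°, angles are commonly doubled before summation. A hedged sketch (the doubling convention and preferred angles are assumptions; see the paper's text for the exact method):

    import numpy as np

    def decode_orientation(rates, preferred_deg):
        # Each neuron contributes a vector of length proportional to its
        # firing rate, pointing at twice its preferred orientation (so the
        # 180-degree orientation space maps onto a full circle). The angle
        # of the summed vector, halved, is the decoded orientation.
        phi = 2 * np.deg2rad(np.asarray(preferred_deg))
        x = np.sum(rates * np.cos(phi))
        y = np.sum(rates * np.sin(phi))
        return np.rad2deg(np.arctan2(y, x)) / 2 % 180

    prefs = [22.5, 67.5, 112.5, 157.5]
    rates = [30.0, 8.0, 1.0, 6.0]   # strongest response near 22.5 degrees
    print(decode_orientation(rates, prefs))  # ~24.5: close to 22.5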

Figure 11

Plot of GPU speedup over CPU vs. number of SNNs run in parallel, for three different SNN sizes: the blue line denotes SNNs with 1032 neurons, the green line SNNs with 2312 neurons, and the red line SNNs with 4104 neurons.
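
The speedup plotted here is simply the ratio of CPU to GPU wall-clock time per EA generation; a quick check against the per-generation times reported in the abstract:

    # Speedup = T_cpu / T_gpu per EA generation (hours), using the values
    # reported in the abstract for the 4104-neuron network.
    t_cpu_hours = 23.5
    t_gpu_hours = 0.35
    print(f"speedup: {t_cpu_hours / t_gpu_hours:.0f}x")
    # ~67x, in line with the reported ~65x figure.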
