GeNN  1.1
GPU enhanced Neuronal Networks (GeNN)

Introduction

GeNN is a software library for facilitating the simulation of neuronal network models on NVIDIA CUDA-enabled GPU hardware. It was designed with computational neuroscience models in mind rather than artificial neural networks. The main philosophy of GeNN is two-fold:

  • GeNN relies heavily on code generation to make it very flexible and to allow the simulation code to be adjusted to the model of interest and to the GPU hardware detected at compile time.
  • GeNN is lightweight in that it provides code for running models of neuronal networks on GPU hardware but leaves it to the user to write the final simulation engine. This gives the user maximal flexibility: she can use any of the provided code but can fully choose, inspect, extend or otherwise modify the generated code. She can also introduce her own optimisations and, in particular, control the data flow to and from the GPU at any desired granularity.

This user guide gives the novice user an overview of how to use GeNN and then leads on towards more expert use. With this, we jump right in.

Defining a network model

A network model is defined by providing the function

void modelDefinition(NNmodel &model)

In this function, the following tasks must be completed:

  • The name of the model must be defined:
    model.setName("MyModel");
  • Neuron populations (at least one) must be added (see Defining neuron populations). The user may add as many neuron populations as she wishes. If resources run out, there will be no warning; GeNN will simply fail. All populations must have unique names.
  • Synapse populations (at least one) must be added (see Defining synapse populations). Again, the number of synaptic connection populations is unlimited other than by resources.
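
Putting these tasks together, a minimal modelDefinition might look as follows. This is only a sketch: the model and population names, sizes and all numerical values are illustrative, and the parameter and initial-value arrays must match the neuron, synapse and postsynaptic models chosen (see the following sections for details).

#include "modelSpec.h"   // GeNN model specification header (assumed include)

float nParams[4]  = {0.02f, 0.2f, -65.0f, 8.0f};   // IZHIKEVICH a, b, c, d (illustrative values)
float nIni[2]     = {-65.0f, -13.0f};              // initial V and U (illustrative)
float sParams[3]  = {1.0f, -20.0f, 0.0f};          // NSYNAPSE tau_S, Epre, Erev (illustrative)
float psIni[1]    = {0.0f};                        // assumed initial value(s) of the postsynaptic mechanism (illustrative)
float psParams[2] = {1.0f, 0.0f};                  // EXPDECAY decay time constant and reversal potential (illustrative)

void modelDefinition(NNmodel &model)
{
    model.setName("MyModel");
    // two neuron populations with unique names
    model.addNeuronPopulation("Exc", 100, IZHIKEVICH, nParams, nIni);
    model.addNeuronPopulation("Inh", 25, IZHIKEVICH, nParams, nIni);
    // one synapse population connecting them
    model.addSynapsePopulation("ExcInh", NSYNAPSE, ALLTOALL, INDIVIDUALG, 0,
                               EXPDECAY, "Exc", "Inh", sParams, psIni, psParams);
}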

Defining neuron populations

Neuron populations are added using the function

model.addNeuronPopulation("name", n, TYPE, para, ini);

where the arguments are:

  • const char* name: Name of the neuron population
  • int n: number of neurons in the population
  • int TYPE: Type of the neurons, refers to either a standard type (see Neuron models) or a user-defined type
  • float *para: Parameters of this neuron type
  • float *ini: Initial values for variables of this neuron type

The user may add as many neuron populations as the model requires. They must all have unique names.
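
For example, a population of 10 Poisson neurons could be added as shown below. The parameter and initial-value arrays must list their entries in exactly the order in which the chosen model's parameters and variables are documented (see Neuron models below); the numerical values here are purely illustrative.

float poissonParams[4] = {10.0f, 2.5f, 20.0f, -60.0f};   // therate, trefract, Vspike, Vrest (illustrative)
float poissonIni[3]    = {-60.0f, 1234.0f, -10.0f};      // initial V, Seed, SpikeTime (illustrative)

model.addNeuronPopulation("PN", 10, POISSONNEURON, poissonParams, poissonIni);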

Defining synapse populations

Synapse populations are added with the command

model.addSynapsePopulation("name", sType, sConn, gType, delaySteps, postSyn, "preName", "postName", sParam, VpostSyn, PpostSyn);

where the arguments are

  • const char* name: The name of the synapse population
  • int sType: The type of synapse to be added (i.e. learning mode). See Models below for the available predefined synapse types.
  • int sConn: The type of synaptic connectivity. The options currently are "ALLTOALL", "DENSE", "SPARSE" (see Connectivity types).
  • int gType: The way in which the synaptic conductance g will be defined. Options are "INDIVIDUALG", "GLOBALG", "INDIVIDUALID". For their meaning, see Conductance definition methods below.
  • int delaySteps: Number of delay slots.
  • int postSyn: Postsynaptic integration method. See Post-synaptic integration methods for predefined types.
  • char* preName: Name of the (existing!) presynaptic neuron population.
  • char* postName: Name of the (existing!) postsynaptic neuron population.
  • float* sParam: A C-type array of floats that contains the parameter values (common to all synapses of the population) to be used for the defined synapses. The array must contain the right number of parameters in the right order for the chosen synapse type. If too few are given, segmentation faults may occur; if too many, the excess values are ignored.

Warning: If the synapse conductance definition type is "GLOBALG", the global value of the synapse conductances must be set with setSynapseG().

Synaptic updates can be triggered either by true spikes (one point per spike, after the threshold crossing) or by spike events (all points above the threshold).
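
As an illustration, the following call adds a graded-synapse population with sparse connectivity and individual conductances between two previously added populations "PN" and "LHI". All numerical values, and the sizes of the postsynaptic arrays, are illustrative only; they must match the chosen synapse type and postsynaptic integration method.

float gradedParams[4] = {0.0f, -40.0f, 1.0f, 10.0f};   // NGRADSYNAPSE Erev, Epre, tau_S, Vslope (illustrative)
float psIni[1]        = {0.0f};                        // assumed postsynaptic initial value(s) (illustrative)
float psParams[2]     = {1.0f, 0.0f};                  // EXPDECAY decay time constant, reversal potential (illustrative)

model.addSynapsePopulation(
    "PNLHI",        // name of the new synapse population
    NGRADSYNAPSE,   // sType: graded synapse, no learning
    SPARSE,         // sConn: sparse connectivity (see Connectivity types)
    INDIVIDUALG,    // gType: one conductance value per synapse
    0,              // delaySteps: no additional delay slots
    EXPDECAY,       // postSyn: exponentially decaying postsynaptic current
    "PN", "LHI",    // names of the (existing) pre- and postsynaptic populations
    gradedParams, psIni, psParams);

Because INDIVIDUALG is used here, the individual conductance values must be supplied at runtime from the user-side code (see Conductance definition methods below).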

Neuron models

There are a number of predefined models which can be chosen in the addNeuronPopulation function by their unique cardinal number, starting from 0. For convenience, C constants with readable names are predefined:

MAPNEURON (Map Neurons)

The MAPNEURON type is a map-based neuron model as defined in [3].

It has 2 variables:

  • V - the membrane potential
  • preV - the membrane potential at the previous time step

and it has 4 parameters:

  • Vspike - the membrane potential at the top of the spike
  • alpha - determines the shape of the iteration function
  • y - "shift / excitation" parameter, also determines the iteration function
  • beta - roughly regulates the scale of the input into the neuron

POISSONNEURON (Poisson Neurons)

Poisson neurons have a constant membrane potential (Vrest) unless they are randomly activated to the Vspike value, which can occur if (t - SpikeTime) > trefract.

It has 3 variables:

  • V - Membrane potential
  • Seed - Seed for random number generation
  • SpikeTime - Time at which the neuron spiked for the last time

and 4 parameters:

  • therate - Firing rate
  • trefract - Refractory period
  • Vspike - Membrane potential at spike (mV)
  • Vrest - Membrane potential at rest (mV)

TRAUBMILES (Hodgkin-Huxley neurons with Traub & Miles algorithm)

It has 4 variables:

  • V - membrane potential (E)
  • m - probability for Na channel activation (m)
  • h - probability for the Na channel not being blocked (h)
  • n - probability for K channel activation (n)

and 7 parameters:

  • gNa - Na conductance in 1/(mOhms * cm^2)
  • ENa - Na equilibrium potential in mV
  • gK - K conductance in 1/(mOhms * cm^2)
  • EK - K equilibrium potential in mV
  • gl - Leak conductance in 1/(mOhms * cm^2)
  • El - Leak equilibrium potential in mV
  • Cmem - Membrane capacitance density in muF/cm^2

IZHIKEVICH (Izhikevich neurons with fixed parameters)

This is the Izhikevich model with fixed parameters [1].

Variables are:

  • V - Membrane potential
  • U - Membrane recovery variable

Parameters are:

  • a - time scale of U
  • b - sensitivity of U
  • c - after-spike reset value of V
  • d - after-spike reset value of U

IZHIKEVICH_V (Izhikevich neurons with variable parameters)

This is the same model as IZHIKEVICH (Izhikevich neurons with fixed parameters), but with the parameters defined as variables, so that each neuron in the population can have its own parameter values rather than fixed values shared by the whole population.

Defining your own neuron model

In order to define a new neuron model for use in a GeNN application, it is necessary to populate an object of class neuronModel and append it to the global vector nModels. The neuronModel class has several data members that make up the full description of the neuron model:

  • simCode of type string: This needs to be assigned a C++ string that contains the code for executing the integration of the model for one time step. Within this code string, variables need to be referred to by $(NAME), where NAME is the name of the variable as defined in the vector varNames. The code may refer to the predefined primitives "DT" for the time step size and "Isyn" for the total incoming synaptic current.
  • varNames of type vector<string>: This vector needs to be filled with the names of the variables that make up the neuron state. A variable defined here as "NAME" can then be used in the code string with the $(NAME) syntax.
  • varTypes of type vector<string>: This vector needs to be filled with the variable type (e.g. "float", "double", etc) for the variables defined in varNames. Types and variables are matched to each other by position in the respective vectors.
  • pNames of type vector<string>: This vector will contain the names of parameters relevant to the model. If defined as "NAME" here, they can then be referenced as "$(NAME)" in the code string. The length of this vector determines the expected number of parameters in the initialisation of the neuron model. Parameters are currently assumed to be always of type float.
  • dpNames of type vector<string>: Names of dependent parameters. This is a mechanism for enhanced efficiency when running neuron models. If parameters with a model-side meaning, such as time constants or conductances, always appear in a certain combination in the model, it is more efficient to pre-compute the combination and define it as a dependent parameter. This vector contains the names of such dependent parameters.
  • tmpVarNames of type vector<string>: This vector can be used to request additional variables that are not part of the state of the neuron but used only as temporary storage during evaluation of the integration time step.
  • tmpVarTypes of type vector<string>: This vector will contain the variable types of the temporary variables.

Once the completed neuronModel object is appended to the nModels vector, it can be used in network descriptions by referring to its cardinal number in the nModels vector. I.e., if the model is added as the 4th entry, it is model "3" (counting starts at 0, as is the usual C convention). The cardinal number of a new model can be obtained by querying nModels.size() right before appending the new model.
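
The following sketch illustrates the procedure. The model itself (a simple leaky integrator) and its variable and parameter names are invented purely for illustration; only the neuronModel members described above are used.

// define a hypothetical leaky-integrator neuron as a new model
neuronModel n;
n.varNames.push_back("V");
n.varTypes.push_back("float");
n.pNames.push_back("tau");     // membrane time constant, referenced as $(tau)
n.pNames.push_back("Vrest");   // resting potential, referenced as $(Vrest)
n.simCode = "$(V) += (($(Vrest) - $(V)) / $(tau) + Isyn) * DT;";

unsigned int MYLEAKYNEURON = nModels.size();   // cardinal number the new model will receive
nModels.push_back(n);
// MYLEAKYNEURON can now be used as the TYPE argument of addNeuronPopulation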

For dependent parameters in a user-defined model, it is the user's responsibility to initialize the derived parameters from the relevant entries of the global neuronPara vector, to store them in a vector, and to append this vector to the global vector dnp. This needs to be done right after the call to addNeuronPopulation that involves the neuron model in question. For an example of what needs to be done, refer to the member function initDerivedNeuronPara() of class NNmodel, which does this for Rulkov map neurons.

Explicit current input to neurons

The user can decide whether a neuron group receives external input current in addition to the synaptic input it receives from the network. External input to a neuron group is activated by calling the activateDirectInput function. It takes two arguments: the first is the name of the neuron group that receives the input current; the second defines the type of input. The current options are listed below, followed by a short example.

  • 0: NOINP: The neuron group receives no input. This is the default when explicit input is not activated.
  • 1: CONSTINP: All the neurons receive a constant input value. This value may also be defined as a variable.
  • 2: MATINP: Input is read from a file containing the input matrix.
  • 3: INPRULE: Input is defined as a rule. It differs from CONSTINP in the way the input is injected: CONSTINP creates a single value which is modified at run time, so complex instructions are limited, whereas INPRULE creates an input array which is copied into device memory.
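
For example, to give a previously added neuron group "PN" a constant input current, one would call the following within modelDefinition (it is assumed here that activateDirectInput is called on the model object like the other model-definition functions):

model.activateDirectInput("PN", CONSTINP);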

Synapse models

Models

Currently 3 predefined synapse models are available:

NSYNAPSE (No Learning)

If this model is selected, no learning rule is applied to the synapse. The model has 3 parameters:

  • tau_S* - Decay time constant for S [ms]
  • Epre - Presynaptic threshold potential
  • Erev* - Reversal potential

(* DEPRECATED: Decay time constant and reversal potential in synapse parameters are not used anymore and they will be removed in the next release. They should be defined in the postsynaptic mechanisms.)

NGRADSYNAPSE (Graded Synapse)

In a graded synapse, the conductance is updated gradually with the rule:

\[ gSyn = g \cdot \tanh\left( (E - E_{pre}) / V_{slope} \right) \]

The parameters are:

  • Erev*: Reversal potential
  • Epre: Presynaptic threshold potential
  • tau_S*: Decay time constant for S [ms]
  • Vslope: Activation slope of graded release

(* DEPRECATED: Decay time constant and reversal potential in synapse parameters are not used anymore and they will be removed in the next release. They should be defined in the postsynaptic mechanisms.)

LEARN1SYNAPSE (Learning Synapse with a Primitive Role)

This is a simple STDP rule including a time delay for the finite transmission speed of the synapse, defined as a piecewise function.

This model has 13 parameters:

  • Erev: Reversal potential
  • Epre: Presynaptic threshold potential
  • tau_S: Decay time constant for S [ms]
  • TLRN: Time scale of learning changes
  • TCHNG: Width of learning window
  • TDECAY: Time scale of synaptic strength decay
  • TPUNISH10: Time window of suppression in response to 1/0
  • TPUNISH01: Time window of suppression in response to 0/1
  • GMAX: Maximal conductance achievable
  • GMID: Midpoint of sigmoid g filter curve
  • GSLOPE: Slope of sigmoid g filter curve
  • TAUSHIFT: Shift of learning curve
  • GSYN0: Value the synaptic conductance g decays to

For more details, see [2].

Defining a new synapse model

A new synapse model can be defined in much the same way as a neuronModel. This feature is still experimental and is supported only in the SpineML and development branches.

A synapse model is a weightUpdateModel object which consists of variables, parameters, and three string objects which will be substituted in the code. These three strings are:

  • simCode: Simulation code that is used when a true spike is detected (only one point after Vthresh)
  • simCodeEvnt: Simulation code that is used for spike events (all the instances where Vm > Vthresh)
  • simLearnPost: Simulation code which is used in the learnSynapsesPost kernel/function, where the postsynaptic neuron spikes before the presynaptic neuron in the STDP window. Usually this is simply the conductance update rule defined by the simCode elements above, applied with negative timing. This code is needed because simCode and simCodeEvnt are evaluated while scanning the presynaptic spikes, where it is not possible to detect that a postsynaptic neuron fired before the presynaptic one.

These code strings would include the update functions for adding up the conductances for that neuron model and for changing the conductances for the next time step (learning). The conductance should be referred to in this code as $(G).
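
By analogy with the neuronModel mechanism described above, a new synapse model might be defined roughly as sketched below. Since this feature is experimental, the member names other than simCode, simCodeEvnt and simLearnPost, the global vector the object is appended to, and the contents of the code strings are assumptions and may differ from the actual interface.

weightUpdateModel s;
s.pNames.push_back("Epre");         // presynaptic threshold potential (assumed parameter name)
// variables would be added analogously via varNames/varTypes;
// the synaptic conductance is referred to as $(G) in the code strings
s.simCode      = "/* add $(G) to the postsynaptic input when a true spike is detected */";
s.simCodeEvnt  = "/* the same, but triggered by spike events (Vm > Vthresh) */";
s.simLearnPost = "/* conductance change applied when the postsynaptic neuron fires before the presynaptic one */";
weightUpdateModels.push_back(s);    // assumed global vector, analogous to nModels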

Connectivity types

If INDIVIDUALG is used with ALLTOALL connectivity, the conductance values are stored in an array "ftype *gp<synapseName>", which is defined and initialised in runner.cc in the generated code.
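
For example, with INDIVIDUALG and ALLTOALL connectivity the user-side code could fill such an array before it is copied to the device. The row-major pre x post indexing shown here is an assumption made for illustration; gpPNKC is the array GeNN would generate for a synapse population called "PNKC".

// fill individual conductances of an all-to-all population "PNKC";
// nPre and nPost are the sizes of the pre- and postsynaptic populations
for (unsigned int i = 0; i < nPre; i++) {
    for (unsigned int j = 0; j < nPost; j++) {
        gpPNKC[i * nPost + j] = 0.01f;   // illustrative conductance value
    }
}
// the values are then transferred to the GPU with the copyGToDevice function (see below)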

If INDIVIDUALID is used with ALLTOALL connectivity, the connectivity matrix is stored in an array "unsigned int *gp<synapseName>".

If connectivity is of the SPARSE type, connectivity and conductance values are stored in a struct in order to occupy only the minimum memory needed. The struct Conductance contains one integer member and three array members:

  • unsigned int connN: the number of connections in the population. This value is needed for the allocation of the arrays.
  • floattype *gp: the values of the conductances. The indices that these values correspond to are defined, on a pre-to-post basis, by the following two arrays.
  • unsigned int *gInd, of size connN: the indices of the corresponding postsynaptic neurons, concatenated for each presynaptic neuron.
  • unsigned int *gIndInG, of size model.neuronN[model.synapseSource[synInd]]+1: for each presynaptic neuron n, this array gives the index in gp (and gInd) at which its connections start. More specifically, gIndInG[n+1]-gIndInG[n] gives the number of postsynaptic connections of neuron n, and its conductance values are stored in gp[gIndInG[n]] to gp[gIndInG[n+1]-1]. If a presynaptic neuron n has no connections, then gIndInG[n] = gIndInG[n+1].

For example, consider a network of two presynaptic neurons connected to three postsynaptic neurons: the 0th presynaptic neuron is connected to the 1st and 2nd postsynaptic neurons, and the 1st presynaptic neuron to the 0th and 2nd. The struct Conductance would then have the following members (indexing from 0):

  • connN = 4
  • gp = {g(pre0-post1), g(pre0-post2), g(pre1-post0), g(pre1-post2)}
  • gInd = {1, 2, 0, 2}
  • gIndInG = {0, 2, 4}
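
A minimal sketch of how these arrays could be traversed on the host side is given below, assuming a struct with exactly the members described above (the struct here merely restates that description and is not the actual GeNN declaration). Applied to the two-by-three example, it would print the four connections listed in gp.

#include <cstdio>

struct Conductance {
    unsigned int connN;       // total number of connections
    float *gp;                // conductance values (floattype in GeNN; float used here for the sketch)
    unsigned int *gInd;       // postsynaptic indices, concatenated per presynaptic neuron
    unsigned int *gIndInG;    // start index into gp/gInd for each presynaptic neuron (size nPre+1)
};

// print all connections of a sparse synapse population with nPre presynaptic neurons
void printConnections(const Conductance &c, unsigned int nPre)
{
    for (unsigned int pre = 0; pre < nPre; pre++) {
        // connections of presynaptic neuron pre occupy indices gIndInG[pre] .. gIndInG[pre+1]-1
        for (unsigned int i = c.gIndInG[pre]; i < c.gIndInG[pre + 1]; i++) {
            printf("pre %u -> post %u, g = %f\n", pre, c.gInd[i], c.gp[i]);
        }
    }
}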

See tools/gen_syns_sparse_IzhModel, used in the Izh_sparse project, for a working example.

Conductance definition methods

The available options work as follows:

  • INDIVIDUALG: When this option is chosen in the addSynapsePopulation command, GeNN reserves an array of n_pre x n_post floats for individual conductance values, one for each combination of pre- and postsynaptic neuron. The actual values of the conductances are passed at runtime from the user-side code, using the copyGToDevice function.
  • GLOBALG: When this option is chosen, the addSynapsePopulation command must be followed within the modelDefinition function by a call to setSynapseG for this synapse population (see the example after this list). This option can only be sensibly combined with connectivity type ALLTOALL.
  • INDIVIDUALID: When this option is chosen, GeNN expects to use the same maximal conductance for all existing synaptic connections, but which synapses exist will be defined at runtime from the user-side code.
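
For instance, for a synapse population "ExcInh" created with GLOBALG, the single shared conductance value would be set inside modelDefinition. The signature used here (population name followed by the value) is an assumption, and the number is illustrative:

model.setSynapseG("ExcInh", 0.01f);   // one conductance value shared by all synapses of "ExcInh"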

Post-synaptic integration methods

The shape of the postsynaptic current can be defined by either using a predefined method or by adding a new postSynModel object.

  • EXPDECAY: Exponential decay. Decay time constant and reversal potential parameters are needed for this postsynaptic mechanism.