GeNN  1.1
GPU enhanced Neuronal Networks (GeNN)
GeNN Documentation

GeNN is a software package that enables neuronal network simulations on NVIDIA GPUs through code generation.

This documentation is under construction. If you cannot find what you are looking for, please contact the project developers.

Download

You can download GeNN either as a zip file of a stable release, or — using the Mercurial version control system — as a snapshot of the most recent stable version or of the (unstable, potentially buggy) development version.

Downloading a release

Point your browser to http://sourceforge.net/projects/genn/ and download the latest release using the green download button. Then continue as described in the Installation section.

Obtaining a Mercurial snapshot

Install Mercurial (http://mercurial.selenic.com/) on your system. Then point your browser to http://sourceforge.net/p/genn/code/ and use the suggested command line to clone a copy of the Mercurial snapshot:

hg clone http://hg.code.sf.net/p/genn/code genn-default

This clones the default branch, which always contains a working (though not necessarily bug-free) version. If you want the bleeding-edge development version (which may or may not be fully functional at any given time), use

hg clone http://hg.code.sf.net/p/genn/code genn-development -b development

In both cases, "genn-default" and "genn-development" are target directory names, which you can choose to your liking.

Alternatively you can click the "Download snapshot" button (http://sourceforge.net/p/genn/code/ci/default/tarball). You can then skip the unzip step and continue as described in Installation.

Installation

Installation of GeNN:

(i) Unpack GeNN.zip in a convenient location.

(ii) Define the environment variable "GeNNPATH" to point to the main GeNN directory. For example, if you extracted GeNN to /usr/local/GeNN, you can add "export GeNNPATH=/usr/local/GeNN" to your login script. If you are using CYGWIN, the path should be a Windows path or a mixed path (i.e. with forward slashes instead of backslashes), as it will be interpreted by cl.

(iii) Add $GeNNPATH/lib/bin to your PATH variable, e.g. "export PATH=$PATH:$GeNNPATH/lib/bin". Under CYGWIN, it is safer to enter the full Linux-style path (e.g. "export PATH=$PATH:/usr/local/GeNN/lib/bin").
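Steps (ii) and (iii) can be collected in your login script. A minimal sketch, assuming GeNN was extracted to /usr/local/GeNN (the location is only an example — adjust it to your installation):

```shell
# Login-script fragment (e.g. for ~/.bashrc): point GeNNPATH at the
# GeNN installation and make the GeNN tools available on the PATH.
# The /usr/local/GeNN location is an assumption, not a requirement.
export GeNNPATH=/usr/local/GeNN
export PATH=$PATH:$GeNNPATH/lib/bin
```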

(iv) Get a fresh installation of the NVIDIA CUDA toolkit from https://developer.nvidia.com/cuda-downloads

(v) Set the CUDA_PATH variable in $GeNNPATH/lib/include/makefile_common.mk to the location of your NVIDIA CUDA toolkit installation, if it is not already set by the system. For most people, the default value of /usr/local/cuda is fine.
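Since the makefile default is only used when CUDA_PATH "is not already set by the system", you can alternatively set it in your login script instead of editing the makefile. A sketch, assuming the toolkit lives at the default /usr/local/cuda location:

```shell
# Optional login-script fragment: export CUDA_PATH so the GeNN makefiles
# pick up the toolkit location from the environment rather than from the
# default in makefile_common.mk. The path is an assumption — point it at
# your actual CUDA toolkit installation.
export CUDA_PATH=/usr/local/cuda
```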

(vi) To add extra linker, include, and compiler flags on a per-project basis, modify the Makefile examples under $GeNNPATH/lib/src/ and $GeNNPATH/userproject/; to change the global default flags, modify $GeNNPATH/lib/include/makefile_common.mk.

This completes the installation.

If you are using GeNN under CYGWIN, you need cl, which comes with Visual Studio; you need a Visual Studio version that is supported by your CUDA toolkit. Before launching CYGWIN, you first have to source the vcvarsall.bat script in the Visual Studio directory. Note that the debugging option is not available under CYGWIN, as it relies on cuda-gdb.

CYGWIN compatibility is still experimental at the moment. We are actively working on it; please check this page again later if you experience any issues, and/or contact the developers.

Quickstart

To get a quick start and run a provided model, go to $GeNNPATH/tools and type "make".

This will compile additional tools for creating and running example projects.

For a first complete test, the system is best used with a full driver program such as the one in the Insect Olfaction Model example, whose usage is:

tools/generate_run [CPU(0)/GPU(1)] [#AL] [#KC] [#LH] [#DN] [gscale] [DIR] [EXE] [MODEL] [DEBUG OFF(0)/ON(1)]

To use it, navigate to the "userproject/MBody1_project" directory and type

../../tools/generate_run 1 100 1000 20 100 0.00117 outname classol_sim MBody1 0

which generates a model of the locust olfactory system.

The tool generate_run will generate connectivity files for the model MBody1, then compile and run it on the GPU, with 100 antennal lobe (AL) neurons, 1000 mushroom body Kenyon cells (KC), 20 lateral horn interneurons (LH) and 100 mushroom body output neurons (DN). All output files will be prefixed with "outname" and will be created under the "outname" directory.

This is already a fairly highly integrated example.

More details can be found in the User Manual.

How to use GeNN

The conventional way to use GeNN is to use a program such as tools/generate_run.

In more detail, what tools/generate_run and similar programs do is: a) use auxiliary tools to generate the appropriate connectivity matrices and store them in files;

b) build the source code for the model by writing the neuron numbers into userproject/include/sizes.h and executing "buildmodel MBody1 [DEBUG OFF/ON]";

c) compile the generated code by invoking "make clean && make", and then run it, e.g. "linux/release/classol_sim r1 1".

  • In other words, the simulation code is produced in two steps: "buildmodel Model1 [DEBUG OFF/ON]" followed by "make clean && make".
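Put together, the manual workflow behind generate_run looks roughly as follows. This is only an illustrative sketch, using the file names and arguments of the MBody1 example above; it assumes GeNNPATH is set, the neuron numbers have already been written into userproject/include/sizes.h, and the connectivity files have been generated:

```shell
# Sketch of the manual build-and-run workflow for the MBody1 example
# (steps a-c above). Paths and arguments follow the example in this
# document; adapt them to your own model and project layout.
cd $GeNNPATH/userproject/MBody1_project
buildmodel MBody1 0                # generate the simulation code (debugging off)
make clean && make                 # compile the generated code
linux/release/classol_sim r1 1     # run the simulation, as in the example above
```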

Example projects

GeNN comes with several example projects which show how to use some basic features. These can be found in Example projects.

Defining your own model

To use the library for GPU code generation only, one would proceed as follows:

a) The model in question is defined in a file, say "Model1.cc".

b) This file needs to:

  1. define "DT"
  2. include "modelSpec.h" and "modelSpec.cc"
  3. define the values for initial variables and parameters for neuron and synapse populations
  4. contain the model's definition in the form of a function
    void modelDefinition(NNmodel &model);
    "MBody1.cc" shows a typical example.

c) The programmer writes their own simulation code along similar lines to "map_classol.*" together with "classol_sim.*". In this code, they

  • define the connectivity matrices between neuron groups (in this example, these are read from files);
  • define input patterns (e.g. for Poisson neurons, as in the example);
  • use "stepTimeGPU(x, y, z);" to run one time step on the GPU, or "stepTimeCPU(x, y, z);" to run one on the CPU. Both versions are always compiled, but mixing the two makes little sense: the host version uses the same memory into which results from the GPU version are copied (see next point);
  • use functions like "copyStateFromDevice();" to obtain results from the GPU calculations.