(% class="box warningmessage" %)
(((
Tutorial under development
)))

== Learning objectives ==

In this tutorial, you will learn how to build a simple network of integrate-and-fire neurons using PyNN, how to run simulation experiments with this network using different simulators, and how to visualize the data generated by these experiments.

== Audience ==

This tutorial is intended for people with at least a basic knowledge of neuroscience (high-school level or above) and basic familiarity with the Python programming language. It should also be helpful for people who already have advanced knowledge of neuroscience and neural simulation, who simply wish to learn how to use PyNN and how it differs from other simulation tools they know.

== Prerequisites ==

To follow this tutorial, you need a basic knowledge of neuroscience (high-school level or above), basic familiarity with the Python programming language, and either a computer with PyNN, NEST, NEURON, and Brian 2 installed or an EBRAINS account and basic familiarity with Jupyter notebooks. If you don't have these tools installed, see our previous tutorials, which guide you through the installation.

== Format ==

This tutorial will be a video combining slides, animations, and screencast elements. The intended duration is 10 minutes.

== Script ==

(% class="box successmessage" %)
(((
**Slide** showing tutorial title, PyNN logo, link to PyNN service page.
)))

Hello, my name is X.

This video is one of a series of tutorials for PyNN, which is Python software for modelling and simulating spiking neural networks.

For a list of the other tutorials in this series, you can visit ebrains.eu/service/pynn, that's p-y-n-n.

(% class="box successmessage" %)
(((
**Slide** listing learning objectives
)))

In this tutorial, you will learn the basics of PyNN: how to build a simple network of integrate-and-fire neurons using PyNN, how to run simulation experiments with this network using different simulators, and how to visualize the data generated by these experiments.

(% class="box successmessage" %)
(((
**Slide** listing prerequisites
)))

To follow this tutorial, you need a basic knowledge of neuroscience (high-school level or above), basic familiarity with the Python programming language, and you should have already followed our earlier tutorial video, which guides you through the installation process.

This video covers PyNN 0.10. If you've installed a more recent version of PyNN, you might want to look for an updated version of this video.

(% class="box successmessage" %)
(((
**Slide** showing animation of leaky integrate-and-fire model
)))

PyNN is a tool for building models of nervous systems, and parts of nervous systems, at the level of individual neurons and synapses.

We'll start off by creating a group of 100 neurons, using a really simple model of a neuron, the leaky integrate-and-fire model.

When we inject positive current into this model, either from an electrode or from an excitatory synapse, it increases the voltage across the cell membrane, until the voltage reaches a certain threshold.

At that point, the neuron produces an action potential, also called a spike, and the membrane voltage is reset.

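(For reference, not part of the narration: the behaviour just described corresponds to the standard leaky integrate-and-fire equations, in which tau_m is the membrane time constant, R_m the membrane resistance, and I(t) the injected current.)

\tau_m \dfrac{dV}{dt} = -(V - V_{\mathrm{rest}}) + R_m I(t), \qquad \text{if } V \geq V_{\mathrm{thresh}}: \text{ emit a spike and reset } V \to V_{\mathrm{reset}}
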
(% class="box infomessage" %)
(((
**Screencast** - blank document in editor
)))

In this video, you'll see my editor on the left and my terminal and my file browser on the right. I'll be writing code in the editor and then running my scripts in the terminal. You're welcome to follow along (you can pause the video at any time if I'm going too fast) or you can just watch.

Let's start by writing a docstring: "Simple network model using PyNN".

For now, we're going to use the NEST simulator to simulate this model, so we import the PyNN-for-NEST module.

As with any numerical model, we need to break time down into small steps, so let's set that up with steps of 0.1 milliseconds.

(% class="box infomessage" %)
(((
**Screencast** - current state of editor
\\(% style="color:#e74c3c" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim
sim.setup(timestep=0.1)
)))

PyNN comes with a selection of integrate-and-fire models. We're going to use the IF_curr_exp model, where "IF" is for integrate-and-fire, "curr" means that synaptic responses are changes in current, and "exp" means that the shape of the current is a decaying exponential function.

This is where we set the parameters of the model: the resting membrane potential is -65 millivolts, the spike threshold is -55 millivolts, the reset voltage after a spike is again -65 millivolts, the refractory period after a spike is one millisecond, the membrane time constant is 10 milliseconds, and the membrane capacitance is 1 nanofarad. We're also going to inject a constant bias current of 1.1 nanoamps into these neurons, so that we get some action potentials.

(% class="box infomessage" %)
(((
**Screencast** - current state of editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim
sim.setup(timestep=0.1)(%%)
(% style="color:#e74c3c" %)cell_type = sim.IF_curr_exp(v_rest=-65, v_thresh=-55, v_reset=-65, tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)
)))

Let's create 100 of these neurons; then we're going to record the membrane voltage and run a simulation for 100 milliseconds.

(% class="box infomessage" %)
(((
**Screencast** - current state of editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim
sim.setup(timestep=0.1)(%%)
(% style="color:#000000" %)cell_type = sim.IF_curr_exp(v_rest=-65, v_thresh=-55, v_reset=-65, tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)(%%)
(% style="color:#e74c3c" %)population1 = sim.Population(100, cell_type, label="Population 1")
population1.record("v")
sim.run(100.0)(%%)
\\**Run script in terminal**
)))

PyNN has some built-in tools for making simple plots, so let's import those, and plot the membrane voltage of the zeroth neuron in our population (remember Python starts counting at zero).

(% class="box infomessage" %)
(((
**Screencast** - current state of editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim(%%)
(% style="color:#e74c3c" %)from pyNN.utility.plotting import Figure, Panel(%%)
(% style="color:#000000" %)sim.setup(timestep=0.1)(%%)
(% style="color:#000000" %)cell_type = sim.IF_curr_exp(v_rest=-65, v_thresh=-55, v_reset=-65, tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)(%%)
(% style="color:#000000" %)population1 = sim.Population(100, cell_type, label="Population 1")
population1.record("v")
sim.run(100.0)(%%)
(% style="color:#e74c3c" %)data_v = population1.get_data().segments[0].filter(name='v')[0]
Figure(
Panel(
data_v[:, 0],
xticks=True, xlabel="Time (ms)",
yticks=True, ylabel="Membrane potential (mV)"
),
title="Response of neuron #0",
annotations="Simulated with NEST"
).show()(%%)
\\**Run script in terminal, show figure**
)))

As you'd expect, the bias current causes the membrane voltage to increase until it reaches threshold (it doesn't increase in a straight line because it's a //leaky// integrate-and-fire neuron). Once it hits the threshold, the voltage is reset and then stays at the same level for a short time (this is the refractory period) before it starts to increase again.

Now, all 100 neurons in our population are identical, so if we plotted the first neuron, the second neuron, and so on, we'd get the same trace.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor

**...**
(% style="color:#000000" %)Figure(
Panel(
data_v[:, (% style="color:#e74c3c" %)0:5(% style="color:#000000" %)],
xticks=True, xlabel="Time (ms)",
yticks=True, ylabel="Membrane potential (mV)"
),
title="Response of (% style="color:#e74c3c" %)first five neurons(% style="color:#000000" %)",
annotations="Simulated with NEST"
).show()(%%)
\\**Run script in terminal, show figure**
)))

Let's change that. In nature, every neuron is a little bit different, so let's set the resting membrane potential, the spike threshold, and the reset voltage randomly, drawing each from a Gaussian distribution.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim(%%)
(% style="color:#000000" %)from pyNN.utility.plotting import Figure, Panel(%%)
(% style="color:#e74c3c" %)from pyNN.random import RandomDistribution, NumpyRNG(%%)
(% style="color:#000000" %)sim.setup(timestep=0.1)(%%)
(% style="color:#e74c3c" %)rng = NumpyRNG(seed=1)(%%)
(% style="color:#000000" %)cell_type = sim.IF_curr_exp(
(% style="color:#e74c3c" %) v_rest=RandomDistribution('normal', mu=-65.0, sigma=1.0, rng=rng),
 v_thresh=RandomDistribution('normal', mu=-55.0, sigma=1.0, rng=rng),
 v_reset=RandomDistribution('normal', mu=-65.0, sigma=1.0, rng=rng), (%%)
(% style="color:#000000" %) tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)

**...**

(% style="color:#000000" %)Figure(
Panel(
data_v[:, 0:5],
xticks=True, xlabel="Time (ms)",
yticks=True, ylabel="Membrane potential (mV)"
),
title="Response of first five neurons (% style="color:#e74c3c" %)with heterogeneous parameters(% style="color:#000000" %)",
annotations="Simulated with NEST"
).show()(%%)
\\**Run script in terminal, show figure**
)))

Now, if we run our simulation again, we can see the effect of this heterogeneity in the neuron population.

(% class="box successmessage" %)
(((
**Slide** showing addition of second population and of connections between them
)))

(% class="wikigeneratedid" %)
So far, we have a population of neurons, but there are no connections between them, so we don't yet have a network. Let's add a second population of the same size as the first, but we'll set the offset current to zero, so these neurons don't fire action potentials spontaneously.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor
\\**...**
(% style="color:#000000" %)population1 = sim.Population(100, cell_type, label="Population 1")(%%)
(% style="color:#e74c3c" %)population2 = sim.Population(100, cell_type, label="Population 2")
population2.set(i_offset=0)(%%)
(% style="color:#000000" %)population1.record("v")(%%)
(% style="color:#e74c3c" %)population2.record("v")(%%)
(% style="color:#000000" %)sim.run(100.0)(%%)
**...**
)))

Now, we want to create synaptic connections between the neurons in Population 1 and those in Population 2. There are lots of different ways these could be connected.

(% class="box successmessage" %)
(((
**Slide** showing all-to-all connections
)))

We could connect all neurons in Population 1 to all those in Population 2.

(% class="box successmessage" %)
(((
**Slide** showing random connections
)))

We could connect the populations randomly, in several different ways.

(% class="box successmessage" %)
(((
**Slide** showing distance-dependent connections
)))

(% class="wikigeneratedid" %)
We could connect the populations randomly, but with a probability of connection that depends on the distance between the neurons.

(% class="box successmessage" %)
(((
**Slide** showing explicit lists of connections
)))

(% class="wikigeneratedid" %)
Or we could connect the neurons in a very specific manner, based on an explicit list of connections.

(% class="wikigeneratedid" %)
Just as PyNN provides a variety of neuron models, it also comes with a range of connection algorithms built in, and you can add your own connection methods; a few of the built-in options are sketched below for reference.

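(Reference sketch, not part of the narration: the connection strategies just mentioned correspond to PyNN's built-in connector classes, a few of which are shown below. The probabilities, weights, delays, and the exponential distance expression are arbitrary example values, and the distance-dependent connector is only meaningful if the populations have been given spatial positions.)

(% class="box infomessage" %)
(((
**Example (for reference, not shown in the video)**
\\import pyNN.nest as sim
from pyNN.random import NumpyRNG
rng = NumpyRNG(seed=1)
all_to_all = sim.AllToAllConnector()  # every neuron in population1 connects to every neuron in population2
random_half = sim.FixedProbabilityConnector(p_connect=0.5, rng=rng)  # each possible connection exists with probability 0.5
five_inputs = sim.FixedNumberPreConnector(n=5, rng=rng)  # each post-synaptic neuron receives exactly 5 connections
distance_dep = sim.DistanceDependentProbabilityConnector("exp(-d)", rng=rng)  # connection probability decays with distance d
explicit_list = sim.FromListConnector([(0, 0, 0.5, 0.5), (1, 2, 0.5, 0.5)])  # tuples of (pre index, post index, weight, delay)
)))

Any one of these connector objects can then be passed to sim.Projection(), as in the script that follows.
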
(% class="box successmessage" %)
(((
**Slide** showing addition of second population, and of connections between them, labelled as a Projection.
)))

(% class="wikigeneratedid" %)
In PyNN, we call a group of connections between two populations a //Projection//. To create a Projection, we need to specify the presynaptic population, the postsynaptic population, the connection algorithm, and the synapse model. Here, we're using the simplest synapse model available in PyNN, for which the synaptic weight is constant over time; there is no plasticity.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor

**...**
(% style="color:#000000" %)population2.record("v")(%%)
(% style="color:#e74c3c" %)connection_algorithm = sim.FixedProbabilityConnector(p_connect=0.5, rng=rng)
synapse_type = sim.StaticSynapse(weight=0.5, delay=0.5)
connections = sim.Projection(population1, population2, connection_algorithm, synapse_type)(%%)
(% style="color:#000000" %)sim.run(100.0)(%%)
**...**
)))

(% class="wikigeneratedid" %)
Finally, let's update our figure by adding a second panel to show the responses of Population 2.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor
\\**...**
(% style="color:#000000" %)sim.run(100.0)(%%)
(% style="color:#e74c3c" %)data1_v(% style="color:#000000" %) = population1.get_data().segments[0].filter(name='v')[0](%%)
(% style="color:#e74c3c" %)data2_v = population2.get_data().segments[0].filter(name='v')[0](%%)
(% style="color:#000000" %)Figure(
Panel(
(% style="color:#e74c3c" %)data1_v(% style="color:#000000" %)[:, 0:5],
xticks=True, (% style="color:#e74c3c" %)--xlabel="Time (ms)",--(%%)
(% style="color:#000000" %) yticks=True, ylabel="Membrane potential (mV)"
),
(% style="color:#e74c3c" %)Panel(
data2_v[:, 0:5],
xticks=True, xlabel="Time (ms)",
yticks=True
),(%%)
(% style="color:#000000" %) title="Response of (% style="color:#e74c3c" %)simple network(% style="color:#000000" %)",
annotations="Simulated with NEST"
).show()

**Run script in terminal, show figure**
)))

(% class="wikigeneratedid" %)
And there we have it: our simple neuronal network of integrate-and-fire neurons, written in PyNN, simulated with NEST. If you prefer to use the NEURON simulator, PyNN makes this very simple: we import the PyNN-for-NEURON module instead.

(% class="box infomessage" %)
(((
**Screencast** - final state of editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.(% style="color:#e74c3c" %)neuron(% style="color:#000000" %) as sim(%%)
(% style="color:#000000" %)from pyNN.utility.plotting import Figure, Panel(%%)
(% style="color:#000000" %)from pyNN.random import RandomDistribution, NumpyRNG(%%)
(% style="color:#000000" %)sim.setup(timestep=0.1)(%%)
(% style="color:#000000" %)rng = NumpyRNG(seed=1)(%%)
(% style="color:#000000" %)cell_type = sim.IF_curr_exp(
v_rest=RandomDistribution('normal', mu=-65.0, sigma=1.0, rng=rng),
v_thresh=RandomDistribution('normal', mu=-55.0, sigma=1.0, rng=rng),
v_reset=RandomDistribution('normal', mu=-65.0, sigma=1.0, rng=rng),
tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)
population1 = sim.Population(100, cell_type, label="Population 1")
population2 = sim.Population(100, cell_type, label="Population 2")
population2.set(i_offset=0)
population1.record("v")
population2.record("v")
connection_algorithm = sim.FixedProbabilityConnector(p_connect=0.5, rng=rng)
synapse_type = sim.StaticSynapse(weight=0.5, delay=0.5)
connections = sim.Projection(population1, population2, connection_algorithm, synapse_type)(%%)
(% style="color:#000000" %)sim.run(100.0)(%%)
(% style="color:#000000" %)data1_v = population1.get_data().segments[0].filter(name='v')[0]
data2_v = population2.get_data().segments[0].filter(name='v')[0]
Figure(
Panel(
data1_v[:, 0:5],
xticks=True,
yticks=True, ylabel="Membrane potential (mV)"
),
Panel(
data2_v[:, 0:5],
xticks=True, xlabel="Time (ms)",
yticks=True
),(%%)
(% style="color:#000000" %) title="Response of simple network",
annotations="Simulated with (% style="color:#e74c3c" %)NEURON(% style="color:#000000" %)"
).show()

**Run script in terminal, show figure**
)))

347 (% class="wikigeneratedid" %)
348 As you would hope, NEST and NEURON give essentially identical results.
349
adavison 14.1 350 (% class="box successmessage" %)
351 (((
352 **Slide** recap of learning objectives
353 )))
adavison 4.1 354
That is the end of this tutorial, in which I've demonstrated how to build a simple network using PyNN and how to simulate it using two different simulators, NEST and NEURON.

Of course, PyNN allows you to create much more complex networks than this, with more realistic neuron models, synaptic plasticity, spatial structure, and so on. You can also use other simulators, such as Brian or SpiNNaker, and you can run simulations in parallel on clusters or supercomputers.

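(For reference, not part of the narration: switching to another supported backend is, as with NEURON above, just a change of the import line, provided that simulator is installed. The Brian 2 backend module shown below is the one shipped with PyNN 0.10; the SpiNNaker backend is provided separately by the sPyNNaker package.)

(% class="box infomessage" %)
(((
**Example (for reference, not shown in the video)**
\\import pyNN.brian2 as sim  # use the Brian 2 backend instead of pyNN.nest or pyNN.neuron; the rest of the script is unchanged
)))
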
We will be releasing a series of tutorials, throughout the rest of 2021 and 2022, to introduce these more advanced features of PyNN, so keep an eye on the EBRAINS website.

(% class="box successmessage" %)
(((
**Slide** acknowledgements, contact information
)))

(% class="wikigeneratedid" %)
PyNN has been developed by many different people, with financial support from several organisations. I'd like to mention in particular the CNRS and the European Commission, through the FACETS, BrainScaleS, and Human Brain Project grants.

(% class="wikigeneratedid" %)
For more information, visit neuralensemble.org/PyNN. If you have questions, you can contact us through the PyNN GitHub project, the NeuralEnsemble forum, EBRAINS support, or the EBRAINS Community.