(% class="box warningmessage" %)
(((
tutorial under development
)))

== Learning objectives ==

In this tutorial, you will learn how to build a simple network of integrate-and-fire neurons using PyNN, how to run simulation experiments with this network using different simulators, and how to visualize the data generated by these experiments.

== Audience ==

This tutorial is intended for people with at least a basic knowledge of neuroscience (high-school level or above) and basic familiarity with the Python programming language. It should also be helpful for people who already have advanced knowledge of neuroscience and neural simulation, who simply wish to learn how to use PyNN and how it differs from other simulation tools they know.

== Prerequisites ==

To follow this tutorial, you need a basic knowledge of neuroscience (high-school level or above), basic familiarity with the Python programming language, and either a computer with PyNN, NEST, NEURON, and Brian 2 installed or an EBRAINS account and basic familiarity with Jupyter notebooks. If you don't have these tools installed, see one of our previous tutorials, which guides you through the installation.

== Format ==

This tutorial will be a video combining slides, animations, and screencast elements. The intended duration is 10 minutes.

== Script ==

(% class="box successmessage" %)
(((
**Slide** showing tutorial title, PyNN logo, link to PyNN service page.
)))

Hello, my name is X.

This video is one of a series of tutorials for PyNN, which is Python software for modelling and simulating spiking neural networks.

For a list of the other tutorials in this series, you can visit ebrains.eu/service/pynn, that's p-y-n-n.

(% class="box successmessage" %)
(((
**Slide** listing learning objectives
)))

In this tutorial, you will learn the basics of PyNN: how to build a simple network of integrate-and-fire neurons using PyNN, how to run simulation experiments with this network using different simulators, and how to visualize the data generated by these experiments.

(% class="box successmessage" %)
(((
**Slide** listing prerequisites
)))

To follow this tutorial, you need a basic knowledge of neuroscience (high-school level or above), basic familiarity with the Python programming language, and you should have already followed our earlier tutorial video, which guides you through the installation process.

This video covers PyNN 0.10. If you've installed a more recent version of PyNN, you might want to look for an updated version of this video.

(% class="box successmessage" %)
(((
**Slide** showing animation of leaky integrate-and-fire model
)))

PyNN is a tool for building models of nervous systems, and parts of nervous systems, at the level of individual neurons and synapses.

We'll start off creating a group of 100 neurons, using a really simple model of a neuron, the leaky integrate-and-fire model.

When we inject positive current into this model, either from an electrode or from an excitatory synapse, it increases the voltage across the cell membrane, until the voltage reaches a certain threshold.

At that point, the neuron produces an action potential, also called a spike, and the membrane voltage is reset.
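
(For reference, not part of the narration: the behaviour just described can be sketched in a few lines of plain Python. This is an illustrative, hand-rolled Euler integration of a leaky integrate-and-fire neuron, not PyNN code; the parameter values are arbitrary and the refractory period is omitted for brevity.)

{{code language="python"}}
# Illustrative leaky integrate-and-fire dynamics (not PyNN; values are arbitrary)
v_rest = -65.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -65.0   # reset potential (mV)
tau_m = 10.0      # membrane time constant (ms)
r_m = 10.0        # membrane resistance (megohms)
i_inj = 1.5       # injected current (nA)
dt = 0.1          # time step (ms)

v = v_rest
spike_times = []
for step in range(1000):                   # 100 ms of simulated time
    dv = (-(v - v_rest) + r_m * i_inj) * dt / tau_m
    v += dv                                # "leaky" integration towards v_rest + r_m * i_inj
    if v >= v_thresh:                      # threshold reached:
        spike_times.append(step * dt)      #   the neuron fires a spike...
        v = v_reset                        #   ...and the membrane voltage is reset
print(spike_times)
{{/code}}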

(% class="box infomessage" %)
(((
**Screencast** - blank document in editor
)))

In this video, you'll see my editor on the left and my terminal and my file browser on the right. I'll be writing code in the editor and then running my scripts in the terminal. You're welcome to follow along~-~--you can pause the video at any time if I'm going too fast~-~--or you can just watch.

Let's start by writing a docstring "Simple network model using PyNN".

For now, we're going to use the NEST simulator to simulate this model; so, we import the PyNN-for-NEST module.

As with any numerical model, we need to break time down into small steps; so let's set that up with steps of 0.1 milliseconds.

(% class="box infomessage" %)
(((
**Screencast** - current state of editor
\\(% style="color:#e74c3c" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim
sim.setup(timestep=0.1)
)))

PyNN comes with a selection of integrate-and-fire models. We're going to use the IF_curr_exp model, where "IF" is for integrate-and-fire, "curr" means that synaptic responses are changes in current, and "exp" means that the shape of the current is a decaying exponential function.

This is where we set the parameters of the model: the resting membrane potential is -65 millivolts, the spike threshold is -55 millivolts, the reset voltage after a spike is again -65 millivolts, the refractory period after a spike is one millisecond, the membrane time constant is 10 milliseconds, and the membrane capacitance is 1 nanofarad. We're also going to inject a constant bias current of 1.1 nanoamps into these neurons, so that we get some action potentials. (With a 10-millisecond time constant and 1-nanofarad capacitance the membrane resistance is 10 megohms, so the bias current has to be a little over 1 nanoamp to drive the membrane the 10 millivolts from rest to threshold.)
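
(A side note, not in the narration: if you want to see which standard cell models your PyNN installation provides, and what default parameter values a model comes with, something like the snippet below should work. It assumes the import pyNN.nest as sim line from the screencast above.)

{{code language="python"}}
import pyNN.nest as sim

# List the standard cell models available (IF_curr_exp, IF_cond_alpha, EIF_cond_exp_isfa_ista, ...)
print(sim.list_standard_models())

# Inspect the default parameter values of the model used in this tutorial
print(sim.IF_curr_exp.default_parameters)
{{/code}}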

(% class="box infomessage" %)
(((
**Screencast** - current state of editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim
sim.setup(timestep=0.1)(%%)
(% style="color:#e74c3c" %)cell_type = sim.IF_curr_exp(v_rest=-65, v_thresh=-55, v_reset=-65, tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)
)))

Let's create 100 of these neurons; then, we're going to record the membrane voltage and run a simulation for 100 milliseconds.

(% class="box infomessage" %)
(((
**Screencast** - current state of editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim
sim.setup(timestep=0.1)(%%)
(% style="color:#000000" %)cell_type = sim.IF_curr_exp(v_rest=-65, v_thresh=-55, v_reset=-65, tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)(%%)
(% style="color:#e74c3c" %)population1 = sim.Population(100, cell_type, label="Population 1")
population1.record("v")
sim.run(100.0)(%%)
\\**Run script in terminal**
)))

PyNN has some built-in tools for making simple plots, so let's import those, and plot the membrane voltage of the zeroth neuron in our population (remember Python starts counting at zero).

(% class="box infomessage" %)
(((
**Screencast** - current state of editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim(%%)
(% style="color:#e74c3c" %)from pyNN.utility.plotting import Figure, Panel(%%)
(% style="color:#000000" %)sim.setup(timestep=0.1)(%%)
(% style="color:#000000" %)cell_type = sim.IF_curr_exp(v_rest=-65, v_thresh=-55, v_reset=-65, tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)(%%)
(% style="color:#000000" %)population1 = sim.Population(100, cell_type, label="Population 1")
population1.record("v")
sim.run(100.0)(%%)
(% style="color:#e74c3c" %)data_v = population1.get_data().segments[0].filter(name='v')[0]
Figure(
Panel(
data_v[:, 0],
xticks=True, xlabel="Time (ms)",
yticks=True, ylabel="Membrane potential (mV)"
),
title="Response of neuron #0",
annotations="Simulated with NEST"
).show()(%%)
\\**Run script in terminal, show figure**
)))

As you'd expect, the bias current causes the membrane voltage to increase until it reaches threshold~-~--it doesn't increase in a straight line because it's a //leaky// integrate-and-fire neuron~-~--then, once it hits the threshold, the voltage is reset and then stays at the same level for a short time~-~--this is the refractory period~-~--before it starts to increase again.
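
(A note for readers of the script rather than viewers: the line data_v = population1.get_data().segments[0].filter(name='v')[0] works because get_data() returns a Neo Block; the recordings for the current run live in its first Segment, and filter(name='v') picks out the membrane-voltage signal as an array-like object with one column per neuron. A rough, illustrative way to explore it, assuming the script above has just been run:)

{{code language="python"}}
data = population1.get_data()      # a Neo Block holding everything that was recorded
segment = data.segments[0]         # the recordings from this run
vm = segment.filter(name="v")[0]   # an AnalogSignal of shape (n_time_steps, n_neurons)
print(vm.shape)                    # roughly (1000, 100) for 100 ms at 0.1 ms steps
print(vm.times[:5])                # the signal carries its own time axis, with units
{{/code}}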

Now, all 100 neurons in our population are identical; so, if we plotted the first neuron, the second neuron, ..., we'd get the same trace.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor


**...**
(% style="color:#000000" %)Figure(
Panel(
data_v[:, (% style="color:#e74c3c" %)0:5(% style="color:#000000" %)],
xticks=True, xlabel="Time (ms)",
yticks=True, ylabel="Membrane potential (mV)"
),
title="Response of (% style="color:#e74c3c" %)first five neurons(% style="color:#000000" %)",
annotations="Simulated with NEST"
).show()(%%)
\\**Run script in terminal, show figure**
)))

Let's change that. In nature, every neuron is a little bit different; so, let's set the resting membrane potential, the spike threshold, and the reset voltage randomly, from a Gaussian distribution.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.nest as sim(%%)
(% style="color:#000000" %)from pyNN.utility.plotting import Figure, Panel(%%)
(% style="color:#e74c3c" %)from pyNN.random import RandomDistribution(%%)
(% style="color:#000000" %)sim.setup(timestep=0.1)(%%)
(% style="color:#000000" %)cell_type = sim.IF_curr_exp(
(% style="color:#e74c3c" %) v_rest=RandomDistribution('normal', mu=-65.0, sigma=1.0),
v_thresh=RandomDistribution('normal', mu=-55.0, sigma=1.0),
v_reset=RandomDistribution('normal', mu=-65.0, sigma=1.0), (%%)
(% style="color:#000000" %) tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)(%%)


**...**


(% style="color:#000000" %)Figure(
Panel(
data_v[:, 0:5],
xticks=True, xlabel="Time (ms)",
yticks=True, ylabel="Membrane potential (mV)"
),
title="Response of first five neurons (% style="color:#e74c3c" %)with heterogeneous parameters(% style="color:#000000" %)",
annotations="Simulated with NEST"
).show()(%%)
\\**Run script in terminal, show figure**
)))

Now, if we run our simulation again, we can see the effect of this heterogeneity in the neuron population.
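
(Not in the video: if you want to confirm that each neuron really did receive its own parameter values, Population.get returns the per-neuron values. This is just an illustrative check you could run at the end of the script above.)

{{code language="python"}}
# Each neuron now has its own randomly drawn threshold and resting potential
print(population1.get("v_thresh"))     # an array of 100 values, one per neuron
print(population1[0:5].get("v_rest"))  # the same check, for the first five neurons only
{{/code}}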

(% class="box successmessage" %)
(((
**Slide** showing addition of second population and of connections between them
)))

(% class="wikigeneratedid" %)
So far, we have a population of neurons, but there are no connections between them, so we don't yet have a network. Let's add a second population of the same size as the first, but we'll set the offset current to zero, so they don't fire action potentials spontaneously.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor
\\**...**
(% style="color:#000000" %)population1 = sim.Population(100, cell_type, label="Population 1")(%%)
(% style="color:#e74c3c" %)population2 = sim.Population(100, cell_type, label="Population 2")
population2.set(i_offset=0)(%%)
(% style="color:#000000" %)population1.record("v")(%%)
(% style="color:#e74c3c" %)population2.record("v")(%%)
(% style="color:#000000" %)sim.run(100.0)(%%)
**...**
)))

Now, we want to create synaptic connections between the neurons in Population 1 and those in Population 2. There are lots of different ways these could be connected.

(% class="box successmessage" %)
(((
**Slide** showing all-to-all connections
)))

We could connect all neurons in Population 1 to all those in Population 2.

(% class="box successmessage" %)
(((
**Slide** showing random connections
)))

We could connect the populations randomly, in several different ways.

(% class="box successmessage" %)
(((
**Slide** showing distance-dependent connections
)))

(% class="wikigeneratedid" %)
We could connect the populations randomly, but with a probability of connection that depends on the distance between the neurons.

(% class="box successmessage" %)
(((
**Slide** showing explicit lists of connections
)))

(% class="wikigeneratedid" %)
Or we could connect the neurons in a very specific manner, based on an explicit list of connections.

(% class="wikigeneratedid" %)
Just as PyNN provides a variety of neuron models, it also comes with a range of connection algorithms built in. You can also add your own connection methods.
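
(For reference, not part of the narration: the connection strategies shown on these slides correspond to built-in PyNN connector classes. The lines below are only an illustrative sketch of how each one could be written; the probability value, distance expression, and connection list are made up for the example.)

{{code language="python"}}
# All-to-all: every neuron in population1 is connected to every neuron in population2
all_to_all = sim.AllToAllConnector()

# Random: each possible connection is created with a fixed probability
random_half = sim.FixedProbabilityConnector(p_connect=0.5)

# Distance-dependent: connection probability falls off with the distance d between neurons
distance_dependent = sim.DistanceDependentProbabilityConnector("exp(-d)")

# Explicit list: (presynaptic index, postsynaptic index) pairs given one by one
from_list = sim.FromListConnector([(0, 0), (0, 1), (1, 2)])
{{/code}}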

(% class="box successmessage" %)
(((
**Slide** showing addition of second population, and of connections between them, labelled as a Projection.
)))

(% class="wikigeneratedid" %)
In PyNN, we call a group of connections between two populations a //Projection//. To create a Projection, we need to specify the presynaptic population, the postsynaptic population, the connection algorithm, and the synapse model. Here, we're using the simplest synapse model available in PyNN, for which the synaptic weight is constant over time; there is no plasticity.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor


**...**
(% style="color:#000000" %)population2.record("v")(%%)
(% style="color:#e74c3c" %)connection_algorithm = sim.FixedProbabilityConnector(p_connect=0.5)
synapse_type = sim.StaticSynapse(weight=0.5, delay=0.5)
connections = sim.Projection(population1, population2, connection_algorithm, synapse_type)(%%)
(% style="color:#000000" %)sim.run(100.0)(%%)
**...**
)))
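
(A quick aside, not in the narration: once the Projection has been created, you can ask it what it actually contains. This is an illustrative check, assuming it is run after the Projection is created in the script above.)

{{code language="python"}}
# With 100 x 100 possible connections at probability 0.5, we expect roughly 5000 connections
print(connections.size())

# The weight and delay of each connection, as (pre_index, post_index, weight, delay) tuples
print(connections.get(["weight", "delay"], format="list")[:10])
{{/code}}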

(% class="wikigeneratedid" %)
Finally, let's update our figure by adding a second panel to show the responses of Population 2.

(% class="box infomessage" %)
(((
**Screencast** - changes in editor
\\**...**
(% style="color:#000000" %)sim.run(100.0)(%%)
(% style="color:#e74c3c" %)data1_v(% style="color:#000000" %) = population1.get_data().segments[0].filter(name='v')[0](%%)
(% style="color:#e74c3c" %)data2_v = population2.get_data().segments[0].filter(name='v')[0](%%)
(% style="color:#000000" %)Figure(
Panel(
(% style="color:#e74c3c" %)data1_v(% style="color:#000000" %)[:, 0:5],
xticks=True, (% style="color:#e74c3c" %)--xlabel="Time (ms)",--(%%)
(% style="color:#000000" %) yticks=True, ylabel="Membrane potential (mV)"
),
(% style="color:#e74c3c" %)Panel(
data2_v[:, 0:5],
xticks=True, xlabel="Time (ms)",
yticks=True
),(%%)
(% style="color:#000000" %) title="Response of (% style="color:#e74c3c" %)simple network(% style="color:#000000" %)",
annotations="Simulated with NEST"
).show()

**Run script in terminal, show figure**
)))

(% class="wikigeneratedid" %)
And there we have it: our simple neuronal network of integrate-and-fire neurons, written in PyNN, simulated with NEST. If you prefer to use the NEURON simulator, PyNN makes this very simple: we import the PyNN-for-NEURON module instead.

(% class="box infomessage" %)
(((
**Screencast** - final state of editor
\\(% style="color:#000000" %)"""Simple network model using PyNN"""
\\import pyNN.(% style="color:#e74c3c" %)neuron(% style="color:#000000" %) as sim(%%)
(% style="color:#000000" %)from pyNN.utility.plotting import Figure, Panel(%%)
(% style="color:#000000" %)from pyNN.random import RandomDistribution(%%)
(% style="color:#000000" %)sim.setup(timestep=0.1)(%%)
(% style="color:#000000" %)cell_type = sim.IF_curr_exp(
(% style="color:#000000" %) v_rest=RandomDistribution('normal', mu=-65.0, sigma=1.0),
v_thresh=RandomDistribution('normal', mu=-55.0, sigma=1.0),
v_reset=RandomDistribution('normal', mu=-65.0, sigma=1.0), (%%)
(% style="color:#000000" %) tau_refrac=1, tau_m=10, cm=1, i_offset=1.1)(%%)
(% style="color:#000000" %)population1 = sim.Population(100, cell_type, label="Population 1")(%%)
(% style="color:#000000" %)population2 = sim.Population(100, cell_type, label="Population 2")
population2.set(i_offset=0)
population1.record("v")
population2.record("v")(%%)
(% style="color:#000000" %)connection_algorithm = sim.FixedProbabilityConnector(p_connect=0.5)
synapse_type = sim.StaticSynapse(weight=0.5, delay=0.5)
connections = sim.Projection(population1, population2, connection_algorithm, synapse_type)(%%)
(% style="color:#000000" %)sim.run(100.0)(%%)
(% style="color:#000000" %)data1_v = population1.get_data().segments[0].filter(name='v')[0]
data2_v = population2.get_data().segments[0].filter(name='v')[0]
Figure(
Panel(
data1_v[:, 0:5],
xticks=True,
yticks=True, ylabel="Membrane potential (mV)"
),
Panel(
data2_v[:, 0:5],
xticks=True, xlabel="Time (ms)",
yticks=True
),(%%)
(% style="color:#000000" %) title="Response of simple network",
annotations="Simulated with (% style="color:#e74c3c" %)NEURON(% style="color:#000000" %)"
).show()

**Run script in terminal, show figure**
)))

(% class="wikigeneratedid" %)
As you would hope, NEST and NEURON give essentially identical results.
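
(One extra tip that goes slightly beyond the video: rather than editing the import line every time you want to change simulator, PyNN's get_simulator utility lets a single script choose its backend from the command line. A minimal sketch of how the top of this script could look; the file name in the comment is just for illustration.)

{{code language="python"}}
"""Simple network model using PyNN, with the simulator chosen on the command line."""
from pyNN.utility import get_simulator

# Run as, for example:  python simple_network.py nest   (or neuron, or brian2)
sim, options = get_simulator()
sim.setup(timestep=0.1)
# ... the rest of the script is unchanged ...
{{/code}}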

(% class="box successmessage" %)
(((
**Slide** recap of learning objectives
)))

That is the end of this tutorial, in which I've demonstrated how to build a simple network using PyNN and how to simulate it with two different simulators, NEST and NEURON.

Of course, PyNN allows you to create much more complex networks than this, with more realistic neuron models, synaptic plasticity, spatial structure, and so on. You can also use other simulators, such as Brian 2 or SpiNNaker, and you can run simulations in parallel on clusters or supercomputers.

We will be releasing a series of tutorials, throughout the rest of 2021 and 2022, to introduce these more advanced features of PyNN, so keep an eye on the EBRAINS website.

(% class="box successmessage" %)
(((
**Slide** acknowledgements, contact information
)))

(% class="wikigeneratedid" %)
PyNN has been developed by many different people, with financial support from several organisations. I'd like to mention in particular the CNRS and the European Commission, through the FACETS, BrainScaleS, and Human Brain Project grants.

(% class="wikigeneratedid" %)
For more information, visit neuralensemble.org/PyNN. If you have questions, you can contact us through the PyNN GitHub project, the NeuralEnsemble forum, EBRAINS support, or the EBRAINS Community.