Wiki source code of NESTML

Version 41.1 by abonard on 2025/06/13 16:09

* ((( ==== **[[Beginner >>||anchor = "HBeginner-1"]]** ==== )))

* ((( ==== **[[Advanced >>||anchor = "HAdvanced-1"]]** ==== )))

=== **Beginner** ===

=== [[Creating neuron models – Spike-frequency adaptation (SFA)>>https://nestml.readthedocs.io/en/latest/tutorials/spike_frequency_adaptation/nestml_spike_frequency_adaptation_tutorial.html||rel=" noopener noreferrer" target="_blank"]] ===

**Level**: beginner(%%) **Type**: interactive tutorial

Spike-frequency adaptation (SFA) is the empirically observed phenomenon in which the firing rate of a neuron decreases in response to a sustained, constant stimulus. Learn how to model SFA using threshold adaptation and an adaptation current.
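
The tutorial's actual models are written in NESTML; purely for orientation, the adaptation-current mechanism can be sketched in plain Python. All parameter names and values below are assumptions for this sketch, not taken from the tutorial:

```python
# Illustrative sketch only: a leaky integrate-and-fire neuron with a
# spike-triggered adaptation current w, integrated with forward Euler.
# Parameter values are arbitrary and chosen for demonstration.

def simulate_adapting_lif(I=2.0, T=500.0, dt=0.1,
                          tau_m=20.0, v_th=1.0, v_reset=0.0,
                          tau_w=100.0, b=0.1):
    """Return the inter-spike intervals for a constant input I.

    The adaptation current w is incremented by b at every spike and
    decays with time constant tau_w, so it accumulates while the
    neuron fires and reduces the effective drive I - w.
    """
    v, w = 0.0, 0.0
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (-v + I - w) / tau_m   # membrane dynamics
        w += dt * (-w) / tau_w           # adaptation current decays
        if v >= v_th:                    # threshold crossing
            spike_times.append(step * dt)
            v = v_reset
            w += b                       # spike-triggered adaptation
    return [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]

isis = simulate_adapting_lif()  # successive intervals grow over time
```

Each spike increments the adaptation current, which slows the next approach to threshold, so the inter-spike intervals grow under constant input: that growth is spike-frequency adaptation.
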
=== [[Creating neuron models – Izhikevich tutorial>>https://nestml.readthedocs.io/en/latest/tutorials/izhikevich/nestml_izhikevich_tutorial.html||rel=" noopener noreferrer" target="_blank"]] ===

**Level**: beginner(%%) **Type**: interactive tutorial

Learn how to get started with NESTML by writing the Izhikevich spiking neuron model in NESTML.
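
The tutorial builds the model in NESTML itself; as a rough plain-Python sketch of the dynamics involved, the equations can be integrated with forward Euler. The parameters a, b, c, d and the reset rule follow Izhikevich's 2003 formulation; the input current and simulation settings here are arbitrary choices for this sketch:

```python
# Illustrative forward-Euler sketch of the Izhikevich (2003) model
# with "regular spiking" parameters; not the NESTML implementation.

def simulate_izhikevich(I=15.0, T=300.0, dt=0.1,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Return spike times (ms) for a constant input current I."""
    v, u = c, b * c                  # membrane potential and recovery variable
    spikes = []
    for step in range(int(T / dt)):
        # dv/dt = 0.04 v^2 + 5 v + 140 - u + I
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        # du/dt = a (b v - u)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike detected: apply the reset rule
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spike_times = simulate_izhikevich()
```

The appeal of the model, and the reason it is used as the introductory NESTML example, is that these two simple equations plus a reset rule reproduce a wide range of firing patterns just by changing a, b, c, and d.
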
=== **Advanced** ===

=== [[Creating synapse models – Dopamine-modulated STDP synapse>>https://nestml.readthedocs.io/en/latest/tutorials/stdp_dopa_synapse/stdp_dopa_synapse.html||rel=" noopener noreferrer" target="_blank"]] ===

**Level**: advanced(%%) **Type**: interactive tutorial

Adding dopamine modulation to the weight update rule of an STDP synapse allows it to be used in reinforcement learning tasks: a network can learn which of the many cues and actions preceding a reward should be credited for that reward. In this tutorial, we create a dopamine-modulated STDP model in NESTML and characterize it before using it in a network-based reinforcement learning task.

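The core idea is a three-factor rule: pre/post pairings do not change the weight directly but write to an eligibility trace, and the weight only moves when a dopamine signal arrives. A toy Python sketch of this gating (all names, time constants, and the reward timing are illustrative assumptions, not the NESTML model):

```python
# Toy sketch of dopamine-gated plasticity: STDP pairings feed an
# eligibility trace c, and the weight w changes as dw/dt = c * d,
# i.e. only while dopamine d is present. Constants are arbitrary.

def step(w, c, d, pairing=0.0, dt=1.0, tau_c=200.0, tau_d=20.0):
    """One Euler step of the three-factor update."""
    c += -dt * c / tau_c + pairing  # eligibility: decays, pairings add to it
    d += -dt * d / tau_d            # dopamine decays toward baseline 0
    w += dt * c * d                 # weight moves only if c and d are nonzero
    return w, c, d

w, c, d = 1.0, 0.0, 0.0
w, c, d = step(w, c, d, pairing=0.5)  # an STDP pairing alone: w is unchanged
d += 1.0                              # a delayed reward: dopamine pulse
w, c, d = step(w, c, d)               # stored eligibility now moves w
```

Because the eligibility trace decays slowly, a pairing can still be credited when the reward arrives later, which is what lets the network solve the credit-assignment problem described above.
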