

Version 32.1 by dionperd on 2023/09/26 19:29

(% class="jumbotron" %)
(((
(% class="container" %)
(((
= (% style="color:inherit" %)TVB Co-Simulation {{html}}<iframe width="302" height="170" src="https://www.youtube.com/embed/6hEuvxD7IDk?list=PLVtblERyzDeLcVv4BbW3BvmO8D-qVZxKf" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>{{/html}}  (%%) =


(% style="color:inherit" %)Multiscale: TVB, NEST, (%%)ANNarchy, NetPyNE, Elephant, PySpike

(% style="color:inherit" %)Authors: (%%)D. Perdikis, A. Blickensdörfer, V. Bragin, L. Domide, J. Mersmann, M. Schirner, P. Ritter
)))
)))

(% class="row" %)
(((
(% class="col-xs-12 col-sm-8" %)
(((
For more details on TVB see:

* TVB Dedicated Wiki [[https:~~/~~/wiki.ebrains.eu/bin/view/Collabs/the-virtual-brain/>>url:https://wiki.ebrains.eu/bin/view/Collabs/the-virtual-brain/]]
* TVB in HBP User Story [[https:~~/~~/wiki.ebrains.eu/bin/view/Collabs/user-story-tvb/>>url:https://wiki.ebrains.eu/bin/view/Collabs/user-story-tvb/]]

== Running TVB-MULTISCALE at EBRAINS JupyterLab ==

TVB-multiscale is made available at [[EBRAINS JupyterLab>>https://lab.ebrains.eu/]].

Users only need to log in with their EBRAINS credentials and start a Python console or a Jupyter notebook; TVB-multiscale is then available for import (e.g., via "import tvb_multiscale").

All necessary TVB-multiscale dependencies (NEST, ANNarchy, NetPyNE (NEURON), Elephant, PySpike) are also installed and available.

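As a quick sanity check of the environment, one can probe for these packages before running anything heavy. A minimal sketch; the top-level module names used for the dependencies here are assumptions and may differ in your installation:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if 'name' can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# Module names below are assumptions; adjust them to your installation.
for mod in ("tvb_multiscale", "nest", "ANNarchy", "netpyne", "elephant", "pyspike"):
    print(mod, "OK" if module_available(mod) else "missing")
```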
We suggest that users upload [[documented notebooks>>https://github.com/the-virtual-brain/tvb-multiscale/tree/master/docs/notebooks]] and/or [[example scripts and notebooks>>https://github.com/the-virtual-brain/tvb-multiscale/tree/master/examples]] from the TVB-multiscale GitHub repository and run them there.

Alternatively, users can perform a sparse checkout of the docs and examples folders of the TVB-multiscale GitHub repo, via the following sequence of commands in a terminal or in Jupyter notebook cells (in notebooks, prefix each command with "!"):

~1. Go into the user's My Libraries folder:

{{{cd /mnt/user/drive/My Libraries}}}

2. Create a folder, e.g., "tvb-multiscale-examples", and enter it:

{{{mkdir tvb-multiscale-examples
cd tvb-multiscale-examples}}}

3. Create an empty git repository:

{{{git init}}}

4. Add the tvb-multiscale remote:

{{{git remote add -f origin https://github.com/the-virtual-brain/tvb-multiscale.git}}}

This fetches all objects but doesn't check them out.

5. Allow sparse checkout in the git config:

{{{git config core.sparseCheckout true}}}

6. Add the docs and examples folders to the ones to be checked out:

{{{echo "docs" >> .git/info/sparse-checkout
echo "examples" >> .git/info/sparse-checkout}}}

7. Finally, pull the master branch from the remote:

{{{git pull origin master}}}

which will download the specified folders.

All these steps can, of course, also be performed from any user fork of the TVB-multiscale repository.

Last but not least, users will also have to modify the attribute config.DEFAULT_CONNECTIVITY_ZIP of the base configuration class Config, in all examples and notebooks, in order to be able to load a default TVB connectivity for the simulations to run. For instance, in the above example, the correct path would be:

{{{config.DEFAULT_CONNECTIVITY_ZIP = "/mnt/user/drive/My Libraries/tvb-multiscale-examples/examples/data/tvb_data/berlinSubjects/QL_20120814/QL_20120814_Connectivity.zip"}}}


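When adapting this path to a different checkout location, building it with pathlib avoids quoting mistakes around the space in "My Libraries". A minimal sketch; the folder layout follows the sparse-checkout example above, and config is assumed to be an instance of the Config class mentioned in the text:

```python
from pathlib import Path

# Folder layout follows the sparse-checkout example above.
base = Path("/mnt/user/drive/My Libraries") / "tvb-multiscale-examples"
conn_zip = (base / "examples" / "data" / "tvb_data"
            / "berlinSubjects" / "QL_20120814" / "QL_20120814_Connectivity.zip")

# 'config' is assumed to exist, per the text above:
# config.DEFAULT_CONNECTIVITY_ZIP = str(conn_zip)
print(conn_zip)
```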
== Use our Jupyter Hub setup online ((% style="color:#c0392b" %)DEPRECATED(%%)) ==

(% style="color:#c0392b" %)**The TVB-multiscale app is deprecated and will stop being available after the end of 2023!**

We have set up a Jupyter Hub service with tvb-multiscale already prepared as the backend. You will only need an HBP account to access it: [[https:~~/~~/tvb-multiscale.apps.hbp.eu/>>https://tvb-multiscale.apps.hbp.eu/]]

This JupyterHub installation works smoothly with HBP Collab user credentials (log in only once at HBP and get access here too). We use a custom Docker Hub tvb-multiscale image as a backend, so a ready-to-use environment is available immediately, without any local installation or download. This should be the ideal environment for demos, presentations, or even workshops with tvb-multiscale.

**[[image:https://lh6.googleusercontent.com/ytx9eYpMcL3cCScX2_Sxm4CeBW0xbKW3xKsfO2zSId10bW0gw1kiN2_SkexyYBCsF-sKsu0MaJC4cZvGVfQPjMoPBLiePbkvXOZd8BgY3Q0kFzSkRCqQ183lgDQv_6PYoqS3s7uJ||height="149" width="614"]]**

Currently, users can access two folders: //TVB-*-Examples// and //Contributed-Notebooks//.

The notebooks under **TVB-*-Examples** are public, shared by everyone accessing the instance. Periodically, we will clean all changes under the TVB-*-Examples folder (by redeploying the pod image) and restore the original example notebooks submitted to our GitHub repo. If users intend to contribute here, they are encouraged to submit changes through Pull Requests ([[https:~~/~~/github.com/the-virtual-brain/tvb-multiscale>>url:https://github.com/the-virtual-brain/tvb-multiscale]]).

**[[image:https://lh6.googleusercontent.com/nnsM0mhXQinmQsJwZwwwe5Sx7f-tZc8t4ELnCh9DwksyVEPUE-jixJTkhoP4l25VKwlDGoXACWtnuxQM9NMOCYbQOzDesgMDlT3sntow___vsEqRVd4OwqMY4BPyBiLJ32BnUbmM||height="267" width="614"]]**

The **Contributed-Notebooks** folder is not shared. Here, users can experiment with their own private examples. This folder is persisted across restarts in the user's HBP Collab personal space, so users will be able to access their work even after a redeploy (e.g., during a workshop, every participant could keep their own exercise solution here).


== Running TVB-MULTISCALE locally ==

See more on GitHub: [[https:~~/~~/github.com/the-virtual-brain/tvb-multiscale>>url:https://github.com/the-virtual-brain/tvb-multiscale]].

The documented notebooks and other examples can be downloaded and tried locally, after you have prepared and launched a local Docker environment: [[https:~~/~~/hub.docker.com/r/thevirtualbrain/tvb-multiscale>>https://hub.docker.com/r/thevirtualbrain/tvb-multiscale]]

This is the recommended path for people working closely with tvb-multiscale: they can download it into their local working environment and code with it freely and fast.

== Running TVB-MULTISCALE jobs on CSCS infrastructure from HBP collab ==

The CSCS and HBP Collab deployment of tvb-multiscale is a good example of how tvb-multiscale can run with an HPC backend. This is efficient when the simulation jobs are very large. From our experience, with small jobs the stage-in/out time is considerable, and the user might be better off with a local run. Also, this deployment requires that **the user has an active CSCS personal account**. More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]

* Create a collab space of your own
* Clone and run in your HBP Collab Hub ([[https:~~/~~/lab.ebrains.eu/>>url:https://lab.ebrains.eu/]]) the notebooks from here: [[https:~~/~~/drive.ebrains.eu/d/245e6c13082f45bcacfa/>>url:https://drive.ebrains.eu/d/245e6c13082f45bcacfa/]]
** test_tvb-nest_installation.ipynb runs the cosimulate_tvb_nest.sh script on the CSCS Daint supercomputer. In this example we are basically running the //installation_test.py// file which is in the docker folder.
** run_custom_cosimulation.ipynb uses the //cosimulate_with_staging.sh// script in order to pull the tvb-multiscale docker image, together with a custom simulation script (from the GitHub page) which is uploaded during the stage-in phase.
** run_custom_cosimulation_from_notebook.ipynb runs the same simulation as the example above, but instead of using an external file with the simulation code, it builds a simulation file from a few notebook cells and passes this file to the CSCS server.

A few technical details about what we do in these notebooks:

1. Prepare the UNICORE client API.

The PyUNICORE client library is available on PyPI. To use it, install it with:

>pip install pyunicore

The next step is to configure the client registry and select which supercomputer to use:

>tr = unicore_client.Transport(oauth.get_token())
>r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
># use "DAINT-CSCS"; change if another supercomputer is prepared for usage
>client = r.site('DAINT-CSCS')

1. Prepare the job submission.

In this step we have to prepare a JSON object which will be used in the job submission process.

># The job description (a plain Python dict, serialized as JSON)
>my_job = {}
>
># What the job will execute (command/executable)
>my_job['Executable'] = 'job.sh'
>
># Import files from remote sites into the job's working directory
>my_job['Imports'] = [{
> "From": "https:~/~/raw.githubusercontent.com/the-virtual-brain/tvb-multiscale/update-collab-examples/docker/cosimulate_tvb_nest.sh",
> "To": "job.sh"
>}]
>
># Specify the resources to request on the remote system
>my_job['Resources'] = {
> "CPUs": "1"}

1. Actual job submission.

To submit a job, we use the JSON built in the previous step; if we have local files, we pass their paths as a list of strings (the inputs argument), so that the UNICORE library uploads them into the job's working directory during the stage-in phase, before launching the job.

>job = client.new_job(job_description=my_job, inputs=['/path1', '/path2'])
>job.properties

1. Wait until the job is completed and check the results.

Check whether the job is still running with:

># returns True or False
>job.is_running()

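Since job.is_running() only reports the current state, waiting for completion amounts to polling it. A minimal sketch; the poll interval and timeout below are arbitrary choices, not part of pyunicore:

```python
import time

def wait_for_job(job, poll_s=5.0, timeout_s=3600.0):
    """Poll a pyunicore job until it stops running, or raise on timeout."""
    waited = 0.0
    while job.is_running():
        if waited >= timeout_s:
            raise TimeoutError("job did not finish within %.0f s" % timeout_s)
        time.sleep(poll_s)
        waited += poll_s
```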
Check the job's working directory for the output files/directories using:

>wd = job.working_dir
>wd.listdir()

From the job's working directory you can preview file contents and download files:

># Read the 'stdout' file
>out = wd.stat("stdout")
>f = out.raw()
>all_lines = f.read().splitlines()
>all_lines[-20:]
>
># Download the 'outputs/res/results.npy' file
>wd.stat("outputs/res/results.npy").download("results.npy")
)))


(% class="col-xs-12 col-sm-4" %)
(((
{{box title="**Contents**"}}
{{toc/}}
{{/box}}


)))
)))