

Last modified by ldomide on 2024/04/08 12:55

From version 36.1
edited by dionperd
on 2024/02/09 12:05
Change comment: There is no comment for this version
To version 41.1
edited by ldomide
on 2024/04/08 12:55
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.dionperd
1 +XWiki.ldomide
Content
... ... @@ -26,75 +26,19 @@
26 26  
27 27  TVB-multiscale is made available at [[EBRAINS JupyterLab>>https://lab.ebrains.eu/]].
28 28  
29 -All the user has to do is login with their EBRAINS credentials, and start a Python console or a Jupyter notebook, TVB-multiscale being available for importing (e.g., via "import tvb_multiscale").
29 +All the user has to do is log in with their EBRAINS credentials, and start a Python console or a Jupyter notebook using the kernel "EBRAINS-23.09" (or a more recent version), where TVB-multiscale can be imported (e.g., via "import tvb_multiscale"). All necessary TVB-multiscale dependencies (NEST, ANNarchy, NetPyNE (NEURON), Elephant, Pyspike) are also installed and available.
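To verify the environment, one can quickly check which of these packages the current kernel can see (a minimal sketch; the module names are assumptions based on the dependency list above and may differ in your kernel):

```python
import importlib.util

# Report which co-simulation packages the current kernel can import.
# The module names are assumptions based on the dependency list above.
for name in ["tvb_multiscale", "nest", "ANNarchy", "netpyne", "elephant", "pyspike"]:
    status = "available" if importlib.util.find_spec(name) else "missing"
    print(f"{name}: {status}")
```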
30 30  
31 -All necessary TVB-multiscale dependencies (NEST, ANNarchy, NetPyNE (NEURON), Elephant, Pyspike) are also installed and available.
31 +This collab contains various examples of using TVB-Multiscale with all three supported spiking simulators. We suggest copying the contents of this collab to your Library or to any collab owned by you, and running them there (note that the user's drive offers persistent storage, i.e. users will find their files after logging out and in again), as follows:
32 32  
33 -We suggest the users to upload [[documented notebooks>>https://github.com/the-virtual-brain/tvb-multiscale/tree/master/docs/notebooks]] and/or [[examples' scripts and notebooks >>https://github.com/the-virtual-brain/tvb-multiscale/tree/master/examples]]from TVB-multiscale Github repository, and run them there (please note that the user's drive offers persistent storage, i.e., users will find their files after logging out and in again).
33 +~1. Select `Drive` on the left of the current page (or use [[this link>>https://wiki.ebrains.eu/bin/view/Collabs/the-virtual-brain-multiscale/Drive||rel="noopener noreferrer" target="_blank"]]).
34 34  
35 -Alternatively, users can sparse checkout the docs and examples folders of TVB-multiscale Github repo, via the following sequence of commands in a terminal or in Jupyter notebook's cells (for notebooks you need to use "!" before each command!):
35 +2. Check the `tvb-multiscale-collab` folder checkbox, and copy it to your `My Library` ("copy" icon will appear above the files/folders list).
36 36  
37 -~1. Get into the user's My Libraries folder:
37 +3. Select `Lab` (on the left), and navigate to the destination where you just copied the folder.
38 38  
39 -{{{cd /mnt/user/drive/My Libraries}}}
39 +4. Enter the `tvb-multiscale-collab` folder, and open any of the example notebooks. Make sure to select the appropriate ipykernel (EBRAINS-23.09 or a more recent one).
40 40  
41 -2. Create a folder, e.g., "tvb-multiscale-collab"
42 42  
43 -{{{mkdir tvb-multiscale-collab}}}
44 -
45 -3. Create an empty git repository:
46 -
47 -{{{git init
48 -
49 -3. Add tvb-multiscale remote:
50 -git remote add -f origin }}}
51 -
52 -This fetches all objects but doesn't check them out.
53 -
54 -4. Allow for sparse checkout in git config:
55 -
56 -{{{git config core.sparseCheckout true
57 -}}}
58 -
59 -5. Add the docs and examples folders to the ones to be checked out:
60 -
61 -{{{echo "docs" >> .git/info/sparse-checkout
62 -
63 -echo "examples" >> .git/info/sparse-checkout}}}
64 -
65 -6. Finally, pull the master from the remote:
66 -
67 -{{{git pull origin master}}}
68 -
69 -which will download the specified folders.
70 -
71 -All these steps can of course be made from any user initiated fork of the TVB-multiscale repository.
72 -
73 -Last but not least, users will also have to modify the attribute config.DEFAULT_CONNECTIVITY_ZIP of the base configuration class Config in all cases of examples and notebooks, to be able to load a default TVB connectivity for the simulations to run.  For instance, in the case of the above folder structure after sparse checkout, and for the example of the [[documented TVB-NEST_WilsonCowan.ipynb notebook>>https://github.com/the-virtual-brain/tvb-multiscale/blob/master/docs/notebooks/TVB-NEST_WilsonCowan.ipynb]], the correct path would be:
74 -
75 -config.DEFAULT_CONNECTIVITY_ZIP = "/mnt/user/drive/My Libraries/tvb-multiscale-examples/examples/data/tvb_data/berlinSubjects/QL_20120814/QL_20120814_Connectivity.zip"                  
76 -
77 -
78 -
79 -== Use our Jupyter Hub setup online ((% style="color:#c0392b" %)DEPRECATED(%%)) ==
80 -
81 -(% style="color:#c0392b" %)**TVB-multiscale app is deprecated and will stop being available after the end of 2023!**
82 -
83 -We have setup a Jupyter Hub service with tvb-multiscale as backed already prepared. You will only need an HBP account for accessing this: [[https:~~/~~/tvb-multiscale.apps.hbp.eu/>>https://tvb-multiscale.apps.hbp.eu/]]
84 -
85 -This JupyterHub installation works smoothly with HBP Collab user credentials (login only once at HBP and get access here too). We use a custom Docker Hub tvb-multiscale image as a backend, and thus a ready to use environment is available immediately, without the need of any local installation or download. This should be the ideal env for demos, presentations or even workshops with tvb-multiscale.
86 -
87 -**[[image:https://lh6.googleusercontent.com/ytx9eYpMcL3cCScX2_Sxm4CeBW0xbKW3xKsfO2zSId10bW0gw1kiN2_SkexyYBCsF-sKsu0MaJC4cZvGVfQPjMoPBLiePbkvXOZd8BgY3Q0kFzSkRCqQ183lgDQv_6PYoqS3s7uJ||height="149" width="614"]]**
88 -
89 -Currently, the users can access 2 folders: //TVB-*-Examples// and //Contributed-Notebooks//.
90 -
91 -The notebooks under **TVB-*-Examples** are public, shared by everyone accessing the instance. Periodically, we will clean all changes under TVB-*-Examples folder (by redeploying the pod image), and show the original example notebooks submitted on our Github repo. If users intend to contribute here, they are encouraged to submit changes through Pull Requests ([[https:~~/~~/github.com/the-virtual-brain/tvb-multiscale>>url:https://github.com/the-virtual-brain/tvb-multiscale]])
92 -
93 -**[[image:https://lh6.googleusercontent.com/nnsM0mhXQinmQsJwZwwwe5Sx7f-tZc8t4ELnCh9DwksyVEPUE-jixJTkhoP4l25VKwlDGoXACWtnuxQM9NMOCYbQOzDesgMDlT3sntow___vsEqRVd4OwqMY4BPyBiLJ32BnUbmM||height="267" width="614"]]**
94 -
95 -Folder **Contributed-Notebooks** is not shared. Here, users can experiment with their own private examples. This folder is persisted on restarts in the user HBP Collab personal space. Thus, users will be able to access their work even after a redeploy. (e.g. during a workshop every participant could have in here his own exercise solution)
96 -
97 -
98 98  == Running TVB-MULTISCALE locally ==
99 99  
100 100  See more on Github [[https:~~/~~/github.com/the-virtual-brain/tvb-multiscale>>url:https://github.com/the-virtual-brain/tvb-multiscale]] .
... ... @@ -105,15 +105,15 @@
105 105  
106 106  == ==
107 107  
108 -== Running TVB-MULTISCALE jobs on CSCS infrastructure from HBP collab ==
52 +== Running TVB-MULTISCALE jobs on HPC infrastructure from HBP collab ==
109 109  
110 -The CSCS and HBP Collab deployment of tvb-multiscale is a good example to show how tvb-multiscale can run with an HPC backend. This will be efficient when the simulation jobs are very large. From our experience, with small jobs, the stage-in/out time is considerable, and then the user might be better with just a local run. Also, this deployment requires that **the user have an active CSCS personal account**. More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]
54 +tvb-multiscale can run with an HPC backend, which is efficient when the simulation jobs are very large. In our experience, for small jobs the stage-in/out time is considerable, and the user may be better off with a local run. Also, such a deployment requires that **the user have an active HPC personal account and an active allocation project**. More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]
111 111  
112 112  * Create a collab space of your own
113 113  * Clone and run in your HBP Collab Hub ([[https:~~/~~/lab.ebrains.eu/>>url:https://lab.ebrains.eu/]]) the notebooks from here: [[https:~~/~~/drive.ebrains.eu/d/245e6c13082f45bcacfa/>>url:https://drive.ebrains.eu/d/245e6c13082f45bcacfa/]]
114 -** test_tvb-nest_installation.ipynb  Run the cosimulate_tvb_nest.sh script on the CSCS Daint supercomputer. In this example, basically we are running the //installation_test.py// file which is in the docker folder.
58 +** test_tvb-nest_installation.ipynb  Runs the cosimulate_tvb_nest.sh script on the HPC supercomputer where you have an active account. In this example, we are basically running the //installation_test.py// file from the docker folder.
115 115  ** run_custom_cosimulation.ipynb  In this example we use the //cosimulate_with_staging.sh// script to pull the tvb-multiscale docker image, together with a custom simulation script (from the Github page) that is uploaded during the stage-in phase
116 -** run_custom_cosimulation_from_notebook.ipynb  This example is running the same simulation as the example above but instead of using an external file with the simulation code we will build a simulation file from a few notebook cells and we will pass this file to the CSCS server.
60 +** run_custom_cosimulation_from_notebook.ipynb  This example runs the same simulation as the one above, but instead of using an external file with the simulation code, we build the simulation file from a few notebook cells and pass it to the HPC.
117 117  
118 118  A few technical details about what we do in these notebooks:
119 119  
... ... @@ -127,10 +127,10 @@
127 127  
128 128  >tr = unicore_client.Transport(oauth.get_token())
129 129  >r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
130 -># use "DAINT-CSCS" change if another supercomputer is prepared for usage
74 -># we used "DAINT-CSCS", but you should change it to a supercomputer where you have an active project
131 131  >client = r.site('DAINT-CSCS')
132 132  
133 -1. Prepare job submission
77 +2. Prepare job submission
134 134  
135 135  In this step we have to prepare a JSON object which will be used in the job submission process.
136 136  
... ... @@ -147,7 +147,7 @@
147 147  >my_job['Resources'] = { 
148 148  > "CPUs": "1"}
149 149  
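Assembled as a whole, the job description is just a nested dict serialized to JSON. A minimal sketch (the Executable value here is a placeholder for illustration, not the actual script invocation used in the notebooks):

```python
import json

# Sketch of a UNICORE job description assembled from the fragments above.
# The Executable value is a placeholder -- see the notebooks for the
# actual script invocation.
my_job = {
    "Executable": "./cosimulate_tvb_nest.sh",  # hypothetical example
    "Resources": {"CPUs": "1"},
}

print(json.dumps(my_job, indent=2))
```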
150 -1. Actual job submission
94 +3. Actual job submission
151 151  
152 152  In order to submit a job, we use the JSON built in the previous step. If we have local files, we also pass their paths as a list of strings (the inputs argument), so that the UNICORE library uploads them into the job's working directory during the stage-in phase, before launching the job.
153 153  
... ... @@ -154,7 +154,7 @@
154 154  >job = site_client.new_job(job_description=my_job, inputs=['/path1', '/path2'])
155 155  >job.properties
156 156  
157 -1. Wait until job is completed and check the results
101 +4. Wait until job is completed and check the results
158 158  
159 159  Wait until the job is completed using the following command:
160 160