


From version 40.1
edited by dionperd
on 2024/04/08 12:46
Change comment: There is no comment for this version
To version 41.1
edited by ldomide
on 2024/04/08 12:55
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.dionperd
1 +XWiki.ldomide
Content
... ... @@ -47,18 +47,17 @@
47 47  
48 48  This is the path recommended for people working closely with tvb-multiscale: they can download it into their local work environment and code with it freely and fast.
49 49  
50 -(% class="wikigeneratedid" %)
51 51  == ==
52 52  
53 -== Running TVB-MULTISCALE jobs on CSCS infrastructure from HBP collab ==
52 +== Running TVB-MULTISCALE jobs on HPC infrastructure from HBP collab ==
54 54  
55 -The CSCS and HBP Collab deployment of tvb-multiscale is a good example to show how tvb-multiscale can run with an HPC backend. This will be efficient when the simulation jobs are very large. From our experience, with small jobs, the stage-in/out time is considerable, and then the user might be better with just a local run. Also, this deployment requires that **the user have an active CSCS personal account**. More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]
54 +tvb-multiscale can run with an HPC backend. This is efficient when the simulation jobs are very large; from our experience, with small jobs the stage-in/out time is considerable and the user may be better off with a plain local run. Also, such a deployment requires that **the user have an active HPC personal account and an active allocation project**. More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]
56 56  
57 57  * Create a collab space of your own
58 58  * Clone and run in your HBP Collab Hub ([[https:~~/~~/lab.ebrains.eu/>>url:https://lab.ebrains.eu/]]) the notebooks from here: [[https:~~/~~/drive.ebrains.eu/d/245e6c13082f45bcacfa/>>url:https://drive.ebrains.eu/d/245e6c13082f45bcacfa/]]
59 -** test_tvb-nest_installation.ipynb  Run the cosimulate_tvb_nest.sh script on the CSCS Daint supercomputer. In this example, basically we are running the //installation_test.py// file which is in the docker folder.
58 +** test_tvb-nest_installation.ipynb  Runs the cosimulate_tvb_nest.sh script on the HPC supercomputer where you have an active account. In this example we essentially run the //installation_test.py// file, which is in the docker folder.
60 60  ** run_custom_cosimulation.ipynb  For this example we use the //cosimulate_with_staging.sh// script to pull the tvb-multiscale docker image, together with a custom simulation script (from the Github page) which is uploaded during the stage-in phase
61 -** run_custom_cosimulation_from_notebook.ipynb  This example is running the same simulation as the example above but instead of using an external file with the simulation code we will build a simulation file from a few notebook cells and we will pass this file to the CSCS server.
60 +** run_custom_cosimulation_from_notebook.ipynb  This example runs the same simulation as the one above, but instead of using an external file with the simulation code, we build the simulation file from a few notebook cells and pass it to the HPC.
62 62  
63 63  A few technical details about what we do in these notebooks:
64 64  
... ... @@ -72,10 +72,10 @@
72 72  
73 73  >tr = unicore_client.Transport(oauth.get_token())
74 74  >r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
75 -># use "DAINT-CSCS" change if another supercomputer is prepared for usage
74 +># we used "DAINT-CSCS", but you should change it to another supercomputer where you have an active project
76 76  >client = r.site('DAINT-CSCS')
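
For orientation, here is a minimal, self-contained sketch of this connection step. It assumes the notebook runs inside the EBRAINS Collaboratory, that the token helper is available as clb_nb_utils.oauth (an assumption, matching the usual Collab setup), and that the unicore_client helper module used by the original notebooks is importable:

># minimal sketch of the connection step (assumptions: clb_nb_utils provides the token,
># unicore_client is the helper module shipped with the original notebooks)
>from clb_nb_utils import oauth
>import unicore_client
>
>tr = unicore_client.Transport(oauth.get_token())                    # wrap the EBRAINS token for REST calls
>r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)   # discover the registered UNICORE sites
>client = r.site('DAINT-CSCS')                                       # replace with the site where you have an allocation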
77 77  
78 -1. Prepare job submission
77 +2. Prepare job submission
79 79  
80 80  In this step we have to prepare a JSON object which will be used in the job submission process.
81 81  
... ... @@ -92,7 +92,7 @@
92 92  >my_job['Resources'] = { 
93 93  > "CPUs": "1"}
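
As a concrete illustration (a sketch only, not code taken from the notebooks), a complete job description for the installation test could look as follows; the Executable and Arguments values are placeholders and must match the files you actually stage in:

># hedged sketch of a full UNICORE job description (JSON-style dictionary)
>my_job = {
>    'Executable': 'cosimulate_tvb_nest.sh',   # placeholder: launcher script from the docker folder
>    'Arguments': ['installation_test.py'],    # placeholder: simulation script passed to the launcher
>    'Resources': {
>        "CPUs": "1"}}                         # same minimal resource request as above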
94 94  
95 -1. Actual job submission
94 +3. Actual job submission
96 96  
97 97  To submit a job we use the JSON built in the previous step. If we have local files, we also pass their paths as a list of strings (the inputs argument), so that the UNICORE library uploads them into the job's working directory during the stage-in phase, before launching the job.
98 98  
... ... @@ -99,7 +99,7 @@
99 99  >job = client.new_job(job_description=my_job, inputs=['/path1', '/path2'])
100 100  >job.properties
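
To verify that the stage-in phase worked, you can inspect the job right after submission; the sketch below assumes the client exposes the usual working_dir / listdir helpers of the UNICORE Python client:

># hedged sketch: check the job state and the staged-in files
>print(job.properties['status'])     # e.g. QUEUED, RUNNING, SUCCESSFUL
>print(job.working_dir.listdir())    # uploaded inputs should appear in the job's working directory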
101 101  
102 -1. Wait until job is completed and check the results
101 +4. Wait until the job is completed and check the results
103 103  
104 104  Wait until the job is completed using the following command:
105 105