


From version 39.1
edited by dionperd
on 2024/04/08 12:44
Change comment: There is no comment for this version
To version 41.1
edited by ldomide
on 2024/04/08 12:55
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.dionperd
1 +XWiki.ldomide
Content
... ... @@ -49,15 +49,15 @@
49 49  
50 50  == ==
51 51  
52 -== Running TVB-MULTISCALE jobs on CSCS infrastructure from HBP collab ==
52 +== Running TVB-MULTISCALE jobs on HPC infrastructure from HBP collab ==
53 53  
54 -The CSCS and HBP Collab deployment of tvb-multiscale is a good example to show how tvb-multiscale can run with an HPC backend. This will be efficient when the simulation jobs are very large. From our experience, with small jobs, the stage-in/out time is considerable, and then the user might be better with just a local run. Also, this deployment requires that **the user have an active CSCS personal account**. More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]
54 +tvb-multiscale can run with an HPC backend. This is efficient when the simulation jobs are very large; from our experience, with small jobs the stage-in/out time is considerable, and the user may be better off with a local run. Also, such a deployment requires that **the user has an active personal HPC account and an active allocation project**. More details on how to use this deployment can be found in this video: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]
55 55  
56 56  * Create a collab space of your own
57 57  * Clone and run in your HBP Collab Hub ([[https:~~/~~/lab.ebrains.eu/>>url:https://lab.ebrains.eu/]]) the notebooks from here: [[https:~~/~~/drive.ebrains.eu/d/245e6c13082f45bcacfa/>>url:https://drive.ebrains.eu/d/245e6c13082f45bcacfa/]]
58 -** test_tvb-nest_installation.ipynb  Run the cosimulate_tvb_nest.sh script on the CSCS Daint supercomputer. In this example, basically we are running the //installation_test.py// file which is in the docker folder.
58 +** test_tvb-nest_installation.ipynb  Runs the cosimulate_tvb_nest.sh script on the HPC supercomputer where you have an active account. In this example we are basically running the //installation_test.py// file from the docker folder.
59 59  ** run_custom_cosimulation.ipynb For this example we use the //cosimulate_with_staging.sh// script to pull the tvb-multiscale docker image, together with a custom simulation script (from the GitHub page) which is uploaded during the stage-in phase
60 -** run_custom_cosimulation_from_notebook.ipynb  This example is running the same simulation as the example above but instead of using an external file with the simulation code we will build a simulation file from a few notebook cells and we will pass this file to the CSCS server.
60 +** run_custom_cosimulation_from_notebook.ipynb  This example runs the same simulation as the one above, but instead of using an external file with the simulation code, we build the simulation file from a few notebook cells and pass it to the HPC.
61 61  
62 62  A few technical details about what we do in these notebooks:
63 63  
... ... @@ -71,10 +71,10 @@
71 71  
72 72  >tr = unicore_client.Transport(oauth.get_token())
73 73  >r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
74 -># use "DAINT-CSCS" change if another supercomputer is prepared for usage
74 +># we used "DAINT-CSCS"; change it to a supercomputer where you have an active project
75 75  >client = r.site('DAINT-CSCS')
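For completeness, the setup above could be written in one notebook cell roughly as follows. This is a sketch: it assumes the notebook runs in the EBRAINS Collab Lab, where the clb_nb_utils OAuth helper provides the login token, and it names the resulting site client site_client to match the submission step further below.

>import pyunicore.client as unicore_client
>from clb_nb_utils import oauth  # Collab Lab helper returning the current EBRAINS token (assumed environment)
>
>tr = unicore_client.Transport(oauth.get_token())
>r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
># pick the site where you have an active project; "DAINT-CSCS" is only the example used here
>site_client = r.site('DAINT-CSCS')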
76 76  
77 -1. Prepare job submission
77 +2. Prepare job submission
78 78  
79 79  In this step we have to prepare a JSON object which will be used in the job submission process.
80 80  
... ... @@ -91,7 +91,7 @@
91 91  >my_job['Resources'] = { 
92 92  > "CPUs": "1"}
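Put together, a minimal job description could look roughly like the sketch below. The keys follow the UNICORE job description format; the executable, arguments and project name are placeholders to adapt to your own script and allocation.

>my_job = {}
>my_job['Executable'] = "/bin/bash"                # what to run on the compute node (placeholder)
>my_job['Arguments'] = ["cosimulate_tvb_nest.sh"]  # e.g. the co-simulation script staged in below (placeholder)
>my_job['Project'] = "my_project"                  # your own compute allocation (placeholder)
>my_job['Resources'] = {
>    "CPUs": "1"}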
93 93  
94 -1. Actual job submission
94 +3. Actual job submission
95 95  
96 96  To submit a job we use the JSON built in the previous step; if we also have local files, we pass their paths as a list of strings (the inputs argument), so that the UNICORE library uploads them into the job's working directory during the stage-in phase, before launching the job.
97 97  
... ... @@ -98,7 +98,7 @@
98 98  >job = site_client.new_job(job_description=my_job, inputs=['/path1', '/path2'])
99 99  >job.properties
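After submission, the returned job object can be used to verify that the inputs were staged in and to watch the job status. A sketch using the pyunicore client API:

>print(job.properties['status'])   # e.g. QUEUED or RUNNING
>wd = job.working_dir              # the job's working directory on the HPC side
>print(wd.listdir())               # the uploaded input files should be listed here after stage-in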
100 100  
101 -1. Wait until job is completed and check the results
101 +4. Wait until job is completed and check the results
102 102  
103 103  Wait until the job is completed using the following command
104 104
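With pyunicore this is typically done by polling the job object. A minimal sketch (poll() blocks until the job reaches a final state, after which the working directory holds the results):

>job.poll()                                 # block until the job finishes
>print(job.properties['status'])            # SUCCESSFUL or FAILED
>wd = job.working_dir
>print(wd.listdir())                        # stdout, stderr and any output files produced by the run
># individual files can be downloaded, e.g.:
>wd.stat('stdout').download('stdout.txt')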