Changes for page Co-Simulation The Virtual Brain Multiscale
Last modified by ldomide on 2024/04/08 12:55
Summary
- Page properties (2 modified, 0 added, 0 removed)
Details
- Page properties
- Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.ldomide
1 +XWiki.dionperd
- Content
... ... @@ -49,15 +49,15 @@
49 49 
50 50 == ==
51 51 
52 -== Running TVB-MULTISCALE jobs on HPC infrastructure from HBP collab ==
52 +== Running TVB-MULTISCALE jobs on CSCS infrastructure from HBP collab ==
53 53 
54 -tvb-multiscale can run with an HPC backend. This will be efficient when the simulation jobs are very large. From our experience, with small jobs, the stage-in/out time is considerable, and then the user might be better off with just a local run. Also, such a deployment requires that **the user have an active HPC personal account and an active allocation project**. More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]
54 +The CSCS and HBP Collab deployment of tvb-multiscale is a good example of how tvb-multiscale can run with an HPC backend. This will be efficient when the simulation jobs are very large. From our experience, with small jobs, the stage-in/out time is considerable, and then the user might be better off with just a local run. Also, this deployment requires that **the user have an active CSCS personal account**. More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]
55 55 
56 56 * Create a collab space of your own
57 57 * Clone and run in your HBP Collab Hub ([[https:~~/~~/lab.ebrains.eu/>>url:https://lab.ebrains.eu/]]) the notebooks from here: [[https:~~/~~/drive.ebrains.eu/d/245e6c13082f45bcacfa/>>url:https://drive.ebrains.eu/d/245e6c13082f45bcacfa/]]
58 -** test_tvb-nest_installation.ipynb Runs the cosimulate_tvb_nest.sh script on the HPC supercomputer where you have an active account. In this example, we are basically running the //installation_test.py// file which is in the docker folder.
58 +** test_tvb-nest_installation.ipynb Runs the cosimulate_tvb_nest.sh script on the CSCS Daint supercomputer. In this example, we are basically running the //installation_test.py// file which is in the docker folder.
59 59 ** run_custom_cosimulation.ipynb For this example we use the //cosimulate_with_staging.sh// script to pull the tvb-multiscale docker image, together with a custom simulation script (from the Github page) which will be uploaded during the stage-in phase
60 -** run_custom_cosimulation_from_notebook.ipynb This example runs the same simulation as the one above, but instead of using an external file with the simulation code, we build a simulation file from a few notebook cells and pass this file to the HPC.
60 +** run_custom_cosimulation_from_notebook.ipynb This example runs the same simulation as the one above, but instead of using an external file with the simulation code, we build a simulation file from a few notebook cells and pass this file to the CSCS server.
61 61 
62 62 A few technical details about what we do in these notebooks:
63 63 
... ... @@ -71,10 +71,10 @@
71 71 
72 72 >tr = unicore_client.Transport(oauth.get_token())
73 73 >r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
74 -># we used "DAINT-CSCS", but you should change it to another supercomputer where you have an active project
74 +># use "DAINT-CSCS"; change it if another supercomputer is prepared for usage
75 75 >client = r.site('DAINT-CSCS')
76 76 
77 - 2. Prepare job submission
77 +1. Prepare job submission
78 78 
79 79 In this step we have to prepare a JSON object which will be used in the job submission process.
80 80 
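For reference, the job description assembled in this step is a plain Python dict that the UNICORE library serializes to JSON. The following is a minimal sketch of how the pieces could fit together; the executable and argument names are illustrative assumptions taken from the notebook descriptions above, not necessarily the exact values used on the server:

>my_job = {}
># shell script to run on the site (placeholder; the notebooks use e.g. cosimulate_tvb_nest.sh)
>my_job['Executable'] = 'cosimulate_tvb_nest.sh'
># arguments passed to the executable (placeholder simulation script)
>my_job['Arguments'] = ['installation_test.py']
># resources requested from the batch system, as in the hunk below
>my_job['Resources'] = {'CPUs': '1'}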
... ... @@ -91,7 +91,7 @@
91 91 >my_job['Resources'] = {
92 92 >    "CPUs": "1"}
93 93 
94 - 3. Actual job submission
94 +1. Actual job submission
95 95 
96 96 In order to submit a job, we have to use the JSON built in the previous step; also, if we have some local files, we have to give their paths as a list of strings (the inputs argument), so that the UNICORE library will upload them into the job's working directory during the stage-in phase, before launching the job.
97 97 
... ... @@ -98,7 +98,7 @@
98 98 >job = site_client.new_job(job_description=my_job, inputs=['/path1', '/path2'])
99 99 >job.properties
100 100 
101 - 4. Wait until job is completed and check the results
101 +1. Wait until the job is completed and check the results
102 102 
103 103 Wait until the job is completed using the following command
104 104 
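The command itself is truncated in this diff. Assuming the pyunicore client library imported above as unicore_client, a typical pattern for this last step is sketched below; poll() and the working-directory calls are part of pyunicore's public API, though the exact code in the notebooks may differ:

># block until the job reaches a terminal state
>job.poll()
># check the final status reported by UNICORE ('SUCCESSFUL', 'FAILED', ...)
>print(job.properties['status'])
># inspect the job's working directory and read its stdout for the results
>wd = job.working_dir
>print(wd.listdir())
>print(wd.stat('stdout').raw().read().decode())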