Changes for page Co-Simulation The Virtual Brain Multiscale
Last modified by ldomide on 2024/04/08 12:55
Summary
Page properties (2 modified, 0 added, 0 removed)
Details
- Page properties
- Author
Changed from XWiki.dionperd to XWiki.ldomide

- Content
(Unchanged parts of the page are not shown below; elided passages are marked with "(...)".)

(...)

* TVB Dedicated Wiki [[https:~~/~~/wiki.ebrains.eu/bin/view/Collabs/the-virtual-brain/>>url:https://wiki.ebrains.eu/bin/view/Collabs/the-virtual-brain/]]
* TVB in HBP User Story [[https:~~/~~/wiki.ebrains.eu/bin/view/Collabs/user-story-tvb/>>url:https://wiki.ebrains.eu/bin/view/Collabs/user-story-tvb/]]

== Running TVB-MULTISCALE at EBRAINS JupyterLab ==

TVB-multiscale is made available at [[EBRAINS JupyterLab>>https://lab.ebrains.eu/]].

All the user has to do is log in with their EBRAINS credentials and start a Python console or a Jupyter notebook using the kernel "EBRAINS-23.09" (or a more recent version), where TVB-multiscale can be imported (e.g., via "import tvb_multiscale"). All necessary TVB-multiscale dependencies (NEST, ANNarchy, NetPyNE (NEURON), Elephant, PySpike) are also installed and available.

(The previous revision instead suggested uploading the [[documented notebooks>>https://github.com/the-virtual-brain/tvb-multiscale/tree/master/docs/notebooks]] and/or the [[examples' scripts and notebooks>>https://github.com/the-virtual-brain/tvb-multiscale/tree/master/examples]] from the TVB-multiscale Github repository and running them there.)
This collab contains various examples of using TVB-multiscale with all three supported spiking simulators. We suggest copying the contents of this collab to your Library or to any collab owned by you, and running them there (note that the user's drive offers persistent storage, i.e. users will find their files again after logging out and back in), as follows:

~1. Select `Drive` on the left of the current page (or use [[this link>>https://wiki.ebrains.eu/bin/view/Collabs/the-virtual-brain-multiscale/Drive||rel="noopener noreferrer" target="_blank"]]).

2. Check the `tvb-multiscale-collab` folder checkbox, and copy it to your `My Library` (a "copy" icon will appear above the files/folders list).

3. Select `Lab` (on the left), and navigate to the destination where you just copied the folder.

4. Enter the `tvb-multiscale-collab` folder and open any of the example notebooks. Make sure you select the appropriate ipykernel (EBRAINS-23.09 or a more recent one).

(Removed in this revision: the former section "Use our JupyterHub setup online (DEPRECATED)". It warned that the tvb-multiscale app would stop being available after the end of 2023, and described a Jupyter Hub service with the tvb-multiscale backend already prepared, needing only an HBP account for access at [[https:~~/~~/tvb-multiscale.apps.hbp.eu/>>https://tvb-multiscale.apps.hbp.eu/]]. That installation worked with HBP Collab user credentials (log in only once at HBP and get access there too) and used a custom Docker Hub tvb-multiscale image as a backend, so a ready-to-use environment was available immediately, without any local installation or download, making it the ideal environment for demos, presentations or even workshops with tvb-multiscale. Users could access two folders: //TVB-*-Examples//, public and shared by everyone accessing the instance, periodically reset (by redeploying the pod image) to the original example notebooks, with contributions encouraged through Pull Requests at [[https:~~/~~/github.com/the-virtual-brain/tvb-multiscale>>url:https://github.com/the-virtual-brain/tvb-multiscale]]; and //Contributed-Notebooks//, not shared, where users could experiment with their own private examples, persisted across restarts in the user's HBP Collab personal space, so that, e.g., during a workshop every participant could keep their own exercise solution there. Two accompanying screenshots were also removed.)

== Running TVB-MULTISCALE locally ==

See more on Github: [[https:~~/~~/github.com/the-virtual-brain/tvb-multiscale>>url:https://github.com/the-virtual-brain/tvb-multiscale]]

(...)

This is the path recommended for people working closely with tvb-multiscale: they can download it into their local working environment and code with it freely and fast.

== Running TVB-MULTISCALE jobs on HPC infrastructure from HBP collab ==

tvb-multiscale can run with an HPC backend. This is efficient when the simulation jobs are very large; from our experience, with small jobs the stage-in/out time is considerable, and the user may be better off with a local run. Such a deployment also requires that **the user has an active HPC personal account and an active allocation project**. (The previous revision referred specifically to the CSCS and HBP Collab deployment, requiring an active CSCS personal account.)
More details on how to use this deployment can be found in this movie: [[https:~~/~~/drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E>>url:https://drive.google.com/open?id=1osF263FK_NjhZcBJfpSy-F7qkbYs3Q-E]]

* Create a collab space of your own
* Clone and run in your HBP Collab Hub ([[https:~~/~~/lab.ebrains.eu/>>url:https://lab.ebrains.eu/]]) the notebooks from here: [[https:~~/~~/drive.ebrains.eu/d/245e6c13082f45bcacfa/>>url:https://drive.ebrains.eu/d/245e6c13082f45bcacfa/]]
** test_tvb-nest_installation.ipynb runs the cosimulate_tvb_nest.sh script on the HPC supercomputer where you have an active account. In this example we are basically running the //installation_test.py// file, which is in the docker folder.
** run_custom_cosimulation.ipynb uses the //cosimulate_with_staging.sh// script in order to pull the tvb-multiscale docker image, together with a custom simulation script (from the Github page) that is uploaded during the stage-in phase.
** run_custom_cosimulation_from_notebook.ipynb runs the same simulation as the example above, but instead of using an external file with the simulation code, we build a simulation file from a few notebook cells and pass this file to the HPC.

A few technical details about what we do in these notebooks:

(...)

>tr = unicore_client.Transport(oauth.get_token())
>r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
># we used "DAINT-CSCS", but you should change it to another supercomputer where you have an active project
>client = r.site('DAINT-CSCS')

2. Prepare job submission

In this step we have to prepare a JSON object which will be used in the job submission process.

(...)

>my_job['Resources'] = {
>    "CPUs": "1"}

3. Actual job submission

In order to submit a job, we use the JSON built in the previous step; if we also have some local files, we pass their paths as a list of strings (the inputs argument), so that the UNICORE library uploads them into the job's working directory during the stage-in phase, before launching the job.

>job = site_client.new_job(job_description=my_job, inputs=['/path1', '/path2'])
>job.properties

4. Wait until the job is completed and check the results

Wait until the job is completed using the following command:

(...)
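The waiting command itself is not among the changed lines shown above. As an illustration only, a hedged end-to-end sketch of the four steps with pyunicore could look as follows; the job-description fields other than 'Resources', the staged-in script name, and the consistent use of the variable `site_client` are assumptions of this sketch, not the exact contents of the notebooks, and `oauth.get_token()` is assumed to return a valid EBRAINS token as in the snippets above:

>import pyunicore.client as unicore_client
>
># 1. Connect to the registry and select a site where you have an active project
>tr = unicore_client.Transport(oauth.get_token())
>registry = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
>site_client = registry.site('DAINT-CSCS')   # replace with your own site
>
># 2. Prepare the job description (illustrative fields; 'Resources' as in the notebooks)
>my_job = {
>    'Executable': './cosimulate_tvb_nest.sh',   # assumed: the script staged in below
>    'Resources': {'CPUs': '1'},
>}
>
># 3. Submit the job, staging in any local files it needs in its working directory
>job = site_client.new_job(job_description=my_job,
>                          inputs=['/path/to/cosimulate_tvb_nest.sh'])
>
># 4. Wait for completion and check the results in the job's working directory
>job.poll()                                   # blocks until the job reaches a final state
>print(job.properties['status'])
>print(job.working_dir.listdir())             # e.g. stdout, stderr and output files

Calling job.poll() before inspecting the working directory mirrors step 4 above; the exact command shown on the wiki page may differ.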