


From MRI to personalized brain simulation using TVB-EBRAINS integrated workflows

Here we explain step by step how to use The Virtual Brain (TVB) tools for end-to-end personalized brain simulation. We start by finding shared MRI data in the Knowledge Graph, create a brain model from the extracted connectomes using the TVB pipeline, and simulate neural activity with TVB brain network model simulators. We use Jupyter notebooks on the EBRAINS Collaboratory platform as the frontend and supercomputers in the backend for intensive number crunching.

Watch these videos to learn more:

TVB on EBRAINS services.

The Virtual Brain at EBRAINS


TVB on EBRAINS cloud infrastructure.

Brain simulation and neuroimaging workflows require personal medical data that is subject to data protection regulations. To protect personal data, the services rely on end-to-end encryption and access control. EBRAINS provides several core services: 'Drive' is a service for hosting and sharing files; 'Wiki' and 'Office' allow users to create workspaces and documents for collaborative research; 'Lab' hosts sandboxed JupyterLab instances for running live code; 'OpenShift' orchestrates the different services and provides resources for interactive computing; 'HPC' denotes the supercomputing backends for resource-intensive computations. The core services interact with the different deployments of TVB services via a RESTful API, and via UNICORE for communication with supercomputers. TVB services are deployed in the form of a web GUI, container images, Python notebooks, Python libraries and high-performance backend codes. The TVB Image Processing Pipeline produces structural and functional connectomes from MRI data; its outputs can be ingested by the Knowledge Graph and annotated with openMINDS metadata, which allows the connectomes to be reused in other services. The connectors show interactions between the different components (colours group connectors for different deployments). The six TVB services are independent modules that can be combined according to the requirements of the research question.

TVB pipeline: Extract connectomes

As a first step we browse The Knowledge Graph (KG) to find a suitable dataset for constructing a brain model. The dataset must contain diffusion-weighted MRI data, from which we extract a structural connectome that forms the basis of a brain network model. Structural connectivity extracted from diffusion MRI quantifies how strongly brain regions interact in the brain model. Next, the dataset must contain functional MRI (fMRI) data, because a common approach is to tune the parameters of the brain model such that the simulated fMRI functional connectivity fits the empirical fMRI data. For fitting, we usually compute functional connectivity matrices from simulated and empirical data. Finally, we need anatomical T1-weighted MRI to extract cortical surfaces and to parcellate the brain into different regions.

  • Open The Knowledge Graph in your browser
  • Browse through the KG and look for a dataset that contains the above-mentioned MRI modalities. To find a suitable dataset you may use the “Filter” sidebar on the left and e.g. select “Homo sapiens” or “magnetic resonance imaging”. In the following we are going to use the dataset “Individual Brain Charting: ARCHI Social”.
  • The dataset can be downloaded here.


KnowledgeGraph search sidebar and exemplary dataset card with link to OpenNeuro repository.

  • Download the imaging data. The full dataset is quite large (54.06 GB): it contains several subjects, modalities, tasks and runs, most of which we don’t need to demonstrate the workflow. We will therefore only download the minimal set of files needed to form a valid BIDS dataset and to perform the following steps. Using the Dataset File Tree on the right, download the files indicated in the folder tree below. Unfortunately the interface only allows downloading individual files, so you have to click each one of them and create the necessary folder structure (incl. the folders sub-01, anat, dwi, func) yourself. Note that the full dataset contains multiple sessions identified by the keyword “ses-XX”, where “XX” is the session number. Here we use only data from ses-00; we therefore omit that folder and instead place the folders “dwi”, “func” and “anat” directly one level beneath “sub-01”. When you are done, your folder tree should look like this:


Folder tree of the example data set.

  • We now have an MRI dataset in BIDS format. The next step is to compress the folder (e.g. as a .zip or .tar.gz file) so that we can upload it as a single file to the EBRAINS Collaboratory and later to the supercomputer. In the next steps, we are going to use diffusion MRI tractography to reconstruct white matter fiber pathways and to estimate coupling weights between brain regions.
  • Open the TVB Pipeline EBRAINS Collab.
  • The pipeline is implemented as a Jupyter notebook that shows how to upload data from the local filesystem to the EBRAINS Drive; how to copy the data to the supercomputer; how to run the three Docker containers that perform the processing; and how to download the results to the local filesystem.
  • In order to use EBRAINS Collab software, it is necessary to download the notebook, create a new Collab, and upload the notebook there. The process is described on the main Collab page and in the notebook.
  • For brain simulation the most important result of tractography is the structural connectome (SC), which consists of the coupling strengths matrix and the distances matrix. The former quantifies the strength of interaction between each pair of brain regions and the latter contains the average length of the respective fiber bundle. The exported SC can be directly imported into TVB: as one of its last steps, the pipeline stores the SC along with other TVB-readable data in the file "TVB_output.zip". Within that ZIP archive is the file “sub-<participant_label>_Connectome.zip”, which can be used to set up a brain network model in the other TVB workflows.
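The folder setup and compression steps above can be sketched in a shell. The dataset folder name "ds-archi" is an illustrative assumption; use whatever you named your BIDS folder:

```shell
# Recreate the minimal BIDS layout (folder names from the steps above), then
# pack it into a single archive for upload. "ds-archi" is a placeholder name.
mkdir -p ds-archi/sub-01/anat ds-archi/sub-01/dwi ds-archi/sub-01/func
# ... place the downloaded imaging files into the matching subfolders ...
tar -czf ds-archi.tar.gz ds-archi/
```

The resulting ds-archi.tar.gz is the single file you upload to the Drive.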


Structural connectivity matrices. Left panel: weights, right panel: distances.

The Virtual Brain: Simulate brain activity

The Virtual Brain is the main TVB software package. It is a neuroinformatics platform that provides an ecosystem of tools for simulating and analysing large-scale brain network dynamics based on biologically realistic connectivity.

TVB can be operated via GUI and programmatic Python interface.

  • On the EBRAINS Collaboratory platform, TVB Simulator usage is introduced through Jupyter notebooks in the main TVB collab.
  • Additionally, the TVB GUI can be accessed directly as a Web App. Via the Web App users can configure simulations that are – depending on their complexity – run either directly on the web server or on a supercomputer, thereby making resource-consuming TVB functionality accessible to researchers who do not have access to supercomputers.
  • Compiled standalone versions of the main software package can be downloaded from thevirtualbrain.org.

In the following we take you through the main steps of brain network model simulation:

  • Load the structural connectivity that you generated and downloaded with the TVB pipeline. Alternatively, you can use a demo SC that is shipped with the main TVB package.
  • Having loaded the SC (in the GUI or on the command line), work through the basic process of setting up a simulation; see https://docs.thevirtualbrain.org/demos/Demos.html
    • Simulate with the reduced Wong-Wang model
    • Simulate with the Jansen-Rit model
  • Next, explore how you can output BOLD activity using the BOLD monitor
  • Having simulated a longer time series (at least a few minutes of activity) you can compute a functional connectivity matrix

Congratulations, you performed your first brain simulation. You may now want to play with the parameters and observe how they affect the simulated FC – a goal may be to maximize the fit between simulated and empirical FC. Often a good first step is to vary the global coupling scaling factor: start at a low value (little exchange of synaptic currents between brain regions) and then increase it until the fMRI time series of the different brain regions become increasingly correlated.
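The FC computation and fitting described above can be sketched with NumPy alone. The random time series below are synthetic stand-ins for real simulated and empirical BOLD data (so their fit will be near zero); with real data you would load the simulator output and the empirical recordings instead:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic BOLD time series standing in for simulated and empirical data:
# rows are time points, columns are the 68 brain regions.
n_t, n_roi = 300, 68
sim_bold = rng.standard_normal((n_t, n_roi))
emp_bold = rng.standard_normal((n_t, n_roi))

def functional_connectivity(bold):
    """FC matrix: Pearson correlation between each pair of regional series."""
    return np.corrcoef(bold, rowvar=False)

def fc_fit(fc_a, fc_b):
    """Fit between two FC matrices: correlation of their upper triangles."""
    iu = np.triu_indices_from(fc_a, k=1)
    return np.corrcoef(fc_a[iu], fc_b[iu])[0, 1]

fc_sim = functional_connectivity(sim_bold)
fc_emp = functional_connectivity(emp_bold)
print("FC shape:", fc_sim.shape, "fit:", fc_fit(fc_sim, fc_emp))
```

Sweeping the global coupling factor then amounts to re-simulating, recomputing `fc_sim`, and keeping the parameter value that maximizes `fc_fit`.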

TVB+NEST: Multiscale simulation

In the previous step we simulated the brain at a coarse spatial resolution: the macroscopic scale of brain regions (e.g. “M1”, “V1”, etc.) and long-range white matter fiber bundles. However, interesting computations often happen on smaller scales, like the mesoscopic scale of small neural populations or the microscopic scale of individual neurons and neural networks. TVB+NEST is a Python toolbox that makes it easier to simulate multi-scale networks, i.e., networks where one part simulates activity on a coarse scale and another part simulates activity on a finer scale. Essentially, TVB+NEST is a Python wrapper for The Virtual Brain neuroinformatics platform and the NEST spiking network simulator. TVB+NEST exists as a web app and as a download version. The web app runs on HBP computers, while the download version is implemented as a standalone Docker container. To run TVB+NEST follow the code here or directly use the Python module in the EBRAINS Lab (where it is installed as a Spack package).

Alternatively, download the standalone Docker container thevirtualbrain/tvb-nest from Docker Hub. In the previous sections you may have simulated a large-scale brain model; perhaps you are now interested in how large-scale activity affects finer-scale activity in a specific region. To familiarize yourself with TVB+NEST, you may read more here.

Fast_TVB: Fast and parallel simulation

Fast_TVB is thousands of times faster than the Python implementation of TVB because it uses several optimization techniques and is implemented in the low-level language C. In addition, it can simulate in parallel, i.e., users can specify a number of threads that simultaneously perform the processing and occupy multiple processors, as is common on supercomputers.

To perform a dense parameter space exploration or to simulate high-dimensional models (e.g. with the number of network nodes N > 10³), high-performance simulation codes are necessary. For the ReducedWongWang model, a high-performance C version of the TVB simulator core is implemented as a Docker container here.

The EBRAINS Collaboratory “TVB C -- High-speed parallel brain network models” explains how to use the container on supercomputer backends with a Jupyter notebook as frontend:

  •  Open the “TVB C -- High-speed parallel brain network models” Collab
  • Follow the instructions in the Collab notebook or here to set up a brain model, simulate it and collect the results.
  • Single-threaded runs use compute resources more efficiently, while multi-threaded runs finish a single simulation faster. Play around with the num_threads parameter and compare the execution speeds for different settings. If the execution speed of one simulation is the primary goal, a higher number of threads is advised; if efficiency during parameter space exploration is the goal, it is advised to run multiple single-threaded instances of the program in parallel.

TVB-HPC: High-performance computing

In this project a toolbox has been created that supports efficiently porting TVB neural mass models between different computing architectures. It addresses the problem that most models simulated in TVB are written in Python and have not yet been optimized for parallel execution or deployment on high-performance computing architectures. At the heart of the project is a domain-specific language (DSL) that lets us define TVB models in a structured form that enables automatic code generation. Based on the model description, computing code for different environments or hardware is generated automatically.

In order to implement your own model in the TVB-HPC DSL, you may first familiarize yourself with the DSL and how to define models in it here. As a next step, you may go on to learn how to generate CUDA code from your models here.
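To make the code-generation idea concrete, here is a hypothetical sketch (this is NOT the actual TVB-HPC DSL syntax): a model is declared as data (parameters, state variables, and the right-hand sides of their differential equations), and target code is emitted from that description. The toy two-state oscillator below is only an illustration:

```python
# Hypothetical model description; the real TVB-HPC DSL differs in syntax.
MODEL = {
    "name": "generic2d",
    "params": {"a": -2.0, "tau": 1.0},
    "states": ["V", "W"],
    "drift": {
        "V": "tau * (W + V - V*V*V / 3.0)",
        "W": "(a - V) / tau",
    },
}

def generate_c(model):
    """Emit a C function that computes the model's drift (time derivatives)."""
    lines = [f"void {model['name']}_dfun(const double *state, double *deriv) {{"]
    for name, value in model["params"].items():
        lines.append(f"    const double {name} = {value};")
    for i, s in enumerate(model["states"]):
        lines.append(f"    const double {s} = state[{i}];")
    for i, s in enumerate(model["states"]):
        lines.append(f"    deriv[{i}] = {model['drift'][s]};")
    lines.append("}")
    return "\n".join(lines)

print(generate_c(MODEL))
```

Swapping `generate_c` for a CUDA or OpenMP emitter is what lets one model description target several architectures.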

Disease models

An advanced application of TVB is the specification of disease models, as was done in this publication by Stefanovski et al. See it on Jupyter nbviewer.

In the INCF TrainingSpace, an extensive tutorial walks the user through the approach of the paper.

Tumor brain models

Planning for tumour surgery involves delineating eloquent tissue to spare. To this end, doctors analyze noninvasive neuroimages such as fMRI and dwMRI. Instead of analyzing these modalities independently, brain models provide a novel way to combine the information in the different modalities, which may reveal information that is not apparent from any individual analysis. Aerts et al. generated brain models of tumour patients before and after surgery and have now published their core brain model dataset.

The dataset contains BOLD time series averaged over 68 regions of interest according to the Desikan-Killiany atlas, and a structural connectivity matrix describing the fibers connecting each pair of these regions of interest, derived from the DWI data. The locations of the areas, their centers, and the fiber lengths and densities are also included. The computational models are implemented at each of these regions of interest and connected according to the white matter fibers. The empirical functional connectivity matrix (the Pearson correlation among pairs of BOLD time series from each ROI) is used to fit the model.

Having learned how to create and simulate your first brain models in the initial chapters, how to implement them efficiently in the middle chapters, and how to model disease mechanisms in the last two chapters, researchers may now combine these workflows and extend them to study other healthy or pathological brain processes.

INCF TrainingSpace

TVB EduPack provides didactic use cases for The Virtual Brain. Typically a use case consists of a Jupyter notebook and a didactic video. EduPack use cases help the user to reproduce TVB-based publications or to get started quickly with TVB. The use cases demonstrate, for example, how to use TVB via the Collaboratory of the Human Brain Project, how to run multi-scale co-simulations with other simulators such as NEST, and how to process imaging data to construct personalized virtual brains of healthy individuals and patients. See the INCF Study Track here.

openMINDS metadata

EBRAINS uses the openMINDS schema to annotate neural data with metadata:

https://github.com/HumanBrainProject/openMINDS

We provide a Jupyter notebook that demonstrates how to populate Python dictionaries with key-value pairs specifying metadata according to the openMINDS schema and how to then dump them into a set of JSON-LD files.

https://wiki.ebrains.eu/bin/view/Collabs/openminds-metadata/
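As a minimal sketch of that process (the `@type` and property names below are illustrative placeholders; consult the openMINDS schemas for the actual required fields):

```python
import json

# Build a metadata record as a Python dictionary following the general
# JSON-LD shape used by openMINDS, then dump it to a file. The type and
# property values here are placeholders, not a validated openMINDS instance.
record = {
    "@context": {"@vocab": "https://openminds.ebrains.eu/vocab/"},
    "@type": "https://openminds.ebrains.eu/core/DatasetVersion",
    "fullName": "Example connectome dataset",
    "versionIdentifier": "v1.0",
}

with open("dataset_version.jsonld", "w") as f:
    json.dump(record, f, indent=2)
```

A curation workflow would produce one such JSON-LD file per metadata instance and submit the set to the Knowledge Graph.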
