Wiki source code of Widget 3D Head
Last modified by paulapopa on 2025/05/30 15:03

Source code: [[https:~~/~~/github.com/the-virtual-brain/tvb-widgets>>https://github.com/the-virtual-brain/tvb-widgets]]

This is part of a PyPI release: [[https:~~/~~/pypi.org/project/tvb-widgets/>>https://pypi.org/project/tvb-widgets/]]

//**tvb-widgets**// is also preinstalled in the official image released for the EBRAINS Lab, where you can test it directly.

== Purpose ==

It is a Jupyter widget intended for the visualization of the 3D head data available for a patient:

* surfaces of different types (cortex, face, skull, etc.)
* connectivity region centers and edges
* sensor locations (SEEG, MEG, EEG)

On cortical surfaces, it can also display a region parcellation.

== Inputs ==

It supports the above data in the form of their corresponding TVB datatypes:

* Surface (CorticalSurface, FaceSurface, etc.)
* Parcellation (RegionMapping)
* Connectivity
* Sensors (SensorsInternal, SensorsMEG, SensorsEEG)

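As background for these datatypes, the geometry involved is simple: a Surface is a triangle mesh (vertex coordinates plus vertex-index triples), a RegionMapping assigns one region index per vertex, and a Connectivity pairs region centers with a weights matrix whose nonzero entries become the edges drawn between centers. A minimal, self-contained sketch with toy data (all values below are illustrative, not taken from TVB):

```python
# Toy triangle mesh in the shape TVB surfaces use:
# vertex coordinates plus vertex-index triples (here, a tetrahedron).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# A RegionMapping-style parcellation: one region index per vertex.
region_mapping = [0, 0, 1, 1]

# A toy connectivity: region center coordinates and a weights matrix;
# region pairs with nonzero weight are the edges drawn between centers.
centres = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
weights = [[0.0, 1.5, 0.0],
           [1.5, 0.0, 2.0],
           [0.0, 2.0, 0.0]]
edges = [(i, j) for i in range(len(centres))
         for j in range(i + 1, len(centres)) if weights[i][j] != 0.0]

print(len(vertices), "vertices,", len(triangles), "triangles")
print("regions used:", sorted(set(region_mapping)))
print("edges:", edges)
```

In the real TVB datatypes, these are stored as NumPy arrays playing the same roles.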
== Installation ==

(% class="box" %)
(((
pip install tvb-widgets
)))

== API usage ==

First, we import the widget API from the __tvbwidgets__ package, together with the __TVB API__ and the __display__ function:

{{code language="python" layout="LINENUMBERS"}}
import tvbwidgets.api as api
from tvb.simulator.lab import *
from IPython.core.display_functions import display
{{/code}}

Then, there are two options to work with the widget:

1. Use a file browser to load the data and display it automatically
1. Use the API directly to load and display the data

For the first option, run the following two lines of code in a notebook cell and then use the UI controls:
| 49 | |||
| |
26.1 | 50 | {{code language="python" layout="LINENUMBERS"}} |
| |
18.1 | 51 | widget = api.HeadBrowser() |
| 52 | display(widget) | ||
| |
26.1 | 53 | {{/code}} |
| |
18.1 | 54 | |
| 55 | |||
| 56 | For the second option, the API is described below: | ||
| 57 | |||
| |
26.1 | 58 | In a cell, we load the data using the TVB API: |
| |
18.1 | 59 | |
| |
26.1 | 60 | {{code language="python" layout="LINENUMBERS"}} |
| 61 | surface = surfaces.Surface.from_file() | ||
| 62 | surface.configure() | ||
| |
3.1 | 63 | |
| |
26.1 | 64 | face = surfaces.Surface.from_file('face_8614.zip') |
| |
3.1 | 65 | face.configure() |
| 66 | |||
| |
26.1 | 67 | reg_map = region_mapping.RegionMapping.from_file() |
| |
3.1 | 68 | |
| |
26.1 | 69 | conn = connectivity.Connectivity.from_file() |
| |
3.1 | 70 | conn.configure() |
| 71 | |||
| |
26.1 | 72 | seeg = sensors.SensorsInternal.from_file() |
| |
3.1 | 73 | seeg.configure() |
| |
26.1 | 74 | {{/code}} |
| |

Then we prepare the **HeadWidget** for display:

{{code language="python" layout="LINENUMBERS"}}
widget = api.HeadWidget([face, conn, seeg])
display(widget)
{{/code}}

[[image:head.png]]

Next, we can continue adding other datatypes to this widget by calling //**add_datatype**// multiple times.

In the code below, we add the **CorticalSurface** with a **RegionMapping** as parcellation:

{{code language="python" layout="LINENUMBERS"}}
widget.add_datatype(surface, reg_map)
{{/code}}

[[image:cort.png]]

In the upper-right corner, a menu is displayed that lets us control what we want to visualize. For example, we can hide the head to get a better view of the cortical surface.

We can also rotate the surface and zoom in and out:

[[image:parcel.png]]

We can make the surface transparent to visualize the connectivity centers and their edges by moving the slider highlighted in red:

[[image:transp.png]]

Or we can hide the connectivity and display only the SEEG sensors:

[[image:seeg.png]]