Wiki source code of Widget 3D Head
Source code: [[https:~~/~~/github.com/the-virtual-brain/tvb-widgets>>https://github.com/the-virtual-brain/tvb-widgets]]

This is part of a PyPI release: [[https:~~/~~/pypi.org/project/tvb-widgets/>>https://pypi.org/project/tvb-widgets/]]

//**tvb-widgets**// is also preinstalled in the official image released for the EBRAINS Lab, where you can test it directly.
6 | |||
7 | == Purpose == | ||
8 | |||
9 | It is a Jupyter widget intended for visualization of the 3D Head data available for a patient: | ||
10 | |||
11 | * surfaces of different types (cortex, face, skull, etc) | ||
12 | * connectivity region centers and edges | ||
13 | * sensors locations (SEEG, MEG, EEG) | ||
14 | |||
15 | On cortical surfaces, it can also display region parcellation. | ||
16 | |||
17 | == Inputs == | ||
18 | |||
19 | It supports the above data in the form of their corresponding TVB datatypes: | ||
20 | |||
21 | * Surface (CorticalSurface, FaceSurface, etc) | ||
22 | * Parcellation (RegionMapping) | ||
23 | * Connectivity | ||
24 | * Sensors (SensorsInternal, SensorsMEG, SensorsEEG) | ||
25 | |||
== Installation ==

(% class="box" %)
(((
pip install tvb-widgets
)))
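As a quick sanity check that the installation worked, you can try importing the widget API in a notebook cell (a minimal sketch; it only assumes the widget classes used later on this page):

{{code language="python"}}
# If this import succeeds, tvb-widgets is installed and the API is available
import tvbwidgets.api as api

# The widget classes used in the examples on this page
print(api.HeadWidget, api.HeadBrowser)
{{/code}}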
32 | |||
33 | == API usage == | ||
34 | |||
35 | We need to first import the widget __API from tvbwidgets__// //package, together with the __TVB API __and the __display__ function: | ||
36 | |||
37 | {{code language="python" layout="LINENUMBERS"}} | ||
38 | import tvbwidgets.api as api | ||
39 | from tvb.simulator.lab import * | ||
40 | from IPython.core.display_functions import display | ||
41 | {{/code}} | ||
42 | |||
43 | Then, there are 2 options to work with the widget: | ||
44 | |||
45 | 1. Use a file browser to load the data and automatically display it | ||
46 | 1. Use directly the API to load the data and display it | ||
47 | |||
48 | For the first option, you have to run the following 2 lines of code in a notebook cell and then just use the UI controls: | ||
49 | |||
50 | {{code language="python" layout="LINENUMBERS"}} | ||
51 | widget = api.HeadBrowser() | ||
52 | display(widget) | ||
53 | {{/code}} | ||
54 | |||
55 | |||
For the second option, the API is described below.

In a cell, we load the data using the TVB API (called without an argument, //from_file// loads TVB's default demo files):

{{code language="python" layout="LINENUMBERS"}}
surface = surfaces.Surface.from_file()
surface.configure()

face = surfaces.Surface.from_file('face_8614.zip')
face.configure()

reg_map = region_mapping.RegionMapping.from_file()

conn = connectivity.Connectivity.from_file()
conn.configure()

seeg = sensors.SensorsInternal.from_file()
seeg.configure()
{{/code}}
75 | |||
76 | Then we prepare the **HeadWidget** for display: | ||
77 | |||
78 | {{code language="python" layout="LINENUMBERS"}} | ||
79 | widget = api.HeadWidget([face, conn, seeg]) | ||
80 | display(widget) | ||
81 | {{/code}} | ||
82 | |||
83 | [[image:head.png]] | ||
84 | |||
85 | Next, we can continue adding other datatypes to this widget, by calling //**add_datatype**// multiple times. | ||
86 | |||
87 | In the code below, we add the **CorticalSurface** with a **RegionMapping** as parcellation**:** | ||
88 | |||
89 | {{code language="python" layout="LINENUMBERS"}} | ||
90 | widget.add_datatype(surface, reg_map) | ||
91 | {{/code}} | ||
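For convenience, the steps of the second option can also be combined into a single notebook cell. The sketch below uses only the calls shown above and assumes TVB's default demo files are available:

{{code language="python"}}
import tvbwidgets.api as api
from tvb.simulator.lab import *
from IPython.core.display_functions import display

# Load the data with the TVB API (defaults point to TVB's bundled demo files)
face = surfaces.Surface.from_file('face_8614.zip')
face.configure()

surface = surfaces.Surface.from_file()
surface.configure()

reg_map = region_mapping.RegionMapping.from_file()

conn = connectivity.Connectivity.from_file()
conn.configure()

seeg = sensors.SensorsInternal.from_file()
seeg.configure()

# Display the face, connectivity and sensors first...
widget = api.HeadWidget([face, conn, seeg])
display(widget)

# ...then add the cortical surface with its parcellation
widget.add_datatype(surface, reg_map)
{{/code}}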
92 | |||
93 | {{html}} | ||
94 | <iframe width="840" height="480" src="https://www.youtube.com/embed/aDC2TJm2NxM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> | ||
95 | {{/html}} |