== To set up your working environment ==

1. [[Register>>url:https://ebrains.eu/register/]] for an EBRAINS account, log in, and set up a [[private collab>>url:https://wiki.ebrains.eu/bin/view/Collabs/]].
1. Initialise the Bucket by clicking on **Bucket** in the navigation panel -> **Create Bucket**.
1. Give users Admin, Editor or Viewer rights by clicking **Team** in the navigation panel.
1. Install WebAlign, WebWarp, LocaliZoom, and MeshView from the EBRAINS Collaboratory App Store (see instructions below).

== How to install Collaboratory Apps ==

1. To install Collaboratory Apps, click the **+ Create** button (top-right corner).
1. Give the page a Title (for example, WebAlign), select the Community App option, and click Create.
1. Select the App to install (for example, WebAlign), and click Save and View.
1. Repeat this for all the relevant Community Apps. You will need the "QUINT Image Creator" app, "WebAlign", "WebWarp", "LocaliZoom", and "MeshView".
1. Navigate between the Apps in the navigation panel. Files are transferred between the Apps through the Bucket.

== How to prepare your images ==

**~1. Prepare your images before upload by naming them according to this naming convention:**

The ID should be unique to the particular brain section and in the format sXXX, with XXX representing the section number. The section number should reflect the serial order and spacing of the sections (e.g., s002, s006, s010 for every 4^^th^^ section starting with section 2).

Example: tg2345_MMSH_s001.tif
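
A minimal sketch (plain Python, standard library only) for checking that file names follow this convention before upload; the subject ID, modality, spacing, and folder below are example values, not requirements of the workflow.

{{code language="python"}}
import re
from pathlib import Path

# Expected pattern: <anything>_sXXX.<extension>, e.g. tg2345_MMSH_s001.tif
SECTION_PATTERN = re.compile(r"^.+_s(\d{3})\.(tif|tiff|png|jpg|jpeg)$", re.IGNORECASE)

def check_section_names(folder):
    """Report files that do not match the sXXX naming convention."""
    for path in sorted(Path(folder).iterdir()):
        match = SECTION_PATTERN.match(path.name)
        if match:
            print(f"OK   {path.name}  (section {int(match.group(1))})")
        else:
            print(f"FIX  {path.name}  does not end in _sXXX.<ext>")

# Example names for every 4th section starting with section 2 (s002, s006, s010, ...)
example_names = [f"tg2345_MMSH_s{n:03d}.tif" for n in range(2, 15, 4)]
print(example_names)
check_section_names(".")  # point this at your local image folder before upload
{{/code}}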

- Upload the images you want to work with to the Bucket of your collab using the Data Proxy (press //"Bucket"//).

**2. Image ingestion**

Two alternatives are available: the QUINT Image Creator app, which automatically creates the necessary files for downstream analysis in your collab, or the Image Service app, which can create image tiles in different formats. The steps for preparing your files are described below:

=== Use the "QUINT Image Creator" app ===

**2a. Select images to ingest**

Click on your images to select them and press the "create brain from selection" button. Choose a name for your series.

The App will automatically generate the files for you. Monitor the progress under the "processing" tab and on the two dashboards on the left.

When the ingestion is finished, your series will appear under the "Prepared Brains" tab.

Click the "View Brain" button to preview your images.

Now go to the WebAlign app to start the registration to the atlas.

[[image:Screenshot create_brain app.png]]

=== Use the Image Service app (alternative) ===

**2b. Start the Image Service UI from your web app**

* **Fill in the requested information:**

- Type or paste the URL of the dataset folder containing the images. //Note!// Only hyphens are supported in the collab name.

E.g. https:~/~/data-proxy.ebrains.eu/api/v1/buckets/name-of-my-collab

- You can filter the data using regular expressions, e.g. name_of_the_data_file.*\.jpg$ (to select JPEG files only) or hbp-00173_262_1366_1827.*\.tif$ for TIFF files (see the sketch below the screenshot).

- If your URL is valid, pressing "yes" will show the list of files. Paste the name of the file you want to select. You can go back to the previous step by pressing "back".

- Or use a prefix, e.g. https:~/~/data-proxy.ebrains.eu/api/v1/buckets/name-of-my-collab?prefix=name_of_the_data_folder

- Allow the Image Service to access your bucket.

[[[[image:Skjermbilde 2022-02-08 103443.png||height="199" width="500"]]>>attach:Skjermbilde 2022-02-08 103443.png]]
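
The same listing and filtering can be reproduced outside the UI. This is a minimal sketch, assuming a publicly readable bucket reached through the Data Proxy URL shown above (a private bucket additionally needs an EBRAINS Authorization token) and using the example bucket name name-of-my-collab; check the exact JSON layout of the response against the Data Proxy documentation.

{{code language="python"}}
import re
import requests

BUCKET_URL = "https://data-proxy.ebrains.eu/api/v1/buckets/name-of-my-collab"  # example bucket name
TIF_FILTER = re.compile(r"hbp-00173_262_1366_1827.*\.tif$")  # same regex as in the UI example

# List the bucket contents (add headers={"Authorization": "Bearer <token>"} for private buckets)
response = requests.get(BUCKET_URL, params={"prefix": "name_of_the_data_folder"})
response.raise_for_status()
print(response.json())  # inspect the object listing returned by the Data Proxy

# Apply the same regular expression locally, e.g. to a list of object names
example_names = ["hbp-00173_262_1366_1827_s001.tif", "notes.txt"]
tif_files = [name for name in example_names if TIF_FILTER.search(name)]
print(tif_files)  # -> ['hbp-00173_262_1366_1827_s001.tif']
{{/code}}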

- Store your results: choose "create a new collab" in which your ingested chunks will be stored in the Bucket. This option is preferred so as not to overload the Bucket of your current collab. Give a name to your Bucket as well as a name for the collab (bucket slug) (see the illustration below for more info).

[[image:create collab.png]]

- Customize your ingestion:

- Give a name to your ingestion in the "description of ingestion" field.

* **To obtain chunks in DZI format (compatible with WebAlign),** choose "2D" and "not stack of Images".
* **Click //"preview"// to preview your task.**
* **Click //"create task"// to launch your ingestion.**
* **Checking results in the main UI page**

Press "Task" in the top-right corner of the window.

When creating the task, a red banner is displayed in the "Create Task" view if something goes wrong.

Otherwise, you will be redirected to the task list, where the created task is selected.

//Note!// A task will first be "Queued", then "Running" and "Stagingout", and ultimately "Successful" or "Failed".

Refresh your browser to check the status of your task.

When the task is "Successful", your chunks have been created.

== **How to use WebAlign** ==

WebAlign is an online tool for spatial registration of histological section images from rodent brains to reference 3D atlases. Registering different experimental datasets to the same reference atlas allows you to spatially integrate, analyse and navigate these datasets within a standardised coordinate system. The output of WebAlign can be used for analysis in the online QUINT workflow.

Online user manual: [[https:~~/~~/webalign.readthedocs.io/en/latest/>>https://webalign.readthedocs.io/en/latest/]]

The view can be magnified using the 4-arrow "X" symbol in the top-right corner.

=== Opening a sample dataset ===

The demo dataset is loaded using the file: **demo_mouse_data_start.waln**

You can see the result of a finished anchoring by choosing the file: **demo_mouse_data.waln**

=== Opening a private dataset ===

Uploading your images to the Bucket and ingesting them with the QUINT Image Creator app generates DZIP chunks. These DZIP files are used by WebAlign.

~1. Start a new registration by pressing "create new series". The UI will ask for the name of the collab where the DZI chunks are stored, e.g. my-collab-name.

2. WebAlign will search for DZIP files and list those found.

3. Enter a name for the descriptor JSON file, which will be created and will contain your anchoring information.

4. Choose the target 3D reference atlas (WHSv3 for rat and CCFv3_2017 for mouse).

5. Press //"create"//. The main window will now display WebAlign. This step can take some time.

[[image:create series webAlign.png]]
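
The anchoring information is stored in the descriptor file created in step 3. Below is a minimal sketch for inspecting such a descriptor once it has been downloaded from the Bucket, assuming it is plain JSON as the "descriptor json file" wording above suggests; the filename is hypothetical and the exact keys depend on the WebAlign version.

{{code language="python"}}
import json

# Hypothetical descriptor downloaded from the collab Bucket ("Save to bucket" output)
with open("my_series.waln", "r", encoding="utf-8") as handle:
    descriptor = json.load(handle)

# Print the top-level structure to see which fields WebAlign stored
if isinstance(descriptor, dict):
    for key, value in descriptor.items():
        print(key, type(value).__name__)
else:
    print(type(descriptor).__name__, len(descriptor))
{{/code}}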

=== Opening an EBRAINS dataset ===

(% class="wikigeneratedid" %)
If you would like to work with an EBRAINS dataset, fetch the LocaliZoom link from the KG dataset card ([[https:~~/~~/search.kg.ebrains.eu>>https://search.kg.ebrains.eu]]) and paste it in the "Import LocaliZoom link" tab.

(% class="wikigeneratedid" %)
These series have already been registered to a reference atlas, so this gives you a starting point. The linear registrations obtained with WebAlign can be refined using WebWarp.

=== Registration instructions ===

**Shortcut keys**

|=To do this|=Press|=Description
|Place marker|Space bar|Markers are the anchor points of most transformations (stretch and rotate).
|Remove marker|Esc|Removes a previously placed marker.
|Horizontal stretch from marker|Left/Right arrow keys|Marker becomes a vertical line, and mouse drag horizontally resizes the cut.
|Vertical stretch from marker|Up/Down arrow keys|Marker becomes a horizontal line, and mouse drag vertically resizes the cut.
|Rotate around marker|PgUp/PgDown|Marker becomes a cross with a surrounding arc, and mouse drag rotates the cut.
|In-plane adjust|Click + drag|If there is no marker, or the marker is a cross, mouse drag slides the cut in its plane (translation).

**Start the registration**

The main window shows the selected image with the atlas overlay.

- If necessary, change the atlas from coronal view to sagittal or horizontal view (see Navigation panel below).

~1. Move the atlas to the approximate position of your section using the yellow dots in the three small windows of the navigation panel.

2. Start anchoring by placing a marker with the //"Space bar"//. The marker is initially a cross, and it is the fixed point of (most) transformations. The //"Escape"// key can be used to remove the marker.

3. The main window supports mouse drag in multiple modes to stretch the atlas and find the correct position.

- If there is no marker, or the marker is a cross, mouse drag slides the cut in its plane (translation).

- Keyboard controls modify the mouse drag (they also place the marker if it is not placed already):

- Left/Right arrow keys: the marker becomes a vertical line, and mouse drag horizontally resizes the cut.

- Up/Down arrow keys: the marker becomes a horizontal line, and mouse drag vertically resizes the cut.

- PgUp/PgDown keys: the marker becomes a cross with a surrounding arc, and mouse drag rotates the cut. This may look odd because the cut remains a rectangle; when the horizontal and vertical physical resolutions (e.g. pixels/mm) of the image do not match, the atlas cut will appear to stretch or shrink with the rotation.

After each transformation step, the marker resets to a cross (translation mode).

//Note!// The panel can be resized towards the left (common border with the Control Panel) and towards the bottom (common border with the Filmstrip).

4. Save the position by pressing //"Store"//. The registration is copied to the remaining sections to help with scaling (also visible in the filmstrip).

5. Go through all sections and refine the position and cutting angles.

//Note!// When jumping from one section to another, wait a few seconds for the image to load.

//Note!// The "restore" button allows you to go back to the saved position if necessary.

6. Save your results in the descriptor file (JSON) by pressing "Save to bucket".

7. When the registration is finished, you can export atlas overlays (.flat files used for analysis in the QUINT workflow) by pressing //"export overlays"//.

**Control panel:**

|=Button|=Function
|Store|Store the current alignment and propagate it to unaligned sections (**Note:** this does not save the series to your bucket)
|Restore|Reset the current alignment to the last stored position
|Clear|Reset the current alignment to the default position
|Overlay slider|Opacity of the atlas overlay; when fully opaque, it becomes an outline
|Overlay color|The outline color
|Filmstrip slider and color|The above settings, applied to the filmstrip
|Save to bucket|Save the series to your bucket (and overwrite the existing file)
|Export overlays|Generates a series of .flat files (for Nutil or a similar utility) and stores them in a .zip file in the bucket (re-using the name of the series descriptor, e.g. series13.json will export series13.zip)
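
Once the exported archive has been downloaded from the Bucket, the .flat overlays can be unpacked locally for use with Nutil or a similar utility. A minimal sketch using only the Python standard library; the archive name series13.zip follows the example above and is hypothetical.

{{code language="python"}}
import zipfile
from pathlib import Path

archive = Path("series13.zip")        # exported overlays downloaded from the Bucket
target = Path("series13_overlays")    # local folder for the extracted .flat files

with zipfile.ZipFile(archive) as bundle:
    bundle.extractall(target)

# List the extracted overlay files, one per registered section
for flat_file in sorted(target.rglob("*.flat")):
    print(flat_file.name)
{{/code}}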


The right border of the control panel can be dragged horizontally, allowing you to resize the panel and the main view.

**Filmstrip:**

Drag horizontally to browse the series; click on a section to load it into the main view. The top border of the filmstrip can be dragged vertically, allowing you to resize the panel and the main view.

**Navigation panel:**

Shows the three standard planes centered around the midpoint of the current alignment visible in the main view.

The rectangle of the current cut is projected on each standard plane as a yellow line/rectangle/parallelogram. A small yellow circle represents the midpoint of the projection.

Drag the midpoint around to move the cut.

Drag anywhere else to rotate the cut (inside the given standard plane, around the midpoint).

== **How to use WebWarp** ==

WebWarp is an online tool for nonlinear refinement of the spatial registration of histological section images from rodent brains to reference 3D atlases. WebWarp is compatible with registrations performed with the WebAlign tool. Registering different experimental datasets to the same reference atlas allows you to spatially integrate, analyse and navigate these datasets within a standardised coordinate system.

Online user manual: [[https:~~/~~/webwarp.readthedocs.io/en/latest/>>https://webwarp.readthedocs.io/en/latest/]]

The view can be magnified using the 4-arrow "X" symbol in the top-right corner.

=== Opening a sample dataset ===

The demo dataset is loaded using the file: **demo_mouse_data.waln**

You can see the result of a finished anchoring by choosing the file: **demo_mouse_data.wwrp**

=== Opening a private dataset ===

1. Navigate to the WebWarp app in the left-hand panel: all the .waln files located in the Bucket are displayed on the WebWarp main page.
1. Select the .waln file corresponding to your result from the WebAlign image registration.
1. Wait for the images to load: this may take some time.

=== Opening an EBRAINS dataset ===

If you would like to work with an EBRAINS dataset, open the LocaliZoom link from the KG dataset card ([[https:~~/~~/search.kg.ebrains.eu>>url:https://search.kg.ebrains.eu]]) and paste it in the "Import LocaliZoom link" tab in WebAlign. Save this series as a .waln file that you can then open in WebWarp.

== **How to use LocaliZoom** ==

LocaliZoom is a web application for viewing series of high-resolution 2D images that have been anchored to reference atlases. LocaliZoom allows viewing and exploring high-resolution images with superimposed atlas overlays, and extracting the coordinates of annotated points within those images for viewing in 3D brain atlas space.

Online manual: [[https:~~/~~/localizoom.readthedocs.io/en/latest/>>https://localizoom.readthedocs.io/en/latest/]]

The view can be magnified using the 4-arrow "X" symbol in the top-right corner.

=== Opening a sample dataset ===

A demo dataset is loaded using the file: **demo_mouse_data_lz**
== **How to use MeshView** ==