== What is Webilastik? ==

Classic [[ilastik>>https://www.ilastik.org/]] is a simple, user-friendly desktop tool for **interactive image classification, segmentation and analysis**. It is built as a modular software framework, which currently has workflows for automated (supervised) pixel- and object-level classification, automated and semi-automated object tracking, semi-automated segmentation and object counting without detection. Most analysis operations are performed **lazily**, which enables targeted interactive processing of data subvolumes, followed by complete volume analysis in offline batch mode. Using it requires no experience in image processing.

[[webilastik>>https://app.ilastik.org/]] is a web version of ilastik's Pixel Classification Workflow, integrated with the EBRAINS ecosystem. It can access the data-proxy buckets for reading and writing (though reading still suffers from latency issues). It uses Neuroglancer as a 3D viewer and runs on compute sessions allocated from the CSCS infrastructure.

== How to use Webilastik ==

(% class="wikigeneratedid" %)
Webilastik is a web application that can be accessed at [[https:~~/~~/app.ilastik.org>>https://app.ilastik.org]]. We suggest using it via the Chrome (or Chromium) web browser for now, since most of the testing has been done in this browser and subtle differences between browsers might cause unexpected behavior in the application.

(% class="wikigeneratedid" %)
You can also go directly to the [[application page>>https://app.ilastik.org/public/nehuba/index.html#!%7B%22layout%22:%22xy%22%7D]].

Webilastik is an overlay on top of other data viewers. In particular, this implementation uses [[Neuroglancer>>https://github.com/google/neuroglancer]] as an underlying data viewer, so if you're familiar with its controls you can still use them when using webilastik.

=== Moving the controls window ===

You can move the webilastik controls around the screen by clicking and dragging on the header:

[[image:webilastik_click_and_drag.png]]


=== Opening a Dataset ===

Like in vanilla Neuroglancer, you add datasets to the viewer by clicking the "+" button at the top of the viewer:

[[image:webilastik_click_plus_sign_in_neuroglancer.png||height="200"]]


You should be presented with a popup prompt where you can type in the URL of a dataset you want to view, in the format typically used by Neuroglancer. There are a few sample datasets hosted in webilastik:

{{{precomputed://https://app.ilastik.org/public/images/mouse1.precomputed}}}

{{{precomputed://https://app.ilastik.org/public/images/mouse2.precomputed}}}

{{{precomputed://https://app.ilastik.org/public/images/mouse3.precomputed}}}

{{{precomputed://https://app.ilastik.org/public/images/c_cells_2.precomputed}}}

{{{precomputed://https://app.ilastik.org/public/images/c_cells_3.precomputed}}}


After you type or paste the URL into the "Source" field, Neuroglancer should recognize the shape and number of channels in the image. You can then click "Add Layer" to open the dataset in the viewer.

[[image:image-20220125164204-2.png]]


=== Opening a Dataset from the data-proxy ===

You can also load Neuroglancer Precomputed Chunks data from the data-proxy (e.g. the [[ana-workshop-event bucket>>https://wiki.ebrains.eu/bin/view/Collabs/ana-workshop-event/Bucket]]). The URLs for this kind of data follow this scheme:
\\##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/v1/buckets/(% style="background-color:#3498db; color:#ffffff" %)my-bucket-name(% style="background-color:#ffffff; color:#000000" %)/(% style="background-color:#9b59b6; color:#ffffff" %)path/inside/your/bucket(%%)##

where (% style="background-color:#9b59b6; color:#ffffff" %)path/inside/your/bucket(%%) should be the path to the folder containing the dataset's "info" file.

So, for example, to load the sample data inside the (% style="background-color:#3498db; color:#ffffff" %)ana-workshop-event(%%) bucket, under the path (% style="background-color:#9b59b6; color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%), like in the example below:

[[image:webilastik_bucket_paths.png]]

you would type a URL like this:

{{{precomputed://https://data-proxy.ebrains.eu/api/v1/buckets/ana-workshop-event/tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed}}}

This scheme is the same whether you're loading data into the Neuroglancer viewer or specifying an input URL in the export applet.
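
If you want to double-check that a path in a bucket really points at a Precomputed Chunks dataset, you can fetch its "info" file directly with the same URL. The following Python sketch is a minimal illustration, assuming the bucket is publicly readable (a private bucket would also need an EBRAINS access token in an ##Authorization## header):

{{{
# Minimal sketch: fetch the "info" file of a precomputed dataset through the
# data-proxy and print its basic metadata. Assumes a publicly readable bucket.
import json
import urllib.request

bucket = "ana-workshop-event"
path = "tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed"

info_url = f"https://data-proxy.ebrains.eu/api/v1/buckets/{bucket}/{path}/info"
with urllib.request.urlopen(info_url) as response:
    info = json.load(response)

# A valid "info" file describes the volume type, data type, number of
# channels and the available resolution scales:
print(info["type"], info["data_type"], info["num_channels"])
for scale in info["scales"]:
    print(scale["key"], scale["size"], scale["resolution"])
}}}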

=== Viewing 2D Data ===

If your dataset is 2D, like in the example, you can click the "switch to xy layout" button at the top-right corner of the top-left quadrant of the viewport to use a single, 2D viewport:

[[image:image-20220125164416-3.png]]

which will change the view to something like this:

[[image:image-20220125164557-4.png]]

==== A Note on Neuroglancer and 2D data ====

Neuroglancer interprets all data as 3D, so a 2D image is displayed as a single flat slice of data in 3D space. Scrolling in Neuroglancer can move the viewer past this single slice of data, effectively hiding it from view. You can see the current viewer position in the top-left corner of the viewport, and you can edit those coordinates to reset the viewer to a position where your data is present and therefore visible (usually z=0 for 2D data):

[[image:image-20220222161022-1.png]]

(% class="wikigeneratedid" %)
Alternatively, once you have a compute session running, you can also click the "Reset" button in the lower-right corner of the viewer to move the viewer back to the center of your datasets:

(% class="wikigeneratedid" %)
[[image:webilastik_click_recenter_button.png]]

== Allocating a Compute Session ==

Normal ilastik operation can be computationally intensive, requiring dedicated compute resources to be allocated to every user working with it.

The "Session Management" widget allows you to request a compute session where webilastik will run. Select a session duration and click "Create" to create a new compute session. Eventually the compute session will be allocated, opening up the other workflow widgets.

Don't forget to close your compute session by clicking the "Close Session" button once you're done, to avoid wasting your HPC quota. If you have a long-running job, though, you can leave the session and rejoin it later by pasting its session ID into the "Session Id" field of the "Session Management" widget and clicking "Rejoin Session".

== Training the Pixel Classifier ==

=== Selecting Image Features ===

Pixel Classification uses different characteristics ("features") of each pixel from your image to determine which class that pixel should belong to. These take into account, for example, the color and texture of each pixel as well as that of its neighboring pixels. Each one of these characteristics requires some computational power, which is why you should select only the ones that are sensible for your particular dataset.

Use the checkboxes in the "Select Image Features" applet to select some image features and their corresponding sigmas. The higher the sigma, the bigger the vicinity considered when computing values for each pixel, and the bigger its influence over the final value of that feature. Higher sigmas also require more computations and can increase the time required to do predictions.
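
To build some intuition for what a feature and its sigma mean, the sketch below computes a few filter responses of the kind used in pixel classification. This is an illustration only (using scipy's Gaussian filters), not webilastik's actual implementation:

{{{
# Illustrative sketch: image "features" are per-pixel filter responses.
# A larger sigma means each output pixel is influenced by a wider
# neighborhood of the input image.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

image = np.random.rand(256, 256).astype(np.float32)  # stand-in for raw data

smooth_fine = gaussian_filter(image, sigma=1.0)     # local texture
smooth_coarse = gaussian_filter(image, sigma=10.0)  # wide vicinity
edges = gaussian_laplace(image, sigma=3.0)          # edge/blob response

# Each selected feature contributes one or more channels to the per-pixel
# feature vector that the classifier consumes:
features = np.stack([smooth_fine, smooth_coarse, edges], axis=-1)
print(features.shape)  # (256, 256, 3)
}}}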

You can read more about image features in [[ilastik's documentation>>https://www.ilastik.org/documentation/pixelclassification/pixelclassification]].

The following is an arbitrary selection of image features. Notice that the checkboxes marked in orange haven't been committed yet. Click "Ok" to send your feature selections (or deselections) to the server.

[[image:image-20220125171850-7.png]]

=== Labeling the image ===

In order to classify the pixels of an image into different classes (e.g. "foreground" and "background"), ilastik needs you to provide it with examples of each class.


==== Picking an Image Resolution (for multi-resolution images only) ====

If your data has multiple resolutions (**not the case in any of the sample datasets**), you'll have to pick one of them in the "Training" widget. Neuroglancer interpolates between multiple scales of the dataset, but ilastik operates on a single resolution:

[[image:image-20220911155827-1.png]]

Once you've selected a resolution to train on, you should see a new "training" tab at the top of the viewer:

[[image:image-20220125165832-2.png]]

You must have the "training" tab as the frontmost visible tab in order to start adding brush strokes (in Neuroglancer you can click the name of the raw data tab to hide it, for example):

[[image:image-20220222151117-1.png]]

==== Painting Labels ====

The status display in the "Training" applet will show "training on [datasource url]" when it's ready for painting.

Now you can start adding brush strokes. By default, webilastik will create two kinds of labels: "Background" and "Foreground". You can rename them to your liking, or change their colors to something more suitable for you or your dataset. You can also add more labels if you'd like ilastik to classify the pixels of your image into more than two categories.

Select one of the labels from the "Current Label" dropdown or by using the "Select Label" button, check the "Enable Brushing" checkbox to enable brushing mode (**and disable navigation**), and click and drag over the image to add brush strokes. Ilastik will map each used color to a "class", and will try to figure out a class for every pixel in the image based on the examples provided by the brush strokes. By painting, you provide ilastik with samples of what a pixel in that particular class should look like. The following image shows an example with 2 classes: magenta, representing the "foreground", and green, representing the "background" class.

[[image:image-20220911162555-3.png]]

Once you have some image features selected and brush annotations of at least 2 colors, you can check "Live Update" and ilastik will automatically use your examples to predict the classes of the remaining pixels in your dataset, displaying the results in a "predictions" tab.

[[image:image-20220911163127-4.png]]
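
Conceptually, "Live Update" turns your brushed pixels into training samples for a classifier (classic ilastik uses a Random Forest), which then predicts a class for every remaining pixel. The sketch below illustrates the idea with made-up arrays and scikit-learn; it is not webilastik's actual implementation:

{{{
# Conceptual sketch of interactive pixel classification. The feature array
# would come from the "Select Image Features" step; the label array encodes
# brush strokes (0 = unlabeled).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

h, w, n_features = 256, 256, 3
features = np.random.rand(h, w, n_features)  # stand-in feature channels
labels = np.zeros((h, w), dtype=np.uint8)
labels[10:20, 10:20] = 1                     # a "foreground" brush stroke
labels[100:110, 100:110] = 2                 # a "background" brush stroke

mask = labels > 0                            # train only on brushed pixels
clf = RandomForestClassifier(n_estimators=100)
clf.fit(features[mask], labels[mask])

# One probability channel per label class, for every pixel in the image:
predictions = clf.predict_proba(features.reshape(-1, n_features))
predictions = predictions.reshape(h, w, -1)
}}}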

You can keep adding or removing brush strokes to improve your predictions.

You can adjust the display settings of the predictions overlay layer as you would in vanilla Neuroglancer:

1. Right-click the predictions Neuroglancer tab to reveal the "rendering" options;
1. Adjust the layer opacity to better view the predictions or the underlying raw data;
1. Advanced users: edit the shader to render the predictions in any arbitrary way.

The image below shows the "predictions" tab with its opacity set to 0.88 using the steps described above:

[[image:image-20220911163504-5.png]]

You can keep adding or removing features to your model, as well as adding and removing annotations, which will automatically refresh the predictions tab.

=== Exporting Results and Running Jobs ===

Once you have trained your pixel classifier with the previous applets, you can apply it to other datasets, or even to the same dataset that was used for training. You can export your results in two ways:

~1. As a "Predictions Map", which is a float32 image with as many channels as the number of Label colors you've used, or;

2. As a "Simple Segmentation", which is one 3-channel uint8 image for each of the Label colors you've used. The image will be red where that pixel is more likely to belong to the respective Label and black everywhere else.
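
The two outputs are closely related: the "Simple Segmentation" for a given Label is essentially the result of asking, for each pixel, whether that Label scores highest in the "Predictions Map". A rough illustration in Python, with made-up shapes (not webilastik's actual code):

{{{
# Rough illustration of how the two export formats relate.
import numpy as np

h, w, n_labels = 256, 256, 2
predictions = np.random.rand(h, w, n_labels).astype(np.float32)  # "Predictions Map"

winner = np.argmax(predictions, axis=-1)  # most likely label per pixel
for i in range(n_labels):
    # "Simple Segmentation" for label i: red where label i wins, black elsewhere
    seg = np.zeros((h, w, 3), dtype=np.uint8)
    seg[winner == i, 0] = 255  # set the red channel
    # each seg would be written out as a separate image
}}}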

To do so, select a data source by typing the URL of the data source into the "Url" field of the "Input" fieldset, and select a scale from the data source as the available scales appear beneath the URL field. You can also click the "Suggestions..." button to select one of the annotated datasources.

Then, configure the Output, i.e. the destination that will receive the results of the pixel classification. For now, webilastik will only export to EBRAINS data-proxy buckets:

1. Fill in the name of the data-proxy bucket where the results in Neuroglancer's precomputed chunks format should be written to;
1. Fill in the directory path inside the bucket where the results should be saved to. This path will also contain the "info" file of the precomputed chunks format.

[[image:image-20220911170735-7.png]]
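
After a successful export, the output path in your bucket should contain a small file tree in the precomputed chunks format, roughly like the following (a sketch assuming a single resolution scale; the actual scale directory name depends on your dataset's resolution):

{{{
path/inside/your/bucket/
├── info                  # JSON metadata describing the exported volume
└── <scale_key>/          # one directory per resolution scale
    ├── 0-256_0-256_0-1   # chunk files named by their voxel ranges
    └── ...
}}}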

Finally, click the export button; if all the parameters were filled in correctly, a new job will eventually be created.

You'll be able to find your results in the data-proxy GUI, at a URL that looks something like this:

{{{https://data-proxy.ebrains.eu/your-bucket-name?prefix=your/info/directory/path}}}
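
Since the results are written in the precomputed chunks format, you can also load them back into the viewer (or use them as input for another export job) with the same URL scheme described earlier, e.g.:

{{{precomputed://https://data-proxy.ebrains.eu/api/v1/buckets/your-bucket-name/your/info/directory/path}}}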

[[image:image-20220125191847-3.png]]