Changes for page 5. How to segment your objects with Webilastik
Last modified by puchades on 2022/09/30 16:01
From version 43.1
edited by tomazvieira
on 2022/09/11 15:17
Change comment:
Uploaded new attachment "webilastik_bucket_paths.png", version {1}
To version 52.1
edited by tomazvieira
on 2022/09/11 17:09
Change comment:
There is no comment for this version
Summary
- Page properties (1 modified, 0 added, 0 removed)
- Attachments (1 modified, 7 added, 0 removed)
Details
- Page properties
  - Content
... ... @@ -49,20 +49,24 @@
49 49 
50 50 === Opening a Dataset from the data-proxy ===
51 51 
52 - You can also load Neuroglancer Precomputed Chunks data from the data-proxy; The URLs for this kind of data follow the following scheme:
53 - \\##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="background-color:#3498db; color:#ffffff" %)my-bucket-name(% style="background-color:#9b59b6; color:#ffffff" %)/path/inside/your/bucket(%%)##
52 + You can also load Neuroglancer Precomputed Chunks data from the data-proxy (e.g. the [[ana-workshop-event bucket>>https://wiki.ebrains.eu/bin/view/Collabs/ana-workshop-event/Bucket]]); the URLs for this kind of data follow this scheme:
53 + \\##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/v1/buckets/(% style="background-color:#3498db; color:#ffffff" %)my-bucket-name(% style="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255)" %)/(% style="background-color:#9b59b6; color:#ffffff" %)path/inside/your/bucket(%%)##
54 54 
55 - So, for example, to load the sample data inside the (% style="background-color:#3498db; color:#ffffff" %)ana-workshop(%%) bucket, under the path (% style="background-color:#9b59b6; color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%) like in the example below:
55 + where (% style="background-color:#9b59b6; color:#ffffff" %)path/inside/your/bucket(%%) should be the path to the folder containing the dataset "info" file.
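The URL scheme above can be sketched as a small helper. This is a hypothetical, illustrative function (not part of webilastik), using the bucket name and dataset path from this guide as example values:

```python
def data_proxy_precomputed_url(bucket: str, path_inside_bucket: str) -> str:
    """Build a URL for Neuroglancer Precomputed Chunks data stored in an
    EBRAINS data-proxy bucket, following the scheme described above."""
    # path_inside_bucket should point at the folder containing the "info" file
    return (
        "precomputed://https://data-proxy.ebrains.eu/api/v1/buckets/"
        f"{bucket}/{path_inside_bucket.strip('/')}"
    )

url = data_proxy_precomputed_url(
    "ana-workshop-event",
    "tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed",
)
print(url)
```

The same composed URL works both for the Neuroglancer viewer and for the export applet's input field, as noted further below.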
56 56 
57 57 
58 - [[image:image-20220128142757-1.png]]
58 + So, for example, to load the sample data inside the (% style="background-color:#3498db; color:#ffffff" %)ana-workshop-event(%%) bucket, under the path (% style="background-color:#9b59b6; color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(% style="color:#000000" %) (%%) like in the example below:
59 59 
60 + (% style="display:none" %) (%%)
61 + 
62 + [[image:webilastik_bucket_paths.png]]
63 + 
60 60 === ===
61 61 
62 62 you would type a URL like this:
63 63 
64 64 
65 - ##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="background-color:#3498db; color:#ffffff" %)ana-workshop(%%)/(% style="background-color:#9b59b6; color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%)##
69 + {{{precomputed://https://data-proxy.ebrains.eu/api/v1/buckets/ana-workshop-event/tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed}}}
66 66 
67 67 This scheme is the same whether you're loading data into the Neuroglancer viewer or specifying an input URL in the export applet.
... ... @@ -96,18 +96,17 @@
96 96 
97 97 Normal ilastik operation can be computationally intensive, requiring dedicated compute resources to be allocated to every user working with it.
98 98 
99 - 
100 100 The "Session Management" widget allows you to request a compute session where webilastik will run; select a session duration and click 'Create' to create a new compute session. Eventually the compute session will be allocated, opening up the other workflow widgets.
101 101 
105 + Don't forget to close your compute session by clicking the "Close Session" button once you're done, to avoid wasting your HPC quota. If you have a long-running job, though, you can leave the session and rejoin it later by pasting its session ID into the "Session Id" field of the "Session Management" widget and clicking "Rejoin Session".
102 102 
103 - 
104 104 == Training the Pixel Classifier ==
105 105 
106 106 === Selecting Image Features ===
107 107 
108 - Pixel Classification uses different characteristics ("features") of your image to determine which class each pixel should belong to. These take into account, for example, color and texture of each pixel as well as that of the neighboring pixels. Each one of these characteristics requires some computational power, which is why you can select only the ones that are sensible for your particular dataset.
111 + Pixel Classification uses different characteristics ("features") of each pixel from your image to determine which class that pixel should belong to. These take into account, for example, the color and texture of each pixel as well as that of its neighboring pixels. Each of these characteristics requires some computational power, which is why you should select only the ones that are sensible for your particular dataset.
109 109 
110 - Use the checkboxes in the "Select Image Features" applet to select some image features and their corresponding sigma (which determines the radius around the pixel that will be considered when computing that feature).
113 + Use the checkboxes in the "Select Image Features" applet to select some image features and their corresponding sigma. The higher the sigma, the bigger the vicinity considered when computing values for each pixel, and the bigger its influence over the final value of that feature. Higher sigmas also require more computation and can increase the time required to produce predictions.
111 111 
112 112 You can read more about image features in [[ilastik's documentation>>https://www.ilastik.org/documentation/pixelclassification/pixelclassification]].
113 113 
... ...
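The role of sigma described above can be illustrated with a minimal 1-D Gaussian kernel sketch (this is not webilastik's actual feature code, just a plain-Python illustration of why a larger sigma widens the neighborhood that influences each pixel's feature value):

```python
import math

def gaussian_kernel(sigma: float, radius: int) -> list:
    """Normalized 1-D Gaussian weights over [-radius, radius]."""
    weights = [
        math.exp(-(x * x) / (2.0 * sigma * sigma))
        for x in range(-radius, radius + 1)
    ]
    total = sum(weights)
    return [w / total for w in weights]

k_small = gaussian_kernel(sigma=1.0, radius=5)
k_large = gaussian_kernel(sigma=3.0, radius=5)

# Weight of a pixel 3 units away, relative to the center pixel:
rel_small = k_small[5 + 3] / k_small[5]
rel_large = k_large[5 + 3] / k_large[5]
print(rel_small < rel_large)  # True: bigger sigma, bigger vicinity influence
```

With sigma 1.0 the pixel three units away contributes almost nothing; with sigma 3.0 it still carries over half the center pixel's weight, which is also why larger sigmas cost more computation per feature.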
... ... @@ -117,12 +117,15 @@
117 117 
118 118 === Labeling the image ===
119 119 
120 - In order to classify the pixels of an image into different classes (e.g.: 'foreground' and 'background') ilastik needs you to provide it with samples of each class.
123 + In order to classify the pixels of an image into different classes (e.g. 'foreground' and 'background'), ilastik needs you to provide it with examples of each class.
121 121 
122 - To do so, first select a particular resolution of your dataset (your viewer might interpolate between multiple scales of the dataset, but ilastik operates on a single resolution):
123 123 
124 - [[image:image-20220125165642-1.png]]
126 + ==== Picking an Image Resolution (for multi-resolution images only) ====
125 125 
128 + If your data has multiple resolutions (**not the case in any of the sample datasets**), you'll have to pick one of them in the "Training" widget. Neuroglancer interpolates between multiple scales of the dataset, but ilastik operates on a single resolution:
129 + 
130 + [[image:image-20220911155827-1.png]]
131 + 
126 126 Once you've selected a resolution to train on, you should see a new "training" tab at the top of the viewer:
127 127 
128 128 [[image:image-20220125165832-2.png]]
... ... @@ -131,16 +131,21 @@
131 131 
132 132 [[image:image-20220222151117-1.png]]
133 133 
140 + ==== ====
134 134 
135 - The status display in this applet will show "training on [datasource url]" when you're in training mode.
142 + ==== Painting Labels ====
136 136 
137 - Now you can start adding brush strokes. Select a color from the color picker, check the "Enable Brushing" checkbox to enable brushing (and disable navigation), and click and drag over the image to add brush strokes. Ilastik will map each used color to a "class", and will try to figure out a class for every pixel in the image based on the examples provided by the brush strokes. By painting, you provide ilastik with samples of what a pixel in that particular class should look like. The following image shows an example with 2 classes: teal, representing the "foreground" or the "cell class", and magenta, representing the "background" class.
144 + The status display in the "Training" applet will show "training on [datasource url]" when it's ready for painting.
138 138 
139 - [[image:image-20220222153157-4.png]]
146 + Now you can start adding brush strokes. By default, webilastik will create two kinds of labels: "Background" and "Foreground". You can rename them to your liking or change their colors to something more suitable for you or your dataset. You can also add more labels if you'd like ilastik to classify the pixels of your image into more than two categories.
140 140 
148 + Select one of the labels from the "Current Label" dropdown or by using the "Select Label" button, check the "Enable Brushing" checkbox to enable brushing mode (**and disable navigation**), and click and drag over the image to add brush strokes. Ilastik will map each label color to a "class", and will try to figure out a class for every pixel in the image based on the examples provided by the brush strokes. By painting, you provide ilastik with samples of what a pixel in that particular class should look like. The following image shows an example with 2 classes: magenta, representing the "foreground", and green, representing the "background" class.
149 + 
150 + [[image:image-20220911162555-3.png]]
151 + 
141 141 Once you have some image features selected and brush annotations of at least 2 colors, you can check "Live Update" and ilastik will automatically use your examples to predict a class for every other pixel in your dataset, displaying the results in a "predictions" tab.
142 142 
143 - [[image:image-20220222153610-5.png]]
154 + [[image:image-20220911163127-4.png]]
144 144 
145 145 
146 146 You can keep adding or removing brush strokes to improve your predictions.
... ... @@ -151,26 +151,34 @@
151 151 1. Adjust the layer opacity to better view the predictions or underlying raw data;
152 152 1. Advanced users: edit the shader to render the predictions in any arbitrary way;
153 153 
154 - The image below shows the "predictions" tab with an opacity set to 0.68 using the steps described above:
165 + The image below shows the "predictions" tab with an opacity set to 0.88 using the steps described above:
155 155 
156 - [[image:image-20220125172238-8.png]]
167 + [[image:image-20220911163504-5.png]]
157 157 
158 - You can keep adding or removing features to your model, as well as adding and removing annotations, which will automatically update the predictions tab.
169 + You can keep adding or removing features to your model, as well as adding and removing annotations, which will automatically refresh the predictions tab.
159 159 
160 160 === Exporting Results and Running Jobs ===
161 161 
162 - Once you trained your pixel classifier with the previous applets, you can apply it to other datasets or even the same dataset that was used to do the training on.
173 + Once you have trained your pixel classifier with the previous applets, you can apply it to other datasets, or even to the same dataset that was used for training. You can export your results in two ways:
163 163 
164 - To do so, select a data source by typing in the URL of the data source in the Data Source Url field and select a scale from the data source as they appear beneath the URL field.
175 + ~1. As a "Predictions Map", which is a float32 image with as many channels as the number of Label colors you've used, or;
165 165 
166 - Then, configure a Data Sink, i.e., a destination that will receive the results of the pixel classification. For now, webilastik will only export to ebrains' data-proxy buckets; fill in the name of the bucket and then the prefix (i.e. the path within the bucket) where the results in Neuroglancer's precomputed chunks format should be written to.
177 + 2. As a "Simple Segmentation", which is one 3-channel uint8 image for each of the Label colors you've used. The image will be red where a pixel is more likely to belong to the respective Label, and black everywhere else.
167 167 
168 - [[image:image-20220125190311-2.png]]
179 + To do so, select a data source by typing its URL into the "Url" field of the "Input" fieldset and select a scale from the data source as they appear beneath the URL field. You can also click the "Suggestions..." button to select one of the annotated datasources.
169 169 
181 + Then, configure the Output, i.e., the destination that will receive the results of the pixel classification. For now, webilastik will only export to ebrains' data-proxy buckets:
182 + 
183 + 1. Fill in the name of the data-proxy bucket where the results in Neuroglancer's precomputed chunks format should be written;
184 + 1. Fill in the directory path inside the bucket where the results should be saved. This path will also contain the "info" file of the precomputed chunks format.
185 + 
186 + [[image:image-20220911170735-7.png]]
187 + 
188 + 
170 170 Finally, click the "Export" button; a new job will be created if all the parameters were filled in correctly.
171 171 
172 172 You'll be able to find your results in the data-proxy GUI, at a URL that looks something like this:
173 173 
174 - https:~/~/data-proxy.ebrains.eu/your-bucket-name?prefix=your/selected/prefix
193 + https:~/~/data-proxy.ebrains.eu/your-bucket-name?prefix=your/info/directory/path
175 175 
176 176 [[image:image-20220125191847-3.png]]
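As a rough sketch of how that browsing URL relates to the values you fill into the export Output fields (the bucket name and directory path below are made-up example values, not ones from the guide):

```python
# Hypothetical example values -- substitute your own bucket name and the
# directory path you entered in the export Output fields.
bucket = "ana-workshop-event"
prefix = "my-results/segmentation.precomputed"

# URL pattern shown in the data-proxy GUI, following the scheme above:
browse_url = f"https://data-proxy.ebrains.eu/{bucket}?prefix={prefix}"
print(browse_url)
```

Opening that URL in the data-proxy GUI lists the exported precomputed chunks, including the "info" file written at the top of the chosen directory.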
- webilastik_bucket_paths.png
  - Size
    ... ... @@ -1,1 +1,1 @@
    1 - 3.7MB
    1 + 64.7 KB
  - Content
- image-20220911155827-1.png
  - Author
    ... ... @@ -1,0 +1,1 @@
    1 + XWiki.tomazvieira
  - Size
    ... ... @@ -1,0 +1,1 @@
    1 + 13.1 KB
  - Content
- image-20220911162525-2.png
  - Author
    ... ... @@ -1,0 +1,1 @@
    1 + XWiki.tomazvieira
  - Size
    ... ... @@ -1,0 +1,1 @@
    1 + 124.5 KB
  - Content
- image-20220911162555-3.png
  - Author
    ... ... @@ -1,0 +1,1 @@
    1 + XWiki.tomazvieira
  - Size
    ... ... @@ -1,0 +1,1 @@
    1 + 154.7 KB
  - Content
- image-20220911163127-4.png
  - Author
    ... ... @@ -1,0 +1,1 @@
    1 + XWiki.tomazvieira
  - Size
    ... ... @@ -1,0 +1,1 @@
    1 + 136.9 KB
  - Content
- image-20220911163504-5.png
  - Author
    ... ... @@ -1,0 +1,1 @@
    1 + XWiki.tomazvieira
  - Size
    ... ... @@ -1,0 +1,1 @@
    1 + 49.9 KB
  - Content
- image-20220911165711-6.png
  - Author
    ... ... @@ -1,0 +1,1 @@
    1 + XWiki.tomazvieira
  - Size
    ... ... @@ -1,0 +1,1 @@
    1 + 138.8 KB
  - Content
- image-20220911170735-7.png
  - Author
    ... ... @@ -1,0 +1,1 @@
    1 + XWiki.tomazvieira
  - Size
    ... ... @@ -1,0 +1,1 @@
    1 + 138.9 KB
  - Content