Last modified by puchades on 2022/09/30 16:01

From version 53.1
edited by tomazvieira
on 2022/09/11 17:13
Change comment: There is no comment for this version
To version 51.1
edited by tomazvieira
on 2022/09/11 17:07
Change comment: Uploaded new attachment "image-20220911170735-7.png", version {1}

... ... @@ -50,7 +50,7 @@
50 50  === Opening a Dataset from the data-proxy ===
51 51  
52 52  You can also load Neuroglancer Precomputed Chunks data from the data-proxy (e.g. the [[ana-workshop-event bucket>>https://wiki.ebrains.eu/bin/view/Collabs/ana-workshop-event/Bucket]]). URLs for this kind of data follow this scheme:
53 -\\##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/v1/buckets/(% style="background-color:#3498db; color:#ffffff" %)my-bucket-name(% style="background-color:#ffffff; color:#000000" %)/(% style="background-color:#9b59b6; color:#ffffff" %)path/inside/your/bucket(%%)##
53 +\\##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/v1/buckets/(% style="background-color:#3498db; color:#ffffff" %)my-bucket-name(% style="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255)" %)/(% style="background-color:#9b59b6; color:#ffffff" %)path/inside/your/bucket(%%)##
54 54  
55 55  where (% style="background-color:#9b59b6; color:#ffffff" %)path/inside/your/bucket(%%) should be the path to the folder containing the dataset's "info" file.
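As a sketch, the URL scheme above can be assembled programmatically. The bucket name and path below are hypothetical placeholders, not real datasets:

```python
def precomputed_url(bucket: str, path: str) -> str:
    """Build a webilastik-style URL for Neuroglancer Precomputed Chunks
    data in an EBRAINS data-proxy bucket. `path` should point to the
    folder containing the dataset's "info" file."""
    base = "https://data-proxy.ebrains.eu/api/v1/buckets"
    return f"precomputed://{base}/{bucket}/{path.strip('/')}"

# Hypothetical bucket and path, for illustration only:
print(precomputed_url("my-bucket-name", "path/inside/your/bucket"))
# precomputed://https://data-proxy.ebrains.eu/api/v1/buckets/my-bucket-name/path/inside/your/bucket
```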
56 56  
... ... @@ -61,7 +61,7 @@
61 61  
62 62  [[image:webilastik_bucket_paths.png]]
63 63  
64 -=== ===
64 +=== ===
65 65  
66 66  you would type a URL like this:
67 67  
... ... @@ -80,6 +80,7 @@
80 80  
81 81  [[image:image-20220125164557-4.png]]
82 82  
83 +You can also click the
83 83  
84 84  ==== A Note on Neuroglancer and 2D data ====
85 85  
... ... @@ -136,21 +136,19 @@
136 136  
137 137  [[image:image-20220222151117-1.png]]
138 138  
139 -==== ====
140 +==== ====
140 140  
141 141  ==== Painting Labels ====
142 142  
143 143  The status display in the "Training" applet will show "training on [datasource url]" when it's ready to start painting.
144 144  
145 -Now you can start adding brush strokes. By default, webilastik will create two kinds of labels: "Background" and "Foreground". You can rename them to your liking or change their colors to something more suitable for you or your dataset. You can also add more labels if you'd like ilastik to classify the pixels of your image into more than two categories.
146 +Now you can start adding brush strokes. Select a color from the color picker, check the "Enable Brushing" checkbox to enable brushing (and disable navigation), and click and drag over the image to add brush strokes. Ilastik will map each used color to a "class", and will try to figure out a class for every pixel in the image based on the examples provided by the brush strokes. By painting, you provide ilastik with samples of what a pixel in that particular class should look like. The following image shows an example with 2 classes: teal, representing the "foreground" or the "cell class", and magenta, representing the "background" class.
146 146  
147 -Select one of the labels from the "Current Label" dropdown or by using the "Select Label" button, check the "Enable Brushing" checkbox to enable brushing mode (**and disable navigation**), and click and drag over the image to add brush strokes. Ilastik will map each used color to a "class", and will try to figure out a class for every pixel in the image based on the examples provided by the brush strokes. By painting, you provide ilastik with samples of what a pixel in that particular class should look like. The following image shows an example with 2 classes: magenta, representing the "foreground" and green, representing the "background" class.
148 +[[image:image-20220222153157-4.png]]
148 148  
149 -[[image:image-20220911162555-3.png]]
150 -
151 151  Once you have some image features selected and some brush annotation of at least 2 colors, you can check "Live Update" and ilastik will automatically use your examples to predict what classes the rest of your dataset should be, displaying the results in a "predictions" tab.
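Conceptually, this training step turns brushed pixels into labeled feature vectors, fits a classifier, and then applies it to every pixel of the dataset. The toy sketch below substitutes a nearest-class-mean rule for ilastik's actual classifier just to show the data flow; all arrays are made up:

```python
import numpy as np

def train_class_means(features, labels):
    """features: (n_pixels, n_features) from brushed pixels;
    labels: (n_pixels,) integer class per brushed pixel.
    Returns the classes and one mean feature vector per class."""
    classes = np.unique(labels)
    return classes, np.stack([features[labels == c].mean(axis=0) for c in classes])

def predict(classes, means, features):
    """Assign every pixel to the class with the nearest mean."""
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Toy example: a single "intensity" feature, two brushed classes.
brushed = np.array([[0.1], [0.2], [0.8], [0.9]])
brushed_labels = np.array([0, 0, 1, 1])
classes, means = train_class_means(brushed, brushed_labels)
print(predict(classes, means, np.array([[0.15], [0.85]])))  # [0 1]
```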
152 152  
153 -[[image:image-20220911163127-4.png]]
152 +[[image:image-20220222153610-5.png]]
154 154  
155 155  
156 156  You can keep adding or removing brush strokes to improve your predictions.
... ... @@ -161,34 +161,26 @@
161 161  1. Adjust the layer opacity to better view the predictions or underlying raw data;
162 162  1. Advanced users: edit the shader to render the predictions in any arbitrary way;
163 163  
164 -The image below shows the "predictions" tab with an opacity set to 0.88 using the steps described above:
163 +The image below shows the "predictions" tab with an opacity set to 0.68 using the steps described above:
165 165  
166 -[[image:image-20220911163504-5.png]]
165 +[[image:image-20220125172238-8.png]]
167 167  
168 -You can keep adding or removing features to your model, as well as adding and removing annotations, which will automatically refresh the predictions tab.
167 +You can keep adding or removing features to your model, as well as adding and removing annotations, which will automatically update the predictions tab.
169 169  
170 170  === Exporting Results and Running Jobs ===
171 171  
172 -Once you have trained your pixel classifier with the previous applets, you can apply it to other datasets, or even to the same dataset that was used for training. You can export your results in two ways:
171 171  Once you have trained your pixel classifier with the previous applets, you can apply it to other datasets, or even to the same dataset that was used for training.
173 173  
174 -~1. As a "Predictions Map", which is a float32 image with as many channels as the number of Label colors you've used; or
173 173  To do so, select a data source by typing its URL into the Data Source Url field, then select a scale from the options that appear beneath the URL field.
175 175  
176 -2. As a "Simple Segmentation", which is one 3-channel uint8 image for each of the Label colors you've used. The image will be red where that pixel is more likely to belong to the respective Label and black everywhere else.
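The "Simple Segmentation" output described above can be sketched with numpy: given a float32 predictions map with one channel per Label, a pixel is painted red where that Label has the highest probability. This is a hedged sketch of the described output format, not webilastik's actual code:

```python
import numpy as np

def simple_segmentation(predictions: np.ndarray, label_index: int) -> np.ndarray:
    """predictions: float32 array of shape (height, width, n_labels).
    Returns a 3-channel uint8 image: red where `label_index` has the
    highest probability, black everywhere else."""
    winner = predictions.argmax(axis=-1) == label_index
    out = np.zeros(predictions.shape[:2] + (3,), dtype=np.uint8)
    out[winner] = (255, 0, 0)  # red
    return out

# Toy 1x2-pixel predictions map with two labels:
preds = np.array([[[0.9, 0.1], [0.2, 0.8]]], dtype=np.float32)
seg = simple_segmentation(preds, 0)  # red at pixel 0, black at pixel 1
```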
175 +Then, configure a Data Sink, i.e. a destination that will receive the results of the pixel classification. For now, webilastik only exports to ebrains' data-proxy buckets. Fill in the name of the bucket and then the prefix (i.e. the path within the bucket) where the results, in Neuroglancer's precomputed chunks format, should be written.
177 177  
178 -To do so, select a data source by typing its URL into the "Url" field of the "Input" fieldset, then select a scale from the options that appear beneath the URL field. You can also click the "Suggestions..." button to select one of the annotated datasources.
177 +[[image:image-20220125190311-2.png]]
179 179  
180 -Then, configure the Output, i.e., the destination that will receive the results of the pixel classification. For now, webilastik will only export to ebrains' data-proxy buckets:
181 -
182 -1. Fill in the name of the data-proxy bucket where the results in Neuroglancer's precomputed chunks format should be written to;
183 -1. Fill in the directory path inside the bucket where the results should be saved to. This path will also contain the "info" file of the precomputed chunks format.
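Putting the two fields above together, the exported dataset's "info" file lands at a predictable location in the bucket. A sketch, using the data-proxy URL scheme from earlier in this page; the bucket name and path are hypothetical:

```python
def results_info_url(bucket: str, prefix: str) -> str:
    """Where the precomputed-chunks "info" file of an export job lands,
    given the data-proxy bucket name and the directory path inside it."""
    base = "https://data-proxy.ebrains.eu/api/v1/buckets"
    return f"{base}/{bucket}/{prefix.strip('/')}/info"

# Hypothetical bucket and prefix, for illustration only:
print(results_info_url("my-bucket-name", "results/run1"))
# https://data-proxy.ebrains.eu/api/v1/buckets/my-bucket-name/results/run1/info
```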
184 -
185 -[[image:image-20220911170735-7.png]]
186 -
187 -
188 188  Finally, click the Export button; if all the parameters were filled in correctly, a new job will be created.
189 189  
190 190  You'll be able to find your results in the data-proxy GUI, in a url that looks something like this:
191 191  
192 -https:~/~/data-proxy.ebrains.eu/your-bucket-name?prefix=your/info/directory/path
183 +https:~/~/data-proxy.ebrains.eu/your-bucket-name?prefix=your/selected/prefix
193 193  
194 194  [[image:image-20220125191847-3.png]]