Changes for page "5. How to segment your objects with Webilastik"
Last modified by puchades on 2022/09/30 16:01
From version 27.1
edited by tomazvieira
on 2022/02/22 15:11
Change comment:
Uploaded new attachment "image-20220222151117-1.png", version {1}
To version 20.1
edited by tomazvieira
on 2022/01/25 19:18
Change comment:
Uploaded new attachment "image-20220125191847-3.png", version {1}
Summary

Page properties (2 modified, 0 added, 0 removed)
Attachments (0 modified, 0 added, 2 removed)
Details
Page properties

Title

@@ -1,1 +1,1 @@
-4. How to use Webilastik
+4. How to use webilastik

Content
@@ -1,4 +1,4 @@
-== What is Webilastik? ==
+== What is webilastik? ==
 
 Classic [[ilastik>>https://www.ilastik.org/]] is a simple, user-friendly desktop tool for **interactive image classification, segmentation and analysis**. It is built as a modular software framework, which currently has workflows for automated (supervised) pixel- and object-level classification, automated and semi-automated object tracking, semi-automated segmentation, and object counting without detection. Most analysis operations are performed **lazily**, which enables targeted interactive processing of data subvolumes, followed by complete volume analysis in offline batch mode. Using it requires no experience in image processing.
@@ -5,38 +5,16 @@
 
 [[webilastik>>https://app.ilastik.org/]] is a web version of ilastik's Pixel Classification Workflow, integrated with the ebrains ecosystem. It can access the data-proxy buckets for reading and writing (though reading still suffers from latency issues). It uses Neuroglancer as a 3D viewer, as well as compute sessions allocated from the CSCS infrastructure.
 
-== How to use Webilastik ==
+== How to use webilastik ==
 
 === Opening a sample Dataset ===
 
-Go to [[https:~~/~~/app.ilastik.org/>>https://app.ilastik.org/]] and load a [[Neuroglancer Precomputed Chunks dataset>>https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed]]. You can e.g. use a sample dataset that is already on the server by pasting the following URL into Neuroglancer's prompt:
+Go to [[app.ilastik.org>>app.ilastik.org/]] and load a [[Neuroglancer Precomputed Chunks dataset>>https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed]]. You can e.g. use a sample dataset that is already on the server by pasting the following URL into Neuroglancer's prompt:
 
 precomputed:~/~/https:~/~/app.ilastik.org/public/images/c_cells_2.precomputed
 
 [[image:image-20220125164204-2.png]]
 
-=== Opening a Dataset from the data-proxy ===
-
-You can also load Neuroglancer Precomputed Chunks data from the data-proxy. URLs for this kind of data follow this scheme:
-##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="background-color:#3498db; color:#ffffff" %)my-bucket-name(% style="background-color:#9b59b6; color:#ffffff" %)/path/inside/your/bucket(%%)##
-
-So, for example, to load the sample data inside the (% style="background-color:#3498db; color:#ffffff" %)quint-demo(%%) bucket, under the path (% style="background-color:#9b59b6; color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%), like in the example below:
-
-[[image:image-20220128142757-1.png]]
-
-you would type a URL like this:
-
-##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="background-color:#3498db; color:#ffffff" %)quint-demo(%%)/(% style="background-color:#9b59b6; color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%)##
-
-This scheme is the same whether you're loading data into the Neuroglancer viewer or specifying an input URL in the export applet.
-=== Viewing 2D Data ===
-
 If your dataset is 2D, like in the example, you can click the "switch to xy layout" button at the top-right corner of the top-left quadrant of the viewport to use a single, 2D viewport:
 
 [[image:image-20220125164416-3.png]]
@@ -101,16 +101,12 @@
 
 Once you have trained your pixel classifier with the previous applets, you can apply it to other datasets, or even to the same dataset that was used for the training.
 
-To do so, select a data source by typing in the URL of the data as it appears beneath the URL field.
+To do so, select a data source by typing the URL of the datasource into the Data Source Url field, and select a scale from the data source.
 
 Then, configure a Data Sink, i.e., a destination that will receive the results of the pixel classification. For now, webilastik will only export to ebrains' data-proxy buckets; fill in the name of the bucket and then the prefix (i.e., the path within the bucket) where the results, in Neuroglancer's precomputed chunks format, should be written.
 
-[[image:image-20220125190311-2.png]]
-
 Finally, click the export button; a new job will eventually be created if all the parameters were filled in correctly.
 
 You'll be able to find your results in the data-proxy GUI, at a URL that looks something like this:
 
 https:~/~/data-proxy.ebrains.eu/your-bucket-name?prefix=your/selected/prefix
-
-[[image:image-20220125191847-3.png]]
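For reference, the data-proxy URL scheme described in the removed "Opening a Dataset from the data-proxy" section above is easy to assemble programmatically. Below is a minimal, illustrative Python sketch: the helper function is ours, not part of webilastik, and the bucket/path are the quint-demo example values from the text.

{{code language="python"}}
# Illustrative helper (not part of webilastik): builds a "precomputed://" URL
# for a dataset stored in an ebrains data-proxy bucket, following the scheme
# described in the section above.
DATA_PROXY_BUCKETS = "https://data-proxy.ebrains.eu/api/buckets"

def precomputed_url(bucket: str, path_in_bucket: str) -> str:
    """URL to paste into Neuroglancer's prompt or the export applet's input field."""
    return f"precomputed://{DATA_PROXY_BUCKETS}/{bucket}/{path_in_bucket.strip('/')}"

# The quint-demo example from the text:
print(precomputed_url(
    "quint-demo",
    "tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed",
))
# -> precomputed://https://data-proxy.ebrains.eu/api/buckets/quint-demo/tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed
{{/code}}

As the removed text notes, the same URL works both for loading data into the Neuroglancer viewer and as an input URL in the export applet.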
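Similarly, once an export job has finished, you can sanity-check the results outside the GUI by fetching the dataset's top-level ##info## file (part of the Neuroglancer precomputed chunks format) through the same data-proxy object URLs. This is a hedged sketch under two assumptions: the bucket/prefix placeholders come from the URL above, and the bucket is readable without extra headers; a private bucket would additionally need an ebrains access token in an Authorization header.

{{code language="python"}}
import json
import urllib.request

bucket = "your-bucket-name"       # placeholder, as in the GUI URL above
prefix = "your/selected/prefix"   # placeholder, as in the GUI URL above

# Every Neuroglancer precomputed dataset has an "info" JSON at its root.
info_url = f"https://data-proxy.ebrains.eu/api/buckets/{bucket}/{prefix}/info"

# Assumes a publicly readable bucket; private buckets need an
# "Authorization: Bearer <ebrains token>" header on this request.
with urllib.request.urlopen(info_url) as response:
    info = json.load(response)

# A valid precomputed dataset describes its data type and resolution scales.
print(info["type"], info["data_type"])
print([scale["resolution"] for scale in info["scales"]])
{{/code}}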
Attachments

image-20220128142757-1.png (removed)
Author: XWiki.tomazvieira
Size: 53.7 KB

image-20220222151117-1.png (removed)
Author: XWiki.tomazvieira
Size: 225.3 KB