Changes for page 5. How to segment your objects with Webilastik
Last modified by puchades on 2022/09/30 16:01
From version 25.1
edited by puchades on 2022/02/15 09:16
Change comment: There is no comment for this version
To version 24.1
edited by tomazvieira on 2022/01/28 14:40
Change comment: There is no comment for this version
Summary

Page properties (2 modified, 0 added, 0 removed)

Details

Page properties

Author
@@ -1,1 +1,1 @@
-XWiki.puchades
+XWiki.tomazvieira

Content
... ... @@ -1,4 +1,4 @@ 1 -== What is Webilastik? ==1 +== What is webilastik? == 2 2 3 3 4 4 Classic [[ilastik>>https://www.ilastik.org/]] is a simple, user-friendly desktop tool for **interactive image classification, segmentation and analysis**. It is built as a modular software framework, which currently has workflows for automated (supervised) pixel- and object-level classification, automated and semi-automated object tracking, semi-automated segmentation and object counting without detection. Most analysis operations are performed **lazily**, which enables targeted interactive processing of data subvolumes, followed by complete volume analysis in offline batch mode. Using it requires no experience in image processing. ... ... @@ -5,7 +5,7 @@ 5 5 6 6 [[webilastik>>https://app.ilastik.org/]] is a web version of ilastik's Pixel Classification Workflow, integrated with the ebrains ecosystem. It can access the data-proxy buckets for reading and writing (though reading is still suffering from latency issues). It uses Neuroglancer as a 3D viewer as well as compute sessions allocated from the CSCS infrastructure. 7 7 8 -== How to use Webilastik ==8 +== How to use webilastik == 9 9 10 10 === Opening a sample Dataset === 11 11 ... ... @@ -19,19 +19,19 @@ 19 19 === Opening a Dataset from the data-proxy === 20 20 21 21 You can also load Neuroglancer Precomputed Chunks data from the data-proxy; The URLs for this kind of data follow the following scheme: 22 -\\##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="background-color: #3498db;color:#ffffff" %)my-bucket-name(% style="background-color:#9b59b6;color:#ffffff" %)/path/inside/your/bucket(%%)##22 +\\##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="color: rgb(255, 255, 255); background-color: rgb(52, 152, 219)" %)my-bucket-name(% style="color: rgb(255, 255, 255); background-color: rgb(155, 89, 182)" %)/path/inside/your/bucket(%%)## 23 23 24 -So, for example, to load the sample data inside the (% style="background-color: #3498db;color:#ffffff" %)quint-demo(%%) bucket, under the path (% style="background-color:#9b59b6;color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(% style="color:#000000" %) (%%) like in the example below:24 +So, for example, to load the sample data inside the (% style="color: rgb(255, 255, 255); background-color: rgb(52, 152, 219)" %)quint-demo(%%) bucket, under the path (% style="color: rgb(255, 255, 255); background-color: rgb(155, 89, 182)" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(% style="color:#000000" %) (%%) like in the example below: 25 25 26 26 27 27 [[image:image-20220128142757-1.png]] 28 28 29 -=== 29 +=== === 30 30 31 31 you would type a URL like this: 32 32 33 33 34 -##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="background-color: #3498db;color:#ffffff" %)quint-demo(%%)/(% style="background-color:#9b59b6;color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%)##34 +##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="color: rgb(255, 255, 255); background-color: rgb(52, 152, 219)" %)quint-demo(%%)/(% style="color: rgb(255, 255, 255); background-color: rgb(155, 89, 182)" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%)## 35 35 36 36 this scheme is the same whether you're loading data into the Neuroglancer viewer or specifying an input URL in the export applet. 37 37