

Last modified by puchades on 2022/09/30 16:01


= 4. How to use Webilastik =
== What is Webilastik? ==

Classic [[ilastik>>https://www.ilastik.org/]] is a simple, user-friendly desktop tool for **interactive image classification, segmentation and analysis**. It is built as a modular software framework, which currently has workflows for automated (supervised) pixel- and object-level classification, automated and semi-automated object tracking, semi-automated segmentation, and object counting without detection. Most analysis operations are performed **lazily**, which enables targeted interactive processing of data subvolumes, followed by complete volume analysis in offline batch mode. Using it requires no experience in image processing.

[[webilastik>>https://app.ilastik.org/]] is a web version of ilastik's Pixel Classification workflow, integrated with the EBRAINS ecosystem. It can read from and write to data-proxy buckets (though reading still suffers from latency issues). It uses Neuroglancer as a 3D viewer and runs on compute sessions allocated from the CSCS infrastructure.

== How to use Webilastik ==

=== Opening a sample Dataset ===

[[image:image-20220125164204-2.png]]

=== Opening a Dataset from the data-proxy ===

You can also load Neuroglancer Precomputed Chunks data from the data-proxy. URLs for this kind of data follow this scheme:
\\##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="background-color:#3498db; color:#ffffff" %)my-bucket-name(% style="background-color:#9b59b6; color:#ffffff" %)/path/inside/your/bucket(%%)##

For example, to load the sample data inside the (% style="background-color:#3498db; color:#ffffff" %)quint-demo(%%) bucket, under the path (% style="background-color:#9b59b6; color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%), as in the example below:

[[image:image-20220128142757-1.png]]

you would type a URL like this:

##precomputed:~/~/https:~/~/data-proxy.ebrains.eu/api/buckets/(% style="background-color:#3498db; color:#ffffff" %)quint-demo(%%)/(% style="background-color:#9b59b6; color:#ffffff" %)tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed(%%)##

This scheme is the same whether you're loading data into the Neuroglancer viewer or specifying an input URL in the export applet.
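
Since the URL is just the data-proxy bucket endpoint with a ##precomputed:~/~/## prefix, it can be assembled mechanically from the bucket name and the path inside the bucket. A minimal sketch in Python (the helper name is illustrative and not part of Webilastik; note that the ##~~/## sequences in the wiki markup above are XWiki escapes for plain slashes):

```python
def precomputed_bucket_url(bucket: str, path: str) -> str:
    """Build a Neuroglancer Precomputed Chunks URL for an EBRAINS data-proxy bucket.

    Result shape: precomputed://https://data-proxy.ebrains.eu/api/buckets/<bucket>/<path>
    """
    base = "precomputed://https://data-proxy.ebrains.eu/api/buckets"
    # Strip any leading slash so the path joins cleanly onto the bucket segment.
    return f"{base}/{bucket}/{path.lstrip('/')}"

url = precomputed_bucket_url(
    "quint-demo",
    "tg-ArcSwe_mice_precomputed/hbp-00138_122_381_423_s001.precomputed",
)
print(url)
```

The same string works in both places the text mentions: the Neuroglancer viewer's data source field and the export applet's input URL field.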
37 +
=== Viewing 2D Data ===

If your dataset is 2D, like in the example, you can click the "switch to xy layout" button at the top-right corner of the top-left quadrant of the viewport to use a single, 2D viewport:

[[image:image-20220125164416-3.png]]