
From version 16.3
edited by puchades
on 2020/04/07 15:52
Change comment: There is no comment for this version
To version 16.2
edited by puchades
on 2020/04/07 15:51
Change comment: There is no comment for this version

Page properties
Content
... ... @@ -4,7 +4,7 @@
4 4  
5 5  == (% style="color:#c0392b" %)Input requirements(%%) ==
6 6  
7 -For //QuickNII//, the input requirements are described in detail in Puchades et al., 2019. To summarise, images should be in 24-bit PNG or JPEG format, with a resolution up to 16 megapixels (e.g. 4000x4000 or 5000x3000 pixels). Keep in mind that //QuickNII// does not benefit from image resolutions exceeding the resolution of the monitor in use. For a standard FullHD or widescreen display (1920x1080 or 1920x1200 pixels), the useful image area in //QuickNII// is approximately 1500x1000 pixels. Using a similar resolution ensures optimal image-loading performance.
7 +For //QuickNII//, the input requirements are described in Puchades et al., 2019. To summarise, images should be in 24-bit PNG or JPEG format, and can be loaded up to a resolution of 16 megapixels (e.g. 4000x4000 or 5000x3000 pixels). However, //QuickNII// does not benefit from image resolutions exceeding the resolution of the monitor in use. For a standard FullHD or WUXGA display (1920x1080 or 1920x1200 pixels), the useful image area is approximately 1500x1000 pixels. Using a similar resolution ensures optimal image-loading performance.
8 8  
9 9  For //ilastik// (Borg et al., 2019), images are downscaled to enable efficient processing. The pixel classification algorithm relies on input from manual user annotations of the training images, and on the features ‒ intensity, edge and/or texture ‒ of the image pixels. The resizing factor is determined by trial and error: a test run is performed with //ilastik// on images of different sizes to find the optimal resolution for segmentation. As an example, in Yates et al., 2019, the images were downscaled by factors of 0.1 and 0.05 for cellular features and Alzheimer's plaques, respectively (the factor applies to the image width).
10 10
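
The paragraphs above amount to a simple preprocessing step: convert each section image to 24-bit RGB and downscale it before loading. The snippet below is a minimal sketch of that step using the Pillow library; the function names, file paths, target size (~1500x1000 pixels for //QuickNII//) and downscaling factor (0.1 for //ilastik//) are illustrative values taken from the text, not part of either tool's official workflow.

{{code language="python"}}
# Minimal sketch (not part of QuickNII or ilastik): downscale section images
# with Pillow before loading them into either tool. Function names, paths and
# the example size/factor are illustrative assumptions.
from PIL import Image


def prepare_for_quicknii(in_path, out_path, max_size=(1500, 1000)):
    """Convert to 24-bit RGB and fit within ~1500x1000 px, roughly the useful
    image area on a FullHD/WUXGA monitor."""
    img = Image.open(in_path).convert("RGB")   # ensure 24-bit colour
    img.thumbnail(max_size, Image.LANCZOS)     # downscale in place, keep aspect ratio
    img.save(out_path, format="PNG")           # PNG and JPEG are both accepted


def downscale_for_ilastik(in_path, out_path, factor=0.1):
    """Downscale by a linear factor (applied to the width, with the height
    scaled proportionally); the factor itself is found by trial and error."""
    img = Image.open(in_path).convert("RGB")
    new_size = (max(1, int(img.width * factor)), max(1, int(img.height * factor)))
    img.resize(new_size, Image.LANCZOS).save(out_path, format="PNG")
{{/code}}

Running either helper over a folder of sections yields images small enough to load quickly in //QuickNII// or to train an //ilastik// pixel classifier without excessive memory use.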