Last modified by robing on 2022/03/25 09:55

From version 38.1
edited by robing
on 2020/04/17 12:02
Change comment: There is no comment for this version
To version 39.1
edited by robing
on 2020/04/17 12:14
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -29,12 +29,13 @@
29 29  (((
30 30  == Flexible workflows to generate multi-scale analysis scenarios ==
31 31  
32 -This Collab is aimed at experimental and computational neuroscientists interested in the usage of the Neo and Elephant tools in performing data analysis of spiking data.
32 +This Collab is aimed at experimental and computational neuroscientists interested in using the [[Neo>>https://neo.readthedocs.io/en/stable/]] and [[Elephant>>https://elephant.readthedocs.io/en/latest/]] tools to analyze spiking data.
33 +Here, the Collab illustrates the use of these tools in the context of KR3.2, which investigates sleep, anesthesia, and the transition to wakefulness.
33 33  
34 34  == How the Pipeline works ==
35 35  
36 -The design of the pipeline aims at interfacing a variety of general and specific analysis and processing steps in a flexible modular manner. Hence, it enables the pipeline to adapt to diverse types of data (e.b. electrical EEG, or optical Calcium Imaging recordings) and to different analysis questions. This makes the analyses a) more reproducible and b) comparable amongst each other since they rely on the same stack of algorithms and any differences in the analysis are fully transparent.
37 -The individual processing and analysis steps (**blocks**//, //see// //the arrow-connected elements below) are organized in sequential **stages**// (//see the columns below//). //Following along the stages the analysis becomes more specific but also allows to branch off at after any stage as each stage yields useful intermediate results is autonomous so that it can be reused and recombined. Within each stage, there is a collection of blocks from which the user can select and arrange the analysis via a config file. Thus, the pipeline can be thought of as a curated database of methods on which an analysis can be constructed by drawing a path along the blocks and stages.
37 +The design of the pipeline aims at interfacing a variety of general and specific analysis and processing steps in a flexible, modular manner. This enables the pipeline to adapt to diverse types of data (e.g., electrical ECoG or optical calcium imaging recordings) and to different analysis questions. It makes the analyses a) more reproducible and b) comparable with each other, since they rely on the same stack of algorithms and any differences in the analysis are fully transparent.
38 +The individual processing and analysis steps (**blocks**, see the arrow-connected elements below) are organized in sequential **stages** (see the columns below). Along the stages the analysis becomes more specific, but it is also possible to branch off after any stage, as each stage yields useful intermediate results and is autonomous, so it can be reused and recombined. Within each stage, there is a collection of blocks from which the user can select and arrange the analysis via a config file. Thus, the pipeline can be thought of as a curated database of methods on which an analysis is constructed by drawing a path along the blocks and stages.
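The stage/block structure and the config-driven selection can be sketched in plain Python. This is a minimal illustration only; all block names, the `STAGES` registry, and the config layout below are hypothetical, not the pipeline's actual API or config format:

```python
# Hypothetical sketch of a config-driven stage/block pipeline.
# Each stage is a curated collection of named blocks (functions);
# a config selects one path of blocks through the stages.

STAGES = {
    "preprocessing": {
        # subtract the mean of the signal
        "detrend": lambda data: [x - sum(data) / len(data) for x in data],
        # full-wave rectification
        "rectify": lambda data: [abs(x) for x in data],
    },
    "analysis": {
        "peak": lambda data: max(data),
        "total": lambda data: sum(data),
    },
}

def run_pipeline(data, config):
    """Apply, per stage, the blocks selected in the config, in order.

    `config` maps stage name -> list of block names. Each stage's
    output is kept as an intermediate result, so later analyses can
    branch off after any stage.
    """
    intermediate = {}
    for stage, block_names in config.items():
        for name in block_names:
            data = STAGES[stage][name](data)
        intermediate[stage] = data  # branch-off point after each stage
    return intermediate

# A config file would encode this selection of blocks:
config = {"preprocessing": ["detrend", "rectify"], "analysis": ["peak"]}
results = run_pipeline([1.0, 2.0, 3.0, 6.0], config)
```

In the real pipeline the blocks wrap Neo/Elephant processing and analysis routines, and the config is read from a file rather than defined inline.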
38 38  
39 39  (% class="wikigeneratedid" id="H" %)
40 40  [[image:pipeline_flowchart.png]]