= SGA2 SP3 UC002 KR3.2 - Slow Wave Analysis Pipeline =
...

== How the Pipeline works ==

The design of the pipeline aims at interfacing a variety of general and specific analysis and processing steps in a flexible, modular manner. This enables the pipeline to adapt to diverse types of data (e.g., electrical ECoG or optical calcium imaging recordings) and to different analysis questions. It also makes the analyses a) more reproducible and b) comparable amongst each other, since they rely on the same stack of algorithms and any differences in the analysis are fully transparent.
The individual processing and analysis steps (**blocks**, see the arrow-connected elements below) are organized in sequential **stages** (see the columns below). Following along the stages, the analysis becomes more specific, but it is also possible to branch off after any stage, since each stage yields useful intermediate results and is autonomous, so that it can be reused and recombined. Within each stage, there is a collection of blocks from which the user can select and arrange the analysis via a config file. Thus, the pipeline can be thought of as a curated database of methods on which an analysis can be constructed by drawing a path along the blocks and stages.

(% class="wikigeneratedid" id="H" %)
[[image:pipeline_flowchart.png]]

== Executing the pipeline ==

There are two ways of getting started and testing the pipeline: i) online, using the collab drive and the jupyter hub, or ii) downloading the code and data from GitHub and the collab storage and running everything locally.

=== i) In the collab ===

...

In another browser tab, open [[https:~~/~~/lab.ebrains.eu>>https://lab.ebrains.eu]]

* **Edit the config files**
Each stage has a config file (//pipeline/<stage_name>/config.yaml//) that specifies which analysis/processing blocks to execute and which parameters to use. General and specific information about the blocks and parameters can be found in the README and config files of each stage. The default values are set for an example dataset (ECoG, anesthetized mouse, [[IDIBAPS>>https://kg.ebrains.eu/search/?facet_type[0]=Dataset&q=sanchez-vives#Dataset/2ead029b-bba5-4611-b957-bb6feb631396]]).
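As a purely illustrative sketch of what such a stage config might contain - only the ##INPUT## key is documented on this page; all block and parameter names below are hypothetical placeholders (consult the stage's README and default //config.yaml// for the real options):

{{code language="yaml"}}
# Hypothetical stage config - only INPUT is taken from this page;
# the block and parameter names are placeholders for illustration.
INPUT: /path/to/file        # output file of the previous stage
BLOCKS:                     # which processing blocks to execute, in order
    - detrending
    - frequency_filter
HIGHPASS_FREQUENCY: 0.1     # in Hz
LOWPASS_FREQUENCY: 100      # in Hz
{{/code}}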
* **Run the notebook**
In the jupyter hub, navigate to //drive/My Libraries/My Library/pipeline/showcase_notebooks/run_snakemake_in_collab.ipynb//, or to wherever you copied the //pipeline// folder.

...

For adding new datasets, see //[[pipeline/stage01_data_entry/README.md>>https://drive.ebrains.eu/smart-link/d2e93a2a-09f6-4dce-982d-0370953a4da8/]]//

* **Run the pipeline (or individual stages)**
To run the pipeline with [[snakemake>>https://snakemake.readthedocs.io/en/stable/]], activate the Python environment (##conda activate wavescalephant_env##), make sure you are in the working directory (##pipeline/##), and call ##snakemake## to run the entire pipeline.
To (re-)execute an individual stage, navigate to the corresponding stage folder and call the ##snakemake## command there. When running an individual stage, you may need to manually set the path to the stage's input file (i.e., the output file of the previous stage) in the config file: ##INPUT: /path/to/file##.

== Accessing and using the results ==
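How the results are loaded depends on the stage in question; as a minimal sketch, assuming the stages store their results as Neo ##Block## objects in NIX files (the file path below is a hypothetical placeholder, not the pipeline's actual output layout):

{{code language="python"}}
# Minimal sketch for inspecting a stage output, ASSUMING Neo-compatible
# NIX files; the path is a placeholder - check the stage's config and
# output folder for the real location.
from neo.io import NixIO

io = NixIO("pipeline/stage02_processing/output.nix", mode="ro")
block = io.read_block()

# Signals (and any annotations added by the processing blocks)
# are attached to the block's segments.
for segment in block.segments:
    for asig in segment.analogsignals:
        print(asig.name, asig.shape, asig.sampling_rate)
{{/code}}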