Version 40.1 by robing on 2020/04/21 12:36

robing 1.1 1 (% class="jumbotron" %)
2 (((
3 (% class="container" %)
4 (((
robing 36.1 5 = Slow Wave Analysis Pipeline =
robing 1.1 6
denker 22.2 7 (% class="wikigeneratedid" id="HUseCaseSGA2-SP3-002:IntegratingmultiscaledataA0inareproducibleandadaptablepipeline" %)
robing 36.1 8 (% style="font-size:24px" %)**Use Case SGA2-SP3-002: Integrating multi-scale data in a reproducible and adaptable pipeline**
denker 25.1 9
denker 30.1 10 To be discussed (author order, contributions, ...):
denker 25.1 11
12 Experiments: ...?
13
denker 30.1 14 Implementation: Robin Gutzen^^1^^, Elena Pastorelli^^2^^, ...
denker 25.1 15
denker 28.1 16 Lead: Michael Denker^^1^^, Sonja Grün^^1^^, Pier Stanislao Paolucci^^2^^, Andrew Davison?
denker 25.1 17
18 ,,1) Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany,,
19
debonisg 34.1 20 ,,2) Dipartimento di Fisica, Università di Cagliari and INFN Sezione di Roma, Italy,,
denker 25.1 21
22
robing 1.1 23 )))
24 )))
25
26 (% class="row" %)
27 (((
28 (% class="col-xs-12 col-sm-8" %)
29 (((
denker 23.1 30 == Flexible workflows to generate multi-scale analysis scenarios ==
robing 1.1 31
robing 39.1 32 This Collab is aimed at experimental and computational neuroscientists who are interested in using the [[Neo>>https://neo.readthedocs.io/en/stable/]] and [[Elephant>>https://elephant.readthedocs.io/en/latest/]] tools for the analysis of spiking data.
33 Here, the collab illustrates the use of these tools in the context of KR3.2, investigating sleep, anesthesia, and the transition to wakefulness.
robing 1.1 34
robing 36.1 35 == How the Pipeline works ==
36
robing 39.1 37 The pipeline is designed to interface a variety of general and specific analysis and processing steps in a flexible, modular manner. This enables the pipeline to adapt to diverse types of data (e.g., electrical ECoG or optical calcium imaging recordings) and to different analysis questions. It also makes the analyses a) more reproducible and b) comparable with each other, since they rely on the same stack of algorithms and any differences in the analysis are fully transparent.
38 The individual processing and analysis steps (**blocks**, see the arrow-connected elements below) are organized in sequential **stages** (see the columns below). Along the stages, the analysis becomes more specific, but it is also possible to branch off after any stage, since each stage yields useful intermediate results and is autonomous, so that it can be reused and recombined. Within each stage, there is a collection of blocks from which the user selects and arranges the analysis via a config file. Thus, the pipeline can be thought of as a curated database of methods, on which an analysis is constructed by drawing a path along the blocks and stages.
robing 36.1 39
robing 37.1 40 (% class="wikigeneratedid" id="H" %)
41 [[image:pipeline_flowchart.png]]
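To make the block/stage principle concrete, the following toy sketch (illustrative only, not the actual pipeline code; the block functions and the ##BLOCK_ORDER## key are invented for this example) shows how a stage applies a user-selected sequence of blocks, with the selection and order taken from that stage's config file.

{{code language="python"}}
# Toy illustration of the block/stage principle (not the actual pipeline code).
import yaml


def rectify(values):
    """Example block: take the absolute value of each sample."""
    return [abs(v) for v in values]


def normalize(values):
    """Example block: scale the samples to a peak amplitude of 1."""
    peak = max(abs(v) for v in values)
    return [v / peak for v in values]


# the curated collection of blocks available in this (toy) stage
AVAILABLE_BLOCKS = {'rectify': rectify, 'normalize': normalize}


def run_stage(values, config_path):
    """Apply the blocks listed in the stage's config file, in order."""
    with open(config_path) as f:
        config = yaml.safe_load(f)
    for block_name in config['BLOCK_ORDER']:  # key name invented for this example
        values = AVAILABLE_BLOCKS[block_name](values)
    return values


# e.g., with a config.yaml containing:  BLOCK_ORDER: [rectify, normalize]
# run_stage([-2.0, 1.0, 4.0], 'config.yaml')  returns  [0.5, 0.25, 1.0]
{{/code}}

In the actual pipeline, this orchestration is handled by snakemake, based on the config files of the individual stages.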
robing 36.1 42
robing 10.1 43 == Executing the pipeline ==
robing 6.1 44
robing 38.1 45 There are two ways of getting started and testing the pipeline: i) online, using the Collab drive and the Jupyter Hub, or ii) downloading the code and data from GitHub and the Collab storage and running the pipeline locally.
robing 8.1 46
robing 38.1 47 === i) In the collab ===
48
robing 36.1 49 * (((
robing 38.1 50 **Copy the collab drive to your personal drive space**
robing 36.1 51
robing 38.1 52 * Open the Drive from the left menu
robing 40.1 53 * Select the folders //pipeline// and //datasets//,
54 and the notebook //run_snakemake_in_collab.ipynb//
robing 36.1 55 * Select 'Copy', and then 'My Library' from the dropdown 'Other Libraries'
robing 19.1 56
robing 36.1 57 )))
robing 19.1 58 * **Start a Jupyter Hub instance**
robing 38.1 59 In another browser tab, open [[https:~~/~~/lab.ebrains.eu>>https://lab.ebrains.eu]]
robing 40.1 60
robing 10.1 61 * **Edit the config files**
robing 40.1 62 Each stage has a config file (//pipeline/<stage_name>/config.yaml//) that specifies which analysis/processing blocks to execute and which parameters to use. General and specific information about the blocks and parameters can be found in the README and config files of each stage. The default values are set for an example dataset (ECoG, anesthetized mouse, [[IDIBAPS>>https://kg.ebrains.eu/search/?facet_type[0]=Dataset&q=sanchez-vives#Dataset/2ead029b-bba5-4611-b957-bb6feb631396]]).
robing 10.1 63
robing 20.1 64 * **Run the notebook**
robing 38.1 65 In the Jupyter Hub, navigate to //drive/My Libraries/My Library/pipeline/showcase_notebooks/run_snakemake_in_collab.ipynb//, or to wherever you copied the //pipeline// folder.
robing 40.1 66 Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake (a condensed sketch of these steps is given after this list).
robing 20.1 67
robing 10.1 68 * **Coming soon**
69 ** Use of KnowledgeGraph API
70 ** Provenance Tracking
71 ** HPC support
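The sketch below condenses what the steps in //run_snakemake_in_collab.ipynb// amount to; the package list, drive path, and snakemake options are assumptions for illustration, so follow the actual notebook for the authoritative commands.

{{code language="python"}}
# Condensed sketch of the notebook steps (illustrative; package names and
# paths are assumptions, the actual notebook may differ).

# 1) install the required packages into the running Python kernel
!pip install --quiet --user snakemake neo elephant pyyaml

# 2) change into the copied pipeline folder on the Collab drive (assumed location)
import os
os.chdir(os.path.expanduser('~/drive/My Libraries/My Library/pipeline'))

# 3) run the full pipeline with snakemake
!snakemake --cores 1
{{/code}}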
72
robing 38.1 73 === ii) Local execution ===
robing 10.1 74
75 * **Get the code**
robing 38.1 76 The source code of the pipeline is available on GitHub: [[INM-6/wavescalephant>>https://github.com/INM-6/wavescalephant]] and can be cloned to your machine ([[how to GitHub>>https://guides.github.com/activities/hello-world/]]).
robing 10.1 77
robing 38.1 78 * (((
79 **Build the Python environment**
80 In the wavescalephant git repository, there is an environment file ([[pipeline/envs/wavescalephant_env.yaml>>https://drive.ebrains.eu/f/efe2ecf0874d4402bb11/]]) specifying the required packages and versions. To build the environment, we recommend using conda ([[how to get started with conda>>https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html]]):
81 ##conda env create ~-~-file pipeline/envs/wavescalephant_env.yaml##
robing 10.1 82
robing 38.1 83 )))
robing 10.1 84 * **Edit the settings**
robing 38.1 85 The settings file specifies the path to the output folder where results are saved. Open the template file //[[pipeline/settings_template.py>>https://drive.ebrains.eu/f/b6dbd9f15e4f4d97af17/]]//, set the ##output_path## to the desired location, and save the file as //pipeline/settings.py// (a minimal example is sketched after this list).
robing 10.1 86
87 * **Edit the config files**
robing 38.1 88 Each stage has a config file that specifies which analysis/processing blocks to execute and which parameters to use. Edit the config template files //pipeline/stageXX_<stage_name>/config_template.yaml// according to your dataset and analysis goal, and save them as //pipeline/stageXX_<stage_name>/config.yaml//. A detailed description of the available parameter settings and their meaning is given in the comments of the template files, and a more general description of how each stage works can be found in the respective README file //pipeline/stageXX_<stage_name>/README.md//.
89 //Links are view-only//
90 ** full pipeline: [[README.md>>https://drive.ebrains.eu/f/ec474df6919a4089832e/]], config.yaml
91 ** stage01_data_entry: [[README.md>>https://drive.ebrains.eu/f/b46ffe259b3a4a51a277/]], [[config.yaml>>https://drive.ebrains.eu/f/8de751f48d7d47edaec1/]]
92 ** stage02_processing: [[README.md>>https://drive.ebrains.eu/f/7f19d89913624425bf63/]], [[config.yaml>>https://drive.ebrains.eu/f/b1607671f6f2468aa43c/]]
93 ** stage03_trigger_detection: [[README.md>>https://drive.ebrains.eu/f/94d12860dde84bbab7b1/]], [[config.yaml>>https://drive.ebrains.eu/f/6dfb712d5fa24f4f9fcf/]]
94 ** stage04_wavefront_detection: [[README.md>>https://drive.ebrains.eu/d/9c53abd5eaf543b28615/]], [[config.yaml>>https://drive.ebrains.eu/f/9534e46c4fae41c78f17/]]
95 ** stage05_wave_characterization: [[README.md>>https://drive.ebrains.eu/f/4d79f3e314474c22a781/]], [[config.yaml>>https://drive.ebrains.eu/f/1689dda03be04251b85f/]]
robing 10.1 96
97 * **Enter a dataset**
robing 38.1 98 There are two test datasets in the collab drive (IDIBAPS and LENS), for which corresponding config files and scripts already exist in the data_entry stage, so these datasets are ready to be used and analyzed.
99 For adding new datasets, see //[[pipeline/stage01_data_entry/README.md>>https://drive.ebrains.eu/f/b46ffe259b3a4a51a277/]]//.
robing 10.1 100
101 * **Run the pipeline (-stages)**
robing 38.1 102 To run the pipeline with snakemake (see the [[Snakemake documentation>>https://snakemake.readthedocs.io]]), activate the Python environment with ##conda activate wavescalephant_env##, make sure you are in the working directory ##pipeline/##, and call ##snakemake## to run the entire pipeline.
103 To (re-)execute an individual stage, navigate to the corresponding stage folder and call the ##snakemake## command there. When running an individual stage on its own, you may need to manually set the path to the stage's input file (i.e., the output file of the previous stage) in its config file: ##INPUT: /path/to/file##.
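As mentioned under "Edit the settings" above, a minimal //pipeline/settings.py// could look as follows; ##output_path## is the only variable described in this guide, and the concrete path is just an example.

{{code language="python"}}
# Minimal example of pipeline/settings.py, derived from settings_template.py.
# Only output_path is described in this guide; the example path is arbitrary.
import os

# absolute path to the folder in which all stage outputs will be written
output_path = os.path.expanduser('~/wavescalephant_results/')
{{/code}}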
robing 10.1 104
105 == Accessing and using the results ==
106
robing 38.1 107 All results are stored in the path specified in the //settings.py// file. The folder structure reflects the organization of the pipeline into stages and blocks. All intermediate results are stored as //.nix// files using the [[Neo data format>>https://neo.readthedocs.io/en/stable/]] and can be loaded with ##neo.NixIO('/path/to/file.nix').read_block()##. Additionally, most blocks produce a figure, and each stage produces a report file, to give an overview of the execution log, parameters, and intermediate results, and to help with debugging. The final stage (//stage05_wave_characterization//) stores the results as [[//pandas.DataFrames//>>https://pandas.pydata.org/]] in //.csv// files, separately for each measure as well as in a combined dataframe for all measures.
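For example, intermediate and final results can be inspected in Python roughly as follows; the file names below are placeholders, so substitute the actual files found in your output folder.

{{code language="python"}}
# Sketch for inspecting pipeline results (file names are placeholders).
import neo
import pandas as pd

# load an intermediate result (a Neo Block stored as .nix)
block = neo.NixIO('/path/to/output/stage02_processing/some_result.nix').read_block()
print(block.segments[0].analogsignals)   # e.g., the processed signals

# load the measures computed by the final stage
df = pd.read_csv('/path/to/output/stage05_wave_characterization/some_measure.csv')
print(df.head())
{{/code}}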
robing 10.1 108
109 == References ==
110
111
denker 22.1 112 == License (to discuss) ==
113
114 All text and example data in this collab are licensed under the Creative Commons CC-BY 4.0 license. The software code is licensed under a modified BSD license.
115
denker 29.1 116 [[image:https://i.creativecommons.org/l/by/4.0/88x31.png||style="float:left"]]
117
119
robing 10.1 120 == Acknowledgments ==
121
denker 22.1 122 This open source software code was developed in part or in whole in the Human Brain Project, funded from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).
denker 33.1 123
124
125 [[image:logos_sga2_sp3_uc002.png||alt="Logos SP3 Use Case 2"]]
robing 1.1 126 )))
127
128
130
robing 1.1 131 (% class="col-xs-12 col-sm-4" %)
132 (((
133 {{box title="**Contents**"}}
134 {{toc/}}
135 {{/box}}
136
137
138 )))
139 )))