Wiki source code of Slow Wave Analysis Pipeline

Version 21.1 by robing on 2020/01/23 14:49

(% class="jumbotron" %)
(((
(% class="container" %)
(((
= (% style="color:inherit" %)Slow Wave Analysis Pipeline(%%) =

= (% style="color:inherit; font-size:24px" %)Integrating multiscale data in a reproducible and adaptable pipeline(%%) =
)))
)))

(% class="row" %)
(((
(% class="col-xs-12 col-sm-8" %)
(((
= What can I find here? =

...

= Who has access? =

Describe the audience of this collab.

== Executing the pipeline ==

[[image:pipeline_flowchart.png]]

=== in the collab (beta) ===

* **Copy the collab drive to your drive space**
** Open the Drive from the left menu
** Select the folders 'pipeline' and 'datasets'
** Select 'Copy', and then 'My Library' from the dropdown 'Other Libraries'

* **Start a Jupyter Hub instance**
Copy the URL 'jupyterhub-preview.apps-dev.hbp.eu' into another browser tab.

* **Edit the config files**
Each stage has a config file to specify which analysis/processing blocks to execute and which parameters to use. General and specific information about the blocks and parameters can be found in the README and config files. The default values are set for an example dataset (ECoG, anesthetized mouse, IDIBAPS [ref]).
** stage01_data_entry: [README.md](), [config.yaml]()
** stage02_preprocessing: [README.md](), [config.yaml]()
** stage03_trigger_detection: [README.md](), [config.yaml]()
** stage04_wavefront_detection: [README.md](), [config.yaml]()
** stage05_wave_characterization: [README.md](), [config.yaml]()

* **Run the notebook**
In the Jupyter Hub, navigate to `drive/My Libraries/My Library/pipeline/showcase_notebooks/run_snakemake_in_collab.ipynb`, or to wherever you copied the 'pipeline' folder.
Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake.

* **Coming soon**
** Use of KnowledgeGraph API
** Provenance Tracking
** HPC support

=== locally ===

* **Get the code**
The source code of the pipeline is available on Github: [INM-6/wavescalephant]('https:~/~/github.com/INM-6/wavescalephant') and can be cloned to your machine ([how to use Github]()).

* **Build the Python environment**
In the wavescalephant repository, there is an environment file (`pipeline/envs/wavescalephant_env.yaml`) specifying the required packages and versions. To build the environment, we recommend using *conda* ([how to get started with conda]()):
`conda env create ~-~-file pipeline/envs/wavescalephant_env.yaml`

* **Edit the settings**
The settings file specifies the path to the output folder where results are saved. Open the template file `pipeline/settings_template.py`, set the `output_path` to the desired path, and save it as `pipeline/settings.py`.

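A minimal `pipeline/settings.py` might look like the sketch below. The specific path is only an example; the template file defines the actual variables the pipeline expects:

```python
# Sketch of a possible pipeline/settings.py
# (the path below is an example; point it at your desired output folder)
import os

# absolute path to the folder where all pipeline results are written
output_path = os.path.join(os.path.expanduser('~'), 'wavescalephant_output')
```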
* **Edit the config files**
Each stage has a config file to specify which analysis/processing blocks to execute and which parameters to use. Edit the config template files `pipeline/stageXX_<stage_name>/config_template.yaml` according to your dataset and analysis goal, and save them as `pipeline/stageXX_<stage_name>/config.yaml`. A detailed description of the available parameter settings and their meaning is given in the comments of the template files; a more general description of the working mechanism of each stage can be found in the respective README file `pipeline/stageXX_<stage_name>/README.md`.

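As an illustration, a stage `config.yaml` could contain entries along these lines. The key names shown here (other than `INPUT`, which is mentioned below) are hypothetical examples, not the pipeline's actual parameter names; consult the stage's `config_template.yaml` for the real ones:

```yaml
# Hypothetical sketch of a stage config.yaml; key names are illustrative only.
# input file for this stage (the output file of the previous stage)
INPUT: /path/to/previous_stage/output.nix
# which processing blocks to execute, in order
BLOCKS:
    - detrending
    - frequency_filter
# example block parameters
HIGHPASS_FREQ: 0.1  # Hz
LOWPASS_FREQ: 100   # Hz
```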
* **Enter a dataset**
See `pipeline/stage01_data_entry/README.md`.

* **Run the pipeline (-stages)**
To run the pipeline with snakemake ([intro to snakemake]()), activate the Python environment (`conda activate wavescalephant_env`), make sure you are in the working directory `pipeline/`, and call `snakemake` to run the entire pipeline.
To (re-)execute an individual stage, navigate to the corresponding stage folder and call the `snakemake` command there. When running an individual stage, you may need to manually set the path to the stage's input file (i.e. the output file of the previous stage) in the config file: `INPUT: /path/to/file`.

== Accessing and using the results ==

All results are stored in the path specified in the `settings.py` file. The folder structure reflects the organization of the pipeline into stages and blocks. All intermediate results are stored as `.nix` files using the Neo data format ([Neo]()) and can be loaded with `neo.NixIO('/path/to/file.nix').read_block()` ([documentation]()).
Additionally, most blocks produce a figure, and each stage a report file, to give an overview of the execution log, parameters, and intermediate results, and to help with debugging.
The final stage (*stage05_wave_characterization*) stores the results as pandas.DataFrames ([pandas]()) in `.csv` files, separately for each measure as well as in a combined dataframe for all measures.

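For example, a stage 5 result table can be loaded and summarized with pandas. The file name and column names below are placeholders for illustration; use the actual `.csv` files in your output folder:

```python
# Load a stage05 result table with pandas and compute a summary statistic.
# The CSV content is inlined here for illustration; in practice you would call
# pd.read_csv('/path/to/output/stage05_wave_characterization/<measure>.csv')
# (file and column names are placeholders, not the pipeline's actual names).
import io
import pandas as pd

example_csv = io.StringIO("wave_id,velocity_mean\n0,12.5\n1,9.8\n")
df = pd.read_csv(example_csv)
print(df['velocity_mean'].mean())  # average over all detected waves
```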
== References ==

== Acknowledgments ==

This open source software code was developed in part or in whole in the Human Brain Project, funded from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).
)))


(% class="col-xs-12 col-sm-4" %)
(((
{{box title="**Contents**"}}
{{toc/}}
{{/box}}


)))
)))