Changes for page SGA2 SP3 UC002 KR3.2 - Slow Wave Analysis Pipeline
Last modified by robing on 2022/03/25 09:55
Summary
- Page properties (2 modified, 0 added, 0 removed)
Details
- Page properties
- Author
@@ -1,1 +1,1 @@
-XWiki.denker
+XWiki.robing

- Content
@@ -71,22 +71,26 @@
In another browser tab, open [[https:~~/~~/lab.ebrains.eu>>https://lab.ebrains.eu]]

* **Edit the config files**
-Each stage has config files (//pipeline/<stage_name>/configs/config_<profile>.yaml//) to specify which analysis/processing blocks to execute and which parameters to use. General and specific information about the blocks and parameters can be found in the README and config files of each stage. There are preset configuration profiles for the benchmark datasets IDIBAPS ([[ECoG, anesthetized mouse>>https://kg.ebrains.eu/search/?facet_type[0]=Dataset&q=sanchez-vives#Dataset/2ead029b-bba5-4611-b957-bb6feb631396]]) and LENS ([[Calcium Imaging, anesthetized mouse>>https://kg.ebrains.eu/search/instances/Dataset/71285966-8381-48f7-bd4d-f7a66afa9d79]]).
+Each stage has a config file (//pipeline/<stage_name>/config.yaml//) to specify which analysis/processing blocks to execute and which parameters to use. General and specific information about the blocks and parameters can be found in the README and config files of each stage. The default values are set for an example dataset (ECoG, anesthetized mouse, [[IDIBAPS>>https://kg.ebrains.eu/search/?facet_type[0]=Dataset&q=sanchez-vives#Dataset/2ead029b-bba5-4611-b957-bb6feb631396]]).

* **Run the notebook**
In the jupyter hub, navigate to //drive/My Libraries/My Library/pipeline/showcase_notebooks/run_snakemake_in_collab.ipynb//, or where you copied the //pipeline// folder to.
-Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake
+Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake.
+
+* **Coming soon**
+** Use of KnowledgeGraph API
+** Provenance Tracking
+** HPC support

=== ii) Local execution ===

* **Get the code**
-The source code of the pipeline is available via Github: [[INM-6/wavescalephant>>https://github.com/INM-6/wavescalephant]] and can be cloned to your machine ([[how to get started with Github>>https://guides.github.com/activities/hello-world/]]).
+The source code of the pipeline is available via Github: [[INM-6/wavescalephant>>https://github.com/INM-6/wavescalephant]] and can be cloned to your machine ([[how to Github>>https://guides.github.com/activities/hello-world/]]).

* (((
**Build the Python environment**
-In the wavescalephant git repository, there is an environment file ([[pipeline/environment.yaml>>https://drive.ebrains.eu/smart-link/1a0b15bb-be87-46ee-b838-4734bc320d20/]]) specifying the required packages and versions. To build the environment, we recommend using conda ([[how to get started with conda>>https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html]]).
-##conda env create ~-~-file environment.yaml
-conda activate wavescalephant_env##
+In the wavescalephant git repository, there is an environment file ([[pipeline/envs/wavescalephant_env.yml>>https://drive.ebrains.eu/lib/905d7321-a16b-4147-8cca-31d710d1f946/file/pipeline/envs/wavescalephant_env.yml]]) specifying the required packages and versions. To build the environment, we recommend using conda ([[how to get started with conda>>https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html]]).
+##conda env create ~-~-file /envs/wavescalephant_env.yml##

)))
* **Edit the settings**
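The //Run the notebook// step in the hunk above ultimately just invokes ##snakemake## from the //pipeline/// working directory. A minimal Python sketch of such a notebook cell (not taken from the actual notebook; the working-directory path and core count are illustrative assumptions):

{{code language="python"}}
# Hypothetical notebook cell: run the full pipeline by calling the snakemake CLI.
# The path below is a placeholder for wherever the pipeline/ folder was copied to.
import subprocess

subprocess.run(
    ["snakemake", "--cores", "1"],  # the page simply says to call `snakemake`
    cwd="/path/to/pipeline",        # must be the pipeline/ working directory
    check=True,                     # raise an error if any rule fails
)
{{/code}}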
@@ -93,14 +93,14 @@
The settings file specifies the path to the output folder, where results are saved to. Open the template file //[[pipeline/settings_template.py>>https://drive.ebrains.eu/lib/905d7321-a16b-4147-8cca-31d710d1f946/file/pipeline/settings_template.py]]//, set the ##output_path## to the desired path, and save it as //pipeline/settings.py//.

* **Edit the config files**
-Each stage uses a config file to specify which analysis/processing blocks to execute and which parameters to use. Edit the config template files //pipeline/stageXX_<stage_name>/configs/config_template.yaml// according to your dataset and analysis goal, and save them as //pipeline/stageXX_<stage_name>/configs/config_<profile>.yaml//. A detailed description of the available parameter settings and their meaning is commented in the template files, and a more general description of the working mechanism of each stage can be found in the respective README file //pipeline/stageXX_<stage_name>/README.md//.
+Each stage has a config file to specify which analysis/processing blocks to execute and which parameters to use. Edit the config template files //pipeline/stageXX_<stage_name>/config_template.yaml// according to your dataset and analysis goal, and save them as //pipeline/stageXX_<stage_name>/config.yaml//. A detailed description of the available parameter settings and their meaning is commented in the template files, and a more general description of the working mechanism of each stage can be found in the respective README file //pipeline/stageXX_<stage_name>/README.md//.
//Links are view-only//
** full pipeline: [[README.md>>https://drive.ebrains.eu/smart-link/d2e93a2a-09f6-4dce-982d-0370953a4da8/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/7948fbb3-bf8a-4785-9b28-d5c15a1aafa7/]]
-** stage01_data_entry: [[README.md>>https://drive.ebrains.eu/smart-link/896f8880-a7d1-4a30-adbf-98759860fed5/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/9bef8f59-1007-48c4-b5ba-30de4ff18f34/]]
-** stage02_processing: [[README.md>>https://drive.ebrains.eu/smart-link/01f21fa5-94f7-4883-8388-cc50957f9c81/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/7e75caf6-e2d6-4393-a97c-4f481c908cf8/]]
-** stage03_trigger_detection: [[README.md>>https://drive.ebrains.eu/smart-link/18d276cd-a691-4ee1-81c6-7978cef9c1b4/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/dfa375c0-cc80-4f95-b3ed-40140acbd96b/]]
-** stage04_wavefront_detection: [[README.md>>https://drive.ebrains.eu/smart-link/a8e80096-06a0-4ff4-b645-90e134e46ac5/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/3a54be8c-b9f4-4698-a85d-6ad97990b40a/]]
-** stage05_wave_characterization: [[README.md>>https://drive.ebrains.eu/smart-link/3009a214-a11f-424c-8a6e-13e7506545eb/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/83f68955-0ca8-4123-9734-6e93349ca3e3/]]
+** stage01_data_entry: [[README.md>>https://drive.ebrains.eu/smart-link/896f8880-a7d1-4a30-adbf-98759860fed5/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/d429639d-b76e-4093-8fad-a25463d41edc/]]
+** stage02_processing: [[README.md>>https://drive.ebrains.eu/smart-link/01f21fa5-94f7-4883-8388-cc50957f9c81/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/02a3f92c-dc7d-4b33-94f5-91b00db060d5/]]
+** stage03_trigger_detection: [[README.md>>https://drive.ebrains.eu/smart-link/18d276cd-a691-4ee1-81c6-7978cef9c1b4/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/76adbb12-7cb4-42df-9fd5-735927ea3ba8/]]
+** stage04_wavefront_detection: [[README.md>>https://drive.ebrains.eu/smart-link/a8e80096-06a0-4ff4-b645-90e134e46ac5/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/6b0b233f-30b7-4bbd-8564-1abebd27ea6d/]]
+** stage05_wave_characterization: [[README.md>>https://drive.ebrains.eu/smart-link/3009a214-a11f-424c-8a6e-13e7506545eb/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/471001d5-33f5-488e-a9a4-f03b190e3da7/]]

* **Enter a dataset**
There are two test datasets in the collab drive (IDIBAPS and LENS) for which there are also corresponding config files and scripts in the data_entry stage. So, these datasets are ready to be used and analyzed.
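The settings step at the top of the hunk above only requires defining ##output_path##. A minimal sketch of the resulting //pipeline/settings.py// (assuming ##output_path## is the only value needed; the path itself is a placeholder):

{{code language="python"}}
# pipeline/settings.py -- created by copying settings_template.py and
# pointing output_path at the folder where all stage results should be written.
import os

output_path = os.path.expanduser("~/wavescalephant_output")  # placeholder location
{{/code}}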
@@ -107,21 +107,13 @@
For adding new datasets see //[[pipeline/stage01_data_entry/README.md>>https://drive.ebrains.eu/smart-link/d2e93a2a-09f6-4dce-982d-0370953a4da8/]]//

* **Run the pipeline (-stages)**
-To run the pipeline with [[snakemake>>https://snakemake.readthedocs.io/en/stable/]], activate the Python environment ##conda activate wavescalephant_env##, make sure you are in the working directory (//pipeline///), and call ##snakemake## to run the entire pipeline.
-For a more detailed execution guide and how to execute individual stages and blocks see the pipeline [[Readme>>https://drive.ebrains.eu/smart-link/3009a214-a11f-424c-8a6e-13e7506545eb/]].
+To run the pipeline with [[snakemake>>https://snakemake.readthedocs.io/en/stable/]], activate the Python environment ##conda activate wavescalephant_env##, make sure you are in the working directory (pipeline/), and call ##snakemake## to run the entire pipeline.
+To (re-)execute an individual stage, navigate to the corresponding stage folder and call the ##snakemake## command there. When running a stage on its own, you may need to manually set the path to its input file (i.e. the output file of the previous stage) in the config file: ##INPUT: /path/to/file##.

== Accessing and using the results ==

All results are stored in the path specified in the //settings.py// file. The folder structure reflects the structuring of the pipeline into stages and blocks. All intermediate results are stored as //.nix// files using the [[Neo data format>>https://neo.readthedocs.io/en/stable/]] and can be loaded with ##neo.NixIO('/path/to/file.nix').read_block()##. Additionally, most blocks produce a figure, and each stage a report file, to give an overview of the execution log, parameters, intermediate results, and to help with debugging. The final stage (//stage05_wave_characterization//) stores the results as [[//pandas.DataFrames//>>https://pandas.pydata.org/]] in //.csv// files, separately for each measure as well as in a combined dataframe for all measures.
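The loading calls mentioned in the results paragraph above can be combined into a short inspection script. A minimal sketch, assuming the ##output_path## set in //settings.py//; the file names below are hypothetical examples, not the pipeline's actual output names:

{{code language="python"}}
# Inspect pipeline results; all paths below are illustrative placeholders.
import neo
import pandas as pd

output_path = "/path/to/output"  # the value set in pipeline/settings.py

# Intermediate results are Neo Blocks stored in .nix files.
block = neo.NixIO(f"{output_path}/stage02_processing/processed_data.nix").read_block()
print(block.segments[0].analogsignals[0].shape)

# stage05_wave_characterization writes one .csv per measure (plus a combined table),
# readable as pandas DataFrames.
velocity = pd.read_csv(f"{output_path}/stage05_wave_characterization/velocity.csv")
print(velocity.describe())
{{/code}}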
-== Outlook ==
-
-* Using the **KnowledgeGraph API** to insert data directly from the Knowledge Graph into the pipeline and also register and store the corresponding results as Analysis Objects. Such Analysis Objects are to incorporate **Provenance Tracking**, using [[fairgraph>>https://github.com/HumanBrainProject/fairgraph]], to record the details of the processing and analysis steps.
-* Adding support for the pipeline to make use of **HPC** resources when running on the collab.
-* Further extending the available **methods** to address a wider variety of analysis objectives and support the processing of other datatypes. Additional documentation and guides should also make it easier for non-developers to contribute new method blocks.
-* Extending the **application** of the pipeline to the analysis of other types of activity waves and oscillations.
-* Integrating and co-developing new features of the underlying **software tools** [[Elephant>>https://elephant.readthedocs.io/en/latest/]], [[Neo>>https://neo.readthedocs.io/en/stable/]], [[Nix>>https://github.com/G-Node/nix]], [[Snakemake>>https://snakemake.readthedocs.io/en/stable/]].
-

== References ==

* [[Celotto, Marco, et al. "Analysis and Model of Cortical Slow Waves Acquired with Optical Techniques." //Methods and Protocols// 3.1 (2020): 14.>>https://doi.org/10.3390/mps3010014]]
@@ -130,17 +130,14 @@
* [[Sanchez-Vives, M. (2020). "Propagation modes of slow waves in mouse cortex." //EBRAINS//>>https://doi.org/10.25493/WKA8-Q4T]]
* [[Sanchez-Vives, M. (2019). "Cortical activity features in transgenic mouse models of cognitive deficits (Fragile X Syndrome)." //EBRAINS//>>https://doi.org/10.25493/ANF9-EG3]]

-== License ==
+== License (to discuss) ==

-Text is licensed under the Creative Commons CC-BY 4.0 license. LENS data is licensed under the Creative Commons CC-BY-NC-ND 4.0 license. IDIBAPS data is licensed under the Creative Commons CC-BY-NC-SA 4.0 license. Software code is licensed under GNU General Public License v3.0.
+All text and example data in this collab is licensed under the Creative Commons CC-BY 4.0 license. Software code is licensed under a modified BSD license.

[[image:https://i.creativecommons.org/l/by/4.0/88x31.png||style="float:left"]]

-[[image:https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png||alt="https://i.creativecommons.org/l/by/4.0/88x31.png" style="float:left"]]
+== ==

-[[image:https://licensebuttons.net/l/by-nc-nd/4.0/88x31.png||alt="https://i.creativecommons.org/l/by/4.0/88x31.png" style="float:left"]]
-
-

== Acknowledgments ==

This open source software code was developed in part or in whole in the Human Brain Project, funded from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).
@@ -150,12 +150,12 @@

)))

-== ==
+== Executing the pipeline ==

(% class="col-xs-12 col-sm-4" %)
(((
{{box title="**Contents**"}}
-{{toc depth="3"/}}
+{{toc/}}
{{/box}}