Changes for page SGA2 SP3 UC002 KR3.2 - Slow Wave Analysis Pipeline
Last modified by robing on 2022/03/25 09:55
Summary

- Page properties (2 modified, 0 added, 0 removed)

Details

- Page properties
- Author
@@ -1,1 +1,1 @@
-XWiki.debonisg1
+XWiki.denker

- Content
@@ -23,7 +23,7 @@
 
 ,,4) Unité de Neurosciences, Information et Complexité, Neuroinformatics Group, CNRS FRE 3693, Gif-sur-Yvette, France,,
 
-,,5) European Laboratory for Non-linear Spectroscopy (LENS), (% style="color:inherit" %)University of Florence, Florence, Italy(%%),,
+,,5) European Laboratory for Non-linear Spectroscopy (LENS), (% style="--darkreader-inline-color:inherit; color:inherit" %)University of Florence, Florence, Italy(%%),,
 
 ,,6) Istituto di Neuroscienze, CNR, Pisa, Italy,,
 
@@ -39,13 +39,10 @@
 (((
 == Flexible workflows to generate multi-scale analysis scenarios ==
 
-This Collab is aimed at experimental and computational neuroscientists interested in the usage of the [[Neo>>https://neo.readthedocs.io/en/stable/]] and [[Elephant>>https://elephant.readthedocs.io/en/latest/]] tools in performing data analysis of spiking data.
-Here, the collab illustrates the tool usage with regard to the SGA2-SP3-UC002 KR3.2, investigating sleep, anesthesia, and the transition to wakefulness: see Chapter 1 and Figure 2 of the SGA2 [[Deliverable D3.2.1>>https://drive.ebrains.eu/smart-link/17ac0d6e-e050-4a49-8ca2-e223b70a3121/]] for an overview of the scientific motivations and a description of the UseCase workflow; Chapter 2 (same document) for an introduction to KR3.2; Chapter 3 for a description of the mice ECoG data sets; Chapter 5 about the Slow Wave Analysis Pipeline; and Chapter 6 for the mice wide-field GECI data.
+This collab illustrates the usage of the [[Neo>>https://neo.readthedocs.io/en/stable/]] and [[Elephant>>https://elephant.readthedocs.io/en/latest/]] tools in performing data analysis with regard to the SGA2-SP3-UC002 KR3.2, investigating sleep, anesthesia, and the transition to wakefulness: see Chapter 1 and Figure 2 of the SGA2 [[Deliverable D3.2.1>>https://drive.ebrains.eu/smart-link/17ac0d6e-e050-4a49-8ca2-e223b70a3121/]] for an overview of the scientific motivations and a description of the UseCase workflow; Chapter 2 (same document) for an introduction to KR3.2; Chapter 3 for a description of the mice ECoG data sets; Chapter 5 about the Slow Wave Analysis Pipeline; and Chapter 6 for the mice wide-field GECI data. For details on the datasets used in this collab, please see the References below.
 
-[[image:https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png||height="35" width="35"]][[INM-6/wavescalephant>>https://github.com/INM-6/wavescalephant]]
+== How the pipeline works ==
 
-== How the Pipeline works ==
-
 The design of the pipeline aims at interfacing a variety of general and specific analysis and processing steps in a flexible, modular manner. Hence, it enables the pipeline to adapt to diverse types of data (e.g., electrical ECoG or optical Calcium Imaging recordings) and to different analysis questions. This makes the analyses a) more reproducible and b) comparable amongst each other, since they rely on the same stack of algorithms and any differences in the analysis are fully transparent.
 The individual processing and analysis steps (**blocks**, see the arrow-connected elements below) are organized in sequential **stages** (see the columns below). Following along the stages, the analysis becomes more specific, but it is also possible to branch off after any stage, as each stage yields useful intermediate results and is autonomous, so that it can be reused and recombined. Within each stage, there is a collection of blocks from which the user can select and arrange the analysis via a config file. Thus, the pipeline can be thought of as a curated database of methods on which an analysis can be constructed by drawing a path along the blocks and stages.
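The block-and-stage composition described in the hunk above can be illustrated with a short, self-contained sketch. This is not the wavescalephant API — the registry, block names, and toy data below are invented purely for illustration of the design idea:

```python
# Illustrative sketch only -- not the actual wavescalephant API.
# Each stage offers interchangeable blocks; a config picks a path through them.
from typing import Callable, Dict, List

BLOCKS: Dict[str, Dict[str, Callable]] = {
    "stage02_processing": {
        "normalization": lambda d: [x / max(d) for x in d],
        "detrending": lambda d: [x - sum(d) / len(d) for x in d],
    },
    "stage03_trigger_detection": {
        # A "trigger" here is just an index where the signal exceeds a threshold.
        "threshold": lambda d: [i for i, x in enumerate(d) if x > 0.5],
    },
}

def run_pipeline(config: Dict[str, List[str]], data):
    """Apply the configured blocks of each stage in sequence."""
    for stage, block_names in config.items():
        for name in block_names:
            data = BLOCKS[stage][name](data)
    return data

# Selecting blocks is a config decision, not a code change.
config = {
    "stage02_processing": ["normalization"],
    "stage03_trigger_detection": ["threshold"],
}
result = run_pipeline(config, [0.2, 1.2, 2.0, 0.1])  # -> [1, 2]
```

Even in this toy version, the design property of the pipeline is visible: swapping a block or branching off after a stage only changes the config, while every analysis runs on the same stack of algorithms.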
@@ -71,26 +71,22 @@
 In another browser tab, open [[https:~~/~~/lab.ebrains.eu>>https://lab.ebrains.eu]]
 
 * **Edit the config files**
-Each stage has a config file (//pipeline/<stage_name>/config.yaml//) to specify which analysis/processing blocks to execute and which parameters to use. General and specific information about the blocks and parameters can be found in the README and config files of each stage. The default values are an example dataset (ECoG, anesthetized mouse, [[IDIBAPS>>https://kg.ebrains.eu/search/?facet_type[0]=Dataset&q=sanchez-vives#Dataset/2ead029b-bba5-4611-b957-bb6feb631396]]).
+Each stage has config files (//pipeline/<stage_name>/configs/config_<profile>.yaml//) to specify which analysis/processing blocks to execute and which parameters to use. General and specific information about the blocks and parameters can be found in the README and config files of each stage. There are preset configuration profiles for the benchmark datasets IDIBAPS ([[ECoG, anesthetized mouse>>https://kg.ebrains.eu/search/?facet_type[0]=Dataset&q=sanchez-vives#Dataset/2ead029b-bba5-4611-b957-bb6feb631396]]) and LENS ([[Calcium Imaging, anesthetized mouse>>https://kg.ebrains.eu/search/instances/Dataset/71285966-8381-48f7-bd4d-f7a66afa9d79]]).
 
 * **Run the notebook**
 In the jupyter hub, navigate to //drive/My Libraries/My Library/pipeline/showcase_notebooks/run_snakemake_in_collab.ipynb//, or where you copied the //pipeline// folder to.
-Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake.
-
-* **Coming soon**
-** Use of KnowledgeGraph API
-** Provenance Tracking
-** HPC support
+Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake.
 
 === ii) Local execution ===
 
 * **Get the code**
-The source code of the pipeline is available via Github: [[INM-6/wavescalephant>>https://github.com/INM-6/wavescalephant]] and can be cloned to your machine ([[how to get started with Github>>https://guides.github.com/activities/hello-world/]]).
+The source code of the pipeline is available via Github: [[INM-6/wavescalephant>>https://github.com/INM-6/wavescalephant]] and can be cloned to your machine ([[how to get started with Github>>https://guides.github.com/activities/hello-world/]]).
 
 * (((
 **Build the Python environment**
-In the wavescalephant git repository, there is an environment file ([[pipeline/envs/wavescalephant_env.yml>>https://drive.ebrains.eu/lib/905d7321-a16b-4147-8cca-31d710d1f946/file/pipeline/envs/wavescalephant_env.yml]]) specifying the required packages and versions. To build the environment, we recommend using conda ([[how to get started with conda>>https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html]]).
-##conda env create ~-~-file /envs/wavescalephant_env.yml##
+In the wavescalephant git repository, there is an environment file ([[pipeline/environment.yaml>>https://drive.ebrains.eu/smart-link/1a0b15bb-be87-46ee-b838-4734bc320d20/]]) specifying the required packages and versions. To build the environment, we recommend using conda ([[how to get started with conda>>https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html]]).
+##conda env create ~-~-file environment.yaml
+conda activate wavescalephant_env##
 
 )))
 * **Edit the settings**
@@ -97,14 +97,14 @@
 The settings file specifies the path to the output folder, where results are saved to.
 Open the template file //[[pipeline/settings_template.py>>https://drive.ebrains.eu/lib/905d7321-a16b-4147-8cca-31d710d1f946/file/pipeline/settings_template.py]]//, set the ##output_path## to the desired path, and save it as //pipeline/settings.py//.
 
 * **Edit the config files**
-Each stage has
+Each stage uses a config file to specify which analysis/processing blocks to execute and which parameters to use. Edit the config template files //pipeline/stageXX_<stage_name>/configs/config_template.yaml// according to your dataset and analysis goal, and save them as //pipeline/stageXX_<stage_name>/configs/config_<profile>.yaml//. A detailed description of the available parameter settings and their meaning is commented in the template files, and a more general description of the working mechanism of each stage can be found in the respective README file //pipeline/stageXX_<stage_name>/README.md//.
 //Links are view-only//
 ** full pipeline: [[README.md>>https://drive.ebrains.eu/smart-link/d2e93a2a-09f6-4dce-982d-0370953a4da8/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/7948fbb3-bf8a-4785-9b28-d5c15a1aafa7/]]
-** stage01_data_entry: [[README.md>>https://drive.ebrains.eu/smart-link/896f8880-a7d1-4a30-adbf-98759860fed5/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/d429639d-b76e-4093-8fad-a25463d41edc/]]
-** stage02_processing: [[README.md>>https://drive.ebrains.eu/smart-link/01f21fa5-94f7-4883-8388-cc50957f9c81/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/02a3f92c-dc7d-4b33-94f5-91b00db060d5/]]
-** stage03_trigger_detection: [[README.md>>https://drive.ebrains.eu/smart-link/18d276cd-a691-4ee1-81c6-7978cef9c1b4/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/76adbb12-7cb4-42df-9fd5-735927ea3ba8/]]
-** stage04_wavefront_detection: [[README.md>>https://drive.ebrains.eu/smart-link/a8e80096-06a0-4ff4-b645-90e134e46ac5/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/6b0b233f-30b7-4bbd-8564-1abebd27ea6d/]]
-** stage05_wave_characterization: [[README.md>>https://drive.ebrains.eu/smart-link/3009a214-a11f-424c-8a6e-13e7506545eb/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/471001d5-33f5-488e-a9a4-f03b190e3da7/]]
+** stage01_data_entry: [[README.md>>https://drive.ebrains.eu/smart-link/896f8880-a7d1-4a30-adbf-98759860fed5/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/9bef8f59-1007-48c4-b5ba-30de4ff18f34/]]
+** stage02_processing: [[README.md>>https://drive.ebrains.eu/smart-link/01f21fa5-94f7-4883-8388-cc50957f9c81/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/7e75caf6-e2d6-4393-a97c-4f481c908cf8/]]
+** stage03_trigger_detection: [[README.md>>https://drive.ebrains.eu/smart-link/18d276cd-a691-4ee1-81c6-7978cef9c1b4/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/dfa375c0-cc80-4f95-b3ed-40140acbd96b/]]
+** stage04_wavefront_detection: [[README.md>>https://drive.ebrains.eu/smart-link/a8e80096-06a0-4ff4-b645-90e134e46ac5/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/3a54be8c-b9f4-4698-a85d-6ad97990b40a/]]
+** stage05_wave_characterization: [[README.md>>https://drive.ebrains.eu/smart-link/3009a214-a11f-424c-8a6e-13e7506545eb/]], [[config.yaml>>https://drive.ebrains.eu/smart-link/83f68955-0ca8-4123-9734-6e93349ca3e3/]]
 
 * **Enter a dataset**
 There are two test datasets in the collab drive (IDIBAPS and LENS), for which there are also corresponding config files and scripts in the data_entry stage, so these datasets are ready to be used and analyzed.
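The settings step described above boils down to a single Python variable. A minimal sketch of what a saved //settings.py// might look like — the variable name ##output_path## comes from the text above, while the concrete path and the folder creation are only placeholder choices:

```python
# settings.py -- minimal sketch of the pipeline settings module.
# The pipeline reads `output_path` from this module to decide where the
# stage results are written. The path below is only a placeholder.
import os

output_path = os.path.join(os.getcwd(), "wavescalephant_output")

# Creating the folder up front is optional, but avoids surprises on first run.
os.makedirs(output_path, exist_ok=True)
```
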
@@ -111,26 +111,45 @@
 For adding new datasets see //[[pipeline/stage01_data_entry/README.md>>https://drive.ebrains.eu/smart-link/d2e93a2a-09f6-4dce-982d-0370953a4da8/]]//
 
 * **Run the pipeline (-stages)**
-To run the pipeline with [[snakemake>>https://snakemake.readthedocs.io/en/stable/]], activate the Python environment (##conda activate wavescalephant_env##), make sure you are in the working directory (//pipeline///), and call ##snakemake## to run the entire pipeline.
-To (re-)execute an individual stage, you can navigate to the corresponding stage folder and call the ##snakemake## command there. For running an individual stage, you may need to manually set the path to the input file for the stage (i.e. the output file of the previous stage) in the config file: ##INPUT: /path/to/file##.
+To run the pipeline with [[snakemake>>https://snakemake.readthedocs.io/en/stable/]], activate the Python environment (##conda activate wavescalephant_env##), make sure you are in the working directory (//pipeline///), and call ##snakemake## to run the entire pipeline.
+For a more detailed execution guide, including how to execute individual stages and blocks, see the pipeline [[Readme>>https://drive.ebrains.eu/smart-link/3009a214-a11f-424c-8a6e-13e7506545eb/]].
 
 == Accessing and using the results ==
 
 All results are stored in the path specified in the //settings.py// file. The folder structure reflects the structuring of the pipeline into stages and blocks. All intermediate results are stored as //.nix// files using the [[Neo data format>>https://neo.readthedocs.io/en/stable/]] and can be loaded with ##neo.NixIO('/path/to/file.nix').read_block()##. Additionally, most blocks produce a figure, and each stage a report file, to give an overview of the execution log, parameters, intermediate results, and to help with debugging.
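As stated above, the intermediate //.nix// files are read with ##neo.NixIO(...).read_block()## and the final //.csv// tables with //pandas//. The table layout itself can be sketched with only the standard library; note that the file content and the column names below (##wave_id##, ##velocity_mm_s##) are invented for illustration — the real columns depend on the measures configured in //stage05_wave_characterization//:

```python
# Sketch: aggregate a stage05-style per-measure results table.
# The CSV content is a stand-in for a file produced by the pipeline;
# its column names are hypothetical.
import csv
import io
import statistics

fake_csv = io.StringIO(
    "wave_id,velocity_mm_s\n"
    "0,3.1\n"
    "1,2.9\n"
    "2,3.6\n"
)

# csv.DictReader maps each row to {column_name: value}.
rows = list(csv.DictReader(fake_csv))
velocities = [float(r["velocity_mm_s"]) for r in rows]
mean_velocity = statistics.mean(velocities)  # one summary number per measure
```

With pandas installed, the equivalent would be a one-liner per measure file (##pandas.read_csv(...)## followed by ##.mean()##), which is why the final stage stores one dataframe per measure plus a combined one.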
 The final stage (//stage05_wave_characterization//) stores the results as [[//pandas.DataFrames//>>https://pandas.pydata.org/]] in //.csv// files, separately for each measure as well as in a combined dataframe for all measures.
 
+== Outlook ==
+
+* Using the **KnowledgeGraph API** to insert data directly from the Knowledge Graph into the pipeline, and also to register and store the corresponding results as Analysis Objects. Such Analysis Objects are to incorporate **Provenance Tracking**, using [[fairgraph>>https://github.com/HumanBrainProject/fairgraph]], to record the details of the processing and analysis steps.
+* Adding support for the pipeline to make use of **HPC** resources when running on the collab.
+* Further extending the available **methods** to address a wider variety of analysis objectives and support the processing of other datatypes. Additional documentation and guides should also make it easier for non-developers to contribute new method blocks.
+* Extending the **application** of the pipeline to the analysis of other types of activity waves and oscillations.
+* Integrating and co-developing new features of the underlying **software tools** [[Elephant>>https://elephant.readthedocs.io/en/latest/]], [[Neo>>https://neo.readthedocs.io/en/stable/]], [[Nix>>https://github.com/G-Node/nix]], [[Snakemake>>https://snakemake.readthedocs.io/en/stable/]].
+
 == References ==
 
-* Celotto, Marco, et al. "Analysis and Model of Cortical Slow Waves Acquired with Optical Techniques." //Methods and Protocols// 3.1 (2020): 14.
-* De Bonis, Giulia, et al. "Analysis pipeline for extracting features of cortical slow oscillations." //Frontiers in Systems Neuroscience// 13 (2019): 70.
+* [[Celotto, Marco, et al. "Analysis and Model of Cortical Slow Waves Acquired with Optical Techniques." //Methods and Protocols// 3.1 (2020): 14.>>https://doi.org/10.3390/mps3010014]]
+* [[De Bonis, Giulia, et al. "Analysis pipeline for extracting features of cortical slow oscillations." //Frontiers in Systems Neuroscience// 13 (2019): 70.>>https://doi.org/10.3389/fnsys.2019.00070]]
+* [[Resta, F., Allegra Mascaro, A. L., & Pavone, F. (2020). "Study of Slow Waves (SWs) propagation through wide-field calcium imaging of the right cortical hemisphere of GCaMP6f mice." //EBRAINS//>>https://doi.org/10.25493/3E6Y-E8G]]
+* [[Sanchez-Vives, M. (2020). "Propagation modes of slow waves in mouse cortex." //EBRAINS//>>https://doi.org/10.25493/WKA8-Q4T]]
+* [[Sanchez-Vives, M. (2019). "Cortical activity features in transgenic mouse models of cognitive deficits (Fragile X Syndrome)." //EBRAINS//>>https://doi.org/10.25493/ANF9-EG3]]
 
-== License (to discuss) ==
 
-All text and example data in this collab is licensed under the Creative Commons CC-BY 4.0 license. Software code is licensed under a modified BSD license.
+Code developed at:
 
+[[image:https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png||height="35" width="35"]][[INM-6/wavescalephant>>https://github.com/INM-6/wavescalephant]]
+
+== License ==
+
+Text is licensed under the Creative Commons CC-BY 4.0 license. LENS data is licensed under the Creative Commons CC-BY-NC-ND 4.0 license. IDIBAPS data is licensed under the Creative Commons CC-BY-NC-SA 4.0 license. Software code is licensed under the GNU General Public License v3.0.
+
 [[image:https://i.creativecommons.org/l/by/4.0/88x31.png||style="float:left"]]
 
-= ==
+[[image:https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png||alt="https://i.creativecommons.org/l/by/4.0/88x31.png" style="float:left"]]
 
+[[image:https://licensebuttons.net/l/by-nc-nd/4.0/88x31.png||alt="https://i.creativecommons.org/l/by/4.0/88x31.png" style="float:left"]]
+
+
 == Acknowledgments ==
 
 This open source software code was developed in part or in whole in the Human Brain Project, funded from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).
 
@@ -140,12 +140,12 @@
 )))
 
 
-== Executing the pipeline ==
+== ==
 
 (% class="col-xs-12 col-sm-4" %)
 (((
 {{box title="**Contents**"}}
-{{toc/}}
+{{toc depth="3"/}}
 {{/box}}