Last modified by robing on 2022/03/25 09:55

From version 64.1
edited by robing
on 2020/04/29 13:47
Change comment: There is no comment for this version
To version 65.1
edited by robing
on 2020/04/30 14:20
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -75,12 +75,7 @@
75 75  
76 76  * **Run the notebook**
77 77  In the jupyter hub, navigate to //drive/My Libraries/My Library/pipeline/showcase_notebooks/run_snakemake_in_collab.ipynb//, or where you copied the //pipeline// folder to.
78 -Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake.
79 -
80 -* **Coming soon**
81 -** Use of KnowledgeGraph API
82 -** Provenance Tracking
83 -** HPC support
78 +Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake.
84 84  
85 85  === ii) Local execution ===
86 86  
... ... @@ -119,6 +119,14 @@
119 119  
120 120  All results are stored in the path specified in the //settings.py// file. The folder structure reflects the pipeline's organization into stages and blocks. All intermediate results are stored as //.nix// files using the [[Neo data format>>https://neo.readthedocs.io/en/stable/]] and can be loaded with ##neo.NixIO('/path/to/file.nix').read_block()##. Additionally, most blocks produce a figure, and each stage a report file, giving an overview of the execution log, parameters, and intermediate results, and helping with debugging. The final stage (//stage05_wave_characterization//) stores the results as [[//pandas.DataFrames//>>https://pandas.pydata.org/]] in //.csv// files, separately for each measure as well as in a combined dataframe for all measures.
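For example, a stage05 result table can be inspected with pandas. The column names below are hypothetical (the actual columns depend on the configured measures), and the in-memory CSV merely stands in for one of the //.csv// files described above:

```python
import io
import pandas as pd

# Hypothetical stand-in for a stage05_wave_characterization .csv file;
# real column names depend on which measures are configured.
csv_text = """wave_id,velocity,direction
0,0.35,1.57
1,0.40,1.60
"""

# Read the table and summarize one measure across all detected waves
df = pd.read_csv(io.StringIO(csv_text))
mean_velocity = df['velocity'].mean()
print(mean_velocity)  # 0.375
```

With real pipeline output, the same call would be ##pd.read_csv('/path/to/measure.csv')##, pointing at the per-measure or combined result file in the configured output path.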
121 121  
117 +== Outlook ==
118 +
119 +* Using the **KnowledgeGraph API** to insert data directly from the Knowledge Graph into the pipeline, and to register and store the corresponding results as Analysis Objects. These Analysis Objects are to incorporate **Provenance Tracking**, using [[fairgraph>>https://github.com/HumanBrainProject/fairgraph]], to record the details of the processing and analysis steps.
120 +* Adding support for the pipeline to use **HPC** resources when running in the Collab.
121 +* Further extending the available **methods** to address a wider variety of analysis objectives and to support the processing of other data types. Additional documentation and guides should also make it easier for non-developers to contribute new method blocks.
122 +* Extending the **application** of the pipeline to the analysis of other types of activity waves and oscillations.
123 +* Integrating and co-developing new features of the underlying **software tools** [[Elephant>>https://elephant.readthedocs.io/en/latest/]], [[Neo>>https://neo.readthedocs.io/en/stable/]], [[Nix>>https://github.com/G-Node/nix]], and [[Snakemake>>https://snakemake.readthedocs.io/en/stable/]].
124 +
122 122  == References ==
123 123  
124 124  * [[Celotto, Marco, et al. "Analysis and Model of Cortical Slow Waves Acquired with Optical Techniques." //Methods and Protocols// 3.1 (2020): 14.>>https://doi.org/10.3390/mps3010014]]