Last modified by robing on 2022/03/25 09:55

From version 73.1
edited by denker
on 2020/05/08 15:19
Change comment: There is no comment for this version
To version 80.1
edited by robing
on 2020/05/26 09:41
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.denker
1 +XWiki.robing
Content
... ... @@ -74,10 +74,12 @@
74 74  
75 75  * **Run the notebook**
76 76  In the JupyterHub, navigate to //drive/My Libraries/My Library/pipeline/showcase_notebooks/run_snakemake_in_collab.ipynb//, or to wherever you copied the //pipeline// folder.
77 -Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake
77 +Follow the notebook to install the required packages into your Python kernel, set the output path, and execute the pipeline with snakemake.
78 78  
79 79  === ii) Local execution ===
80 80  
81 +//Tested only on macOS and Linux!//
82 +
81 81  * **Get the code**
82 82  The source code of the pipeline is available via Github: [[INM-6/wavescalephant>>https://github.com/INM-6/wavescalephant]] and can be cloned to your machine ([[how to get started with Github>>https://guides.github.com/activities/hello-world/]]).
83 83  
... ... @@ -86,6 +86,10 @@
86 86  In the wavescalephant git repository, there is an environment file ([[pipeline/environment.yaml>>https://drive.ebrains.eu/smart-link/1a0b15bb-be87-46ee-b838-4734bc320d20/]]) specifying the required packages and versions. To build the environment, we recommend using conda ([[how to get started with conda>>https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html]]).
87 87  ##conda env create ~-~-file environment.yaml
88 88  conda activate wavescalephant_env##
91 +
92 +Make sure that neo and elephant are installed as their GitHub development versions; if necessary, add them to the environment manually.
93 +##pip install git+https:~/~/github.com/NeuralEnsemble/elephant.git
94 +pip install git+https:~/~/github.com/NeuralEnsemble/python-neo.git##
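A quick way to check that the packages from the pip commands above ended up in the active environment is to probe for them before running the pipeline. The snippet below is a minimal sketch; the package names come from the install commands, and nothing else about the environment is assumed.

```python
# Sketch: verify that neo and elephant are importable from the active
# environment (package names taken from the pip install commands above).
import importlib.util

for pkg in ("neo", "elephant"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'MISSING'}")
```

If either package reports MISSING, re-run the corresponding ##pip install## command inside the activated //wavescalephant_env//.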
89 89  
90 90  )))
91 91  * **Edit the settings**
... ... @@ -113,8 +113,20 @@
113 113  
114 114  All results are stored in the path specified in the //settings.py// file. The folder structure mirrors the pipeline's division into stages and blocks. All intermediate results are stored as //.nix// files using the [[Neo data format>>https://neo.readthedocs.io/en/stable/]] and can be loaded with ##neo.NixIO('/path/to/file.nix').read_block()##. Additionally, most blocks produce a figure, and each stage a report file, giving an overview of the execution log, parameters, and intermediate results, and helping with debugging. The final stage (//stage05_wave_characterization//) stores the results as [[//pandas.DataFrames//>>https://pandas.pydata.org/]] in //.csv// files, separately for each measure as well as in a combined dataframe for all measures.
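The stage05 //.csv// files can be inspected with pandas. The snippet below is an illustrative sketch only: the column names (##wave_id##, ##velocity_planar##, ##direction##) are assumptions standing in for the pipeline's actual schema, and a real analysis would start from ##pd.read_csv()## on a file in the output path.

```python
import pandas as pd

# Illustrative stand-in for a stage05 per-wave measures table.
# Column names and units are assumptions, not the pipeline's exact schema;
# in practice the table would come from pd.read_csv() on a stage05 .csv file.
df = pd.DataFrame({
    "wave_id": [0, 0, 1],
    "velocity_planar": [12.3, 11.8, 9.4],  # e.g. mm/s
    "direction": [0.5, 0.6, -1.2],         # e.g. radians
})

# A typical post-hoc step: average each measure per detected wave.
summary = df.groupby("wave_id").mean()
print(summary)
```

The same pattern applies to the combined dataframe, which simply gathers all measures into one table.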
115 115  
116 -== Outlook ==
122 +**Examples of the output figures (for IDIBAPS dataset)**
117 117  
124 +* Stage 01 - [[example signal traces and metadata>>https://drive.ebrains.eu/smart-link/cf2fa914-260d-4d61-a2da-03ea07b7f9be/]]
125 +* Stage 02 - [[background subtraction>>https://drive.ebrains.eu/smart-link/586d2f3c-591b-4dfb-94ee-8c0e28050dc4/]]
126 +* Stage 02 - [[logMUA estimation>>https://drive.ebrains.eu/smart-link/c92e4b0c-0938-44e8-9f8d-00522796b2fd/]]
127 +* Stage 02 - [[processed signal trace>>https://drive.ebrains.eu/smart-link/26ed27c6-de56-4b48-a57b-f70aab629197/]]
128 +* Stage 03 - [[amplitude distribution>>https://drive.ebrains.eu/smart-link/8ba80293-ba75-4a37-8a8f-05d44cf6f65c/]]
129 +* Stage 03 - [[UP state detection>>https://drive.ebrains.eu/smart-link/ab172be0-178e-4153-a3e6-b4bace32dd50/]]
130 +* Stage 04 - [[trigger clustering>>https://drive.ebrains.eu/smart-link/4a1f0169-8b43-49ce-80c8-f2fa0f4d50d3/]]
131 +* Stage 05 - [[planar velocities>>https://drive.ebrains.eu/smart-link/f4de8073-cb40-47a7-bc82-f97d36dbae25/]]
132 +* Stage 05 - [[directionality>>https://drive.ebrains.eu/smart-link/5485032d-0121-4cde-9ea2-3e0af3f12178/]]
133 +
134 +=== Outlook ===
135 +
118 118  * Using the **KnowledgeGraph API** to insert data directly from the Knowledge Graph into the pipeline and also register and store the corresponding results as Analysis Objects. Such Analysis Objects are to incorporate **Provenance Tracking**, using [[fairgraph>>https://github.com/HumanBrainProject/fairgraph]], to record the details of the processing and analysis steps.
119 119  * Adding support for the pipeline to make use of **HPC** resources when running on the collab.
120 120  * Further extending the available **methods** to address a wider variety of analysis objectives and support the processing of other datatypes. Additional documentation and guides should also make it easier for non-developers to contribute new method blocks.
Collaboratory.Apps.Collab.Code.CollabClass[0]
Public
... ... @@ -1,1 +1,1 @@
1 -No
1 +Yes
XWiki.XWikiRights[3]
Allow/Deny
... ... @@ -1,0 +1,1 @@
1 +Allow
Levels
... ... @@ -1,0 +1,1 @@
1 +view
Users
... ... @@ -1,0 +1,1 @@
1 +XWiki.XWikiGuest
XWiki.XWikiRights[4]
Allow/Deny
... ... @@ -1,0 +1,1 @@
1 +Allow
Groups
... ... @@ -1,0 +1,1 @@
1 +XWiki.XWikiAllGroup
Levels
... ... @@ -1,0 +1,1 @@
1 +view