Changes for page Methodology

Last modified by manuelmenendez on 2025/03/14 08:31

From version 5.1
edited by manuelmenendez
on 2025/01/29 19:11
Change comment: There is no comment for this version
To version 22.1
edited by manuelmenendez
on 2025/02/15 12:55
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -1,109 +1,106 @@
1 -=== **Overview** ===
1 +**Neurodiagnoses AI** is an open-source, AI-driven framework designed to enhance the diagnosis and prognosis of central nervous system (CNS) disorders. Building upon the Florey Dementia Index (FDI) methodology, it now encompasses a broader spectrum of neurological conditions. The system integrates multimodal data sources—including EEG, neuroimaging, biomarkers, and genetics—and employs machine learning models to deliver explainable, real-time diagnostic insights. A key feature of this framework is the incorporation of the **Generalized Neuro Biomarker Ontology Categorization (Neuromarker)**, which standardizes biomarker classification across all neurodegenerative diseases, facilitating cross-disease AI training.
2 2  
3 -This section describes the step-by-step process used in the **Neurodiagnoses** project to develop a novel diagnostic framework for neurological diseases. The methodology integrates artificial intelligence (AI), biomedical ontologies, and computational neuroscience to create a structured, interpretable, and scalable diagnostic system.
3 +**Neuromarker: Generalized Biomarker Ontology**
4 4  
5 -----
5 +Neuromarker extends the Common Alzheimer’s Disease Research Ontology (CADRO) into a comprehensive biomarker categorization framework applicable to all neurodegenerative diseases (NDDs). This ontology enables standardized classification, AI-based feature extraction, and seamless multimodal data integration.
6 6  
7 -=== **1. Data Integration** ===
7 +**Core Biomarker Categories**
8 8  
9 -==== **Data Sources** ====
9 +Within the Neurodiagnoses AI framework, biomarkers are categorized as follows:
10 10  
11 -* **Biomedical Ontologies**:
12 -** Human Phenotype Ontology (HPO) for phenotypic abnormalities.
13 -** Gene Ontology (GO) for molecular and cellular processes.
14 -* **Neuroimaging Datasets**:
15 -** Example: Alzheimer’s Disease Neuroimaging Initiative (ADNI), OpenNeuro.
16 -* **Clinical and Biomarker Data**:
17 -** Anonymized clinical reports, molecular biomarkers, and test results.
11 +|=**Category**|=**Description**
12 +|**Molecular Biomarkers**|Omics-based markers (genomic, transcriptomic, proteomic, metabolomic, lipidomic)
13 +|**Neuroimaging Biomarkers**|Structural (MRI, CT), Functional (fMRI, PET), Molecular Imaging (tau, amyloid, α-synuclein)
14 +|**Fluid Biomarkers**|CSF, plasma, blood-based markers for tau, amyloid, α-synuclein, TDP-43, GFAP, NfL
15 +|**Neurophysiological Biomarkers**|EEG, MEG, evoked potentials (ERP), sleep-related markers
16 +|**Digital Biomarkers**|Gait analysis, cognitive/speech biomarkers, wearables data, EHR-based markers
17 +|**Clinical Phenotypic Markers**|Standardized clinical scores (MMSE, MoCA, CDR, UPDRS, ALSFRS, UHDRS)
18 +|**Genetic Biomarkers**|Risk alleles (APOE, LRRK2, MAPT, C9orf72, PRNP) and polygenic risk scores
19 +|**Environmental & Lifestyle Factors**|Toxins, infections, diet, microbiome, comorbidities
18 18  
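For illustration only, the categories above could be represented in code as a small enumeration so that feature-extraction pipelines can tag every input variable with its Neuromarker class; the Python names below are hypothetical and are not taken from the ontology itself.

{{code language="python"}}
from enum import Enum

class NeuromarkerCategory(Enum):
    """Hypothetical encoding of the Neuromarker core biomarker categories."""
    MOLECULAR = "molecular"                        # omics-based markers
    NEUROIMAGING = "neuroimaging"                  # structural, functional, molecular imaging
    FLUID = "fluid"                                # CSF, plasma, blood-based markers
    NEUROPHYSIOLOGICAL = "neurophysiological"      # EEG, MEG, ERP, sleep-related markers
    DIGITAL = "digital"                            # gait, speech, wearables, EHR-derived
    CLINICAL_PHENOTYPIC = "clinical_phenotypic"    # MMSE, MoCA, CDR, UPDRS, ALSFRS, UHDRS
    GENETIC = "genetic"                            # risk alleles, polygenic risk scores
    ENVIRONMENTAL_LIFESTYLE = "environmental_lifestyle"  # toxins, infections, diet, microbiome

# Example: tag a raw measurement with its category before feature extraction
feature = {"name": "CSF p-tau181", "value": 92.4, "category": NeuromarkerCategory.FLUID}
print(feature["category"].value)  # -> "fluid"
{{/code}}
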
21 +**Integrating External Databases into Neurodiagnoses**
19 19  
20 -==== **Data Preprocessing** ====
23 +To enhance diagnostic precision, Neurodiagnoses AI incorporates data from multiple biomedical and neurological research databases. Researchers can integrate external datasets by following these steps:
21 21  
22 -1. **Standardization**: Ensure all data sources are normalized to a common format.
23 -1. **Feature Selection**: Identify relevant features for diagnosis (e.g., biomarkers, imaging scores).
24 -1. **Data Cleaning**: Handle missing values and remove duplicates.
25 +1. (((
26 +**Register for Access**
25 25  
26 -----
28 +* Each external database requires individual registration and access approval.
29 +* Ensure compliance with ethical approvals and data usage agreements before integrating datasets into Neurodiagnoses.
30 +* Some repositories may require a Data Usage Agreement (DUA) for sensitive medical data.
31 +)))
32 +1. (((
33 +**Download & Prepare Data**
27 27  
28 -=== **2. AI-Based Analysis** ===
35 +* Download datasets while adhering to database usage policies.
36 +* (((
37 +Ensure files meet Neurodiagnoses format requirements:
29 29  
30 -==== **Model Development** ====
39 +|=**Data Type**|=**Accepted Formats**
40 +|**Tabular Data**|.csv, .tsv
41 +|**Neuroimaging**|.nii, .dcm
42 +|**Genomic Data**|.fasta, .vcf
43 +|**Clinical Metadata**|.json, .xml
44 +)))
45 +* (((
46 +**Mandatory Fields for Integration**:
31 31  
32 -* **Embedding Models**: Use pre-trained models like BioBERT or BioLORD for text data.
33 -* **Classification Models**:
34 -** Algorithms: Random Forest, Support Vector Machines (SVM), or neural networks.
35 -** Purpose: Predict the likelihood of specific neurological conditions based on input data.
48 +* Subject ID: Unique patient identifier
49 +* Diagnosis: Standardized disease classification
50 +* Biomarkers: CSF, plasma, or imaging biomarkers
51 +* Genetic Data: Whole-genome or exome sequencing
52 +* Neuroimaging Metadata: MRI/PET acquisition parameters
53 +)))
54 +)))
55 +1. (((
56 +**Upload Data to Neurodiagnoses**
36 36  
37 -==== **Dimensionality Reduction and Interpretability** ====
58 +* (((
59 +**Option 1: Upload to EBRAINS Bucket**
38 38  
39 -* Leverage [[DEIBO>>https://drive.ebrains.eu/f/8d7157708cde4b258db0/]] (Data-driven Embedding Interpretation Based on Ontologies) to connect model dimensions to ontology concepts.
40 -* Evaluate interpretability using metrics like the Area Under the Interpretability Curve (AUIC).
61 +* Location: EBRAINS Neurodiagnoses Bucket
62 +* Ensure correct metadata tagging before submission.
63 +)))
64 +* (((
65 +**Option 2: Contribute via GitHub Repository**
41 41  
42 -----
67 +* Location: GitHub Data Repository
68 +* Create a new folder under /data/ and include a dataset description.
69 +* For large datasets, contact project administrators before uploading.
70 +)))
71 +)))
72 +1. (((
73 +**Integrate Data into AI Models**
43 43  
44 -=== **3. Diagnostic Framework** ===
75 +* Open Jupyter Notebooks on EBRAINS to run preprocessing scripts.
76 +* Standardize neuroimaging and biomarker formats using harmonization tools.
77 +* Apply machine learning models for missing-data imputation and feature extraction (a minimal sketch follows this list).
78 +* Train AI models with newly integrated patient cohorts.
45 45  
46 -==== **Axes of Diagnosis** ====
80 +**Reference**: See docs/data_processing.md for detailed instructions.
81 +)))
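The sketch below is a minimal, hypothetical illustration of the preparation and integration steps above: it checks a tabular upload for the mandatory fields listed in step 2 and imputes missing biomarker values before training (step 4). The column names, file path, and use of pandas/scikit-learn are assumptions; the authoritative instructions remain in docs/data_processing.md.

{{code language="python"}}
# Hypothetical example: validate a tabular upload and impute missing biomarker values.
# Column names follow the "mandatory fields" listed above; adapt them to your dataset.
import pandas as pd
from sklearn.impute import KNNImputer

MANDATORY_COLUMNS = ["subject_id", "diagnosis"]                    # required for integration
BIOMARKER_COLUMNS = ["csf_abeta42", "csf_ptau181", "plasma_nfl"]   # illustrative names only

def validate_and_impute(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)  # accepted tabular formats: .csv / .tsv

    # 1. Reject uploads that lack the mandatory fields
    missing = [c for c in MANDATORY_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Upload rejected, missing mandatory fields: {missing}")

    # 2. Impute missing biomarker values with a simple k-nearest-neighbours model
    present = [c for c in BIOMARKER_COLUMNS if c in df.columns]
    if present:
        df[present] = KNNImputer(n_neighbors=5).fit_transform(df[present])
    return df

# cohort = validate_and_impute("data/example_cohort.csv")  # hypothetical path under /data/
{{/code}}
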
47 47  
48 -The framework organizes diagnostic data into three axes:
83 +**AI-Driven Biomarker Categorization**
49 49  
50 -1. **Etiology**: Genetic and environmental risk factors.
51 -1. **Molecular Markers**: Biomarkers such as amyloid-beta, tau, and alpha-synuclein.
52 -1. **Neuroanatomical Correlations**: Results from neuroimaging (e.g., MRI, PET).
85 +Neurodiagnoses employs advanced AI models for biomarker classification:
53 53  
54 -==== **Recommendation System** ====
87 +|=**Model Type**|=**Application**
88 +|**Graph Neural Networks (GNNs)**|Identify shared biomarker pathways across diseases
89 +|**Contrastive Learning**|Distinguish overlapping vs. unique biomarkers
90 +|**Multimodal Transformer Models**|Integrate imaging, omics, and clinical data
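As a rough orientation only (not the project's actual architecture), the following PyTorch sketch shows the general idea behind multimodal transformer fusion: each modality is projected into a shared space and combined by a small transformer encoder before classification. All dimensions, layer counts, and names are illustrative assumptions.

{{code language="python"}}
# Illustrative sketch of multimodal fusion: imaging, fluid-biomarker, and clinical
# features are projected into a shared space and fused with a small transformer encoder.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, dims=(128, 16, 8), d_model=64, n_classes=4):
        super().__init__()
        # one linear projection per modality: imaging, fluid biomarkers, clinical scores
        self.projections = nn.ModuleList([nn.Linear(d, d_model) for d in dims])
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, imaging, fluid, clinical):
        # each modality becomes one "token" in the fused sequence
        tokens = torch.stack(
            [proj(x) for proj, x in zip(self.projections, (imaging, fluid, clinical))], dim=1
        )
        fused = self.encoder(tokens).mean(dim=1)  # simple mean pooling over modalities
        return self.classifier(fused)             # unnormalized class scores

model = MultimodalFusion()
logits = model(torch.randn(2, 128), torch.randn(2, 16), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 4])
{{/code}}
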
55 55  
56 -* Suggests additional tests or biomarkers if gaps are detected in the data.
57 -* Prioritizes tests based on clinical impact and cost-effectiveness.
92 +**Collaboration & Partnerships**
58 58  
59 -----
94 +Neurodiagnoses actively seeks partnerships with data providers to:
60 60  
61 -=== **4. Computational Workflow** ===
96 +* Enable API-based data integration for real-time processing.
97 +* Co-develop harmonized AI-ready datasets with standardized annotations.
98 +* Secure funding opportunities through joint grant applications.
62 62  
63 -1. **Data Loading**: Import data from storage (Drive or Bucket).
64 -1. **Feature Engineering**: Generate derived features from the raw data.
65 -1. **Model Training**:
66 -1*. Split data into training, validation, and test sets.
67 -1*. Train models with cross-validation to ensure robustness.
68 -1. **Evaluation**:
69 -1*. Metrics: Accuracy, F1-Score, AUIC for interpretability.
70 -1*. Compare against baseline models and domain benchmarks (a minimal cross-validation sketch follows this list).
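For the earlier revision's workflow above, a minimal cross-validation sketch might look like the following; the synthetic data, Random Forest estimator, and F1 scoring are placeholders rather than the project's actual pipeline.

{{code language="python"}}
# Minimal illustration of the split / cross-validation / evaluation steps described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # stand-in data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="f1")  # cross-validation
model.fit(X_train, y_train)
print("CV F1:", cv_scores.mean(), "| held-out F1:", f1_score(y_test, model.predict(X_test)))
{{/code}}
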
100 +**Interested in Partnering?**
71 71  
72 -----
102 +If you represent a research consortium or database provider, reach out to explore data-sharing agreements.
73 73  
74 -=== **5. Validation** ===
104 +**Contact**: [[info@neurodiagnoses.com>>mailto:info@neurodiagnoses.com]]
75 75  
76 -==== **Internal Validation** ====
77 -
78 -* Test the system using simulated datasets and known clinical cases.
79 -* Fine-tune models based on validation results.
80 -
81 -==== **External Validation** ====
82 -
83 -* Collaborate with research institutions and hospitals to test the system in real-world settings.
84 -* Use anonymized patient data to ensure privacy compliance.
85 -
86 -----
87 -
88 -=== **6. Collaborative Development** ===
89 -
90 -The project is open to contributions from researchers, clinicians, and developers. Key tools include:
91 -
92 -* **Jupyter Notebooks**: For data analysis and pipeline development.
93 -** Example: [[probabilistic imputation>>https://drive.ebrains.eu/f/4f69ab52f7734ef48217/]]
94 -* **Wiki Pages**: For documenting methods and results.
95 -* **Drive and Bucket**: For sharing code, data, and outputs.
96 -* **Collaboration with related projects**: for instance, [[//Beyond the hype: AI in dementia – from early risk detection to disease treatment//>>https://www.lethe-project.eu/beyond-the-hype-ai-in-dementia-from-early-risk-detection-to-disease-treatment/]]
97 -
98 -----
99 -
100 -=== **7. Tools and Technologies** ===
101 -
102 -* **Programming Languages**: Python for AI and data processing.
103 -* **Frameworks**:
104 -** TensorFlow and PyTorch for machine learning.
105 -** Flask or FastAPI for backend services.
106 -* **Visualization**: Plotly and Matplotlib for interactive and static visualizations.
107 -* **EBRAINS Services**:
108 -** Collaboratory Lab for running Notebooks.
109 -** Buckets for storing large datasets.
106 +
workflow neurodiagnoses.png
Author
... ... @@ -1,0 +1,1 @@
1 +XWiki.manuelmenendez
Size
... ... @@ -1,0 +1,1 @@
1 +157.5 KB