
Changes for page HPC Resources in EBRAINS

Last modified by korculanin on 2025/01/29 10:46

From version 5.1
edited by korculanin
on 2024/06/10 11:35
Change comment: There is no comment for this version
To version 11.1
edited by korculanin
on 2024/06/10 13:11
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -14,10 +14,14 @@
14 14  (((
15 15  The high-performance computing services for EBRAINS include:
16 16  
17 -* Scalable Computing Services, massively parallel HPC systems that can, e.g., be used for large-scale simulations or heavy machine-learning workloads.
18 -* Interactive Computing Services that allow the use of high-end single compute servers, e.g., to analyse and visualise data interactively or to connect to running simulations, which use scalable computing services.
19 -* Storage
20 -
17 +* Scalable Computing Services
18 +** Massively parallel HPC systems that are suitable for highly parallel brain simulations or for high-throughput data analysis tasks.
19 +* Interactive Computing Services
20 +** Quick access to single compute servers to analyse and visualise data interactively, or to connect to running simulations that use the scalable computing services.
21 +* Active Data Repositories
22 +** Site-local data repositories close to computational and/or visualisation resources, used for storing temporary replicas of data sets. In the near future they will typically be realised using parallel file systems.
23 +* Archival Data Repositories
24 +** Federated data storage, optimised for capacity, reliability and availability, used for the long-term storage of large data sets that cannot easily be regenerated. These data stores allow the sharing of data with other researchers inside and outside of HBP.
21 21  
22 22  Currently, we have in-kind resources available for EBRAINS users at:
23 23  
... ... @@ -24,20 +24,23 @@
24 24  * CINECA: Galileo100 (scalable + interactive)
25 25  ** 3 M core-h, for EBRAINS members (developers) only
26 26  * JSC: JUSUF Cluster (available until 03/25)
27 -** core-h according to availability
31 +** core-h according to availability 
32 +
28 28  
29 29  Further HPC resources are available through regular national calls at:
30 -- France: [[CEA>>https://www.edari.fr]]
31 -- Germany: [[JSC>>https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/apply-for-computing-time]]
32 -- Italy: [[CINECA>>https://www.hpc.cineca.it/hpc-access/access-cineca-resources/iscra-projects/]] 
33 -- Switzerland: [[CSCS>>https://www.cscs.ch/user-lab/allocation-schemes]]
34 34  
36 +* France: [[CEA>>https://www.edari.fr]]
37 +* Germany: [[JSC>>https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/apply-for-computing-time]]
38 +* Italy: [[CINECA>>https://www.hpc.cineca.it/hpc-access/access-cineca-resources/iscra-projects/]]
39 +* Switzerland: [[CSCS>>https://www.cscs.ch/user-lab/allocation-schemes]] 
40 +
41 +
35 35  In most cases, a prerequisite for the national calls is that the PI be affiliated with an academic institution located in the same country as the hosting site.
36 36  
37 37  
38 38  = (% style="font-family:inherit" %)How to apply for the EBRAINS in-kind resources(%%) =
39 39  
40 -Applying for EBRAINS in-kind resources is straightforward and requires only a technical review. Use the provided [template](URL) and send it to [[base-infra-resources@ebrains.eu>>mailto:base-infra-resources@ebrains.eu]].
47 +Applying for EBRAINS in-kind resources is straightforward and requires only a technical review. Use the provided [[template>>https://drive.ebrains.eu/f/46b93975bde64a2e92e9/]] and send it to [[base-infra-resources@ebrains.eu>>mailto:base-infra-resources@ebrains.eu]].
41 41  
42 42  
43 43  = (% style="font-family:inherit" %)Contact(%%) =