

Changes for page HPC Resources in EBRAINS

Last modified by korculanin on 2025/01/29 10:46

From version 9.1
edited by korculanin
on 2024/06/10 13:07
Change comment: There is no comment for this version
To version 5.1
edited by korculanin
on 2024/06/10 11:35
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -14,18 +14,10 @@
14 14  (((
15 15  The high-performance computing services for EBRAINS include:
16 16  
17 -* Scalable Computing Services
18 -Massively parallel HPC systems that are suitable for highly parallel brain simulations or for high-throughput data analysis tasks.
19 -* Interactive Computing Services
20 -Quick access to single compute servers to analyse and visualise data interactively, or to connect to running simulations that use the scalable compute services.
21 -* (((
22 -Active Data Repositories
23 -Site-local data repositories close to computational and/or visualization resources that are used for storing temporary replicas of data sets. In the near future they will typically be realised using parallel file systems.
24 -)))
25 -* (((
26 -Archival Data Repositories
27 -Federated data storage, optimized for capacity, reliability and availability, that is used for long-term storage of large data sets which cannot be easily regenerated. These data stores allow the sharing of data with other researchers inside and outside of HBP.
28 -)))
17 +* Scalable Computing Services, massively parallel HPC systems that can be used, e.g., for large-scale simulations or heavy machine-learning workloads.
18 +* Interactive Computing Services that allow the use of high-end single compute servers, e.g., to analyse and visualise data interactively or to connect to running simulations that use the scalable computing services.
19 +* Storage
20 +
29 29  
30 30  Currently, we have in-kind resources available for EBRAINS users at:
31 31  
... ... @@ -32,23 +32,20 @@
32 32  * CINECA: Galileo100 (scalable + interactive)
33 33  ** 3 M core-h, for EBRAINS members (developers) only
34 34  * JSC: JUSUF Cluster (available until 03/25)
35 -** core-h according to availability 
36 -
27 +** core-h according to availability
37 37  
38 38  Further HPC resources are available through regular national calls at:
30 +- France: [[CEA>>https://www.edari.fr]]
31 +- Germany: [[JSC>>https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/apply-for-computing-time]]
32 +- Italy: [[CINECA>>https://www.hpc.cineca.it/hpc-access/access-cineca-resources/iscra-projects/]] 
33 +- Switzerland: [[CSCS>>https://www.cscs.ch/user-lab/allocation-schemes]]
39 39  
40 -* France: [[CEA>>https://www.edari.fr]]
41 -* Germany: [[JSC>>https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/apply-for-computing-time]]
42 -* Italy: [[CINECA>>https://www.hpc.cineca.it/hpc-access/access-cineca-resources/iscra-projects/]]
43 -* Switzerland: [[CSCS>>https://www.cscs.ch/user-lab/allocation-schemes]] 
44 -
45 -
46 46  In most cases, a prerequisite for the national calls is that the PI be affiliated with an academic institution located in the same country as the hosting site.
47 47  
48 48  
49 49  = (% style="font-family:inherit" %)How to apply for the EBRAINS in-kind resources(%%) =
50 50  
51 -Applying for EBRAINS in-kind resources is straightforward and requires only a technical review. Use the provided [[template>>https://drive.ebrains.eu/f/46b93975bde64a2e92e9/]] and send it to [[base-infra-resources@ebrains.eu>>mailto:base-infra-resources@ebrains.eu]].
40 +Applying for EBRAINS in-kind resources is straightforward and requires only a technical review. Use the provided [template](URL) and send it to [[base-infra-resources@ebrains.eu>>mailto:base-infra-resources@ebrains.eu]].
52 52  
53 53  
54 54  = (% style="font-family:inherit" %)Contact(%%) =