L2L - Hyperparameter optimization framework
Vast parameter space exploration using L2L on EBRAINS
What can I find here?
- Notebooks with hands-on examples for running L2L locally and remotely on HPC
- Information on how to set up your own optimizee and select an optimizer (see the sketch after this list)
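As a rough orientation, the sketch below illustrates the optimizee/optimizer split used in L2L-style workflows: the optimizee wraps the simulation to be evaluated for one parameter set, and the optimizer proposes new parameter sets based on the returned fitness. All names here (`MyOptimizee`, `random_search`, the toy parameters) are illustrative assumptions rather than the actual L2L API; the notebooks show the real interface.

```python
# Minimal sketch of the optimizee/optimizer pattern (illustrative only,
# not the actual L2L API; see the EBRAINS notebooks for the real interface).
import random


class MyOptimizee:
    """Wraps the simulation to be evaluated for one parameter set."""

    def create_individual(self):
        # Draw one candidate parameter set (two toy parameters here).
        return {"coupling": random.uniform(0.0, 1.0),
                "noise": random.uniform(0.0, 0.1)}

    def simulate(self, individual):
        # Run the simulation and return a fitness value to be minimized.
        # A real optimizee would launch e.g. a TVB simulation here.
        return (individual["coupling"] - 0.5) ** 2 + individual["noise"] ** 2


def random_search(optimizee, n_iterations=100):
    """Stand-in for an L2L optimizer: keeps the best individual seen so far."""
    best, best_fitness = None, float("inf")
    for _ in range(n_iterations):
        individual = optimizee.create_individual()
        fitness = optimizee.simulate(individual)
        if fitness < best_fitness:
            best, best_fitness = individual, fitness
    return best, best_fitness


if __name__ == "__main__":
    best, fitness = random_search(MyOptimizee())
    print(f"best parameters: {best}, fitness: {fitness:.4f}")
```

In the actual framework, the random search above is replaced by one of the provided optimization algorithms, and the evaluations are distributed across the HPC back-end.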
This workshop features a session on a hyperparameter optimization framework implementing the concept of Learning to Learn (L2L). The framework provides a selection of optimization algorithms and makes use of multiple high-performance computing back-ends (multiple nodes, GPUs) to perform vast parameter space explorations in an automated and parallel fashion (Yegenoglu et al. 2022). During this session, you will learn how to install and use the framework within EBRAINS. A TVB (Sanz Leon et al. 2013) simulation used in a study on a scale-integrated understanding of conscious and unconscious brain states and their mechanisms (Goldman et al. 2021) serves as an example. In that study, a set of 5 model variables was explored to find optimal parametrizations for synchronous and asynchronous brain states. Participants will learn how to launch a TVB simulation on Fenix's high-performance GPU compute back-ends using UNICORE, as sketched below.
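To give a flavour of the UNICORE part, the following is a minimal sketch of submitting a job with the pyunicore client library (assuming a recent pyunicore version). The endpoint URL, access token, script name, and resource values are placeholder assumptions; the tutorial materials provide the actual values for the Fenix site.

```python
# Minimal sketch of submitting a batch job through UNICORE's Python client
# (pyunicore). Endpoint, token, script name, and resources are placeholders;
# use the values given in the tutorial for the Fenix site.
import pyunicore.client as uc_client
import pyunicore.credentials as uc_credentials

BASE_URL = "https://<fenix-site>/rest/core"   # placeholder UNICORE endpoint
TOKEN = "<your-access-token>"                 # placeholder access token

credential = uc_credentials.OIDCToken(TOKEN)
client = uc_client.Client(credential, BASE_URL)

# Job description: here it just calls a hypothetical TVB run script.
job_description = {
    "Executable": "python run_tvb_simulation.py",  # hypothetical script name
    "Resources": {
        "Nodes": "1",
        "Runtime": "30m",
    },
}

job = client.new_job(job_description=job_description, inputs=[])
job.poll()                        # wait until the job has finished
print(job.properties["status"])   # inspect the final job status
```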
For the OCNS 2023 tutorial:
- Please create a JuDoor account
- Register for this project: https://judoor.fz-juelich.de/projects/join/training2323
Who has access?
This page is intended as the landing page for L2L workshops and tutorials on EBRAINS.