L2L - Hyperparameter optimization framework


Vast parameter space exploration using L2L on EBRAINS

What can I find here?

  • Notebooks with hands-on examples of running L2L locally and remotely on HPC systems
  • Information on how to set up your own optimizee and select an optimizer (see the sketch after this list)
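
As a rough orientation before opening the notebooks: L2L separates the optimizee (the simulation whose parameters are explored) from the optimizer (the algorithm that proposes new parameter sets). The sketch below is a self-contained toy that illustrates this split; the class and method names only mirror L2L's concepts and are not the framework's actual API.

    # Conceptual sketch only: illustrates the optimizee/optimizer split used by L2L.
    # Names mirror the L2L concepts (create_individual, simulate) but this is a
    # self-contained toy, not the actual L2L API.
    import random


    class ToyOptimizee:
        """Wraps the simulation to be tuned: proposes parameters, returns a fitness."""

        def create_individual(self):
            # One candidate parameter set (here: two free parameters).
            return {"coupling": random.uniform(0.0, 1.0),
                    "noise": random.uniform(0.0, 0.1)}

        def simulate(self, individual):
            # Stand-in for a real simulation (e.g. a TVB run); returns a fitness value.
            return -((individual["coupling"] - 0.6) ** 2
                     + (individual["noise"] - 0.05) ** 2)


    class ToyRandomSearchOptimizer:
        """Proposes new candidates each generation and keeps the best one."""

        def __init__(self, optimizee, n_generations=20, pop_size=8):
            self.optimizee = optimizee
            self.n_generations = n_generations
            self.pop_size = pop_size

        def run(self):
            best, best_fitness = None, float("-inf")
            for _ in range(self.n_generations):
                population = [self.optimizee.create_individual()
                              for _ in range(self.pop_size)]
                for individual in population:
                    fitness = self.optimizee.simulate(individual)
                    if fitness > best_fitness:
                        best, best_fitness = individual, fitness
            return best, best_fitness


    if __name__ == "__main__":
        optimizee = ToyOptimizee()
        optimizer = ToyRandomSearchOptimizer(optimizee)
        print(optimizer.run())

In the real framework the optimizer runs many such simulations in parallel on the HPC back-end; the notebooks show how this is configured.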

This workshop features a session on a hyperparameter optimization framework implementing the concept of Learning to Learn (L2L). The framework provides a selection of optimization algorithms and makes use of high-performance computing back-ends (multi-node, GPU) to perform vast parameter-space explorations in an automated and parallel fashion (Yegenoglu et al. 2022). During this session, you will learn how to install and use the framework within EBRAINS. A TVB (Sanz Leon et al. 2013) simulation used in a study towards a scale-integrated understanding of conscious and unconscious brain states and their mechanisms (Goldman et al. 2021) will serve as an example. In that study, a set of five model variables was explored to find optimal parametrizations for synchronous and asynchronous brain states. Participants will learn how to launch a TVB simulation on Fenix's high-performance computing GPU back-ends using UNICORE.
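
To give a flavour of the remote-submission step, the sketch below shows how a batch job could be submitted to a Fenix site through UNICORE's Python client (pyunicore). The site URL, the way the access token is obtained, the driver script name, and the resource values are placeholders and assumptions; the tutorial notebooks contain the settings actually used during the session.

    # Hedged sketch: submitting a job to a UNICORE REST endpoint with pyunicore.
    # The site URL, token handling, script name, and resource values are
    # placeholders; consult the tutorial notebooks for the real settings.
    import pyunicore.client as uc_client
    import pyunicore.credentials as uc_credentials

    # An EBRAINS/Fenix OIDC access token is assumed to be available already.
    token = "<EBRAINS access token>"
    site_url = "https://unicore.example-fenix-site.eu/SITE/rest/core"  # placeholder

    credential = uc_credentials.OIDCToken(token)
    client = uc_client.Client(credential, site_url)

    # Minimal UNICORE job description: run a Python script on one node.
    job_description = {
        "Executable": "python",
        "Arguments": ["run_tvb_l2l.py"],          # hypothetical driver script
        "Resources": {"Nodes": "1", "Runtime": "3600"},
    }

    job = client.new_job(job_description=job_description,
                         inputs=["run_tvb_l2l.py"])
    job.poll()                        # wait until the job has finished
    print(job.properties["status"])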

For the OCNS 2023 tutorial:

Who has access?

This page is intended as the landing page for L2L workshops and tutorials on EBRAINS.