Workshop

Abstract

Mapping behavioral actions to neural activity is a fundamental goal of neuroscience. As our ability to record large-scale neural and behavioral data increases, there is growing interest in modeling neural dynamics during adaptive behaviors to probe neural representations. In particular, neural latent embeddings can reveal underlying correlates of behavior, yet we lack non-linear techniques that can explicitly and flexibly leverage joint behavioral and neural data to uncover neural dynamics.

In recent work, we fill this gap with a novel encoding method, CEBRA, which jointly uses behavioral and neural data in a (supervised) hypothesis-driven or (self-supervised) discovery-driven manner to produce consistent, high-performance latent spaces. We show that consistency can be used as a metric for uncovering meaningful differences, and that the inferred latents can be used for decoding.
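
The two usage modes can be illustrated with a minimal sketch, assuming the cebra Python package and its scikit-learn-style CEBRA estimator; the arrays, shapes, and hyperparameter values below are illustrative placeholders, not a prescription.

```python
# Minimal sketch of CEBRA's two usage modes (placeholder data and settings).
import numpy as np
from cebra import CEBRA

neural = np.random.randn(1000, 120)    # placeholder: time points x neurons
behavior = np.random.randn(1000, 3)    # placeholder: continuous behavior labels

# Hypothesis-driven (supervised): embed neural data conditioned on behavior labels.
cebra_behavior = CEBRA(model_architecture="offset10-model",
                       batch_size=512,
                       output_dimension=3,
                       max_iterations=5000)
cebra_behavior.fit(neural, behavior)
embedding_behavior = cebra_behavior.transform(neural)

# Discovery-driven (self-supervised): label-free, using time-contrastive learning.
cebra_time = CEBRA(model_architecture="offset10-model",
                   batch_size=512,
                   output_dimension=3,
                   max_iterations=5000,
                   conditional="time")
cebra_time.fit(neural)
embedding_time = cebra_time.transform(neural)
```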

We validate its accuracy and demonstrate the tool’s utility on both calcium imaging and electrophysiology datasets, across sensory and motor tasks, and for simple and complex behaviors across species. CEBRA can leverage single- and multi-session datasets for hypothesis testing, or it can be used label-free. Lastly, we show that CEBRA can be used to map space and uncover complex kinematic features, that it produces consistent latent spaces across 2-photon and Neuropixels recordings, and that it can provide rapid, high-accuracy decoding of natural movies from visual cortex.

In the workshop, we will discuss the algorithmic basis of self-supervised representation learning and the fundamentals behind CEBRA. We will then apply CEBRA in its different usage modes in a variety of hands-on sessions, including the decoding step sketched below. Participants are welcome to bring their own datasets or to explore CEBRA on the open-source datasets we provide.
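
As a preview of the decoding covered in the hands-on sessions, here is a hedged sketch using a plain scikit-learn kNN regressor applied to a latent embedding; the arrays are random placeholders, and in practice the embedding would come from CEBRA.transform() and the labels from the recorded behavior.

```python
# Sketch of decoding behavior from a latent embedding with a kNN regressor
# (placeholder arrays stand in for a CEBRA embedding and behavioral labels).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

embedding = np.random.randn(1000, 3)   # placeholder latent embedding (time x dims)
labels = np.random.randn(1000, 2)      # placeholder behavioral variables

X_train, X_test, y_train, y_test = train_test_split(
    embedding, labels, test_size=0.2, shuffle=False)

decoder = KNeighborsRegressor(n_neighbors=25)
decoder.fit(X_train, y_train)
print("decoding R^2:", decoder.score(X_test, y_test))
```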

Schedule

  • 9:00 – 10:40: Introduction to self-supervised learning and CEBRA

  • 10:40 – 11:00: Coffee Break I

  • 11:00 – 12:30: Hands-on: CEBRA fundamentals, usage modes, decoding, visualization

  • 12:30 – 13:30: Lunch break

  • 13:30 – 15:00: Hands-on: Consistency, hypothesis testing, advanced usage modes

  • 15:00 – 15:20: Coffee Break II

  • 15:20 – 17:00: Hands-on: Advanced usage modes, applications to your own datasets

About the Speaker

Steffen Schneider

Research Assistant at EPFL

Biography

Steffen Schneider is an incoming PI at Helmholtz Munich, where he will work on representation learning and inference algorithms for dynamical systems. He is currently a final-year PhD student at IMPRS-IS Tübingen and EPFL in the ELLIS PhD & Postdoc program, working with Matthias Bethge and Mackenzie Mathis on robust machine learning methods for scientific inference. His doctoral studies are funded by a Google AI PhD Fellowship. Previously, he interned at Meta AI in New York with Laurens van der Maaten and Ishan Misra, interned at Amazon Web Services with Matthias Bethge, Peter Gehler, and Bernhard Schölkopf, and worked as an AI Resident on self-supervised learning for speech processing at Meta AI in Menlo Park, CA, with Michael Auli, Alexei Baevski, and Ronan Collobert. He is also a co-founder of the German non-profit organization KI macht Schule, which develops AI teaching materials for school education, and a co-founder of the machine learning startup Kinematik AI.