01.10.2025 - Distinguished Lecture Series: Stefanie Jegelka (TU Munich)

We are pleased to announce our upcoming Distinguished Lecture Series talk by Stefanie Jegelka (TU Munich)! The talk will take place in person on October 1 in room UN32.101. Professor Jegelka will also be available for meetings on October 1. If you are interested in scheduling a meeting, please get in touch by email.

Stefanie Jegelka is a Humboldt Professor at TU Munich and an Associate Professor in the Department of EECS at MIT. Before joining MIT, she was a postdoctoral researcher at UC Berkeley and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award, a Best Paper Award at ICML, and an invited sectional lecture at the International Congress of Mathematicians. She has co-organized multiple workshops on (discrete) optimization in machine learning, graph representation learning, weight space learning, and other related topics, and has served as an Action Editor at JMLR and as a program chair of ICML 2022.

Title: Does computational structure tell us about deep learning? Some thoughts and examples

Understanding and steering deep learning training and inference is a nontrivial endeavor. In this talk, I will look at training, learning, and inference from the perspective of computational structure, via a few diverse examples. First, computational structure may help us understand expressiveness and biases in deep learning models. For instance, it can connect graph neural networks to semidefinite programs (SDPs), indicating their ability to learn optimal approximation algorithms. Looking at LLMs, the graphical structure of computation helps explain inherent biases in the model, such as preferences for certain positions in long contexts, i.e., a preference for attending to the beginning and the end of a sequence. Second, computational structure exists not only in the architecture but also in inference procedures such as chain-of-thought. Finally, if time permits, we will connect architectural structure, via neural parameter symmetries, to the training and loss landscape of deep models and explore the effect of removing symmetries.

Date: October 1, 2025
Time: 9:45 - 11:15 CEST
Place: Universitätsstraße 32.101, Campus Vaihingen of the University of Stuttgart.

Looking forward to seeing you all there! No registration necessary.