Schedule

This workshop will be held in person at ICML 2023 at the Hawaii Convention Center on July 28, 2023. The session will cover a tutorial, invited talks, contributed talks, and posters. The schedule in Hawaii Standard Time (GMT-10) can be found below, with talk abstracts included under each entry.


Hawaii Time Type Title & Speakers
9:00 - 9:45 Tutorial On Optimal Control and Machine Learning
Brandon Amos (Meta AI)
This talk tours the optimal control and machine learning methodologies behind recent breakthroughs in the field. These are crucial components for building agents capable of computationally modeling and interacting with our world via planning and reasoning, e.g. for robotics, aircraft, autonomous vehicles, games, economics, finance, and language, as well as agricultural, biomedical, chemical, industrial, and mechanical systems. We will start with 1) a lightweight introduction to optimal control, and then cover 2) machine learning for optimal control --- this includes reinforcement learning and overviews how the powerful abstractive and predictive capabilities of machine learning can drastically improve every part of a control system; and 3) optimal control for machine learning --- surprisingly, in this opposite direction, some machine learning problems can be formulated as control problems and solved with optimal control methods, e.g. parts of diffusion models, optimal transport, and optimizing the parameters of models such as large language models with reinforcement learning.
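As a concrete reference point for part 1) of the tutorial, here is a minimal sketch of a classical optimal control solver: finite-horizon discrete-time LQR via backward Riccati recursion. The toy double-integrator dynamics and all names below are illustrative, not taken from the tutorial.

```python
import numpy as np

def lqr_backward_pass(A, B, Q, R, Q_T, horizon):
    """Finite-horizon discrete LQR: returns feedback gains K_t such that
    u_t = -K_t x_t minimizes sum_t (x'Qx + u'Ru) + x_T' Q_T x_T."""
    P = Q_T
    gains = []
    for _ in range(horizon):
        # Riccati recursion, run backwards in time
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder so gains[0] applies at t = 0

# Toy double-integrator example (illustrative)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q, R, Q_T = np.eye(2), 0.1 * np.eye(1), 10.0 * np.eye(2)
gains = lqr_backward_pass(A, B, Q, R, Q_T, horizon=50)

x = np.array([[1.0], [0.0]])  # initial state: position 1, velocity 0
for K in gains:
    u = -K @ x                # closed-loop optimal control
    x = A @ x + B @ u         # simulate dynamics forward
```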
9:45 - 10:30 Invited Talk Two-for-one: Diffusion Models and Force Fields for Coarse-Grained Molecular Dynamics
Rianne van den Berg (Microsoft Research)
In this talk I will cover work from the Microsoft Research AI4Science team on the use of score-based generative modeling for coarse-graining (CG) molecular dynamics simulations. By training a diffusion model on protein structures from molecular dynamics simulations, we show that its score function approximates a force field that can be used directly to simulate CG molecular dynamics. While having a vastly simplified training setup compared to previous work, we demonstrate that our approach leads to improved performance across several small- to medium-sized protein simulations, reproducing the CG equilibrium distribution and preserving dynamics of all-atom simulations such as protein folding events.
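As a rough sketch of how a learned score can stand in for a coarse-grained force field (my illustration, not the speakers' code; `score_model` is a placeholder for any trained network), overdamped Langevin dynamics can be integrated directly with the score:

```python
import torch

def langevin_step(x, score_model, kT=1.0, friction=1.0, dt=1e-3):
    """One step of overdamped Langevin dynamics driven by a learned score.
    The learned score s_theta(x) ~ grad_x log p(x) acts, up to a factor of kT,
    as an effective coarse-grained force F(x) ~ kT * s_theta(x)."""
    force = kT * score_model(x)                # effective CG force from the score
    noise = torch.randn_like(x)                # thermal fluctuation
    return x + (dt / friction) * force + (2.0 * kT * dt / friction) ** 0.5 * noise
```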
10:30 - 10:45 Contributed Talk Transport, Variational Inference and Diffusions
Francisco Vargas (Cambridge), Nikolas Nusken (King’s College London)
10:45 - 11:30 Invited Talk Imposing and Learning Structure in OT Displacements through Cost Engineering
Marco Cuturi (Apple & ENSAE CREST)
In this talk I will highlight the flexibility provided by the Gangbo-McCann theorem, which gives a generic way to tie Kantorovich dual potential solutions to optimal maps for the Monge problem. We show in particular how setting the ground cost to the squared Euclidean distance plus a regularizer induces displacements with a structure well suited to that regularizer (e.g. sparse if that regularizer is the L1 norm). In more recent work, we propose an approach to learn the parameters of that regularizer.
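For reference, a hedged restatement of the mapping this builds on, in my own notation (the soft-thresholding example illustrates the L1 case and is not taken from the talk):

```latex
% For ground cost $c(x,y) = h(x-y)$ with $h$ strictly convex, the
% Gangbo--McCann theorem recovers the Monge map from the Kantorovich
% dual potential $f$ as
\[
  T(x) \;=\; x - \nabla h^{*}\!\bigl(\nabla f(x)\bigr).
\]
% With $h(z) = \tfrac{1}{2}\|z\|_2^2 + \tau \|z\|_1$, the gradient of the
% conjugate is coordinate-wise soft-thresholding, so displacements are sparse:
\[
  x - T(x) \;=\; \operatorname{soft}_{\tau}\!\bigl(\nabla f(x)\bigr),
  \qquad
  \operatorname{soft}_{\tau}(w)_i = \operatorname{sign}(w_i)\,\max(|w_i| - \tau,\, 0).
\]
```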
11:30 - 12:15 Invited Talk Designing High-Dimensional Closed-Loop Optimal Control Using Deep Neural Networks
Jiequn Han (Flatiron Institute)
Designing closed-loop optimal control for high-dimensional nonlinear systems remains a persistent challenge. Traditional methods, such as solving the Hamilton-Jacobi-Bellman equation, suffer from the curse of dimensionality. Recent studies introduced a promising supervised learning approach, akin to imitation learning, that uses deep neural networks to learn from open-loop optimal control solutions.
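A minimal sketch of this supervised approach, assuming a placeholder open-loop trajectory optimizer `solve_open_loop` and an initial-state sampler `sample_x0` (both hypothetical, not from the talk): regress a network onto the open-loop optimal controls, then deploy it as a closed-loop feedback policy.

```python
import torch
import torch.nn as nn

def collect_dataset(solve_open_loop, sample_x0, n_trajectories):
    """Gather (state, control) pairs from open-loop optimal trajectories."""
    states, controls = [], []
    for _ in range(n_trajectories):
        xs, us = solve_open_loop(sample_x0())   # open-loop optimal trajectory
        states.append(xs)
        controls.append(us)
    return torch.cat(states), torch.cat(controls)

def fit_policy(states, controls, state_dim, control_dim, epochs=100):
    """Supervised (imitation-style) regression onto optimal controls."""
    policy = nn.Sequential(nn.Linear(state_dim, 128), nn.Tanh(),
                           nn.Linear(128, control_dim))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(policy(states), controls)  # imitate u*(x)
        loss.backward()
        opt.step()
    return policy  # closed-loop feedback: u = policy(x)
```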
12:15 - 13:45 Poster Session & Lunch
13:45 - 14:30 Invited Talk Safe Learning in Control
Claire Tomlin (UC Berkeley)
In many applications of autonomy in robotics, guarantees that constraints are satisfied throughout the learning process are paramount. We present a controller synthesis technique based on the computation of reachable sets, using optimal control and game theory. Then, we present methods for combining reachability with learning-based methods, to enable performance improvement while maintaining safety, and to move towards safe robot control with learned models of the dynamics and the environment. We will discuss different interaction models with other agents. Finally, we will illustrate these safe learning methods on robotic platforms at Berkeley, discussing applications in automated airspace management and air taxi operations.
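One common way to combine a reachability-based guarantee with a learned controller is a least-restrictive safety filter; the sketch below is an illustration under that assumption, not the speaker's implementation (`value_fn`, `safe_controller`, and the other names are placeholders).

```python
def safety_filter(x, u_learned, value_fn, grad_value_fn, safe_controller,
                  margin=0.0):
    """Least-restrictive safety filter (illustrative).
    `value_fn(x)` is assumed to be a Hamilton-Jacobi reachability value
    function: value_fn(x) > margin means x is safely outside the backward
    reachable set of the failure states. Near or inside the unsafe region,
    fall back to the safe control derived from the value function's gradient."""
    if value_fn(x) > margin:
        return u_learned(x)                        # nominal learned control
    return safe_controller(x, grad_value_fn(x))    # safety-preserving control
```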
14:30 - 14:45 Contributed Talk Bridging Reinforcement Learning Theory and Practice with the Effective Horizon
Cassidy Laidlaw, Stuart Russell, Anca Dragan (UC Berkeley)
14:45 - 15:00 Coffee Break
15:00 - 15:45 Invited Talk Reinforcement Learning and Multi-Agent Reinforcement Learning
Giorgia Ramponi (ETH Zurich)
Reinforcement learning (RL) has emerged as a powerful paradigm for enabling intelligent agents to solve sequential decision-making problems under uncertainty. It has witnessed remarkable successes in various domains, ranging from game-playing agents to autonomous systems. However, as real-world challenges become increasingly intricate and interconnected, there is a need to go beyond the single-agent framework. Multi-agent reinforcement learning (MARL) is an extension of RL that enables multiple agents to learn and interact, introducing a new dimension of complexity and sophistication.

This talk delves into the exciting realm of RL and MARL, exploring the foundational principles, recent advancements, and promising applications of these techniques. We begin by introducing the core concepts of RL. Building upon this foundation, we shift our focus to MARL, where multiple agents learn simultaneously, either cooperating or competing with each other. Then, we examine the challenges posed by MARL, including coordination, communication, and the exploration-exploitation dilemma.
15:45 - 16:00 Contributed Talk Modeling Accurate Long Rollouts with Temporal Neural PDE Solvers
Phillip Lippe (University of Amsterdam), Bastiaan S. Veeling, Paris Perdikaris, Richard E. Turner, Johannes Brandstetter (Microsoft Research AI4Science)
16:00 - 17:00 Poster Session