2025 IEEE CDC Tutorial Session on “Contraction Theory in Control, Optimization, and Learning”

Tutorial session at the 2025 IEEE Conference on Decision and Control

Organizer: Francesco Bullo, UC Santa Barbara

Time: Thursday, December 11, 2025, 16:30-18:30

Place: Rooms Asia I, II, III, and IV, Windsor Convention Center, Rio de Janeiro, Brazil

Invited Tutorial Paper

F. Bullo, S. Coogan, E. Dall’Anese, I. Manchester, and G. Russo, “Advances in Contraction Theory for Robust Optimization, Control and Neural Computation,” 64th IEEE Conference on Decision and Control, December 10-12, 2025 (PDF); to appear on IEEE Xplore

Overview

Contraction theory is a powerful mathematical framework for analyzing the convergence, robustness, and modularity of dynamical systems, optimization algorithms, and learning methods. Originating from the seminal works of Banach, Demidovich, Krasovskii, Desoer, and Slotine, contraction theory provides a unifying set of concepts and tools to systematically study dynamical systems exhibiting exponential stability, incremental stability, and robustness to perturbations and uncertainties. This tutorial will introduce and survey the state of the art in contraction theory, including theoretical foundations, computational methods, robustness properties, and applications to control, optimization, machine learning, and beyond.
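
For readers new to the area, the central definition can be stated in a few lines. One standard formulation (stated here for an arbitrary norm; several equivalent variants exist) is the following, in LaTeX notation:

    A system $\dot{x} = f(x,t)$ is infinitesimally contracting with rate $c > 0$,
    with respect to a norm $\|\cdot\|$ with associated matrix measure $\mu(\cdot)$, if
    \[
      \mu\bigl(D_x f(x,t)\bigr) \;\le\; -c \qquad \text{for all } x, t,
    \]
    where $D_x f$ is the Jacobian of $f$. Any two solutions $x(\cdot)$, $y(\cdot)$ then satisfy
    \[
      \|x(t) - y(t)\| \;\le\; e^{-ct}\, \|x(0) - y(0)\|,
    \]
    so all trajectories converge exponentially toward each other.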

Objectives of the Tutorial Session

This tutorial session aims to achieve the following objectives: (i) introduce the theoretical foundations of contraction theory; (ii) survey computational methods, robustness guarantees, and recent theoretical advances; and (iii) illustrate applications to control, optimization, machine learning, and neural computation.

Schedule

The tutorial session will span two hours, structured into five presentations:

Speaker: Francesco Bullo, UC Santa Barbara
Title: Introduction to Contraction Theory and Advances in Equilibrium Tracking (40 minutes)
Abstract: Contraction theory provides a unifying framework for studying incremental stability, robustness, and convergence in dynamical systems and optimization algorithms. This lecture introduces the historical development of contraction theory and reviews the foundational mathematical results, including Demidovich conditions, incremental stability notions, equilibrium tracking, and robustness guarantees. Emphasis is placed on contraction as a versatile tool applicable to control systems, optimization dynamics, and neural network models.
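
To give a flavor of the equilibrium-tracking results mentioned above, the following is a typical statement, paraphrased here under simplifying assumptions rather than quoted from the talk. Suppose $\dot{x} = f(x, \theta(t))$ is contracting in $x$ with rate $c > 0$, uniformly in the parameter $\theta$, and $f$ is $\ell$-Lipschitz in $\theta$. Then the equilibrium map $\theta \mapsto x^\star(\theta)$ is $(\ell/c)$-Lipschitz, and

    \[
      \limsup_{t \to \infty} \, \|x(t) - x^\star(\theta(t))\|
      \;\le\; \frac{\ell}{c^2} \, \sup_{t} \|\dot{\theta}(t)\|,
    \]

that is, the state tracks the moving equilibrium with an asymptotic error proportional to how quickly the parameter varies.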

Speaker: Emiliano Dall'Anese, Boston University
Title: Contractivity of Interconnected Continuous- and Discrete-Time Systems (20 minutes)
Abstract: Many optimization-based controllers rely on the interplay between continuous-time plant dynamics and discrete-time optimization algorithms. This lecture examines contractivity conditions for systems formed by sampling a continuous-time model and coupling it with a discrete-time iteration. We present conditions on the sampling period and the number of discrete-time steps that guarantee exponential stability of the interconnected system, drawing parallels with classical small-gain results and highlighting implications for online and sampled-data implementations.
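
The following minimal Python sketch illustrates the type of interconnection considered in this talk: a linear plant, discretized exactly over each sampling period, in feedback with a few gradient steps per period on a plant-dependent quadratic cost. The plant, cost, gains, and all numerical values are illustrative choices of ours, not taken from the talk or the tutorial paper.

    import numpy as np
    from scipy.linalg import expm

    # Illustrative interconnection: plant x' = A x + B u, sampled with period T,
    # in feedback with N gradient steps per period on phi(u) = 0.5*||u - K x||^2.
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz plant matrix
    B = np.array([[0.0], [1.0]])
    K = np.array([[1.0, 0.0]])                 # hypothetical target map u*(x) = K x
    T, N, eta = 0.1, 5, 0.5                    # sampling period, inner steps, step size

    Ad = expm(A * T)                                # exact zero-order-hold discretization
    Bd = np.linalg.solve(A, (Ad - np.eye(2)) @ B)   # A^{-1} (e^{AT} - I) B

    x = np.array([[1.0], [0.0]])
    u = np.zeros((1, 1))
    for _ in range(200):
        for _ in range(N):                     # discrete-time optimization iteration
            u = u - eta * (u - K @ x)          # gradient step on phi(u)
        x = Ad @ x + Bd @ u                    # plant evolves over one sampling period

    print(float(np.linalg.norm(x)))            # decays toward 0 if the interconnection contracts

For small enough sampling period T and enough inner steps N, contractivity of each subsystem carries over to the interconnection, in the spirit of the small-gain conditions discussed in the talk.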

Speaker: Giovanni Russo, University of Salerno
Title: Contraction in Neural Networks and Biologically Plausible Optimization (20 minutes)
Abstract: This lecture explores network-level aspects of contraction theory with a focus on neural dynamics that solve convex optimization problems. We discuss a normative framework for translating composite optimization tasks into biologically plausible neural networks and use contraction tools to characterize convergence and emergent phenomena. Examples from control, machine learning, and signal processing illustrate how contraction-based analysis can reveal stability and robustness properties in complex neural architectures.
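
As a hypothetical illustration of neural dynamics that solve a composite convex problem (an example of ours, not taken from the talk), consider a network whose state evolves by a continuous-time proximal-gradient flow: its equilibria are exactly the minimizers of a LASSO problem, and the soft-thresholding nonlinearity plays the role of a neural activation.

    import numpy as np

    # Hypothetical sketch: proximal-gradient neural dynamics for the LASSO problem
    #   min_x 0.5*||A x - b||^2 + lam*||x||_1
    # via the flow  x' = -x + soft(x - gamma * A^T (A x - b), gamma * lam).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    b = rng.standard_normal(20)
    lam, gamma, dt = 0.1, 0.01, 0.05           # illustrative weights and step sizes

    def soft(z, t):                            # soft-thresholding: prox of t*||.||_1
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    x = np.zeros(50)
    for _ in range(5000):                      # forward-Euler integration of the flow
        x = x + dt * (-x + soft(x - gamma * A.T @ (A @ x - b), gamma * lam))

    obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
    print(obj)                                 # objective value at the network equilibrium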

Speaker: Samuel Coogan, Georgia Institute of Technology
Title: Linear Differential Inclusions and Contraction Analysis (20 minutes)
Abstract: Recent computational advances enable the characterization of contraction through linear differential inclusions (LDIs). This lecture presents new LDI-based approaches for establishing contraction toward trajectories of interest, emphasizing techniques based on interval overapproximations, LMI conditions, and ellipsoidal reachable-set computations. The methods provide efficient tools to assess incremental stability and robustness properties in complex nonlinear systems.
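
As one concrete instance (our illustration, using the simplest identity-metric case of the LMI condition A_i^T P + P A_i <= -2 c P): for the l2 norm, the matrix measure is mu2(A) = lambda_max((A + A^T)/2), and evaluating it at the vertex matrices of the LDI certifies a contraction rate for the entire inclusion.

    import numpy as np

    # Illustrative check: the LDI x' = A(t) x with A(t) in conv{A1, A2} is
    # contracting with rate c in the l2 norm if mu2(Ai) <= -c at every vertex,
    # where mu2(A) = lambda_max((A + A^T)/2) is the l2 matrix measure.
    A1 = np.array([[-2.0, 1.0], [0.0, -2.0]])
    A2 = np.array([[-2.0, 0.0], [1.0, -2.0]])

    def mu2(A):
        return np.linalg.eigvalsh((A + A.T) / 2.0).max()

    rate = -max(mu2(A1), mu2(A2))
    print(rate)    # positive => contraction certified; here rate = 1.5

Weighted metrics ||x||_P lead to the general LMI condition above, which can be checked by semidefinite programming; interval and ellipsoidal overapproximations enter when bounding the Jacobian set itself.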

Speaker: Ian Manchester, University of Sydney
Title: Neural Networks Designed with Contraction-Theoretic Guarantees (20 minutes)
Abstract: Novel neural network architectures can be designed to satisfy strong convergence and robustness properties inspired by contraction theory. This lecture introduces REN, BiLipNet, and PLNet networks, highlighting their structural features such as strong monotonicity, bi-Lipschitz invertibility, and Polyak–Łojasiewicz conditions. We discuss how these properties ensure stable input–output behavior, provide certified robustness, and enable efficient computation of global minima in machine learning tasks.
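
For reference, two of the properties named above admit one-line definitions (standard statements, included here for convenience):

    A map $F$ is bi-Lipschitz with constants $0 < \mu \le L$ if
    \[
      \mu \|x - y\| \;\le\; \|F(x) - F(y)\| \;\le\; L \|x - y\|
      \qquad \text{for all } x, y,
    \]
    and a differentiable function $f$ with minimum value $f^\star$ satisfies the
    Polyak–Łojasiewicz inequality with constant $m > 0$ if
    \[
      \tfrac{1}{2} \|\nabla f(x)\|^2 \;\ge\; m \bigl( f(x) - f^\star \bigr)
      \qquad \text{for all } x.
    \]
    The latter guarantees linear convergence of gradient descent to the global
    minimum value even for nonconvex $f$.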

Each presentation will focus on key theoretical concepts, computational approaches, and concrete applications, aiming to be accessible yet rigorous.

About the Speakers

Links to recent educational and research events on contraction theory:

Acknowledgement

Partial funding for this work is provided by the Air Force Office of Scientific Research through grant FA9550-22-1-0059.