# 2023 ACC Workshop on “Contraction Theory for Systems, Control, and Learning”

Organizer: Francesco Bullo, UC Santa Barbara

Full-Day Workshop, in conjunction with the 2023 American Control Conference in San Diego, California on May 30, 2023

Links to recent related educational and research events:

## News

• 2022-10-04: Proposal submission

• 2023-01-22: Approval by ACC organizers

• 2023-05-30: Workshop day

## Workshop Abstract

Much recent research has focused on the application of the Banach contraction principle to control and dynamical systems. Similarly, key problems in machine learning and dynamical neuroscience can be addressed with these tools. Contracting dynamical systems automatically enjoy numerous safety and stability guarantees. Moreover, an important complement to these theoretical tools is given by the increasingly-applied theory of monotone operators. The workshop will present an extensive list of presentations by leading scientists worldwide on (1) the foundations of contraction theory, (2) theoretical developments for complex networks, including progress on synchronization and scalability, (3) computational advances in the design of contraction metrics and contracting dynamical systems solving optimization problems, and (4) applications to machine learning, planning and robust control. Of special interest to the ACC audience will be results on robust stability analysis and control design for deterministic and stochastic systems as well as formal robustness and stability guarantees for various learning-based control problems.
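
As a minimal illustration of the Banach contraction principle mentioned above (our sketch, not workshop material): a map with Lipschitz constant strictly less than one on a complete metric space has a unique fixed point, and simple iteration converges to it from any starting point. Here we iterate the map g(x) = cos(x), which is a contraction on [0, 1].

```python
# Fixed-point iteration illustrating the Banach contraction principle.
# g(x) = cos(x) is a contraction on [0, 1] since |g'(x)| = |sin(x)| <= sin(1) < 1,
# so the iterates x_{k+1} = g(x_k) converge to the unique fixed point
# x* ~ 0.739085 (the so-called Dottie number) from any initial condition.
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = g(x_k) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

x_star = fixed_point(math.cos, 0.5)
print(round(x_star, 6))  # → 0.739085
```

The same one-line iteration underlies many of the computational schemes discussed in the workshop, with the contraction property supplying the convergence guarantee.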

This workshop will bring together experts of diverse backgrounds to discuss recent theoretical and computational advances, identify emerging challenges, and discuss rapidly-developing application opportunities. The workshop should be of interest to both junior and senior researchers interested in theoretical and computational advances in systems, control, and learning. A recent successful tutorial session at the 2021 IEEE CDC confirms the interest of the control community in these topics. If approved by the ACC leadership, a tutorial pre-workshop event will be offered to attendees, based upon a recent freely-available textbook: URL.

## List of speakers (in alphabetical order, organizer last) with titles and abstracts:

• Zahra Aminzare, University of Iowa, USA

• Soon-Jo Chung, Caltech, USA

• Samuel Coogan, Georgia Tech, USA

• Mario Di Bernardo, Università di Napoli, Italy

• Sigurdur F. Hafstein, University of Iceland, Iceland

• Ian R. Manchester, University of Sydney, Australia

• Anton Proskurnikov, Politecnico di Torino, Italy

• Giovanni Russo, Università di Salerno, Italy

• Rodolphe Sepulchre, Cambridge University, UK

• Jean-Jacques Slotine, MIT, USA

• Francesco Bullo, UC Santa Barbara, USA

## Abstracts

Speaker: Zahra Aminzare, University of Iowa
Title: A contraction-based approach to study synchronization properties of complex networks
Abstract: Contraction theory provides an elegant way to analyze the behavior of certain nonlinear dynamical systems. In this lecture, we discuss the application of contraction to network synchronization. First, we review conditions that guarantee synchronization in networks of homogeneous and deterministic systems which are diffusively coupled. Motivated by neural networks, we allow heterogeneity, stochasticity, and non-diffusive coupling across the network and show how these factors may affect synchronization properties.

Speaker: Soon-Jo Chung, Caltech
Title: Contraction Theory for ML-based Control and Planning
Abstract: Contraction theory takes advantage of a superior robustness property of exponential stability used in conjunction with the comparison lemma. This yields much-needed safety and stability guarantees for neural network-based control and estimation schemes, without resorting to a more involved method of using uniform asymptotic stability for input-to-state stability. Such distinctive features permit systematic construction of a contraction metric via convex optimization, thereby obtaining an explicit exponential bound on the distance between a time-varying target trajectory and solution trajectories perturbed externally due to disturbances and learning errors. I will present two examples of using contraction theory for learning-based control: (1) Neural-Fly (ML-based flight control with rapid adaptation of neural networks), and (2) Neural-Rendezvous (a deep learning-based guidance and control framework for encountering any fast-moving objects robustly, accurately, and autonomously in real-time).

Speaker: Sam Coogan, Georgia Tech
Title: Results on Monotonicity and Contraction on Polyhedral Cones
Abstract: We consider contractive systems that are also monotone with respect to polyhedral cones. In particular, we provide necessary and sufficient conditions for a nonlinear system to be contractive with respect to an appropriate norm defined by the cone. These conditions generalize known results for contractive cooperative systems and allow for computationally scalable methods for checking contractivity.

Speaker: Mario Di Bernardo, Università di Napoli
Title: Convergence and contraction in piecewise smooth systems and networks
Abstract: In this talk I will review our work on the extension of contraction theory to study convergence in piecewise-smooth dynamical systems. After classifying the different types of PWS vector fields of interest in terms of their degree of discontinuity, I will present approaches to study their contraction properties using a single or multiple norms. I will then discuss how to prove convergence in networks of PWS dynamical agents expounding how some of the required properties of the agents’ vector fields can be interpreted in terms of their contractivity. Examples will be used to illustrate the main theoretical derivations.

Speaker: Sigurður F. Hafstein, University of Iceland
Title: Numerical computation of contraction metrics
Abstract: We discuss numerical methods for the computation of contraction metrics for ODE systems with stable equilibria or periodic orbits. We present approximation using generalized interpolation in reproducing kernel Hilbert spaces or integration-quadrature formulas from converse theorems. Further, we present a semidefinite optimization problem, whose feasible solutions deliver true contraction metrics (not approximations). Finally, we show how approximations can be used to deliver feasible solutions to the optimization problems, providing a numerically efficient method to compute true contraction metrics.
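
As a simple numerical companion to the topic above (our own sketch, not the speaker's method): a basic sufficient condition for contraction in the Euclidean norm is that the logarithmic 2-norm (matrix measure) of the system Jacobian is uniformly negative, i.e. mu_2(J(x)) <= -c < 0.

```python
# Numerical contractivity check via the logarithmic 2-norm (matrix measure):
# mu_2(A) = lambda_max((A + A^T) / 2). A system x' = f(x) is contracting in the
# Euclidean norm with rate c if mu_2(J(x)) <= -c for all x, where J is the Jacobian.
import numpy as np

def log_norm_2(A):
    """Logarithmic 2-norm: largest eigenvalue of the symmetric part of A."""
    return float(np.linalg.eigvalsh((A + A.T) / 2).max())

# Example: a linear system x' = A x, whose Jacobian is A everywhere.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
mu = log_norm_2(A)
print(mu < 0)  # → True: contracting in the Euclidean norm
```

The methods in the talk go well beyond this fixed Euclidean metric: they compute state-dependent contraction metrics M(x) for which an analogous matrix inequality holds along trajectories.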

Speaker: Ian R. Manchester, University of Sydney
Title: Robust learning of dynamics and feedback policies via contracting neural models
Abstract: In this tutorial, we will introduce a new approach to building nonlinear dynamical models with built-in behavioural guarantees. We show how to construct smooth and unconstrained parameterisations of neural model architectures which are guaranteed to satisfy prescribed incremental quadratic constraints. These can include l2 Lipschitz bounds, incremental passivity, and other constraint types. These “direct” parameterisations enable learning of robust models and control policies via simple first-order methods, without any auxiliary constraints or projections. We will illustrate the approach in the context of system identification, observer design, and reinforcement learning.

Speaker: Anton Proskurnikov, Politecnico di Torino
Title: Non-quadratic S-Lemma and Contractivity of Lur'e-type Systems
Abstract: The S-Lemma was proposed by Yakubovich as a criterion of the existence of a quadratic Lyapunov function in the Lur’e problem of absolute stability. A natural question arises: can non-Euclidean norms (or squared norms) serve as Lyapunov functions in stability problems? This talk presents a novel non-polynomial S-Lemma that leads to constructive criteria for the existence of Lyapunov functions of the type $$V(x)=\|Rx\|_p^2$$ (squared weighted $$\ell_p$$-norm).

Speaker: Giovanni Russo, Università di Salerno
Title: On the design of scalable network systems: a contraction theory approach
Abstract: Over the last few years, network systems have considerably increased their size and complexity. For these systems, a key challenge is that of designing control protocols for the network that not only guarantee the fulfilment of some desired behavior, but also satisfy the following key requirements: (i) rejection of certain classes of disturbances; (ii) non-amplification of the disturbances that are not rejected. In this talk, the fulfilment of these requirements is captured via a scalability property. Subsequently, it is shown how non-Euclidean contraction can be leveraged to design protocols ensuring network scalability. Application examples are leveraged to illustrate the results.

Speaker: Rodolphe Sepulchre, Cambridge University
Title: How to bridge internal and external contraction?
Abstract: The distinction between internal (or Lyapunov) and external (or input-output) stability is a pillar of system theory. This talk will reflect on the incremental form of those concepts, starting with the distinction between internal (or Lyapunov) and external (or input-output) contraction. Our particular focus will be on the incremental form of dissipativity. Dissipativity theory provides a fruitful bridge between internal and external stability, but we will argue that its incremental form faces fundamental limitations. We will report on current research avenues to overcome those limitations.

Speaker: Jean-Jacques Slotine, MIT
Title: Twenty-five years of contraction analysis
Abstract: It has been twenty-five years since the paper in Automatica by Lohmiller and Slotine introduced contraction analysis to the nonlinear control community, outlining the role of differential analysis using state-dependent Riemannian metrics and its many potential applications. Research in this domain is now very active, and we will review recent work in our group on applications to machine learning and to non-autonomous partial differential equations.

Speaker: Francesco Bullo, UC Santa Barbara
Title: On contraction theory and monotone operators for control and learning
Abstract: I will present recent results on contraction theory and its twin theory of monotone operators, motivated by the study of neural networks. We will review problems inspired by neuroscience and by machine learning. We will study the interplay between discrete and continuous time dynamics. We will draw examples from implicit models in machine learning, biologically plausible learning in neuroscience, and numerical optimal control problems.
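
One concrete instance of the discrete/continuous-time interplay mentioned above (a hedged sketch of our own, not the talk's content): the gradient flow x' = -∇f(x) of a strongly convex f is contracting, and its forward-Euler discretization inherits contractivity for a small enough step size, so the iterates converge to the minimizer from any initial condition.

```python
# Forward-Euler discretization of a contracting gradient flow x' = -grad f(x),
# here for f(x) = 0.5 * x^T Q x with Q positive definite (minimizer x* = 0).
# The Euler map x_{k+1} = x_k - h * grad f(x_k) is a contraction when
# h < 2 / lambda_max(Q), with contraction factor max_i |1 - h * lambda_i(Q)| < 1.
import numpy as np

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])  # positive definite (assumed example data)

def grad_f(x):
    return Q @ x

h = 0.1  # step size, well below 2 / lambda_max(Q) ~ 0.55
x = np.array([5.0, -4.0])
for _ in range(200):
    x = x - h * grad_f(x)
print(np.allclose(x, np.zeros(2), atol=1e-8))  # → True
```

Because the map is a contraction, the distance between any two solution sequences shrinks geometrically, which is precisely the property that yields the robustness guarantees discussed throughout the workshop.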

## Proceeds

After costs and taxes, all remaining proceeds from the workshop will be donated to the Graduate Student Fellowship Fund for the Mechanical Engineering Department at UC Santa Barbara.

## Acknowledgement

Partial funding for this work is provided by the Air Force Office of Scientific Research through grant FA9550-22-1-0059.