Tutorials

ACC 2023 will offer tutorial sessions that provide introductions to topics of interest from both academia and industry. The tutorials will take place in person during the conference. Below is a tentative schedule of the tutorials; more information is available in the last column of the Program At A Glance.

Title | Date & Time | Room | Session ID
Combining Physics and Machine Learning Methods to Accelerate Innovation in Sustainability: A Control Perspective | May 31, 2023, 10:00-11:30 | Aqua 314 | WeA17
A Tutorial on Policy Learning Methods for Advanced Controller Representations | May 31, 2023, 14:00-15:30 | Aqua 314 | WeB17
Data Driven Control | May 31, 2023, 16:00-17:30 | Aqua 314 | WeC17
Control of Floating Wind Energy Systems | June 1, 2023, 10:00-11:30 | Aqua 314 | ThA17
Decomposition and Decomposition-Based Algorithms for Control and Optimization of Large-Scale Systems | June 1, 2023, 14:30-16:00 | Aqua 314 | ThB17
Safe and Constrained Rendezvous, Proximity Operations and Docking | June 2, 2023, 10:00-11:30 | Aqua Salon D | FrA13
Physics-Informed Machine Learning for Modeling and Control of Dynamical Systems: Opportunities and Challenges | June 2, 2023, 10:00-11:30 | Aqua 313 | FrA16
A Tutorial on Real-Time Computing Issues for Control | June 2, 2023, 10:00-11:30 | Aqua 314 | FrA17
Tutorial On: Game Theory for Autonomy: From Min-Max Optimization to Equilibrium and Bounded Rationality Learning | June 2, 2023, 13:30-15:00 | Aqua 314 | FrB17

Tutorial Abstracts

The growing market for lithium-ion batteries in consumer electronics, automobiles, unmanned aerial vehicles, and the power grid has underscored the need for a properly designed advanced Battery Management System (BMS) that can ensure the battery system's reliability and performance. One key aspect of an advanced BMS is monitoring critical battery variables of interest, such as State of Charge (SOC) and State of Health (SOH), which cannot be measured directly by sensors, and using this information to devise online control strategies that utilize the batteries safely and effectively. Improving the efficiency and utilization of battery systems can increase the viability and cost-effectiveness of existing energy storage technologies used for transportation and grid storage.
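
For readers new to the topic, the simplest SOC estimate is coulomb counting, which integrates measured current over time; its drift under sensor bias and noise is one reason advanced BMS designs combine it with model-based observers. Below is a minimal illustrative sketch (all cell parameters are hypothetical and not taken from the tutorial):

```python
# Minimal coulomb-counting SOC estimator (illustrative only; the cell
# parameters and current profile are hypothetical).
import numpy as np

def coulomb_count(soc0, current_a, dt_s, capacity_ah):
    """Integrate measured current to track State of Charge (SOC).

    soc0        : initial SOC in [0, 1]
    current_a   : array of current samples in amperes (+ = discharge)
    dt_s        : sample period in seconds
    capacity_ah : nominal cell capacity in ampere-hours
    """
    capacity_as = capacity_ah * 3600.0           # convert Ah -> ampere-seconds
    soc = soc0 - np.cumsum(current_a) * dt_s / capacity_as
    return np.clip(soc, 0.0, 1.0)

# Example: a 2.5 Ah cell discharged at 1 A for one hour loses 0.4 of its SOC.
current = np.full(3600, 1.0)                     # 1 A for 3600 s
soc = coulomb_count(soc0=1.0, current_a=current, dt_s=1.0, capacity_ah=2.5)
print(f"final SOC = {soc[-1]:.3f}")              # ~0.600
```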

Artificial intelligence/machine learning (AI/ML) methods for predicting battery life have received significant attention recently, especially for electric vehicle batteries. Yet such lifetime predictions are restricted to a fixed (dis)charging profile and are not transferable to other usage profiles. The main drawback of purely AI/ML-based prediction models is their inability to extrapolate to operating regimes not covered during model training. More importantly, they do not offer generalizable features that can be transferred across multiple cell types and chemistries. Physics-based models (PBMs) describing non-measurable internal states (e.g., the states of lithiation of the cathode and anode) are well established in the battery community. These models are built on first-principles transport and electrochemical equations, which transfer readily from one chemistry or cell variation to another. Yet parameter identifiability remains a significant challenge: the parameters of these models often cannot be determined uniquely, in part because their values depend on both the (controllable) excitation profiles (e.g., current or power) and (uncontrollable) external conditions (e.g., temperature).

A thorough overview of the physics-based modeling tools and data-driven approaches for life prediction available today will be provided, along with the pros and cons of each solution.

Reinforcement learning (RL) is a subfield of machine learning that focuses on how to use past system measurements to improve the future manipulation/control of a dynamic system. RL can be viewed as a collection of (often approximate) solution approaches to stochastic optimal control problems that can be used to represent optimal state-to-action policies in uncertain control systems. Even though RL has been around for several decades, it gained massive publicity in 2016 when an RL-based system beat one of the best Go players in the world, a feat previously thought impossible due to the game's massive state/action space. Although nothing short of amazing, such game applications have a few useful characteristics that often do not hold in real-world engineering problems: (i) all system states can be measured; (ii) all measurements are perfect (noise-free); and (iii) huge amounts of data can be collected across many different conditions. This dramatic progress in RL does beg the question: can we exploit the same methods to solve challenging next-generation control tasks such as self-driving vehicles, agile robotic systems, and smart manufacturing? For RL to expand into such areas, however, the methods must work safely and reliably, especially when the three previous assumptions are not satisfied, since the failure of such systems can have severe economic and social consequences (including loss of human life). In this tutorial, we aim to provide an overview of recent advances in policy search RL methods, which can produce interpretable control representations in a data-efficient manner. Three main topics will be discussed in detail: safe policy gradient methods, Bayesian optimization for non-differentiable control laws, and robust imitation learning for accelerating the convergence of these methods. We also plan to demonstrate how these methods work in the context of the next-generation control applications mentioned above.
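
As a toy illustration of the policy search idea (not one of the tutorial's methods), the sketch below uses the REINFORCE score-function estimator to tune a linear feedback gain for a scalar stochastic system; the dynamics, cost, and hyperparameters are all hypothetical:

```python
# Toy policy-gradient (REINFORCE) sketch for a scalar linear system.
# Illustrative only: system, cost, and hyperparameters are made up.
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma, horizon = 0.9, 0.5, 0.1, 20         # dynamics x+ = a x + b u + w

def rollout(k):
    """One episode under the stochastic policy u = -k x + sigma * eps."""
    x, cost, score = 1.0, 0.0, 0.0
    for _ in range(horizon):
        u = -k * x + sigma * rng.standard_normal()
        cost += x**2 + u**2                      # quadratic stage cost
        score += -(u + k * x) * x / sigma**2     # d/dk of log pi(u | x)
        x = a * x + b * u + 0.01 * rng.standard_normal()
    return cost, score

k, lr, baseline = 0.0, 1e-3, None
for it in range(2000):
    cost, score = rollout(k)
    # Running-average baseline reduces the variance of the estimator.
    baseline = cost if baseline is None else 0.9 * baseline + 0.1 * cost
    grad = np.clip((cost - baseline) * score, -100.0, 100.0)
    k -= lr * grad                               # descend the expected cost
print(f"learned feedback gain k = {k:.3f}")
```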

The ushering in of the big-data era, ably supported by exponential advances in computation, has provided new impetus to data-driven control in several engineering sectors. The rapid and deep expansion of this topic has precipitated the need for a showcase of the highlights of data-driven approaches. The control systems community has a rich history of contributions in the area of data-driven control, and several new concepts and research directions have been introduced in recent years. Many of these contributions and concepts have started to transition from theory to practical applications. This paper will provide an overview of the historical contributions and highlight recent concepts and research directions.

The objective of this tutorial session is to provide attendees at ACC 2023 with the opportunity to gain insight into important aspects of the control of floating wind energy systems. Controls research has played an important role in wind energy, and advances in controls are making wind turbines more efficient, reliable, and cost-effective. By placing a wind turbine on a floating platform, large areas of high-wind resource become possible sites for wind power plants. However, floating wind turbines are more dynamic and potentially closed-loop unstable. In addition to their existing objectives of power production maximization and structural load mitigation, wind turbine controllers must avoid large platform oscillations and accommodate wave disturbances. Attendees of this timely tutorial session will learn about the increased challenges in controlling floating wind turbines, a sector expected to grow extremely rapidly over the next decade. While only 123 MW of floating offshore wind power was operational around the world as of December 31, 2021, projections indicate that cumulative floating offshore wind capacity could exceed 8 GW by 2027, a 65-fold increase in five years. Five independent forecasts of floating offshore wind deployment from 2025 to 2050 project more than 10 GW by 2030 and up to 264 GW by 2050.

Given that offshore wind energy is expected to grow rapidly over the next several decades and that the majority of offshore wind resources are located over deep water, floating offshore wind is ultimately expected to be the fastest-growing segment of wind energy, compared with land-based and fixed-bottom offshore wind. As such, a tutorial on the challenges and opportunities in the control of floating offshore wind energy systems is very timely.

Large-scale systems comprising components with complex interactions are ubiquitous in modern engineering, including integrated and intensified chemical processes and supply chains. For efficient and flexible automated decision making in large-scale systems, spanning process control as well as design, production and maintenance scheduling, and supply chain management, it is a natural but also profound idea to decompose the decision-making problem into a set of easier-to-solve subproblems, often corresponding to subsystems that can be controlled or optimized in a distributed architecture. While identifying a "best" decomposition is in principle a combinatorial problem, significant progress has been made on the automatic generation of promising subsystem configurations based on the detection of latent block structures in networks, in particular community and core-periphery structures. Such analysis, enabled by recent advances in network science and machine learning, can be exploited by various decomposition-based (distributed control and optimization) algorithms to reduce the effort required to solve large-scale problems. This tutorial session will review algorithms and methods for decomposition and decomposition-based solution approaches in control and optimization, and offer perspectives on their applications in industry.
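
As a small illustration of structure detection for decomposition, the hypothetical sketch below applies modularity-based community detection (via networkx) to a made-up variable interaction graph; each detected community suggests a candidate subproblem:

```python
# Hypothetical sketch: detecting community structure in an interaction graph
# to suggest subproblems for distributed control or optimization. The graph
# below is made up; in practice it would be built from the problem's sparsity
# pattern (which variables appear in which equations).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# Two tightly coupled process units linked by a single interconnection.
G.add_edges_from([("x1", "x2"), ("x2", "x3"), ("x1", "x3"),   # unit A
                  ("y1", "y2"), ("y2", "y3"), ("y1", "y3"),   # unit B
                  ("x3", "y1")])                              # coupling stream

communities = greedy_modularity_communities(G)
for i, block in enumerate(communities):
    print(f"subsystem {i}: {sorted(block)}")
# Each detected community is a candidate subproblem for a distributed
# controller or a decomposition-based solver.
```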

Within the past few decades, spaceflight has transitioned from an activity driven exclusively by a small number of governmental organizations and nations to a widespread pursuit of a larger number of commercial and academic entities. The low cost and ease of access to space have led to a burgeoning space economy, where large corporations and startup companies regularly launch satellites for commercial applications. The increasing number of spacecraft in orbit, and the increasing complexity of the spacecraft applications in development as part of this new generation of spaceflight, have led to interesting new challenges in spacecraft guidance, navigation, and control. These are problems to which the ACC community can, and does, contribute its unique perspective. One goal of this tutorial session is to provide an entry point to spacecraft control and estimation research. Many advances in spacecraft research have been motivated by foundational work developed for other applications (e.g., aviation, robotics, autonomous vehicles), which motivates inviting researchers from outside this area to join in the development of next-generation spacecraft control and estimation solutions.

This tutorial session focuses on safe and constrained Rendezvous, Proximity Operations, and Docking (RPOD) problems of interest from the academic, U.S. government, and industry perspectives. The session will start with an overview of the guidance, navigation, and control challenges associated with RPOD, as well as a selection of state-of-the-art techniques from the literature that are often used in these applications. The remaining speakers will present the unique complexities involved in RPOD safety and logistics from U.S. government and industry perspectives, in addition to a discussion of convex optimization as a tool for real-time optimal control of aerospace systems. The breadth of the proposed lineup of talks will foster discussion among experts from academia, government, and industry, which we believe will lead to exciting new ideas, applications, and solutions within the realm of safe and constrained RPOD.
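
To give a flavor of convex optimization for RPOD-style problems (an illustrative sketch, not any speaker's formulation), the following cvxpy program computes a minimum-fuel transfer to a docking point, using a double integrator as a stand-in for the actual relative orbital (e.g., Clohessy-Wiltshire) dynamics; all numbers are hypothetical:

```python
# Minimal convex-optimization sketch of a rendezvous-style transfer.
# A planar double integrator stands in for relative orbital dynamics.
import cvxpy as cp
import numpy as np

n, dt = 30, 1.0                                  # horizon steps, step size [s]
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])    # [position; velocity] update
B = np.block([[0.5 * dt**2 * np.eye(2)], [dt * np.eye(2)]])

x = cp.Variable((4, n + 1))                      # relative state trajectory
u = cp.Variable((2, n))                          # thrust acceleration inputs

x0 = np.array([100.0, 50.0, 0.0, 0.0])           # start 100 m x 50 m away
constraints = [x[:, 0] == x0, x[:, n] == 0]      # dock at the origin, at rest
for k in range(n):
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.norm(u[:, k]) <= 1.0]     # thrust magnitude limit

fuel = cp.sum(cp.norm(u, axis=0))                # L1-of-L2 "fuel" proxy
prob = cp.Problem(cp.Minimize(fuel), constraints)
prob.solve()                                     # solved as a conic program
print(f"status: {prob.status}, fuel proxy: {fuel.value:.2f}")
```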

Physics-informed machine learning (PIML) is a set of methods and tools that systematically integrate machine learning (ML) algorithms with physical constraints and abstract mathematical models developed in various scientific and engineering domains. As opposed to purely data-driven methods, which assume no prior domain knowledge, PIML models can be trained with additional information obtained by enforcing physical laws such as energy and mass conservation, informed by scientific and engineering domains such as thermodynamics and chemical/mechanical engineering. More broadly, PIML models can include abstract properties and conditions such as stability, convexity, or invariance from domains such as dynamical systems and control theory. The basic premise of PIML is that integrating ML and physics can yield more effective, physically consistent, and data-efficient models. Recent work applying ML to dynamical systems has demonstrated that, by embedding physical priors into ML, PIML methods can enhance the effectiveness, correctness, safety, and data efficiency of data-driven modeling and control of dynamical systems. Practical applications of PIML for control include automotive systems, energy systems, mechatronic systems and robotics, and process control.
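
The generic PIML recipe described above, fitting data while penalizing violations of known physics, can be sketched in a few lines. The example below (hypothetical, not from the session) trains a small network on sparse samples of the ODE du/dt = -ku, adding the ODE residual at collocation points to the loss:

```python
# Minimal physics-informed training sketch: fit u_theta(t) to sparse data
# while penalizing the residual of the known ODE du/dt = -k u.
import torch

k = 1.0
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

t_data = torch.tensor([[0.0], [2.0]])                 # sparse measurements
u_data = torch.exp(-k * t_data)                       # true solution samples
t_col = torch.linspace(0, 2, 50).reshape(-1, 1)       # collocation points
t_col.requires_grad_(True)                            # needed for du/dt

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    data_loss = ((net(t_data) - u_data) ** 2).mean()  # fit the measurements
    u = net(t_col)
    dudt = torch.autograd.grad(u.sum(), t_col, create_graph=True)[0]
    phys_loss = ((dudt + k * u) ** 2).mean()          # ODE residual penalty
    loss = data_loss + phys_loss
    loss.backward()
    opt.step()
print(f"final combined loss: {loss.item():.2e}")
```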

The goal of this session is to provide a tutorial-like overview of recent advances in PIML for dynamical system modeling and control. We will cover not only the theory, methods, tools, and applications of PIML for modeling and control, but also its opportunities and challenges. The session will start with a tutorial presentation of the fundamental concepts and methods of PIML, followed by a survey of the landscape of PIML for control, including: 1) physics-informed system identification; 2) physics-informed learning-based control; 3) analysis, verification, and uncertainty quantification of PIML models; and 4) physics-informed digital twins. The session will conclude with three short presentations on specific applications of PIML in data-driven modeling and control. We have structured the tutorial to be accessible and useful for new researchers and graduate students interested in this research topic.

Everybody assumes the control algorithm will be implemented on a computer, but few want to know how that actually happens. Often, that means it can't. This oversimplification is the motivation for this tutorial session. We control engineers often spend so much time developing elegant algorithms that we forget that making those algorithms do anything in the physical world requires translating the math into something that can receive signals from sensors and move actuators quickly enough to outrun (or at least keep up with) the physics of the problem.

Making this work in practice is one of the tedious, time-consuming tasks of control engineering. Beyond being tedious, it has neither a closed-form solution nor time invariance. The complexities of the signal chains in the physical system, the inputs from that system, the outputs to that system, and the computation itself do not lend themselves to a clean set of differential equations. As such, best practices generally involve broad statements about speed and memory rather than a clean model for optimization. At the same time, Moore's Law almost guarantees that any technology-specific examples given in one year will be out of date two years later, which makes it more common to tie discussions of computer implementation to a specific application example. To that end, the main tutorial paper in this session emphasizes the underlying principles of using computers in feedback loops. The remaining papers illustrate these principles in a current context, with examples drawn from ongoing research efforts. While the specific technologies and speeds are bound to change, it is our hope that the principles stand the test of time.
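
One such underlying principle is that the control computation must finish within its sampling period, and overruns should be detected explicitly rather than silently absorbed. A minimal, hypothetical loop skeleton (placeholder I/O functions, illustrative rate) might look like:

```python
# Fixed-rate control-loop skeleton with deadline monitoring (illustrative).
import time

PERIOD_S = 0.01                          # 100 Hz control rate (hypothetical)

def read_sensor():   return 0.0          # placeholder for an ADC read
def compute_u(y):    return -0.5 * y     # placeholder control law
def write_actuator(u): pass              # placeholder for a DAC write

next_deadline = time.monotonic()
for _ in range(1000):
    next_deadline += PERIOD_S
    y = read_sensor()
    write_actuator(compute_u(y))
    slack = next_deadline - time.monotonic()
    if slack < 0:                        # deadline miss: computation too slow
        print(f"overrun by {-slack * 1e3:.2f} ms")
        next_deadline = time.monotonic() # resynchronize instead of spiraling
    else:
        time.sleep(slack)                # idle until the next sample instant
```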

The tutorial will present various state-of-the-art game-theoretic approaches, as well as a background on how learning and bounded rationality can enable smart autonomy in multi-agent and cyber-physical systems. Given the presence of modeling uncertainties, the potential unavailability of a model, the possibility of cooperative/non-cooperative goals, and the prospect of malicious attacks compromising the security of networked teams, there is a need for such approaches that respond to situations not programmed or anticipated by design.
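
To make the min-max theme concrete (a classic textbook example, not material from the talks), the sketch below compares simultaneous gradient descent-ascent with the extragradient method on the bilinear game min_x max_y xy: plain descent-ascent spirals away from the saddle point, while extragradient converges toward it:

```python
# Bilinear saddle-point game min_x max_y x*y: gradient descent-ascent (GDA)
# diverges, while the extragradient method converges to the saddle (0, 0).
import numpy as np

eta = 0.1
x, y = 1.0, 1.0            # GDA iterate
xe, ye = 1.0, 1.0          # extragradient iterate

for _ in range(200):
    # GDA: step using the gradient at the current point.
    x, y = x - eta * y, y + eta * x
    # Extragradient: probe a half-step, then step with the probed gradient.
    xh, yh = xe - eta * ye, ye + eta * xe
    xe, ye = xe - eta * yh, ye + eta * xh

print(f"GDA distance from saddle:           {np.hypot(x, y):.3f}")
print(f"extragradient distance from saddle: {np.hypot(xe, ye):.3f}")
```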

The tutorial will start with a 20-minute talk by J. P. Hespanha providing background on min-max optimization with convergence guarantees. Then J. Shamma will deliver a 20-minute talk introducing learning in finite games. L. Pavel, in her 20-minute talk, will cover two representative instances of reinforcement-learning-based game theory: payoff-based play, and Q-learning and its connections with passivity. K. G. Vamvoudakis will present a 15-minute talk on computationally and communicationally efficient approaches for decision-making in non-equilibrium stochastic games.

Finally, M. Liu, in her 15-minute talk, will present an application of potential games to autonomous driving.

The outcome of this tutorial will be a combined paper (an 18-page lead tutorial) co-authored by all the participants. Together, these contributions give a complete route from min-max optimization to learning in games that converges to Nash and bounded-rationality equilibria, in both discrete and continuous time, with discrete and continuous action spaces.

The next section gives the list of participants, their presentation titles, and abstracts.