
Commit 7a66731 — Merge branch 'main' into at/class22

2 parents: a0660ad + cab9442

34 files changed: +65936 −37 lines

README.md

Lines changed: 22 additions & 6 deletions
```diff
@@ -11,6 +11,9 @@
 ## Overview
 This student-led course explores modern techniques for controlling — and learning to control — dynamical systems. Topics range from classical optimal control and numerical optimization to reinforcement learning, PDE-constrained optimization (finite-element methods, Neural DiffEq, PINNs, neural operators), and GPU-accelerated workflows.
 
+## Objective
+Create an online book at the end of the course using the materials from all lectures.
+
 ## Prerequisites
 * Solid linear-algebra background
 * Programming experience in Julia, Python, *or* MATLAB
@@ -19,9 +22,22 @@ This student-led course explores modern techniques for controlling — and learn
 ## Grading
 | Component | Weight |
 |-----------|--------|
-| Participation & paper critiques | **25 %** |
-| In-class presentations | **50 %** |
-| Projects | **25 %** |
+| Participation | **25 %** |
+| In-class Presentations and Chapter | **50 %** |
+| Projects (Liaison work & Scribe & Admin & ...) | **25 %** |
+
+**Class material is due one week before the lecture!** No exceptions apart from the first 2 lectures.
+
+**Issues outlining the references that will be used for lecture preparation are due at the end of the 3rd week (10/05/2025)!**
+20 minutes of research should give you an initial idea of what you need to read.
+
+🎯🚲 **Guessing Game**
+
+Here’s how the presentation grading works: we already know the lecture content we expect from you. Any deviations will be penalized **exponentially**. Your mission is twofold:
+1. **Check your understanding** — use [discussions](https://github.com/LearningToOptimize/LearningToControlClass/discussions) from previous lectures to ensure you’ve mastered earlier topics. We expect lectures to be closely linked to one another.
+2. **Test your hypotheses** — validate your lecture content by raising and resolving issues, focusing primarily on your *main task issue* (see this example from [class 03](https://github.com/LearningToOptimize/LearningToControlClass/issues/18)).
+
+All interactions will happen **only through GitHub** — no in-person hints will be given.
 
 ## Weekly Schedule (Fall 2025 – Fridays 2 p.m. ET)
 
@@ -30,13 +46,13 @@ This student-led course explores modern techniques for controlling — and learn
 | # | Date (MM/DD) | Format / Presenter | Topic & Learning Goals | Prep / Key Resources |
 |----|--------------|--------------------|------------------------|----------------------|
 | 1 | 08/22/2025 | Lecture — Andrew Rosemberg | Course map; why PDE-constrained **optimization**; tooling overview; stability & state-space dynamics; Lyapunov; discretization issues | [📚](https://learningtooptimize.github.io/LearningToControlClass/dev/class01/class01/) |
-| 2 | 08/29/2025 | Lecture — Arnaud Deza | Numerical **optimization** for control (grad/SQP/QP); ALM vs. interior-point vs. penalty methods | |
+| 2 | 08/29/2025 | Lecture — Arnaud Deza | Numerical **optimization** for control (grad/SQP/QP); ALM vs. interior-point vs. penalty methods | [📚](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/overview/) |
 | 3 | 09/05/2025 | Lecture — Zaowei Dai | Pontryagin’s Maximum Principle; shooting & multiple shooting; LQR, Riccati, QP viewpoint (finite / infinite horizon) | |
 | 4 | 09/12/2025 | **External seminar 1** — Joaquim Dias Garcia | Dynamic Programming & Model-Predictive Control | |
 | 5 | 09/19/2025 | Lecture — Guancheng "Ivan" Qiu | **Nonlinear** trajectory **optimization**; collocation; implicit integration | |
 | 6 | 09/26/2025 | **External seminar 2** — Henrique Ferrolho | Trajectory **optimization** on robots in Julia Robotics | |
 | 7 | 10/03/2025 | Lecture — Jouke van Westrenen | Stochastic optimal control; Linear Quadratic Gaussian (LQG); Kalman filtering; robust control under uncertainty; unscented optimal control | |
-| 8 | 10/10/2025 | **External seminar 3** — TBD (speaker to be confirmed) | Topology **optimization** | |
+| 8 | 10/10/2025 | Lecture — Kevin Wu | Distributed optimal control & multi-agent coordination; consensus, distributed MPC, and optimization over graphs (ADMM) | |
 | 9 | 10/17/2025 | **External seminar 4** — François Pacaud | GPU-accelerated optimal control | |
 |10 | 10/24/2025 | Lecture — Michael Klamkin | Physics-Informed Neural Networks (PINNs): formulation & pitfalls | |
 |11 | 10/31/2025 | **External seminar 5** — Chris Rackauckas | Neural Differential Equations: PINNs + classical solvers | |
@@ -57,7 +73,7 @@ Students must provide materials equivalent to those used in an in-person session
 | 18 | Lecture — Joe Ye | Robust control & min-max DDP (incl. PDE cases); chance constraints; data-driven control & model-based RL-in-the-loop | |
 | 19 | Lecture — TBD | Contact-explicit and contact-implicit trajectory optimization for hybrid and composed systems | |
 | 20 | Lecture — TBD | Probabilistic programming; Bayesian numerical methods; variational inference; probabilistic solvers for ODEs/PDEs; Bayesian optimization in control | |
-| 21 | Lecture — TBD | Distributed optimal control & multi-agent coordination; consensus, distributed MPC, and optimization over graphs (ADMM) | |
+| 21 | Lecture — Kevin Wu | Distributed optimal control & multi-agent coordination; consensus, distributed MPC, and optimization over graphs (ADMM) | |
 | 22 | Lecture — Shuaicheng (Allen) Tong | Dynamic optimal control of power systems: generator swing equations, transmission-line electromagnetic transients, dynamic load models, and inverters | |
 
 ## Reference Material
```
class01/background_materials/math_basics.html

Lines changed: 6 additions & 8 deletions
Large diffs are not rendered by default.

class01/background_materials/math_basics.jl

Lines changed: 2 additions & 0 deletions
```diff
@@ -39,6 +39,8 @@ md"
 | Lecturer | : | Rosemberg, Andrew |
 | Date | : | 28 of July, 2025 |
+
+Special thanks to **Guancheng Qiu** for helping fix some of the code!
 
 # Background Math (_Welcome to Pluto!_)
 
 This background material will use Pluto!
```

class01/background_materials/optimization_basics.html

Lines changed: 2 additions & 2 deletions
Large diffs are not rendered by default.

class01/background_materials/optimization_basics.jl

Lines changed: 31 additions & 18 deletions
```diff
@@ -37,6 +37,8 @@ md"""
 | Lecturer | : | Rosemberg, Andrew |
 | Date | : | 28 of July, 2025 |
+
+Special thanks to **Guancheng Qiu** for helping fix some of the code!
 
 """
 
 # ╔═╡ eeceb82e-abfb-4502-bcfb-6c9f76a0879d
@@ -436,21 +438,32 @@ begin
     [7 1 3 9 2 4 8 5 6];
     [9 6 1 5 3 7 2 8 4];
     [2 8 7 4 1 9 6 3 5];
-    [3 4 5 2 8 6 1 7 9]])
-
-    anss = missing
-    try
-        anss = (
-            x_ss = haskey(sudoku, :x_s) ? JuMP.value.(sudoku[:x_s]) : missing,
-        )
-    catch
-        anss = missing
+    [3 4 5 2 8 6 1 7 9]],)
+
+    anss = (;
+        x_ss = haskey(sudoku, :x_s) && JuMP.is_solved_and_feasible(sudoku) ? JuMP.value.(sudoku[:x_s]) : missing
+    )
+
+    # Convert 3D binary matrix to 2D solution matrix
+    function convert_3d_to_solution(x_3d)
+        if ismissing(x_3d)
+            return missing
+        end
+        solution = zeros(Int, 9, 9)
+        for i in 1:9, j in 1:9, k in 1:9
+            if x_3d[i, j, k] ≈ 1.0
+                solution[i, j] = k
+            end
+        end
+        return solution
     end
 
-    goods = !ismissing(anss) &&
-        all(isapprox.(anss.x_ss, ground_truth_s.x_ss; atol=1e-3))
+    solution_matrix = ismissing(anss) ? missing : convert_3d_to_solution(anss.x_ss)
+
+    goods = !ismissing(anss) && !ismissing(solution_matrix) &&
+        all(isapprox.(solution_matrix, ground_truth_s.x_ss; atol=1e-3))
 
-    if ismissing(anss)
+    if ismissing(anss.x_ss)
         still_missing()
     elseif goods
         correct()
@@ -578,8 +591,8 @@ begin
     model_nlp = Model(Ipopt.Optimizer)
 
     # Required named variables
-    @variable(model_nlp, x)
-    @variable(model_nlp, y)
+    @variable(model_nlp, x_nlp)
+    @variable(model_nlp, y_nlp)
 
     # --- YOUR CODE HERE ---
 
@@ -707,7 +720,7 @@ begin
    # Decide which badge to show
    if ismissing(ansd) # nothing yet
        still_missing()
-   elseif x == 25.0
+   elseif ansd == 25.0
        correct()
    else
        keep_working()
@@ -721,8 +734,8 @@ begin
    ans3 = missing
    try
        ans3 = (
-           x = safeval(model_nlp, :x),
-           y = safeval(model_nlp, :y),
+           x = safeval(model_nlp, :x_nlp),
+           y = safeval(model_nlp, :y_nlp),
            obj = objective_value(model_nlp),
        )
    catch
@@ -799,7 +812,7 @@ end
 # ╟─bca712e4-3f1c-467e-9209-e535aed5ab0a
 # ╟─3997d993-0a31-435e-86cd-50242746c305
 # ╠═3f56ec63-1fa6-403c-8d2a-1990382b97ae
-# ╟─0e8ed625-df85-4bd2-8b16-b475a72df566
+# ╠═0e8ed625-df85-4bd2-8b16-b475a72df566
 # ╟─fa5785a1-7274-4524-9e54-895d46e83861
 # ╟─5e3444d0-8333-4f51-9146-d3d9625fe2e9
 # ╠═0e190de3-da60-41e9-9da5-5a0c7fefd1d7
```
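The `convert_3d_to_solution` helper added in this diff collapses the 9×9×9 one-hot assignment variables into a 9×9 digit grid. For readers outside Julia, the same operation can be sketched in Python with NumPy (a hypothetical illustration, not part of the notebook), where an `argmax` over the digit axis replaces the triple loop:

```python
import numpy as np

def convert_3d_to_solution(x_3d):
    """Collapse one-hot assignment variables x[i, j, k] (cell (i, j) holds
    digit k+1) into a 2D integer grid, mirroring the Julia helper above."""
    # Round first so near-binary solver output (e.g. 0.9999) snaps to {0, 1};
    # argmax over the digit axis then picks the chosen digit index (0-based).
    return np.argmax(np.round(x_3d), axis=2) + 1

# Demo on a synthetic grid (any digits work; Sudoku validity is irrelevant here).
grid = (np.arange(81).reshape(9, 9) % 9) + 1
x_one_hot = np.eye(9)[grid - 1]  # shape (9, 9, 9), one-hot on the last axis
assert (convert_3d_to_solution(x_one_hot) == grid).all()
```

The `argmax` form also degrades gracefully when the solver returns slightly fractional values, which is why the Julia version compares with `≈` rather than `==`.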
505 KB
Binary file not shown.

class02/Manifest.toml

Lines changed: 7 additions & 0 deletions
```diff
@@ -0,0 +1,7 @@
+# This file is machine-generated - editing it directly is not advised
+
+julia_version = "1.11.6"
+manifest_format = "2.0"
+project_hash = "da39a3ee5e6b4b0d3255bfef95601890afd80709"
+
+[deps]
```

class02/Project.toml

Lines changed: 1 addition & 0 deletions
```diff
@@ -0,0 +1 @@
+[deps]
```

class02/SQP.tex

Lines changed: 123 additions & 0 deletions
The entire file is new (123 lines added):

```latex
\section{Sequential Quadratic Programming (SQP)}

% ------------------------------------------------
\begin{frame}{What is SQP?}
\textbf{Idea:} Solve a nonlinear, constrained problem by repeatedly solving a \emph{quadratic program (QP)} built from local models.\\[4pt]
\begin{itemize}
  \item Linearize the constraints; build a quadratic model of the Lagrangian/objective.
  \item Each iteration: solve a QP to get a step \(d\), update \(x \leftarrow x + \alpha d\).
  \item Strength: strong local convergence (often superlinear) with good Hessian info.
\end{itemize}
\end{frame}

% ------------------------------------------------
\begin{frame}{Target Problem (NLP)}
\[
\min_{x \in \R^n} \ f(x)
\quad
\text{s.t.}\quad
g(x)=0,\quad h(x)\le 0
\]
\begin{itemize}
  \item \(f:\R^n\!\to\!\R\), \(g:\R^n\!\to\!\R^{m}\) (equalities), \(h:\R^n\!\to\!\R^{p}\) (inequalities).
  \item KKT recap (at a candidate optimum \(x^\star\)):
  \[
  \exists \ \lambda \in \R^{m},\ \mu \in \R^{p}_{\ge 0}:
  \ \grad f(x^\star) + \nabla g(x^\star)^T\lambda + \nabla h(x^\star)^T \mu = 0,
  \]
  \[
  g(x^\star)=0,\quad h(x^\star)\le 0,\quad \mu \ge 0,\quad \mu \odot h(x^\star) = 0.
  \]
\end{itemize}
\end{frame}

% ------------------------------------------------
\begin{frame}{From NLP to a QP (Local Model)}
At iterate \(x_k\) with multipliers \((\lambda_k,\mu_k)\):\\[4pt]
\textbf{Quadratic model of the Lagrangian}
\[
m_k(d) = \ip{\grad f(x_k)}{d} + \tfrac{1}{2} d^T B_k d
\]
with \(B_k \approx \nabla^2_{xx}\Lag(x_k,\lambda_k,\mu_k)\).\\[6pt]
\textbf{Linearized constraints}
\[
g(x_k) + \nabla g(x_k)\, d = 0,\qquad
h(x_k) + \nabla h(x_k)\, d \le 0.
\]
\end{frame}

% ------------------------------------------------
\begin{frame}{The SQP Subproblem (QP)}
\[
\begin{aligned}
\min_{d \in \R^n}\quad & \grad f(x_k)^T d + \tfrac{1}{2} d^T B_k d \\
\text{s.t.}\quad & \nabla g(x_k)\, d + g(x_k) = 0, \\
& \nabla h(x_k)\, d + h(x_k) \le 0.
\end{aligned}
\]
\begin{itemize}
  \item Solving the QP yields the step \(d_k\) and updated multipliers \((\lambda_{k+1},\mu_{k+1})\).
  \item Update \(x_{k+1} = x_k + \alpha_k d_k\) (line search or trust region).
\end{itemize}
\end{frame}

% ------------------------------------------------
\begin{frame}{Algorithm Sketch (SQP)}
\begin{enumerate}
  \item Start with \(x_0\), multipliers \((\lambda_0,\mu_0)\), and \(B_0 \succ 0\).
  \item Build the QP at \(x_k\) with \(B_k\) and linearized constraints.
  \item Solve the QP to get \(d_k\) and \((\lambda_{k+1},\mu_{k+1})\).
  \item Globalize: line search on a merit function, or a filter/trust region, to choose \(\alpha_k\).
  \item Update \(x_{k+1} = x_k + \alpha_k d_k\); update \(B_{k+1}\) (e.g., BFGS).
\end{enumerate}
\end{frame}

% ------------------------------------------------
\begin{frame}{Toy Example (Local Models)}
\textbf{Problem:}
\[
\min_{x\in\R^2} \ \tfrac{1}{2}\norm{x}^2
\quad \text{s.t.} \quad g(x)=x_1^2 + x_2 - 1 = 0,\ \ h(x)=x_2 - 0.2 \le 0.
\]
At \(x_k\), build the QP with
\[
\grad f(x_k)=x_k,\quad B_k=I,\quad
\nabla g(x_k) = \begin{bmatrix} 2x_{k,1} & 1 \end{bmatrix},\quad
\nabla h(x_k) = \begin{bmatrix} 0 & 1 \end{bmatrix}.
\]
Solve for \(d_k\), then set \(x_{k+1}=x_k+\alpha_k d_k\).
\end{frame}

% ------------------------------------------------
\begin{frame}{Globalization: Making SQP Robust}
SQP is an important method, and several issues must be addressed to obtain an \textbf{efficient} and \textbf{reliable} implementation:
\begin{itemize}
  \item Efficient solution of the linear systems at each Newton iteration (the block structure of the matrices can be exploited).
  \item Quasi-Newton approximations to the Hessian.
  \item Trust regions, line searches, etc., to improve robustness (e.g., a trust region restricts \(\norm{d}\) to maintain model validity).
  \item Treatment of equality and inequality constraints during the iterative process.
  \item Selection of a good starting guess for \(\lambda\).
\end{itemize}
\end{frame}

% ------------------------------------------------
\begin{frame}{Final Takeaways on SQP}
\textbf{When SQP vs.\ interior-point?}
\begin{itemize}
  \item \textbf{SQP}: strong local convergence; warm-start friendly; natural for NMPC.
  \item \textbf{IPM}: very robust for large, strictly feasible problems; good for dense inequality sets.
  \item In practice both are valuable—choose to match problem structure and runtime needs.
\end{itemize}
\textbf{Takeaways}
\begin{itemize}
  \item SQP is a Newton-like method built on a sequence of structured QPs.
  \item Globalization (merit function, filter, or trust region) makes it reliable from poor starts.
  \item Excellent fit for control (NMPC, trajectory optimization) thanks to sparsity and warm starts.
\end{itemize}
\end{frame}
```
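To make the slides' QP-subproblem iteration concrete, here is a minimal Python sketch (not part of the class materials) of SQP applied to the toy problem above. It assumes `B_k = I` and full steps (`alpha = 1`), and, since the bound `x2 <= 0.2` is active at the optimum, it treats both constraints as linearized equalities in each subproblem; the name `sqp_toy` is ours:

```python
import numpy as np

def sqp_toy(x0, tol=1e-10, max_iter=50):
    """SQP for: min 0.5*||x||^2  s.t.  x1^2 + x2 - 1 = 0  and  x2 - 0.2 <= 0.
    The bound is active at the optimum, so this sketch holds it as a second
    equality; this is a simplification, not a general inequality-handling SQP."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        grad_f = x.copy()                      # gradient of 0.5*||x||^2
        c = np.array([x[0]**2 + x[1] - 1.0,    # g(x)
                      x[1] - 0.2])             # h(x), held active
        A = np.array([[2.0 * x[0], 1.0],       # constraint Jacobian
                      [0.0,        1.0]])
        n, m = 2, 2
        # KKT system of the QP subproblem (B_k = I):
        #   [B  A^T] [d  ]   [-grad_f]
        #   [A   0 ] [lam] = [-c     ]
        K = np.block([[np.eye(n), A.T],
                      [A, np.zeros((m, m))]])
        rhs = np.concatenate([-grad_f, -c])
        d = np.linalg.solve(K, rhs)[:n]
        x += d                                 # alpha = 1, no line search
        if np.linalg.norm(d) < tol:
            break
    return x

x_star = sqp_toy([1.0, 0.0])
# Converges to the stationary point x1 = sqrt(0.8), x2 = 0.2.
```

A production SQP would instead detect the active set (or solve an inequality-constrained QP) and globalize with a merit-function line search or trust region, as the slides describe.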
class02/class02.md

Lines changed: 46 additions & 1 deletion
```diff
@@ -6,5 +6,50 @@
 
 ---
 
-Add notes, links, and resources below.
+## Overview
+
+This class covers the fundamental numerical optimization techniques essential for optimal control problems. We explore gradient-based methods, Sequential Quadratic Programming (SQP), and several approaches to handling constraints, including Augmented Lagrangian Methods (ALM), interior-point methods, and penalty methods.
+
+## Interactive Materials
+
+The class is structured around one slide deck and four interactive notebooks:
+
+1. **[Part 1a: Root Finding & Backward Euler](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/part1_root_finding.html)**
+   - Root-finding algorithms for implicit integration
+   - Fixed-point iteration vs. Newton's method
+   - Application to pendulum dynamics
+
+2. **[Part 1b: Minimization via Newton's Method](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/part1_minimization.html)**
+   - Unconstrained optimization fundamentals
+   - Newton's method implementation
+   - Globalization strategies: Hessian regularization
+
+3. **[Part 2: Equality Constraints](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/part2_eq_constraints.html)**
+   - Lagrange multiplier theory
+   - KKT conditions for equality constraints
+   - Quadratic programming implementation
+
+4. **[Part 3: Interior-Point Methods](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/part3_ipm.html)**
+   - Inequality constraint handling
+   - Barrier methods and log-barrier functions
+   - Comparison with penalty methods
+
+## Additional Resources
+
+- **[Lecture Slides (PDF)](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/ISYE_8803___Lecture_2___Slides.pdf)** - Complete slide deck
+- **[LaTeX Source](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/main.tex)** - Source code for the lecture slides
+
+## Key Learning Outcomes
+
+- Understand gradient-based optimization methods
+- Implement Newton's method for minimization
+- Apply root-finding techniques for implicit integration
+- Solve equality-constrained optimization problems
+- Compare different constraint-handling methods
+- Implement Sequential Quadratic Programming (SQP)
+
+## Next Steps
+
+This class provides the foundation for advanced topics in subsequent classes, including Pontryagin's Maximum Principle, nonlinear trajectory optimization, and stochastic optimal control.
```

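Part 1a of the class materials pairs root finding with implicit integration: each backward Euler step requires solving the nonlinear residual r(z) = z - z_n - h*f(z) = 0, which Newton's method handles well. A minimal Python sketch of that combination, under assumed pendulum parameters (g/l = 9.81, i.e. a 1 m pendulum; all helper names here are ours, not the notebook's):

```python
import numpy as np

G_OVER_L = 9.81  # assumed: g = 9.81 m/s^2, pendulum length l = 1 m

def f(z):
    """Pendulum dynamics for state z = (theta, omega)."""
    theta, omega = z
    return np.array([omega, -G_OVER_L * np.sin(theta)])

def jac_f(z):
    """Jacobian of f, needed for Newton's method."""
    theta, _ = z
    return np.array([[0.0, 1.0],
                     [-G_OVER_L * np.cos(theta), 0.0]])

def backward_euler_step(z_n, h, tol=1e-12, max_iter=20):
    """One implicit step: solve r(z) = z - z_n - h*f(z) = 0 with Newton."""
    z = z_n + h * f(z_n)  # explicit Euler predictor as the initial guess
    for _ in range(max_iter):
        r = z - z_n - h * f(z)
        if np.linalg.norm(r) < tol:
            break
        J = np.eye(2) - h * jac_f(z)  # Jacobian of the residual
        z = z - np.linalg.solve(J, r)
    return z

# Simulate a swing from rest at theta = 0.3 rad for 2 s.
z = np.array([0.3, 0.0])
for _ in range(200):
    z = backward_euler_step(z, 0.01)
```

Backward Euler is dissipative for this oscillator, so the simulated amplitude decays rather than growing, which is part of why implicit steps are attractive for stiff dynamics.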