What is the difference between optimal control and robust control?

I know that Optimal Control has the following controllers:

  • LQR - State feedback controller
  • LQG - State feedback observer controller
  • LQGI - State feedback observer integrator controller
  • LQGI/LTR - State feedback observer integrator loop transfer recovery controller (for increased robustness)

And Robust Control has:

  • $H_{2}$ controller
  • $H_{\infty}$ controller

But what are they? When are they better than LQ controllers? Do the H-controllers have a Kalman filter? Are the H-controllers multivariable? Are they faster than LQ controllers?


Solution 1:

There's a huge difference. Optimal control seeks to optimize a performance index over a span of time, while robust control seeks to optimize the stability and quality of the controller (its "robustness") given uncertainty in the plant model, feedback sensors, and actuators.

Optimal control assumes your model is perfect and optimizes a functional you provide. If your model is imperfect, your optimal controller is not necessarily optimal! It is also only optimal for the specific cost functional you provide! LQ optimal control is ONLY truly optimal for a completely linear plant (unlikely) and a quadratic cost functional. Anything else, and there is no rigorous claim to optimality.
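To make that concrete, here is a minimal LQR sketch; the double-integrator plant and the weights below are illustrative assumptions, not anything from the question. The resulting gain is the optimum for this exact model and cost, and for nothing else:

```python
# A minimal LQR sketch, assuming a double-integrator plant (illustrative).
# The gain K is optimal ONLY for this exact (A, B) and this exact (Q, R);
# if the true plant differs from A and B, no optimality claim survives.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])     # assumed plant model: double integrator
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                  # quadratic penalty on the state
R = np.array([[1.0]])          # quadratic penalty on the input

P = solve_continuous_are(A, B, Q, R)  # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)       # optimal state feedback: u = -K x
print("LQR gain:", K)
```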

Robust control assumes your model is imperfect. Suppose, for instance, some parameters in your model are believed to lie in a certain range but are not known for sure. An $H_2$ or $H_{\infty}$ controller will decide which control signals are admissible based on the level of uncertainty in the core parameters. For example, if you have the plant $$ P(s) = \frac{1}{s+a} $$ but only know $a \in [b,c]$ for some given $b$ and $c$, a robust controller will clamp overly aggressive control signals that would risk pushing the closed-loop pole into the right half-plane for some admissible value of $a$.
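A toy numeric check of that example (the interval endpoints and the candidate gain below are made-up values, not a synthesis procedure): with proportional feedback $u = -ky$, the closed-loop pole of $P(s) = 1/(s+a)$ sits at $-(a+k)$, so a robust design has to verify stability at the worst-case endpoint of $[b, c]$:

```python
# Worst-case stability check for P(s) = 1/(s+a) with a in [b, c].
# b, c and k are assumed values for illustration only.
b, c = -0.5, 2.0           # assumed uncertainty interval for a
k = 1.0                    # candidate proportional gain

worst_pole = -(b + k)      # closed-loop pole at the worst case a = b
print("worst-case closed-loop pole:", worst_pole)
print("robustly stable for all a in [b, c]:", worst_pole < 0)  # needs k > -b
```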

Solution 2:

Optimal control requires that your dynamic model of the system be perfect, whereas in reality your model is never an exact match for the actual system. An incorrect model can even result in unstable closed-loop behavior.

Robust control, on the other hand, renders your control law insensitive to modelling uncertainties, thereby allowing for a realistic margin of error. The simplest and most effective robust control strategy is sliding mode control (SMC). In SMC, denote by $s \in \mathbb{R}$ a linear combination of the state tracking error and its time derivatives, i.e.: $$s = \sum_{r = 0}^{n-1}\sigma_re^{(r)} \tag{1}$$

where:

  • $e \in \mathbb{R}$, $e = x - x_d$, is the error in the system state
  • $x_d$ is the desired state
  • $e^{(r)}$ denotes the $r^{\text{th}}$ derivative of $e$ with respect to time
  • the $\sigma_r$ are chosen such that the monic polynomial formed using them as coefficients, $\lambda^{n-1} + \sigma_{n-2}\lambda^{n-2} + \cdots + \sigma_0$ (so $\sigma_{n-1} = 1$), is Hurwitz
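For concreteness, consider the second-order case $n = 2$ as a running example (an illustration added here, not part of the original argument). Then (1) reduces to $$s = \dot{e} + \sigma_0 e, \qquad \sigma_0 > 0,$$ and $\lambda + \sigma_0$ is Hurwitz exactly when $\sigma_0 > 0$: once $s = 0$, the error obeys $\dot{e} = -\sigma_0 e$ and decays exponentially.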

The insensitivity of SMC to model uncertainties becomes clear if we consider the following problem:

Suppose the system under consideration has the following dynamics: $$x^{(n)} = f(\textbf{x}) + g(\textbf{x})u\tag{2}$$

where:

  • $\textbf{x} \in \mathbb{R}^{n}$, $\textbf{x} = \begin{bmatrix}x & \dot{x} & \cdots & x^{(n-1)}\end{bmatrix}^{T}$, is the system state
  • $f$ and $g$ are scalar-valued functions of the state, assumed smooth
  • $u$ is the control input

For conciseness, assume that $g(\textbf{x}) = 1$ (the derivation extends trivially to the general case), and that the following bound is known: $$|f - \hat{f}| \leq F \tag{3}$$

where $F$ is a known constant and $\hat{f}$ is our estimate of the dynamics $f$.
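As a concrete (made-up) instance of such a bound: if the true dynamics are $f(\textbf{x}) = -a\sin x$ with an unknown parameter $a \in [0, 2]$, and we use the estimate $\hat{f}(\textbf{x}) = -\sin x$, then $$|f - \hat{f}| = |a - 1|\,|\sin x| \leq 1,$$ so $F = 1$ satisfies (3).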

Now, taking the derivative of (1): $$\begin{align}&\dot{s} = x^{(n)} - x_d^{(n)} + \sigma_{n-2}e^{(n-1)} + \cdots \\ \implies\; & \dot{s} = f + u - v\tag{from (2)}\end{align}$$

where $v = x_d^{(n)} - \sigma_{n-2}e^{(n-1)} - \cdots$
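In the $n = 2$ running example, this reads $$\dot{s} = \ddot{x} - \ddot{x}_d + \sigma_0\dot{e} = f + u - v, \qquad v = \ddot{x}_d - \sigma_0\dot{e}.$$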

Let $u = v - \hat{f} - K\,\mathrm{sgn}(s)$, where $K$ is a constant whose value will be made clear shortly. Now consider the Lyapunov function $$\begin{align} & V = \frac{1}{2}s^2 \\ \implies\; & \dot{V} = s\dot{s} \\ \implies\; & \dot{V} = s(f + u - v) \\ \implies\; & \dot{V} = s(f - \hat{f} - K\,\mathrm{sgn}(s)) = s(f - \hat{f}) - K|s|\end{align}$$

If we choose $K = F + \eta$, where $\eta > 0$, then by (3), $$\dot{V} \leq F|s| - (F + \eta)|s| = -\eta|s|$$

Thus, by Lyapunov theory, we have finite-time convergence of $s$ to $0$. How does this help? When $s = 0$, from (1), $$e^{(n-1)} + \sum_{r = 0}^{n-2}\sigma_re^{(r)} = 0,$$ which is a stable error dynamics that asymptotically dies down. Thus, we have formulated a control law that does its best to cancel out the dynamics (by using our estimate $\hat{f}$) and then also takes care of the remaining uncertainty.
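To spell out the finite-time claim (a standard comparison argument, added here for completeness): since $V = \frac{1}{2}s^2$, the inequality $\dot{V} \leq -\eta|s|$ is equivalent to $\frac{d}{dt}|s| \leq -\eta$ whenever $s \neq 0$, so $s$ reaches zero no later than $$t_{\text{reach}} \leq \frac{|s(0)|}{\eta}.$$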

There is plenty of literature on SMC, and it is definitely worth a read. The rate of convergence is decided by your choice of the parameters, and you cannot really compare it to optimal controllers, as it's a whole different ballgame.
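To close the loop on the derivation, here is a minimal simulation sketch of the SMC law above for the $n = 2$ running example. The plant $f(x) = -a\sin x$, the estimate $\hat{f} = -\sin x$, the bound $F = 1$, and all gains are illustrative assumptions:

```python
# SMC simulation sketch for n = 2 (all numbers are assumed for illustration).
import numpy as np

sigma0 = 2.0              # s = edot + sigma0*e; lambda + sigma0 is Hurwitz
F, eta = 1.0, 0.5         # uncertainty bound (3) and reaching-rate margin
K = F + eta               # switching gain from the derivation above
a_true = 1.8              # unknown to the controller; |a_true - 1| <= F

dt, T = 1e-3, 10.0
x, xdot = 1.0, 0.0        # initial state, away from the target
for i in range(int(T / dt)):
    t = i * dt
    xd, xd_dot, xd_ddot = np.sin(t), np.cos(t), -np.sin(t)  # desired x_d(t)
    e, edot = x - xd, xdot - xd_dot
    s = edot + sigma0 * e               # sliding variable (1) for n = 2
    v = xd_ddot - sigma0 * edot         # from the derivative of s above
    f_hat = -np.sin(x)                  # our estimate of the dynamics
    u = v - f_hat - K * np.sign(s)      # the SMC control law
    xddot = -a_true * np.sin(x) + u     # true plant (2) with g = 1
    xdot += xddot * dt                  # explicit Euler integration
    x += xdot * dt

print("final tracking error:", x - np.sin(T))
```

Despite the controller only knowing $\hat{f}$ and the bound $F$, the tracking error converges; the discontinuous $\mathrm{sgn}(s)$ term is what buys that insensitivity, at the cost of chattering in the control signal.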