Sliding Motion and Filippov systems

I have trouble understanding how a Filippov system works (see pages 2 and 3). To make things easier, let us consider the example $$\dot x=-\text{sgn}(x),\tag{E}$$ where $\text{sgn}(x)$ is the sign function (i.e. it is $1$ if $x>0$ and $-1$ if $x<0$). So indeed the vector field $f(x)=-\text{sgn}(x)$ is not continuous. What seems to be commonly done is to consider $$F(x)=\begin{cases}-1&x>0\\ 1&x<0\\ co\{-1,1\}=[-1,1]&x=0,\end{cases}$$ where $co\{f_1,f_2\}=\{\alpha f_1+(1-\alpha )f_2\mid \alpha \in [0,1]\}$ is the convex hull of $\{f_1,f_2\}$, and then, instead of $(E)$, to consider the differential inclusion $$\dot x(t)\in F(x(t)).\tag{E'}$$ If someone knows a bit about this theory, could you explain the motivation behind it? I'm not sure I really understand.


In the example equation, considered as a conventional ODE, the domain is the largest open set on which the right-hand side is continuous, namely the real line without zero. A conventional solution is $x(t)=x_0-t$ if $x_0>0$, but it only exists for $t<x_0$, after which it leaves the domain of the ODE. Nor is there a suitable solution on the other side $\{x<0\}$ that one could glue to this one to obtain at least a continuous function.
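To see this breakdown numerically (this sketch is not from the cited text; the step size and horizon are arbitrary choices), one can apply forward Euler directly to $\dot x=-\text{sgn}(x)$. The trajectory decays linearly until it reaches zero and then chatters with amplitude on the order of the step size, because the classical equation has no solution past $t=x_0$:

```python
def sgn(x):
    """Sign function: +1 for x > 0, -1 for x < 0, 0 at x = 0."""
    return (x > 0) - (x < 0)

def euler_sgn(x0, dt, t_end):
    """Forward Euler for xdot = -sgn(x); returns the list of iterates."""
    xs = [x0]
    for _ in range(int(t_end / dt)):
        xs.append(xs[-1] - dt * sgn(xs[-1]))
    return xs

# Starting at x0 = 1 with step 0.1: linear decay x0 - t until x hits 0,
# then chattering within a band of width ~dt around zero.
traj = euler_sgn(1.0, 0.1, 2.0)
print(traj[:6])   # decay phase
print(traj[-4:])  # chattering phase near zero
```

Shrinking `dt` shrinks the chattering band but never removes it; the limit of these discrete trajectories is exactly the generalized solution discussed below.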

To provide a way out, Filippov's approach is essentially to consider all continuous approximations of the right-hand side in the distributional sense (or here in some stronger functional norm, like the $L^1$ norm on bounded intervals). If the solutions of all the approximating equations converge to the same function, one can call that function a generalized solution. Any infinitesimally close approximation fills the gap at the discontinuity with the convex hull of the limit values, without large variations outside this convex set, and any generalized solution takes one of these values as its derivative at the jump.

In the example, one could approximate the sign function by $$ {\rm sign}(x)\approx h_{a,b}(x)=\begin{cases} \dfrac{2x-a-b}{b-a}&a\le x\le b\\ +1&x>b\\-1&x<a \end{cases} $$ with some very small $a<0<b$. For $x_0>b$, the corresponding equation $\dot x=-h_{a,b}(x)$ has the solution $x(t)=x_0-t$ for $t<t_b=x_0-b$, and after that $x(t)=\frac{a+b}2+\frac{b-a}2\exp(-2\frac{t-t_b}{b-a})$, which converges to the small number $\frac{a+b}2$. In the limit $a,b\to 0$ one obtains $x(t)=\max(0,x_0-t)$.

Now this approach is quite impractical because of the sheer number of approximating functions one would have to consider. The generalized equation you cite results from investigating the "nice" cases where this convergence is automatic and the solution exists as a piecewise smooth function that is overall Lipschitz continuous. One can then demand that the generalized derivative of a solution $x(t)$ fall within the convex hull of the possible values of the right-hand side at $x(t)$, that is, of the limits of the function values in a small neighborhood.
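For the example, this inclusion condition can be verified directly. The sketch below (my own illustration, with the set-valued map represented as an interval) checks that the candidate $x(t)=\max(0,x_0-t)$ satisfies $\dot x(t)\in F(x(t))$ both before and after the jump time $t=x_0$:

```python
def F(x):
    """Filippov right-hand side of xdot = -sgn(x), as an interval (lo, hi)."""
    if x > 0:
        return (-1.0, -1.0)
    if x < 0:
        return (1.0, 1.0)
    return (-1.0, 1.0)               # convex hull co{-1, 1} at the jump

def candidate(t, x0):
    """Candidate generalized solution x(t) = max(0, x0 - t), for x0 > 0."""
    return max(0.0, x0 - t)

def derivative(t, x0, dt=1e-8):
    """One-sided difference quotient of the candidate solution."""
    return (candidate(t + dt, x0) - candidate(t, x0)) / dt

# Check xdot(t) in F(x(t)) on both sides of the jump time t = x0 = 1.
x0 = 1.0
for t in (0.5, 2.0):
    lo, hi = F(candidate(t, x0))
    d = derivative(t, x0)
    print(t, d, lo - 1e-6 <= d <= hi + 1e-6)
```

Before the jump the derivative is $-1$, the only value of $F$; on the switching surface $x=0$ the derivative $0$ is admissible precisely because the convex hull $[-1,1]$ contains it, which is what makes the sliding motion a solution of $(E')$.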

Note that the graphs of the continuous approximations converge to the curve in which the jump is filled by the vertical segment, that is, by the convex hull of the limit values.