Floquet Theory - Reducing ODEs to Constant-Coefficient ODEs?
The trick here, I believe, is that the coordinate transformation must itself be periodic. If this is allowed, then the statement is correct, viz:
Suppose
$\dot x = A(t)x \tag{1}$
is a time-dependent, linear system of ODEs with $A(t)$ a continuous periodic matrix of period $T$:
$A(t + T) = A(t) \; \text{for all} \; t \in \Bbb R, \tag{2}$
and $X(t)$ is a fundamental matrix solution of (1); that is, $X(t)$ is an $n \times n$ matrix of functions $x_{ij}(t)$ such that
$\dot X(t) = A(t)X(t) \tag{3}$
and
$\det(X(t)) \ne 0 \tag{4}$
for all $t \in \Bbb R$. It is well-known that such $X(t)$ exist; indeed, this assertion is a basic tenet of the theory of linear ODEs. Consider $X(t + T)$; we have
$\dot X(t + T) = A(t + T)X(t + T) = A(t)X(t + T) \tag{5}$
by the periodicity of $A(t)$; thus $X(t + T)$ is also a fundamental matrix solution. Now let $Y(t)$ and $Z(t)$ be any two matrix solutions of (3), with $\det (Y(t)) \ne 0$ for all $t \in \Bbb R$, i.e., $Y(t)$ is a fundamental matrix solution, though $Z(t)$ need not be so. Then
$d(Y^{-1}Z) /dt = \dot Y^{-1}Z + Y^{-1} \dot Z, \tag{6}$
and if in (6), the well-known identity
$d(Y^{-1}) / dt = -Y^{-1} \dot Y Y^{-1} \tag{7}$
(which readily follows by differentiating the equation $Y^{-1}Y = I$) is deployed, we obtain
$d(Y^{-1}Z) / dt = -Y^{-1} \dot Y Y^{-1} Z + Y^{-1} \dot Z, \tag{8}$
and via the assumption that $\dot Y = AY$ and $\dot Z = AZ$ (8) becomes
$d(Y^{-1}Z) / dt = -Y^{-1} A Y Y^{-1} Z + Y^{-1} A Z = -Y^{-1}AZ + Y^{-1}AZ = 0, \tag{9}$
which means there is a constant matrix $C$ such that
$Y^{-1}Z = C \tag{10}$
or
$Z = YC. \tag{11}$
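The constancy of $Y^{-1}Z$ is easy to check numerically. The following sketch (my own illustration, not part of the original argument; the sample matrix $A(t)$ is chosen arbitrarily) integrates two matrix solutions of $\dot X = A(t)X$ from different initial matrices and confirms that $Y^{-1}Z$ does not drift along the flow.

```python
# Numerical check that Y^{-1} Z is constant when Y, Z both solve X' = A(t) X.
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # a sample periodic coefficient matrix of period 2*pi (chosen arbitrarily)
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.3 * np.cos(t), -0.1 * np.sin(t)]])

def rhs(t, xflat):
    X = xflat.reshape(2, 2)
    return (A(t) @ X).ravel()

Y0 = np.eye(2)                            # fundamental solution (nonsingular start)
Z0 = np.array([[1.0, 2.0], [0.0, 1.0]])   # another matrix solution

def solve_at(X0, t):
    # integrate the matrix ODE from 0 to t, starting at X0
    sol = solve_ivp(rhs, (0.0, t), X0.ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

C0 = np.linalg.inv(Y0) @ Z0   # the constant C = Y^{-1} Z, read off at t = 0
for t in (1.0, 3.0, 6.0):
    Yt, Zt = solve_at(Y0, t), solve_at(Z0, t)
    assert np.allclose(np.linalg.inv(Yt) @ Zt, C0, atol=1e-6)
print("Y^{-1} Z is constant along the flow")
```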
Applying this result to $X(t), X(t + T)$ yields
$X(t + T) = X(t)C; \tag{12}$
here $C$ must be nonsingular since $X(t), X(t + T)$ are. The fact that $C$ is nonsingular implies it has a matrix logarithm $BT$ (which may be complex even when $C$ is real); that is, there exists an $n \times n$ matrix $B$ such that
$C = e^{BT}. \tag{13}$
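In code, (13) can be realized with `scipy.linalg.logm`, which returns a (possibly complex) logarithm of any nonsingular matrix. A small sketch, with a sample monodromy matrix $C$ and period $T$ of my own choosing:

```python
# Compute B = logm(C)/T and verify that expm(B*T) recovers C, as in (13).
import numpy as np
from scipy.linalg import logm, expm

T = 2 * np.pi                             # assumed period
C = np.array([[0.5, 1.0], [0.0, 2.0]])    # a sample nonsingular monodromy matrix
B = logm(C) / T                           # matrix logarithm, scaled by the period
assert np.allclose(expm(B * T), C)
print(B)
```

Note that when $C$ is real with an eigenvalue on the negative real axis, no real logarithm exists and $B$ will be genuinely complex; here the eigenvalues $0.5$ and $2$ are positive, so $B$ is real.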
Setting
$P(t) = X(t)e^{-Bt} \tag{14}$
we see that $P(t)$ is periodic of period $T$:
$P(t + T) = X(t + T)e^{-B(t + T)} = X(t)Ce^{-BT} e^{-Bt} = X(t)e^{-Bt} = P(t), \tag{15}$
where we have used (12) and (13) in (15). The matrix $P(t)$ will be used to transform the coordinates, and the matrix $B$ will hold the new, constant coefficients for the transformed system. Indeed, setting
$y = P^{-1}(t)x \tag{16}$
so that
$x = P(t)y, \tag{17}$
we see that
$A(t)P(t)y = A(t)x = \dot x = \dot P y + P \dot y, \tag{18}$
whence
$\dot y = P^{-1}(AP - \dot P)y, \tag{19}$
and from (14) it follows that
$\dot P = \dot X e^{-Bt} - Xe^{-Bt}B = AXe^{-Bt} - Xe^{-Bt}B = AP -PB; \tag{20}$
inserting (20) into (19) gives
$\dot y = P^{-1}(PB)y = By, \tag{21}$
the promised constant-coefficient equation. Thus it is seen that a periodic system may be converted to one with constant coefficients via a periodic (linear) change of variables.
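The whole reduction can be carried out numerically. The sketch below (my own illustration; the Mathieu-type matrix $A(t)$ is an example of my choosing, not the OP's system) builds the fundamental matrix $X(t)$ with $X(0) = I$ by integration, reads off $C = X(T)$ as in (12), takes $B = \log(C)/T$ as in (13), forms $P(t) = X(t)e^{-Bt}$ as in (14), and checks the periodicity (15).

```python
# Numerical Floquet reduction: compute C, B, and P(t), then verify P(t+T) = P(t).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import logm, expm

T = 2 * np.pi

def A(t):
    # a Mathieu-type periodic matrix of period 2*pi (my example)
    return np.array([[0.0, 1.0],
                     [-(2.0 + 0.2 * np.cos(t)), 0.0]])

def rhs(t, xflat):
    return (A(t) @ xflat.reshape(2, 2)).ravel()

def X(t):
    # fundamental matrix with X(0) = I, obtained by numerical integration
    if t == 0.0:
        return np.eye(2)
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

C = X(T)               # monodromy matrix: eq. (12) evaluated at t = 0
B = logm(C) / T        # eq. (13); may be complex in general
P = lambda t: X(t) @ expm(-B * t)   # eq. (14)

# verify that P has period T at a few sample points, as in (15)
for t in (0.3, 1.7):
    assert np.allclose(P(t + T), P(t), atol=1e-6)
print("P(t) is T-periodic; eigenvalues of C:", np.linalg.eigvals(C))
```

Since $\operatorname{tr} A(t) = 0$ here, Liouville's formula forces $\det C = 1$, which makes a handy sanity check on the integration.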
The problem with this transformation, from a practical point of view, is that we have to know $X(t)$ to find $P(t)$ and $B$. Thus, though capable of yielding insights of considerable interest, it is of limited utility for the purposes of actually finding solutions to (1), (3). The eigenvalues of $C = e^{BT}$ are, of course, the characteristic multipliers of (1); those of $B$ are its characteristic exponents.
I believe the above treatment, in its essence, is ultimately attributable to Floquet; however, the treatment I have presented here very closely follows that of J. K. Hale from his book, Ordinary Differential Equations, second edition (1980), section III.7. Further details may be found therein. I have both amplified and condensed Hale's treatment for the present purposes; indeed, what I have written here may in some ways be regarded as a paraphrase of Hale's work. The business about $Y$, $Z$, $Y^{-1}Z$ and their derivatives is, however, mine, though it may be of nearly infinitesimal consequence.
If one were to try to apply these ideas to the given system,
$y'' + (\sin x)y' + (\cos x)y = 0, \tag{22}$
the first step would probably be to set
$z = y', \tag{23}$
so that (22) could be written
$\begin{pmatrix} y \\ z \end{pmatrix}' = A(x) \begin{pmatrix} y \\ z \end{pmatrix}, \tag{24}$
with
$A(x) = \begin{bmatrix} 0 & 1 \\ -\cos x & -\sin x \end{bmatrix}; \tag{25}$
but there is no straightforward way that I can present here for attacking such a problem and finding $X(x)$, the fundamental solution, short of numerical integration. And there I must let the matter rest, 'til the Queen of Sciences, Mathematics, bids the like of Newton, Gauss, Riemann or Poincaré to join us mere mortals once again!
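To make the numerical-integration remark concrete, here is a sketch (my own, using scipy) that integrates (24)-(25) over one period $T = 2\pi$ with $X(0) = I$, so that $X(2\pi)$ is the monodromy matrix $C$ and its eigenvalues are the characteristic multipliers.

```python
# Monodromy matrix and characteristic multipliers for the system (24)-(25).
import numpy as np
from scipy.integrate import solve_ivp

def A(x):
    # the coefficient matrix (25) of the first-order system (24)
    return np.array([[0.0, 1.0],
                     [-np.cos(x), -np.sin(x)]])

def rhs(x, v):
    return (A(x) @ v.reshape(2, 2)).ravel()

T = 2 * np.pi
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
C = sol.y[:, -1].reshape(2, 2)           # monodromy matrix C = X(2*pi)
multipliers = np.linalg.eigvals(C)       # characteristic multipliers of (1)
print(multipliers)
```

Since $\int_0^{2\pi} \operatorname{tr} A(x)\, dx = -\int_0^{2\pi} \sin x \, dx = 0$, Liouville's formula gives $\det C = 1$, i.e. the two multipliers are reciprocals of one another.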
Now that would be a Christmas present!
And by the way, sawtooth functions are OK here!
Hope this helps. Holiday Blessings to One and All,
and as always, especially on these Solstice nights,
Fiat Lux!!!
This is just a comment.
I just stumbled over this question. The specific differential equation you mentioned,
\begin{equation} y''+\sin(x)y'+\cos(x)y=0,\hspace{2cm}(1) \end{equation}
can be solved exactly, without Floquet theory. Since $\frac{d}{dx}[\sin(x)y] = \sin(x)y' + \cos(x)y$, it can be rewritten as
$$ y''+\frac{d}{dx}[\sin(x)y]=0\implies y'(x)+\sin(x)y=c. $$
If you set the initial values at $x=0$, for example $y(0)=0$ and $y'(0)=1$, then
$$ y'(x)+\sin(x)y=y'(0)=1. $$
Now multiply this by an integrating factor $A(x)$:
$$ A(x)y'(x)+A(x)\sin(x)y=A(x). $$
Taking $A'(x)=\sin(x)A(x)$, we can integrate this equation and obtain
$$ A(x)=A_0e^{-\cos(x)+1}. $$
Hence, you find
$$ \frac{d}{dx}[A(x)y(x)]=A(x), $$
which can be solved by direct integration. You find
$$ y(x)=e^{\cos(x)}\int_0^x e^{-\cos(x')}\,dx'. $$
One can then expand this solution in a Fourier series (with some Bessel-function coefficients) and compare it with the Floquet-theory solution; from that comparison you can read off the Floquet multipliers. Further, this solution is the Green's function for equation (1), so you can also solve inhomogeneous equations of this form.
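The closed form is easy to cross-check numerically. The sketch below (my own verification, not part of the original answer) integrates (1) with $y(0)=0$, $y'(0)=1$ and compares against $y(x)=e^{\cos x}\int_0^x e^{-\cos x'}\,dx'$.

```python
# Verify the exact solution of y'' + sin(x) y' + cos(x) y = 0 against
# direct numerical integration with y(0) = 0, y'(0) = 1.
import numpy as np
from scipy.integrate import solve_ivp, quad

def rhs(x, u):
    y, yp = u
    return [yp, -np.sin(x) * yp - np.cos(x) * y]

def y_exact(x):
    # the closed form: e^{cos x} * integral_0^x e^{-cos x'} dx'
    integral, _ = quad(lambda s: np.exp(-np.cos(s)), 0.0, x)
    return np.exp(np.cos(x)) * integral

for x_end in (1.0, 3.0, 7.0):
    sol = solve_ivp(rhs, (0.0, x_end), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    assert np.isclose(sol.y[0, -1], y_exact(x_end), atol=1e-6)
print("closed-form solution matches direct integration")
```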