Laurent Series of an Operator-Valued Function

For finite-dimensional spaces, you can work in terms of matrices. This provides a smooth transition because a matrix-valued function of $\lambda$ is really just an array of scalar functions of $\lambda$, and you are already used to working with those. When you integrate a matrix $R(\lambda)=[r_{j,k}(\lambda)]_{j,k}$, where $1 \le j,k \le N$, you can just integrate each entry separately. So the integral of a matrix function is the matrix of integrals of its entries:
$$ \int R(\lambda)\,d\lambda = \left[\int r_{j,k}(\lambda)\,d\lambda\right]_{j,k}. $$

Matrices are nice because you know things about inverting them. For example, the inverse of an $N\times N$ matrix $A=[a_{j,k}]$ comes out of the adjugate matrix $A_{\mathrm{adj}}=[(-1)^{j+k}A_{k,j}]$, where the minor $A_{j,k}$ is the determinant of the $(N-1)\times(N-1)$ matrix obtained by deleting the $j$-th row and $k$-th column of $A$ (note that the swap of $j,k$ in the adjugate of $A$ is deliberate.) This is because
$$ AA_{\mathrm{adj}}=A_{\mathrm{adj}}A = \det(A)I, $$
where $I$ is the $N\times N$ identity matrix. Therefore $A$ is invertible iff $\det(A)\ne 0$ and, in that case, the inverse is
$$ A^{-1}=\frac{1}{\det(A)}A_{\mathrm{adj}}. $$

That gives you a way to study the resolvent operator through the characteristic polynomial $p(\lambda)=\det(\lambda I- A)=(-1)^{N}\det(A-\lambda I)$. The resolvent operator for $A$ is
$$ (A-\lambda I)^{-1} = \frac{(-1)^{N}}{p(\lambda)}(A-\lambda I)_{\mathrm{adj}}. $$
All of the entries of the adjugate $(A-\lambda I)_{\mathrm{adj}}$ are determinants formed from entries of $A-\lambda I$, which means it is a matrix whose entries are polynomials $q_{j,k}(\lambda)$ of degree at most $N-1$:
$$ (A-\lambda I)_{\mathrm{adj}}=[q_{j,k}(\lambda)]. $$
By expanding all of the entries and gathering like powers of $\lambda$ into separate matrices, you get
$$ (A-\lambda I)_{\mathrm{adj}}=A_{0}+\lambda A_{1}+\lambda^{2}A_{2}+\cdots+\lambda^{N-1}A_{N-1}, $$
where the $A_{j}$, $0 \le j \le N-1$, are constant matrices. So the resolvent always has the form
$$ (A-\lambda I)^{-1} = (-1)^{N}\left[\frac{1}{p}A_{0}+\frac{\lambda}{p}A_{1}+\cdots+\frac{\lambda^{N-1}}{p}A_{N-1}\right]. $$
Now when you integrate, you can treat the matrices $A_{j}$ as constants and only worry about scalar integration of the coefficient functions $\lambda^{n}/p(\lambda)$. Everything reduces to scalar integrals with big fat constants, which happen to be matrices, along for the ride.

You can show that
$$ q(A) = \frac{1}{2\pi i}\oint_{C} q(\lambda)(\lambda I-A)^{-1}\,d\lambda = -\frac{1}{2\pi i}\oint_{C} q(\lambda)(A-\lambda I)^{-1}\,d\lambda $$
for any scalar polynomial $q$ and any simple closed positively oriented contour $C$ enclosing all of the roots of the characteristic polynomial $p$ in its interior. In particular, because $p(\lambda)(A-\lambda I)^{-1}=(-1)^{N}(A-\lambda I)_{\mathrm{adj}}$ has only removable singularities, the integral with $q=p$ vanishes and you get the Cayley-Hamilton theorem right away:
$$ p(A) = 0. $$
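
Each of the steps above can be checked numerically. First, the entrywise-integration rule: here is a minimal sketch in Python, where the matrix $R(\lambda)$ is a made-up example and `scipy.integrate.quad` just integrates each scalar entry over $[0,1]$.

```python
import numpy as np
from scipy.integrate import quad

# R(lam) = [[1, lam], [lam**2, lam**3]]: each entry is just a scalar function.
entries = [[lambda lam: 1.0,    lambda lam: lam],
           [lambda lam: lam**2, lambda lam: lam**3]]

# Integrate entry by entry over [0, 1]; the result is the matrix of the
# individual scalar integrals, here [[1, 1/2], [1/3, 1/4]].
integral = np.array([[quad(f, 0.0, 1.0)[0] for f in row] for row in entries])
print(integral)
```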
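
The adjugate identity $AA_{\mathrm{adj}}=\det(A)I$ and the resulting inverse formula are also easy to verify directly from the cofactor definition; the $3\times 3$ matrix here is an arbitrary invertible example.

```python
import numpy as np

def adjugate(A):
    """Adjugate from cofactors: adj[j, k] = (-1)**(j + k) * minor(A, k, j),
    with the deliberate (k, j) index swap noted above."""
    N = A.shape[0]
    adj = np.empty((N, N))
    for j in range(N):
        for k in range(N):
            # Delete row k and column j of A, then take the determinant.
            minor = np.delete(np.delete(A, k, axis=0), j, axis=1)
            adj[j, k] = (-1) ** (j + k) * np.linalg.det(minor)
    return adj

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # arbitrary invertible example

print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))     # True
print(np.allclose(np.linalg.inv(A), adjugate(A) / np.linalg.det(A)))  # True
```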
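
The expansion of $(A-\lambda I)_{\mathrm{adj}}$ into constant coefficient matrices $A_0,\dots,A_{N-1}$ can be carried out symbolically. A sketch with `sympy`, again on an arbitrary example matrix:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 1, 0],
               [1, 3, 1],
               [0, 1, 2]])        # arbitrary example matrix
N = A.shape[0]

# Entries of adj(A - lam*I) are polynomials in lam of degree at most N - 1.
adj = (A - lam * sp.eye(N)).adjugate()

# Gather like powers: A_mats[n] is the constant matrix A_n multiplying lam**n.
A_mats = [adj.applyfunc(lambda e, n=n: e.expand().coeff(lam, n))
          for n in range(N)]

# Sanity check: A_0 + lam*A_1 + ... + lam**(N-1)*A_{N-1} equals the adjugate.
recon = sum((lam**n * A_mats[n] for n in range(N)), sp.zeros(N, N))
print(sp.expand(recon - adj) == sp.zeros(N, N))   # True
```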
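
The contour-integral formula for $q(A)$ can be tested numerically as well: parametrize a circle $C$ large enough to enclose every eigenvalue of $A$ and apply the trapezoid rule, which converges rapidly for periodic integrands. The matrix, the polynomial $q$, and the radius below are all assumptions chosen for the example.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # same example matrix as above
q = lambda lam: lam**2 + 1.0      # an arbitrary scalar polynomial q

# Positively oriented circle of radius 10, large enough to enclose the
# eigenvalues of A (Gershgorin puts them inside |lam| <= 5).
M = 2000
total = np.zeros((3, 3), dtype=complex)
for t in np.linspace(0.0, 2.0 * np.pi, M, endpoint=False):
    lam = 10.0 * np.exp(1j * t)
    dlam = 1j * lam * (2.0 * np.pi / M)    # lam'(t) * dt
    total += q(lam) * np.linalg.inv(lam * np.eye(3) - A) * dlam

qA = (total / (2j * np.pi)).real
print(np.allclose(qA, A @ A + np.eye(3)))  # True: agrees with q(A) directly
```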
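
Finally, the Cayley-Hamilton conclusion itself: `numpy.poly` returns the coefficients of $\det(\lambda I - A)$, and evaluating that polynomial at $A$ gives the zero matrix up to roundoff.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # same example matrix as above

coeffs = np.poly(A)               # monic coefficients, highest power first
N = A.shape[0]
pA = sum(c * np.linalg.matrix_power(A, N - n) for n, c in enumerate(coeffs))
print(np.allclose(pA, np.zeros((N, N))))  # True: p(A) = 0
```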