Real life examples for eigenvalues / eigenvectors
There are already good answers about the importance of eigenvalues and eigenvectors, such as this question and some others, as well as this Wikipedia article.
I know the theory and these examples, but now, to prepare a course I'm teaching, I'm looking for good real-life examples of how these concepts are used.
Do you know some good, simple real-life examples (in economics, data analysis, or anything else) in which eigenvalues/eigenvectors are a crucial tool?
Here are just some of the many uses of eigenvectors and eigenvalues:
Using singular value decomposition for image compression. This is a note explaining how you can compress an image by throwing away the small eigenvalues of $AA^T$. It takes an $8$ megapixel image of an Allosaurus and shows how the image looks after compressing by keeping only the $1$, $10$, $25$, $50$, $100$ and $200$ largest singular values.
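As a minimal sketch of the idea (using a random matrix as a stand-in for a real grayscale image), truncating the SVD to the $k$ largest singular values gives the best rank-$k$ approximation:

```python
# Sketch: rank-k image compression via truncated SVD.
# A random matrix stands in for a real grayscale image.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))           # stand-in for a grayscale image matrix

U, s, Vt = np.linalg.svd(img, full_matrices=False)

k = 10                               # keep only the 10 largest singular values
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, the spectral-norm error of the best
# rank-k approximation equals the first discarded singular value.
err = np.linalg.norm(img - approx, 2)
print(np.isclose(err, s[k]))         # True
```

Storing $U[:, :k]$, $s[:k]$ and $V^T[:k, :]$ takes $k(m + n + 1)$ numbers instead of $mn$, which is where the compression comes from.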
Deriving Special Relativity is more natural in the language of linear algebra. In fact, Einstein's second postulate really states that "Light is an eigenvector of the Lorentz transform." This document goes over the full derivation in detail.
Spectral Clustering. Whether it's in plants and biology, medical imaging, business and marketing, understanding the connections between fields on Facebook, or even criminology, clustering is an extremely important part of modern data analysis. It allows people to find important subsystems or patterns inside noisy data sets. One such method is spectral clustering, which uses the eigenvalues of the graph Laplacian of a network. Even the eigenvector corresponding to the second-smallest eigenvalue of the Laplacian matrix (the Fiedler vector) allows us to find the two largest clusters in a network.
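A minimal sketch of the bipartitioning step on a made-up toy graph: the sign pattern of the Fiedler vector splits the network into its two clusters.

```python
# Sketch: splitting a toy graph with the Fiedler vector (the eigenvector
# of the second-smallest eigenvalue of the graph Laplacian).
import numpy as np

# Toy network: two 3-node cliques (nodes 0-2 and 3-5) joined by one edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

L = np.diag(A.sum(axis=1)) - A       # unnormalized Laplacian D - A
vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
fiedler = vecs[:, 1]                 # second-smallest eigenvalue's eigenvector

labels = fiedler > 0                 # sign splits the graph into two clusters
print(labels)                        # nodes 0-2 on one side, 3-5 on the other
```

Real pipelines (e.g. scikit-learn's `SpectralClustering`) use the same idea with normalized Laplacians and k-means on several eigenvectors, but the sign split already recovers the two cliques here.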
Dimensionality Reduction/PCA. The principal components correspond to the largest eigenvalues of $A^TA$, and this yields the least-squares projection onto a smaller-dimensional hyperplane; the eigenvectors become the axes of the hyperplane. Dimensionality reduction is extremely useful in machine learning and data analysis as it allows one to understand where most of the variation in the data comes from.
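A small sketch on synthetic data: the top eigenvector of the covariance matrix is the direction holding most of the variance, and projecting onto it keeps that variance.

```python
# Sketch: PCA via the eigendecomposition of the covariance matrix.
# The 2-D data set is synthetic, stretched along one axis.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) @ np.diag([3.0, 0.3])  # anisotropic data
X -= X.mean(axis=0)                  # center the data first

C = X.T @ X / len(X)                 # sample covariance matrix
vals, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order

pc1 = vecs[:, -1]                    # eigenvector of the largest eigenvalue
proj = X @ pc1                       # 1-D projection keeping most variance
print(vals[-1] / vals.sum())         # fraction of variance explained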
Low rank factorization for collaborative prediction. This is what Netflix does (or once did) to predict what rating you'll give a movie you have not yet watched. It uses the SVD and throws away the smallest eigenvalues of $A^TA$.
The Google PageRank algorithm. The eigenvector corresponding to the largest eigenvalue of the link matrix of the web is how the pages are ranked.
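A minimal sketch on a made-up 4-page web (the graph and damping value are illustrative): power iteration converges to the eigenvector of eigenvalue $1$, whose entries are the page ranks.

```python
# Sketch: PageRank as the dominant eigenvector of a toy link matrix,
# found by power iteration.
import numpy as np

# Column-stochastic link matrix for a 4-page web:
# M[i, j] = probability of following a link from page j to page i.
M = np.array([[0.0, 0.5, 0.0, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.0, 1.0, 0.5],
              [1/3, 0.5, 0.0, 0.0]])

d = 0.85                                    # damping factor
G = d * M + (1 - d) / 4 * np.ones((4, 4))   # "Google matrix"

r = np.full(4, 0.25)                        # start from the uniform vector
for _ in range(100):                        # power iteration
    r = G @ r                               # converges to G's eigenvalue-1 eigenvector

print(r)                                    # page ranks; entries sum to 1
```

Damping mixes in a uniform jump so the matrix is irreducible and the dominant eigenvector is unique; the iterate stays a probability vector because $G$ is column-stochastic.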
In control theory and dynamical systems you have modal decomposition, which is a very useful tool for quickly writing down the dynamic response of a given (real-life) system.
Given a system of differential equations
$\dot x(t) = Ax(t)$, $x(0) = x_0$, where $A$ has distinct eigenvalues,
the solution to this equation is given as:
$x(t) = \sum\limits_{i=1}^n c_ie^{\lambda_it}v_i$
where the $c_i$ are coefficients determined by the initial condition $x(0)$, $v_i$ is the $i$th eigenvector, and $\lambda_i$ is the $i$th eigenvalue; needless to say, each $(\lambda_i, v_i)$ forms an eigenpair of $A$.
The physical interpretation is that the solution corresponds to the unforced/natural response of the system and is used to analyze bridge models, RC circuits, mass-spring-damper, magnetic suspension, fluid dynamics, acoustics, neuron models...
Further, we can look at the eigenvalues of the $A$ matrix to determine the stability of the system. If all eigenvalues lie in the open left half plane, then the matrix $A$ is known simply as Hurwitz (a linear-algebra notion completely detached from dynamical systems), and the system is asymptotically stable. Otherwise it will either have a state that never decays to zero, or blow up as time goes to infinity.
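The modal solution and the Hurwitz test above can be sketched numerically (the matrix and initial condition are a toy example with eigenvalues $-1$ and $-2$):

```python
# Sketch: solving x' = A x by modal decomposition and checking stability.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # eigenvalues -1 and -2 (Hurwitz)
x0 = np.array([1.0, 0.0])

lam, V = np.linalg.eig(A)              # eigenpairs (lambda_i, v_i)
c = np.linalg.solve(V, x0)             # coefficients from x(0) = sum_i c_i v_i

def x(t):
    # x(t) = sum_i c_i * exp(lambda_i * t) * v_i
    return (V * (c * np.exp(lam * t))).sum(axis=1).real

# Stability: all eigenvalues in the open left half plane => x(t) -> 0.
print(np.all(lam.real < 0))            # True: A is Hurwitz
print(x(1.0))                          # matches the closed form
                                       # (2e^{-t} - e^{-2t}, -2e^{-t} + 2e^{-2t})
```

For this particular $A$ and $x_0$ the closed-form solution is $x_1(t) = 2e^{-t} - e^{-2t}$, $x_2(t) = -2e^{-t} + 2e^{-2t}$, which the modal sum reproduces.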
This result is extremely well known but goes by different names; in some fields it is simply known as the eigenvector-eigenvalue problem: http://jupiter.math.nctu.edu.tw/~tshieh/teaching/Math254_summerI2009/MAth254_summer_note/lecture16.pdf http://tutorial.math.lamar.edu/Classes/DE/RealEigenvalues.aspx https://see.stanford.edu/materials/lsoeldsee263/11-eig.pdf
You can also consult basic references on ODEs, such as Boyce and DiPrima.
In real life, we effectively use eigenvectors and eigenvalues on a daily basis, though subconsciously most of the time.
Example 1: When you watch a movie on a screen (TV, movie theater, ...), although the picture you see is actually 2D, you do not lose much information from the 3D real world it is capturing. That is because the principal eigenvector lies close to the 2D plane in which the picture is being captured, and any small loss of information (depth) is inferred automatically by our brain. (This is why we usually take photos with the camera facing us directly, not from the top of the head.) Each scene requires certain aspects of the image to be enhanced; that is why the camera operator chooses a camera angle that captures most of those visual aspects (apart from costume color, background scene, and background music).
Example 2: When you eat pizza, french fries, or any food, you typically translate its taste into sour, sweet, bitter, salty, hot, etc.: the principal components of taste. In reality, the way a food is prepared is formulated in terms of ingredient ratios (sugar, flour, butter, and tens or hundreds of other things that go into a specific dish). However, our mind transforms all that information into the principal components of taste (an eigenvector with components sour, bitter, sweet, hot, ...) automatically, along with the food's texture and smell. So we use eigenvectors every day in many situations without realizing that this is how we learn about a system most effectively. Our brain simply transforms all the ingredients, cooking methods, and the final food product into some very effective eigenvector whose elements are taste components, smell, and visual appearance. (All the ingredients and their quantities, along with the cooking procedure, represent some transformation matrix $A$, and we can find principal eigenvector(s) $V$ with elements taste + smell + appearance + touch related by the linear transformation $AV = wV$, where $w$ is a scalar eigenvalue and $V$ an eigenvector. Top wine tasters probably have a richer taste + smell + appearance eigenvector, with much bigger eigenvalues in each dimension. This idea can be extended to any field of study.)
Example 3: If we take pictures of a person from many angles (front, back, top, side, ...) on a daily basis and would like to measure the changes in the entire body as the person grows, we get the most information from the front angle, with the axis of the camera perpendicular to the line running from the crown of the head to a point between the feet. This camera axis captures the most useful information for measuring a person's outer physical changes as they age, and so becomes the principal eigenvector with the largest eigenvalue. (Note: the images captured directly from above the person give far less useful information than those from the camera directly facing them. That is why we use the PCA (Principal Component Analysis) technique: to determine the most effective eigenvectors and related eigenvalues that capture most of the needed information, without bothering with all the remaining axes of data capture.)
Hope this helps in understanding why and how we use eigenvectors and eigenvalues for better perception in whatever we do on a day-to-day basis. Eigenvectors represent those axes of perception and learning along which we can understand and perceive the things around us most effectively.
Finally, it boils down to the differences between people in consciously or subconsciously building and refining such principal eigenvectors and their eigenvalues, in each field of learning; that is what differentiates one person from another (e.g., musicians, artists, scientists, mathematicians, camera operators, directors, teachers, doctors, engineers, parents, stock market brokers, weather forecasters, ...).