Why are skew-symmetric matrices of interest?
I am currently following a course on nonlinear algebra (topics include varieties, elimination, linear spaces, Grassmannians, etc.). Especially in the exercises we work a lot with skew-symmetric matrices; however, I do not yet understand why they are of such importance.
So my question is: How do skew-symmetric matrices tie in with the topics mentioned above, and also, where else in mathematics would we be interested in them and why?
I don't know how much background knowledge you have; maybe all of this is already known to you and you are looking for something else, but it's the first thing that comes to mind. I've tried to phrase the same statement in a few different ways.
The Lie algebra of skew-symmetric matrices is the Lie algebra corresponding to the Lie group of orthogonal matrices. In other words, the space of skew matrices is the tangent space at the identity of the manifold of orthogonal matrices. The space of skew matrices can in some sense be thought of as the infinitesimal version of orthogonal transformations. I don't have time to get into why this is useful in any detail, but in many cases Lie algebras, being linear objects, are significantly easier to handle and still give a great deal of information about the corresponding group.
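To see where the tangent space description comes from: if $Q(t)$ is a smooth curve of orthogonal matrices with $Q(0) = I$, differentiating the defining relation $Q(t)^T Q(t) = I$ at $t = 0$ gives $$ \dot Q(0)^T + \dot Q(0) = 0, $$ i.e. the velocity vector $\dot Q(0)$ is skew-symmetric.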
One concrete manifestation of all this is the following observation: if $A$ is a skew matrix, then its exponential $\exp(A)$ is an orthogonal matrix.
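The algebraic reason is the one-line computation $\exp(A)^T \exp(A) = \exp(A^T)\exp(A) = \exp(-A)\exp(A) = I$. If you want to see it numerically, here is a minimal sketch in Python (using NumPy/SciPy and a randomly generated skew matrix, purely for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Illustration only: build a random skew-symmetric matrix A = B - B^T.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B - B.T                    # A satisfies A^T = -A

Q = expm(A)                    # matrix exponential of A

# Q is orthogonal (Q^T Q = I), and det(Q) = exp(tr A) = 1, so Q lies in SO(4).
print(np.allclose(Q.T @ Q, np.eye(4)))    # True
print(np.isclose(np.linalg.det(Q), 1.0))  # True
```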
There are many books and lecture notes available on Lie groups and algebras. Some sources can be found in the answers to this question.
This is not the area of math you're interested in, but here's an example I might as well write down. In convex optimization we are interested in the canonical form problem $$ \text{minimize} \quad f(x) + g(Ax) $$ where $f$ and $g$ are closed convex proper functions and $A$ is a real $m \times n$ matrix. The optimization variable is $x \in \mathbb R^n$. This canonical form problem is the starting point for the Fenchel-Rockafellar approach to duality.
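To give one concrete instance: the lasso problem $\min_x \tfrac12 \|Ax - b\|_2^2 + \lambda \|x\|_1$ fits this template with $f(x) = \lambda \|x\|_1$ and $g(y) = \tfrac12 \|y - b\|_2^2$.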
The KKT optimality conditions for this optimization problem can be written as $$ \tag{$\spadesuit$} 0 \in \begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix} \begin{bmatrix} x \\ z \end{bmatrix} + \begin{bmatrix} \partial f(x) \\ \partial g^*(z) \end{bmatrix}, $$ where $g^*$ is the convex conjugate of $g$, and $\partial f(x)$ and $\partial g^*(z)$ are the subdifferentials of $f$ at $x$ and of $g^*$ at $z$, respectively. The notation $\begin{bmatrix} \partial f(x) \\ \partial g^*(z) \end{bmatrix}$ denotes the Cartesian product $\partial f(x) \times \partial g^*(z)$.
The condition $(\spadesuit)$ is a great example of a "monotone inclusion problem", a class of problems that generalizes convex optimization problems. The subdifferential $\partial f$ is the motivating example of a "monotone operator", but the operator $$ \begin{bmatrix} x \\ z \end{bmatrix} \mapsto \begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix} $$ is a good example of a monotone operator that is not the subdifferential of a convex function.
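Here the skew-symmetry is exactly what gives monotonicity: a linear map $M$ is monotone precisely when $\langle Mw, w \rangle \ge 0$ for all $w$, and for the skew-symmetric matrix $M = \begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix}$ we have $$ \langle Mw, w \rangle = w^T M w = \tfrac12\, w^T (M + M^T)\, w = 0, $$ so the inequality holds with equality.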
Natural numerical discretizations of odd-order derivatives are skew-symmetric. Thus skew-symmetric matrices are important in the study of numerical methods for hyperbolic PDEs. They are also related to the properties of the PDEs themselves, since an odd-order derivative can be viewed as an infinite-dimensional skew-symmetric operator.
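A minimal sketch of what "natural discretizations are skew-symmetric" means, assuming a uniform periodic grid (with other boundary conditions the matrix is skew-symmetric only up to boundary terms); the infinite-dimensional statement is that $\partial_x$ is skew-adjoint on periodic functions, by integration by parts: $\int u' v = -\int u v'$.

```python
import numpy as np

# Illustration (periodic grid assumed): the standard centered-difference
# approximation of d/dx on n points with spacing h is skew-symmetric.
n, h = 8, 0.1
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 1.0 / (2 * h)    # +1/(2h) on the (wrapped) superdiagonal
    D[i, (i - 1) % n] = -1.0 / (2 * h)   # -1/(2h) on the (wrapped) subdiagonal

print(np.allclose(D, -D.T))  # True: D^T = -D
```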