Why is an infinite dimensional space so different from a finite dimensional one?
It certainly cannot be "only because the unit ball becomes non-compact", because there are important algebraic differences between finite and infinite dimension that don't even depend on having a norm available (so "unit ball" would be meaningless).
One of the simplest differences is that if $V$ is finite-dimensional, then a linear transformation $V\to V$ is injective if and only if it is surjective, whereas both directions of this fail in infinite dimensions.
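The finite-dimensional equivalence can be checked mechanically with ranks; here is a minimal numpy sketch (the two matrices are arbitrary examples chosen for illustration):

```python
import numpy as np

def injective(A):
    # trivial kernel  <=>  rank equals the number of columns
    return np.linalg.matrix_rank(A) == A.shape[1]

def surjective(A):
    # full image  <=>  rank equals the number of rows
    return np.linalg.matrix_rank(A) == A.shape[0]

# For a square matrix the two notions coincide: both hold at full rank,
# both fail otherwise.
full = np.array([[1.0, 2.0], [3.0, 4.0]])      # rank 2
singular = np.array([[1.0, 2.0], [2.0, 4.0]])  # rank 1
assert injective(full) and surjective(full)
assert (not injective(singular)) and (not surjective(singular))
```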
Fundamentally I think the underlying cause is that in a finite-dimensional space, a (multi)set of vectors whose cardinality is at most the dimension is necessarily finite, and therefore always has a sum. This is not the case in an infinite-dimensional space, and that changes everything.
(More precisely, this is responsible for most things that are always true in finite-dimensional space but can fail in infinite dimension. For the converse -- things that are always true in infinite dimension but can fail in finite dimensions -- the combinatorial properties of infinite sets that celtschk's answer lists are probably the place to put the blame.)
I think most of the differences can ultimately be traced back to the following properties of infinite sets (where the relevant sets would be bases of the vector space or functions of the basis vectors):
- There exist bijections between an infinite set and proper subsets of that infinite set. For example, there's a bijection between the integers and the even numbers, $k\mapsto 2k$.
- For an infinite set, its power set and the set of its finite subsets are different sets (they of course agree for finite sets).
- Finite sets of real numbers always have a maximum (and thus a supremum), while infinite sets may be unbounded.
For example, the points listed in this answer to one of your linked questions:
Any bijection from a basis to a proper subset of that basis induces such an endomorphism, simply by extending to the full space by linearity. For example, if you have a countable basis $(e_k)$ indexed by $\mathbb Z$, there is an endomorphism $T$ that sends $e_k$ to $e_{2k}$ and consequently $\sum_k \alpha_k e_k$ to $\sum_k \alpha_k e_{2k}$.
It is easy to check that no non-zero vector is mapped to $0$, so $\operatorname{Ker} T=\{0\}$. Also, no vector is mapped to $e_1$ (every basis vector in the image has an even index), so $T$ is not surjective.
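A sketch of this endomorphism, modelling finitely supported coefficient families over the basis $(e_k)$, $k\in\mathbb Z$, as Python dicts (this `{index: coefficient}` representation is my own choice for illustration):

```python
# T extends e_k -> e_{2k} by linearity: just double every index.
def T(x):
    return {2 * k: a for k, a in x.items()}

v = {-1: 3.0, 0: 1.0, 2: -2.0}          # 3 e_{-1} + e_0 - 2 e_2
assert T(v) == {-2: 3.0, 0: 1.0, 4: -2.0}

# Injective: k -> 2k never merges indices, so T(x) = 0 forces x = 0.
# Not surjective: every index in the image is even, so e_1 is missed.
assert all(k % 2 == 0 for k in T(v))
```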
A vector $v$ is always a linear combination of finitely many basis vectors, therefore the linear form $x\mapsto \langle v,x\rangle$ induced by $v$ is non-zero on only finitely many basis vectors. But of course there exist linear forms which are non-zero on all basis vectors; in particular, for any subset of a basis there exists a linear form that maps every basis vector in that subset to $1$ and every basis vector not in it to $0$. Since an infinite set has more subsets than finite subsets, there exist more linear forms than vectors.
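A toy model of this asymmetry, again using finitely supported sequences as dicts (an illustrative representation of my own, not part of the answer): the "sum of all coordinates" form is non-zero on every basis vector, so no finitely supported $v$ can induce it.

```python
# Pairing <v, x> = sum_k v_k x_k for finitely supported v.
def pair(v, x):
    return sum(a * x.get(k, 0.0) for k, a in v.items())

# The linear form f(x) = sum of all coordinates of x ...
def f(x):
    return sum(x.values())

# ... is non-zero on every basis vector e_k, while x -> <v, x> kills
# all basis vectors outside the (finite) support of v.
v = {0: 1.0, 3: 2.0}
e_100 = {100: 1.0}
assert f(e_100) == 1.0          # f sees e_100
assert pair(v, e_100) == 0.0    # any finitely supported v misses it
```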
For a linear mapping $f$ and a basis $(b_n)$, the set $\{\left\|f(b_n)\right\|\}$ may be unbounded (and hence have no supremum) if $\{b_n\}$ is infinite.
The set of ratios $\{\left\|b_n\right\|_A / \left\|b_n\right\|_B\}$ of two norms may likewise be unbounded for an infinite basis.
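A concrete instance of the first point is differentiation on polynomials with the sup norm on $[0,1]$: $b_n(x)=x^n$ has norm $1$, but its derivative $n x^{n-1}$ has norm $n$. A small numerical sketch (the grid is only there to approximate the sup norm):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
sup_norms = []
for n in range(1, 6):
    bn = x**n                 # ||b_n||_sup = 1, attained at x = 1
    dbn = n * x**(n - 1)      # derivative; ||D b_n||_sup = n
    assert np.isclose(np.max(np.abs(bn)), 1.0)
    sup_norms.append(np.max(np.abs(dbn)))

# {||D b_n||} = {1, 2, 3, ...} is unbounded: D has no operator norm.
assert np.allclose(sup_norms, [1, 2, 3, 4, 5])
```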
On a finite-dimensional real (or complex) vector space all norms are equivalent, and the metrics associated to those norms are all complete. In the infinite-dimensional setting the choice of norm is much more crucial, as the resulting metrics need not be complete and can have very different completions. Consider on $C[0,1]$ the uniform norm, the $L^1$-norm, and the $L^2$-norm. They are pairwise inequivalent, and the space is complete in the first norm but not in the second or third.
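To see the inequivalence concretely, take $f_n(x)=x^n$ on $[0,1]$: the uniform norm is $1$ for every $n$, while the $L^1$-norm is exactly $1/(n+1)$. A small exact-arithmetic sketch (the function names are my own):

```python
from fractions import Fraction

def sup_norm(n):
    return Fraction(1)           # sup of x^n on [0,1] is 1

def l1_norm(n):
    return Fraction(1, n + 1)    # integral of x^n over [0,1]

# The ratio ||f_n||_sup / ||f_n||_1 = n + 1 grows without bound, so no
# constant C with ||f||_sup <= C ||f||_1 can exist on C[0,1].
ratios = [sup_norm(n) / l1_norm(n) for n in range(1, 6)]
assert ratios == [2, 3, 4, 5, 6]
```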
A harder result is that a finite-dimensional real (or complex) vector space has just one Hausdorff topology making it a topological vector space (i.e., vector addition and scalar multiplication are continuous). This topology is of course the one coming from any nontrivial norm. (It is not a topological vector space using the discrete topology on it since scalar multiplication is not continuous.) Infinite-dimensional vector spaces can have lots of different topologies making them topological vector spaces since they can have inequivalent norms. They also have topologies making them topological vector spaces that do not come from a norm, like the weak* topology.
Every linear transformation between two finite-dimensional real (or complex) vector spaces is continuous with respect to the canonical topology on those spaces (the topology coming from any norm). So there is no need for a "Closed Graph Theorem" in finite dimensions. And this is also why the dual space is much simpler in finite dimensions: every linear functional is automatically continuous.
One of the more interesting properties of a finite-dimensional linear space $X$ is that the space of linear operators on $X$ is also finite-dimensional. This has deep algebraic consequences. Indeed, if $A : X \rightarrow X$ is linear, then $I,A,A^{2},\cdots,A^{N^{2}}$ must be a linearly-dependent set if $N=\dim(X)$, since these are $N^2+1$ operators in an $N^2$-dimensional space. This gives the existence of a polynomial $p$ such that $p(A)=0$, and hence a minimal polynomial $m$ with leading coefficient $1$ such that $m(A)=0$. Furthermore, and quite surprisingly, the degree of the minimal polynomial never exceeds $N$. None of this is true in infinite dimensions, and most of it doesn't even make sense there.
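The dependence among powers can be found numerically. For the arbitrary example $A=\begin{pmatrix}1&2\\3&4\end{pmatrix}$, Cayley-Hamilton predicts $A^2 = 5A + 2I$; a sketch that recovers the coefficients by least squares (exact here, since $A^2$ lies in the span of $I$ and $A$):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
I = np.eye(2)

# Solve A^2 = x1*A + x0*I by flattening the matrices into vectors.
basis = np.stack([I.ravel(), A.ravel()], axis=1)   # columns vec(I), vec(A)
coeffs, *_ = np.linalg.lstsq(basis, (A @ A).ravel(), rcond=None)
x0, x1 = coeffs

# m(lambda) = lambda^2 - x1*lambda - x0 annihilates A.
assert np.allclose(A @ A, x1 * A + x0 * I)
assert np.allclose([x0, x1], [2.0, 5.0])   # char. poly: l^2 - 5l - 2
```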
If $m(\lambda)=\lambda^{n}+a_{n-1}\lambda^{n-1}+\cdots+a_{1}\lambda + a_{0}$ is the minimal polynomial, then $$ \begin{align} -a_{0}I & = A(A^{n-1}+a_{n-1}A^{n-2}+\cdots+a_{1}I) \\ & = (A^{n-1}+a_{n-1}A^{n-2}+\cdots+a_{1}I)A. \end{align} $$ From this it becomes obvious that $A$ has a left inverse iff it has a right inverse, and this occurs iff $a_{0}=m(0) \ne 0$. The above shows that a left inverse is always a right inverse and a right inverse is always a left inverse, which makes either a full inverse; furthermore, if the inverse exists, it must be a polynomial in $A$.
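Continuing the $2\times 2$ example $A=\begin{pmatrix}1&2\\3&4\end{pmatrix}$ with minimal polynomial $m(\lambda)=\lambda^2-5\lambda-2$ (so $m(0)=-2\ne 0$): the identity $A(A-5I)=2I$ hands us the inverse as a polynomial in $A$. A quick numerical check:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
I = np.eye(2)

assert np.allclose(A @ A - 5 * A - 2 * I, 0)   # m(A) = 0

# From A(A - 5I) = 2I, the inverse is the polynomial (A - 5I)/2 in A.
inv_from_poly = (A - 5 * I) / 2.0
assert np.allclose(A @ inv_from_poly, I)       # right inverse
assert np.allclose(inv_from_poly @ A, I)       # ... and left inverse
assert np.allclose(inv_from_poly, np.linalg.inv(A))
```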
The same argument shows that $A-\lambda I$ is invertible iff $m(\lambda)\ne 0$, which means that there are only finitely many $\lambda$ for which $A-\lambda I$ is not invertible. This becomes a question about polynomials only, and the inverse has the form $$ (A-\lambda I)^{-1}=-\frac{1}{m(\lambda)}(\lambda^{n-1}I+\lambda^{n-2}C_{n-2}+\cdots+\lambda C_{1}+C_{0}), $$ where the coefficient matrices $C_{k}$ are polynomials in $A$. If you're working over $\mathbb{C}$ instead of $\mathbb{R}$, then you can write this as a partial fraction decomposition involving the roots $\{ \lambda_{1},\cdots,\lambda_{k}\}$ of $m$: $$ (A-\lambda I)^{-1} = \sum_{l=1}^{k}\sum_{j=1}^{r_{l}} \frac{1}{(\lambda-\lambda_{l})^{j}}A_{l,j}. $$ All of the coefficient matrices $A_{l,j}$ are polynomials in $A$, and from this you can derive the tools needed to obtain the Jordan normal form, including the existence of cyclic subspaces associated with the eigenvalue $\lambda_{l}$; these subspaces have maximum order equal to the order of the pole at $\lambda_{l}$.
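For the same $2\times 2$ example with $m(\lambda)=\lambda^2-5\lambda-2$, one gets $m(\lambda)I = (\lambda I - A)(\lambda I + A - 5I)$, so the resolvent formula holds with the single coefficient matrix $C_0 = A - 5I$. A sketch verifying it at a few non-eigenvalue points:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
I = np.eye(2)
C0 = A - 5 * I                     # a polynomial in A

def m(lam):
    return lam**2 - 5 * lam - 2    # minimal polynomial of A

# (A - lam*I)^{-1} = -(lam*I + C0) / m(lam) wherever m(lam) != 0.
for lam in (1.0, -3.0, 10.0):
    assert abs(m(lam)) > 1e-12     # lam is not an eigenvalue of A
    resolvent = -(lam * I + C0) / m(lam)
    assert np.allclose(resolvent, np.linalg.inv(A - lam * I))
```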
So finite dimensionality has an amazingly deep effect on the algebra of linear operators, and this is one of the most important differences. Compactness and topology don't buy you much in this context; they're not really necessary. However, topology is necessary in order to deal effectively with infinite-dimensional spaces, so that you can build up from the finite. The linear algebra of operators on infinite-dimensional spaces, by contrast, is just not clean; and you wouldn't expect it to be.