Generating a vector space from a finite set

Let $X$ be a finite set and choose a labeling for it, so that $X = \{x_1,...,x_n\}$, and let $\mathbb F$ be a field. Then the vector space over $\mathbb F$ generated by this finite set $X$ is the set of "formal linear combinations" of the elements of $X$, i.e. expressions of the form $a_1x_1+...+a_nx_n$ where $a_1,...,a_n$ are elements of the field $\mathbb F$. As a result, a basis of this vector space is $(x_1,...,x_n)$.

Now I wonder: let $X$ be the finite set $X=\{1,2,3,...,n\}$, and let the field be $\mathbb C$. How do this finite set $X$ and the field together generate a vector space? Specifically, $1,2,3,...,n$ are elements of the set $X$; how can they be linearly independent and so form a basis of the vector space?


Don't let the names of the elements of $X$ trick you into thinking they have any kind of implicit properties. In your example (for $n\geq2$), a vector might look like $$ (5+2i){1}+(-3-5i){2} $$ You could expand the brackets and write $$ 5\cdot{1}+2i\cdot {1}-3\cdot{2}-5i\cdot{2} $$ but apart from that, not much simplification can be done (I would argue that that's not a simplification, but that's beside the point). That makes it a bad idea to use that particular $X$, because it really, really feels like those multiplications can be carried out in some way. But they can't. That's what the "formal" in "formal linear combinations" means. It is also a bad idea to use that $X$ because it's difficult to tell apart certain elements of $\Bbb C$, which have algebraic properties, from the elements of $X$, which don't.

It can be done, if you're really careful. But most of the time it won't be worth it. It will just confuse your readers, and maybe even yourself.
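If it helps, here is a minimal computational sketch of that point (my own illustration, not something from the question; the dict-based representation in Python is just one convenient choice). A formal linear combination over $X=\{1,\dots,n\}$ with coefficients in $\Bbb C$ can be stored as a map from labels to coefficients, and the labels $1,2,\dots$ are only ever used as keys, never multiplied or added as numbers:

```python
# A sketch (my own illustration, not part of the answer): formal linear
# combinations over X = {1, ..., n}, with complex coefficients, stored as dicts
# {element of X: coefficient}.  The elements of X are only ever used as keys,
# i.e. as labels; nothing is ever multiplied *by* them, which is exactly what
# "formal" means here.

def add(v, w):
    """Add two formal linear combinations coefficientwise."""
    result = dict(v)
    for x, a in w.items():
        result[x] = result.get(x, 0) + a
    return result

def scale(c, v):
    """Multiply a formal linear combination by a scalar c from the field."""
    return {x: c * a for x, a in v.items()}

# The vector (5+2i)*1 + (-3-5i)*2 from above, with X = {1, 2, 3}:
v = {1: 5 + 2j, 2: -3 - 5j}
w = {2: 1 + 0j, 3: 4 + 0j}

print(add(v, w))     # {1: (5+2j), 2: (-2-5j), 3: (4+0j)}
print(scale(2j, v))  # {1: (-4+10j), 2: (10-6j)}
```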


Perhaps a different choice of notation might clarify things. Given any set $X$ and a field $F$ (you can do this with rings, or other stuff as well), let $F^{\oplus X}$ denote the set of all functions $f:X\to F$ such that $\{x\in X\,: f(x)\neq 0\}$ is a finite set.

Then, $F^{\oplus X}$ is a subset of $F^X$, which is the set of all possible functions $f:X\to F$. The latter space can very clearly be given the structure of a vector space over the field $F$ (addition and scalar multiplication are defined pointwise), and it is easy to verify that $F^{\oplus X}$ is a subspace of $F^X$.

Very important examples of elements of $F^{\oplus X}\subset F^X$ are: for each $x\in X$, define $\delta_x:X\to F$ by setting \begin{align} \delta_x(y)&:= \begin{cases} 1&\text{if $y=x$}\\ 0 & \text{else} \end{cases} \end{align} Then, $\{\delta_x\}_{x\in X}$ forms a (Hamel) basis for the vector space $F^{\oplus X}$, precisely because the definition requires that the support of the functions be finite. Now, given $x_1,\dots, x_n\in X$ and scalars $a_1,\dots, a_n\in F$, it makes perfect sense to consider the linear combination \begin{align} a_1\delta_{x_1}+\cdots +a_n\delta_{x_n} \end{align} This is just a linear combination of certain functions $X\to F$. So, when speaking of a formal linear combination, you can think of it in this manner.
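To make this concrete, here is a small sketch (my own illustration, assuming $F=\Bbb C$ and Python as the medium): $\delta_x$ is implemented literally as a function on $X$, and a linear combination $a_1\delta_{x_1}+\cdots+a_n\delta_{x_n}$ is again just a function, computed pointwise.

```python
# A small sketch (my own illustration, assuming F = complex numbers):
# delta_x implemented literally as a function on X, plus a pointwise linear
# combination of such functions.

def delta(x):
    """Return the function delta_x : X -> F sending x to 1 and everything else to 0."""
    return lambda y: 1 if y == x else 0

def linear_combination(terms):
    """Given pairs (a_i, x_i), return the function a_1*delta_{x_1} + ... + a_n*delta_{x_n}."""
    return lambda y: sum(a * delta(x)(y) for a, x in terms)

# Example with X = {'apple', 'banana', 'cherry'} -- the names carry no structure:
f = linear_combination([(5 + 2j, 'apple'), (-3 - 5j, 'banana')])
print(f('apple'))   # (5+2j)
print(f('banana'))  # (-3-5j)
print(f('cherry'))  # 0j
```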


In the special case that our index set is $X=\{1,\dots, n\}$, the resulting vector space $F^{\oplus \{1,\dots, n\}}$ may set-theoretically be different from $F^n$ (defined as the set of all ordered $n$-tuples), but it's the same idea. So, in terms of my above notation, a basis for the space $F^{\oplus\{1,\dots, n\}}$ is $\{\delta_1,\dots, \delta_n\}$, where $\delta_i:\{1,\dots, n\}\to F$ is the function \begin{align} \delta_i(j)&= \begin{cases} 1&\text{if $j=i$}\\ 0&\text{else} \end{cases} \end{align} But if you think about it, this is precisely what everyone writes as $\{e_1,\dots, e_n\}$ being a basis for the vector space $F^n$, where $e_i=(0,\dots, \underbrace{1}_{\text{$i^{th}$ spot}},\dots, 0)$.
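The identification is easy to spell out explicitly; here is a tiny sketch (again my own, using the same dict representation as above, which is an assumption rather than part of the construction itself):

```python
# A tiny sketch (same assumed dict representation as before): turning a finitely
# supported function on {1, ..., n} into the familiar n-tuple of F^n.

def to_tuple(f, n):
    """Convert f : {1,...,n} -> F (stored as a dict of its nonzero values) to an n-tuple."""
    return tuple(f.get(i, 0) for i in range(1, n + 1))

delta_2 = {2: 1}             # the function delta_2 on {1, ..., 4}
print(to_tuple(delta_2, 4))  # (0, 1, 0, 0), i.e. e_2 in F^4
```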


Extra ramblings about Polynomials:

If you start with the index set $X=\Bbb{N}_0$, the non-negative integers, then the resulting space is the space of polynomials in one variable. It's clear how the vector space structure is defined, because it's a special case of what I've already mentioned above. The multiplication is defined by $\delta_i\cdot \delta_j:=\delta_{i+j}$ for all $i,j\in X$, and then extending bilinearly (again, we can do this because $F^{\oplus X}$ has in its definition the finite support condition).
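In computational terms this bilinear extension is just a convolution of coefficients; here is a short sketch (my own illustration, with the same assumed dict representation):

```python
# A sketch of the product on F^{\oplus N_0} (my own dict-based illustration): a
# polynomial a_0 + a_1 x + ... is stored as {exponent: coefficient}, and
# delta_i * delta_j = delta_{i+j}, extended bilinearly, becomes a convolution.

def poly_mul(p, q):
    """Multiply two finitely supported functions N_0 -> F, i.e. two polynomials."""
    result = {}
    for i, a in p.items():
        for j, b in q.items():
            result[i + j] = result.get(i + j, 0) + a * b
    return result

# (1 + x) * (2 + 3x^2) = 2 + 2x + 3x^2 + 3x^3
p = {0: 1, 1: 1}
q = {0: 2, 2: 3}
print(sorted(poly_mul(p, q).items()))  # [(0, 2), (1, 2), (2, 3), (3, 3)]
```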

Of course, when we write the polynomial ring as $F[x]$, to indicate "finite formal sums in the indeterminate $x$", we can either think of $x$ as "a symbolic object to be manipulated according to some rules", or we can think of it as the function $\delta_1$ (which, admittedly, you could also argue is a specific symbol, and so on), and more generally, of $x^i$ as $\delta_i$.

More generally, by taking $X=(\Bbb{N}_0)^k$ for some $k\in\Bbb{N}$, the resulting vector space $F^{\oplus X}$ is what we can think of as the space of polynomials in $k$ variables, with coefficients in the field $F$. Again, the pure vector space structure is clear. The multiplication is defined by extending bilinearly the definition $\delta_{(i_1,\dots, i_k)}\cdot \delta_{(j_1,\dots,j_k)}:=\delta_{(i_1+j_1,\dots, i_k+j_k)}$. In the usual $F[x_1,\dots, x_k]$ notation, what I'm calling $\delta_{(i_1,\dots, i_k)}$ is what would be written as $x_1^{i_1}\cdots x_k^{i_k}$.
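The same sketch as before adapts directly (the tuple-as-multi-index encoding is my own choice of illustration): a monomial $x_1^{i_1}\cdots x_k^{i_k}$ is stored under the key $(i_1,\dots,i_k)$, and multiplying monomials adds multi-indices componentwise.

```python
# Multivariate version of the previous sketch (tuple keys are my own encoding):
# a monomial x_1^{i_1} ... x_k^{i_k} is the key (i_1, ..., i_k), and multiplying
# monomials adds the multi-indices componentwise.

def monomial_product(i, j):
    """delta_i * delta_j = delta_{i+j} for multi-indices i, j (stored as tuples)."""
    return tuple(a + b for a, b in zip(i, j))

def multipoly_mul(p, q):
    """Multiply two polynomials in k variables, stored as {multi-index: coefficient}."""
    result = {}
    for i, a in p.items():
        for j, b in q.items():
            key = monomial_product(i, j)
            result[key] = result.get(key, 0) + a * b
    return result

# (x + y) * (x - y) = x^2 - y^2, writing x = x_1 and y = x_2:
p = {(1, 0): 1, (0, 1): 1}
q = {(1, 0): 1, (0, 1): -1}
print(multipoly_mul(p, q))  # {(2, 0): 1, (1, 1): 0, (0, 2): -1} -- the xy terms cancel
```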