Representation of a linear functional in vector space
In the book Functional Analysis, Sobolev Spaces and Partial Differential Equations by Haim Brezis, we have the following lemma:
Lemma. Let $X$ be a vector space and let $\varphi, \varphi_1, \varphi_2, \ldots, \varphi_k$ be $(k + 1)$ linear functionals on $X$ such that $$ [\varphi_i(v) = 0 \quad \forall\; i \in \{1, 2, \ldots , k\}] \Rightarrow [\varphi(v) = 0]. $$
Then there exist constants $\lambda_1, \lambda_2, \ldots, \lambda_k\in\mathbb{R}$ such that $\varphi=\lambda_1\varphi_1+\lambda_2\varphi_2+\ldots+\lambda_k\varphi_k$.
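For concreteness, a small instance of the statement (my own example, not from the book): take $X = \mathbb{R}^3$ with $$ \varphi_1(v) = v_1, \qquad \varphi_2(v) = v_2, \qquad \varphi(v) = 2v_1 - v_2. $$ If $\varphi_1(v) = \varphi_2(v) = 0$ then $v = (0, 0, v_3)$, hence $\varphi(v) = 0$, and indeed $\varphi = 2\varphi_1 - \varphi_2$.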
In this book, the author uses a separation theorem to prove this lemma. I would like to ask whether we can prove this lemma using only linear algebra.
Thank you for any help.
Your assumption is that $\ker{\varphi} \supseteq \bigcap_{i=1}^k \ker{\varphi_i}$.
Consider the linear map $\ell \colon X \to \mathbb{R}^k$ given by $\ell(x) = (\varphi_1(x),\dots,\varphi_k(x))$ and let $V = \operatorname{im}\ell = \{\ell(x):x \in X\} \subseteq \mathbb{R}^k$ be its image. We have $\ker{\ell} = \bigcap_{i=1}^k \ker{\varphi_{i}} \subseteq \ker\varphi$. Therefore $\varphi = \tilde{\varphi} \circ \ell$ for some linear functional $\tilde{\varphi}\colon V \to \mathbb{R}$: explicitly, $\tilde{\varphi}(v) = \varphi(x)$ where $x$ is any element with $\ell(x) = v$. This is well-defined because $\ell(x) = \ell(x')$ implies $x - x' \in \ker\ell \subseteq \ker\varphi$, hence $\varphi(x) = \varphi(x')$; linearity is immediate.
Every linear functional defined on a subspace $V$ of $\mathbb{R}^k$ can be extended to a linear functional on all of $\mathbb{R}^k$ (write $\mathbb{R}^k = V \oplus V^{\bot}$ and set the extension to be zero on $V^{\bot}$) and every linear functional on $\mathbb{R}^k$ is of the form $\psi(y) = \sum_{i=1}^k a_i y_i$. Thus, there are $\lambda_1,\dots,\lambda_k \in \mathbb{R}$ such that $\tilde\varphi(v) = \sum_{i=1}^k \lambda_i v_i$ for all $v \in V$. In other words, $\varphi = \sum_{i=1}^k \lambda_i \varphi_i$.
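As a sanity check, here is a minimal numerical sketch of this construction (my own illustration with made-up data, not part of the answer above): take $X = \mathbb{R}^n$, encode $\varphi_1,\dots,\varphi_k$ as the rows of a matrix $A$, and choose $\varphi$ in the row space of $A$ so that the kernel hypothesis holds by construction; recovering the $\lambda_i$ then amounts to solving $A^{\mathsf T}\lambda = c$.

```python
import numpy as np

# X = R^5; phi_i(x) = A[i] @ x, and phi(x) = c @ x with c chosen in the
# row space of A, so ker(A) = intersection of ker(phi_i) ⊆ ker(phi)
# holds by construction.
rng = np.random.default_rng(0)
k, n = 3, 5
A = rng.standard_normal((k, n))         # the functionals phi_1, ..., phi_k
lam_true = np.array([2.0, -1.0, 0.5])   # hidden coefficients (made up)
c = A.T @ lam_true                      # phi = 2*phi_1 - phi_2 + 0.5*phi_3

# Recover lambda: c lies in im(A^T), so least squares solves A^T lam = c exactly.
lam, *_ = np.linalg.lstsq(A.T, c, rcond=None)
assert np.allclose(A.T @ lam, c)        # phi = sum_i lam_i * phi_i on all of X
print(lam)                              # -> approximately [ 2.  -1.   0.5]
```

When $A$ has full row rank (as a generic random $A$ does), the $\lambda_i$ are unique; otherwise any solution of the consistent system works.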
Look at Lemma 3.9 in Rudin's "Functional Analysis". The only subtlety is that the proof requires extending a functional from a subspace of a finite-dimensional space to the whole space, but as far as I can tell that step is purely algebraic.
Here is a repackaging of linearalgebraist's answer:
Let $$ L \colon X \to \mathbb{R}^k, \qquad Lx := (\phi_1(x),\dots, \phi_k(x)). $$ Then the transpose map is $$ L^* \colon (\mathbb{R}^k)^* \to X^*, \qquad L^*f := f \circ L = f_1\phi_1+\dots+f_k\phi_k, $$ where we identify $f \in (\mathbb{R}^k)^*$ with $(f_1,\dots,f_k)$ via $f(y) = f_1 y_1 + \dots + f_k y_k$. So the conclusion you seek is equivalent to the general algebraic fact that $$ \operatorname{im} L^* = (\ker L)^{\perp}:=\{\phi \in X^* : \phi (x) = 0\; \forall x \in \ker L\}. $$
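Why this algebraic fact holds (a sketch of both inclusions, reusing the factorization idea from the previous answer):
$$ \operatorname{im} L^* \subseteq (\ker L)^{\perp}: \quad \phi = f \circ L \text{ and } x \in \ker L \;\Longrightarrow\; \phi(x) = f(Lx) = f(0) = 0; $$
$$ (\ker L)^{\perp} \subseteq \operatorname{im} L^*: \quad \phi|_{\ker L} = 0 \;\Longrightarrow\; \phi = \psi \circ L \text{ for a linear } \psi \text{ on } \operatorname{im} L \subseteq \mathbb{R}^k; \text{ extend } \psi \text{ to } f \in (\mathbb{R}^k)^*, \text{ so } \phi = L^* f. $$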
More intuitively,
The map induced by $L$ on $X/\ker L$ is injective, which means that knowing $Lx$ amounts to knowing $x$ up to an element of $\ker L$. Hence knowing $Lx$ determines $\phi(x)$, because the element of $\ker L$ doesn't affect that value (by assumption $\ker L \subseteq \ker\phi$). Clearly, all of this "knowing things" is linear, so you can write $\phi(x)$ as a linear combination of $\phi_1(x),\dots,\phi_k(x)$.
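In symbols (my formalization of this intuition, not part of the original answer): $\phi$ descends to $\bar\phi$ on the quotient because $\ker L \subseteq \ker\phi$, and $L$ factors through the quotient as an isomorphism $\bar L$ onto its image, $$ X \xrightarrow{\;\pi\;} X/\ker L \xrightarrow{\;\bar L\;} \operatorname{im} L \subseteq \mathbb{R}^k, \qquad \phi = \bar\phi \circ \pi = (\bar\phi \circ \bar L^{-1}) \circ L, $$ and the functional $\bar\phi \circ \bar L^{-1}$ on the subspace $\operatorname{im} L \subseteq \mathbb{R}^k$ extends to some $y \mapsto \sum_{i=1}^k \lambda_i y_i$, which gives $\phi = \sum_{i=1}^k \lambda_i \phi_i$.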