Proof that a linear system cannot have exactly two solutions.
Suppose that $\vec v$ and $\vec w$ are distinct solutions of the system $A\vec x = \vec b$, so that $A \vec v = A \vec w = \vec b$. Then $\frac{1}{2}(\vec v + \vec w)$ must be distinct from both $\vec v$ and $\vec w$ (if it equaled $\vec v$, say, then rearranging would give $\vec w = \vec v$, a contradiction) and must also solve the system, since: $$ A(\tfrac{1}{2}(\vec v + \vec w)) = \tfrac{1}{2}(A\vec v + A\vec w) = \tfrac{1}{2}(\vec b + \vec b) = \vec b $$ We can then apply the same argument to $\vec v$ and $\frac{1}{2}(\vec v + \vec w)$, and repeat, to get infinitely many distinct solutions.
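Here is a quick numerical sketch of this halving argument; the singular matrix and the two starting solutions below are made-up examples, not part of the proof:

```python
import numpy as np

# Numerical sketch of the halving argument. The singular matrix A and the
# two distinct starting solutions v, w are made-up examples.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # singular, so A x = b can have many solutions
b = np.array([3.0, 6.0])
v = np.array([1.0, 1.0])             # A v = b
w = np.array([3.0, 0.0])             # A w = b, with w != v

for _ in range(5):
    w = 0.5 * (v + w)                # midpoint of the current pair of solutions
    print(w, np.allclose(A @ w, b))  # each new midpoint is again a solution
```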
Your second proof sounds fine. Let's imagine that the system of equations is written as $Ax = b$, where $A$ is the matrix of coefficients, $x$ is the vector of variables we are solving for, and $b$ is the vector of constants. Now assume we have two distinct solutions $x_1$ and $x_2$. Then $rx_1+sx_2$ is also a solution whenever $r+s=1$: just plug it in and use the linearity of matrix multiplication.
$$A(rx_1+sx_2) = A(rx_1)+A(sx_2) = rA(x_1)+sA(x_2) = rb+sb = (r+s)b = b$$
We thus have an infinite number of solutions, since distinct choices of $r$ give distinct points (because $x_1 \ne x_2$). If you don't like matrices, you can just write out each equation in the system and check the same computation term by term.
If you want to get some intuition for what is happening: the set of combinations $rx_1+sx_2$ with $r+s=1$ is the line through the two points. When $r$ is $1$ and $s$ is $0$, we get the point $x_1$. When $s$ is $1$ and $r$ is $0$, we get the point $x_2$. Other combinations of $r$ and $s$ give the other points on the line.
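To see this line of solutions concretely, here is a sketch with a made-up singular system (the matrix and the two points are assumptions for illustration only):

```python
import numpy as np

# Sketch of the line of solutions r*x1 + s*x2 with r + s = 1, using a
# made-up singular system.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])
x1 = np.array([1.0, 1.0])            # one solution
x2 = np.array([3.0, 0.0])            # another solution

for r in np.linspace(-1.0, 2.0, 7):  # s = 1 - r; values outside [0, 1] work too
    p = r * x1 + (1 - r) * x2
    print(r, p, np.allclose(A @ p, b))
```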
You need to be more careful with your first argument, though. In general, when graphing systems of $n$ equations in $n$ unknowns, the equations describe not lines but $(n-1)$-dimensional planes (hyperplanes) in $n$-dimensional space. For a system of three equations and three unknowns, for example, we have three planes in three-dimensional space. If no two planes are parallel, the intersection of two planes is a line, and the intersection of that line with the third plane is generically a point: that's the one-solution case. If two of the planes are parallel and distinct, they never intersect, so there are no solutions. And finally, if the intersection line lies inside the third plane (or two of the planes coincide), we get an infinite number of solutions. That's the intuition behind this theorem.
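These three cases can be read off from matrix ranks (the Rouche-Capelli criterion). A minimal sketch, with made-up coefficient matrices standing in for the three geometric configurations:

```python
import numpy as np

# Classify a square system by comparing the rank of A with the rank of
# the augmented matrix [A | b]. The matrices below are made-up examples.
def describe(A, b):
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA < rAb:
        return "no solutions (inconsistent, e.g. parallel planes)"
    return "one solution" if rA == A.shape[1] else "infinitely many solutions"

A1 = np.eye(3)                                              # three planes meeting in a point
A2 = np.array([[1., 1., 1.], [1., 1., 1.], [0., 0., 1.]])   # two parallel, distinct planes
A3 = np.array([[1., 1., 0.], [2., 2., 0.], [0., 0., 1.]])   # two coincident planes
for A, b in [(A1, np.ones(3)),
             (A2, np.array([0., 1., 0.])),
             (A3, np.array([1., 2., 0.]))]:
    print(describe(A, b))
```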
We can represent a linear system in matrix notation as $A\vec x=\vec v$. Now suppose that we have $A\vec x=\vec v$ and $A\vec y=\vec v$ with $\vec x \ne \vec y$. Then for any scalars $a,b$ with $a+b=1$ we have, by linearity, $A(a\vec x + b\vec y)=aA\vec x +b A \vec y=a\vec v+b\vec v=\vec v$, so we have infinitely many solutions.
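A symbolic sketch of this computation (the singular matrix and the two solutions are made-up assumptions; sympy keeps $a$ as a free symbol, with $b = 1-a$):

```python
import sympy as sp

# Symbolic sketch: a*x + b*y with a + b = 1 solves the system whenever
# x and y do. The singular matrix and the two solutions are made-up.
a = sp.symbols('a')
A = sp.Matrix([[1, 2],
               [2, 4]])
v = sp.Matrix([3, 6])
x = sp.Matrix([1, 1])                # A*x = v
y = sp.Matrix([3, 0])                # A*y = v, with y != x
z = a*x + (1 - a)*y                  # b = 1 - a, so the coefficients sum to 1
print(sp.simplify(A*z - v))          # Matrix([[0], [0]]) for every value of a
```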
In general, in a vector space over a field $F$, the number of solutions $\vec{x}$ to the system $$ A \vec{x} = \vec{w} $$ is either $0$ or $|\ker A|$. (Because if $\vec{x}_0$ satisfies the equation, then the set of solutions is exactly $\vec{x}_0 + \ker A$.) And $|\ker A| = |F|^k$, where $k = \dim \ker A \ge 0$.
Assuming the base field $F$ has infinite cardinality $\alpha$ and the vector space is finite-dimensional, it follows that the number of solutions is $0$, $1$, or $\alpha$ (since $|F|^0 = 1$ and $|F|^k = \alpha$ for any finite $k \ge 1$). In particular, over $\mathbb{R}$ a linear system can never have exactly two solutions.
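Over a finite field the count $|F|^k$ can be checked by brute force. A sketch over $F = \mathbb{F}_2$ with a made-up $2\times 3$ matrix; note that here exactly two solutions is possible, since $|F| = 2$, which is why the infinite-cardinality assumption matters:

```python
import itertools
import numpy as np

# Brute-force sketch over the finite field F = GF(2): count solutions of
# A x = w and elements of ker A (arithmetic mod 2; A and w are made-up).
A = np.array([[1, 1, 0],
              [0, 0, 1]])
w = np.array([1, 1])

vectors = [np.array(t) for t in itertools.product([0, 1], repeat=3)]
solutions = [x for x in vectors if np.array_equal(A @ x % 2, w)]
kernel = [x for x in vectors if not np.any(A @ x % 2)]
print(len(solutions), len(kernel))  # both 2 = |F|^k with k = dim ker A = 1
```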