Picture of Root System of $\mathfrak{sl}_{3}(\mathbb{C})$
Solution 1:
Your confusion is understandable. It is true that the roots are originally defined as elements of $\mathfrak h^*$, which is a $\mathbb C$-vector space (and two-dimensional, hence abstractly isomorphic to $\mathbb C^2$). However, note that there are only finitely many roots; and further, if you choose two linearly independent ones among them, all the other roots are actually $\mathbb Z$-linear combinations of those two; in other words, all the roots actually live in a $\mathbb Z$-lattice inside that big complex vector space. In a way, we do not need complex scalars to describe the relations between the roots, just integer coefficients. (And this is "almost true" for all root systems; at worst one has to use very simple fractions like $1/2$ or $1/3$ beyond integers.)
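Concretely for $\mathfrak{sl}_3(\mathbb C)$ (anticipating the choice of $\alpha, \beta$ made below), the whole root system is
$$\Phi \;=\; \{\pm\alpha,\ \pm\beta,\ \pm(\alpha+\beta)\} \;\subset\; \mathbb Z\alpha + \mathbb Z\beta \;\cong\; \mathbb Z^2,$$
so every root is an integer combination of $\alpha$ and $\beta$, with all coefficients in $\{-1, 0, 1\}$.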
In more intricate parts of the theory, this "root lattice" (which here, abstractly, is just $\mathbb Z^2$) and related concepts play an important role.
Now why, instead of talking about the $\mathbb Z$- or $\mathbb Q$-span of the roots, do we go "almost all the way" up to $\mathbb C$ again, but stop at putting that $\mathbb Z$-lattice into an $\mathbb R$-vector space? I think it is because this is just the most intuitive way to visualise it: we have a good feeling for the geometry of Euclidean space, and you'll notice that the next step is to look at certain scalar products, visualise reflections and rotations, etc. All of this is best visualised as happening in "lattices which sit inside a Euclidean space". Compare also the question root system of semi-simple Lie algebra and passing into euclidean space, where it was asked why we do not just look at the $\mathbb Q$-vector space spanned by the roots. (Here and here are other recent questions where I came up with the answer by imagining Euclidean space, as the idea of "hyperplanes" kind of demands.)
Added in reply to your comment: The next point is that on the root system one can define a kind-of-standard scalar product, and with this we can talk about lengths of roots and angles between them. So if we want to use our intuition for Euclidean space, we should make that scalar product match the standard Euclidean one.
In the case at hand, we can choose two roots $\alpha, \beta$ such that the full root system consists of $\alpha, \beta, \gamma := \alpha+\beta$, and their negatives. The scalar product is normalised so that $(\rho, \rho) = 2$ for all roots $\rho$, whereas $(\alpha, \beta) = -1$; from this one can compute $(\alpha, \gamma) = 1$ and the products for all other combinations of roots.
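Spelled out, that computation is just bilinearity:
$$(\alpha, \gamma) = (\alpha, \alpha+\beta) = (\alpha,\alpha) + (\alpha,\beta) = 2 + (-1) = 1, \qquad (\beta, \gamma) = (\beta,\alpha) + (\beta,\beta) = -1 + 2 = 1.$$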
So to "realise" (pun intended) those roots in the standard $\mathbb R^2$ with the standard Euclidean scalar product $(\ ,\ )_{Euclid}$, the scalar products have to match; e.g. all roots should have length $\sqrt 2$. One realisation of this root system in $\mathbb R^2$ would be $\alpha \mapsto (\sqrt2,0)$, $\beta \mapsto (-\frac12 \sqrt 2, \frac12 \sqrt 6)$, accordingly $\gamma \mapsto (\frac12 \sqrt 2, \frac12 \sqrt 6)$, etc. -- basically a standard hexagon, but stretched to radius $\sqrt 2$. If one does not care about the scaling, it's easier to map $\alpha \mapsto (1,0)$, $\beta \mapsto (-\frac12 , \frac12\sqrt 3)$, etc. Either version is what you see in your linked picture, where the length of the roots is up to your imagination. Of course you can also rotate this picture by the craziest irrational angles you can come up with, as long as the roots' positions relative to each other stay rigid (accordingly, the picture does not show a coordinate system "under" the roots).
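If you want to double-check that arithmetic, here is a small sketch in Python (using numpy; the coordinates are the scaled realisation from the previous paragraph):

```python
import numpy as np

# The realisation of the sl_3 root system in R^2 described above
alpha = np.array([np.sqrt(2), 0.0])
beta  = np.array([-np.sqrt(2) / 2, np.sqrt(6) / 2])
gamma = alpha + beta                     # = (sqrt(2)/2, sqrt(6)/2)

for name, rho in [("alpha", alpha), ("beta", beta), ("gamma", gamma)]:
    print(name, "squared length:", round(rho @ rho, 10))   # 2.0 each

print("(alpha, beta)  =", round(alpha @ beta, 10))         # -1.0
print("(alpha, gamma) =", round(alpha @ gamma, 10))        #  1.0

# Angle between alpha and beta: arccos of the normalised scalar product
cos_angle = (alpha @ beta) / (np.linalg.norm(alpha) * np.linalg.norm(beta))
print("angle(alpha, beta) =", round(np.degrees(np.arccos(cos_angle)), 10))  # 120.0
```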
Funnily enough, there is an easier realisation if, instead of using $\mathbb R^2$ itself, we embed the root system into a "skew" plane inside $\mathbb R^3$, with (the restriction of) the standard Euclidean scalar product there. Namely, send $\alpha \mapsto (1,-1,0)$, $\beta \mapsto (0, 1,-1)$, accordingly $\gamma \mapsto (1,0,-1)$, etc. Note that the scalar products match exactly, and we have nice integer coefficients! The only downside is that, technically, the $2$-dimensional vector space spanned by the roots is not $\mathbb R^2$ itself, but rather $V := \lbrace (v_1,v_2,v_3) \in \mathbb R^3 : \sum v_i = 0 \rbrace$. Still, one often finds this identification the easiest. It also generalises nicely to higher $n$.
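Again as a sketch, here is that embedding written for general $n$, so you can see how it generalises: the roots of $\mathfrak{sl}_n$ become the vectors $e_i - e_j$, $i \neq j$, in $\mathbb R^n$.

```python
import numpy as np
from itertools import permutations

def sl_n_roots(n):
    """Roots of sl_n realised as e_i - e_j (i != j) inside R^n."""
    roots = []
    for i, j in permutations(range(n), 2):
        v = np.zeros(n, dtype=int)
        v[i], v[j] = 1, -1
        roots.append(v)
    return roots

roots = sl_n_roots(3)                    # the six roots of sl_3
assert all(v.sum() == 0 for v in roots)  # all lie in the hyperplane V
assert all(v @ v == 2 for v in roots)    # all have squared length 2

alpha = np.array([1, -1, 0])
beta  = np.array([0, 1, -1])
print(alpha @ beta, alpha @ (alpha + beta))   # -1  1, as required
```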
However, mapping $\alpha$ to $(1,0)$ and $\beta$ to $(0,1)$ is not a good idea, because for this one would have to use a strange nonstandard scalar product on $\mathbb R^2$. In the root scalar product, $(\alpha, \beta) = -1$ really means that the angle between $\alpha$ and $\beta$ is $2\pi/3$, a.k.a. $120°$, and to work with that, we should identify $\alpha, \beta$ with vectors which "really" have this angle in Euclidean space.
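Indeed,
$$\cos \theta \;=\; \frac{(\alpha,\beta)}{\lVert\alpha\rVert \, \lVert\beta\rVert} \;=\; \frac{-1}{\sqrt 2 \cdot \sqrt 2} \;=\; -\frac12 \quad\Longrightarrow\quad \theta = \frac{2\pi}{3} = 120°,$$
whereas $(1,0)$ and $(0,1)$ meet at $90°$ in the standard Euclidean scalar product.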