Why is an orthogonal basis important?
If $\{v_1, v_2, v_3\}$ is a basis for $\mathbb{R}^3$, we can write any $v \in \mathbb{R}^3$ as a linear combination of $v_1, v_2,$ and $v_3$ in a unique way; that is, $v = x_1v_1 + x_2v_2 + x_3v_3$ where $x_1, x_2, x_3 \in \mathbb{R}$. While we know that $x_1, x_2, x_3$ are unique, we don't have a formula for them; finding them requires explicit calculation (e.g. solving a linear system).
If $\{w_1, w_2, w_3\}$ is an orthonormal basis for $\mathbb{R}^3$, we can write any $v \in \mathbb{R}^3$ as $$v = (v\cdot w_1)w_1 + (v\cdot w_2)w_2 + (v\cdot w_3)w_3.$$ In this case, we have an explicit formula for the unique coefficients in the linear combination.
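To make this concrete, here is a small numerical sketch of that formula (the particular orthonormal basis and the vector $v$ are just illustrative choices, and NumPy is used for the arithmetic):

```python
import numpy as np

# An orthonormal basis of R^3 (a rotation of the standard basis, chosen for illustration).
w1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
w2 = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2)
w3 = np.array([0.0, 0.0, 1.0])

v = np.array([3.0, -2.0, 5.0])

# Coefficients straight from the formula: x_i = v . w_i
coeffs = [v @ w1, v @ w2, v @ w3]
reconstruction = coeffs[0] * w1 + coeffs[1] * w2 + coeffs[2] * w3

print(np.allclose(reconstruction, v))  # True
```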
Furthermore, the above formula is very useful when dealing with projections onto subspaces.
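For instance, keeping only some of the terms in that formula gives the orthogonal projection onto the subspace spanned by those basis vectors. A minimal sketch, using an arbitrarily chosen plane in $\mathbb{R}^3$:

```python
import numpy as np

# Two orthonormal vectors spanning a plane in R^3 (an example choice, not canonical).
w1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
w2 = np.array([0.0, 0.0, 1.0])

v = np.array([3.0, -2.0, 5.0])

# Keep only the terms of the expansion that lie in the plane.
proj = (v @ w1) * w1 + (v @ w2) * w2

# The residual v - proj is orthogonal to the plane.
print(np.isclose((v - proj) @ w1, 0), np.isclose((v - proj) @ w2, 0))  # True True
```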
Added Later: Note that if you have an orthogonal basis, you can divide each vector by its length and the basis becomes orthonormal. If you have an arbitrary basis and want to turn it into an orthonormal one, you need to use the Gram-Schmidt process (which follows from the formula above).
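A minimal sketch of Gram-Schmidt along these lines, assuming the input vectors are linearly independent (the function name and sample basis are my own choices, not anything standard):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal list.

    Each vector has its components along the previously built directions
    subtracted off (the same coefficient formula as above), then is normalized.
    """
    orthonormal = []
    for v in vectors:
        u = v.astype(float)
        for w in orthonormal:
            u -= (u @ w) * w       # remove the component along w
        u /= np.linalg.norm(u)     # scale to unit length
        orthonormal.append(u)
    return orthonormal

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
w1, w2, w3 = gram_schmidt(basis)
print(np.isclose(w1 @ w2, 0), np.isclose(np.linalg.norm(w3), 1))  # True True
```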
By the way, none of this is restricted to $\mathbb{R}^3$; it works for any $\mathbb{R}^n$, where a basis consists of $n$ vectors. More generally still, it applies to any inner product space.
Short version: An orthonormal basis is one whose associated coordinate representations faithfully preserve not only the linear properties of the vectors but also their metric properties.
Long version:
A basis gives a (linear) coordinate system: if $(v_1,\dotsc,v_n)$ is a basis for $\mathbb{R}^n$ then we can write any $x\in\mathbb{R}^n$ as a linear combination $$ x = \alpha_1v_1 + \dotsb + \alpha_nv_n $$ in exactly one way. The numbers $\alpha_i$ are the coordinates of $x$ wrt the basis. Thus we associate the vector $x$ with a tuple of its coordinates: $$ x \leftrightarrow \left[\begin{matrix} \alpha_1 \\ \vdots \\ \alpha_n\end{matrix}\right] $$
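In code terms, finding the coordinates with respect to a general basis means solving a linear system (the basis and the vector below are arbitrary examples):

```python
import numpy as np

# Columns of V form a (non-orthonormal) basis of R^3.
V = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])

x = np.array([2.0, -1.0, 4.0])

# The coordinates alpha solve V @ alpha = x; for a general basis there is
# no simpler recipe than solving this system.
alpha = np.linalg.solve(V, x)
print(np.allclose(V @ alpha, x))  # True
```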
We can perform some operations on vectors by performing the same operation on their coordinate representations. For example, if we know the coordinates of $x$ as above, then the coordinates of a scalar multiple of $x$ can be computed by scaling the coordinates: $$ \lambda x \leftrightarrow \left[\begin{matrix} \lambda\alpha_1 \\ \vdots \\ \lambda\alpha_n\end{matrix}\right] $$ In other words, $$ \lambda x = (\lambda\alpha_1) v_1 + \dotsb + (\lambda\alpha_n) v_n $$

For another example, if we know the coordinates of two vectors, say $x$ as above and $$ y = \beta_1v_1 + \dotsb + \beta_nv_n $$ then the coordinates of their sum $x+y$ can be computed by adding the respective coordinates: $$ x+y \leftrightarrow \left[\begin{matrix} \alpha_1+\beta_1 \\ \vdots \\ \alpha_n+\beta_n\end{matrix}\right] $$ In other words, $$ x+y = (\alpha_1+\beta_1)v_1 + \dotsb + (\alpha_n+\beta_n)v_n $$

So, as far as the basic vector operations (scalar multiplication and vector addition) are concerned, the coordinate representations are perfectly good substitutes for the vectors themselves. We can even identify the vectors with their coordinate representations, in contexts where only these basic vector operations are relevant.
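Continuing the same example basis, one can check numerically that the coordinate map respects scaling and addition:

```python
import numpy as np

# Same non-orthonormal basis as in the sketch above.
V = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])
x = np.array([2.0, -1.0, 4.0])
y = np.array([0.0, 3.0, -2.0])

def coords(v):
    """Coordinates of v with respect to the columns of V."""
    return np.linalg.solve(V, v)

# Coordinates of a scalar multiple / a sum are the scaled / summed coordinates.
print(np.allclose(coords(5 * x), 5 * coords(x)))           # True
print(np.allclose(coords(x + y), coords(x) + coords(y)))   # True
```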
But for other operations, the coordinate representation isn't a substitute for the original vector. For example, you can't necessarily compute the norm of $x$ by computing the norm of its coordinate tuple: $$ \|x\| = \sqrt{\alpha_1^2+\dotsb+\alpha_n^2}\qquad\text{might not hold.} $$

For another example, you can't necessarily compute the dot product of $x$ and $y$ by computing the dot product of their respective coordinate tuples: $$ x\bullet y = \alpha_1\beta_1+\dotsb+\alpha_n\beta_n\qquad\text{might not hold.} $$

So in contexts where these operations are relevant, coordinate representations wrt an arbitrary basis are not perfectly good substitutes for the actual vectors.
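A quick numerical counterexample, reusing the non-orthonormal basis from the sketch above (the specific numbers are arbitrary):

```python
import numpy as np

V = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])
x = np.array([2.0, -1.0, 4.0])
y = np.array([0.0, 3.0, -2.0])

ax = np.linalg.solve(V, x)   # coordinates of x
ay = np.linalg.solve(V, y)   # coordinates of y

print(np.isclose(np.linalg.norm(x), np.linalg.norm(ax)))  # False: norms disagree
print(np.isclose(x @ y, ax @ ay))                         # False: dot products disagree
```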
The special thing about an orthonormal basis is that it makes those last two equalities hold. With an orthonormal basis, the coordinate representations have the same lengths as the original vectors, and make the same angles with each other.
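Repeating the same check with the orthonormal basis from the earlier sketch (again just an example choice), both equalities do hold:

```python
import numpy as np

# An orthonormal basis, stored as the columns of W.
w1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
w2 = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2)
w3 = np.array([0.0, 0.0, 1.0])
W = np.column_stack([w1, w2, w3])

x = np.array([2.0, -1.0, 4.0])
y = np.array([0.0, 3.0, -2.0])

# With an orthonormal basis the coordinates are just dot products: alpha_i = x . w_i
ax, ay = W.T @ x, W.T @ y

print(np.isclose(np.linalg.norm(x), np.linalg.norm(ax)))  # True: norms agree
print(np.isclose(x @ y, ax @ ay))                         # True: dot products agree
```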