The theorem deals with the following situation. We're given that $\lim_{x\to a}g(x)=b$ and $\lim_{x\to b}f(x)=L$. We want to conclude that $\lim_{x\to a}f(g(x))=L$. The intuition is that as $x\to a$, $g(x)$ is close to $b$ (by the first limit), and if $g(x)$ is close to $b$, then $f(g(x))$ is close to $L$ (by the second limit). Putting these two statements together, the desired conclusion "should" follow.
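Making this intuition precise with an $\epsilon$–$\delta$ argument shows exactly where it can break down (a sketch, using the notation above): given $\epsilon>0$, the two hypotheses provide $$\exists\,\delta_1>0:\quad 0<|y-b|<\delta_1\implies |f(y)-L|<\epsilon,$$ $$\exists\,\delta>0:\quad 0<|x-a|<\delta\implies |g(x)-b|<\delta_1.$$ Chaining these, $0<|x-a|<\delta$ gives $|g(x)-b|<\delta_1$, but not $0<|g(x)-b|$: nothing rules out $g(x)=b$, and the first implication says nothing about $y=b$.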

However, the two versions you cited aren't saying the same thing: the first version additionally assumes that $f$ is continuous at $x=b$. The second version is actually wrong. Here's a simple counterexample: $$g(x)=0\ \text{ for all }x\in\mathbb{R},\qquad f(x)=\begin{cases}1, & x\ne 0,\\ 0, & x=0.\end{cases}$$ We have $\lim_{x\to 0}g(x)=0$ and $\lim_{x\to 0}f(x)=1$, so the second version would predict $\lim_{x\to 0}f(g(x))=1$. But that's wrong: $f(g(x))=f(0)=0$ for all $x$, so the limit is actually $0$. Meanwhile, the first version simply doesn't apply here, because $f$ is not continuous at $x=0$. In other words, an additional assumption (as in the first version) is needed to make this theorem work (more details here). In practice, though, the functions we work with are "nice" enough that we usually don't have to bother checking these things.
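To see the counterexample concretely, here is a quick numerical sketch (the names `f` and `g` mirror the definitions above):

```python
def g(x):
    return 0.0  # g is identically zero

def f(x):
    return 1.0 if x != 0 else 0.0  # f jumps at 0: limit is 1, value is 0

# Sample points approaching 0 (but never equal to 0)
xs = [10.0 ** (-k) for k in range(1, 8)]

# f alone tends to 1 near 0 ...
print(all(f(x) == 1.0 for x in xs))     # True
# ... yet the composition is identically 0, because g(x) = 0 lands
# exactly on the point where f's value disagrees with its limit.
print(all(f(g(x)) == 0.0 for x in xs))  # True
```

The key point the code makes visible: the composition never "probes" $f$ near $0$ without hitting $0$ itself, which is exactly the case the limit $\lim_{x\to 0}f(x)=1$ excludes.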