Difference between Minimax theorem and Nash Equilibrium existence
Solution 1:
Wikipedia agrees with you, saying "In zero-sum games, the minimax solution is the same as the Nash equilibrium" (stated early in the article on Minimax). So the existence of a Nash equilibrium is a generalization of the Minimax theorem.
Presumably, the proof of the minimax theorem is much simpler than the proof of the general theorem. Another crucial difference is that the proof of the minimax theorem is constructive (it amounts to linear programming), whereas finding a Nash equilibrium is PPAD-complete, even for two-player games. It is even hard to find an approximate Nash equilibrium.
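To illustrate the constructive side, here is a minimal sketch of computing the value and an optimal mixed strategy of a zero-sum game via linear programming. It uses `scipy.optimize.linprog` as the LP solver (an assumption; any LP solver works), maximizing the guaranteed payoff `v` of the row player subject to the constraint that the strategy does at least `v` against every pure column:

```python
import numpy as np
from scipy.optimize import linprog

def minimax_value(A):
    """Value and optimal mixed strategy for the row player of a
    zero-sum game with payoff matrix A (row player maximizes)."""
    m, n = A.shape
    # Variables: x_1..x_m (row strategy) and v (game value).
    # Maximize v  <=>  minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every column j:  v - sum_i A[i, j] * x_i <= 0,
    # i.e. x guarantees at least v against each pure column strategy.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The x_i are probabilities: they sum to 1 and are nonnegative.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[-1], res.x[:m]

# Matching pennies: value 0, optimal strategy (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
v, x = minimax_value(A)
```

Solving the same LP from the column player's perspective yields the same value, which is one way to state the minimax theorem; by contrast, no comparable polynomial-time formulation is known for general-sum Nash equilibria.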
Solution 2:
If a game has a value, it need not have a mixed strategy Nash equilibrium. In von Neumann's Minimax theorem stated above, it is assumed that, for each player, the set of mixed strategy best responses to the other player's mixed strategy is nonempty. However, this assumption is not needed for a game to have a value.