Why is the entropy of a posterior Gaussian distribution higher than its prior?

Your example is not very clear, so I'm not sure whether this is what's happening in your case, but:

In general (and, to keep things simple, for discrete variables and true Shannon entropies [*]), it is true that "conditioning reduces entropy"... but only on average.

That is, $H(X | Y) \le H(X)$ holds. But this does not imply $H(X | Y = y) \le H(X)$ for every $y$: the entropy conditioned on a particular value can increase.

For example, let $X, Y$ have the following joint distribution:

$$ \begin{array}{c|cc} X \backslash Y & 0 & 1 \\ \hline 0 & \frac{1}{3} & 0 \\ 1 & \frac{1}{3} & \frac{1}{3} \\ \end{array} $$

Then $H(Y | X=0) = 0$ but $H(Y | X=1) = 1$ bit, while $H(Y) = H\!\left(\tfrac{2}{3}, \tfrac{1}{3}\right) \approx 0.918$ bits; hence $H(Y | X=1) > H(Y)$.
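
If it helps, here is a minimal Python sketch (not part of the original answer) that recomputes these quantities from the table and also checks the average inequality $H(Y|X) \le H(Y)$:

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a probability vector (treating 0*log 0 as 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Joint distribution from the table above: rows index X, columns index Y.
joint = np.array([[1/3, 0.0],
                  [1/3, 1/3]])

p_y = joint.sum(axis=0)                      # marginal of Y: (2/3, 1/3)
print("H(Y)      =", entropy_bits(p_y))      # ~0.918 bits

for x in (0, 1):
    p_y_given_x = joint[x] / joint[x].sum()  # conditional P(Y | X=x)
    print(f"H(Y|X={x}) =", entropy_bits(p_y_given_x))  # 0.0 and 1.0 bits

# The *average* conditional entropy is still no larger than H(Y):
p_x = joint.sum(axis=1)                      # marginal of X: (1/3, 2/3)
h_y_given_x = sum(p_x[x] * entropy_bits(joint[x] / joint[x].sum()) for x in (0, 1))
print("H(Y|X)    =", h_y_given_x)            # ~0.667 bits <= H(Y)
```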


[*] Remember that differential entropy is not really a Shannon entropy, and not all properties of the latter carry over to the former. But this does not seem to be the source of your problem.
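
For instance (a standard illustration of that difference, added here for concreteness): the differential entropy of a Gaussian with variance $\sigma^2$ is

$$ h(X) = \frac{1}{2}\log\!\left(2\pi e \sigma^2\right), $$

which is negative whenever $\sigma^2 < \frac{1}{2\pi e}$, something that can never happen for the Shannon entropy of a discrete variable.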