Expectation of the maximum of Gaussian random variables
How precise an answer are you looking for? Giving (upper) bounds on the maximum of i.i.d. Gaussians is easier than precisely characterizing its moments. Here is one way to go about this (another would be to combine a tail bound on Gaussian RVs with a union bound).
Let $X_i$ for $i = 1,\ldots,n$ be i.i.d. $\mathcal{N}(0,\sigma^2)$, and define $$ Z = \max_{i} X_i. $$
For any $t > 0$, by Jensen's inequality and the monotonicity of $\exp$,
$$\exp \{t\mathbb{E}[ Z] \} \leq \mathbb{E} \exp \{tZ\} = \mathbb{E} \max_i \exp \{tX_i\} \leq \sum_{i = 1}^n \mathbb{E} [\exp \{tX_i\}] = n \exp \{t^2 \sigma^2/2 \}$$
where the last equality follows from the definition of the Gaussian moment generating function (a bound for sub-Gaussian random variables also follows by this same argument).
Taking logarithms and dividing by $t > 0$,
$$\mathbb{E}[Z] \leq \frac{\log n}{t} + \frac{t \sigma^2}{2} $$
Now, set $t = \frac{\sqrt{2 \log n}}{\sigma}$ to get
$$\mathbb{E}[Z] \leq \sigma \sqrt{ 2 \log n} $$
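A quick numerical sanity check of the bound (a sketch; the sample size, $\sigma$, and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, trials = 1000, 2.0, 2000

# Monte Carlo estimate of E[max_i X_i] over `trials` independent samples
samples = rng.normal(0.0, sigma, size=(trials, n))
empirical = samples.max(axis=1).mean()

# Jensen/MGF bound derived above
bound = sigma * np.sqrt(2 * np.log(n))
print(empirical, bound)  # the empirical mean should sit below the bound
```

The bound is not tight (the empirical mean is typically around $10$–$15\%$ below $\sigma\sqrt{2\log n}$ for moderate $n$), but it captures the correct $\sqrt{\log n}$ growth rate.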
The $\max$-central limit theorem (Fisher–Tippett–Gnedenko theorem) can be used to provide a decent approximation when $n$ is large. See the example on the Mathematica reference page for the extreme value distribution.
The $\max$-central limit theorem states that $F_\max(x) = \left(\Phi(x)\right)^n \approx F_{\text{EV}}\left(\frac{x-\mu_n}{\sigma_n}\right)$, where $F_{\text{EV}}(x) = \exp(-\exp(-x))$ is the cumulative distribution function for the extreme value distribution, and $$ \mu_n = \Phi^{-1}\left(1-\frac{1}{n} \right) \qquad \qquad \sigma_n = \Phi^{-1}\left(1-\frac{1}{n} \cdot \mathrm{e}^{-1}\right)- \Phi^{-1}\left(1-\frac{1}{n} \right) $$ Here $\Phi^{-1}(q)$ denotes the inverse cdf of the standard normal distribution.
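A minimal sketch of how good this approximation is, using the standard library's `statistics.NormalDist` for $\Phi$ and $\Phi^{-1}$ (the choice of $n$ and of the evaluation point are illustrative):

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal
n = 10_000
mu_n = N.inv_cdf(1 - 1 / n)
sigma_n = N.inv_cdf(1 - 1 / (math.e * n)) - mu_n

# compare the exact cdf of the max with the extreme-value approximation
x = mu_n + 0.5 * sigma_n
exact = N.cdf(x) ** n
approx = math.exp(-math.exp(-(x - mu_n) / sigma_n))
print(exact, approx)
```

For $n$ in the thousands the two values typically agree to two or three decimal places near the bulk of the distribution of the maximum.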
The mean of the maximum of a standard normal sample of size $n$ is, for large $n$, well approximated by $$ \begin{eqnarray} m_n &=& \mu_n + \gamma \, \sigma_n \;=\; (1-\gamma)\, \Phi^{-1}\left(1-\frac{1}{n}\right) + \gamma \, \Phi^{-1}\left(1-\frac{1}{e n}\right) \\ &=& \sqrt{\log \left(\frac{n^2}{2 \pi \log \left(\frac{n^2}{2\pi} \right)}\right)} \cdot \left(1 + \frac{\gamma}{\log n} + o \left(\frac{1}{\log n} \right) \right) \end{eqnarray}$$ where $\gamma$ is the Euler–Mascheroni constant; the first expression is just the mean $\mu + \gamma\sigma$ of the approximating extreme value distribution with the $\mu_n$, $\sigma_n$ above.
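The closed-form approximation $m_n = \mu_n + \gamma\,\sigma_n$ can be checked against a Monte Carlo estimate (a sketch; the sample size and trial count are arbitrary):

```python
import math
import numpy as np
from statistics import NormalDist

N = NormalDist()
gamma = 0.5772156649015329  # Euler–Mascheroni constant

# extreme-value approximation to E[max] of n standard normals
n = 2_000
m_n = (1 - gamma) * N.inv_cdf(1 - 1 / n) + gamma * N.inv_cdf(1 - 1 / (math.e * n))

# Monte Carlo estimate for comparison
rng = np.random.default_rng(1)
mc = rng.standard_normal((3_000, n)).max(axis=1).mean()
print(m_n, mc)
```

Already at $n$ in the low thousands the approximation is accurate to a few hundredths.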