How entropy scales with sample size
Solution 1:
Use the normalized entropy:
$$H_n(p) = -\sum_i \frac{p_i \log_b p_i}{\log_b n}.$$
For the uniform vector $p_i = \frac{1}{n}\ \ \forall \ \ i = 1,\dots,n$ with $n>1$, the Shannon entropy is maximized, so normalizing by $\log_b n$ gives $H_n(p) \in [0, 1]$. Note that this normalization is simply a change of base: one may drop the normalization term and instead set $b = n$.
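As a quick illustration, here is a minimal Python sketch of this formula (the helper name `normalized_entropy` and the use of NumPy are my own choices, not part of the answer):

```python
import numpy as np

def normalized_entropy(p):
    """Shannon entropy of p divided by log(n), so the result lies in [0, 1]."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                      # ensure p is a probability vector
    nz = p[p > 0]                        # convention: 0 * log(0) = 0
    return -np.sum(nz * np.log(nz)) / np.log(len(p))

# The uniform distribution attains the maximum value 1 for any n > 1,
# regardless of n, which is exactly the point of the normalization.
print(normalized_entropy(np.ones(4)))          # -> 1.0
print(normalized_entropy(np.ones(100)))        # -> 1.0
print(normalized_entropy([0.9, 0.05, 0.05]))   # strictly less than 1
```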
Solution 2:
A partial answer for further reference:
In short, use the integral formulation of the entropy and treat the discrete distribution as a sampling of a continuous one.
Thus, create a continuous distribution $p(x)$ whose integral is approximated by the Riemann sum of the $p_i$'s: $$\int_0^1 p(x)dx \sim \sum_i p_i\cdot \frac{1}{N} = 1$$ This means that the $p_i$'s must first be normalized so that $\sum_i p_i = N$.
After normalization, we calculate the entropy: $$H=-\int_0^1 p(x)\log\left(p(x)\right)dx \sim -\sum_i p_i \log(p_i)\cdot \frac{1}{N}$$
As $N\to \infty$ this gives an entropy that depends only on the shape of the distribution and not on $N$. For small $N$, the discrepancy will depend on how well the Riemann sum approximates the integral for the given $N$.
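A small Python sketch of this recipe, under the assumption that the $p_i$ come from evaluating a density on a grid of $N$ points over $[0,1]$ (the example density $p(x) = 2x$ and the function name `riemann_entropy` are illustrative choices, not from the answer):

```python
import numpy as np

def riemann_entropy(p):
    """Entropy of a discrete vector treated as a Riemann sum of a density on [0, 1]."""
    p = np.asarray(p, dtype=float)
    N = len(p)
    p = p * N / p.sum()                  # rescale so that sum_i p_i = N, i.e. sum_i p_i * (1/N) = 1
    nz = p[p > 0]                        # guard against log(0)
    return -np.sum(nz * np.log(nz)) / N  # -sum_i p_i log(p_i) * (1/N)

# Example: discretize the density p(x) = 2x on [0, 1] with N points.
# The result stabilizes as N grows, approaching the differential entropy
# -\int_0^1 2x log(2x) dx = 1/2 - log(2), roughly -0.193.
for N in (10, 100, 1000, 10000):
    x = (np.arange(N) + 0.5) / N         # midpoints of N equal bins
    print(N, riemann_entropy(2 * x))
```

The printed values change only slightly with $N$, in contrast to the raw discrete entropy $-\sum_i p_i \log p_i$ (with $\sum_i p_i = 1$), which grows roughly like $\log N$ for a fixed distribution shape.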