Why isn't the gamma function defined so that $\Gamma(n) = n! $?

Solution 1:

I find it more illuminating to see what that extra $t^{-1}$ does to the integral. As a generalization of the factorial, $\Gamma$ is inherently multiplicative*. On the other hand, integration, which is essentially a sum, is inherently additive. Thinking about it this way, it seems a bit odd that an integral could give an appropriate generalization. However, there is a simple function that takes the positive reals under multiplication to the reals under addition: the logarithm. Writing gamma as $$ \Gamma(s) = \int_0^{\infty} t^s e^{-t} \, \frac{dt}{t} $$ we see that the natural log arises naturally in this context (what's the first thing that comes to mind when you see $\frac{dt}{t}$?), and there are no pesky $s-1$'s left.
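To make the logarithm explicit, substitute $t = e^u$, so that $u = \log t$ and $\frac{dt}{t} = du$: $$ \Gamma(s) = \int_{-\infty}^{\infty} e^{su} e^{-e^u} \, du. $$ The multiplicative half-line $(0,\infty)$ has become the additive real line.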

I haven't done much with integration theory (so I'm not sure my terminology is correct), but I believe this intuitive argument can be made rigorous by viewing the integral formula for $\Gamma(s)$ as an integral over the multiplicative group of positive reals (i.e. the interval $(0,+\infty)$ under multiplication) with respect to the multiplicative Haar measure $\frac{dt}{t}$.
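Here's a minimal numerical sketch of that invariance in Python (the integrand and the scale factors are arbitrary choices for illustration): the measure $\frac{dt}{t}$ is unchanged by rescaling $t \mapsto ct$, which is precisely the defining property of the multiplicative Haar measure.

```python
import numpy as np
from scipy.integrate import quad

# Integrand of Gamma(5/2) with respect to dt/t (an arbitrary example).
f = lambda t: t**2.5 * np.exp(-t)

# Haar invariance of dt/t: integrating f(c*t) dt/t over (0, inf) gives
# the same value as integrating f(t) dt/t, for any scale factor c > 0.
base, _ = quad(lambda t: f(t) / t, 0, np.inf)
for c in (0.5, 2.0, 10.0):
    scaled, _ = quad(lambda t: f(c * t) / t, 0, np.inf)
    print(c, scaled, base)  # scaled matches base up to quadrature error
```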

*Note that by "multiplicative" I just mean that it's realized as a product, not that it's a multiplicative arithmetic function or anything like that.

Solution 2:

$$ \Gamma(\alpha) = \int_0^\infty x^{\alpha-1} e^{-x}\,dx. $$ Why $\alpha-1$ instead of $\alpha$? Here's one answer; there are probably others. Consider the probability density function $$ f_\alpha(x)=\begin{cases} \dfrac{x^{\alpha-1} e^{-x}}{\Gamma(\alpha)} & \text{for }x>0 \\[12pt] 0 & \text{for }x<0 \end{cases} $$ The use of $\alpha-1$ instead of $\alpha$ makes the family $\{f_\alpha : \alpha > 0\}$ a "convolution semigroup": $$ f_\alpha * f_\beta = f_{\alpha+\beta} $$ where the asterisk represents convolution. This works because the exponents add: for $x>0$, $$ (f_\alpha * f_\beta)(x) = \frac{e^{-x}}{\Gamma(\alpha)\Gamma(\beta)} \int_0^x t^{\alpha-1} (x-t)^{\beta-1}\,dt = \frac{x^{\alpha+\beta-1} e^{-x}}{\Gamma(\alpha)\Gamma(\beta)}\, B(\alpha,\beta) = f_{\alpha+\beta}(x), $$ since $B(\alpha,\beta) = \Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)$. With exponent $\alpha$ in place of $\alpha-1$, the exponents would pick up an extra $+1$ under convolution and the semigroup property would fail.
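A quick numerical sanity check of the semigroup property, as a sketch in Python (the shape parameters and the grid are arbitrary choices):

```python
import numpy as np
from scipy.stats import gamma

alpha, beta = 2.3, 1.7            # arbitrary positive shape parameters
x = np.linspace(0.0, 30.0, 3001)  # grid covering the bulk of the mass
dx = x[1] - x[0]

f_a = gamma.pdf(x, alpha)         # f_alpha sampled on the grid
f_b = gamma.pdf(x, beta)          # f_beta sampled on the grid

# Riemann-sum approximation of the convolution (f_alpha * f_beta)(x_k).
conv = np.convolve(f_a, f_b)[: x.size] * dx

# Compare against f_{alpha+beta}; the residual is just discretization error.
print(np.max(np.abs(conv - gamma.pdf(x, alpha + beta))))
```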