Motivation for definition of Mobius function

Why is the Mobius function defined the way it is? \begin{align*} \mu(n) = \begin{cases} (-1)^r & \text{ if $n$ is square-free and is of the form }n=p_1p_2\ldots p_r\\ 0 & \text{ if $n$ is not square-free} \end{cases} \end{align*}

I can see that the function takes the value $-1$ at every prime. But why is it extended in a way that makes it only multiplicative rather than completely multiplicative?

Also, why is this particular function interesting to study? I can understand studying other arithmetic functions like the divisor function, the totient function, etc., but this definition seems to be pulled out of thin air.


Thanks


Let $\mathbb Z^+$ be the set of all positive integers and let $\mathbb C$ be the set of all complex numbers. A function $f:\mathbb Z^+ \to \mathbb C$ is called multiplicative if $f(1) = 1$ and $f(ab) = f(a)f(b)$ for all relatively prime $a$ and $b$.

Let $p_i$ represent the $i^\text{th}$ prime number. Then every positive integer $x$ can be written uniquely as an infinite product $\displaystyle x = \prod_{i=1}^\infty p_i^{\alpha_i}$ where we require that all but a finite number of the $\alpha_i$ be equal to $0$. It follows that, if $f$ is a multiplicative function, then $\displaystyle f(x) = \prod_{i=1}^\infty f(p_i^{\alpha_i})$.
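
For example, the divisor-counting function $d(n)$ (the number of positive divisors of $n$), which the question mentions, is multiplicative with $d(p^\alpha) = \alpha + 1$, so its values are determined by the prime factorization: $$d(12) = d(2^2)\,d(3) = 3\cdot 2 = 6.$$ Note that $d$ is not completely multiplicative, since $d(4) = 3 \ne d(2)d(2) = 4$; the product rule is only required across relatively prime factors.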

The (Dirichlet) convolution of two multiplicative functions $f$ and $g$ is the function $f*g$ defined by $$(f*g)(n) = \sum_{ab=n} f(a)g(b)$$ where the sum runs over all ordered pairs of positive integers $a, b$ with $ab = n$.
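
For instance, with $n = 12$ the sum runs over the factorizations $12 = 1\cdot 12 = 2\cdot 6 = 3\cdot 4 = 4\cdot 3 = 6\cdot 2 = 12\cdot 1$: $$(f*g)(12) = f(1)g(12) + f(2)g(6) + f(3)g(4) + f(4)g(3) + f(6)g(2) + f(12)g(1).$$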

The set $\mathcal F$ of all multiplicative functions is an abelian group with respect to the convolution operator.

Two important multiplicative functions are $\epsilon$ and $\mathbf 1$ defined as $$\epsilon(n) = \left\{ \begin{array}{ll} 1 & \text{If}\; n = 1\\ 0 & \text{If}\; n \ne 1 \end{array} \right.$$

and

$$ \mathbf 1(n) = 1$$

It is easy to prove that $\epsilon$ is the identity element of the group $[\mathcal F, *]$: indeed $(f*\epsilon)(n) = f(n)\,\epsilon(1) = f(n)$ for every $f$ and every $n$.
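
Inverses in $[\mathcal F, *]$ can be built recursively, and it is worth seeing how once. If $f(1) = 1$, define $g(1) = 1$ and, for $n > 1$, $$g(n) = -\sum_{\substack{ab = n\\ a > 1}} f(a)\,g(b),$$ which is exactly what the requirement $(f*g)(n) = \epsilon(n) = 0$ forces; one can then check that the $g$ produced this way is again multiplicative.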

For any multiplicative function $f$, note that $$\displaystyle (f*\mathbf 1)(n) = \sum_{a|n} f(a)$$
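
Two familiar instances, writing $d(n)$ for the number of divisors of $n$ and $\varphi$ for Euler's totient function: $$(\mathbf 1 * \mathbf 1)(n) = \sum_{a|n} 1 = d(n), \qquad (\varphi * \mathbf 1)(n) = \sum_{a|n}\varphi(a) = n.$$ So the divisor sums of very simple functions already produce the arithmetic functions mentioned in the question.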

$\mathbf{Theorem. }$ Let $f$ and $g$ be multiplicative functions and let $h = f*g$. Then the associated Dirichlet series multiply: wherever all three series converge absolutely, $$\sum_{n=1}^{\infty}\frac{h(n)}{n^{s}} = \left(\sum_{n=1}^{\infty}\frac{f(n)}{n^{s}}\right)\left(\sum_{n=1}^{\infty}\frac{g(n)}{n^{s}}\right).$$

In this way, passing to the Dirichlet series behaves very much like taking a Fourier transform of $\; f$: it turns convolution into ordinary multiplication.
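
For example, taking $f = g = \mathbf 1$ gives $h = \mathbf 1 * \mathbf 1 = d$, the divisor-counting function, and the theorem reads $$\left(\sum_{n=1}^{\infty}\frac{1}{n^{s}}\right)^{2} = \sum_{n=1}^{\infty}\frac{d(n)}{n^{s}}.$$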

There are times when we know what $F = f*\mathbf 1$ is and we need to recover $f$ from it. This is where $\mu$, the Mobius function, comes to the rescue: $\mu$ is defined as the inverse of $\mathbf 1$ under convolution. That is

$$\mathbf 1 * \mu = \epsilon$$
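
The point of this definition is that it lets us undo the transform $f \mapsto f*\mathbf 1$. If $F = f*\mathbf 1$, then $$F * \mu = (f * \mathbf 1) * \mu = f * (\mathbf 1 * \mu) = f * \epsilon = f,$$ that is, $f(n) = \sum_{ab = n} F(a)\,\mu(b)$. This is the Mobius inversion formula.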

We make a few computations. Let $p$ be a prime number and let $\alpha$ be a non-negative integer.

\begin{align} (\mathbf 1*\mu)(1) &= \epsilon(1)\\ \mu(1) &= 1 \end{align}

\begin{align} (\mathbf 1*\mu)(p) &= \epsilon(p)\\ 1 + \mu(p) &= 0\\ \mu(p) &= -1 \end{align}

\begin{align} (\mathbf 1*\mu)(p^2) &= \epsilon(p^2)\\ 1 + \mu(p)+\mu(p^2) &= 0\\ \mu(p^2) &= 0\end{align}

\begin{align} (\mathbf 1*\mu)(p^3) &= \epsilon(p^3)\\ 1 + \mu(p)+\mu(p^2)+\mu(p^3) &= 0\\ \mu(p^3) &= 0\end{align}

We see that

$$\mu(p^\alpha) = \left\{ \begin{array}{rl} 1 & \text{If}\; \alpha = 0\\ -1 & \text{If}\; \alpha = 1\\ 0 & \text{If}\; \alpha \ge 2 \end{array} \right.$$

You can infer the usual definition of $\mu$ from this and the fact that $\mu$ is a multiplicative function (it is the inverse of the multiplicative function $\mathbf 1$ in $[\mathcal F, *]$). For example, $\mu(12) = \mu(2^2)\mu(3) = 0$ and $\mu(30) = \mu(2)\mu(3)\mu(5) = -1$.
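
If it helps to see this computed, here is a minimal Python sketch (my own illustration, not part of the definition) that builds $\mu$ directly from the relation $\mathbf 1 * \mu = \epsilon$ and checks it against the square-free formula quoted in the question. The function names are just labels chosen for this sketch.

```python
def mobius_from_definition(N):
    """mu(1..N) computed from the defining relation (1 * mu)(n) = epsilon(n):
    the sum of mu(d) over all divisors d of n is 1 for n = 1 and 0 for n > 1."""
    mu = [0] * (N + 1)
    mu[1] = 1
    for n in range(2, N + 1):
        # (1 * mu)(n) = 0 forces mu(n) = -sum of mu(d) over proper divisors d of n
        mu[n] = -sum(mu[d] for d in range(1, n) if n % d == 0)
    return mu


def mobius_closed_form(n):
    """The usual definition: (-1)^r if n is a product of r distinct primes, 0 otherwise."""
    r = 0
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:  # p^2 divided the original n, so it is not square-free
                return 0
            r += 1
        p += 1
    if n > 1:  # a single prime factor is left over
        r += 1
    return (-1) ** r


N = 100
mu = mobius_from_definition(N)
assert all(mu[n] == mobius_closed_form(n) for n in range(1, N + 1))
print(mu[1:11])  # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```

The recursion in `mobius_from_definition` is nothing more than the computation carried out above at prime powers, applied to every $n$.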


If you know about the Riemann zeta function, $\zeta(s)$, then $${1\over\zeta(s)}=\sum_{n=1}^{\infty}{\mu(n)\over n^s}$$ for all complex $s$ with real part exceeding 1.
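
One way to see this, starting from $\mathbf 1 * \mu = \epsilon$ and the fact that Dirichlet series multiply according to convolution (valid here because both series converge absolutely for real part exceeding 1): $$\zeta(s)\sum_{n=1}^{\infty}\frac{\mu(n)}{n^{s}} = \sum_{n=1}^{\infty}\frac{(\mathbf 1 * \mu)(n)}{n^{s}} = \sum_{n=1}^{\infty}\frac{\epsilon(n)}{n^{s}} = 1.$$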

If you don't know about the Riemann zeta function, look it up --- it's the most important function in analytic number theory.