The logic behind partial fraction decomposition

The general case of any function would be interesting, but my question concerns the case of polynomials with integer powers. I can use the method of partial fractions in the simple cases required for an introductory course on integration, but I'm not sure I really understand it or could describe WHY it works.

For an arbitrary example $\frac{P(x)}{Q(x)}$, where $P$ and $Q$ are polynomials, I know to check that this is indeed a proper fraction before continuing. Next, I know that the number of constants I need to find is determined by the degree of $Q$. But why is this? I don't really get how this is known in advance.

Further, in the case of repeated roots, what is the mechanism at play behind how a fraction like this is expanded? [i.e. $(x+1)^3$ in the denominator would be expanded with $\frac{A}{x+1} + \frac{B}{(x+1)^2} + \frac{C}{(x+1)^3}$]. I'm not really that advanced in mathematics, but I know that in the case of a repeated root it is necessary to include all the powers of the root to "account for" all the possible cases of what you may have started with. From the theoretical aspect, though, why does this work? Why do we know that the degree of $Q$ determines the number of constants, and that we can find all the constants by including every power of the root in the case of a repeated root? I want to lift the rug up and see what mechanisms are at play in order to understand this better.


Solution 1:

Great question! First, let's tackle just the case where the roots of $Q(x)$ are all distinct. One way to conceptualize what's going on is the following: if $r$ is a root of $Q(x)$, then as $x \to r$, the function $f(x) = \frac{P(x)}{Q(x)}$ (we always assume $P, Q$ have no common roots) goes to infinity. How quickly does it go to infinity? Well, write $Q(x) = (x - r) R(x)$. Then

$$\frac{P(x)}{Q(x)} = \frac{1}{x - r} \left( \frac{P(x)}{R(x)} \right)$$

and $R(r) \neq 0$. So we see that as $x \to r$, this expression goes to infinity like $\frac{1}{x - r}$; more precisely, it goes to infinity like $\frac{c_r}{x - r}$ where $c_r = \frac{P(r)}{R(r)}$. This number is referred to in complex analysis as the residue of the pole at $x = r$. So the upshot of all of this is that we can subtract this pole away and write
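As a quick concrete illustration (a made-up example, not from the question): take $P(x) = x + 3$ and $Q(x) = (x-1)(x-2)$. At the root $r = 1$ we have $R(x) = x - 2$, so the residue there is

$$c_1 = \frac{P(1)}{R(1)} = \frac{4}{-1} = -4,$$

meaning that near $x = 1$ the function blows up like $\frac{-4}{x - 1}$.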

$$f(x) - \frac{c_r}{x - r} = \frac{1}{x - r} \left( \frac{P(x)}{R(x)} - \frac{P(r)}{R(r)} \right).$$

The expression in parentheses approaches $0$ as $x \to r$, and in fact it is a rational function whose numerator is divisible by $x - r$, so we can actually divide by $x - r$. The result is a new rational function which no longer has a pole at $r$.
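Continuing the small example from above, subtracting the pole at $x = 1$ gives

$$\frac{x+3}{(x-1)(x-2)} - \frac{-4}{x-1} = \frac{(x+3) + 4(x-2)}{(x-1)(x-2)} = \frac{5(x-1)}{(x-1)(x-2)} = \frac{5}{x-2},$$

and indeed the numerator picked up a factor of $x - 1$, which cancels, leaving a rational function with no pole at $1$.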

We can repeat this process for every root of $Q$ until we get a rational function with no poles whatsoever. But this must be a polynomial. So now we've written $f$ as a sum of fractions of the form $\frac{c_r}{x - r}$ plus a polynomial. (Note that in general we need to consider the complex roots of $Q$.)
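In the running example, the leftover piece $\frac{5}{x-2}$ is already a single pole: its residue agrees with computing $c_2 = \frac{P(2)}{R(2)} = \frac{5}{1} = 5$ directly from the original fraction (here $R(x) = x - 1$). Subtracting it off leaves nothing; the polynomial part is $0$ because the original fraction was proper. Altogether,

$$\frac{x+3}{(x-1)(x-2)} = \frac{-4}{x-1} + \frac{5}{x-2},$$

with one constant for each of the two roots of $Q$, which is exactly the count of unknowns the method asks you to set up.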


Okay, so what if $Q$ has repeated roots? Then $f$ might go to infinity more quickly as $x \to r$. If $r$ is a root with multiplicity $n$, then writing $Q(x) = (x - r)^n R(x)$ we now have

$$\frac{P(x)}{Q(x)} = \frac{1}{(x - r)^n} \left( \frac{P(x)}{R(x)} \right)$$

where $R(r) \neq 0$. So we see that as $x \to r$, this expression goes to infinity like $\frac{1}{(x - r)^n}$; more precisely, it goes to infinity like $\frac{c_{r,n}}{(x - r)^n}$ where $c_{r, n} = \frac{P(r)}{R(r)}$. So we can do the same thing as before and just subtract this off, getting
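Concretely (again a made-up example): take $P(x) = x+3$ and $Q(x) = (x-1)^2(x-2)$, so $r = 1$ is a root of multiplicity $n = 2$ and $R(x) = x - 2$. The leading coefficient is

$$c_{1,2} = \frac{P(1)}{R(1)} = \frac{4}{-1} = -4,$$

so near $x = 1$ the function blows up like $\frac{-4}{(x-1)^2}$.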

$$f(x) - \frac{c_{r,n}}{(x - r)^n} = \frac{1}{(x - r)^n} \left( \frac{P(x)}{R(x)} - \frac{P(r)}{R(r)} \right).$$

The expression in parentheses approaches $0$ as $x \to r$, so again it is divisible by $x - r$, but this time we're not done! We still have to subtract off terms of the form $\frac{c_{r,k}}{(x - r)^k}$ with $k < n$ until the resulting rational function no longer goes to infinity as $x \to r$.
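To see this play out in the repeated-root example above: after subtracting $\frac{-4}{(x-1)^2}$ we get

$$\frac{x+3}{(x-1)^2(x-2)} + \frac{4}{(x-1)^2} = \frac{(x+3) + 4(x-2)}{(x-1)^2(x-2)} = \frac{5(x-1)}{(x-1)^2(x-2)} = \frac{5}{(x-1)(x-2)},$$

which still has a pole at $x = 1$, but now only a simple one, with residue $\frac{5}{1-2} = -5$. Subtracting $\frac{-5}{x-1}$ as in the distinct-root case leaves $\frac{5}{x-2}$, and altogether

$$\frac{x+3}{(x-1)^2(x-2)} = \frac{-4}{(x-1)^2} + \frac{-5}{x-1} + \frac{5}{x-2}.$$

This is exactly why a factor like $(x-1)^2$ (or $(x+1)^3$ in the question) forces you to allow a term for every power up to the multiplicity: the lower powers are what remains after the worst part of the pole has been stripped away.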


The above is nice as far as it goes, but let me mention that, algebraically, partial fraction decomposition applies more generally than to rational functions over $\mathbb{C}$. It also generalizes to, for example, rational numbers! Like rational functions, rational numbers also have partial fraction decompositions, like

$$\frac{5}{12} = \frac{2}{3} - \frac{1}{4}.$$

Explaining all of this in a unified framework requires the language of abstract algebra, in particular the notion of a group, of a field, and of a principal ideal domain. Partial fraction decomposition in this setting describes the additive group of the field of fractions of a principal ideal domain using essentially the Chinese remainder theorem, but that's a story for another day...
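If you want a small taste of the mechanism without the abstract machinery: since $3$ and $4$ are coprime, there are integers $a, b$ with $4a + 3b = 5$ (for instance $a = 2$, $b = -1$), and then

$$\frac{5}{12} = \frac{4a + 3b}{12} = \frac{a}{3} + \frac{b}{4} = \frac{2}{3} - \frac{1}{4},$$

which is precisely the decomposition above. The same coprimality argument, applied to the powers of distinct irreducible factors of $Q(x)$, is essentially what guarantees the polynomial decomposition exists.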

(It might not seem like uniqueness of partial fraction decomposition holds for rational numbers because we can also write $\frac{5}{12} = \frac{3}{4} - \frac{1}{3}$, but the correct notion of uniqueness here is subtle; it is uniqueness "mod $1$.")
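(To spell out that parenthetical with the numbers above: $\frac{3}{4} - \frac{1}{3}$ corresponds to the choice $a = -1$, $b = 3$ in the relation $4a + 3b = 5$, instead of $a = 2$, $b = -1$. Each individual term changes by exactly an integer, $\frac{2}{3}$ vs. $-\frac{1}{3}$ and $-\frac{1}{4}$ vs. $\frac{3}{4}$, which is what uniqueness "mod $1$" means here.)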