I really am taking my own classes, and thinking about my own mathematics. But so far my classes have discussed supermathematics, which is cool but to which I have nothing so far to add, and classical (Lagrangian and Hamiltonian) mechanics, which I had intended to blog about last year. Perhaps I will some day write about such stuff; for now, I'd like to tell you about another topic we've been discussing in my calculus class.
Let's say I have a (proper) fraction m/n (in lowest terms). It's not a very simple expression: n most likely has lots of factors, and it would be nice to understand how much each factor contributes to the whole. For instance:
7/15 = 2/3 - 1/5
Can we always split up a fraction like this? At best, we could hope to write any proper fraction as a sum of (proper) fractions with very small denominators.
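As a quick sanity check, Python's fractions module confirms the example above:

```python
from fractions import Fraction

# 7/15 splits into pieces with denominators 3 and 5:
assert Fraction(2, 3) - Fraction(1, 5) == Fraction(7, 15)
```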
Let's start by considering the case when n has two relatively prime factors: n = rs. We want to write
m/n = A/r + B/s.
Multiply both sides by n; we see that this is equivalent to solving the following (easy) Diophantine equation:
m = As + Br
This certainly has a solution. For instance, we can use Euclid's algorithm to write
1 = Xs + Yr
and then use A = mX and B = mY. Of course, this choice of A and B will generally be much larger than hoped for. Never fear, though: we can always shift A and B simultaneously in opposite directions, A by a multiple of r and B by the corresponding multiple of s. Thus we can ensure that
0 < A < r
in which case
0 < As = m - Br < rs = n
-n < m-n < Br < m < n
-s < B < s.
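The whole two-factor argument fits in a few lines of Python (a sketch; the built-in modular inverse pow(s, -1, r), available in Python 3.8+, stands in for an explicit run of Euclid's algorithm, and the function name is mine):

```python
def split(m, r, s):
    """Split m/(r*s) as A/r + B/s with 0 < A < r (hence -s < B < s).

    Assumes gcd(r, s) == 1 and gcd(m, r*s) == 1.
    """
    # Solve m = A*s + B*r: reduce mod r, so A = m * s^{-1} (mod r).
    A = (m * pow(s, -1, r)) % r   # 0 < A < r, since r does not divide m
    B = (m - A * s) // r          # exact division, by construction
    return A, B

A, B = split(7, 3, 5)   # recovers 7/15 = 2/3 - 1/5
```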
Thus, we can factor the denominator into prime-power parts and use induction. Or, going directly: we're looking for A, B, ..., C such that
m/(rs...t) = A/r + B/s + ... + C/t
If we multiply both sides by (s...t), this is
m/r = A(s...t)/r + integer
so we're looking for an A such that
m ≡ A(s...t) (mod r).
Since r and (s...t) are relatively prime, (s...t) is invertible mod r, so we can definitely solve for A; and since r does not divide m, we can be sure that

0 < A < r.
Doing this for each term yields
m/n = A/r + B/s + ... + C/t + integer
and this integer is nonpositive and smaller in absolute value than the number of fractional summands, since each fraction is strictly between 0 and 1. Thus we can subtract 1 from some of the summands to make the integer disappear. For example:
11/60 = 1/4 + 1/3 + 3/5 - 1 = -3/4 + 1/3 + 3/5 = 1/4 - 2/3 + 3/5 = 1/4 + 1/3 - 2/5.
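The direct computation is short in Python (a sketch; it needs Python 3.8+ for the modular-inverse form of pow, and the function name is mine). It reproduces the example, with the factorization 60 = 4 · 3 · 5:

```python
from fractions import Fraction
from math import prod

def partial_fractions(m, factors):
    """Write m/prod(factors) as sum of A_i/factors[i], plus an integer,
    with 0 < A_i < factors[i].  Assumes the factors are pairwise
    relatively prime and gcd(m, prod(factors)) == 1."""
    n = prod(factors)
    numerators = []
    for r in factors:
        rest = n // r
        A = (m * pow(rest, -1, r)) % r   # solve m = A*rest (mod r)
        numerators.append(A)
    total = sum(Fraction(A, r) for A, r in zip(numerators, factors))
    integer = Fraction(m, n) - total     # always a nonpositive integer
    return numerators, int(integer)

nums, k = partial_fractions(11, [4, 3, 5])
# recovers 11/60 = 1/4 + 1/3 + 3/5 - 1
```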
This last step, where we have to subtract, reminds us that this decomposition is not unique. It's close: we have two choices for each term, but of course making some choices constrains others.
If we're working with polynomials, on the other hand, we never have to subtract. The division works exactly as with integers, but now all inequalities should be written in terms of the degrees of the polynomials. By counting total degree, we see that the left-hand side has total degree less than 0, so that "+ integer" on the right-hand side must be 0. This is a proof that the partial-fractions decomposition of polynomial fractions is unique.
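For what it's worth, computer algebra systems implement this decomposition directly; SymPy's apart, for instance, does the whole thing at once (a sketch, assuming SymPy is available; the example rational function is made up):

```python
from sympy import symbols, apart, simplify

x = symbols('x')
expr = (3*x + 5) / ((x + 1) * (x + 2))
decomposed = apart(expr, x)        # 2/(x + 1) + 1/(x + 2)
residual = simplify(decomposed - expr)   # 0: decomposition is exact
```

Note that, as promised, no integer correction term appears: the left-hand side is a proper rational function.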
Or, rather, there's one more step in the partial-fractions decomposition. What I've written so far allows us to factor the denominator into prime powers, and write one fraction for each power. But we can go one step further: if q < p^d, then we can write
q/p^d = A_1/p + A_2/p^2 + ... + A_d/p^d
with 0 ≤ A_i < p for each i. This is, of course, trivial, and is how we traditionally write integers in terms of a certain "base":
q = q_1 p + A_d

q_1 = q_2 p + A_{d-1}

and so on.
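In code, this is just reading off base-p digits (a sketch; the function name is mine):

```python
def prime_power_expansion(q, p, d):
    """Given 0 <= q < p**d, return [A_1, ..., A_d] with 0 <= A_i < p,
    so that q/p**d == A_1/p + A_2/p**2 + ... + A_d/p**d."""
    digits = []
    for _ in range(d):
        q, a = divmod(q, p)
        digits.append(a)       # yields A_d first, then A_{d-1}, ...
    digits.reverse()           # reorder to [A_1, ..., A_d]
    return digits

# e.g. 11/16 = 1/2 + 0/4 + 1/8 + 1/16
```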
One more point bears stating. In the case when we're working with polynomials, and when we can completely factor the denominator into linear factors, partial-fractions decomposition becomes extremely easy, because dividing by a linear factor is trivial:
m(x) = (x-a)q(x) + m(a)
I.e. the remainder mod (x-a) is just the value of the polynomial at x=a. To get the quotients, synthetic division is very fast. This makes the last step trivial, and repeatedly dividing by p allows us to divide by p^d, so really we can do the initial steps quickly as well.
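Here is a sketch of that repeated synthetic division in Python (coefficients listed from highest degree down; the function names are mine):

```python
def synthetic_division(coeffs, a):
    """Divide the polynomial with the given coefficients (highest
    degree first) by (x - a).  Returns (quotient, remainder); the
    remainder equals the polynomial evaluated at x = a."""
    acc, table = 0, []
    for c in coeffs:
        acc = acc * a + c      # Horner's scheme
        table.append(acc)
    return (table[:-1], table[-1]) if table else ([], 0)

def linear_power_expansion(coeffs, a, d):
    """Write m(x)/(x - a)**d as A_1/(x - a) + ... + A_d/(x - a)**d,
    assuming deg m < d.  Returns the constants [A_1, ..., A_d]."""
    A, q = [], coeffs
    for _ in range(d):
        q, rem = synthetic_division(q, a)
        A.append(rem)          # produces A_d first, then A_{d-1}, ...
    A.reverse()
    return A

# e.g. (x + 3)/(x - 1)**2 = 1/(x - 1) + 4/(x - 1)**2
```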