Since I'm behind in my series of posts on fields, quantum or otherwise, I will instead talk today about some linear algebra, and not define most of my terms.
The category Vect of vector spaces (over a fixed ground field, say \R = the real numbers) nicely generalizes the category Set of sets. Indeed, there is a "forgetful" functor in which each set forgets that it has a basis. Yes, that's the direction I mean. A vector space generalizes ("quantizes") in a natural way the notion of "set": rather than having definite discrete elements — two elements of a set either are or are not the same — a vector space allows superpositions of elements. A set is essentially "a vector space with a basis": morphisms of sets are morphisms of vector spaces that send basis elements to basis elements. So our forgetful functor takes each set X to the vector space Hom(X,\R) (Hom taken in the category of sets). But, I hear you complain, Hom(-,\R) is contravariant! Yes, but in this case, where I forgot to tell you that all sets are finite and all vector spaces finite-dimensional, we can make F = Hom(-,\R) covariant: given \phi: X \to Y, let F(\phi) send f to the function g with g(y) = \sum_{x \in \phi^{-1}(y)} f(x). Actually, of course, if I'm allowing infinite sets, then I should specify that I don't quite want X \mapsto Hom(X,\R), but rather the subspace of functions that send cofinitely many points of X to zero.
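If you like seeing such things in code, here is a minimal Python sketch of this covariant F, modeling a vector in Hom(X,\R) as a dictionary from the (finite) set X to the reals. The name pushforward is mine, chosen for illustration.

```python
def pushforward(phi, f):
    """Given phi: X -> Y as a dict and f in Hom(X, R) as a dict,
    return g = F(phi)(f) in Hom(Y, R): g(y) = sum of f(x) over phi(x) = y."""
    g = {}
    for x, fx in f.items():
        g[phi[x]] = g.get(phi[x], 0.0) + fx
    return g

# phi collapses a three-point set onto a two-point set:
phi = {'a': 'p', 'b': 'p', 'c': 'q'}
f = {'a': 1.0, 'b': 2.0, 'c': 5.0}
print(pushforward(phi, f))  # {'p': 3.0, 'q': 5.0}
```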
Anyhoo, so Set has a terminal object 1 = {one element} and an initial object 0 = the empty set, and well-defined (up to canonical isomorphism) addition and multiplication (respectively disjoint union and cartesian product). These generalize in Vect to 1 = \R and 0 = {0}, and to direct sum and tensor product; if we identify n = "\R^n" (bad notation, because it's really n\R; I want n-dimensional space with a standard basis, so the space of column vectors), then it's especially clear that sums and products are as they should be. So Vect is, well, not quite a rig (a ring without negation), because nothing is defined uniquely, but some categorified version, where all I care about is that everything be defined up to canonical isomorphism (so, generically, given by a universal property).
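A decategorified sanity check, in numpy, assuming the identification of n with the space of n-columns: taking dimensions turns direct sum into + and tensor product into ×, which is exactly the rig structure downstairs.

```python
import numpy as np

V = np.eye(3)  # identity map on "3" = R^3
W = np.eye(2)  # identity map on "2" = R^2

# direct sum: block-diagonal; tensor product: Kronecker product
direct_sum = np.block([[V, np.zeros((3, 2))],
                       [np.zeros((2, 3)), W]])
tensor = np.kron(V, W)

print(direct_sum.shape)  # (5, 5): dim(V + W) = 3 + 2
print(tensor.shape)      # (6, 6): dim(V x W) = 3 * 2
```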
But I can do even better. To each vector space V is associated a dual space V^* = Hom_{Vect}(V,\R), and most of the time V^{**} = V. (For finite-dimensional V, which is all I'm allowing, remember, the canonical map V \to V^{**} is an isomorphism; for infinite-dimensional V it isn't, although there is still a natural map V^* \to V^{***}. In any case, I really want dualizing to be involutive, and in finite dimensions it is.) By equals, of course, I always mean "up to a canonical isomorphism". Now, V \tensor V^* = Hom(V,V) is rather large, but there is a natural map Trace: Hom(V,V) \to \R, and this allows us to define a particular product "." which multiplies an element v\in V with w\in V^* by v.w = Tr(v\tensor w). Then . is bilinear, as a product ought to be, and we can thus consider V.V^* = \R. Indeed, we can imagine some object 1/V that looks like V^* — a physicist wouldn't be able to tell the difference, because their elements are the same — so that V \tensor 1/V = \R. (Up to canonical isomorphism. It's not, of course, clear which copy of V we should contract 1/V with in V \tensor V. But either choice is the same up to canonical isomorphism.) There is even a natural trace from, say, Hom(2,4) \to 2 — take the trace of each of the two 2x2 squares that make up a 4x2 matrix — "proving" that 4/2 = 2.
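Concretely, here is a numpy sketch of both the contraction and the Hom(2,4) \to 2 trace; the block-slicing convention is my own pick of the two canonically-isomorphic choices.

```python
import numpy as np

# v . w = Tr(v (x) w): with v a column in V and w a row in V* = "1/V",
# the outer product v (x) w lives in Hom(V, V), and its trace is w(v).
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 0.0, -1.0])
assert np.isclose(np.trace(np.outer(v, w)), w @ v)  # both equal 1.0

# The trace Hom(2, 4) -> 2: a 4x2 matrix is two stacked 2x2 squares;
# taking the trace of each gives an element of R^2, "proving" 4/2 = 2.
A = np.arange(8.0).reshape(4, 2)
print(np.array([np.trace(A[:2]), np.trace(A[2:])]))  # [ 3. 11.]
```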
So it seems that, well, Vect is not a division rig, but it naturally extends to one. But what about the n that turns "rig" into "ring"? What about negative dimensions? This I don't know.
See, it's an important question. Because, consider the tensor algebra T^{.}(V) = \R + V + V\tensor V + ... — this is an \N-graded algebra, whose degree-n part is the space of n-multilinear functions on V^*. This looks an awful lot like the uncategorified 1 + x + x^2 + ..., which we know is equal to 1/(1-x). (Why? Because (1-x)(1+x+x^2+...) = 1 - x + x - x^2 + x^2 - ... = 1, since every term cancels except for the -x^\infty, which is way off the page.) Anyhoo, so we ought to write the tensor algebra as 1/(1-V).
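Downstairs this is an honest identity of formal power series, which sympy will confirm: take dim V = n, and record grade k with t^k.

```python
from sympy import symbols, series, simplify

n, t = symbols('n t')

# Hilbert series of T(V): grade k has dimension n^k, so sum n^k t^k,
# which should agree with 1/(1 - n t) as a formal power series.
hilbert = sum((n * t)**k for k in range(6))
geometric = series(1 / (1 - n * t), t, 0, 6).removeO()
print(simplify(hilbert - geometric))  # 0
```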
Which doesn't make any sense at all. 1-V? Well, we might as well define 1-V as dual to the tensor algebra: there should be a natural way to contract any element of 1-V with any multilinear function on V^*. But this has a much shorter algebraic expression, which ought to have Platonic meaning. So, what's a natural object that we can construct out of V that contracts (linearly) with all multilinear functions to give real-valued traces?
If we could answer this, then perhaps we could find out what -V is. How? Not, certainly, by subtracting 1 = \R from 1-V. No, I suggest that whatever our proposal is, we then try it on 1-2V = (T^{.}(V+V))^* = 1/(\R + V+V + (V+V)\tensor(V+V) + ...), and compare. What ought to happen is that there should be some natural object W such that 1-2V = W + (1-V), and it should turn out that 1-V = 1 + W; that is, W deserves the name -V. Whatever the case, there should be a natural operation that "behaves like +" such that (1-V) + V = 1. It's certainly not the standard direct sum, just as V \tensor 1/V is not the standard tensor product. But it should be like it in some appropriate sense. At the very least, it should satisfy linearity: if v_1,v_2 \in V and w_1,w_2 \in W, then v_1+w_1 and v_2+w_2 \in V+W should sum to (v_1+v_2)+(w_1+w_2). And, of course, if you have the right definition, then all the rest of arithmetic should work out: 1/(-V) = -(1/V), -V = -\R \tensor V, (-V)\tensor W = -(V\tensor W), and, most importantly, --V = V (up to canonical isomorphism).
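I can at least check that these hoped-for sign rules are consistent after decategorifying to (possibly negative) dimensions; this toy Python check tests only the arithmetic downstairs, not the existence of the objects.

```python
# Read each space as its dimension, with dim(-V) = -dim(V) by fiat.
def neg(d): return -d            # "-V"
def tensor(d, e): return d * e   # "(x)": dimensions multiply
def inv(d): return 1 / d         # "1/V"

v, w = 3, 5
assert inv(neg(v)) == neg(inv(v))              # 1/(-V) = -(1/V)
assert neg(v) == tensor(-1, v)                 # -V = -R (x) V
assert tensor(neg(v), w) == neg(tensor(v, w))  # (-V) (x) W = -(V (x) W)
assert neg(neg(v)) == v                        # --V = V
print("dimension arithmetic is consistent")
```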
One can go further with such symbolic manipulation. You've certainly met the symmetric tensor algebra S^{.}(V) of symmetric tensors, and you've probably defined each graded component S^{n}(V) as V^{\tensor n} / S_n, where by "/ S_n" I mean "modulo the S_n action that permutes the components in the n-fold tensor product." (If you are a physicist, you probably defined the symmetric tensors as a _subspace_ of all tensors, rather than a quotient space, but this is ok, because the S_n identification generates a projection operator Sym: \omega \mapsto (1/n!)\sum_{\pi\in S_n} \pi(\omega), and so the subspace is canonically isomorphic to the quotient. At least when the characteristic of the ground field is 0.) Well, S_n looks an awful lot like n!, so the symmetric algebra really looks like 1 + V + V^2/2! + ... = e^V. Which is reasonable: we can naturally identify S^{.}(V+W) = S^{.}V \tensor S^{.}W, just as e^{v+w} = e^v e^w.
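Here is the physicists' Sym in numpy, as a sanity check, with dim V = 2 and n = 3, and with my own flattening convention for indexing V^{\tensor 3}: it is indeed a projection, and its rank is dim S^3(V) = 4.

```python
import numpy as np
from math import factorial
from itertools import permutations

d, n = 2, 3    # dim V and tensor degree
N = d ** n     # dim of the n-fold tensor power of V
P = np.zeros((N, N))
for pi in permutations(range(n)):
    M = np.zeros((N, N))
    for i in range(N):
        digits = [(i // d**k) % d for k in range(n)]     # factor indices of i
        j = sum(digits[pi[k]] * d**k for k in range(n))  # permute the factors
        M[j, i] = 1.0
    P += M
P /= factorial(n)   # Sym = (1/n!) sum over S_n

assert np.allclose(P @ P, P)   # a projection, as claimed
print(round(np.trace(P)))      # 4 = dim S^3(R^2) = C(2+3-1, 3)
```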
It's not quite perfect, though. The dimension of S^{.}V, if dim V = n, is not e^n, but 1 + n + n(n+1)/2 + n(n+1)(n+2)/6 + ... (the k-th term is dim S^k V = n(n+1)...(n+k-1)/k!), which matches the terms n^k/k! of e^n only in the limit n \to \infty. Well, so why is that the dimension? When we symmetrize v\tensor w to (1/2)(vw+wv), we generically identify different tensors. But v^2 symmetrizes to itself. Baez, though, says how to think about this: when a group action does not act freely, we should think of points like v^2 as only going to "half" points. So, for example, the group 2 can act on the vector space \R in a trivial way; we should think of \R/2 as consisting of only "half a dimension".
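The counting is easy to watch in Python: dim S^k(\R^n) is the binomial coefficient C(n+k-1, k), versus the naive e^n term n^k/k!.

```python
from math import comb, factorial

n = 3
for k in range(5):
    actual = comb(n + k - 1, k)   # dim S^k(R^n) = n(n+1)...(n+k-1)/k!
    naive = n**k / factorial(k)   # the corresponding term of e^n
    print(k, actual, naive)
# 0 1 1.0 / 1 3 3.0 / 2 6 4.5 / 3 10 4.5 / 4 15 3.375
```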
Anyway, the point is that we can divide by groups, and this is similar to our division by (dual) vector spaces: in either case, we are identifying, in a linear way, equivalence classes (either orbits or preimages).
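Baez's "half points" have a precise shadow in groupoid cardinality: weight each orbit by 1/|stabilizer|. A sketch (the function name is mine):

```python
from fractions import Fraction

def groupoid_cardinality(X, G, act):
    """Sum 1/|stabilizer| over the orbits of G acting on the finite set X."""
    total, seen = Fraction(0), set()
    for x in X:
        orbit = frozenset(act(g, x) for g in G)
        if orbit not in seen:
            seen.add(orbit)
            total += Fraction(1, sum(1 for g in G if act(g, x) == x))
    return total

# the group 2 = {1, -1} acting on the three-point set {-1, 0, 1} by
# negation: the free orbit {-1, 1} counts 1, the fixed point 0 counts 1/2
print(groupoid_cardinality([-1, 0, 1], [1, -1], lambda g, x: g * x))  # 3/2
```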
Now, though, it becomes very obvious that we need to extend what kinds of spaces we're considering. Groups can act linearly in lots of ways, and it's rare that the quotient space is in fact a vector space. Perhaps the physicists are smart to confuse fixed subspaces with quotients: doing so restricts them to projection operators. But, for instance, if we mod out \C by 2 = complex conjugation (which is real-linear, although not complex-linear), do we get \R or some more complicated orbifold? Is there a sense in which \R/2 + \R/2 = \R, where 2 acts by negation? \R/2 is a ray, so perhaps the direct-sum model works, but you don't naturally get \R back, just some one-dimensional space. To give interesting physics, it would be nice if these operations really did act on the constituent parts of each space. And what about dividing by 3? Every field (of characteristic other than 2) has the non-trivial square root -1 of 1, but between \R and \C, only \C has non-trivial nth roots of 1 for every n. So perhaps we really should just work with the Vect of \C-linear spaces. Then we can always mod out by cyclic actions, but we don't normally get vector spaces.
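For what it's worth, the physicists' projector does settle the \C/2 question their way: average over the action and the dimension halves. A numpy sketch, writing \C as \R^2.

```python
import numpy as np

# C as R^2, with 2 = {identity, complex conjugation}; conjugation is
# real-linear, represented by diag(1, -1). Averaging over the action
# gives the projector onto the fixed subspace, i.e. the real axis.
conj = np.diag([1.0, -1.0])
P = (np.eye(2) + conj) / 2
print(P @ np.array([3.0, 4.0]))  # [3. 0.]: 3 + 4i goes to its real part
print(round(np.trace(P)))        # "dimension" 1: C/2 = R, by this count
```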
Of course, part of the fun of categorifying is that there are multiple categorical interpretations of any arithmetic object: 6 may be the cyclic group C_6 = C_2 \times C_3, but 3! is the symmetric group S_3, and the groups C_4 and C_2 \times C_2 are likewise unequal. But if we come up with a coherent-enough theory, we ought to be able to say interesting things about square roots: there's an important sense in which the square root of a Lorentz vector is a spinor, and we should be able to say (1+V)^{1/2} = 1 + (1/2)V + (1/2)(-1/2)V^2/2! + (1/2)(-1/2)(-3/2)V^3/3! + ....
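And the series itself is an honest one downstairs; sympy reproduces the coefficients quoted above.

```python
from sympy import symbols, sqrt, series

x = symbols('x')
# (1+x)^(1/2) = 1 + x/2 + (1/2)(-1/2)x^2/2! + (1/2)(-1/2)(-3/2)x^3/3! + ...
print(series(sqrt(1 + x), x, 0, 4))
# 1 + x/2 - x**2/8 + x**3/16 + O(x**4)
```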
Overall, the move from Set to Vect deserves to be called "quantization" — well, quantization really yields not vector spaces but (complex) Hilbert spaces, so it should instead be the forgetful functor Set \to Hilb. If we have a coherent theory of how to categorify from numbers to Set, then it should match our theory of how to categorify from numbers to Hilb. And, ultimately, we should be able to understand all of linear algebra as even more trivial than we already take it to be: linear algebra is simply properly-categorified arithmetic.