05 December 2006

Poisson brackets and scalar fields

Today's* ODEs class was rather amazing. In standard Yasha style, the topics darted around (he once explained that "It is impossible to be lost in my class. Because topic changes every two minutes, so if you are lost, you won't be lost."): a taste of KAM theory, a dash of geodesics on ellipsoids, and a little infinite-dimensional Hamiltonian mechanics.

It is this last discussion that I found rather mind-blowing. We did not say anything new; rather, we started defining what will eventually give us the quantum behavior of fields, from an entirely classical viewpoint.

Consider a PDE \d u(x,t)/\d t = F(u), where u is a function and F is some map from functions to functions. Let's say, for example, that we're interested in complex functions on the circle: u:S\to\C is what the physicists would call a (complex) scalar field. Let V be the set of all scalar fields; I will not be precise about what conditions I want (presumably some smoothness conditions, say complex-analytic, and perhaps some convergence conditions, for instance that the square integral converges). I will call maps from V \to \R (or \C) "functionals" and those from V \to V "operators"; V is a (complex) vector space, so it makes sense to talk about (real-, complex-, anti-) linear functionals and operators. For instance, \d/\dx is a linear operator; (1/2\pi) \int_0^{2\pi} -- g(x) dx is a linear functional. (I will from now on write \int for \int_0^{2\pi}; consistent with half of mathematics, the volume of the circle is 2\pi.) Rather than thinking of my problem as a PDE, I should think of it as an ODE in this infinite-dimensional space.
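
(To make the "ODE in function space" picture concrete, here is a minimal numerical sketch in Python with numpy; the example equation \dot{u} = u'' will reappear near the end of this entry, and all the particular numbers are my own choices. Truncate u to finitely many Fourier modes, and the PDE literally becomes a system of ODEs, one per mode:)

    import numpy as np

    # The PDE du/dt = u'' as an ODE on Fourier modes: writing u = sum_k u_k e^{ikx},
    # each coefficient obeys its own scalar ODE, du_k/dt = -k^2 u_k.
    n = 256
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    k = np.fft.fftfreq(n, d=1/n)          # integer wavenumbers 0, 1, ..., -2, -1
    u0 = np.exp(np.sin(x))                # an arbitrary smooth field on the circle

    t = 0.1
    uk = np.fft.fft(u0)                   # coordinates of u in the Fourier "basis"
    u_t = np.fft.ifft(uk * np.exp(-k**2 * t)).real   # each mode solved exactly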

Imposing analyticity, etc., conditions on V curtails the freedom of functions: the value of a function at a point largely determines the value at nearby points. We ought to perform a change-of-basis so that we can better tell functions apart: let's assume, for instance, that each function has a Fourier expansion

u(x) = u_0 + \sum_{k=1}^\infty ( p_k e^{ikx} + q_k e^{-ikx} )

(Given, of course, by

u_0 = (1/2\pi) \int u(x) dx; p_k = (1/2\pi) \int u(x) e^{-ikx} dx; q_k = (1/2\pi) \int u(x) e^{ikx} dx.)
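
(A quick numerical sanity check of these formulas, with numpy and arbitrarily chosen coefficients: build u out of a few modes and recover u_0, p_k, q_k by the integrals above.)

    import numpy as np

    # Build u from chosen coefficients, then recover them by the integrals above.
    x = np.linspace(0, 2*np.pi, 2048, endpoint=False)
    avg = lambda f: f.mean()              # (1/2pi) \int_0^{2pi} ... dx on this grid
    u0, p2, q3 = 1.5, 0.25 - 0.5j, 0.75j
    u = u0 + p2*np.exp(2j*x) + q3*np.exp(-3j*x)

    assert np.isclose(avg(u), u0)
    assert np.isclose(avg(u * np.exp(-2j*x)), p2)   # p_k = (1/2pi) \int u e^{-ikx}
    assert np.isclose(avg(u * np.exp(3j*x)), q3)    # q_k = (1/2pi) \int u e^{ikx}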

((Sometime soon I will figure out what I believe to be the proper class of functions with which to do physics. One might hope for (complex) holomorphic, or for complex analytic, or for real C^\infty, or real analytic, or maybe just real integrable. But the Fourier transform is vital to modern physics, and none of these is particularly the class of things with natural Fourier transforms, because I often end up with \delta functions. I ought to study the Fourier transform a bit more, because I don't understand something basic: it seems that the \delta functions supply a continuum of basis states, whereas the Fourier modes are countable. But every delta function has a Fourier transform, and modulo convergence this should be a change-of-basis. Perhaps it has something to do with the fact that the Fourier transform doesn't care about individual values of functions, just their integrals. We really ought to pick a class of "functions" so that Fourier really is a legitimate "change-of-basis" in the appropriate infinite-dimensional sense.))

Then our manifold V is, more or less, an odd-dimensional manifold. We cannot hope to put a symplectic structure on it. On the other hand, the ps and qs so naturally line up that we really want to write down \omega = \sum a_k^{-1} dp_k \wedge dq_k, and we can do so on the even-dimensional subspace \{u_0 = 0\}. (The coefficients a_k have yet to be determined; \omega is a symplectic form for any nonzero choice of a_k, and we should choose judiciously to match other physics. I write the coefficients as inverses so that the Poisson brackets below come out as \{p_k,q_k\} = a_k. Throughout this entry, I eschew the Einstein summation conventions.)

Well, almost. This \omega is almost certainly not going to converge if you feed it most pairs of functions, and restricting our functions to those whose Fourier expansions converge rapidly enough seems premature, especially since we have yet to determine the a_k. Rather, Yasha suggests, we should discuss the Poisson bracket, which is what really controls the physics.

How so? you ask. And what, fundamentally, is a Poisson bracket? Consider our original Poisson bracket \{F,G\} = \omega(X_F,X_G), where X_F is defined by \omega(X_F,-) = dF(-). Then unwrapping definitions gives that \{F,G\} = dF(X_G) = X_G[F], where on the RHS I'm treating the vector field X_G as a differential operator. Our Poisson bracket knows exactly the information needed: given a Hamiltonian H, the flow is generated by the vector field \{H,-\} (a vector field because the bracket is a derivation in each slot): every observable evolves by \dot{F} = \{H,F\}.

In general, a Poisson bracket is any bracket \{,\} satisfying
(a) Bilinearity (for now \R-linear; perhaps we will ask for \C-linear soon?)
(b) Anti-symmetry and Jacobi (i.e. \{,\} is a Lie bracket)
(c) It behaves as a (first-order) differential operator in each variable: \{FG,H\} = F\{G,H\} + \{F,H\}G.
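
For example, the canonical bracket \{F,G\} = \dF/\dp \dG/\dq - \dF/\dq \dG/\dp on \R^2 satisfies all three; bilinearity and antisymmetry are clear, and the two less-obvious conditions check out symbolically (a sketch using sympy, with example functions of my own choosing):

    import sympy as sp

    # Canonical bracket on R^2: {F,G} = dF/dp dG/dq - dF/dq dG/dp.
    p, q = sp.symbols('p q')
    br = lambda F, G: sp.diff(F, p)*sp.diff(G, q) - sp.diff(F, q)*sp.diff(G, p)

    F, G, H = p**2*q, sp.sin(p) + q, p*q**3
    assert sp.simplify(br(F*G, H) - F*br(G, H) - br(F, H)*G) == 0                 # Leibniz
    assert sp.simplify(br(F, br(G, H)) + br(G, br(H, F)) + br(H, br(F, G))) == 0  # Jacobi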

Condition (c) guarantees that a Poisson bracket never cares about additive constants in F, etc.: \{F,G\} depends only on dF and dG. Symplectic forms, being non-degenerate, are dual to antisymmetric _0^2-tensors (two raised indices), and Poisson brackets are given by exactly such tensors. But Poisson brackets need not be nondegenerate to give us physics. Indeed, the Poisson bracket as described is plenty to assign to each function a vector field as its "derivative". (Not quite the gradient, but an antisymmetric version of one.) And this is what's needed.

So, to describe the physics of our system, it suffices to pick an appropriate Poisson bracket. Returning now to the system we were originally interested in, of scalar fields on the circle, our Hamiltonians should be functionals of u(x). Let's assume that every functional (at least all the physical ones) can be written in Taylor expansion as a polynomial in the p_k and q_k. Then to define the Poisson bracket, and continuing to ignore issues of convergence, it suffices to define the brackets between our various coefficients p_k, q_l, and u_0. Given \omega = \sum a_k^{-1} dp_k \wedge dq_k (on our subspace of u_0=0), we get \{p_k,q_k\} = a_k, and all other brackets are 0.

What would be nice now is to find some physical reason to pick particular a_k, or to even motivate a Poisson bracket of this form at all. Perhaps you vaguely recall a physicist telling you that the Hamiltonian density for free scalar fields should look something like h(u(x)) = (1/2)(u^2 + m^2 u'^2), where u'(x) = du/dx. Then the Hamiltonian would be the integral of this: H(u) = (1/2\pi) \int h(u(x)) dx = u_0^2/2 + \sum_{k=1}^\infty (1+m^2k^2) p_k q_k. With the Poisson bracket given, we have \dot{p_k} = \{H,p_k\} = -(1+m^2k^2) a_k p_k and \dot{q_k} = \{H,q_k\} = +(1+m^2k^2) a_k q_k, so we can solve explicitly: p_k(t) = p_k(t=0) e^{-(1+m^2k^2) a_k t} and q_k(t) = q_k(t=0) e^{(1+m^2k^2) a_k t}. But this doesn't particularly match physical expectations --- why, for a_k positive, should every p_k mode die off exponentially and every q_k mode blow up, faster and faster for higher k, rather than anything oscillating?
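
(Here is a numerical check of that mode expansion of H, with numpy; the values of m and of the coefficients are arbitrary choices of mine:)

    import numpy as np

    # Check (1/2pi) \int (1/2)(u^2 + m^2 u'^2) dx = u_0^2/2 + sum_k (1 + m^2 k^2) p_k q_k.
    x = np.linspace(0, 2*np.pi, 4096, endpoint=False)
    m = 0.3
    u0, p, q = 1.0, {1: 0.2, 3: -0.7}, {1: 0.5, 3: 0.1}
    u = u0 + sum(p[k]*np.exp(1j*k*x) + q[k]*np.exp(-1j*k*x) for k in p)
    du = sum(1j*k*(p[k]*np.exp(1j*k*x) - q[k]*np.exp(-1j*k*x)) for k in p)

    H_integral = (0.5*(u**2 + m**2*du**2)).mean()   # (1/2pi) \int h(u(x)) dx
    H_modes = u0**2/2 + sum((1 + m**2*k**2)*p[k]*q[k] for k in p)
    assert np.isclose(H_integral, H_modes)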

I'll come back to this kind of physical justification, and perhaps argue my way to a Hamiltonian and a Poisson bracket from the other direction, later. First, I want to explain why Yasha likes this set-up, via mathematical, rather than physical, elegance.

Many interesting Hamiltonians, Yasha observes, are of forms similar to the one I considered in the previous example: the functional H(u) is defined as the integral of some density h(x,u(x),u'(x),...). Let's consider this case. Yasha, in fact, uses a particularly restricted case: no explicit x dependence (isotropy is easy), and also no derivatives: h = h(u). Then H(u) = (1/2\pi) \int h(u(x)) dx.

Then what happens? Well, \dot{q_k} = \{H,q_k\} = a_k \dH/\dp_k = (1/2\pi) \int a_k \dh/\dp_k dx, and \dot{p_k} = -(1/2\pi) \int a_k \dh/\dq_k dx. Plugging these into \dot{u}(x) = \sum \dot{p_k} e^{ikx} + \dot{q_k} e^{-ikx} gives

\dot{u}(x) = (1/2\pi) \int \sum [-a_k \dh/\dq_k e^{ikx} + a_k \dh/\dp_k e^{-ikx}] dx.

Can we recognize this as anything simpler? Recall the chain rule: \dh/\du = \sum (\dh/\dq_k \dq_k/\du + \dh/\dp_k \dp_k/\du). Then, since q_k = (1/2\pi) \int e^{ikx} u(x) dx, we have \dq_k/\du(x) = (1/2\pi) e^{ikx} (and similarly \dp_k/\du(x) = (1/2\pi) e^{-ikx}). So we see that

h'(u)(x) = \dh/\du(x) = (1/2\pi) \sum [ \dh/\dq_k e^{ikx} + \dh/\dp_k e^{-ikx} ]

This isn't quite what we want: in \dot{u} the two terms appear with opposite signs, while here they appear with the same sign. But if we differentiate h'(u) with respect to x, we get

\d/\dx [h'(u)(x)] = (1/2\pi) \sum [ ik \dh/\dq_k e^{ikx} - ik \dh/\dp_k e^{-ikx} ]

And so, if we pick a_k = k, we see that

\dot{u}(x) = i \d/\dx [h'(u)]

I've been a bit sloppy with this calculation, and you may have trouble following the factors of 2\pi. In particular, I write \int, when I probably should have written \int_{y=0}^{2\pi} dy. But then I would have had to keep track of what's a function of what. Anyway, somewhere in here there's a delta-function, and the formula is correct.
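
(The formula does pass a numerical test. Here is a numpy sketch with the sample density h(u) = u^3/3, so h'(u) = u^2, and a field of my own choosing, checking the mode equations against \dot{u} = i \d/\dx[h'(u)]:)

    import numpy as np

    # Mode equations (a_k = k): q_k-dot =  k (1/2pi) \int h'(u) e^{ikx} dx,
    #                           p_k-dot = -k (1/2pi) \int h'(u) e^{-ikx} dx,
    # versus the claimed PDE u-dot = i d/dx[h'(u)], with h'(u) = u^2.
    n = 256
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    u = 1.0 + 0.3*np.exp(1j*x) + 0.2*np.exp(-1j*x) + 0.1j*np.exp(2j*x)
    avg = lambda f: f.mean()              # (1/2pi) \int_0^{2pi} ... dx

    k = 2
    qdot = k * avg(u**2 * np.exp(1j*k*x))
    pdot = -k * avg(u**2 * np.exp(-1j*k*x))

    w = np.fft.fft(u**2) / n              # u^2 = sum_j w_j e^{ijx}
    j = np.fft.fftfreq(n, d=1/n)
    udot = -j * w                         # i d/dx[u^2] has coefficients i(ij)w_j = -j w_j
    assert np.isclose(qdot, udot[-k])     # q_k-dot is the e^{-ikx} coefficient of u-dot
    assert np.isclose(pdot, udot[k])      # p_k-dot is the e^{+ikx} coefficient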

Yasha doesn't do this calculation, preferring one a bit more general and rigorous, which I will sketch:

He observes that since V is a vector space, we can identify points with tangent vectors. Then what are the cotangent vectors? Our vector space of functions has a natural dot product: <u,v> = (1/2\pi) \int_0^{2\pi} u(x) v(x) dx, where now I'm thinking of u and v not as points but as tangent vectors at 0. So to each vector u(x) I can associate the linear functional (1/2\pi) \int u(x) -- dx, which Yasha calls \delta u. Then, knowing that he's chosen \{p_k,q_k\} = k, Yasha guesses a Poisson bracket:

P(\delta u,\delta v) = (1/2\pi i) \int u v' dx

Recalling the expressions for p_k and q_k as functions of u, we can recognize dp_k = \delta[e^{-ikx}] and dq_k = \delta[e^{ikx}]. Then we can check whether P is the proper Poisson bracket (really the tensor form, eating the derivatives of the functions we would feed into \{,\}) by evaluating it on our ps and qs:

\{p_k,q_k\} = P(e^{-ikx},e^{ikx}) = (1/2\pi i) \int e^{-ikx} \cdot ik e^{ikx} dx = k; all other brackets are 0, and P is antisymmetric by integration by parts, so it must be correct.
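
(Both claims check out numerically; here is a numpy sketch, with sample fields of my own choosing:)

    import numpy as np

    # P(du, dv) = (1/2pi i) \int u v' dx: check {p_k,q_k} = k and antisymmetry.
    n = 1024
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    j = np.fft.fftfreq(n, d=1/n)
    ddx = lambda f: np.fft.ifft(1j*j*np.fft.fft(f))   # spectral d/dx on the circle
    P = lambda u, v: (u*ddx(v)).mean() / 1j           # (1/2pi i) \int u v' dx

    k = 3
    assert np.isclose(P(np.exp(-1j*k*x), np.exp(1j*k*x)), k)   # {p_k,q_k} = k
    u = np.exp(np.sin(x)) + 2j*np.cos(5*x)
    v = 1.0/(2 + np.cos(x))
    assert np.isclose(P(u, v), -P(v, u))                       # integration by parts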

Then if F and G are two (real- or complex-valued) functionals on our space of scalar fields, what is their bracket \{F,G\}(u)? Let's say that F and G have the nice form F(u) = (1/2\pi) \int f(u(x)) dx (and G is similar). Then \{F,G\} = P(dF,dG) ... but what is dF? Well, it's the linear part of F. If \epsilon v is a small change in u, then

(1/\epsilon) (F(u + \epsilon v) - F(u)) = (1/2\pi) \int f'(u(x)) v(x) dx

where f' is just the ordinary derivative of f, so that f'(u) is the functional derivative of F with respect to u. So we can recognize dF(u) as \delta[f'(u)], and conclude that

\{F,G\}(u) = (1/2\pi i) \int (f'(u)) (g'(u))' dx

where on the second multiplicand, the inside prime is w.r.t. u, and the outside is w.r.t. x.

Then what is \dot{u}? Well, for any function(al) F(u), we have \dot{F} = \{H,F\}. u is not a functional, of course, but it is a whole family of functionals: for each y, u(y) = (1/2\pi) \int u(x) 2\pi \delta(x-y) dx. So du(y) is the covector field (in x) given by \delta[2\pi \delta(x-y)]. (And I'm unfortunately using \delta for too many things, because I want it both as an operator and as a Dirac delta function.) So, all in all,

\dot{u}(y) = -\{u(y),H\} = (-1/2\pi i) \int 2\pi \delta(x-y) (h'(u))' dx = i \d/\dx [h'(u)](y), matching the formula from before.


I would like to complete my discussion of scalar fields from an entirely different direction. Given a unit circle, I'd like to describe the propagation of "free scalar fields" around the circle, where now I'm thinking of these as some sort of wave. Remembering very little physics, I can imagine two different interesting dynamics. Either all waves move at the same speed, regardless of their shape, or waves propagate at different speeds, with the "high-energy" ones moving faster.

Let's write down some differential equations and see what happens. I'm interested in \dot{u} = some functional of u. Of course, we should demand some isotropy, so x should not appear explicitly. What are the effects of different terms? Keeping everything linear — I want free field propagation, so everything should superimpose — we could ask about \dot{u} = cu, for constant c, but this is boring: the value of the field at a point never cares what the value is at neighboring points. (Indeed, the whole field evolves by multiplication by e^{ct}. If, for instance, c=i, then sure, different "modes" move at different "speeds", but this is the wrong analysis, since really the whole field is just rotating by some time-varying phase.)

More interesting is if \dot{u} = -u', say. Then expanding u(x) = \sum_{-\infty}^\infty u_k e^{ikx}, we can solve: \dot{u_k} = -ik u_k, so u_k(t) = e^{-ikt} u_k(0), and hence u(x,t) = u(x-t,0). So this is what happens when all waves travel at the same constant velocity.
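
(A quick numpy illustration, with an initial profile of my own choosing, that the solution really is rigid translation:)

    import numpy as np

    # u-dot = -u': each mode picks up a phase e^{-ikt}, so u(x,t) = u(x-t, 0).
    n = 512
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    k = np.fft.fftfreq(n, d=1/n)
    profile = lambda y: np.exp(np.cos(y)) * np.sin(2*y)

    t = 1.0
    u_t = np.fft.ifft(np.fft.fft(profile(x)) * np.exp(-1j*k*t)).real
    assert np.allclose(u_t, profile(x - t))             # the wave rigidly translated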

But let's say that the kinds of waves we care about are surface waves. For instance, we might have a taut string, and waves are small oscillations. Then really physics should act to even out curvature: we should expect an upwards pull at any point where the field has positive curvature. If we don't remember freshman mechanics, we might write down \dot{u} = u'', which gives us u_k(t) = e^{-k^2 t} u_k(0). This isn't bad: different modes evolve at different rates, the higher ones evening out faster. It's not quite right, though, because really the curvature supplies a force, not a velocity, so we should have an acceleration: \ddot{u} = u''. Then we get back our original waves, except we have left-movers and right-movers. (More generally, we can add a mass term, and get H(u) = (1/2\pi) \int (1/2) [\dot{u}^2 + (u')^2 + m^2 u^2] = (1/2) \sum_{-\infty}^\infty [\dot{u_k}\dot{u_{-k}} + (k^2 + m^2) u_k u_{-k}], and the modes really do move at different velocities.)

Anyway, the point is that I really do expect, in this world, to have 2\infty total dimensions: \Z worth of "position" coordinates u_k and \Z worth of momentum coordinates, not the half-of-\Z of each that Yasha was considering. By just inventing conjugate momentum coordinates v_k for the position coordinates u_k, we get, for the free field, such simple equations of motion as \dot{u_k} = v_{-k} and \dot{v_k} = -(k^2+m^2) u_{-k}; combining them, \ddot{u_k} = -(k^2+m^2) u_k, so each mode honestly oscillates, with frequency \sqrt{k^2+m^2}.
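
(One more numpy sketch, with numbers of my own choosing: integrate one conjugate pair and watch it return to its start after one period 2\pi/\sqrt{k^2+m^2}.)

    import numpy as np

    # Integrate u_k-dot = v_{-k}, v_k-dot = -(k^2+m^2) u_{-k} for one mode pair of a
    # real field, so that u_{-k} = conj(u_k) and v_{-k} = conj(v_k).
    k, m = 3, 0.5
    freq = np.sqrt(k**2 + m**2)
    u, v = 0.2 + 0.1j, 0.0j
    dt = 1e-5
    for _ in range(int(2*np.pi/freq/dt)):   # one expected period of oscillation
        u, v = u + dt*np.conj(v), v - dt*(k**2 + m**2)*np.conj(u)
    assert np.isclose(u, 0.2 + 0.1j, atol=1e-3)   # the mode came back around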

So why does every quantum field theory course start quantizing the (real) scalar field by expanding in Fourier modes and imposing a nontrivial bracket between the coefficients? Because the (free) equations of motion, not the original setup, demand relationships between the Fourier modes of u and \dot{u}, and the nontrivial bracket is between u and \dot{u}.


Perhaps next time I will venture into the realm of quantum mechanics. I'd really like to understand how the classical Poisson bracket becomes the quantum Lie bracket, and where the hell that i\hbar comes from. First, of course, I will have to talk more about sets, Hilbert spaces, and the like, and I'll probably stay finite-dimensional for a while. Eventually, of course, I want to describe Feynman diagrams, and tie them back to the Penrose birdtracks and the tensors that started this series of entries.

That is, of course, if I ever get that far. I tend to be distracted by other time-consuming tasks: I am only a few hours of work away from being done applying to graduate schools.


*Of course, I started this entry a few weeks ago.