If any physicist asked to use any particular function, I would never begrudge her it. Absolutely, if a particular piece of mathematics is useful in calculating something about the world, by all means take advantage of it. But over the entire history of humanity we will never write down even countably many functions, and it's unlikely that any physical theory will ever require more than countably many (mathematical theories require uncountably many all the time). Moreover, it's unlikely that physicists will ever need even easily definable pathological functions like Weierstrass's. A fruitful way for mathematicians to help physicists is by suggesting which conditions are reasonable to assume of any theory and which should be explored. One such question is to ask for an "upper bound": a collection of functions that physicists are unlikely ever to need to escape.
Of course, "real-valued functions on the real line" is one possible answer, but the whole point is that we're likely to find a better bound. Most physicists seem to believe that the universe satisfies some sort of continuity or regularity conditions, and most leave such issues aside when writing their actual arguments; at the very least, I've never seen a physicist want a function that is discontinuous at more than discretely many points. This, though, is not altogether helpful. Hopefully we will get down to countably many functions (or, possibly, a class of functions defined with reference to an as-yet-undetermined class of coefficients, so that if our possible coefficients are countably many, then so are our functions), and we're nowhere near there. A big step is to remind ourselves that physicists certainly will never need more than the definable functions. Still, this definition allows such functions as Weierstrass's, and is terribly inductive: it would be nice to have a closed-form definition of the set in question.
What are the tools a physicist is likely to want in writing down a physical theory? It's reasonable to expect our functions to include all the constant functions, and the identity. Moreover, our set of functions should be closed under addition, subtraction, multiplication, and division. This should lead to no complaints: so far, we could stop with just the rational functions.
I think, though, that every function should have an inverse (away from singularities, of course: given an allowed function and an open ball in which the function has an inverse in the classical sense, this inverse should also be an allowed function). Again, no problem: we can allow all algebraic functions, although now we have already escaped the realm of good notation (cf. Abel's theorem). And our class of allowed functions should be closed under composition. (Question: is the composition of two algebraic functions algebraic? By an algebraic function I mean a solution y=y(x) to a polynomial equation 0=F(x,y).)
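One standard way to attack that last question is with resultants: eliminating the intermediate variable y from F(x,y)=0 and G(y,z)=0 yields a polynomial relation between x and z, which (at least generically) suggests the answer is yes. A minimal sympy sketch, with F and G chosen purely for illustration (two square roots composing to a fourth root):

```python
from sympy import symbols, resultant, simplify

x, y, z = symbols('x y z')

# y = sqrt(x) satisfies F(x, y) = 0; z = sqrt(y) satisfies G(y, z) = 0.
F = y**2 - x
G = z**2 - y

# The resultant eliminates the intermediate variable y, leaving a
# polynomial in x and z that vanishes along the composition z = x**(1/4).
H = resultant(F, G, y)
print(H)  # a polynomial that vanishes exactly when z**4 == x
```

Here the resultant recovers the expected relation between x and z; for the composition to genuinely be algebraic one also has to check the resultant isn't identically zero, which it isn't in cases like this.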
Thing is, though, we don't yet have the functions physicists use most often: the exponential and trigonometric functions. This must absolutely be fixed. Why do physicists use these? Because they solve simple differential equations.
Indeed, I don't know any functions that appear in physics that do not solve polynomial differential equations, by which I mean equations of the form 0=F(x,y,y',...). In fact, although physicists use second-order equations all the time, I have a hard time coming up with any equations that are not first order (and, of course, if we allow multiple variables, then all differential equations can be cast as first-order). This leads me to suggest that an appropriate answer to the question posed by this post is "solutions to polynomial differential equations".
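For instance, the exponential and the basic trigonometric functions all satisfy first-order polynomial differential equations (the equations for sin and tan come from sin^2 + cos^2 = 1 and tan' = 1 + tan^2). A quick sympy check:

```python
from sympy import symbols, exp, sin, tan, diff, simplify

x = symbols('x')

# Each pair is (a function y(x), a polynomial F(y, y')) with F = 0 along y.
checks = [
    (exp(x), lambda y, yp: yp - y),            # y' = y
    (sin(x), lambda y, yp: yp**2 + y**2 - 1),  # (y')^2 = 1 - y^2
    (tan(x), lambda y, yp: yp - 1 - y**2),     # y' = 1 + y^2
]

for y, F in checks:
    residual = simplify(F(y, diff(y, x)))
    assert residual == 0  # the function solves its polynomial ODE
```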
Does this class of functions possess the properties I want? Certainly it includes the algebraic, exponential, and trigonometric functions, and consists only of extremely smooth definable functions. And it's easily closed under inverses. To wit: given an equation 0=F(x,y,y',...) with solution y=y(x), I'm looking for an equation 0=G(y,x,x',...) for which x=x(y) is a solution (where x() is the inverse function of y(), and x'=dx/dy); it suffices, of course, to find a rational function G. But we can let G(y,x,x',...) = F(x,y,1/x',...), since by easy calculus y'=1/x', and by induction higher-order derivatives of y are also rational functions of the derivatives of x:
y^{(n)} = d(y^{(n-1)})/dx = 1/(dx/dy) d/dy [y^{(n-1)}]
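A quick check of this substitution on the simplest example, using sympy as a scratchpad: take F(x,y,y') = y' - y, with solution y = e^x and inverse x = log y. Substituting y' = 1/x' gives 1/x' - y, and clearing the denominator gives the polynomial equation 0 = 1 - y x' for the inverse.

```python
from sympy import symbols, log, diff, simplify

y = symbols('y', positive=True)

# The inverse of y = exp(x) is x = log(y); x' here means dx/dy.
x_of_y = log(y)
xp = diff(x_of_y, y)  # equals 1/y

# F(x, y, y') = y' - y becomes, after y' -> 1/x' and clearing x':
# G(y, x, x') = 1 - y*x', which should vanish along the inverse.
G = 1 - y * xp
assert simplify(G) == 0
```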
But when we start to ask about the binary operations — arithmetic and composition — we see that disaster strikes. Composition is the easiest to visualize, and I will restrict to first-order ODEs.
A differential equation is a surface in 1-jet space
{0=F(x,y,y')} \subseteq \J^1(\R)
(where I use \J for {\cal J} and \R for {\mathbb R}). 1-jet space, for us, is just \R^3 parameterized by coordinates x, y, and y'; more generally, it's the cotangent bundle cross \R. It comes equipped with a canonical contact structure 0 = dy - y' dx. When this contact structure is not tangent to the surface (a contact structure on \R^3 cannot be tangent to a surface along more than a curve), it cuts out curves that foliate the surface, so in small balls around generic points, solutions to the differential equation exist and are unique.
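Concretely (this is just the classical method of characteristics, written in the coordinates above, with p standing for y'): the curves foliating the surface are the integral curves of the characteristic (Charpit) vector field

X = F_p \partial_x + p F_p \partial_y - (F_x + p F_y) \partial_p,

which is tangent to the surface {0=F} (since XF = F_p F_x + p F_p F_y - (F_x + p F_y) F_p = 0) and lies in the kernel of the contact form dy - p dx (since dy(X) - p dx(X) = p F_p - p F_p = 0).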
So, let's say I have two differential equations 0=F(x,y,y') and 0=G(y,z,z'), and solutions y=y(x) and z=z(y) (where I'm thinking of z'=dz/dy). Then I really want a differential equation 0=H(x,Z,Z') so that Z(x)=z(y(x)) is a solution. And I'd like to be able to pick this H in some algorithmic way.
But the geometry makes this hard. Generically, what I'm setting up is a five-dimensional space with coordinates x, y, y', z, z' (or equivalently Z'=y'z'). I have a surface in each of two three-spaces, and these surfaces intersect along a line. What in the five-space would project to the two surfaces? A three-manifold. (Away from non-generic points, each surface is parameterized by y and another variable perpendicular to y, with a third variable running perpendicular, so the lift should be parameterized by y and two more perpendicular dimensions, with two dimensions of normal bundle.) But, generically, a three-manifold projects onto all of the three-space with coordinates x, Z=z, Z'=z'y', rather than cutting out a surface there.
Because a first-order equation is two-dimensional, it takes one ordered pair of data (e.g. a value for x_0 and y_0=y(x_0)) to specify the particular solution. But the composition of two differential equations takes an ordered triple of data. (For comparison, the composition of two equations F(x,y)=0=G(y,z), which is what we use to compose functions, can be visualized as the quest for a lift to \R^3 of curves living in the xy- and yz-planes.) Perhaps the composition is a second-order differential equation? But our canonical three-surface does not have any Z''-direction, so this is impossible.
For the other arithmetic operations a similar dimensional obstruction appears: we'd have two surfaces in 1-jet space, which we can think of as two fields of curves in the \R^2 bundle over the x-axis. But there's no natural way to add, say, two curves to get another curve: with two generic curves in the plane (as opposed to two generic points), there's a whole surface's worth of points that are the sum of some pair of points, one on each curve.
But this all leaves open the other version of the question. I've shown that there's no way to compose or add two generic differential equations. What I haven't answered is whether the sum or composition of any two given solutions to differential equations is itself a solution to some differential equation. Probably I won't be satisfied even if it is, because I want all of physics to be definable, and I really want these differential equations (along with initial data) to _be_ the definitions of the functions. But nevertheless I cannot think of how to begin looking for a pair of functions, each a solution to a polynomial differential equation, whose composition is not.
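For what it's worth, the simplest candidate fails to be a counterexample. Composing y = e^x with itself gives Z = e^{e^x}, and since Z'/Z = e^x satisfies (Z'/Z)' = Z'/Z, a little algebra gives the second-order polynomial equation 0 = Z Z'' - (Z')^2 - Z Z'. A sympy check:

```python
from sympy import symbols, exp, diff, simplify

x = symbols('x')

# Z(x) = z(y(x)) with y(x) = exp(x) and z(y) = exp(y).
Z = exp(exp(x))
Zp = diff(Z, x)
Zpp = diff(Z, x, 2)

# Z solves the second-order polynomial ODE Z*Z'' - (Z')^2 - Z*Z' = 0,
# obtained by clearing denominators in (Z'/Z)' = Z'/Z.
assert simplify(Z * Zpp - Zp**2 - Z * Zp) == 0
```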
13 May 2007