30 September 2007

Whole grains, it bears repeating, are tasty and nutritious. They cook easily, but many take a fair amount of time. Whole grains are processed and sold dried: before eating, they must be boiled in (potentially salted or flavored) water. Most grains should be combined with a prescribed amount of water in a pot with a well-fitting lid, brought to a boil, and simmered covered for a prescribed amount of time. The length of time is determined by the grain; the amount of water should be just enough to have almost entirely evaporated off or been soaked up in that time. If your pot does not have a well-fitting lid, you'll need to use more water. Some cooks prefer to soak their grains overnight, as this reduces cooking time. If you use too much water, boil uncovered for the last few minutes to evaporate off the excess. Do not stir your grains unless you want to develop the starches into a mushy mix. Covered, grains hold their heat exceedingly well. To make a better seal,

What follows is a first approximation of how much water to use, and how long to cook, different grains. For details on grains' nutrition and substitutions, I refer you to The Cook's Thesaurus. For continual updates, check here.

Corn
  Water per cup of grain: enough
  Cooking time: 10 minutes
  Notes: Corn can be steamed or boiled (or grilled or microwaved).

Oats, rolled
  Water per cup of grain: 1, and add more if it starts to burn
  Cooking time: 5 minutes, stirring (uncovered), or until desired consistency
  Notes: A traditional breakfast cereal, cooked as a mush. I suggest cooking with raisins, a stick of cinnamon, and some maple syrup. Rolled oats have been steamed once, so they cook fast.

Quinoa
  Water per cup of grain: 1.5
  Cooking time: 10 minutes
  Notes: Very fast, high in protein. Rinsing first will reduce the slightly bitter flavor.

Rice, brown
  Water per cup of grain: 1.5
  Cooking time: 20 minutes, plus 30 minutes with the heat turned off
  Notes: Do not remove the lid during the entire process. Just turn off the heat and let the rice continue to cook in the steam in the pot.

Rice, white, Persian style
  Water per cup of grain: 2, or enough to cover by 2 inches
  Cooking time: 10 minutes uncovered, then 45 minutes covered
  Notes: Boil the rice, then drain, rinse in cold water, and drain again. In a large saucepan, melt 1 Tbsp butter per cup of uncooked rice, add the rice, and stir once to coat well. Cover and steam over very low heat. The bottom should be crispy and golden when done.

Wheat, berry
  Water per cup of grain: 2.5
  Cooking time: 1 hour
  Notes: Good pasta substitute, especially with tomato sauce. Given the time involved in cooking, many suggest soaking first, or slow-cooking overnight. I haven't tried these techniques.

Wheat, bulgur
  Water per cup of grain: 0.75
  Cooking time: 7 minutes, plus 15 minutes with the heat turned off
  Notes: Do not remove the lid during the entire process. Just turn off the heat and let the wheat continue to cook in the steam in the pot. Bulgur has been steel-cut, soaked, and baked, so it cooks fast. For a tasty pilaf, sauté thin-sliced onion with two-inch pieces of vermicelli, then add the bulgur.

16 September 2007

Partial Fractions

I really am taking my own classes, and thinking about my own mathematics. But so far my classes have discussed supermathematics, which is cool but to which I have nothing yet to add, and classical (Lagrangian and Hamiltonian) mechanics, which I had intended to blog about last year. Perhaps I will some day write about such stuff; for now, I'd like to tell you about another topic we've been discussing in my calculus class.

Let's say I have a (proper) fraction m/n (in lowest terms). It's not a very simple expression: n most likely has lots of factors, and it would be nice to understand how much each factor contributes to the whole. For instance:

7/15 = 2/3 - 1/5

Can we always split up a number like this? The best we could hope for is to write a proper fraction as a sum of (proper) fractions with very small denominators.

Let's start by considering the case when n has two relatively prime factors: n = rs. We want to write

m/n = A/r + B/s.

Multiply both sides by n; we see that this is equivalent to solving the following (easy) Diophantine equation:

m = As + Br

This certainly has a solution. For instance, we can use Euclid's algorithm to write

1 = Xs + Yr

and then use A = mX and B = mY. Of course, this choice of A and B will generally be much larger than hoped-for. Never fear, though: we can always shift A and B simultaneously in opposite directions by multiples of r and s (replacing A by A - kr and B by B + ks leaves As + Br unchanged). Thus we can assure that

0 < A < r

in which case

0 < As = m - Br < rs = n

so

-n < m-n < Br < m < n

and thus

-s < B < s.
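(For the computationally minded, here is a minimal Python sketch of this two-factor case; the function names are my own, not anything standard. It finds X and Y with Xs + Yr = 1 by the extended Euclidean algorithm, shifts A into the range 0 < A < r, and lets B be forced.)

    def extended_gcd(a, b):
        # return (g, x, y) with a*x + b*y = g = gcd(a, b)
        if b == 0:
            return (a, 1, 0)
        g, x, y = extended_gcd(b, a % b)
        return (g, y, x - (a // b) * y)

    def split_two(m, r, s):
        # write m/(r*s) = A/r + B/s, i.e. m = A*s + B*r, with 0 < A < r
        g, X, Y = extended_gcd(s, r)        # X*s + Y*r = 1, since r and s are coprime
        assert g == 1
        A = (m * X) % r                     # shift m*X by a multiple of r into range
        B = (m - A * s) // r                # then B is forced, and -s < B < s
        return A, B

    # the example from above: 7/15 with r = 3, s = 5
    print(split_two(7, 3, 5))               # (2, -1), i.e. 7/15 = 2/3 - 1/5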

Thus, we can factor the denominator into prime-power parts, and use induction. Going directly: we're looking for A, B, ..., C such that

m/(rs...t) = A/r + B/s + ... + C/t

If we multiply both sides by (s...t), this is

m/r = A(s...t)/r + integer

so we're looking for an A such that

m ≡ A(s...t) (mod r).

Since r is relatively prime to (s...t), the product (s...t) is invertible mod r, so we can definitely do this; and since r does not divide m (the fraction is in lowest terms), we can be sure that

0 < A < r.

Doing this for each term yields

m/n = A/r + B/s + ... + C/t + integer

and this integer is definitely non-positive, and no more (in absolute value) than the number of fractional summands, since each fraction is strictly between 0 and 1. Thus, if it is nonzero, we can subtract 1 from some of the summands (i.e. subtract the denominator from the corresponding numerator) to make the integer disappear.

11/60 = 1/4 + 1/3 + 3/5 - 1 = -3/4 + 1/3 + 3/5 = 1/4 - 2/3 + 3/5 = 1/4 + 1/3 - 2/5.
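(Here is a hedged Python sketch of the whole procedure, assuming the denominator is handed to us already factored into pairwise relatively prime pieces; the function name is mine. It needs Python 3.8+ for the modular inverse via pow.)

    def partial_fractions(m, factors):
        # m / (product of factors) = sum of A_i/factors[i] + a non-positive integer,
        # with 0 < A_i < factors[i]; the factors must be pairwise relatively prime
        n = 1
        for f in factors:
            n *= f
        numerators = []
        for r in factors:
            rest = n // r
            A = (m * pow(rest, -1, r)) % r   # solve m = A*(rest) (mod r)
            numerators.append(A)
        total = sum(A * (n // r) for A, r in zip(numerators, factors))
        return numerators, (m - total) // n  # the leftover integer

    # the example from the text: 11/60, with 60 = 4 * 3 * 5
    print(partial_fractions(11, [4, 3, 5]))  # ([1, 1, 3], -1): 11/60 = 1/4 + 1/3 + 3/5 - 1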

This last step, where we have to subtract, reminds us that this decomposition is not unique. It's close: we have two choices for each term, but of course making some choices constrains others.

If we're working with polynomials, on the other hand, we never have to subtract. The division works exactly as with integers, but now all inequalities should be written in terms of the degrees of the polynomials. Counting degrees, the left-hand side has degree less than 0 (the numerator's degree is less than the denominator's), and so does each fraction on the right, so the "+ integer" (now a polynomial) on the right-hand side must be 0. This is a proof that the partial-fractions decomposition of polynomial fractions is unique.

Or, rather, there's one more step in the partial-fractions decomposition. What I've written so far allows us to factor the denominator into prime powers, and write one fraction for each power. But we can go one step further: if q < p^d, then we can write

q/p^d = A_1/p + A_2/p^2 + ... + A_d/p^d

with 0 ≤ A_i < p for each i. This is, of course, trivial, and is how we traditionally write integers in terms of a certain "base":

q = q_1 p + A_d
q_1 = q_2 p + A_{d-1}
...
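(In code, this is just repeated division with remainder; a tiny Python sketch, with my own naming:)

    def prime_power_digits(q, p, d):
        # write q/p**d = A_1/p + A_2/p**2 + ... + A_d/p**d with 0 <= A_i < p;
        # the A_i are exactly the base-p digits of q, most significant first
        digits = []
        for _ in range(d):
            q, a = divmod(q, p)
            digits.append(a)
        return digits[::-1]

    print(prime_power_digits(11, 2, 4))      # [1, 0, 1, 1]: 11/16 = 1/2 + 1/8 + 1/16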

One more point bears stating. In the case when we're working with polynomials, and when we can completely factor the denominator into linear factors, partial-fractions decomposition becomes extremely easy, because dividing by a linear factor is trivial:

m(x) = (x-a)q(x) + m(a)

I.e. the remainder mod (x-a) is just the value of the polynomial at x=a. To get the quotients, synthetic division is very fast. This makes the last step trivial, and repeatedly dividing by p allows us to divide by p^d, so really we can do the initial steps quickly as well.
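(A quick Python sketch of synthetic division, again with my own naming: one pass through the coefficients produces both the quotient q(x) and the remainder m(a).)

    def synthetic_division(coeffs, a):
        # divide m(x) by (x - a); coeffs list m's coefficients, highest degree first
        quotient = []
        carry = 0
        for c in coeffs:
            carry = carry * a + c
            quotient.append(carry)
        return quotient[:-1], quotient[-1]   # (coefficients of q(x), remainder m(a))

    # example: x^3 - 2x + 5 divided by (x - 2)
    print(synthetic_division([1, 0, -2, 5], 2))   # ([1, 2, 2], 9), and indeed m(2) = 9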

14 September 2007

Lavender and Apple

This article is also posted (semi)permanently on my recipe files under the same title.

I was very concerned when I saw this month's They Go Really Well Together: Apples aren't in season! I thought. It's the start of September. We're still eating peaches and tomatoes. I had not contributed to any of the previous TGRWTs: one was inconveniently timed, and most included ingredients I was not a fan of. But apples and lavender! Those ingredients are amazing! If only she had waited a month.

Little did I know that Nature and the Farmer's Market had conspired to force me to enter. That Saturday was the first day of apple season: every stand, all of a sudden, was overflowing with amazing apples. I bought close to a dozen. Lavender grows near my house, but one stand also had bunches of gorgeous dried lavender. So I came home that Saturday with all the ingredients I needed.

Besides, my roommate and I were planning on going out to Lavender Country Contra Dance, which was to begin with a potluck. What better to bring than an entirely experimental dish? I ended up making two recipes: the Vegan Apple Lavender Crisp I brought to the contra dance, and a Lavender Apple Iced Tea that I've been enjoying at home. I'll end with Reviews.


Vegan Apple Lavender Crisp

Preheat oven to 350°F. Line a 9x13-inch pan with aluminum foil for ease of removing the crisp later.



Slice
  • 4 cooking apples.



Take six stalks dried lavender; chop and mortar-and-pestle the flowers. Makes
  • 1 Tbsp dried lavender flowers.



Toss with apples. Also toss in
  • 1/2 tsp cinnamon
  • 1/4 tsp lemon zest
  • 1 tsp lemon juice
  • 1 tsp cornstarch



Pour apples into prepared pan.

In medium bowl, combine with fingers
  • 1/2 cup oats
  • 1 cup whole wheat pastry flour
  • 1 Tbsp canola oil
  • 3 Tbsp honey
  • 1 Tbsp water
  • 1/4 tsp salt
  • 1/4 tsp vanilla

Sprinkle over apples. Bake 350°F for 35 minutes, or until crust turns golden.




Lavender Apple Iced Tea

If you buy dried lavender, you will find yourself wanting to use the flowers in recipes, but most of what you bought is stem. Here's a recipe that puts the extra to perfect use.

Cut lavender stems into two-inch pieces.



Place in a half-gallon container.



Also add
  • 1 bag plain black tea (I use PG Tips)
  • 1 packet instant hot apple cider powder (it's cheating, but I had some lying around)

and fill with water. Refrigerate overnight.


Reviews

The apple crisp was a huge hit at the dance. It did taste overwhelmingly of lavender, and it's not clear that the apple and the lavender married well. The cinnamon and lemon also each add their own zing; the four main flavors did make for a nice ensemble (and I'd be curious to try this with more honey in the crust). Overall, there were many compliments, and even more comments of the form "this is really weird". I liked it.

The iced tea, on the other hand, was divine. Without straining, you do get the occasional sip full of stem, but the flavors married amazingly well: it tasted of lavender and apple and tea without tasting of any of these individually. The adjective that comes to mind for the flavor is "smooth".

On the other hand, now a week later, the crisp is gone, and the drink tastes mostly of the tannins in the black tea, and the spices in the cider. C'est la vie.

13 September 2007

Integration

This semester I am a graduate-student instructor for Math 1B — the second half of a first-year calculus sequence. Students attend three hours a week of large lectures with 400 classmates, and also have three hours of GSI-led "section". We get remarkable freedom with our classes: I write my own quizzes and worksheets, and plan my own material. I spend a lot of my time on "big picture" material: I feel like it's not that valuable for the students to watch me solve problems at the chalkboard. Or rather, it's very valuable to do so in an interactive setting, but I have yet to figure out how to make an interactive class with twenty to thirty students. In my office hours I often have closer to six students, and there we work through homework problems together.

We just finished a unit on techniques with which to approximate an integral, and it is about these that I would like to tell you. Although elementary, approximation techniques connect with both advanced mathematics and deep philosophical questions.

Let me begin with the latter. Why do we do calculus? Certainly, few mathematicians will ever use much calculus in their later work. But physicists and engineers and statisticians do: indeed, anyone whose job involves processing numerical data will use calculus for every calculation, even if it seems like she's just asking the computer to do some magic.

So calculus, and, indeed, arithmetic, is primarily a tool for dealing with "physical numbers": measurements and quantities and values. What's interesting about numbers in the real world is that they are never precise: physical numbers are "fat" in the sense that they come (or ought to come) equipped with error terms. Fat numbers cannot be added or multiplied on the nose — errors can propagate, although they also tend to cancel out — and fat numbers do not form a group (the number "exactly zero" is not physical). The universe has a built-in uncertainty principle, not because of quantum mechanics, but simply because we are limited beings, only able to make finitely many measurements. But a "real number" in the mathematicians' sense has infinitely much data in it, and so can never exist in the real world.

Much of the time, the hardest of physical calculations consist of evaluating integrals. We have a (fat) function (which may approximate an honestly "true" function, or may be a convention that we are using to think about messy data), usually consisting of a collection of data-points, which we would like to integrate over some interval. Riemann tells us how to do this: divide the interval into a series of small subintervals, thereby dividing the area under the curve into a series of small regions; estimate the heights of each small region; add these together.

We are now left with the difficult task of estimating the heights of the regions. In some sense, our estimates don't matter: provided our function is sufficiently nice (and if it isn't, we have no right to call it a "function"), then all possible procedures of estimates will eventually converge to the same value as the number of regions gets large. But some methods converge faster than others, and we need to be able to calculate this integral in a reasonable amount of time: neither humans nor computers can make truly large numbers of calculations. Much better would be if we picked a method for which we could bound the number of calculations needed to get within an allowable error.

One easy procedure is to use the height of the function at the left-hand side of the region. To investigate this, let's do a simple example: let's integrate

\int_0^1 x dx

By algebra, we know that this area is actually 1/2. If we use the "left-hand rule" for estimating heights, with a single region, we get an estimate of the total area equal to 0. Our error is 1/2.
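(A minimal Python sketch of the left-hand rule; toy code of my own, not anything from the course:)

    def left_hand_rule(f, a, b, n):
        # Riemann sum using the height at the left endpoint of each subinterval
        width = (b - a) / n
        return sum(f(a + i * width) for i in range(n)) * width

    print(left_hand_rule(lambda x: x, 0, 1, 1))     # 0.0   (error 1/2 with one region)
    print(left_hand_rule(lambda x: x, 0, 1, 100))   # 0.495 (error 0.005 = 1/(2*100))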

Why did I start with this function to test the rule? Let's say I had any linear function y=mx+b. Then the "+b" part cannot affect the error — the error depends on first (and higher) derivatives, and y=x is the simplest function with a nonvanishing first derivative.

By multiplying and using linearity, the error from using the left-hand rule in evaluating

\int_0^1 (mx + b) dx

will be m/2: error estimates must always scale linearly with the derivative on which they depend. And integrating over [a,b]? Holding m constant but scaling [0,1] to [a,b] requires scaling both the x- and y-ranges, so our error should be proportional to m(b-a)^2.

This we can read off from dimensional analysis. If x is measured in seconds, say, and y in feet, then m is in ft/sec, but the integral and the error are both in foot-seconds. If we want to estimate the error E in terms of m and the domain of integration, then E must be m times something with units of seconds squared; since (b-a) is the only other unitful input, we must have E \propto m(b-a)^2, and using the above coefficient of proportionality, we have E = m(b-a)^2/2.

What other parameters do we have? We could divide [a,b] into n regions. If we do, then our total error is

E = n*m((b-a)/n)^2/2 = m(b-a)^2/(2n).

This is the error for a line. But we expect every function to behave locally like a line, so if we have a generic function, we expect this to estimate the error. We can make that estimate into a stronger inequality by being more precise: E should be a number, so we should use some m that estimates the derivative; if we pick m bigger than the absolute value of the derivative, we will bound the (absolute) error. (You can make this rough inequality exactly strict by simply imagining some curvature in the graph. It's clear that the line passing through the left endpoint whose slope is the maximum slope of the function is strictly above the function everywhere.)
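(A quick numerical check of this scaling, reusing the left_hand_rule sketch from above, repeated here so the snippet stands alone; the test function is my own choice:)

    import math

    def left_hand_rule(f, a, b, n):
        width = (b - a) / n
        return sum(f(a + i * width) for i in range(n)) * width

    # sin(x) on [0, 1]: true integral 1 - cos(1), and the slope never exceeds m = 1
    true_value = 1 - math.cos(1)
    for n in (10, 20, 40, 80):
        error = abs(left_hand_rule(math.sin, 0, 1, n) - true_value)
        bound = 1 * (1 - 0) ** 2 / (2 * n)   # m (b-a)^2 / (2n) with m = 1
        print(n, error, bound)               # the error roughly halves as n doubles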

We could have instead used the right endpoint of each interval to estimate the total integral. In the large-n limit, we expect the function over each small interval to behave like a straight line; the right-hand rule thus gives us an error almost exactly the same as the left-hand rule, except with the opposite sign (this is certainly true for a straight line). Thus we can get a much better estimate of the true integral by averaging the right- and left-hand estimate; doing this gives the "trapezoid" estimate, because of how it behaves geometrically on each subinterval.

When we average out the errors of the left- and right-hand rules, we are averaging out any dependence of the error on the first derivative of the function. Indeed, the trapezoid rule gives exactly the right results for the straight line. But there may still be an error. Our estimates on the errors are only true when the functions are straight lines; the failure of the errors to exactly cancel out must depend to first-order on the second derivative, as this measures the deviation in the (first) derivative. Dimensional analysis gives the correct error estimate for the trapezoid method:

Error(trapezoid) \leq m (b-a)^3 / (c n^2)

where c is some numerical constant, and m is a bound on the absolute second derivative. We can calculate c by working a particular example: the trapezoid estimate is exact for lines, and so up to scaling and subtracting a line, any parabola is y = x^2. But the trapezoid rule predicts an area of 1/2 for the integral from 0 to 1, whereas the true area is 1/3; with n = 1, b-a = 1, and m = 2, the error of 1/6 forces c = 12.
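(A short trapezoid-rule sketch, with the c = 12 check on y = x^2; again toy code with my own names:)

    def trapezoid_rule(f, a, b, n):
        # average of the left- and right-hand rules: sum of trapezoid areas
        width = (b - a) / n
        heights = [f(a + i * width) for i in range(n + 1)]
        return (sum(heights) - (heights[0] + heights[-1]) / 2) * width

    # y = x^2 on [0, 1] with n = 1: estimate 1/2, true value 1/3, error 1/6
    est = trapezoid_rule(lambda x: x * x, 0, 1, 1)
    print(est, abs(est - 1 / 3))             # 0.5 and 0.1666..., matching 2/(12*1^2)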

How do we get a better estimate? If we had another method that also had an error depending on the second derivative, we could average again, and get an error of order the third derivative. Notice that by dimensional analysis the power of n in the denominator is always equal to the order of the derivative: the more accurately we approximate our curves by polynomials, the more quickly our estimates converge on the true values.

One such second-derivative-dependent measure is given by the "midpoint rule": estimate the height of each subinterval by the value of the function at the midpoint of the interval. I'll say why this is second-derivative-dependent in a moment. Dimensional analysis gives the same formula for the error as for trapezoids, and looking at y=x^2 yields c=24 (the midpoint estimate is 1/4, so the error is 1/12 = 2/24).

Error(midpoint) \leq m (b-a)^3 / (24 n^2).
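(The corresponding midpoint-rule sketch, with the c = 24 check:)

    def midpoint_rule(f, a, b, n):
        # height of each subinterval taken at its midpoint
        width = (b - a) / n
        return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

    # y = x^2 on [0, 1] with n = 1: estimate 1/4, true value 1/3, error 1/12 = 2/24
    est = midpoint_rule(lambda x: x * x, 0, 1, 1)
    print(est, abs(est - 1 / 3))             # 0.25 and 0.0833...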

Thus the midpoint rule is twice as accurate as the trapezoid rule; taking the appropriately weighted average (two parts midpoint to one part trapezoid) yields "Simpson's rule". This should be called the "parabola rule", since geometrically Simpson estimates each subinterval by drawing the parabola that passes through the midpoint and the two endpoints.

Naively, we might predict that Simpson's rule, by averaging out any dependence on the second derivative, should give an error linear in the third derivative. But in fact it is linear in the fourth. A hands-on way of seeing this is to simply apply it to a cubic. We might as well be interested in the integral from -1 to +1, and we may assume n=1. Given any cubic function, we can subtract some parabola so that it goes through y=0 at x=-1, 0, and +1; the parabola rule predicts an area of 0. And the true area? There is a one-dimensional family of cubics through three given points, and we can exhibit such a family through these three: the scalar multiples of x^3 - x = x(x-1)(x+1), so those are all of them. These are odd functions, so the true area is also 0. Thus, Simpson's rule is exact on cubics, and the fourth derivative is the smallest that can create an error.
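(A sketch of Simpson's rule written as that weighted average, two parts midpoint to one part trapezoid, checked on a generic cubic; toy code, my own names:)

    def simpson_rule(f, a, b, n):
        width = (b - a) / n
        trap = (sum(f(a + i * width) for i in range(n + 1))
                - (f(a) + f(b)) / 2) * width
        mid = sum(f(a + (i + 0.5) * width) for i in range(n)) * width
        return (2 * mid + trap) / 3          # two parts midpoint, one part trapezoid

    # x^3 + 2x^2 - x + 1 on [0, 2]: the true integral is 28/3, and Simpson gets it exactly
    print(simpson_rule(lambda x: x**3 + 2 * x**2 - x + 1, 0, 2, 1))   # 9.3333... = 28/3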

There is a more elegant way of seeing that Simpson's error depends on the fourth and not the third derivative, which also explains why the midpoint rule, which naively just measures the height at a point and so ought to have an error controlled by the first derivative, really depends on the second. In particular, both the midpoint rule and Simpson's rule are symmetric under the reflection x \to -x. But the first and third derivatives (and indeed all odd derivatives) pick up minus signs under this reflection. By dimensional analysis, the error in an estimation method must be linear in the controlling derivative, and so any symmetric method cannot depend on odd derivatives.

Thus, I end with a cute challenge: come up with an (interesting, simple) estimation method with an error that really does scale as the third derivative of the function.

03 September 2007

Don't tell the department

I've been spending so much more time thinking about food than about mathematics. I had promised myself that math, the lowest-priority of my three main passions (with cooking and dancing) last year, would move to first in graduate school. So far it's second only because I haven't been dancing in months.

Is it a bad sign that I'm already fantasizing about dropping out and starting a restaurant or bakery? On my bookshelf, waiting to be read cover-to-cover, is a copy of Culinary Artistry, which is about cheffing. Dorenburg and Page distinguish between three kinds of cooking:
  1. "Cooking as a trade", where your primary goal is sustenance, and with your limited repertoire you're hoping your customers go away thinking "I'm full."
  2. "Cooking as craft", the style promulgated in the greatest cookbooks, has its main goal enjoyment; a chef should have a wide repertoire of classic dishes, and hope that the customers go away thinking "That was delicious."
  3. In "cooking as art", on the other hand, a chef prepares her own recipes, and any given night the menu will be very limited; customers should be entertained, and go away thinking "Life is beautiful."


This trade/craft/art distinction is useful in other disciplines. Everyone should be able to (but many can't) do mathematics at the most basic of trade levels — monetary arithmetic, being duly suspicious of newspaper statistics, etc. It bears remembering that tradesmen often have incredible skill: Alice Waters a few years ago left the artistry of Chez Panisse to try to reform public-school food; she is now cooking trade food for the masses. I wonder whether an actuary considers her mathematics to be trade, craft, or art.

I feel like the majority of expository mathematics writing, and the entirety of an undergraduate math major curriculum, falls under "mathematics as craft": classic results presented (hopefully) well. There is certainly an artistry to teaching well, but the teacher-as-artist focuses on the delivery, not the mathematics itself. I have not yet tried serious mathematics research; I know I am excited by artistic mathematics, but I also know that I adore teaching, and I have often comforted myself with the reminder that, if research is not for me, I can have a good life teaching at a liberal arts college.

To do original, beautiful research, however, requires the creativity of an artist (and lots of crafts- and tradesmanship). A calculation can prove a new theorem, just like a recipe can create a new meal; a few geniuses create new recipes, new fields, new mathematical insights. If I can learn to be a mathematician-as-artist, then I will stay in research.

An advanced social dancer is an artist, although often not a great trades- or craftsman. An advanced ballerina is a virtuosic tradesman. When I go to Pilates, I am engaging in movement-as-craft.

Chez Panisse has a fixed menu every night: Mondays are $55 + drinks + 17% tip + 8.75% tax / person, and the price increases through the week. Other great restaurants also have small, constantly changing menus. Moosewood offers a limited, changing vegetarian menu every night: your choice of three or four entrées, a couple salads, etc. In both cases, recipes are original, based on seasonal organic ingredients and the tastes of the chef.

If I were to start a restaurant, I'd want it to be like the Moosewood. Cooking as craft does not excite me. I like knowing how to make all sorts of dishes, of course, because I like to understand how food works — the science and history — but I would never want to work in a restaurant where customers pick dishes from an extensive fixed menu, and I make those. As chef, I am the dom: I pick and create a menu that you will eat and hopefully find transcendent.

But the fantasy is not that I would run a restaurant (at best, I'd be a member of a cooperative restaurant like Moosewood). Rather, I'd like a bakery, making gourmet breads, and possibly serving pastries, coffee, sandwiches, and soup. Here it is food-as-craft, but as baker I don't serve anyone. I still make my loaves, and then sell the completed objects to you. Your choice is limited, and what I have available will change: there will be some staples, of course, but each day two soups will be available, and they will change by the week (one on Sundays, one on Wednesdays), and salads will be based on seasonality. If the bakery is successful, I will continue to expand: breads first, then pastries and coffee, then sandwiches and salads, and then dinners with a daily-changing menu. Such an operation is much more work than one person can manage.

In fact, however, I will never own a bakery, although I will continue to bake for myself, friends, and family. What I like best is that I can create everything from scratch, from raw ingredients. And I like sharing my food, and eating what other people have made, also from raw ingredients. I'm exceedingly happy when I am spending my time creating not just food, but also objects: I would love to learn more about woodworking, plumbing, pottery, knitting. However, I do hope to remain an academic. I love thinking about mathematics and communicating it.

The current fantasy, then, does not include selling food, but also does not include purchasing much. I'd rather move away, as much as possible, from the industrialized economy, towards one populated by human craftsmen and artists. Thus, the fairy-tale future involves a university, yes, where I will work six to nine months a year, but also a large house on a lot of land, outside the city but within the public-transportation network (there is no vehicle I enjoy more than the train, except possibly the bicycle). For the three months of summer I will be a full-time farmer, growing, pickling, and canning enough vegetables to live on.

Especially in the North, where the growing season is short but furious, I could avoid too much overlap between Spring planting, Fall harvesting, and Winter teaching. I have grown up in the West, and would like to return to the Pacific Northwest; on the other hand, I have no real desire to stay in the U.S.; perhaps I will live in British Columbia. At such latitudes, in this fairy-tale I will practice mathematics by night and farming by day.

Many of my recipes, some posted here, some e-mailed, some posted elsewhere, are available here.