Dorothy Fennel ([personal profile] dot_fennel) wrote 2005-10-14 04:43 pm

[books] GEORGE LAKOFF & RAFAEL NUNEZ - Where Mathematics Comes From

I can't criticize the book as a whole until I've finished it, but from where I am (maybe 1/3 of the way), its primary argument seems so ludicrous that I am not optimistic they'll pull it off.

The gist, as I understand it: Mathematicians believe that math is abstractly true in some Platonist-like sense, but they're wrong! Mathematicians have created math using metaphors that come from the human brain's structure. Math makes sense to us because its roots are in the particular way our senses work, not in The Way Things Are.

I was expecting a lot of information about studies in cognitive psychology; the book opens with a description of how one can show that very young infants expect putting one object in a box and then putting another one in to yield two items, and that this innate arithmetic works in general (1 - 1 = 0, 1 + 2 = 3, etc.) for numbers up to 3. But that is (so far) the end of that. Other cognitive results are just barely alluded to as the authors use them.
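For what it's worth, the claim is easy to state mechanically. Here's a toy sketch of my own (none of this code is from the book; the only datum I'm taking from it is the exact-up-to-3 cutoff) of what "innate arithmetic up to 3" amounts to:

```python
# Toy model of the infant-arithmetic schema described above.
# My illustration, not L&N's formalism; the only fact taken from
# the book is that the schema is exact only for quantities up to 3.

SUBITIZING_LIMIT = 3  # quantities the schema tracks exactly

def perceive(count):
    """What the schema 'sees': an exact small number, or just 'many'."""
    return count if count <= SUBITIZING_LIMIT else "many"

def expect(before, change):
    """Expected contents of the box after adding/removing objects,
    as far as the innate schema can predict."""
    if perceive(before) == "many":
        return "no prediction"  # beyond the schema's range
    return perceive(before + change)

# The baby predicts 1 + 1 = 2, 1 - 1 = 0, 1 + 2 = 3 ...
assert expect(1, +1) == 2
assert expect(1, -1) == 0
assert expect(1, +2) == 3
# ... but has no exact expectation once the quantities get bigger.
assert expect(5, -1) == "no prediction"
```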

So, it seems that the brain has several schemata like the one that makes babies with almost no experience of the world believe that a box with three balls in it will have two balls after one is removed. This is more like a perception than what we think of as calculation; it precedes (I'm assuming) the baby apprehending that people sometimes speak a word 'three' which relates to having three objects around, or anything else we would associate with math as an abstraction. It's right there in the way our brains run right after the operating system has been installed. The other major schema Lakoff and Nunez talk about has to do with distances: motion causes something to be in a place other than it was, distances are additive, and so on. I don't think their failure to detail how cognitive scientists know most of these things hobbles their argument, but I would find it interesting.

From these underlying, physical, contingent, brain-related schemas, L&N carefully derive many basics of mathematics. I can believe that this is, in fact, how things work, and that metaphors between physical domains account for the creation of many new abstractions in math-- for example, negative numbers have no obvious physical correlate in the domain 'moving pebbles in and out of a box' the way they do in 'walking in multiples of a unit distance along a line' (i.e. you can walk back the way you came). But these two domains behave the same otherwise, and so, abstracting the unusual aspect of one domain into the other, you get a concept of negative numbers in general and debt in particular. Something like that.
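To make the contrast concrete, here's a little sketch of my own (the two domain names are L&N's; the code is not): a pebble-box domain where "remove more than you have" is simply impossible, next to a path-walking domain where the same operation is perfectly sensible and lands you at a negative coordinate.

```python
# Two grounding domains for arithmetic, per L&N's story.
# The code itself is my illustration, not anything from the book.

class PebbleBox:
    """Object-collection domain: counts can't go below zero."""
    def __init__(self):
        self.pebbles = 0

    def add(self, n):
        self.pebbles += n

    def remove(self, n):
        if n > self.pebbles:
            # This domain has no state corresponding to -1.
            raise ValueError("can't remove pebbles that aren't there")
        self.pebbles -= n

class PathWalker:
    """Motion-along-a-line domain: walking back past the start is fine."""
    def __init__(self):
        self.position = 0

    def walk(self, steps):  # negative steps = walk back the way you came
        self.position += steps

box = PebbleBox()
box.add(2)
walker = PathWalker()
walker.walk(2)

walker.walk(-3)
print(walker.position)  # -1: a meaningful state in this domain

try:
    box.remove(3)       # the box domain can't express the same thing
except ValueError as e:
    print(e)
```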

What L&N seem in no hurry to prove is that any of this constitutes the indictment they promise of mathematical truth's supposed independence from humanity. The closest they've come yet is on page 102:

To the novice in metaphor theory it may not be obvious that not just any metaphorical mapping will meet these constraints. Indeed, hardly any do. For example, arithmetic is not an apple. The structure of an apple--growing on trees; having a skin, flesh, and a core with seeds; being about the size to fit in your hand, having colors ranging from green to red; and so on--does not map via any plausible metaphor (or any at all) onto arithmetic. The inferences about apples just do not map onto the inferences about numbers.

Yes, arithmetic is not an apple. For that fact to matter, however, we would need an account of how cognition could possibly occur in a brain where the most basic aspects of how we conceive objects as being distinct from one another were replaced with the fact that Fuji apples look pretty. This is so ridiculous that I've constantly looked for signs that I'm misunderstanding as I read; no luck.

I studied math in college, and I felt personally insulted by the book's introduction, so I've been assuming that, roughly speaking, I am similar to the neo-Platonist mathematician whom L&N want to enlighten. But even I have never taken speeches about the beautiful abstract validity of math to imply that the 4-color theorem would hold true in a universe where matter itself worked differently. I'm not even sure what that would mean. "Oh, sure, if you replaced the strong nuclear force with a banana, you could still meaningfully construe a thing to be the set of all anticommutative algebras that satisfy the Jacobi identity." (To which an unthinkable entity living in a universe that proves L&N's null hypothesis would presumably say, "Totally! Wait, what's a 'thing'?")

There's an intermediate possibility, which is that the universe could *be* how we perceive it, with our perceptions still so radically different that 'object' would mean nothing. Again, I don't buy it. I would be open to the idea, if L&N wanted to prove it to me-- they do research on the fundamentals of cognition; they can probably imagine some pretty bizarre shit-- but they aren't headed in that direction, as far as I can tell. More on this if I'm still interested when I finish reading.

A small quibble: One point I did like a lot was a comment about math education that basically said too many math teachers assume that their applications of everyday language to math make sense automatically, when in fact, the application only works once you understand the mathematical structure in question enough to see an analogy. They give the example of dealing with infinity-- students are often confused at first by the statement that "there are just as many even numbers as integers". Even when given a 'proof', it still seems fishy. This, L&N point out, is because 'as many as' has multiple characterizations in everyday life. It can mean 'these two groups can be matched up one-to-one with nothing left over' or it can mean 'you can take THIS many items out of THAT pile, and if you do you'll have none left'. Ordinarily, these line up; with infinity, they don't. Explaining this would be better than saying, "No, you see, it's *infinite*," as though a vague understanding could become a complex understanding if you sat and thought about it hard enough.
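Their point about the two everyday senses of 'as many as' is easy to demonstrate. A quick sketch (mine, not the book's): the pairing n <-> 2n matches integers with evens one-to-one with nothing left over, yet "remove the evens from the integers" obviously leaves the odds behind. The two readings agree on finite piles and come apart only at infinity.

```python
# Illustrating the two everyday readings of "as many as" that L&N
# say students conflate. The code is my sketch, not the book's.

def pair(n):
    """Cantor's reading: match each integer n with the even number 2n.
    This is a bijection, so 'nothing is left over' on either side."""
    return 2 * n

# Reading 1: one-to-one matching. Every integer in a sample window gets
# a distinct even partner, and every even number 2n is hit by some n.
window = range(-5, 6)
assert len({pair(n) for n in window}) == len(window)      # no collisions
assert all(pair(e // 2) == e for e in range(-10, 11, 2))  # hits every even

# Reading 2: "take THIS many out of THAT pile and none are left."
# On a finite pile the two readings agree: 5 evens vs. 10 integers,
# so removing the evens leaves something, and no bijection exists.
finite_pile = set(range(10))
finite_evens = {n for n in finite_pile if n % 2 == 0}
assert finite_pile - finite_evens != set()  # the odds remain

# On the integers, removing "the evens" still leaves all the odds,
# even though the bijection above matches the two sets up exactly.
# That divergence is the whole source of the students' confusion.
```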

But then (here's the quibble), they go and make a similar pedagogical error, which turns out to be not just unhelpful but kind of freaky when you move from the domain of conceiving infinity to the domain of defining words. Page 156:

A process conceptualized as not having an end is called an imperfective process--one that is not "perfected," that is, completed.

Check that out! The actual connection between 'imperfective' and 'perfected' runs through Latin, but L&N blithely assume that because the people who invented the jargon spelled 'imperfective' with 'perfect' in it, you can point out that connection and readers will find the meaning of 'imperfective process' as easy to remember as if the term 'completed process' had been defined in terms of 'complete'. The way they imply words can be derived from one another resembles the way derivations ordinarily happen, just as the Cantorian definition of infinity is much like the intuitive sense that many students have. But that won't help someone for whom the connection doesn't quite seem right and who, for that reason, has trouble remembering it.

Well, the analogy would only really hold if they had written that in a book on increasing your vocabulary. I just felt the need to show that to someone.

[identity profile] dominika-kretek.livejournal.com 2005-10-14 09:40 pm (UTC)
I would need more context to comment on this particular text constructively, but I am reminded of some things:

What does "objective" mean? Some people mean "true for all humans." Other people mean "true even if there were no humans." Other people mean "true in all possible universes." Most people fail to ever draw these distinctions.

What does "true" mean? Some people would say that some things are true regardless of whether humans are thinking about them. Other people think that regardless of what may or may not exist, "true" only makes sense when talking about what humans think about things, because truth pertains not to things but to human beliefs about things.

I myself have a hard time imagining that reason and its beloved first-born, mathematics, make any sense to talk about outside the world under human consideration. That idea probably seems either banally obvious or blasphemous. Some people try to refute it by saying that, if confronted with some bizarro alien world, humans could still get purchase on it by reasoning about it. I bet they could, but that's just because humans would still be the ones doing the looking. The weirder the specimen world was, the harder it would be to reason about, until finally causality itself ceased to seem applicable and not even basic logic worked.