Monday, November 15, 2010

Bluer than blue

On my profile I have answered the question "Something I can't find using Google" with "The lyrics to the version of 'Bluer than blue' we sang in high school jazz." It isn't that you can't find lyrics to the song. The problem is that there are many songs out there with that name, with unrelated lyrics.

Well, my question has been answered. Someone who was in my high school jazz choir found my profile and emailed me her memory of the lyrics. Courtesy of Lisa Schmidt, here they are:

Bluer than Blue

Once I thought the world was ours to share
Always knew there'd be someone to care
Now I realize
Promises were lies
Wish I'd find the one who really loves me

I'm feelin' bluer than blue
I'm all alone and feeling bluer than blue
My baby left me, now I'm cryin' like a fool
I want somebody who will love me
Oh well I'm bluer than blue
I'm all alone and need somebody that's true
Looking around to see if you'll come back to me
I want somebody who cares

Guess I've learned my lesson / Broken all the rules
Well, you know I try to recall
One thing I am certain / Love's a game of fools
You're never ready when loneliness calls

Bluer than blue
I'm all alone and need somebody that's true
Hanging around and hoping you'll come back to me
I want somebody who cares

[Here there was a vocal interlude, where we all went "doo, doo do dooo, doo wop etc."]

Guess I learned my lesson / When we started kissin'
Well, you know I try to recall
One thing I am certain / love's a word for hurtin'
You're never ready when loneliness starts calling [key change]

Bluer than blue
I'm all alone and need somebody that's true
Looking around to see if you'll come back to me
I want somebody who will love me, come on baby love me
Can't you see I'm lonely?
I want somebody who's gonna care [she's bluer than bluuuuuuuue]


Thank you Lisa!

Now a bit of reminiscing. When I was in choir, Canada did something really good: it arranged for a variety of regional competitions, some of which would qualify you for the nationals. In those competitions each choir had 15 minutes to do a prepared set. Then one of the judges would come up and work with the choir for 15 minutes. Then the choir would go backstage and be given a workshop. I have no idea if they still do it, but I can't praise it highly enough.

The point of this was not the competition, though that was serious, but rather that it taught the teachers how to teach better. This quickly made everyone better. A lot of the choirs were simply amazing. I had the good fortune to be at Esquimalt when the choir teacher, Eileen Cooper, began really going through this system and got mentored by the choir teacher at Argyle. In the time we were there, we went from a mediocre choir using instrumental mikes, with no idea how to really hold them, to being among the top high school jazz choirs in the country. To this day I believe that I was only tolerated because it was so hard to find baritones who were willing to be part of the choir.

But be that as it may, most of my positive memories of high school involve choir. And as a parent it has been wonderful that I am able and willing to sing for my kids.

Tuesday, October 26, 2010

Practice poker online for free

I enjoy playing poker. I'm not that good, but I do OK in small home games and I enjoy myself.

Recently I've been having a lot of fun playing online for free. How, you ask? Well, go to http://www.fulltiltpoker.com/, sign up for a free account, and start playing with play chips. Work your way up to the level you've got skills for, then have fun.

Now everyone's first reaction when I tell them this is that people play really stupidly when they are playing online for free, so it isn't worth trying.

But this is not necessarily so! When you play on Full Tilt you can reload to 1000 play chips every 5 minutes. And at the bottom tables people play like complete donks because they can with impunity. However there are a lot of different buy-ins. As you move up, the skill level increases. So you get a gradation of skill from ridiculously soft at 1/2 up to a boss level of 50K/100K. At the latter level typical buy-ins are 4,000,000. The game I just looked at had 2 players, each with about 25,000,000 in front of them. You can't play that game unless you've managed to earn your way to that level. Anyone who does that has legitimately beaten a *lot* of other players, and put a lot of time in. Do you think those people play poorly?

If you've got basic skills (easily acquired), it isn't hard to beat the easy "donk" levels of free poker online. Then you can move up to more fun levels that require more skill. And once you can play poker well? Well, then you can buy in with some confidence about your skills (at least for low level play) and enjoy both the game and the money you're making at it.

Here is a data point. My sister read a couple of books, went online, got a free account, worked her bankroll up to a few hundred thousand and was playing at the 100/200 level. Then she began playing for money, entered the WPS, and look what happened.

So if you find poker fun, and want to practice, you can do so online for free. The only thing you'll lose is your time. Which isn't a cost if it is fun for you.

Wednesday, August 25, 2010

Piaget Water Level Test

Many years ago in a dentist's office I read an interesting magazine article. It talked about questions whose results split sharply along adult gender lines: until puberty there is no measurable difference in performance on such a question, but after puberty there is a large one. As examples they offered a verbal question that women do well on, and a question where they drew two cups, one tilted, and asked people to fill in the water level if both were half full. (Artistry not required.) Men do well on this. Most women don't. And education doesn't matter: women who graduate from college do worse than men who drop out of high school.

I botched the verbal one due to something that looked silly to me, got the male one, and dismissed the article as garbage. Then a month later it came up in a conversation with my girlfriend, and she got the men's question wrong. I couldn't remember the one that women do well on. This made me curious, so I asked my mother the same question and she got it wrong. I can call my mother many things, but unintelligent is very much not among them. As an example, when she was at Stanford in the 50s they gave her a battery of ability tests. The only one she was not in the top 1% on was manual dexterity.

(Side note for married men. Do not rush to give your wife this test. I've started many marital fights that way, and I have never tracked down a question that women do better than men on. I know there is one, and I know that at 19 I couldn't do it, but I don't remember what it was and have never encountered another.)

Since then I've learned that this question is called the Piaget Water Level Test. The background is that Piaget found that children typically gain particular mental abilities at particular ages, tied to specific growth spurts, and so he collected examples. For instance there is an age at which children learn that people who were not present would not have seen what they did, and another at which children learn that pouring water into a tall thin glass does not give you more water than you started with. As he collected examples, he eventually offered the water level test. Which, oddly, men gain the ability to do during our last growth spurt, puberty, and most women never do.

I've seen various estimates for how well people do on it. One was that 90% of men can do the task, and only 30% of women. That seems to be a good fit with my experience.

An interesting side note. It is widely noted that there is a large gender imbalance within programming. But I've found through experience that programmers I know, whether male or female, have a 100% success rate on this question. I have no idea what to make of this tidbit, but I find it interesting.

Incidentally, for those wondering what the answer to the question is: the water level is horizontal (parallel to the ground). The most common answer among women I've asked is to draw the water level parallel to the bottom of the cup. The second most common answer from women is to realize that there is a trick, and to draw the water level tilted twice as much as the cup. When the correct answer is pointed out, women recognize it as very obvious. I imagine that their feeling is much like how I felt after botching the verbal question that I blanked out of my memory.

BTW if anyone knows of any question with the reverse gender characteristics, I've been looking for it for over 20 years. It is frustrating - I know that at least one such question exists, but I've never found it.

Thursday, August 19, 2010

Analysis vs Algebra predicts eating corn?

I like learning about odd connections between disparate things. This probably is the oddest example that I know.

Broadly speaking, mathematicians can be divided into those who like analysis, and those who like algebra. The distinction between the two types runs throughout math. Even those who work in areas that are far from analysis or algebra are very aware of the difference between them, and usually are very clear on where their preference lies. I'll delve into this in more depth soon, but for now let's just take it for granted that this is a well-known distinction, and it has meaning for mathematicians.

Back when I was in grad school there was a department lunch with corn on the cob. Partway through the meal one of the analysts looked around the room and remarked, "That's odd, all of the analysts are eating corn one way and the algebraists are eating corn another!" Everyone looked around. In fact everyone was eating the corn in one of two ways. One way was to munch over the length of the corn in a straight line, back up, turn slightly, and do another row across. Kind of like how an old typewriter goes. The other way was to go around in a spiral. All of the analysts were eating in spirals, and the algebraists in rows.

There were a number of mathematicians present whose fields of study didn't make it clear whether they were on the analysis or algebra side of things. We went around and asked, and in every case the way they ate corn matched their preference. Since then I've made a point of amusing myself by asking mathematicians I meet whether they prefer algebra or analysis, and then predicting which way they will eat corn. I'm probably up to 40 or so by now, and in every case but one I've been able to correctly predict how they eat corn. The one exception was a logician who claimed to be exactly on the fence between the two. When I explained the corn thing to him he looked surprised, and said that he had an unusual way of eating corn. He went in loose spirals! In other words he truly was a perfect combination of algebra and analysis!

If you have even a passing familiarity with probability, then despite how unbelievable it initially seems that the type of mathematics you prefer is connected to how you eat corn, it is pretty much certain that there actually is a very strong connection. If you believe, as I do, that this difference is connected to how we think about other things, then there must be some odd connection between how we like to understand the world and how we eat corn. Why is another matter.
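To put a rough number on that claim, here is a quick back-of-the-envelope check in Python. It is a sketch only, using my rough tally of 40 predictions with a single miss; if preference and corn-eating were unrelated, each prediction would be a fair coin flip:

from math import comb

# Rough tally from above: about 40 predictions, 1 miss.
n, correct = 40, 39
# Probability of doing at least this well if each prediction were a coin flip.
p = sum(comb(n, k) for k in range(correct, n + 1)) / 2**n
print(f"chance of this under pure luck: {p:.1e}")  # about 3.7e-11

At odds of roughly 1 in 27 billion, luck is not a serious explanation.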

How do I explain the distinction between algebra and analysis? Well the best way to understand it is to ask you to study advanced mathematics. You will have to take many courses with the word "algebra" in the name, and others with "analysis" in the name. By the time you're done you'll have experienced the difference, and you'll be clear on which you prefer. Odds are you won't do that, but that is the most reliable way to come to understand it.

If I have to wave my hands and explain it, I would explain it like this. In algebra there are sequences of operations which have proven to be important and effective in one circumstance. Algebraists try to reuse these operations in different contexts in the hopes that what proved effective in one situation will be effective again. By contrast an analyst is likely to form an idiosyncratic mental model of specific problems. Based on that mental model you have intuitions that let you carry out long chains of calculations that are, in principle, obviously going to lead to the right thing. Typically your intuition is correct to within a constant factor, and you're only interested in some sort of limiting behavior so that is fine.

If you don't know any advanced math, the odds are about equal that my explanation will mislead you as give you an idea of what I am talking about. You'd be better off figuring out your preference by looking at how you eat corn. That said, the distinction carries through into other subjects that I've learned about. But not in a clear and obvious way.

For instance I've noticed the difference cropping up in programming. The distinction is often hard to explain. There are a wide variety of programming techniques, and most programmers have only really learned a few. Some of those techniques appeal to analysts, and others to algebraists. But if you've only been exposed to techniques that are a good fit for one, then how do you know which you'd prefer? Worse yet, when two programmers talk and have different experience bases, how can they tell whether their natural intellectual tastes are similar or different?

Let me give some examples. Upon my first encounter it was clear to me that object oriented programming is something that appeals to algebraists. So if you're a programmer and found Design Patterns: Elements of Reusable Object-Oriented Software to be a revelation, it is highly likely that you lean towards algebra and eat your corn in neat rows. Going the other way, if the techniques described in On Lisp appeal, then you might be on the analytic side of the fence and eat your corn in spirals. This is particularly true if you found yourself agreeing with Paul Graham's thoughts in Why Arc Isn't Especially Object-Oriented. There was a period when I thought that the programming division might be as simple as functional versus object oriented. Then I encountered monads, and I learned that there were functional programmers who clearly were algebraists. (I know someone who got his PhD studying Haskell's type system. My prediction that he ate corn in rows was correct.) Going the other way, I wouldn't be surprised if people who love what they can do with template metaprogramming in C++ leaned towards analysis and ate their corn in spirals. (I haven't tested the last guess at all, so take it with a grain of salt.)

Going out on a limb, I wouldn't be surprised to find out that where people fall in the emacs/vi debate is correlated with how they eat corn. I wouldn't predict a very strong correlation, but I'd expect that emacs is likely to appeal to people who would like algebra, and vi to people who like analysis.

And now to wrap up, why would how we eat corn say something about how we think? Here is what I think.

When you pick up a piece of corn on the cob, you have two cues for how to eat it. The first is that the corn is laid out in very nice rows. How can you not follow the lines that are laid out for you? The other is that as you eat, your teeth scrape down the corn. If you twist your wrist, you'll eat more efficiently. Why would someone want to eat inefficiently?

My best guess is that the cue you notice and follow reflects a natural tendency about how you tend to think in general. And this tendency is tied to such things as what kind of math you prefer or what programming techniques would prove interesting for you.

Tuesday, August 3, 2010

How did pterosaurs get so big?

The pterosaurs got going something like 230 million years ago. They died out with the dinosaurs 65 million years ago. Over their history they came in all sizes, from Rhamphorhynchus, which was the size of a sparrow, to Quetzalcoatlus, with a wingspan variously estimated at 30-40 feet. Spread out it was similar in size to a T. rex, and on the ground its estimated height was close to a giraffe's. The largest bird ever, Argentavis, was much, much smaller than that.

However the birds arose about 150 million years ago. Feathers were a big advantage in flight, and over time the birds took over a lot of what the pterosaurs were doing. But the pterosaurs did not go away. Instead they wound up in the niches for very big flying animals. This is competition through specialization, which I talked about some time ago when I discussed the Neanderthals.

This coexistence provides evidence that birds were generally better fliers, but there was a niche for very big fliers that the pterosaurs were better at. The question I'm curious about is why the pterosaurs were better at being big fliers than birds were.

I have a theory. But before I can explain it I need to provide some background.

Wings have evolved in vertebrates three times: in pterosaurs, birds, and bats. All three started with the basic vertebrate limb structure and found different ways of constructing a wing out of it. In both bats and birds the arm bones form part of the wing. Now an important fact about vertebrate bones is that different bones grow at different rates as you grow up. In particular arm bones start off proportionally shorter and catch up later. The result is that in birds and bats, babies have the wrong proportions for their wings to be useful. Therefore baby birds and bats can't learn to fly until they have achieved a significant fraction of their full size.

Pterosaurs were different. Their wings were entirely constructed from wrist and hand bones. (Fully half the wing was supported by an elongated 4th finger.) Hand bones stay in proportion your whole life. Comparisons of fossils of pterosaurs at different ages in the same species verify that their wings always had good proportions for flight. Furthermore we have fossils from baby pterosaurs that died miles out at sea, which is direct evidence that they flew young.

What does this have to do with the eventual size of the animals? Well, birds cannot learn to fly until they are near full growth, which means that they need intensive care from their parents until they reach that size. This care is a significant fact of life for bird species, and is why most types of birds have both parents providing care, unlike most mammals, where the mother is generally capable of taking care of young on her own. The larger the bird is, the harder this care is to provide.

By contrast pterosaurs were probably able to take care of themselves at a much younger age, and smaller size. Which means that they were free to grow for a lot longer, to a lot larger size, without unduly taxing their parents. (In truth we don't have any data indicating how much or little parental care baby pterosaurs got. But I suspect it was less than birds get.) And, I believe, that is why they were able to get so much larger than birds.

Random trivia I came across in preparing this post. The reason bats can't fly during the day is that their wings are vulnerable to sunburn. There is evidence that pterosaurs had a protective layer so they didn't have this issue. Also birds have stiffer wings than bats do, which provides better lift and less maneuverability. Pterosaurs had more joints in their wings than birds do, but didn't have finger bones inside the structure of their wings like bats, which suggests to me that their wings would have been somewhere in between.

And my whole train of thought was started by watching National Geographic - Sky Monsters. If you're interested in pterosaurs, it is a worthwhile video.

Tuesday, July 6, 2010

Dozenal glyphs - a modest proposal

A proposal that shows up every so often is to use dozenal arithmetic. The idea is simple: in base 12, more fractions work out conveniently. You can easily divide things 3 and 4 ways (in dozenal, 1/3 = 0.4 and 1/4 = 0.3, with no repeating digits). This is frequently convenient, and is why we often sell things in dozens. It is also why the much maligned Imperial system sneaks factors of 3 (3 tsp in a tbsp) and 12 (inches in a foot) into various places. And there are numerous minor benefits, such as the fact that the multiplication table becomes significantly simpler and therefore easier to learn.

When the French created the metric system, they based it on factors of 10 everywhere. They even went so far as to try to measure angles in gradians (a quarter circle had 100), and to use decimal time. The world as a whole rejected decimal angles and times, but has adopted decimal metric everywhere else. Which is very convenient for scientists, but is hard to divide into thirds and somewhat inconvenient for quarters. Furthermore the persistence of both systems results in occasional annoyances, like the fact that in daily life we usually prefer to measure speed in km/h, but for energy and power calculations the units only work out properly if you measure in m/s. The result is an annoying factor of 3.6 (3600 seconds per hour divided by 1000 meters per kilometer) that shows up converting between them.

The other day I read An Argument for Dozenalism that made many of these arguments. Nothing new. However it made the interesting point that ideally a dozenal arithmetic would have its own set of glyphs. It suggested that 0 and 1 could be kept, but everything else should be changed. In my opinion that argument is correct: it is very confusing if 21 is sometimes 5*5 (read as dozenal) and sometimes 3*7 (read as decimal), which makes a mixed dozenal and decimal world harder than it needs to be. But this raises the interesting question of what a logical dozenal set of glyphs might look like.

I've amused myself with thinking about this, and I have a proposal for a set of glyphs that are (mostly) unused, easy to learn, quick to draw, and much more logical than existing ones. All have a vertical line in the middle. At the top there is a choice of a hook starting on the left, a straight end, or a hook starting on the right. At the bottom there is a choice of a hook ending on the left, straight down, a hook ending on the right, or bending right to cut across the vertical line. This gives 12 possibilities. By incrementing the bottom first, and the top when you get a carry, you get a sort of mixed base-3,4 system. Which means that a glance at the bottom of the glyph tells you immediately its value mod 4. A glance at the top tells you whether it falls in the range 0-3, 4-7 or 8-11.
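For the programmers reading, here is a sketch of the scheme in Python. The stroke names are my own labels for illustration, not part of the proposal; the substance is just splitting a digit into its quotient and remainder mod 4:

# The stroke names are my own labels; the proposal only fixes the structure.
TOPS = ["hook from left", "straight", "hook from right"]                     # digit // 4
BOTTOMS = ["hook to left", "straight down", "hook to right", "cut across"]  # digit % 4

def glyph_parts(d):
    # The bottom increments first; the top increments on each carry past 4.
    assert 0 <= d < 12
    return TOPS[d // 4], BOTTOMS[d % 4]

for d in range(12):
    print(d, glyph_parts(d))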

So instead of the current system of 10 random symbols to memorize, you get two simple rules. While at first it seems bizarre, it grows on you quickly. And it gives you a very quick way to tell whether a given number is written in decimal or dozenal. To get a sense of what it looks like, take a look at the 12 times table (my apologies for the handwriting - my none too good penmanship gets worse when I'm using a mouse pad to draw with):



Note in particular how regular the patterns are for 2, 3, 4, and 6. Doesn't this look easier to memorize than the decimal times table? As an exercise try writing out your favorite sequences. Whether you're writing out squares, powers of 2, or primes you'll see that more patterns leap out at you in dozenal, making them easier to learn.
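If you want to try that exercise without inventing glyphs, here is a small helper; I use A and B as stand-ins for the two extra digits, since the proposed strokes aren't in any font:

def dozenal(n):
    # Convert a non-negative integer to a base-12 string, with A and B
    # as placeholder digits for ten and eleven.
    digits = "0123456789AB"
    out = digits[n % 12]
    while n >= 12:
        n //= 12
        out = digits[n % 12] + out
    return out

print([dozenal(2**k) for k in range(1, 9)])
# ['2', '4', '8', '14', '28', '54', 'A8', '194']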

So there is my humble contribution to a dozenal future.

(In other news, I now have a patent to my name, though in fact it is owned by a previous employer. By the standards of the patent system, it is not a particularly bad patent. But if I had my druthers, it would have never been filed. Ah well.)

Wednesday, June 9, 2010

The stock market thinks Microsoft has just under 4 years left

(I posted this basic analysis at http://news.ycombinator.com/item?id=1417156, which was a discussion of this Newsweek article, and then decided that it was interesting enough to call out for additional attention.)

Those of us who pay attention to finance know that the market tends to be more accurate than any individual person. So sometimes it is worth analyzing what the market is saying about different companies.

If you look at the stock market, it is clearly saying that Microsoft's future doesn't look as bright as Google's or Apple's. That's why Microsoft is worth a P/E of about 13, while Google is worth one of 22 and Apple is worth one of 21.

The market projection gets substantially more stark when you subtract current book value to find how much the market values future revenue. (Book value is what all of the company assets would be worth if it was broken up and sold today. For Microsoft this is largely made up of their cash reserve.) Microsoft's market cap is 221 B, their book value is 46 B, and therefore 175 B of their market cap is projected future earnings. Their current profit is 46.28 B/year, and that works out to the market valuing them at their current earnings stream projected over a bit under 4 years.

For Google the equivalent exercise says a market cap of 154 B, and book value of 38 B, so 116 B of market cap is projected future earnings. Their current profit is 14.81 B/year, which translates into the market valuing them at their current earnings stream projected over nearly 8 years. (7.8 years.)

For Apple the equivalent exercise says a market cap of 225 B, a book value of 39.4 B for 185.6 B of market cap due to projected future earnings. Their current profit is 17.22 B/year, which translates into their current earnings stream projected over a decade. (10.8 years.)
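The arithmetic is simple enough to check yourself. Here it is in Python, using the figures quoted above (all in billions):

# Figures as quoted above: market cap, book value, profit per year (billions).
companies = {
    "Microsoft": (221.0, 46.0, 46.28),
    "Google":    (154.0, 38.0, 14.81),
    "Apple":     (225.0, 39.4, 17.22),
}
for name, (cap, book, profit) in companies.items():
    future = cap - book  # the part of the market cap that is projected future earnings
    print(f"{name}: {future:.1f} B of future earnings = {future / profit:.1f} years of profit")
# Microsoft: 3.8 years, Google: 7.8 years, Apple: 10.8 years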

So the projection is that Microsoft is walking off a cliff in a few years, while both Google and Apple have a decent future. The market is perfectly aware that a lot changes in 10 years, and so it heavily discounts any projections out that far. But the market is more likely to be correct for near events.

Now admittedly I've never liked Microsoft. But this isn't just claimed by some random haters on the Internet. This is the consensus of the stock market, which is based on a lot of informed people putting their money where their mouths are. This is worth thinking about.

(I took all figures for this from http://finance.yahoo.com/q/ks?s=msft, http://finance.yahoo.com/q/ks?s=goog and http://finance.yahoo.com/q/ks?s=aapl. I got book value by multiplying book value / share times shares outstanding.)

Sunday, May 16, 2010

Le Châtelier's principle: not just for chemists

I seem to be updating my blog once a month or so. I don't intend to do that, I'm just busy. This particular entry is one I've been meaning to write for months, but just haven't gotten around to.

But before I get to it, a brief digression. My sister would like to get into a poker tournament that she needs to be voted in to. If you could take a moment and vote for Jennifer Tilly, it would be most appreciated. Thank you.

Now on to the main subject. One of my favorite principles of chemistry is Le Châtelier's principle. It has several forms, but the most general (though admittedly not perfectly accurate) is: "Any change in status quo prompts an opposing reaction in the responding system."

What does this mean? Well let's take a simple example. Suppose we have air in a chamber, and apply pressure to compress the chamber. From Le Châtelier's principle we expect it to push back harder. In fact it does. From the ideal gas law, applied in a simplistic way, once the volume is cut in half the pressure will double, and it will indeed be pushing back harder.

However this understates the true effect. It turns out that the act of compressing the chamber heats up the gas, increasing the temperature, and this causes it to push back even harder than it would have otherwise. This rise in temperature when you compress is called adiabatic heating. The corresponding decrease in temperature when you decompress a gas is used inside of your refrigerator or AC system to move heat from a cool place to a warmer one. Similarly when warm air rises the pressure drops, which causes it to cool down. This is why air on mountain tops is colder than air at sea level. (The drop is about 10C per km of height.)
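To put rough numbers on the compression example, here is a minimal sketch. Halving the volume at constant temperature doubles the pressure, but if the heat of compression has nowhere to go, the ideal-gas relation P * V**gamma = constant applies and the hot gas pushes back harder still (gamma is about 1.4 for air):

gamma = 1.4  # heat capacity ratio for air (a diatomic gas)
ratio = 2.0  # V_before / V_after: the chamber is squeezed to half its volume

p_isothermal = ratio          # constant temperature: P scales as 1/V
p_adiabatic = ratio ** gamma  # no heat escapes: P * V**gamma is constant

print(f"isothermal: {p_isothermal:.2f}x  adiabatic: {p_adiabatic:.2f}x")
# isothermal: 2.00x  adiabatic: 2.64x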

Anyways, the fine details notwithstanding, what we find is that when you push harder on this simple system, it winds up pushing back harder. And you eventually find yourself at another equilibrium. Furthermore what works for simple systems, works for many more complicated systems.

The big question that I had as a kid in chemistry class was why. Why does this always work out? I never got a good answer from my teacher, and my dissatisfaction with "it works because it always works" answers was one of the reasons why I chose to go on in math instead.

Interestingly, many years later in an advanced math course I learned the trivially simple answer. Which requires essentially no math to understand!

Here is that answer.

A system is at equilibrium when all forces on it are balanced, and it can rest in that state indefinitely. For instance take a pencil and lay it flat on a table. It is at equilibrium there. There is also another equilibrium where it is balanced on its tip, but it is very hard to put it in that equilibrium.

Now not all equilibria are created equal. A stable equilibrium is one where any perturbation of the system will cause it to head back towards that equilibrium. An unstable equilibrium is one in which some perturbation exists that causes it to head away from that equilibrium. In the example of the pencil, laying flat on its side is a stable equilibrium, while balancing on its tip is unstable. The key fact to remember about unstable equilibria is that they have a tendency to not stick around. No real system is perfectly balanced, and the imperfection will grow over time until the equilibrium disappears on its own. This is why we run into lots of pencils lying on their sides, and none balanced on their tips.

Now what does this have to do with Le Châtelier's principle? Well if we run across a system that has settled down to the point that it has a status quo we can notice, that system is extremely likely to be at some sort of equilibrium. Based on the point I made above, we can be pretty sure that it is a stable equilibrium. But Le Châtelier's principle is just a description of what it means to be at a stable equilibrium, so Le Châtelier's principle must be true of our system!

Now I should note that there is a world of difference between a local equilibrium and a global one. The pencil that is flat on the table would be at an equally good equilibrium on a different side, as a small push demonstrates. And at a better equilibrium flat on the floor, as a strong enough push will demonstrate. A mixture of hydrogen and oxygen that is heated a little bit will heat up, which increases the pressure, which causes the container it is in to expand, which cools it down, in accord with Le Châtelier's principle. But heat the same mixture enough and a chemical reaction will begin that results in it becoming hotter still, rather than cooler. (These examples are why the overly general formulation I quoted at the top is not perfectly accurate.)

This is a very general argument, and it makes clear that the principle has nothing really to do with chemistry per se. In fact it applies to any sort of equilibrium, in chemistry or not. On the whole I find the examples outside of chemistry to be more interesting.

A case in point appears in the classic economics paper Cars, Cholera, and Cows: The Management of Risk and Uncertainty. One of the key themes is that our risk-taking behavior as a society winds up in an equilibrium. If it is at an equilibrium, then we should expect that any change which reduces risk will cause some sort of compensatory increase in risk. Which will undo some of the positive benefits of the change. An example from that paper is that seat belt laws increase seat belt usage. But people using seat belts feel safer, and therefore drive more aggressively, resulting in more accidents. The net result? It appears that drivers are safer, pedestrians are less safe, and benefits to society are less clear than a naive analysis would predict.

Over time I've learned that equilibria are extremely common, and therefore the "unexpected consequence" is more often something to try to anticipate than something to be surprised at. For instance when evolution breeds a faster variation of cheetah, it creates pressure for antelope to become some combination of faster, better at spotting predators, and willing to bolt when predators are farther away. The net result is that the more effective predators wind up about equal versus their prey. (Look up the Red Queen Hypothesis for more on this.)

But how do you anticipate the unexpected? Well there are several ways.

  1. If you know that it is common to find some sort of push-back, you know to be on the lookout for it, which will make it easier to spot. For instance suppose that you start injecting a stable compound into the environment at a constant rate. Eventually it has to break down at the same rate; the only question is where. It was not until Sherry Rowland and Mario Molina applied this line of reasoning to CFCs that it was realized that one of the most innocuous and inert chemicals discovered by man was destroying the ozone layer.

  2. There will be a set of related push-back phenomena associated with any stable equilibrium that you can find. And equilibria are very common. So look for potential equilibria, then actively ask how they are maintained. If you find them, you frequently learn something useful. For instance suppose there is an equilibrium level of major disasters in a given area of human endeavor. By what means is this level maintained? I submit that it is maintained by memories of previous disasters, and the desire not to experience them again. Which means that once memory fades and people become less careful, corners will be cut until disaster happens again. However memory fades on a time scale set by human lives, which tells me that the equilibrium interval between major disasters of any particular kind is never going to be more than a small number of human generations, no matter what the engineers promise us. For example, it took just over 60 years to lose the regulations that were put in place to prevent another credit crisis like the one that started the Great Depression, and about 10 more years after that to experience a credit crisis. (Note that before the Depression, credit crises arrived on average about once every 10 years.) Nuclear options are now getting a boost from the fact that memories of Three Mile Island are fading. And I am willing to bet serious money that the surprisingly good record of safety devices for offshore drilling helped result in dangerous shortcuts that were key to causing the recent BP disaster.

  3. Certain equilibria and their corresponding compensation mechanisms come up very frequently to explain otherwise puzzling events. My favorite example is that people like to maintain a positive self-impression. The result is that anything that challenges our good opinion of ourselves causes serious cognitive dissonance. People do the most amazing things to avoid this cognitive dissonance. For some of the negative results on people's ability to learn, see What you refuse to see, is your worst trap.


So, even if you're not a chemist, if you squint at the world in the right way you can see Le Châtelier's principle popping up in the most unexpected and interesting places.

Sunday, April 18, 2010

Random grab bag

I can't believe that it has been nearly a month since my last post.

If you've been following me on buzz you'll know that I've been doing a joke a day. I started Feb 18, so now I've gone for 2 months, with a new joke each day, without once having to look one up. I'm surprised by this. Today's joke had a musical component that needed to be heard, so it is on youtube.

Stories like this make me sad. Nothing will compensate that man's loss or replace what was stolen, but the county deserves to lose that case.

Moving on: on Hacker News a while ago I tried to explain why quantum mechanics and the general theory of relativity conflict with each other. That may be of interest. I sent that link to a well-respected researcher, John Baez, and he thought the explanation was reasonable insofar as it went. Then he recommended the far more detailed explanation at http://arxiv.org/abs/gr-qc/9310031. (Look for the Download box at the upper right.) That explanation is far longer, but gives much more depth than mine. Be warned that my reaction was, "Now that I've read it I feel less educated on the subject than before. Not because I didn't learn anything, but because I was reminded of how much physics I never learned..." I was reassured to learn that C.J. Isham has that effect on many physicists as well.

What else did I mean to talk about and didn't? Oh right. I read The Back of the Napkin: Solving Problems and Selling Ideas with Pictures. On the whole it was a very good book, but it had a spectacularly poorly chosen example at the end. The example was a hypothetical firm selling accounting software, named SAX Inc. Despite being the market leader, its revenue was not growing as the market expanded, and some low end open source platforms were growing rapidly. Economic projections showed that it was poised to lose huge amounts of market share to these new competitors in the near future, and the question was what it should do.

Now if you're ever in a situation that looks vaguely like this, I've got a recommendation. Reach for The Innovator's Solution, read it, understand it, get every executive in your company to understand it, and follow its advice as best you can. Because you're on the wrong side of a disruptive innovation, so you want to see what has been tried in that situation, what worked and what didn't (mostly didn't), and you need to understand the organizational reasons for that.

If you don't know what a disruptive innovation is, that is the scenario where an established market with established companies faces competition from a new kind of product which is not good enough for the market, but which will become so with projected technological improvement. In this situation what happens is that over time the technology improves, the ecosystem of companies that grew up with the (initially) crappy technology takes over, and the established companies see their market implode. For a fuller explanation and lots of examples, read The Innovator's Dilemma. That gives the theory, and then the follow-up, The Innovator's Solution, gives a lot more detail on why companies repeatedly make the same bad choices.

Suffice it to say that The Back of the Napkin comes up with a solution, and it is a solution that I guarantee will fail. And as part of the reasoning they managed to come up with a software development project that is much more ambitious than anything the company had tried before, and gave some suspiciously precise estimates of how much time and money it would take. If you don't know what is wrong with that, go read Software Estimation by Steve McConnell. (If you are responsible for anything to do with scheduling software development, I highly recommend that book on general principle. Steve McConnell's books on software development range from good to classic, and that is on the higher end of the scale.)

None of this is to say that The Back of the Napkin is a bad book. It is not. Indeed if it were then I'd be less pained by the bad example. But it had the opportunity to explain a bunch of very important things to an audience that normally doesn't hear those things, and didn't. Not only did it fail to explain them, it proceeded to actively misinform.

On a happier note I've also read Accelerando by Charles Stross. I have to say, without hesitation, that it is the strangest book I've read. Let me give an example. Near the beginning some lobsters that got uploaded managed to escape over the Internet, take over a computer network, and turn themselves sentient. They ask the protagonist for help in getting away from human civilization because they don't want to be near us when technology goes critical. He succeeds. Then the book gets weird.

Don't get me wrong. It is a very good book, and I enjoyed it very much. But the whole point of the book is that technology is accelerating, and the result will be continuous future shock. For the first few iterations you are led into trying to understand how the future rapidly got more bizarre. In later iterations you're just following the personal story of humans with unbelievable technology trying to survive in a world where they are obsolete and unequipped to understand the universe around them.

Saturday, March 20, 2010

Touching Women

Today I want to share two useful tidbits about touch and women that I think should be better known, but aren't because people get embarrassed to talk about this stuff.

The first is a pressure point to help menstrual cramps. Everyone knows about pinching next to the thumb to help with headaches. It doesn't take the pain away, but it dulls it and makes it more bearable. There is a spot that does about the same thing with menstrual cramps.

It is located just above your foot, between your Achilles tendon and your ankle bone. To get it properly you want to use a "fat pinch". You get this by folding your index finger over, putting that on one side of the ankle, and pinching with the thumb on the other. So you get a pinch spread out over the soft flesh between the bone and Achilles tendon. I've offered this advice to multiple women who suffer menstrual cramps. None had ever heard it before, but it has invariably helped.


The other is more *ahem* intimate. This would be a good time to stop reading if that bothers you.

There are various parts of your body where you have a lot of exposed nerves. A light brushing motion over them will set up a tingling/itching sensation. A good place to experience this is the palm of your hand. Gently stroke towards the wrist, then pay attention to how your hand feels. Yes, that. And thinking about it brings it back.

This happens anywhere where nerves get exposed. One place where that reliably happens is the inside of any joint. For instance the inside of your elbow. (Not as much is exposed there as the palm of the hand, but it is still exposed.)

The larger the joint, the more nerves, the more this effect exists. The largest joint, of course, is the hip. And the corresponding sensitive area is the crease between leg and crotch on each side. This works on both genders. But for various reasons is more interesting for women...

Enjoy. ;-)

Wednesday, March 17, 2010

Address emotions in your forms

I learned quite a few things at SXSW. Many are interesting but potentially useless, such as how unexpectedly entertaining the reviews for Tuscan Whole Milk, 1 Gallon, 128 fl oz are.

However the one that I found most fascinating, and that is relevant to a lot of people, was from the panel that I was on. Kevin Hale, the CEO of Wufoo, gave an example from their support form. In the process of trying to fill out a ticket you have the option of reporting your emotional state, which can be anything from "Excited" to "Angry". This seems to be a very odd thing to do.

They did this to see whether they could get some useful tracking data which could be used to more directly address their corporate goal of making users happy. They found they could. But, very interestingly, they had an unexpected benefit. People who were asked their emotional state proceeded to calm down, write less emotional tickets, and then the support calls went more smoothly. Asking about emotional state, which has absolutely no functional impact on the operation of the website, is a social lubricant of immense value in customer support.

Does your website ask about people's emotional state? Should it? In what other ways do we address the technical interaction and forget about the emotions of the humans involved, to the detriment of everyone?

Serendipity at SXSW

This year I had the great fortune to be asked to be on a panel at SXSW. It was amazingly fun. However, there was only one person at the conference this year whom I had ever met in person. So I was swimming in a sea of strangers.

But apparently there were a lot of people that I was tangentially connected to in some way.

I was commenting to one of my co-panelists, Victoria Ransom, that a previous co-worker of mine looked somewhat similar to her, had a similar accent, and also had a Harvard MBA. Victoria correctly guessed the person I was talking about and had known her for longer than I had.

I was at the Google booth, relating an anecdote about a PDF that I had had trouble reading on a website, when I realized that the person from Australia who had uploaded said PDF was standing right there.

Another person had worked with the identical twin brother of Ian Siegel. Ian has been my boss for most of the last 7 years. (At 2 different companies.)

One of the last people I met was a fellow Google employee whose brother in law was Mark-Jason Dominus. I've known Mark through the Perl community for about a decade.

And these are just the people that I met and talked with long enough to find out how I was connected to them.

Other useful takeaways? Dan Roam is worth reading. Emergen-C before bed helps prevent hangovers. Kick-Ass is hilarious; you should catch it when it comes out next month. And if you're in the USA then Hubble 3D is coming to an IMAX near you this Friday. You want to see it. I'll be taking my kids.

And advice for anyone going to SXSW next year? Talk to everyone. If you're standing in line for a movie, talk to the random stranger behind you. Everyone is there to meet people. Most of the people there are interesting. You never know. Talking to the stranger behind you in line might lead to meeting an astronaut. (Yes, this happened to me.)

Monday, March 8, 2010

Rogue Waves

Today I ran across an interesting essay on our changing understanding of scurvy. As often happens when you learn history better, the simple narratives turn out to be wrong. And you get strange sequences where, as science progressed, it discovered a good cure for scurvy, then lost the cure, then proved that its understanding was wrong, then wound up unable to provide any protection from the disease, and only accidentally, eventually, learned the real cause. This raises the question of how much else science has wrong.

This will be a shorter cautionary tale about science getting things wrong. I thought of it because of a hilarious comedy routine I saw today. (If you should stop reading here, do yourself a favor and watch that for 2 minutes. I guarantee laughter.) That routine is based on a major 1991 oil spill. There is no proof, but one possibility for the cause of that accident was a rogue wave. (Rogue waves are also called freak waves.) If so then, comedy notwithstanding, the ship owners could in no way be blamed for the ship falling apart. Because the best science of the day said that such waves were impossible.

Here is some background on that. The details of ocean waves are very complex. However if you look at the ratio between the height of a wave and the average height of the waves around it, you get something very close to a Rayleigh distribution, which is what would be predicted based on a Gaussian random model. And indeed if you were patient enough to sit somewhere in the ocean and record waves for a month, the odds are good that you'd find a nice fit with theory. There was a lot of evidence in support of this theory. It was accepted science.
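For the curious, the standard statement of that model is that the chance of a wave exceeding r times the significant wave height (roughly the average of the largest third of waves) is exp(-2 r^2). A small sketch of what that predicts:

from math import exp

# Rayleigh model: P(wave exceeds r times the significant wave height) = exp(-2 r^2).
for r in (1.0, 1.5, 2.0, 2.2):
    p = exp(-2 * r * r)
    print(f"{r:.1f}x the significant height: about 1 wave in {1 / p:,.0f}")
# By 2.2x the model already says 1 in ~16,000 waves, and the probability keeps
# falling so fast that much larger waves should essentially never show up.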

There were stories of bigger waves. Much bigger waves. There were strange disasters. But science discounted them all until New Year's Day, 1995. That is when the Draupner platform recorded a wave that should only happen once in 10,000 years. Then, in case there was any doubt that something odd was going on, later that year the RMS Queen Elizabeth 2 encountered another "impossible" wave.

Remember what I said about a month of data providing a good fit to theory? Well Julian Wolfram carried out the same experiment for 4 years. He found that the model fit observations for all but 24 waves. About once every other month there was a wave that was bigger than theory predicted. A lot bigger. If you got one that was 3x the sea height in a 5 foot sea, that was weird but not a problem. If it happened in a 30 foot sea, you had a monster previously thought to be impossible. One that would hit with many times the force that any ship was built to withstand. A wall of water that could easily sink ships.

Once the possibility was discovered, it was not hard to look through records of shipwrecks and damage to see that it had happened. When this was done it was quickly discovered that huge waves appeared to be much more common in areas where wind and waves travel opposite to an ocean current. This data had been littering insurance records and shipyards for decades. But until scientists saw direct proof that such large waves existed, it was discounted.

Unfortunately there were soon reports, such as those from the Bremen and the Caledonian Star, of rogue waves that didn't fit this simple theory. Then satellite observations of the open ocean over 3 weeks found about a dozen deadly giants in the open ocean. There was proof that rogue waves could happen anywhere.

Now the question of how rogue waves can form is an active research topic. Multiple possibilities are known, ranging from wave focusing to the nonlinear Schrödinger equation. While we know a lot more about them, we know we don't know the whole story. But now we know that we must design ships to handle them.

This leads to the question of how bad a 90 foot rogue wave is. Well it turns out that typical storm waves exert about 6 tonnes of pressure per square meter. Ships were designed to handle 15 tonnes of pressure per square meter without damage, and perhaps twice that with denting, etc. But due to their size and shape, rogue waves can hit with about 100 tonnes of pressure per square meter. Are you surprised that a major oil tanker could see its front fall off?

If you want to see what one looks like, see this video.

Monday, March 1, 2010

Fun with Large Numbers

I haven't been blogging much. In part that is because I've been using buzz instead. (Mostly to tell a joke a day.) However I've got a topic of interest to blog about this time. Namely large numbers.

Be warned. If thinking about how big numbers like 9^9^9^9 really are hurts your head, you may not want to read on.

It isn't hard to find lots of interesting discussion of large numbers. See Who can name the bigger number? for an example. However when math people go for big numbers they tend to go for things like the Busy Beaver problem. There are a lot of epistemological issues involved with that; for instance there is a school of mathematical philosophy called constructivism which denies that the Busy Beaver problem is well-formulated or that that sequence is well-defined. I may discuss mathematical philosophy at some future point, but that is definitely for another day.

So I will stick to something simpler. Many years ago in sci.math we had a discussion that saw several of us attempt to produce the largest number we could following a few simple ground rules. The rules were that we could use the symbols 0 through 9, variables, functions (using f(x, y) notation), +, *, the logical operators & (and), ^ (or), ! (not), and => (implies). All numbers are non-negative integers. The goal was to use at most 100 non-whitespace characters and finish off with Z = (the biggest number we can put here). (A computer science person might note that line endings express syntactic intent and should be counted. We did not so count.)

A non-mathematician's first approach would likely be to write down Z = 999...9 for a 98 digit number. Of course 9^9^9^9 is much larger - the number of digits it has is itself a number with hundreds of millions of digits. But unfortunately we have not defined exponentiation. However that is easily fixed:

p(n,0) = 1
p(n, m+1) = n * p(n, m)

We now have used up 25 characters and have enough room to pile up a tower of exponents 6 deep.

Of course you can do better than that. Anyone with a CS background will start looking for the Ackermann function.

A(0, n) = n+1
A(m+1, 0) = A(m, 1)
A(m+1, n+1) = A(m, A(m+1, n))

That's 49 characters. Incidentally there are many variants of the Ackermann function out there. This one is sometimes called the Ackermann–Péter function in the interest of pedantry. But it was actually first written down by Raphael M. Robinson.

(A random note. When mathematicians define rapidly recursing functions they often deliberately pick ones with rules involving +1, -1. This is not done out of some desire to get a lot done with a little. It is done so that they can try to understand the pattern of recursion without being distracted by overly rapid initial growth.)

However the one thing that all variants on the Ackermann function share is an insane growth rate. Don't let the little +1s fool you - what really matters to growth is the pattern of recursion, and this function has that in spades. As it recurses into itself, its growth keeps on speeding up. Here is its growth pattern for small m. (The n+3/-3 meme makes the general form easier to recognize.)

A(1, n) = 2 + (n+3) - 3
A(2, n) = 2 * (n+3) - 3
A(3, n) = 2^(n+3) - 3
A(4, n) = 2^2^...^2 - 3 (a tower of 2s that is n+3 high)

There is no straightforward way to describe A(5, n). Basically it takes the stacked exponentiation that came up in A(4, n) and iterates that operation n+3 times. Then subtract 3. Which is the starting point for the next term. And so on.

By most people's standards, A(9, 9) would be a large number. We've got about 50 characters left to express something large with this function. :-)
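If you want a feel for these definitions, here is a direct Python transcription of p and A. It is a sketch that is only good for tiny inputs, which is rather the point:

from functools import lru_cache

def p(n, m):
    # p(n, m) = n**m, built from multiplication alone.
    return 1 if m == 0 else n * p(n, m - 1)

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(p(9, 9))  # 387420489
print(A(2, 3))  # 9  = 2*(3+3) - 3
print(A(3, 3))  # 61 = 2**(3+3) - 3
# A(4, 2) already has 19,729 digits; A(9, 9) is utterly beyond reach.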

It is worth noting that historically the importance of the Ackermann function was not just to make people's heads hurt, but to demonstrate that there are functions that can be expressed with recursion that grow too quickly to fall into the simpler class of primitive recursive functions. In CS terms, you can't express the Ackermann function with just nested for-loops with variable iteration counts. You need a while loop, recursion, goto, or some other more flexible programming construct to generate it.
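To see that point from the programming side, here is the same function computed with a single while loop and an explicit stack; a sketch of how an unbounded loop can do what no fixed nest of for-loops can:

def ackermann_iterative(m, n):
    # Simulate the recursion with an explicit stack of pending m-values.
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1
        elif n == 0:
            stack.append(m - 1)
            n = 1
        else:
            stack.append(m - 1)  # the outer call, resumed later
            stack.append(m)      # first evaluate A(m, n-1)
            n -= 1
    return n

print(ackermann_iterative(2, 3))  # 9, matching the recursive definition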



Of course with that many characters to work with, we can't be expected to be satisfied with the paltry Ackermann function. No, no, no. We're much more clever than that! But getting to our next entry takes some background.

Let us forget the rules of the contest so far, and try to dream up a function that in some way generalizes the Ackermann function's approach to iteration. Except we'll use more variables to express ever more intense levels of recursion. Let's use an unbounded number of variables. I will call the function D for Dream function because we're just dreaming at this point. Let's give it these properties:


D(b, 0, ...) = b + 1

D(b, a_0 + 1, a_1, a_2, ..., a_n, 0, ...)
= D(D(b, a_0, a_1, ..., a_n, 0, ...), a_0, a_1, ..., a_n, 0, ...)

D(b, 0, ..., 0, a_i + 1, a_{i+1}, a_{i+2}, ..., a_n, 0, ...)
= D(
D(b, b-1, ..., b-1, a_i, a_{i+1}, ..., a_n, 0, ...),
b-1, b-1, ..., b-1,
a_i, a_{i+1}, ..., a_n, 0, ...
)

There is a reason for some of the odd details of this dream. You'll soon see b and b-1 come into things. But for now notice that the pattern with a_0 and a_1 is somewhat similar to m and n in the Ackermann function. Details differ, but recursive patterns similar to ones that crop up in the Ackermann function crop up here.

D(b, a_0, 0, ...) = b + a_0 + 1
D(b, a_0, 1, 0, ...) ≈ 3·2^(a_0)·b

And if a_1 is 2, then you get something like a stacked tower of exponentials (going 2,3,2,3,... with some complex junk). And you continue on through various such growth patterns.



But then we hit D(b, a_0, a_1, 1, 0, ...). That is kind of like calling the Ackermann function to decide how many times we will iterate calling the Ackermann function against itself. In the mathematical literature this process is called diagonalization. And it grows much, much faster than the Ackermann function. With each increment of a_2 we grow much faster. And each higher variable folds in on itself to speed up even more. The result is that we get a crazy hierarchy of insane growth functions that grow much, much, much faster. Don't bother thinking too hard about how much faster; our brains aren't wired to really appreciate it.

Now we've dreamed up an insanely fast function, but isn't it too bad that we need an unbounded number of variables to write this down? Well actually, if we are clever, we don't. Suppose that b is greater than a_0, a_1, ..., a_n. Then we can represent that whole set of variables with a single number, namely m = a_0 + a_1·b + ... + a_n·b^n. Our dream function can then be recognized as calculating D(b, m+1) by subtracting one, then replacing the base b with D(b, m) (but leaving all of the coefficients alone). So this explains why I introduced b, and all of the details about the -1s in the dream function I wrote.

Now can we encode this using addition, multiplication, non-negative integers, functions and logic? With some minor trickiness we can write the base rewriting operation:


B(b, c, 0) = 0
i < b => B(b, c, i + j*b) = i + B(b, c, j) * c

Since all numbers are non-negative integers the second rule leads to an unambiguous result. The first and second rules can both apply when the third argument is 0, but that is OK since they lead to the same answer. And so far we've used 40 symbols (remember that => counts as 1 in our special rules).
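
As a sanity check (not part of the character-counted entry), those two rules transcribe directly into Python. The name change_base and the loop are just my framing of the recursion:

def change_base(b, c, m):
    # B(b, c, 0) = 0
    # i < b => B(b, c, i + j*b) = i + B(b, c, j) * c
    # That is: read off the base-b digits of m, low digit first,
    # and reassemble them as coefficients of powers of c.
    total, power = 0, 1
    while m > 0:
        m, i = divmod(m, b)
        total += i * power
        power *= c
    return total

For example, change_base(2, 10, 5) is 101, because 5 is 101 in binary.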

This leads us to be able to finish off defining our dream function with:

D(b, 0) = b + 1
D(b, n+1) = D(D(b, n), B(b, D(b, n), n))

This took another 42 characters.
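
And again purely as a sanity check, the same definition in Python, reusing change_base from above. The while loop is just the tail call in D(b, n+1) = D(D(b, n), B(b, D(b, n), n)) unrolled:

def D(b, n):
    # D(b, 0) = b + 1
    # D(b, n+1) = D(D(b, n), B(b, D(b, n), n))
    while n > 0:
        c = D(b, n - 1)               # the expensive inner call
        n = change_base(b, c, n - 1)  # rewrite n-1 from base b to base c
        b = c
    return b + 1

Don't expect to compute much with it. By my hand calculation D(2, 3) is already 70, D(2, 4) is roughly 2^(2^70), and the naive recursion above would need on the order of 2^70 calls to confirm that. Z is unimaginably far beyond either.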

This leaves us 18 characters, two of which have to be Z=. So we get

Z = D(2, D(2, D(2, 9)))

So our next entry is

B(b, c, 0) = 0
i < b => B(b, c, i + j*b) = i + B(b, c, j) * c
D(b, 0) = b + 1
D(b, n+1) = D(D(b, n), B(b, D(b, n), n))
Z = D(2, D(2, D(2, 9)))

We're nearly done. The only improvement I know of is one minor tweak:

B(b, c, 0) = 0
i < b => B(b, c, i + j*b) = i + B(b, c, j) * c
T(b, 0) = b * b
T(b, n+1) = T(T(b, n), B(b, T(b, n), n))
Z = T(2, T(2, T(2, 9)))

Here I changed D into T, and made the 0 case be something that had some growth. This starts us off with the slowest growth being T(b, i) at around b^(2^(2^i)), and then everything else gets sped up from there. This is a trivial improvement in overall growth - adding a couple more to the second parameter would be a much bigger win. But if you're looking for the largest, every small bit helps. And modulo a minor reformatting and a slight change in the counting, this is where the conversation ended.

Is this the end of our ability to discuss large numbers? Of course not. As impressive as the function that I provided may be, there are other functions that grow faster. For instance consider Goodstein's function. All of the growth patterns in the function that I described are realized there before you get to b^b. In a very real sense the growth of that function is as far beyond the one that I described as the one that I described is beyond the Ackermann function.

If anyone is still reading and wants to learn more about attempts by mathematicians to discuss large (but finite) numbers in a useful way, I recommend Large Numbers at MROB.

Friday, February 5, 2010

Developing on HEAD scales to Google

I have a simple rule of thumb for what I am and am not allowed to say about Google. If I can find it said in some official-looking place from Google, then I think I'm allowed to say it. Otherwise not.

I was therefore very glad to run across this talk by Guido van Rossum describing his code review tool named Mondrian, put on YouTube by Google Tech Talks and, judging from the knowledge level of the audience, delivered to a non-Google audience. To check that this was not, in fact, accidentally open, I found an O'Reilly article confirming that it was public.

I therefore feel comfortable talking about anything and everything said in that video. So you can view this as my spin on the material in that video, and if you want the full version without the editorializing, feel free to take an hour and watch the video for the original source.

These days everyone who is competent has agreed that source control is a Good Thing to use. However opinions on how to use it vary widely. One argument in particular that I've seen in multiple organizations is on the value of using multiple branches versus having everyone develop in one branch, which is often called something like MAIN or HEAD. The issues involved are somewhat complex, and not obvious. So let me give a quick overview.

The primary argument for branching is that people can work on different features on different schedules without getting in each other's way. The primary argument for all developing on the same branch is that you discover conflicts immediately, when they are easiest to resolve, rather than months later when people have moved on to different things and have lost context for the pieces of code that conflict. The primary argument against branching is the pain of merging later. (Granted, distributed version control systems like git have done a lot to reduce this pain. But they have not eliminated it.) The primary argument against developing on HEAD is that it requires a constant level of diligence on the part of developers. When any developer can break all developers, you need to be careful in what you check in.

I've just kept this down to the primary arguments because the secondary back-and-forth arguments get long, involved, and heated. Also what I've described as two ways of working is really two families of ways of working. There are a lot of ways to organize multiple branches. And there are quite a few legitimate uses for branches in a software project even if everyone is developing on the same branch. None of this is made easier by the fact that (as with many religious wars) the people involved have imprinted on one way of doing things. That makes them hyper-aware of the problems in other approaches, while they don't even notice the potential pain points in their own.

Until I came to Google my personal position was that everyone working on HEAD was the best approach as long as your team was small enough to make it work. And I vaguely accepted that the pain of branching was a necessary evil on large software projects. Even when the pain reached the point of craziness, I was mostly willing to accept unsupported claims that it was necessary.

Then I came to Google. There are a lot of groups at Google, and they are free to do their own thing. Many do. However most groups develop out of one giant code base on HEAD. And it works as a development model. Google has made it scale.

Guido's talk describes many elements of what makes it scale. The first piece is having good developers. The second piece is an enforced policy of having every single patch go through a code review process before you check anything in. The third piece is a lot of proprietary infrastructure that Google has built up to make things work. And beyond that you have people paying attention to best practices such as consistent style, good unit testing, so on and so forth. (All of which are reinforced in the code review process.)

My opinion after seeing Google has changed. I freely admit that there are real process and tool challenges to making it possible for large teams to develop everything on HEAD and have it scale smoothly. However it is possible. I've seen it work. And, speaking personally, this is my preferred way to work.

Different organizations are different, have different capabilities, different needs, and different goals. Sight unseen I'm not going to tell anyone else that their organization should try to work like Google does. I simply don't have the facts about your situation. But if, like me not that long ago, you've accepted the claim that large development teams have no choice, you now know better.

Now admittedly most people can't go and see this first hand for themselves at Google. But if you want to watch a large successful project developing code to a high standard in this way, I recommend watching clang. And if you want to know more about what kinds of tool support you need to make this work, go listen to Guido. He's smarter than I am, has been doing it longer, and actually built some of the basic tool support for it.

Monday, February 1, 2010

What is intelligence?

Last September I posted on why hard sciences are different from soft ones. The subject of this blog post is a perfect illustration of the critical difference. In the hard sciences people mostly agree on basic definitions, broad agreement exists on key problems to solve, and in general there is agreement on a basic paradigm. Through no fault of their own, in the soft sciences, none of this exists.

Few things illustrate this better than getting a handle on what intelligence is. Here we have a subject that has been seriously studied for over a century. Yet researchers have not achieved agreement on whether they should focus on a single measure of intelligence called g, or multiple measures, or if multiple measures then how many there should be and how they should be divided. Note here that if you talk to individual researchers this fact does not worry them, because each researcher works in a community of other researchers who have achieved agreement on this. Each school of thought therefore churns out a stream of research, but they have failed to convert each other. And after a century the debate shows no sign of ending any time soon.

Please note that I am far from an expert on any of this, so I'll try to summarize my understanding in broad terms, but my understanding could be incorrect. Also be aware that a lot of experts disagree with each other strongly, and therefore it is easy to find experts who will disagree with anything interesting that I can say. Please keep those caveats in mind as I try to flounder my way through the complications.

Let's start with the simplest approach, namely Spearman's g. What Spearman did in 1904 was take a number of things that one would think correlate with intelligence, such as grades, and find that they correlated well with each other. He then looked for some linear combination which correlated better with each of the initial factors than any initial factor did. He found such a combination, and found that it successfully predicted a very large part of the variation. In modern mathematical terms he did something called a principal component analysis and looked for the most significant component. He then found that the most significant component really did capture a lot of the variability.
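
For concreteness, here is a toy version of that analysis in Python. This is entirely my own construction, not Spearman's data: five "test scores" driven by one shared factor plus independent noise, with the first principal component soaking up most of the variance:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
latent = rng.normal(size=n)
scores = np.column_stack([latent + rng.normal(scale=0.8, size=n) for _ in range(5)])

corr = np.corrcoef(scores, rowvar=False)   # correlation matrix of the scores
eigvals, eigvecs = np.linalg.eigh(corr)    # eigenvalues in ascending order
print("variance explained by top component:", eigvals[-1] / eigvals.sum())
print("loadings of top component:", eigvecs[:, -1])

In a run like this the top component explains well over half of the variance and loads about equally on every score, which is exactly the g-like behavior Spearman reported.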

Spearman called this factor g for general intelligence.

Not long after, the IQ test was invented. Before long people added IQ tests to the mix of factors, and found that a well-designed IQ test served as a fairly good predictor of g. Then g in turn served as a decent predictor for many things, including academic performance and success in the workplace. Since IQ tests were fast and easy to administer, IQ tests and ability tests took off. (Ability tests such as the SAT and GRE exams actually are fairly directly comparable to IQ tests, and high-IQ societies such as MENSA are happy to accept them as proof that your IQ is high enough to belong.)

All of this raises many important questions. We got g out of a mathematical construct. But is it real? Is there really such a thing as general intelligence? Even if there is, is it a stable attribute of a person? Does your general intelligence change over time? And since a principal component analysis can turn up multiple components, how important are the lesser components? (The existence of a split between mathematical and verbal ability is obvious enough that most ability tests split those into separate sections.) Are there important factors which have been missed? How much of this is affected by heredity versus environment? How does race play into this? Can training change how you score on the test? Can it change your actual intelligence?

Believe you me, those questions have been researched. Extensively. And argued. More extensively. The only points on which there seems to be a decent consensus are that, to the extent that general intelligence makes sense, it seems to be fairly stable, and that both hereditary and environmental factors affect it. Everything else is still under debate, though it is easy to find experts who make definitive claims that the matter has been completely settled one way or the other.

As one example of the ongoing debate, ETS, which puts out various tests including the SAT exams, claims that training cannot improve your performance. At the same time Kaplan makes money selling the claim that they can significantly improve your performance. Kaplan has enough evidence of effectiveness that I accept that they are right, but the debate continues. I suspect not least because ability tests are just IQ tests under a different name, and so accepting that Kaplan can coach an ability test is evidence that a key tool underlying a lot of research into psychology is fundamentally flawed, which opens up a can of worms that many don't want to look into.

For another example, I strongly suggest that mathematically inclined readers read g, a Statistical Myth. It presents a toy statistical model which demonstrates how readily a large number of independent random factors with a small amount of interaction could give rise to all of the statistical evidence which underlies claims that such a thing as general intelligence exists. Given that any hereditary effect on intelligence is the result of a large number of fairly independent random factors called genes, I think the point is an important one. The idea that we each have a huge number of different intellectual abilities at different degrees of competence accords very well with my understanding of how the brain works, and with my experience of interacting with others.

However it is also true that a small number of factors have a major effect on many different kinds of mental activities. Thus some combination of the amount of things you can keep in your mind at the same time and speed of reasoning may be influential enough on enough things to deserve the name "general intelligence".

At this point I'd say that the jury is not just out; they are deadlocked and randomly fighting each other. And have been in this state for the better part of a century.




So what do I have to contribute to the discussion? I'd like to throw out a toy mathematical model for how IQ and intelligence relate. Please do not take this too seriously. After all it is not even agreed at this point that general intelligence is a particularly well-defined concept. But what I'd like to show is that even if you do accept that general intelligence exists, and that IQ is reasonably well correlated with it, IQ tests are still not a particularly effective way to find the smartest people. (Whether or not any better way exists is a question I'd like to avoid. In fact, please forget this parenthetical point.)

Now suppose you sit down to take a test. You come prepared with many relevant abilities. You have your vocabulary, your polished reasoning ability, and a body of basic knowledge you're expected to have acquired about the world. We are generally prepared to accept these as components of intelligence, and the test will try to measure those. Let's call that INT. However you come prepared with other relevant abilities. There is how well you are feeling, how much sleep you got, any tricks you know about how to handle the types of questions that are likely to be asked, cultural cues you share with the test makers and how stressed you are. We are generally not prepared to accept these as components of intelligence, yet they affect how we do. Let us call these TST.

For my toy model let's say that INT and TST are independent random normals with a mean of 0 and a standard deviation of 10. That's math talk describing a particular bell curve. If you want the odds of it being less than a particular number you just have to divide the number by 10 to get the number of standard deviations you're out, and then look at a standard table and read off the probability.

Now let's say that the measured IQ will be 100+INT+TST. If you're a mathematician you'll know that IQ will have a normal distribution with mean 100 and a standard deviation of 10*sqrt(2) which is about 14.14. The correlation between IQ and INT turns out to be 70.7%. By comparison the Stanford-Binet is designed to have a mean of 100 and a standard deviation of 16, and its correlation with other measures of intelligence like academic performance is (depending on who measures it and how it is done) in the range of 70-80%. So while the model is pretty simple, it is not too far off from how a real IQ test behaves. And it captures the obvious observation that there are factors that affect test performance which we don't think of as intelligence.
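
If you'd rather trust a simulation than my algebra, a few lines of Python (my own check, nothing from the psychometric literature) reproduce those numbers:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
INT = rng.normal(0, 10, n)   # what we would call intelligence
TST = rng.normal(0, 10, n)   # sleep, stress, test tricks, cultural cues, ...
IQ = 100 + INT + TST
print(IQ.std())                    # about 14.14, i.e. 10*sqrt(2)
print(np.corrcoef(IQ, INT)[0, 1])  # about 0.707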

Now the fun thing about a model like this is that we can play with it since we know exactly how it is set up. And are free to add any internal details we want. And while we can't necessarily believe the answers we get, we can gain valuable intuition about what kinds of answers we can expect to see if we understood a far more complex system. Such as real people with real brains.

To analyze this I'm going to complicate the model one more time by introducing two more random variables. Let's make them independent normal variables with mean 0 and standard deviation 10/sqrt(2), and call them AVG and DIF. It turns out that (AVG+DIF) and (AVG-DIF) are independent normal variables with mean 0 and standard deviation 10. So let's make (AVG+DIF) be INT, and (AVG-DIF) be TST. High-school algebra shows that AVG = (INT+TST)/2 and DIF = (INT-TST)/2. (Hence the names.)

What was the point of that? Well it is this. At this point IQ = 100+2*AVG. And INT = AVG+DIF. Which means that if you give me an IQ, I can calculate AVG then use the known distribution of DIF to calculate confidence intervals on INT.
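
In code (again my own sketch of the model, using only the Python standard library) that recipe looks like this:

from statistics import NormalDist

def int_given_iq(iq, coverage=0.95):
    # AVG = (IQ - 100)/2, and INT = AVG + DIF with DIF ~ N(0, 10/sqrt(2)).
    avg = (iq - 100) / 2
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    half = z * 10 / 2 ** 0.5
    return avg, (avg - half, avg + half)

print(int_given_iq(132.90))   # (16.45, (2.59, 30.31)) - the MENSA example below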

For example, in this model how intelligent is a typical member of MENSA? Well MENSA accepts anyone whose IQ is in the top 2% of the population. So a typical member might have an IQ at the boundary of the top 1%. (If you haven't taken statistics these calculations may make your eyes glaze over. You'll have to just trust that I'm coming up with correct numbers.) That IQ is about 2.326 standard deviations out, which after multiplying by 10*sqrt(2) and adding 100 is an IQ of 132.90. (Remember that our standard deviation is lower than the regular IQ test, so our purported IQ scores will be lower than a regular test.) Which means that AVG is 16.45. On average our INT will be the same as the AVG, which puts you 1.645 standard deviations out, which is right around the 95th percentile.

We can go farther and calculate a 95% confidence interval for what the real intelligence of this individual is. 95% of the time a normally distributed variable will be within 1.960 standard deviations of 0, so that means that DIF will be in the range +- 13.86. Which means that INT will be in the range (2.59, 30.31). Which means that INT is anywhere from 0.259 to 3.031 standard deviations out, which would put INT anywhere from about the 60th percentile to almost the 99.9th percentile. Hmmm, we really haven't nailed down INT very closely, have we? A similar calculation shows that with 80% odds INT is between the 77th and 99.5th percentiles. And you can even do fun things like show that with about 72% odds, this individual is not in the top 2% of the population on INT, and therefore would not qualify for MENSA if they were able to measure INT directly instead of the more complicated IQ.

Let's reverse the question. Let's take someone at the boundary of the 99th percentile in INT, and ask what IQ we should expect on average. Well that INT is 2.326 standard deviations out, which is an INT of 23.26, for an expected IQ of 123.26, which is 1.645 standard deviations out, which is at the 95th percentile. So sorry, you're probably not going to test as smart enough for MENSA even though you're probably smarter than most of the people in MENSA. How likely is this person to fail to meet MENSA's admission standards? Well the cutoff for MENSA is top 2%, which is 2.0537 standard deviations out, which is an IQ of 129.04 (again remember that the toy model's IQs are not as spread out as real IQ tests), so we need TST to be at least 5.78, which is 0.578 standard deviations out, which gives a probability of about 72% of not qualifying for MENSA.

The upshot? This model predicts that most members of MENSA are not in the top 2% on intelligence, and most people in the top 2% of intelligence would not qualify for membership in MENSA.
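
You don't have to take my normal-table manipulations on faith. A direct simulation of the toy model (mine, with the cutoffs estimated empirically) shows the same effect; the exact fractions depend on where inside the top 2% people sit:

import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
INT = rng.normal(0, 10, n)
IQ = 100 + INT + rng.normal(0, 10, n)
iq_cut = np.quantile(IQ, 0.98)     # MENSA-style cutoff: top 2% on IQ
int_cut = np.quantile(INT, 0.98)   # top 2% on "real" intelligence
mensa = IQ >= iq_cut
print("MENSA members below the top 2% in INT:", (INT[mensa] < int_cut).mean())
top_int = INT >= int_cut
print("top-2%-INT people who miss the IQ cut:", (IQ[top_int] < iq_cut).mean())

Both printed fractions come out well above one half, matching the calculations above.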




Now let's apply this to a question of more personal interest. According to this model, how intelligent am I likely to be? I bring this up because in my last blog post I used my GRE score to discuss my likely IQ, and therefore likely intelligence, then used it as a benchmark to compare with the people I've met at Google. So if I believe that high IQs systematically overestimate INT, how badly is mine likely to be overestimated according to my model?

Well my scores were V: 760, Q: 780 and A: 800. Going to a handy IQ calculator I put in 1540 for my GRE V+Q and find that I am 3.625 standard deviations out and only 1/7143 people have an IQ that high. So on my toy model that is an IQ of 151.265... Which implies that AVG is 25.633. So on average you'd expect my INT to be 25.633, which is 2.5633 standard deviations out, which puts me around the 99.5th percentile, or about 1/200 on INT. (Incidentally the average person with an INT that high would fall at about the 96th percentile on IQ, and therefore would not qualify for MENSA.) But let's go farther. 50% of the time a normal distribution falls within 0.67449 standard deviations of the mean, which means that DIF is half the time within the range -4.769 to 4.769. That makes a 50% confidence interval for my INT be the range (20.864, 30.402). Which would put me anywhere from the 98th percentile to the 99.9th percentile in INT. Which is in line with my claim in the previous blog post that I wouldn't be entirely surprised to find that I am at the 99.9th percentile in intelligence. According to this model the odds are that I'm not that smart, but it is possible.

Interestingly there is a symmetry in this model which means that all calculations that I've done for INT apply for TST as well. Which means that I am likely to have a number of characteristics that make me good at taking tests but which are not reflected in my general intelligence. This is actually true.

The vast majority of people, when they are faced with a major test or interview, get stressed. Now what does stress do to you? Well a key player in the stress response is adrenaline. Adrenaline prepares you for a fight-or-flight reaction. This means decreased blood flow to things like your stomach and neocortex (which is where most of your higher reasoning happens), and increased blood flow to areas that are likely to see action, like your muscles. The result? Improved vision and hearing, faster reflexes, and increased strength. Also significant damage to your ability to carry out complex cognitive tasks. Needless to say this is about the worst possible physiological response to sitting in place for several hours and completing a series of complex cognitive tasks.

Apparently evolution didn't anticipate that millions of us would have our life courses depend on how we do on multiple choice exams. :-)

Very oddly, many members of my family, including me, have the exact opposite response. When I walk into a test the thought process I have is basically, "Well this is it. I've prepared all I'm going to, and now I've got a multiple-guess test which will overstate my abilities. Let's see how I do." And I relax. Comparing me to a normal person based on the resulting test score is like starting with two runners, taking one out back and beating on him for a while, then expecting the two to run a fair race. No matter how objectively you measure the result, I have an unfair advantage.




Random note. I originally intended to just present the toy model and use it to provide mathematical support for my impression that a really high IQ score is decent evidence of intelligence, but that the true intelligence of most intelligent people isn't actually reflected in their IQ. But to sanity test the model I needed to find the estimated correlation between a standard IQ test and g, the general intelligence factor. And reading up on that I realized how much more complicated the subject was than I had realized. I also found myself convinced by g, a Statistical Myth that g is probably a statistical artifact with no real meaning. Which is how this turned into 2 long and only tangentially related blog posts mushed into 1. I apologize for the length. I didn't intend it to turn out this way, but I hope it was an interesting read.

Sunday, January 31, 2010

Things I've learned at Google so far

Well I've been an employee at Google for about a month. So this seems like as good a time as any to reflect for a moment.

The first thing that I've learned is that internally Google is incredibly open, but externally there is a lot we can't say. I understand and support a lot of the reasons why it is so, but it can be frustrating. There is a lot of really cool technology at Google that people never hear about. The statistics of what Google deals with are astounding. The technology we use to deal with it is amazing. The way we scale is unbelievable. (I really wish I could go back and have a few discussions on software development methodology raising points about what has proven to scale at Google...) One random fact that I know I can say is that computations happen in our data centers with about half the power draw of the industry standard. I'm not allowed to say how we do it, but it is a rather amazing testimony to what smart people can accomplish when we put our minds to it.

Moving on, what about Google's culture? I would describe Google's culture as "creative chaos". There was some confusion about where I was supposed to be when I started. This resulted in the following phone call, "Hello?", "Hello Ben, this is Conner (that's my new manager), where are you?" "Mountain View." "Why are you there?" "Because this is where the recruiter said to go." "Good answer! Nice of them to tell me. Enjoy your week!" This caused me to ask an experienced Googler, "Is it always this chaotic?" The response I got was, "Yes! Isn't it wonderful?" That response sums up a lot about Google's culture. If you're unable to enjoy that kind of environment, then Google isn't the place for you.

Seriously, the corporate culture is based on hiring really smart people, giving them responsibilities, letting them know what problems the company thinks it should focus on, then letting them figure out how to tackle them. What management hierarchy there is is very flat. And people pay little attention to it unless there is a problem. You are expected to be a self-directed person, who solves problems by reaching out to whomever you need to and talking directly. Usually by email. The result is an organization which is in a constant state of flux as things are changing around you, usually for the better. With a permanent level of chaos and very large volumes of email. It is as if an entire company intuitively understood that defect rates are tied to distance on the corporate org-chart, and tried to solve it by eliminating all barriers to people communicating directly with whoever they need to communicate with. (Incidentally the point about defect rates and org charts is actually true, see Facts and Fallacies of Software Engineering for a citation.)

Speaking of email, working at Google you learn really fast how gmail is meant to be used. If you want to deal with a lot of email in gmail, here is what you need to do. Go into settings and turn keyboard shortcuts on. The ones you'll use a lot are j/k to move through email threads, n to skip to the next message, and the space bar to page through text. And m to hide any active thread that you're not interested in (direct emails to you will still show up). There are other shortcuts, but this is enough to let you skim through a lot of email fairly quickly without touching the mouse too much. Next go into labels and choose to show all labels. Your labels are basically what you'd call folders in another email client. (Unfortunately they are not hierarchical, but they do work.) Next as you get email, you need to be aggressive about deciding what you need to see, versus what is context specific. Anything that is context specific you should add a filter for, that adds a label, and skips the inbox. Nothing is lost, you can get to the emails through the list of labels on the left-hand side of your screen in gmail. But now various kinds of automated emails, lower priority mailing lists, and so on won't distract you from your main email until you go looking for them.

When you combine all of these options with gmail's auto-threading features, it is amazing how much more efficiently you can handle email. In fact this is exactly the problem that gmail was invented to handle. Because this was the problem that Paul Buchheit was trying to solve for himself when he started gmail. It is worth pointing out that Paul Buchheit was a software engineer at Google. He didn't need permission to write something like gmail. Corporate culture says that if you need something like that, you just go ahead and do it. In fact this is enshrined as an official corporate policy - engineers get 20% of their time to do with pretty much as they please, and are judged in part on how they use that time. I found a speech claiming that over half of Google's applications started as a 20% project. (I'm surprised that the figure is so low.) To get a sense of how much stuff people just do, visit Google Labs. No corporate decision. No central planning. People just do things like start putting up solar panels in the parking lot, and the next thing you know Google has one of the largest solar panel installations in the world and has decided to go carbon neutral. And the attitude that this is how you should operate is enshrined as official corporate policy!

You've got to love corporate policies like that. Speaking of nice corporate policies, Google has quite a few surprising ones. For instance they have benefits like heavily subsidized massage on site (I've still got to take my free hour massage for joining), free gym membership, and the like. Or take their attitude on dogs. Policy says that if your immediate co-workers don't object, you can bring your dog to work. Cats are different, however. Nothing against cats, but Google is a dog place and cats wouldn't be comfortable. (Yes, there are lots of dogs around the offices, and I've even seen people randomly wander over to find out if they can borrow someone else's dog for a while.) Hmmm. Sick day policy. Don't show up when you're sick and tell people why you're not showing up. Note what's missing. There is no limit to how much sick time you get if you need it. Oh, and food. Official Google policy is that at all times there shall be good, free food within 150 feet of every Googler. OK, admittedly the food quality does vary. That in Mountain View is better than anywhere else (the larger clientele base lets them have a much more varied selection). But you quickly learn why it is common for new Googlers to put on 15-20 pounds in their first year. (I'm trying to avoid that. We'll see if I succeed...)

But, you say, isn't this crazy? Doesn't it cost a fortune? The answer is that of course it does. But it provides value. People bond over food. Even if you're not bonding, having food close by makes short meals easier. And the temptation to continue working until dinnertime is very real. (Particularly if, as with me, you'd like to wait until rush hour is over before going home.) Obviously no normal CFO would crunch numbers and see things that way. But Google stands behind that decision, and the people who work there treasure the company for it.

Speaking of the people who work there, Google has amazing people. It is often said that engineers find working at Google a humbling experience. This is absolutely true. It took me less than a day to realize that the guy sitting next to me is clearly much smarter than I am, and he's nowhere near the top of the range of talent at Google. In fact, as best as I can tell, I'm pretty much average, though I'm trying hard to hold out a ray of hope that I'm slightly better than average.

Let me put that in context. The closest thing that I have to an estimate for my IQ is scoring 2340 on the GRE exam in 1991. Based on conversions that I've seen, that puts me at about the top 0.01% in IQ. Now I was really "on" that day, and I happen to believe that there are problems with the measurement of intelligence by an IQ test (a subject which I may devote a future blog post to), but without false modesty I wouldn't be surprised to find that I'm as high as the top 0.1% in general intelligence (however that could be measured). Which in most organizations means that I get thought of as being very smart.

However software development is a profession that selects for intelligence. By and large only good software developers bother applying to Google. And Google rejects the vast majority of their applicants. Granted the filtering process is far from perfect, but by the time you get through that many filters, someone like me is just average.

This leads to another point of interest. How astoundingly complex the company is. I believe that organizations naturally evolve until they are as complex as the people in them can handle. Well Google is tackling really big, complex problems, and is full of people who can handle a lot of complexity. The result? I've been told that I should expect that after 2 months I'll only be marginally useful. My initial learning curve should start to smooth out after about 6 months. And every year I should expect half of what I've learned to become obsolete. (Remember what I said about Google having a certain level of permanent chaos? If you're like me, it is exhilarating. But sometimes the line between exhilarating and terrifying can be hard to find...)

Oh, and what else did I learn? That we're hiring more people this year. :-)