Economics really is the "dismal science".

While this term originally referred to the idea that some people would always be on the brink of starvation, because people would keep reproducing until they exhausted the food supply, I think it now refers to the fact that economics is a "science" that behaves a lot more like two competing football teams with diametrically opposed solutions to every problem. A "science" shouldn't behave like that.

I've been thinking about whether one could create a simulation of an economy that would seek to answer the fundamental questions economists disagree over. My thought is that you could simulate people's behavior using "agents" that learn through some kind of reinforcement learning, combined with a tendency to imitate the people around them.

I think part of the challenge will be engineering just the right amount of stupidity into the artificially intelligent agents to accurately mirror human behavior.
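To make that concrete, here's a toy sketch in Python of what such an agent might look like. Everything here is invented for illustration: the strategies, the payoffs, and the parameter values. A real simulation would need an actual market model generating the rewards.

    import random

    # Hypothetical strategies an agent can follow each round.
    STRATEGIES = ["save", "spend", "invest"]

    class Agent:
        def __init__(self, epsilon=0.1, imitation_rate=0.05, learning_rate=0.2):
            self.values = {s: 0.0 for s in STRATEGIES}  # estimated payoff of each strategy
            self.epsilon = epsilon                # how often to explore at random
            self.imitation_rate = imitation_rate  # how often to copy a neighbour
            self.learning_rate = learning_rate
            self.wealth = 0.0

        def choose(self, neighbours):
            # The herd tendency: occasionally imitate the richest neighbour.
            if neighbours and random.random() < self.imitation_rate:
                richest = max(neighbours, key=lambda a: a.wealth)
                return max(richest.values, key=richest.values.get)
            # Otherwise act epsilon-greedily on our own experience; epsilon is
            # one dial for tuning in "just the right amount of stupidity".
            if random.random() < self.epsilon:
                return random.choice(STRATEGIES)
            return max(self.values, key=self.values.get)

        def learn(self, strategy, reward):
            self.wealth += reward
            old = self.values[strategy]
            self.values[strategy] = old + self.learning_rate * (reward - old)

    # One toy round: each agent observes a few random neighbours, acts,
    # and receives a made-up reward (a real model would compute this).
    agents = [Agent() for _ in range(100)]
    for agent in agents:
        strategy = agent.choose(random.sample(agents, 5))
        agent.learn(strategy, reward=random.gauss(0, 1))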

An interesting perspective that makes quite a bit of sense.

Markets reward predictability. The obsession of Europe's political leaders with avoiding a default, when a default is clearly what needs to happen (when you can't pay your debts, you default), creates unpredictability.

David McWilliams: Why the markets would thank us for defaulting (Independent.ie)

Bitcoin value versus search volume

I overlaid a graph of the Bitcoin-USD exchange rate (red/green/black) with a graph of search volume for the phrase “bitcoin” (blue), between January and September 2011:

A striking correlation, no?

In fact, it’s exactly what you’d expect. Since the supply of new Bitcoins is regulated such that it is essentially constant, you’d expect the value of a Bitcoin to grow and shrink in proportion to the rate at which people are seeking to acquire them.
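Putting a number on the relationship is straightforward if you have the two series as weekly data. Here is a quick sketch in Python with pandas; the file names and column names are hypothetical (Google Trends and the exchanges export something similar):

    import pandas as pd

    # Hypothetical exports: weekly BTC-USD closing prices and weekly
    # search volume for "bitcoin", each indexed by week.
    price = pd.read_csv("btc_usd_weekly.csv", parse_dates=["week"], index_col="week")
    trends = pd.read_csv("bitcoin_search_volume.csv", parse_dates=["week"], index_col="week")

    # Align the two series on week and compute the Pearson correlation.
    joined = price.join(trends, how="inner")
    print(joined["close"].corr(joined["search_volume"]))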

Proportionate A/B testing

More than once I’ve seen people ask questions like “In A/B testing, how long should you wait before knowing which option is best?”

I’ve found that the best solution is to avoid a binary decision about whether or not to use a variation. Instead, randomly select the variation to present to each user, choosing each variation with probability equal to the probability that it is the best, given whatever data you have so far.

How? The key is the beta distribution. Let’s say you have a variation with 30 impressions and 5 conversions. A beta distribution with alpha = 5 (the conversions) and beta = 30 − 5 = 25 (the impressions that didn’t convert) describes the probability distribution for the actual conversion rate. It looks like this:

A graph showing a beta distribution

This shows that while the expected conversion rate is about 1 in 6 (5/30 ≈ 0.17), the curve is relatively broad, indicating that there is still a lot of uncertainty about what the actual conversion rate is.

Let’s try the same for 50 conversions and 300 impressions (alpha = 50, beta = 300 − 50 = 250):

You’ll see that while the curve’s peak remains the same, the curve gets a lot narrower, meaning that we’ve got a lot more certainty about what the actual conversion rate is – as you would expect given 10X the data.
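Both curves are easy to reproduce. Here’s a minimal sketch using SciPy and Matplotlib, assuming nothing beyond the alpha and beta values already given:

    import numpy as np
    from scipy.stats import beta
    import matplotlib.pyplot as plt

    # Beta(5, 25): 5 conversions out of 30 impressions.
    # Beta(50, 250): 50 conversions out of 300 impressions.
    x = np.linspace(0, 0.5, 500)
    plt.plot(x, beta.pdf(x, 5, 25), label="5 / 30 impressions")
    plt.plot(x, beta.pdf(x, 50, 250), label="50 / 300 impressions")
    plt.xlabel("conversion rate")
    plt.ylabel("probability density")
    plt.legend()
    plt.show()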

Let’s say we have 5 variations, each with various numbers of impressions and conversions. A new user arrives at our website, and we want to decide which variation we show them.

To do this we employ a random number generator that can draw samples from a beta distribution we provide. Open source implementations of such a generator are available in most programming languages, including Java.

So we go through each of our variations and draw a random number from the beta distribution we’ve calculated for that variation. Whichever variation gets the highest random number is the one we show.
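In Python the whole selection step fits in a few lines; random.betavariate in the standard library does the sampling, and the variation counts here are made up. (This sketch assumes every variation already has at least one conversion and one non-conversion; the prior discussed below handles the cold-start case.)

    import random

    # Hypothetical (conversions, impressions) counts for each variation.
    variations = {
        "A": (5, 30),
        "B": (50, 300),
        "C": (2, 40),
    }

    def pick_variation(variations):
        best_name, best_sample = None, -1.0
        for name, (conversions, impressions) in variations.items():
            # Draw one sample from this variation's beta distribution.
            sample = random.betavariate(conversions, impressions - conversions)
            if sample > best_sample:
                best_name, best_sample = name, sample
        return best_name

    print(pick_variation(variations))  # often "B", but it varies run to run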

The beauty of this approach is that it achieves a really nice, perhaps optimal, compromise between sending traffic to new variations to test them and sending traffic to variations that we know to be good. If a variation doesn’t perform well, this algorithm will gradually give it less and less traffic, until eventually it’s getting none. Then we can remove it, secure in the knowledge that we aren’t removing it prematurely, with no need to set arbitrary significance thresholds.

This approach is easily extended to situations where, rather than a simple impression-conversion funnel, we have funnels with multiple steps (impression → click → signup, for example).

One question is: before you’ve collected any data about a particular variation, what should you “initialize” the beta distribution with? The default answer is (1, 1), since you can’t start with (0, 0). This effectively starts with a “prior expectation” of a 50% conversion rate, but as you collect data it will rapidly converge on reality.

Nonetheless, we can do better. Let’s say we know that variations tend to have about a 1% conversion rate; in that case you could start with (1, 99).

If you really want to take this to an extreme (which is what we do in our software!), suppose you have an idea of the distribution of the conversion rates themselves: say, roughly normal with a mean of 1% and a standard deviation of 0.5%.

Note that starting points of (1, 99), (2, 198), or (3, 297) will all give you a starting mean of 1%, but the higher the numbers, the longer they’ll take to converge away from that mean. If you plug these into Wolfram Alpha (e.g. “beta distribution (3,297)”) it will show you the standard deviation for each: (1, 99) gives 0.0099, (2, 198) gives 0.007, (3, 297) gives 0.00574, (4, 396) gives 0.005, and so on.

So, since we expect the standard deviation of the actual conversion rates to be 0.5% or 0.005, we know that starting with (4, 396) is about right.

You could find a smart way to derive the starting beta parameters with the desired standard deviation analytically, but it’s easier, and just as effective, to find them experimentally as I did.
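That said, if you do want the analytic route, the standard method-of-moments identities for the beta distribution give a closed form. A short sketch (the function name beta_prior is mine; the target mean and standard deviation are the 1% and 0.5% assumed above):

    # For Beta(a, b): mean = a / (a + b), and
    # variance = mean * (1 - mean) / (a + b + 1).
    # Solving for a and b given a desired mean and standard deviation:
    def beta_prior(mean, std):
        total = mean * (1 - mean) / std**2 - 1  # this is a + b
        return mean * total, (1 - mean) * total

    print(beta_prior(0.01, 0.005))  # approx (3.95, 391.05), close to (4, 396)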

Note that while I discovered this technique independently, I later learned that it is known as “Thompson sampling” and was originally developed in the 1930s (although to my knowledge this is the first time it has been applied to A/B testing).