Lessons in statistics and modelling

The Mind of Monty Hall

10 Jun 2020 - Estimated reading time: 4 mins

Imagine you’re on a gameshow. There are three closed doors - one car, two goats behind them. You choose the right-hand door. The host then dramatically throws open one of the other doors, revealing a goat. Then… he offers you a switch.

Now there are two closed doors. What do you do? What are your odds of winning a new motor?

If you’ve never come across this problem before, you’ll probably think your odds are 50:50, and you’ll probably stick with your original choice¹.

If you’ve read “The Curious Incident of the Dog in the Night-Time” or seen the film “21”, you’re probably feeling smug. This is a famous puzzle, known as the ‘Monty Hall Problem’, which rose to fame after it was published by Marilyn vos Savant (who had a recorded IQ of 228 in the Guinness Book of World Records) in a newspaper column back in 1990.

She suggested you should switch, and that your likelihood of winning the car has increased to 2 in 3. This highly counter-intuitive suggestion led to a backlash from readers, many with PhDs. Nevertheless, the maths is clear, and simulating the problem reinforces this answer².
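
For the sceptical (and it turns out there were many), here is a minimal Python simulation of the classic setup. It’s only a sketch, and it assumes the standard rules: the host always knowingly opens a goat door, and always offers the switch.

```python
import random

def play(switch: bool) -> bool:
    """Play one round of the classic Monty Hall game; return True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    choice = random.choice(doors)
    # The host knowingly opens a door that is neither your pick nor the car.
    opened = random.choice([d for d in doors if d != choice and d != car])
    if switch:
        # Move to the one remaining closed door.
        choice = next(d for d in doors if d not in (opened, choice))
    return choice == car

trials = 100_000
print("stick: ", sum(play(False) for _ in range(trials)) / trials)  # ~1/3
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~2/3
```

Sticking wins roughly a third of the time; switching wins roughly two-thirds.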

However…

This isn’t the full picture. The answer above is correct if the switch is always offered, regardless of your original choice of door. But what if it isn’t? What if the host says ‘tough luck’ if you choose a goat at the outset, but offers you the chance to switch if you choose the car, on the off chance you’ll be naïve enough to fall for it?

Under this framework, the rule of thumb ‘people don’t offer you something for nothing’ has served you well. Under this framework, your likelihood of winning (if you switch) has fallen to zero.³ ⁴
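
To make that concrete, here is the same sort of sketch for a mischievous host who only ever dangles a switch in front of contestants already holding the car (the specific offer rule is my assumption, chosen to match the scenario above):

```python
import random

trials = 100_000
offered = 0
wins_if_switch = 0
for _ in range(trials):
    doors = [0, 1, 2]
    car = random.choice(doors)
    choice = random.choice(doors)
    if choice != car:
        continue  # 'tough luck' - no offer is made to goat-holders
    offered += 1
    # The host opens one of the two goat doors...
    shown = random.choice([d for d in doors if d not in (choice, car)])
    # ...and switching moves you onto the other goat.
    final = next(d for d in doors if d not in (choice, shown))
    wins_if_switch += (final == car)

print(f"switch offered in {offered} of {trials} games")
print("P(win | offer made, you switch):", wins_if_switch / max(offered, 1))  # 0.0
```

Every offered switch is a trap: conditional on the offer being made, switching loses with certainty.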

So what?...

At Hymans, we use models and analytics to help answer (or at least inform) interesting questions. There are a number of lessons I take from the Monty Hall problem:

1. The real world is more uncertain than risky

There is a distinction between the type of risk that we can pin down, statistically, with probabilities and distributions, and the deeper type of uncertainty⁵ which can’t be quantified in this fashion. Once we realise the host might link their decision to offer the switch to our original choice of door, the puzzle immediately shifts from a ‘statistical’ domain to a ‘psychological’ one. We can only tame it, and morph it back into the type of problem where maths can guide us, by making an assumption about the host’s behaviour (such as “assume they always offer the switch, regardless of your original choice”). However, this begs the question to a degree – the hazard lies in the possibility that we have made an incorrect assumption.

I’d argue that the vast majority of the hazards we come across are more uncertain than risky (and even when hazards can be expressed as a probability distribution, there’s often some degree of error in the process of fitting or calibrating this distribution, particularly in the tails).

2. Modelling is like a telescope

Modelling, in the right context, certainly has value, at least in my mind. It can help us understand things which aren’t immediately intuitive (like the 2/3 probability in the ‘traditional’ Monty Hall problem). However, modelling always involves some abstraction away from reality, and some simplifying assumptions. If we focus too much through a modelling lens, we can become blinded, or desensitised, to the hazards which are ‘assumed away’ under that particular framework.

To use an analogy – a model is like a telescope. It can help you see things in more detail that you wouldn’t otherwise be able to see, but it also narrows your focus. There’s value in stepping back every so often, and looking around at the night sky more broadly, otherwise you might miss the full moon behind you.

3. Rules of thumb are valuable

In a domain of uncertainty, rules of thumb can be really valuable. Old ‘simplistic’ heuristics can help guide robust decisions, and avoid the unseen, emergent risks of model error and calibration sensitivity.

I’m not suggesting we should abandon modelling in its entirety – I think the best approach is to use modelling, and broader thinking, in combination (recognising that modelling is always going to be ‘deep and narrow’ and will have blind spots).

Conclusion

To summarise:

  1. Intuition can fail but modelling and statistics can help us understand surprising truths...
  2. …however, always drill into the mismatch between models and reality – this is where hazards can hide.
  3. Simple rules of thumb can be more powerful than you think, and…
  4. …never trust a gameshow host!

1. The reason you stick could be an example of the ‘endowment effect’.

2. To help overcome the intuition barrier, assume there are 1,000 doors, not three. The chance that you choose the correct door initially is now 0.1%. When the host opens all but one of the remaining 999 doors (in a non-random fashion – he knows where the car is, and is purposely avoiding it), that is a strong indication that the remaining door he has left unopened is the winning door. The only scenario under which switching will lose is if you have chosen, by chance, the winning door at the outset.
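
If you would rather simulate that than take it on trust, here is a quick generalisation of the earlier sketch to n doors (again assuming a host who knowingly opens every losing door except one):

```python
import random

def play(n_doors: int, switch: bool) -> bool:
    """Monty Hall with n_doors; the host opens all losing doors but one."""
    car = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    # The host leaves closed: your door, plus the car - or, if you already
    # hold the car, one arbitrary goat door.
    other = car if choice != car else (choice + 1) % n_doors
    return (other if switch else choice) == car

trials = 50_000
print("stick: ", sum(play(1000, False) for _ in range(trials)) / trials)  # ~0.001
print("switch:", sum(play(1000, True) for _ in range(trials)) / trials)   # ~0.999
```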

3. In the interests of avoiding plagiarism, it’s worth highlighting that I came across the bulk of this line of argument in the interesting book The Monty Hall Problem: Beyond Closed Doors.

4. In fact, once we allow the host’s propensity to offer the switch to be conditional on your original choice, it’s possible to develop a framework under which your likelihood of winning can be any probability at all. For example, if the host offers the switch 1 in every 4 times a contestant chooses the wrong door, and offers it every time the correct door is chosen, then your likelihood of winning if you stick is now 2 in 3!
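
For anyone who wants to check that figure, here is the Bayes’ rule arithmetic, using the offer probabilities from the example above:

```python
# Exact check of the example above - no simulation needed.
p_correct = 1 / 3                # you picked the car at the outset
p_offer_given_correct = 1.0      # the host always tempts car-holders
p_offer_given_wrong = 1 / 4      # ...and tempts goat-holders 1 time in 4

p_offer = (p_correct * p_offer_given_correct
           + (1 - p_correct) * p_offer_given_wrong)
p_win_if_stick = p_correct * p_offer_given_correct / p_offer
print("P(win if you stick | offer made):", p_win_if_stick)  # 2/3 ~ 0.667
```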

5. Sometimes known as Knightian uncertainty, or ambiguity.
