I loved the Monty Hall problem, which we've discussed here before, and searched the web for other interesting problems. I discovered one that's similar, in fact an even more basic setup: the two envelope problem.
Let's say it's the same game show with Mr Monty Hall: he has two envelopes and tells you that both contain money, and that one contains twice as much as the other. You choose one, and then you're asked whether you'd like to swap it for the other.
If you approach the question by calculating the expected value of the other envelope, you run into a problem. There's a 50% chance the other envelope contains half as much as you currently have and a 50% chance it contains double. Say your envelope contains A (Euros/Pounds); that makes the expected value of the other envelope 1/2 × A/2 + 1/2 × 2A = 5A/4. Wait a second, 5A/4? You expect the other envelope to contain more, so you'd want to swap. But of course that's always the case: if the show's host keeps asking whether you'd like to swap, you'll never stop swapping.
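You can see where the reasoning goes off the rails with a quick simulation: fix the two amounts X and 2X first (the way the host actually does it), then compare always keeping with always swapping. The uniform distribution for X below is just a hypothetical choice; any distribution gives the same picture.

```python
import random

def play(switch, trials=100_000, seed=1):
    """Average payoff when the envelopes are fixed at X and 2X before you choose."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        x = random.uniform(1, 100)       # hypothetical smaller amount
        envelopes = [x, 2 * x]
        pick = random.randrange(2)       # you pick one envelope at random
        total += envelopes[1 - pick] if switch else envelopes[pick]
    return total / trials

keep, swap = play(switch=False), play(switch=True)
print(f"keep: {keep:.2f}  swap: {swap:.2f}")  # the two averages are essentially equal
```

Both strategies average out to 1.5 times the smaller amount. The 5A/4 calculation treats "A" as a fixed number in both branches, but "the other envelope has A/2" and "the other envelope has 2A" describe two different pairs of envelopes.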
Here's a variation. Say you meet somebody in a pub and you make a wager: you both count the money you're carrying, and whoever has less wins all of the other person's money. You have A Euros, so the most you can lose is A. But if you win, you gain more than A and walk away with more than 2A in total (because to win, the other person has to be carrying more than you). So you stand to gain more than you stand to lose, and thus you'd accept the wager. But again: both of you reason exactly the same way, both hoping to gain more than you'd lose. If you think it's a 50/50 chance you end up with a similar expected value as above: a 1/2 chance of losing A and a 1/2 chance of gaining more than A. Maths tells you to go for it.
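Again a simulation exposes the flaw. If both wallets are drawn from the same distribution (a hypothetical uniform one below), the game is symmetric and your average net result is zero; the "I can gain more than I can lose" argument quietly ignores that you're more likely to lose precisely when your own wallet is full.

```python
import random

def wallet_game(trials=200_000, seed=1):
    """Your average net result when both wallets come from the same distribution."""
    random.seed(seed)
    net = 0.0
    for _ in range(trials):
        mine = random.uniform(0, 100)    # hypothetical wallet contents
        theirs = random.uniform(0, 100)
        if mine < theirs:
            net += theirs                # you win the other person's money
        elif mine > theirs:
            net -= mine                  # they win yours
    return net / trials

print(f"average net gain: {wallet_game():+.2f}")  # hovers around zero
```

The big wins do exist, but they're exactly balanced by the fact that when you're carrying a lot, you're the likely loser.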
So, what's the solution? Where did the maths go wrong? Well, that's tough to say. Have your own go at it, or read what the Wikipedia entry says:
As a bonus I'd also like to present Simpson's paradox. There's no mathematical problem involved; as weird as it looks at first, it's just part of "there are lies, damned lies, and statistics". Here's a real-life example, based on an actual study of two treatments for kidney stones. Overall, Treatment A was successful in 78% of cases while Treatment B was successful in 83%. Hmm, Treatment B looks like the better option. But what if we break it down a bit further? With small kidney stones, A had a success rate of 93% while B only had 87%. And with large kidney stones? A wins yet again, this time 73% to 69%. So in both groups (and that covers every patient) Treatment A is more successful. And yet if you tally them up, suddenly B beats A by 83% to 78%.
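The numbers really do add up. The counts below are the group sizes commonly quoted for that kidney stone study; treat the exact counts as an assumption on my part, since only the percentages appear above. The trick is that Treatment B got the easy small-stone cases far more often, which inflates its overall rate.

```python
def rate(successes, total):
    """Success rate as a percentage."""
    return 100 * successes / total

# (successes, cases) per stone size -- counts as commonly reported, not from the post
data = {
    "A": {"small": (81, 87),  "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

for treatment, groups in data.items():
    wins = sum(s for s, n in groups.values())
    cases = sum(n for s, n in groups.values())
    per_group = {size: f"{rate(s, n):.0f}%" for size, (s, n) in groups.items()}
    print(treatment, per_group, f"overall: {rate(wins, cases):.0f}%")
```

Treatment A beats B within each stone size, but A treated mostly large (hard) stones and B mostly small (easy) ones, so B's overall average looks better.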
Here's another real-life example; this one even went to court. The University of California, Berkeley was sued in 1973 for bias against women applying to its graduate school. The overall tally supports the case: 44% of male applicants were accepted but only 35% of female applicants, which is a pretty significant difference. But if you look at the individual departments, the case falls apart:
A: 62% (of male applicants accepted), 82% (females)
B: 63%, 68%
C: 37%, 34%
D: 33%, 35%
E: 28%, 24%
F: 6%, 7%
In four out of six departments, female applicants were more likely to be accepted, and in the other two the differences are very small. If you haven't figured out how this can happen... work on your maths. Or check Wikipedia:
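For the curious, here's how the reversal works out numerically. The counts below are the figures usually quoted for the six largest Berkeley departments; the exact counts are an assumption on my part, since the post only gives percentages. Women mostly applied to the highly competitive departments (C through F), while men flocked to A and B, which admitted most applicants.

```python
# (admitted, applied) per department: men first, then women.
# Counts as usually quoted for the 1973 Berkeley data -- an assumption here.
departments = {
    "A": ((512, 825), (89, 108)),
    "B": ((353, 560), (17, 25)),
    "C": ((120, 325), (202, 593)),
    "D": ((138, 417), (131, 375)),
    "E": ((53, 191), (94, 393)),
    "F": ((22, 373), (24, 341)),
}

men = [m for m, w in departments.values()]
women = [w for m, w in departments.values()]
men_rate = 100 * sum(a for a, n in men) / sum(n for a, n in men)
women_rate = 100 * sum(a for a, n in women) / sum(n for a, n in women)
print(f"aggregate: men {men_rate:.0f}%, women {women_rate:.0f}%")
```

Aggregated over just these six departments, men come out well ahead, even though women have the higher acceptance rate in four of the six. The lurking variable is which department you apply to, just as stone size was the lurking variable in the kidney example.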