# Luck(?) of the Draw

What is luck?  Is luck?  And, if you vote yea, is a belief in luck an obstacle to understanding probability?

This question came up on Twitter a couple of nights ago when Christopher Danielson and Michael Pershan were discussing Daniel Kahneman’s recent book, Thinking, Fast and Slow.  Specifically, they were talking about the fact that Kahneman doesn’t shy away from using the word luck when discussing probabilistic events.  This, of course, is the kind of thing that makes mathematically fastidious people cringe.  And Danielson and Pershan are nothing if not mathematically fastidious.  Spend like five minutes with their blogs.  So Danielson twittered a string of twitterings.

According to Danielson, luck is a “perceived bias in a random event.”  And, according to his interpretation of Kahneman, luck is composed of “happy outcomes that can be explained by probability.”  Let me see if I can define luck for myself, and then examine its consequences.

# What is luck?

I think, at its heart, luck is about whether we perceive the universe to be treating us fairly.  When someone is kind to us, we feel happy, but we can attribute our happiness to another’s kindness.  When someone is mean, we feel sad, but we can attribute our sadness to another’s meanness.  When we are made to feel either happy or sad by random events, however, there is no tangible other for us to thank or blame, and so we’ve developed this idea of being either lucky or unlucky as a substitute emotion.

But happy/sad and lucky/unlucky are relative feelings, and so there must be some sort of zero mark where we just feel…nothing.  Neutral.  With people, this might be tricky.  Certainly it’s subjective.  Really, my zero mark with people is based on what I expect of them.  If a stranger walks through a door in front of me without looking back, that’s roughly what I expect.  And, when that happens, I do almost no emoting whatsoever.  If, however, he holds the door for me, this stranger has exceeded my expectations, which makes me feel happy at this minor redemptive act.  If he sees me walking in behind him and slams the door in my face, he has fallen short of my expectations, which makes me sad and angry about him being an asshole.

And, in this regard, I think that feeling lucky is actually a much more rational response than being happy/sad at people, because with random events at least I can concretely define my expectation.  I have mathematical tools to tell me, with comforting accuracy, whether I should be disappointed with my lot in life; there is no need to rely on messy inductive inferences about human behavior.  So I feel lucky when I am exceeding mathematical expectations, unlucky when I’m falling short, and neutral when my experience roughly coincides with the expected value.  Furthermore, the degree of luck I feel is a function of how far I am above or below my expectation.  The more anomalous my current situation, the luckier/unluckier I perceive myself to be.

Let’s look at a couple examples of my own personal luck.

1. I have been struck by lightning zero times.  Since my expected number of lightning strikes is slightly more than zero, I’m doing better than I ought to be, on average.  I am lucky.  Then again, my expected number of strikes is very, very slightly more than zero, so I’m not doing better by a whole lot.  So yeah, I’m lucky in the lightning department, but I don’t get particularly excited about it because my experience and expectation are very closely aligned.
2. I have both my legs.  Since the expected number of legs in America is slightly less than two, I’m crushing it, appendage-wise.  Again, though, I’m extremely close to the expected value, so my luck is modest.  But, I am also a former Marine who spent seven months in Iraq during a period when Iraq was the explosion capital of the world.  My expected number of legs, conditioned on being a war veteran, is farther from two than the average U.S. citizen’s, so I am mathematically justified in feeling luckier at leg-having than most leg-having people in this country.

# Conclusion

Of course, for some people the lottery is terrible.  People have gambling problems.  People spend way too much money on all kinds of things they probably shouldn’t.  But that doesn’t mean that everyone—or even most people—who play are suckers.  Eating the occasional King Size Snickers probably won’t get your foot chopped off; smoking the occasional cigarette probably won’t kill you (sorry, kids); and buying the occasional lottery ticket will likely have about zero net impact on your finances.  Besides, isn’t it worth it to dream, for even a day, of having indoor hot tubs?  They’re so bubbly.

# Cereal Boxes Redux

In my last post, my students were wrestling with a question about cereal prizes.  Namely, if there is one of three (uniformly distributed) prizes in every box, what’s the probability that buying three boxes will result in my ending up with all three different prizes?  Not so great, turns out.  It’s only 2/9.  Of course this raises another natural question: How many stupid freaking boxes do I have to buy in order to get all three prizes?

There’s no answer, really.  No number of boxes will mathematically guarantee my success.  Just as I can theoretically flip a coin for as long as I’d like without ever getting tails, it’s within the realm of possibility that no number of purchases will garner me all three prizes.  But, just like the coin, students get the sense that it’s extremely unlikely that you’d buy lots and lots of boxes without getting at least one of each prize.  And they’re right.  So let’s tweak the question a little: How many boxes do I have to buy on average in order to get all three prizes?  That’s more doable, at least experimentally.

I have three sections of Advanced Algebra with 25–30 students apiece.  I gave them all dice to simulate purchases and turned my classroom—for about ten minutes at least—into a mathematical sweatshop churning out Monte Carlo shopping sprees.  The average numbers of purchases needed to acquire all prizes were 5.12, 5.00, and 5.42.  How good are those estimates?

Simulating cereal purchases with dice

Here’s my own simulation of 15,000 trials, generated in Python and plotted in R:
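I can’t embed the plot here, but a minimal sketch of what that simulation might look like (the function name is my own invention):

```python
import random

def boxes_until_all_prizes(n_prizes=3):
    """Buy boxes one at a time until every prize has been seen at least once."""
    seen = set()
    boxes = 0
    while len(seen) < n_prizes:
        seen.add(random.randrange(n_prizes))  # each prize equally likely
        boxes += 1
    return boxes

# 15,000 simulated shopping sprees
trials = [boxes_until_all_prizes() for _ in range(15000)]
print(sum(trials) / len(trials))  # hovers right around 5.5
```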

I ended up with a mean of 5.498 purchases, which is impressively close to the theoretical expected value of 5.5.  So our little experiment wasn’t too bad, especially since I’m positive there was a fair amount of miscounting, and precisely one die that’s still MIA from excessively enthusiastic randomization.

And now here’s where I’m stuck.  I can show my kids the simulation results.  They have faith—even though we haven’t formally talked about it yet—in the Law of Large Numbers, and this will thoroughly convince them the answer is about 5.5.  I can even tell them that the theoretical expected value is exactly 5.5.  I can even have them articulate that it will take them precisely one box to get the first new toy, and three boxes, on average, to get the last new toy (since the probability of getting it is 1/3, they feel in their bones that they should have to buy an average of 3 boxes to get it).  But I feel like we’re still nowhere near justifying that the expected number of boxes for the second toy is 3/2.

For starters, a fair number of kids are still struggling with the idea that the expected value of a random variable doesn’t have to be a value that the variable can actually attain.  I’m also not sure how to get at this next bit.  The absolute certainty of getting a new prize in the first box is self-evident.  The idea that, with a probability of success of 1/3, it ought “normally” to take 3 tries to succeed is intuitive.  But those just aren’t enough data points to lead to the general conjecture (and truth) that, if the probability of success for a Bernoulli trial is p, then the expected number of trials to succeed is 1/p.  And that’s exactly the fact we need to prove the theoretical solution.  Really, that’s basically all we need to solve the problem completely for any number of prizes.  After that, it’s straightforward:
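One empirical nudge toward that general fact: simulate Bernoulli trials for several values of p and watch the average number of trials until the first success track 1/p.  (A sketch, not classroom-tested; the helper name is mine.)

```python
import random

def trials_until_success(p):
    """Count Bernoulli(p) trials up to and including the first success."""
    count = 1
    while random.random() >= p:  # failure: try again
        count += 1
    return count

for p in (1.0, 1/2, 1/3, 1/4):
    avg = sum(trials_until_success(p) for _ in range(20000)) / 20000
    print(f"p = {p:.2f}: average trials = {avg:.2f}, 1/p = {1/p:.2f}")
```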

The probability of getting the first new prize is n/n.  The probability of getting the second new prize is (n-1)/n … all the way down until we get the last new prize with probability 1/n.  The expected number of boxes needed for each new prize is just the reciprocal of its probability, so, by linearity of expectation, we can add them all together…

If X is the number of boxes needed to get all n prizes, then

$E(X) = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n(\frac{1}{n} + \frac{1}{n-1} + \cdots + \frac{1}{1}) = n \cdot H_n$

where $H_n$ is the $n$th harmonic number.  Boom.
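As a quick sanity check of that formula (a sketch; `expected_boxes` is my own name for it), exact rational arithmetic confirms the n = 3 answer:

```python
from fractions import Fraction

def expected_boxes(n):
    """E(X) = n * H_n, computed exactly with rationals."""
    return n * sum(Fraction(1, k) for k in range(1, n + 1))

print(expected_boxes(3))         # 11/2, i.e. 5.5 boxes for three prizes
print(float(expected_boxes(6)))  # a six-prize (die-faces) version
```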

Oh, but yeah, I’m stuck.