# Luck(?) of the Draw

What is luck?  Is luck?  And, if you vote yea, is a belief in luck an obstacle to understanding probability?

This question came up on Twitter a couple of nights ago when Christopher Danielson and Michael Pershan were discussing Daniel Kahneman’s recent book, *Thinking, Fast and Slow*.  Specifically, they were talking about the fact that Kahneman doesn’t shy away from using the word luck when discussing probabilistic events.  This, of course, is the kind of thing that makes mathematically fastidious people cringe.  And Danielson and Pershan are nothing if not mathematically fastidious.  Spend like five minutes with their blogs.  So Danielson twittered a string of twitterings on the subject.

According to Danielson, luck is a “perceived bias in a random event.”  And, according to his interpretation of Kahneman, luck is composed of “happy outcomes that can be explained by probability.”  Let me see if I can define luck for myself, and then examine its consequences.

# What is luck?

I think, at its heart, luck is about whether we perceive the universe to be treating us fairly.  When someone is kind to us, we feel happy, but we can attribute our happiness to another’s kindness.  When someone is mean, we feel sad, but we can attribute our sadness to another’s meanness.  When we are made to feel either happy or sad by random events, however, there is no tangible other for us to thank or blame, and so we’ve developed this idea of being either lucky or unlucky as a substitute emotion.

But happy/sad and lucky/unlucky are relative feelings, and so there must be some sort of zero mark where we just feel…nothing.  Neutral.  With people, this might be tricky.  Certainly it’s subjective.  Really, my zero mark with people is based on what I expect of them.  If a stranger walks through a door in front of me without looking back, that’s roughly what I expect.  And, when that happens, I do almost no emoting whatsoever.  If, however, he holds the door for me, this stranger has exceeded my expectations, which makes me feel happy at this minor redemptive act.  If he sees me walking in behind him and slams the door in my face, he has fallen short of my expectations, which makes me sad and angry about him being an asshole.

And, in this regard, I think that feeling lucky is actually a much more rational response than being happy/sad at people, because with random events at least I can concretely define my expectation.  I have mathematical tools to tell me, with comforting accuracy, whether I should be disappointed with my lot in life; there is no need to rely on messy inductive inferences about human behavior.  So I feel lucky when I am exceeding mathematical expectations, unlucky when I’m falling short, and neutral when my experience roughly coincides with the expected value.  Furthermore, the degree of luck I feel is a function of how far I am above or below my expectation.  The more anomalous my current situation, the luckier/unluckier I perceive myself to be.

Let’s look at a couple examples of my own personal luck.

1. I have been struck by lightning zero times.  Since my expected number of lightning strikes is slightly more than zero, I’m doing better than I ought to be, on average.  I am lucky.  Then again, my expected number of strikes is very, very slightly more than zero, so I’m not doing better by a whole lot.  So yeah, I’m lucky in the lightning department, but I don’t get particularly excited about it because my experience and expectation are very closely aligned.
2. I have both my legs.  Since the expected number of legs in America is slightly less than two, I’m crushing it, appendage-wise.  Again, though, I’m extremely close to the expected value, so my luck is modest.  But, I am also a former Marine who spent seven months in Iraq during a period when Iraq was the explosion capital of the world.  My expected number of legs, conditioned on being a war veteran, is farther from two than that of the average U.S. citizen, so I am mathematically justified in feeling luckier at leg-having than most leg-having people in this country.

# Conclusion

Of course, for some people the lottery is terrible.  People have gambling problems.  People spend way too much money on all kinds of things they probably shouldn’t.  But that doesn’t mean that everyone—or even most people—who play are suckers.  Eating the occasional King Size Snickers probably won’t get your foot chopped off; smoking the occasional cigarette probably won’t kill you (sorry, kids); and buying the occasional lottery ticket will likely have about zero net impact on your finances.  Besides, isn’t it worth it to dream, for even a day, of having indoor hot tubs?  They’re so bubbly.

# Cereal Boxes Redux

In my last post, my students were wrestling with a question about cereal prizes.  Namely, if there is one of three (uniformly distributed) prizes in every box, what’s the probability that buying three boxes will result in my ending up with all three different prizes?  Not so great, turns out.  It’s only 2/9.  Of course this raises another natural question: How many stupid freaking boxes do I have to buy in order to get all three prizes?
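
That 2/9 is easy to check by brute force: with three equally likely prizes per box, there are only 3³ = 27 possible prize triples to inspect.  A quick sketch in Python:

```python
from fractions import Fraction
from itertools import product

# All 27 equally likely ways three boxes can assign one of three prizes each.
outcomes = list(product(range(3), repeat=3))

# "Success" means the three boxes yielded three different prizes.
wins = sum(1 for triple in outcomes if len(set(triple)) == 3)

prob = Fraction(wins, len(outcomes))
print(prob)  # 2/9
```

Six of the twenty-seven triples are permutations of all three prizes, and 6/27 reduces to 2/9.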

There’s no answer, really.  No number of boxes will mathematically guarantee my success.  Just as I can theoretically flip a coin for as long as I’d like without ever getting tails, it’s within the realm of possibility that no number of purchases will garner me all three prizes.  But, just like the coin, students get the sense that it’s extremely unlikely that you’d buy lots and lots of boxes without getting at least one of each prize.  And they’re right.  So let’s tweak the question a little: How many boxes do I have to buy on average in order to get all three prizes?  That’s more doable, at least experimentally.

I have three sections of Advanced Algebra with 25-30 students apiece.  I gave them all dice to simulate purchases and turned my classroom—for about ten minutes at least—into a mathematical sweatshop churning out Monte Carlo shopping sprees.  The average numbers of purchases needed to acquire all prizes were 5.12, 5.00, and 5.42.  How good are those estimates?

*Simulating cereal purchases with dice*

Here’s my own simulation of 15,000 trials, generated in Python and plotted in R:
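
A minimal version of the Python half of that simulation might look like the following (this is a reconstruction, not the original script, and the R plotting step is omitted):

```python
import random

def boxes_until_all_prizes(n_prizes=3, rng=random):
    """Simulate buying boxes until every prize has shown up at least once."""
    seen = set()
    boxes = 0
    while len(seen) < n_prizes:
        boxes += 1
        seen.add(rng.randrange(n_prizes))  # each prize equally likely
    return boxes

random.seed(0)  # for reproducibility
trials = [boxes_until_all_prizes() for _ in range(15000)]
mean = sum(trials) / len(trials)
print(round(mean, 3))  # lands near the theoretical 5.5
```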

I ended up with a mean of 5.498 purchases, which is impressively close to the theoretical expected value of 5.5.  So our little experiment wasn’t too bad, especially since I’m positive there was a fair amount of miscounting, and precisely one die that’s still MIA from excessively enthusiastic randomization.

And now here’s where I’m stuck.  I can show my kids the simulation results.  They have faith—even though we haven’t formally talked about it yet—in the Law of Large Numbers, and this will thoroughly convince them the answer is about 5.5.  I can even tell them that the theoretical expected value is exactly 5.5.  I can even have them articulate that it will take them precisely one box to get the first new toy, and three boxes, on average, to get the last new toy (since the probability of getting it is 1/3, they feel in their bones that they should have to buy an average of 3 boxes to get it).  But I feel like we’re still nowhere near justifying that the expected number of boxes for the second toy is 3/2.

For starters, a fair number of kids are still struggling with the idea that the expected value of a random variable doesn’t have to be a value that the variable can actually attain.  I’m also not sure how to get at this next bit.  The absolute certainty of getting a new prize in the first box is self-evident.  The idea that, with a probability of success of 1/3, it ought “normally” to take 3 tries to succeed is intuitive.  But those just aren’t enough data points to lead to the general conjecture (and truth) that, if the probability of success for a Bernoulli trial is p, then the expected number of trials to succeed is 1/p.  And that’s exactly the fact we need to prove the theoretical solution.  Really, that’s basically what we need to solve the problem completely for any number of prizes.  After that, it’s straightforward:

The probability of getting the first new prize is n/n.  The probability of getting the second new prize is (n-1)/n, and so on, all the way down until we get the last new prize with probability 1/n.  The expected numbers of boxes needed for each new prize are just the reciprocals of those probabilities, so we can add them all together…

If X is the number of boxes needed to get all n prizes, then

$E(X) = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n(\frac{1}{n} + \frac{1}{n-1} + \cdots + \frac{1}{1}) = n \cdot H_n$

where $H_n$ is the $n$th harmonic number.  Boom.
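
For the record, the 1/p fact is just the expected value of a geometric random variable, and the standard series manipulation delivers it:

$E(T) = \sum_{k=1}^{\infty} k(1-p)^{k-1}p = p \cdot \frac{1}{\left(1-(1-p)\right)^2} = \frac{1}{p}$

using $\sum_{k=1}^{\infty} kx^{k-1} = \frac{1}{(1-x)^2}$ with $x = 1-p$.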

Oh, but yeah, I’m stuck.

# Conditional Response

Aside from being entertaining, these DIRECTV commercials offer at least two important lessons about logic.

For starters, let’s name the propositions listed in the video:

• q: your cable is on the fritz
• r: you get frustrated
• t: your daughter gets thrown out of school
• u: your daughter meets undesirables
• v: your daughter ties the knot with undesirables
• w: you get a grandson with a dog collar

So the ad takes us through the following sequence of conditional statements:

$\begin{array}{lcl} q & \longrightarrow & r \\ r & \longrightarrow & s \\ s & \longrightarrow & t \\ t & \longrightarrow & u \\ u & \longrightarrow & v \\ v & \longrightarrow & w \end{array}$

Let’s be generous and accept that each statement, individually, is true.  Then we’re led sequentially along a nice string of propositions, beginning at q and ending with w.  Actually, there’s one more tacit proposition, p: you have cable.  So the commercial’s (implicit + explicit) logic looks something like this:

$p \longrightarrow q \longrightarrow r \longrightarrow s \longrightarrow t \longrightarrow u \longrightarrow v \longrightarrow w$

And therein lies our first logic lesson: conditional statements respect transitivity.  We can follow an unbroken path of propositions all the way from p to w, which means we can replace that whole string of implications with the statement, “If you have cable, then you’ll get a grandson with a dog collar.”  Symbolically:

$p \longrightarrow w$

We’ve accepted all the statements along the way, so we accept this one as well, which is both funny and logically sound.  DIRECTV has successfully made fun of the cable companies, and we’ve had a chuckle.  And if the commercial were to end there, everything would be hunky dory.  But it doesn’t end there.  It ends on the line, “Don’t have a grandson with a dog collar; get rid of cable…”  Which is to say, “If you don’t have cable, you won’t have a grandson with a dog collar.”  Or…

$\neg p \longrightarrow \neg w$

But that’s incorrect!  And that’s our second lesson: the technical name for this fallacy is denying the antecedent, or the inverse error.  To give you a more intuitive example, consider the propositions:

• p: you are a dog
• q: you are a mammal

$p \longrightarrow q$: “If you are a dog, then you are a mammal.”  True.

$\neg p \longrightarrow \neg q$: “If you are not a dog, then you are not a mammal.”  Obviously false.
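
If you’d rather not take my word for it, the fallacy can be checked mechanically: enumerate every truth assignment and look for a case where $p \longrightarrow q$ holds but $\neg p \longrightarrow \neg q$ fails.  A quick sketch:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Truth assignments where p -> q is true but (not p) -> (not q) is false.
counterexamples = [(p, q) for p, q in product([False, True], repeat=2)
                   if implies(p, q) and not implies(not p, not q)]
print(counterexamples)  # [(False, True)] -- e.g., a cat: not a dog, still a mammal
```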

It might very well be true that having cable leads to a grandson with a dog collar, but that certainly doesn’t mean getting rid of cable is enough to avoid one.

# War Games

Back in my previous existence as an artillery officer, I participated in the war for a little while.  Our main job—my Marines and I—was to provide counter-fire support for units in and around the city of Fallujah, Iraq.  Basically, whenever our guys started taking rocket and/or mortar fire, radar would track the source of those rounds and send us their point of origin as a target.  Then we would shoot at it.  Simple.  Kind of.

By the time I got to Fallujah, all the dumb bad guys had been selected out of the gene pool; the ones who were left knew that what they were doing was extremely risky, and they took steps to minimize that risk.  They tried their best to make every opportunity count, and our goal was to make it just as costly as possible for them to shoot at us.  It was a deadly serious game-theoretical problem for both sides.  A game measured in seconds.

# The (square) Root of Love

All right, fellas, huddle up.  We’re going to talk about the best way to find true love.  I mean, you can’t just go running around all willy-nilly hoping to bump into somebody great.  The world is a big place.  You need a strategy, man.  A dating plan of attack.

First, some ground rules, some general observations about romantic life, and a few restrictions in the interest of mathematical well-behavedness:

1. You are only going to meet a finite number of datable women over the course of your lifetime.  It will be a depressingly low number.
2. You are going to be an upstanding citizen and date only one woman at a time.
3. You will date a woman for some finite period of time, at which point you’ll make a decision either to pull the trigger and propose, or cut her loose.  Or, more likely, she’ll dump you first.
4. Once you propose, no takesies-backsies.  And once you cut a woman loose, you can’t ever reconsider her for marriage; she will hate you forever.
5. You are able to perfectly rank the women you have dated according to a strict, unambiguous order of preference.  Tie goes to the blonde.
6. You will encounter these women in random order.  That is, you are completely ignorant of where the next potential wife will stand in the overall rankings.
7. You will date a certain number of women without really considering any of them for a proposal.  In other words, you’ll take some time getting a feel for who’s out there.  Setting the bar.

In the world of mathematics, this is what’s known as an optimal stopping problem.  You’re going to date, and date, and date…, and stop.  Hopefully on the woman of your dreams (hence the optimal part).  In fact, this is one of those problems that’s so famous it goes by several (mildly sexist) names: the secretary problem, the sultan’s dowry problem, the fussy suitor problem.  Because it’s Valentine’s Day, we’ll call it the marriage problem.
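
As a preview of where this is headed, the strategy implied by rule 7, skip the first r women and then propose to the first one who beats them all, is easy to simulate.  Here’s a sketch using the famous cutoff of roughly n/e candidates (a classical result we haven’t derived here):

```python
import random
from math import e

def marries_the_best(n, r, rng=random):
    """Skip the first r candidates, then stop at the first one better than
    all of them (settling for the last candidate if nobody qualifies).
    Return True if the stopping rule lands on the overall best match."""
    ranks = list(range(n))  # 0 is the best possible match
    rng.shuffle(ranks)      # rule 6: candidates arrive in random order
    bar = min(ranks[:r])    # best candidate seen during the lookout phase (r >= 1)
    for rank in ranks[r:]:
        if rank < bar:      # first candidate to clear the bar: propose
            return rank == 0
    return ranks[-1] == 0   # forced to settle for the last candidate

random.seed(2)
n = 20
r = round(n / e)  # classical cutoff, about n/e
rate = sum(marries_the_best(n, r) for _ in range(20000)) / 20000
print(round(rate, 3))  # hovers near 1/e, about 0.37
```

Even with only twenty candidates and no fine-tuning, the look-then-leap rule finds the single best match more than a third of the time.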