Luck(?) of the Draw

What is luck?  Is luck?  And, if you vote yea, is a belief in luck an obstacle to understanding probability?

This question came up on Twitter a couple of nights ago when Christopher Danielson and Michael Pershan were discussing Daniel Kahneman’s recent book, Thinking, Fast and Slow.  Specifically, they were talking about the fact that Kahneman doesn’t shy away from using the word luck when discussing probabilistic events.  This, of course, is the kind of thing that makes mathematically fastidious people cringe.  And Danielson and Pershan are nothing if not mathematically fastidious.  Spend like five minutes with their blogs.  So Danielson twittered this string of twitterings:

According to Danielson, luck is a “perceived bias in a random event.”  And, according to his interpretation of Kahneman, luck is composed of “happy outcomes that can be explained by probability.”  Let me see if I can define luck for myself, and then examine its consequences.

What is luck?

I think, at its heart, luck is about whether we perceive the universe to be treating us fairly.  When someone is kind to us, we feel happy, but we can attribute our happiness to another’s kindness.  When someone is mean, we feel sad, but we can attribute our sadness to another’s meanness.  When we are made to feel either happy or sad by random events, however, there is no tangible other for us to thank or blame, and so we’ve developed this idea of being either lucky or unlucky as a substitute emotion.

But happy/sad and lucky/unlucky are relative feelings, and so there must be some sort of zero mark where we just feel…nothing.  Neutral.  With people, this might be tricky.  Certainly it’s subjective.  Really, my zero mark with people is based on what I expect of them.  If a stranger walks through a door in front of me without looking back, that’s roughly what I expect.  And, when that happens, I do almost no emoting whatsoever.  If, however, he holds the door for me, this stranger has exceeded my expectations, which makes me feel happy at this minor redemptive act.  If he sees me walking in behind him and slams the door in my face, he has fallen short of my expectations, which makes me sad and angry about him being an asshole.

And, in this regard, I think that feeling lucky is actually a much more rational response than being happy/sad at people, because with random events at least I can concretely define my expectation.  I have mathematical tools to tell me, with comforting accuracy, whether I should be disappointed with my lot in life; there is no need to rely on messy inductive inferences about human behavior.  So I feel lucky when I am exceeding mathematical expectations, unlucky when I’m falling short, and neutral when my experience roughly coincides with the expected value.  Furthermore, the degree of luck I feel is a function of how far I am above or below my expectation.  The more anomalous my current situation, the luckier/unluckier I perceive myself to be.

Let’s look at a couple examples of my own personal luck.

  1. I have been struck by lightning zero times.  Since my expected number of lightning strikes is slightly more than zero, I’m doing better than I ought to be, on average.  I am lucky.  Then again, my expected number of strikes is very, very slightly more than zero, so I’m not doing better by a whole lot.  So yeah, I’m lucky in the lightning department, but I don’t get particularly excited about it because my experience and expectation are very closely aligned.
  2. I have both my legs.  Since the expected number of legs in America is slightly less than two, I’m crushing it, appendage-wise.  Again, though, I’m extremely close to the expected value, so my luck is modest.  But, I am also a former Marine who spent seven months in Iraq during a period when Iraq was the explosion capital of the world.  My expected number of legs, conditioned on being a war veteran, is farther from two than the average U.S. citizen, so I am mathematically justified in feeling luckier at leg-having than most leg-having people in this country.

Which brings us back to this business of luck being a “perceived bias in a random event.”  I’m not convinced.  In fact, I’m absolutely sure I can be lucky in a game I know to be unbiased (within reasonable physical limits).  Let’s play a very simple fair game: we’ll flip a coin.  I’ll be heads, you’ll be tails, and the loser of each flip pays the winner a dollar.  Let’s say that, ten flips deep, I’ve won seven of them.  I’m up $4.00.  Of course, my expected profit after ten flips is $0, so I’m lucky.  And you, of course, are down $4.00, so you’re unlucky.  Neither of us perceives the game to be biased, and we both understand that seven heads in ten flips is not particularly strange (it happens about 12% of the time), and yet I’m currently on the favorable side of randomness, and you are not.  That’s not a perception; that’s a fact.  And bias has nothing to do with it, not even an imaginary one.
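That 12% figure is a quick binomial count; a minimal check in Python:

```python
from math import comb

# P(exactly 7 heads in 10 fair flips) = C(10, 7) / 2^10
p = comb(10, 7) / 2 ** 10
print(round(p, 3))  # 0.117, i.e. about 12%
```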

Now, in the long run, our distribution of heads and tails will converge toward its theoretical shape, and the fraction of flips each of us wins in this extremely long and boring game will settle toward one half.  In the long run, whether we’re talking about lightning strikes or lost limbs or tosses of a coin, nobody is lucky.  Of course, in the long run—as Keynes famously pointed out—we’ll be dead.  And therein, really, is why luck creeps into our lives.  At any point, in any category, we have had only a finite number of trials, which means that our experiences are very likely to differ from expectation, for good or ill.  In fact, in many cases, it would be incredibly unlikely for any of us to be neither lucky nor unlucky.  That would be almost miraculous.  So…

Is luck?

As in, does it really exist, or is it just a perceptual trick?  Do I only perceive myself to be lucky, as I said above, or am I truly?  I submit that it’s very real, provided that we define it roughly as I just have.  It’s even measurable.  It doesn’t have to be willful or anthropomorphic, just a deviation from expectation.  That shouldn’t be especially mathematically controversial.  I think the reason mathy people cringe around the idea of luck is that it’s so often used as an explanation, which is where everything starts to get a little shaky.  Because that’s not a mathematical question.  It’s a philosophical or—depending on your personal bent—a religious one.

If you like poker, you’d have a tough time finding a more entertaining read than Andy Bellin’s book, Poker Nation.  The third chapter is called “Probability, Statistics, and Religion,” which includes some gems like, “…if you engage in games of chance long enough, the experience is bound to affect the way you see God.”  It also includes a few stories about the author’s friend, Dave Enteles, about whom Bellin says, “Anecdotes and statistics cannot do justice to the level of awfulness with which he conducts his play.”  After (at the time) ten years of playing, the man still kept a cheat sheet next to him at the table with the hand rankings on it.  But all that didn’t stop Dave from being the leading money winner at Bellin’s weekly game during the entire 1999 calendar year.  “The only word to describe him at a card table during that time is lucky,” says Bellin, “and I don’t believe in luck.”

But there’s no choice, right, but to believe?  I mean, it happened.  Dave’s expectation at the poker table, especially at a table full of semi-professional and otherwise extremely serious and skillful players, is certainly negative.  Yet he not only found himself in the black, he won more than anybody else!  That’s lucky.  Very lucky.  And that’s also the absolute limit of our mathematical interest in the matter.  We can describe Dave’s luck, but we cannot explain it.  That way lies madness.

There are 2,598,960 distinct poker hands possible.  There are 3,744 ways to make a full house (three-of-a-kind plus a pair).  So, if you play 2,598,960 hands, your expected number of full houses during that period is 3,744.  Of course, after 2.6 million hands, the probability of being dealt precisely 3,744 full houses isn’t particularly large.  Most people will have more and be lucky, or fewer and be unlucky.  That’s inescapable.  Now, why you fall on one side and not the other is something you have to reconcile with your favorite Higher Power.
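Those two counts are straightforward combinatorics; a quick sketch in Python to verify them:

```python
from math import comb

total_hands = comb(52, 5)  # all distinct 5-card hands
# full house: a rank for the trips (3 of its 4 suits),
# then a different rank for the pair (2 of its 4 suits)
full_houses = 13 * comb(4, 3) * 12 * comb(4, 2)
print(total_hands, full_houses)  # 2598960 3744
```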

Bellin’s final thoughts on luck:

I know in my heart that if Dave Enteles plays 2,598,960 hands of poker in his life, he’s going to get way more than his fair share of 3,744 full houses.  Do you want to know why?  Well, so do I.

And, really, that’s the question everybody who’s ever considered his/her luck struggles to answer.  No one has any earthly reason to believe she will win the lottery next week.  But someone will.  Even with a negative expectation, someone will come out way, way ahead.  And because of that, we can safely conclude that that person has just been astronomically lucky.  But why Peggy McPherson?  Why not Reggie Ford?  Why not me?  Thereon we must remain silent.

Is a belief in luck an obstacle to understanding probability?

I don’t see why it should be.  At least not if we’re careful.  If you believe that you are lucky in the sense of “immune to the reality of randomness and probabilistic events,” then that’s certainly not good.  If you believe that you are lucky in the sense of “one of the many people on the favorably anomalous side of a distribution,” then I don’t think there is any harm in it.  In fact, it’s important to acknowledge that random variables built from empirical measurements often converge toward their theoretical limits quite slowly.  In other words, many random variables are structured in such a way as to admit luck.  That’s worth knowing and thinking about.

Every day in Vegas, somebody walks up to a blackjack table with an anomalous number of face cards left in the shoe and makes a killing.  There is no mystery in it.  If you’re willing to work with a bunch of people, spend hours and hours practicing keeping track of dozens of cards at a time, and hide from casino security, you can even do it with great regularity.  There are how-to books.  You could calculate the exact likelihood of any particular permutation of cards in the shoe.  I understand the probabilistic underpinnings of the game pretty well.  I can play flawless Basic Strategy without too much effort.  I know more about probability than most people in the world.  And yet, if I happen to sit at a table with a lot more face cards than there ought to be, I can’t help but feel fortunate at this happy accident.  For some reason, or for no reason, I am in a good position rather than a bad one; I am here at a great table instead of the guy two cars behind me on the Atlantic City Expressway.  That’s inexplicable.

And that’s luck.

Label Maker

If you’ve perused this blog, you know that I love probability.  I was fortunate enough to see Al Cuoco and Alicia Chiasson give a really cool presentation at this year’s NCTM conference about exploring the probabilities of dice sums geometrically and algebraically.  Wheelhouse.  After we got done looking at some student work and pictures of distributions, Al nonchalantly threw out the following question:

Is it possible to change the integer labels on two dice [from the standard 1,2,3,4,5,6] such that the distribution of sums remains unchanged?

Of course he was much cooler than that.  I’ve significantly nerded up the language for the sake of brevity and clarity.  Still, good question, right?  And of course since our teacher has posed this tantalizing challenge, we know that the answer is yes, and now it’s up to us to fill in the details.  Thusly:

First let’s make use of the Cuoco/Chiasson observation that we can represent the throw of a standard die with the polynomial

P(x) = x^1 + x^2 + x^3 + x^4 + x^5 + x^6

When we do it this way, the exponents represent the label values for each face, and the coefficients represent frequencies of each label landing face up (relative to the total sample space).  This is neither surprising, nor super helpful.  Each “sum” occurs once out of the six possible.  We knew this already.

What is super helpful is that we can include n dice in our toss by expanding n factors of P(x).  For two dice (the number in question), that looks like

P(x)^2 = x^2+2x^3+3x^4+4x^5+5x^6+6x^7+5x^8+4x^9+3x^{10}+2x^{11}+x^{12}

You can easily confirm that this jibes with the standard diagram.  For instance the sum of 7 shows up most often (6 out of 36 times), which helps casinos make great heaps of money off of bettors on the come.  Take a moment.  Compare.
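Expanding P(x)^2 is the same thing as pairing up faces and collecting sums, which a few lines of Python will confirm:

```python
from collections import Counter
from itertools import product

die = [1, 2, 3, 4, 5, 6]
# each ordered pair of faces contributes x^(a+b); the Counter
# collects the coefficients of the expanded polynomial
coeffs = Counter(a + b for a, b in product(die, die))
print(sorted(coeffs.items()))
# [(2, 1), (3, 2), (4, 3), (5, 4), (6, 5), (7, 6),
#  (8, 5), (9, 4), (10, 3), (11, 2), (12, 1)]
```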

Okay, so now we know that the standard labels yield the standard distribution of sums.  The question, though, is whether there are any other labels that do so as well.  Here’s where some abstract algebra comes in handy.  Let’s assume that there are, in fact, dice out there that satisfy this property.  We can represent those with polynomials as well.  Each die still contributes one monomial x^label per face (each of its six faces still comes up 1 out of 6 times), but we don’t yet know the exponents (labels).  So let’s say the labels on the two dice are, respectively

(a_1,a_2,a_3,a_4,a_5,a_6) and (b_1,b_2,b_3,b_4,b_5,b_6).

If we want the same exact sum distribution, it had better be true that

P(x)^2 = (x^{a_1}+x^{a_2}+x^{a_3}+x^{a_4}+x^{a_5}+x^{a_6}) (x^{b_1}+x^{b_2}+x^{b_3}+x^{b_4}+x^{b_5}+x^{b_6}).

For future convenience (trust me), let’s call the first polynomial factor on the right hand side Q(x).  Great!  Now we just have to figure out what all the a’s and b’s are.  It helps that our polynomials belong to the ring Z[x], which is a unique factorization domain.  A little factoring practice will show us that

P(x)^2 = x^2(x+1)^2(x^2+x+1)^2(x^2-x+1)^2.

We just have to rearrange these irreducible factors to get the answer we’re looking for.  Due to a theorem that is too long and frightening to reproduce here [waves hands frantically], we know that the unique factorization of Q(x)—our polynomial with unknown exponents—must be of the form

Q(x) = x^s(x+1)^t(x^2+x+1)^u(x^2-x+1)^v,

where s, t, u, and v are all either 0, 1, or 2.  So that’s good news, not too many possibilities to check.  In fact, we can make our lives a little easier.  First of all, notice that Q(1) must equal 6.  Right?  Evaluating at x = 1 turns every monomial into 1, so Q(1) just counts the die’s six faces.  But then substituting 1 into the factored form gives us

Q(1) = 1^s · 2^t · 3^u · 1^v = 6.

Clearly this means that t and u have to be 1, and we just have to nail down s and v.  Well, if we take a look at Q(0), we also quickly realize that s can’t be 0 (a nonzero constant term would mean a face labeled 0).  It can’t be 2 either, because, if s is 2, then the smallest sum we could obtain on our dice would be 3—which is absolutely no good at all.  So s is 1 as well.  Let’s see what happens in our three remaining cases, when v is 0, 1, and 2:

v=0: Q(x)=x^1+x^2+x^2+x^3+x^3+x^4

v=1: Q(x)=x^1+x^2+x^3+x^4+x^5+x^6

v=2: Q(x)=x^1+x^3+x^4+x^5+x^6+x^8

Check out those strange and beautiful labels!  We can mark up the first die with the exponents from the v = 0 case, and the second die with the v = 2 case.  When we multiply those two polynomials together we get back P(x)^2, which is precisely what we needed (check if you like)!  Our other option, of course, is to label two dice with the v = 1 case, which corresponds to a standard die.  And, thanks to unique factorization, we can be sure that there are no other cases.  Not only have we found some different labels, we’ve found all of them!

If the a’s on the first die are (1,2,2,3,3,4), then the b’s end up being (1,3,4,5,6,8), and vice versa.  And, comfortingly, if the a’s on the first die are (1,2,3,4,5,6), then so are the b’s on the second one.

Two dice with the v = 1 label are what you find at every craps table in the country.  One die of each of the other labels forms a pair of Sicherman dice, and they are the only other dice that yield the same sum distribution.  You could drop Sicherman dice in the middle of Vegas, and nobody would notice.  At least in terms of money changing hands.  The pit boss might take exception.  Come to think of it, I cannot stress how important it is that you not attempt to switch out dice in Vegas.  Your spine is also uniquely factorable…into irreducible vertebrae.
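It takes only a few lines of Python to confirm that the Sicherman labels really do reproduce the standard sum distribution:

```python
from collections import Counter
from itertools import product

def sum_dist(d1, d2):
    """Distribution of sums for a pair of labeled dice."""
    return Counter(a + b for a, b in product(d1, d2))

standard = [1, 2, 3, 4, 5, 6]
sicherman = ([1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8])
print(sum_dist(standard, standard) == sum_dist(*sicherman))  # True
```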

*This whole proof has been cribbed from Contemporary Abstract Algebra (2nd ed.), by Joseph A. Gallian.  If you want the whole citation, click his name and scroll down.*

Building a Probability Cannon

For just a moment, let’s consider a staple of the second year algebra curriculum: the one-dimensional projectile motion problem.  (I used to do an awful lot of this sort of thing.)  It’s not a fantastic problem—it’s overdone, and often under-well—but it’s representative of many of our standard modeling problems in some important ways:

  1. Every one of my students has participated in the activity we’re modeling.  They’ve thrown, dropped, and shot things.  They’ve jumped and fallen and dove from various heights.  In other words, they have a passing acquaintance with gravity.
  2. Data points are relatively easy to come by.  All we need is a stopwatch and a projectile-worthy object.  If that’s impractical, then there are also some great and simple—and free—simulations out there (PhET, Angry Birds), and some great and simple—and free—data collection software as well (Tracker).
  3. We only need a few data points to fix the parameters.  For a general quadratic model, we only need three data points to determine the particular solution.  Really we only need two, if we assume constant acceleration.
  4. Experiments are easy to repeat.  Drop/throw/shoot the ball again.  Run the applet again.
  5. The model conforms to a fairly nice and well-behaved family of functions.  Quadratics are continuous and differentiable and smooth, and they’re generally willing to submit to whatever mathematical poking we’re wont to visit upon them without getting gnarly.
  6. Theoretical predictions are readily checked.  Want to know, for instance, when our projectile will hit the ground?  Find the sensible zero of the function (it’s pretty easy to sanity check its reasonableness—see #1 above).  Look at a table of values and step through the motion second-by-second (use a smaller delta t for an even better sense of what’s going on).  Click RUN on your simulation, and wait until it stops (self-explanatory).  And, if you’re completely dedicated, build yourself a cannon and put your money where your mouth is.
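As a tiny illustration of condition #3, three (time, height) points pin down a quadratic exactly.  The data below are hypothetical measurements of a tossed ball, not real ones:

```python
# Fit h(t) = a t^2 + b t + c exactly through three measured points
(t0, h0), (t1, h1), (t2, h2) = (0.0, 2.0), (1.0, 7.1), (2.0, 2.4)

c = h0                             # t0 = 0, so c falls out immediately
# with t1 = 1 and t2 = 2, the remaining 2x2 system reduces to:
a = ((h2 - c) - 2 * (h1 - c)) / 2  # from 4a + 2b = h2-c and a + b = h1-c
b = (h1 - c) - a

print(round(a, 2), round(b, 2), round(c, 2))  # -4.9 10.0 2.0
```

The recovered a ≈ −4.9 is half the gravitational acceleration, just as the model predicts.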

Of course I’ve chosen to introduce this discussion with the example of projectile motion, but there are plenty of other candidates: length/area/volume, exponential growth and decay, linear speed and distance.  Almost without exception (in the algebra classroom), we model phenomena that satisfy the six conditions listed above.

Almost.  Because then we run into probability, and probability isn’t so tame.  I’ll grant that #1 still holds (though I’m not entirely convinced it holds in the same sense), but the other five conditions go out the window.

Data points are NOT easy to come by.

I can already hear you protesting.  “Flip a coin…that’s a data point!”  Well, yes.  Sort of.  But in the realm of probability, individual data points are ambiguous.  The ordered pair (3rd flip, heads) is very different from (3 seconds, 12 meters).  They’re both measurements, but the first one has much, much higher entropy.  Interpretation becomes problematic.  Here’s another example: My meteorologist’s incredibly sophisticated model (dart board?) made the following prediction yesterday: P(rain) = 0.6.  In other words, the event “rain” was more likely than the event “not rain.”  It did not rain yesterday.  How am I to understand this un-rain?  Was the model right?  If so, then I’m not terribly surprised it didn’t rain.  Was the model wrong?  If so, then I’m not terribly surprised it didn’t rain.  In what sense have I collected “data?”

And what if I’m interested in a compound event?  What if I want to know not just the result of a lone flip, but P(exactly 352 heads in 1000 flips)?  Now a single data point suddenly consists of 1000 trials.  So it turns out data points have the potential to be rather difficult to come by, which brings us to…

We need an awful lot of data points.

I’m not talking about our 1000-flip trials here, which was just a result of my arbitrary choice of one particular problem.  I mean that, no matter what our trials consist of, we need to do a whole bunch of them in order to build a reliable model.  Two measurements in my projectile problem determine a unique curve and, in effect, answer any question I might want to ask.  Two measurements in a probabilistic setting tell me just about nothing.

Consider this historical problem born, like many probability problems, from gambling.  On each turn, a player rolls three dice and wins or loses money based on the sum (fill in your own details if you want; they’re not so important for our purposes here).  As savvy and degenerate gamblers, we’d like to know which sums are more or less likely.  We have some nascent theoretical ideas, but we’d like to test one in particular.  Is the probability of rolling a sum of 9 equal to the probability of rolling a sum of 10?  It seems it should be: after all, there are six ways to roll a 9 ({6,2,1},{5,3,1},{5,2,2},{4,4,1},{4,3,2},{3,3,3}), and six ways to roll a 10 ({6,3,1},{6,2,2},{5,4,1},{5,3,2},{4,4,2},{4,3,3})*.  Done, right?

It turns out this isn’t quite accurate.  For instance, the combination {6,2,1} treats all of the 3! = 6 permutations of those numbers as one event, which is bad mojo.  If you go through all 216 possibilities, you’ll find that there are actually 27 ways to roll a 10, and only 25 ways to roll a 9, so the probabilities are in fact unequal.  Okay, no biggie, our experiment will certainly show this bias, right?  Well, it will, but if we want to be 95% experimentally certain that 10 is more likely, then we’ll have to run through about 7,600 trials!  (For a derivation of this number—and a generally more expansive account—see Michael Lugo’s blog post.)  In other words, the Law of Large Numbers is certainly our friend in determining probabilities experimentally, but it requires, you know, large numbers.
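The 27-versus-25 count is easy to check by brute force; a sketch in Python:

```python
from itertools import product

# walk all 6^3 = 216 ordered rolls of three dice
counts = {9: 0, 10: 0}
for roll in product(range(1, 7), repeat=3):
    if sum(roll) in counts:
        counts[sum(roll)] += 1
print(counts)  # {9: 25, 10: 27}
```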

*If you’ve ever taught probability, you know that this type of dice-sense is rampant.  Students consistently collapse distinct events based on superficial equivalence rather than true frequency.  Ask a room of high school students this question: “You flip a coin twice.  What’s the probability of getting exactly one head?”  A significant number will say 1/3.  After all, there are three possibilities: no heads, one head, two heads.  Relatively few will immediately notice, without guidance, that “one head” is twice as likely as the other two outcomes.

Experiments are NOT easy to repeat.

I’ve already covered some of the practical issues here in terms of needing a lot of data points.  But beyond all that, there are also philosophical difficulties.  Normally, in science, when we talk about repeating experiments, we tend to use the word “reproduce.”  Because that’s exactly what we expect/are hoping for, right?  I conduct an experiment.  I get a result.  I (or someone else) conduct the experiment again.  I (they) get roughly the same result.  Depending on how we define our probability experiment, that might not be the case.  I flip a coin 10 times and count 3 heads.  You flip a coin 10 times and count 6 heads.  Experimental results that differ by 100% are not generally awesome in science.  In probability, they are the norm.

As an interesting, though somewhat tangential observation, note that there is another strange philosophical issue at play here.  Not only can events be difficult to repeat, but sometimes they are fundamentally unrepeatable.  Go back to my meteorologist’s prediction for a moment.  How do I repeat the experiment of “live through yesterday and see whether it rains?”  And what does a 60% chance of rain even mean?  To a high school student (teacher) who deals almost exclusively in frequentist interpretations of probability, it means something like, “If we could experience yesterday one million times, about 600,000 of those experiences would include rain.”  Which sounds borderline crazy.  And the Bayesian degree-of-belief interpretation isn’t much more comforting: “I believe, with 60% intensity, that it will rain today.”  How can we justify that level of belief without being able to test its reliability by being repeatedly correct?  Discuss.

Probability distributions can be unwieldy.

Discrete distributions are conceptually easy, but cumbersome.  Continuous distributions are beautiful for modeling, but practically impossible for prior-to-calculus students (not just pre-calculus ones).  Even with the ubiquitous normal distribution, there is an awful lot of hand-waving going on in my classroom.  Distributions can make polynomials look like first-grade stuff.

Theoretical predictions aren’t so easily checked.

My theoretical calculations for the cereal box problem tell me that, on average, I expect to buy between 5 and 6 boxes to collect all the prizes.  But sometimes when I actually run through the experiment, it takes me northward of 20 boxes!  This is a teacher’s nightmare.  We’ve done everything right, and then suddenly our results are off by a factor of 4.  Have we confirmed our theory?  Have we busted it?  Neither?  Blurg.  So what are we to do?
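(For reference, that 5-to-6 figure is the classic coupon-collector expectation for three prizes, and the 20-box runs really do happen; a quick computation in Python:)

```python
from fractions import Fraction

# Coupon collector with 3 prizes: expected boxes = 3/3 + 3/2 + 3/1
expected = sum(Fraction(3, k) for k in (3, 2, 1))
print(expected)  # 11/2, i.e. 5.5 boxes on average

# Chance of STILL missing a prize after 20 boxes (inclusion-exclusion)
p_over_20 = 3 * Fraction(2, 3) ** 20 - 3 * Fraction(1, 3) ** 20
print(float(p_over_20))  # roughly 0.0009 -- rare, but it happens
```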

We are to build a probability cannon!

With projectile motion problems, building a cannon is nice.  It’s cool.  We get to launch things, which is awesome.  With probability, I submit that it’s a necessity.  We need to generate data: it’s the raw material from which conjecture is built, and the touchstone by which theory is tested.  We need to (metaphorically) shoot some stuff and see where it lands.  We need…simulations!

If your model converges quickly, then hand out some dice/coins/spinners.  If it doesn’t, teach your students how to use their calculators for something besides screwing up order of operations.  Better yet, teach them how to tell a computer to do something instead of just watching/listening to it.  (Python is free.  If you own a Mac, you already have it.)  Impress them with your wizardry by programming, right in front of their eyes, and with only a few lines of code, dice/coins/spinners that can be rolled/flipped/spun millions of times with the push of a button.  Create your own freaking distributions with lovely, computer-generated histograms from your millions of trials.  Make theories.  Test theories.  Experience anomalous results.  See that they are anomalous.  Bend the LLN to your will.
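A minimal cannon might look like this in Python, with a crude text histogram standing in for the lovely computer-generated kind:

```python
import random
from collections import Counter

random.seed(1)  # reproducible luck

N = 1_000_000  # a million throws of two dice, in a few lines
sums = Counter(random.randint(1, 6) + random.randint(1, 6)
               for _ in range(N))

# empirical frequency of each sum, with a bar per ~5000 hits
for s in range(2, 13):
    bar = '#' * (sums[s] // 5000)
    print(f'{s:2d} {sums[s] / N:.4f} {bar}')
```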

Exempli Gratia

NCTM was kind enough to tweet the following problem today, as I was in the middle of writing this post.  Roughly: Kim and Kyle each roll a standard die.  What is the probability that Kim’s number is greater than Kyle’s?

Okay, maybe the probability is just 1/2.  I mean, any argument I make for Kim must be symmetrically true for Kyle, right?  But wait, it says “greater than” and not “greater than or equal to,” so maybe that changes things.  Kim’s number will be different from Kyle’s most of the time, and it will be greater half of the times it’s different, so…slightly less than 1/2?  Or maybe I should break it down into mutually exclusive cases of {Kim rolls 1, Kim rolls 2, … , Kim rolls 6}.  You know what, let’s build a cannon.  Here it is, in Mathematica:
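Or, if Mathematica isn’t handy, the same cannon in a few lines of Python:

```python
import random

random.seed(2)
N = 1_000_000  # a million head-to-head rolls
wins = sum(random.randint(1, 6) > random.randint(1, 6)
           for _ in range(N))
print(wins / N)  # hovers a bit below 1/2
```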

Okay, so it looks like my second conjecture is right; the probability is a little less than 1/2.  Blammo!  And it only took (after a few seconds of typing the code) 1.87 seconds to do a million trials.  Double blammo!  But how much less than 1/2?  Emboldened by my cannon results, I can turn back to the theory.  Now, if Kyle rolls a one, Kim will roll a not-one with probability 5/6.  Ditto two, three, four, five, and six.  So Kim’s number is different from Kyle’s 5/6 of the time.  And—back to my symmetry argument—there should be no reason for us to believe one or the other person will roll a bigger number, so Kim’s number is larger 1/2 of 5/6 of the time, which is 5/12 of the time.  Does that work?  Well, since 5/12 ≈ 0.4167, which is convincingly close to 0.416159, I should say that it does.  Triple blammo and checkmate!

But we don’t have to stop there.  What if I remove the condition that Kim’s number is strictly greater?  What’s the probability her number is greater than or equal to Kyle’s?  Now my original appeal to symmetry doesn’t require any qualification.  The probability ought simply to be 1/2.  So…
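Re-aiming the cannon at “greater than or equal to” (again sketched in Python rather than the original Mathematica):

```python
import random

random.seed(4)
N = 1_000_000
hits = sum(random.randint(1, 6) >= random.randint(1, 6)
           for _ in range(N))
print(hits / N)  # noticeably above 1/2 (!)
```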

What what?  Why is the probability greater than 1/2 now?  Oh, right.  Kim’s roll will be equal to Kyle’s 1/6 of the time, and we already know it’s strictly greater than Kyle’s 5/12 of the time.  Since those two outcomes are mutually exclusive, we can just add the probabilities, and 1/6 + 5/12 = 7/12, which is about (yup yup) 0.583.  Not too shabby.

What if we add another person into the mix?  We’ll let Kevin join in the fun, too.  What’s the probability that Kim’s number will be greater than both Kyle’s and Kevin’s?
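One more shot from the (Python) cannon, with Kevin in the mix:

```python
import random

random.seed(3)
N = 1_000_000

def kim_wins():
    # Kim needs to beat BOTH other rolls strictly
    kim, kyle, kevin = (random.randint(1, 6) for _ in range(3))
    return kim > kyle and kim > kevin

p = sum(kim_wins() for _ in range(N)) / N
print(p)
```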

It looks like the probability of Kim’s number being greater than both of her friends’ might just be about 1/4.  Why?  I leave it as an exercise to the reader.

That tweet-sized problem easily becomes an entire lesson with the help of a relatively simple probability cannon.  If that’s not an argument for introducing them into your classroom, I don’t know what is.

Ready.  Aim.  Fire!

Thanks to Christopher Danielson for sparking this whole discussion.

Pruning Tree Diagrams

A few days ago we opened up with some group work surrounding the following problem.  I gave no guidance other than, “One representative will share your solution with the class.”

My favorite cereal has just announced that it’s going to start including prizes in the box.  There is one of three different prizes in every package.  My mom, being cheap and largely unwilling to purchase the kind of cereal that has prizes in it, has agreed to buy me exactly three boxes.  What is the probability that, at the end of opening the three boxes, I will have collected all three different prizes?

It’s a very JV, training-wheels version of the coupon collector’s problem, but it’s nice for a couple of reasons:

  1. The actual coupon collector’s problem is several years out of reach, but it’s a goody, so why not introduce the basics of it?
  2. There is a meaningful conversation to be had about independence.  (Does drawing a prize from Box 1 change the probabilities for Box 2?  Truly?  Appreciably?  Is it okay to assume, for simplicity, that it doesn’t?  How many prizes need to be out there in the world for us to feel comfortable treating this thing as if it were a drawing with replacement?  If everybody else is buying up cereal—and prizes—uniformly, does that bring things closer to true independence?  farther away?)
  3. There are enough intuitive wrong answers to require some deeper discussion: e.g., 1/3 (Since all the probabilities along the way are 1/3, shouldn’t the final probability of success also be 1/3?), 1/27 (There are three chances of 1/3 each, so I multiplied them together.), and 1/9 (There are three shots at three prizes, so nine outcomes, and I want the one where I get all different toys.)  The correct answer, by the by, is 6/27 or 2/9 (try it out).
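That 6/27 is small enough to confirm by exhaustive enumeration; a sketch in Python:

```python
from itertools import product

# all 3^3 = 27 equally likely prize sequences, one prize per box
outcomes = list(product(range(3), repeat=3))
# success: all three prizes distinct
successes = [o for o in outcomes if len(set(o)) == 3]
print(len(successes), len(outcomes))  # 6 27
```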

Many groups jumped right into working with the raw numbers (see wrong answers above).  A few tried, with varying levels of success, to list all the outcomes individually (interestingly, a lot of these groups correctly counted 27 possibilities, but then woefully miscounted the number of successes…hmmm).  A small but determined handful of groups used tree diagrams to help them reason about outcomes sequentially.

This business of using tree diagrams was pleasantly surprising.  We hadn’t yet introduced them in class, and I hadn’t made any suggestions whatsoever about how to tackle the problem, so I thought it was nice to see a spark of recollection.  That said, it’s not terribly surprising; presumably these kids have used them before.  But I did run across one student, Z, who interpreted his tree diagram in a novel way—to me at least.

Most students, when looking at a tree diagram, hunt for paths that meet the criteria for success.  Here’s a path where I get Prize 1, then Prize 2, then Prize 3.  Here’s another where I get Prize 1, then Prize 3, then Prize 2…  The algorithm goes something like, follow a path event-by-event and, if you ultimately arrive at the compound event of interest, tally up a success.  Repeat until you’re out of paths.  That is, most students see each path as a stand-alone entity to be checked, and then either counted or ignored.

What Z did was different in three important ways.  First of all, he found his solutions via subtraction rather than addition.  Second, he attacked the problem in a very visual—almost geometric—way.  And third, he didn’t treat each path separately; rather, Z searched for equivalence classes of paths within the overall tree.

Z’s (paraphrased) explanation goes as follows:

First I erased all of the straight paths, because they mean I get the same prize in every box.  Then I erased all of the paths that were almost straight, but had one segment that was crooked, which means I get two of the same prize.  And then I was left with the paths that were the most crooked, which means I get a different prize each time.

Looking at his diagram, I noticed that Z hadn’t even labeled the segments; he simply drew the three stages, with three possibilities at each node, and then deleted everything that wasn’t maximally crooked.  How awesome is that?  In fact, taking this tack made it really easy for him to answer more complicated followup questions.  Since he’d already considered the other cases, he could readily figure out the probability of getting three of the same prize (the 3 branches he pruned first), or getting only two different prizes (the next 18 trimmings).  He could even quickly recognize the probability of getting the same prize twice in a row, followed by a different one (the 6 branches he trimmed that went off in one direction, followed by a straight-crooked pattern).

Of course this method isn’t particularly efficient.  He had to cut away 21 paths to get down to 6.  For n prizes and boxes, you end up pruning n^n − n! branches.  Since n^n grows much, much faster than n!, Z’s algorithm becomes prohibitively tedious in a hurry.  If there are 5 prizes and 5 boxes, that’s already 3,005 branches that need to be lopped off.  So yes, it’s inefficient, but then again so are tree diagrams.  Without more sophisticated tools under his belt, that’s not too shabby.  What the algorithm lacks in computational efficiency, it makes up for in conceptual thoughtfulness.  I’ll take that tradeoff any day of the week.

Check, One, Two

Myself and (then) SSgt Clark doing some math in Fallujah, Iraq

I used to be an artillery officer in the Marine Corps, and it’s sometimes fun to bring mathematical details of my former life into the classroom.  Not only is there some useful and interesting math to be found there, but it also buys me the occasional attention of the Call of Duty crowd.  Here is a simple application of Bayes’ Theorem to artillery safety.