A Tale of Two Numbers

A few months ago, we had just finished talking about polynomials and were moving into matrices.  Because a lot of matrix concepts have analogs in the real numbers, we kicked things off with a review of some real number topics.  Specifically, I wanted to talk about solving linear equations using multiplicative inverses as a preview of determinants and using inverse matrices for solving linear systems.  For instance:

\begin{array}{ll}    2x=8 & AX=B \\    2^{-1}2x = 2^{-1}8 & A^{-1}AX = A^{-1}B \\    1x = \frac{1}{2}8 & IX = A^{-1}B \\    x=4 & X = A^{-1}B    \end{array}

As an aside, I threw out this series of equations in the hopes of (a) foreshadowing singular matrices, and (b) offering a justification for the lifelong prohibition against dividing by zero:

\begin{array}{l}    0x=1 \\    0^{-1}0x = 0^{-1}1 \\    1x = \frac{1}{0}1 \\    x = \frac{1}{0}    \end{array}

I thought this was just so beautiful.  Why can’t we divide by zero?  Because zero doesn’t have a multiplicative inverse.  There is no solution to 0x = 1, so 0⁻¹ must not exist!  Q.E.D.

As it turns out, Q.E.NOT.  One of my students said, “Why can’t we just invent the inverse of zero?  Like we did with i?”

Again, we had just finished our discussion of polynomials, during which we had conjured the square root of -1 seemingly out of the clear blue sky.  They wanted to do the same thing with 1/0.  What an insightful and beautiful idea!  Consider the following stories, from my students’ perspectives:

  1. When we’re trying to solve quadratic equations, we might happen to run into something like x² = -1.  Now of course there is no real number whose square is -1, so for convenience let’s just name this creature i (the square root of -1), and put it to good use immediately.
  2. When we’re trying to solve linear equations, we might happen to run into something like 0x = 1.  Now of course there is no real number that, when multiplied by 0, yields 1, so for convenience let’s just name this creature j (the multiplicative inverse of 0), and put it to good use immediately.

Why are we allowed to do the first thing, but not the second?  Why do we spend a whole chapter talking about the first thing, and an entire lifetime in contortions to avoid the second?  Both creatures were created, more or less on the spot, to patch up shortcomings in the real numbers.  What’s the difference?

And this is the tricky part: how do I explain it within the confines of a high school algebra class?  Well, I can tell you what I tried to do…

Let’s suppose that j is a legitimate mathematical entity in good standing with its peers, just like i.  Since we’ve defined j as the number that makes 0j = 1 true, it follows that 0 = 1/j.  Consider the following facts:

\begin{array}{l}    2 \cdot 0 = 0 \\    2\frac{1}{j} = \frac{1}{j} \\    \frac{2}{j} = \frac{1}{j} \\    2 = 1    \end{array}

In other words, I can pretty quickly show why j allows us to prove nonsensical results that lead to the dissolution of mathematics and perhaps the universe in general.  After all, if I’m allowed to prove that 2 = 1, then we can pretty much call the whole thing off.  What I can’t show, at least with my current pedagogical knowledge, is why i doesn’t lead to similar contradictions.

Therein lies the broad problem with proof.  It’s difficult.  If there are low-hanging fruit on the counterexample tree, then I can falsify bad ideas right before my students’ very eyes.  But if there are no counterexamples, then it becomes incredibly tough.  It’s easy to show a contradiction, much harder to show an absence of contradiction.  I can certainly take my kids through confirming examples of why i is helpful and useful.  But in my 50 min/day with them, there’s just no way I can organize a tour through the whole scope and beauty of complex numbers.  Let’s be serious, there’s no way that I can even individually appreciate their scope and beauty.

The complex numbers aren’t just a set, or a group.  They’re not even just a field.  They form an algebra (so do matrices, which brings a nice symmetry to this discussion), and algebras are strange and mysterious beings indeed.  I could spend the rest of my life learning why i leads to a rich and self-consistent system, so how am I supposed to give a satisfactory explanation?

Take it on faith, kids.  Good enough?

Update 3/20/12: My friend, Frank Romascavage, who is currently a graduate student in math at Bryn Mawr College (right down the road from my alma mater Villanova), pointed out the following on Facebook:

“We need to escape integral domains first so that we can have zero divisors!  Zero divisors give a quasi-invertibility condition (with respect to multiplication) on 0.  They aren’t really true inverses, but they are somewhat close!  In Z_{6} we have two zero divisors, 3 and 2, because 3 times 2 (as well as 2 times 3) in Z_{6} is 0.”

In many important ways, an integral domain is a generalization of the integers, which is why the two behave very much the same.  An integral domain is just a commutative ring (usually assumed to have a unity) with no zero divisors.  If a and b are two nonzero members of a ring and ab = 0, then a and b are said to be zero divisors.  In other words, to “escape integral domains” is to move into a ring where the Zero Product Property no longer holds.  This means that, in non-integral domains, we can almost, sort of, a little bit, divide by zero.  Zero doesn’t really have a true inverse, but it’s close.  Frank’s example is the numbers 2 and 3 in the ring of integers modulo 6, since 3 x 2 = 0 (mod 6).  In fact, the ring of integers modulo n fails to be an integral domain in general, unless n is prime.  CTL
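If you want to see Frank’s quasi-invertibility in action, here’s a tiny brute-force sketch (mine, not Frank’s) that hunts for zero divisors in Z_n:

```python
# Brute-force search for zero divisors in Z_n (the integers mod n).
# A nonzero a counts as a zero divisor if some nonzero b gives a*b = 0 (mod n).
def zero_divisors(n):
    return sorted({a for a in range(1, n) for b in range(1, n) if (a * b) % n == 0})

print(zero_divisors(6))   # [2, 3, 4] -- e.g., 3*2 = 0 and 4*3 = 0 (mod 6)
print(zero_divisors(7))   # []        -- 7 is prime, so Z_7 is an integral domain
print(zero_divisors(12))  # [2, 3, 4, 6, 8, 9, 10]
```

Only the prime moduli come back empty, which is the Zero Product Property doing its job.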

0!rganized Emptiness

On the back of the fundamental counting principle, my class has just established the fact that we can use n! to count the number of possible arrangements of n unique objects.  This is fantastic, but we don’t always want to arrange all of the n things available to us, which is okay.  We’ve also been introduced to the permutation function, which has the very nice property of counting ordered arrangements of r-sized subsets of our n objects.  Handy indeed.

Today we made an interesting observation: we now have not one, but two ways to count arrangements of, let’s say, 7 objects.

  1. We can fall back on our old friend, the factorial, and compute 7!
  2. We can use our new friend, the permutation function, and compute \bf{_7P_7}

Since both expressions count the same thing, they ought to be equal, but then we run into this interesting tidbit when we evaluate (2):

_7P_7 = \frac{7!}{(7-7)!} = \frac{7!}{0!},

which seems to imply that 0! = 1.  To say this is counterintuitive for my kids would be a severe understatement.  And in this moment of philosophical crisis, when the book might present itself as a palliative ally, students are instead met with this:

To prevent inconsistency?  How in the world are kids supposed to trust a mathematical resource that paints itself into a corner, only tacitly admits such, and then drops a bomb of a deus ex machina in order to save face?  I haven’t been so angry since the ending of Lord of the Flies.  Especially when this problem appears two pages later:

Okay, 8!.  So how many ways can I arrange my bookshelf with a zero-volume reference set?  One: I can arrange an empty shelf in exactly one way.  And, since we already know that n! counts the ways I can arrange n objects, it follows naturally that this 1 way of arranging 0 things must also be represented by 0!.

There are a lot of good proofs/justifications available for the willing Googler, but this one, to me, seems like the most natural and straightforward for a high school classroom.  At a bare minimum, it’s much, much better than, “Because I need it to be true for my own convenience.”
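And for the kids who trust a computer more than they trust me, here’s the same argument in a few lines of Python (a sanity check, not a proof):

```python
# How many ways can you arrange an empty shelf?  Ask itertools directly.
from itertools import permutations
from math import factorial

print(list(permutations([])))  # [()] -- exactly one arrangement: the empty one
print(factorial(0))            # 1  -- and math.factorial agrees
```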

Only a math textbook could take something so lovely and make it seem dirty.

Cereal Boxes Redux

In my last post, my students were wrestling with a question about cereal prizes.  Namely, if there is one of three (uniformly distributed) prizes in every box, what’s the probability that buying three boxes will result in my ending up with all three different prizes?  Not so great, turns out.  It’s only 2/9.  Of course this raises another natural question: How many stupid freaking boxes do I have to buy in order to get all three prizes?

There’s no answer, really.  No number of boxes will mathematically guarantee my success.  Just as I can theoretically flip a coin for as long as I’d like without ever getting tails, it’s within the realm of possibility that no number of purchases will garner me all three prizes.  But, just like the coin, students get the sense that it’s extremely unlikely that you’d buy lots and lots of boxes without getting at least one of each prize.  And they’re right.  So let’s tweak the question a little: How many boxes do I have to buy on average in order to get all three prizes?  That’s more doable, at least experimentally.

I have three sections of Advanced Algebra with 25 – 30 students apiece.  I gave them all dice to simulate purchases and turned my classroom—for about ten minutes at least—into a mathematical sweatshop churning out Monte Carlo shopping sprees.  The average numbers of purchases needed to acquire all prizes were 5.12, 5.00, and 5.42.  How good are those estimates?

Simulating cereal purchases with dice

Here’s my own simulation of 15,000 trials, generated in Python and plotted in R:
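The simulation itself is nothing fancy; a stripped-down version (the same idea, if not the exact script) looks something like this:

```python
# Coupon-collector simulation: buy cereal boxes until all three prizes show up,
# record how many boxes it took, repeat 15,000 times.
import random

def boxes_needed(n_prizes=3):
    collected, boxes = set(), 0
    while len(collected) < n_prizes:
        collected.add(random.randrange(n_prizes))  # each prize equally likely
        boxes += 1
    return boxes

trials = [boxes_needed() for _ in range(15000)]
print(sum(trials) / len(trials))  # hovers around 5.5
```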

I ended up with a mean of 5.498 purchases, which is impressively close to the theoretical expected value of 5.5.  So our little experiment wasn’t too bad, especially since I’m positive there was a fair amount of miscounting, and precisely one die that’s still MIA from excessively enthusiastic randomization.

And now here’s where I’m stuck.  I can show my kids the simulation results.  They have faith—even though we haven’t formally talked about it yet—in the Law of Large Numbers, and this will thoroughly convince them the answer is about 5.5.  I can even tell them that the theoretical expected value is exactly 5.5.  I can even have them articulate that it will take them precisely one box to get the first new toy, and three boxes, on average, to get the last new toy (since the probability of getting it is 1/3, they feel in their bones that they should have to buy an average of 3 boxes to get it).  But I feel like we’re still nowhere near justifying that the expected number of boxes for the second toy is 3/2.

For starters, a fair number of kids are still struggling with the idea that the expected value of a random variable doesn’t have to be a value that the variable can actually attain.  I’m also not sure how to get at this next bit.  The absolute certainty of getting a new prize in the first box is self-evident.  The idea that, with a probability of success of 1/3, it ought “normally” to take 3 tries to succeed is intuitive.  But those just aren’t enough data points to lead to the general conjecture (and truth) that, if the probability of success for a Bernoulli trial is p, then the expected number of trials to succeed is 1/p.  And that’s exactly the fact we need to prove the theoretical solution.  Really, that’s basically what we need to solve the problem completely for any number of prizes.  After that, it’s straightforward:

The probability of getting the first new prize is n/n.  The probability of getting the second new prize is (n-1)/n … all the way down until we get the last new prize with probability 1/n.  The expected numbers of boxes we need to get all those prizes are just the reciprocals of the probabilities, so we can add them all together…

If X is the number of boxes needed to get all n prizes, then

E(X) = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n(\frac{1}{n} + \frac{1}{n-1} + \cdots + \frac{1}{1}) = n \cdot H_n

where H_n is the nth harmonic number.  For n = 3, that’s 3(1 + 1/2 + 1/3) = 11/2 = 5.5, exactly the theoretical value from before.  Boom.
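For the record, the missing ingredient is just a one-line expected-value computation.  If T is the number of trials up to and including the first success (my notation), then

E(T) = \sum_{k=1}^{\infty} k\,p(1-p)^{k-1} = p \cdot \frac{1}{(1-(1-p))^2} = \frac{1}{p}

using the power series \sum_{k \ge 1} kx^{k-1} = \frac{1}{(1-x)^2} for |x| < 1.  With p = 2/3, out pops the 3/2 boxes for the second new toy.  Airtight, but not exactly an Advanced Algebra argument.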

Oh, but yeah, I’m stuck.

Pruning Tree Diagrams

A few days ago we opened up with some group work surrounding the following problem.  I gave no guidance other than, “One representative will share your solution with the class.”

My favorite cereal has just announced that it’s going to start including prizes in the box.  There is one of three different prizes in every package.  My mom, being cheap and largely unwilling to purchase the kind of cereal that has prizes in it, has agreed to buy me exactly three boxes.  What is the probability that, at the end of opening the three boxes, I will have collected all three different prizes?

It’s a very JV, training-wheels version of the coupon collector’s problem, but it’s nice for a couple of reasons:

  1. The actual coupon collector’s problem is several years out of reach, but it’s a goody, so why not introduce the basics of it?
  2. There is a meaningful conversation to be had about independence.  (Does drawing a prize from Box 1 change the probabilities for Box 2?  Truly?  Appreciably?  Is it okay to assume, for simplicity, that it doesn’t?  How many prizes need to be out there in the world for us to feel comfortable treating this thing as if it were a drawing with replacement?  If everybody else is buying up cereal—and prizes—uniformly, does that bring things closer to true independence?  farther away?)
  3. There are enough intuitive wrong answers to require some deeper discussion: e.g., 1/3 (Since all the probabilities along the way are 1/3, shouldn’t the final probability of success also be 1/3?), 1/27 (There are three chances of 1/3 each, so I multiplied them together.), and 1/9 (There are three shots at three prizes, so nine outcomes, and I want the one where I get all different toys.)  The correct answer, by the by, is 6/27 or 2/9 (try it out, or see the enumeration sketch just after this list).
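That enumeration sketch, for the curious (a brute-force check, not something we did in class):

```python
# Brute-force check of the 2/9 answer: list all 3^3 = 27 equally likely prize
# sequences and count the ones that contain all three different prizes.
from itertools import product
from fractions import Fraction

outcomes = list(product(range(3), repeat=3))            # 27 sequences
wins = [seq for seq in outcomes if len(set(seq)) == 3]  # all three prizes collected

print(len(wins), "out of", len(outcomes))  # 6 out of 27
print(Fraction(len(wins), len(outcomes)))  # 2/9
```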

Many groups jumped right into working with the raw numbers (see wrong answers above).  A few tried, with varying levels of success, to list all the outcomes individually (interestingly, a lot of these groups correctly counted 27 possibilities, but then woefully miscounted the number of successes…hmmm).  A small but determined handful of groups used tree diagrams to help them reason about outcomes sequentially.

This business of using tree diagrams was pleasantly surprising.  We hadn’t yet introduced them in class, and I hadn’t made any suggestions whatsoever about how to tackle the problem, so I thought it was nice to see a spark of recollection.  That said, it’s not terribly surprising; presumably these kids have used them before.  But I did run across one student, Z, who interpreted his tree diagram in a novel way—to me at least.

Most students, when looking at a tree diagram, hunt for paths that meet the criteria for success.  Here’s a path where I get Prize 1, then Prize 2, then Prize 3.  Here’s another where I get Prize 1, then Prize 3, then Prize 2…  The algorithm goes something like, follow a path event-by-event and, if you ultimately arrive at the compound event of interest, tally up a success.  Repeat until you’re out of paths.  That is, most students see each path as a stand-alone entity to be checked, and then either counted or ignored.

What Z did was different in three important ways.  First of all, he found his solutions via subtraction rather than addition.  Second, he attacked the problem in a very visual—almost geometric—way.  And third, he didn’t treat each path separately; rather, Z searched for equivalence classes of paths within the overall tree.

Z’s (paraphrased) explanation goes as follows:

First I erased all of the straight paths, because they mean I get the same prize in every box.  Then I erased all of the paths that were almost straight, but had one segment that was crooked, which means I get two of the same prize.  And then I was left with the paths that were the most crooked, which means I get a different prize each time.

Looking at his diagram, I noticed that Z hadn’t even labeled the segments; he simply drew the three stages, with three possibilities at each node, and then deleted everything that wasn’t maximally crooked.  How awesome is that?  In fact, taking this tack made it really easy for him to answer more complicated followup questions.  Since he’d already considered the other cases, he could readily figure out the probability of getting three of the same prize (the 3 branches he pruned first), or getting only two different prizes (the next 18 trimmings).  He could even quickly recognize the probability of getting the same prize twice in a row, followed by a different one (the 6 branches he trimmed that went off in one direction, followed by a straight-crooked pattern).

Of course this method isn’t particularly efficient.  He had to cut away 21 paths to get down to 6.  For n prizes and boxes, you end up pruning nⁿ − n! branches.  Since nⁿ grows much, much faster than n!, Z’s algorithm becomes prohibitively tedious in a hurry.  If there are 5 prizes and 5 boxes, that’s already 3005 branches that need to be lopped off.  So yes, it’s inefficient, but then again so are tree diagrams.  Without more sophisticated tools under his belt, that’s not too shabby.  What the algorithm lacks in computational efficiency, it makes up for in conceptual thoughtfulness.  I’ll take that tradeoff any day of the week.
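Just how quickly does the pruning get out of hand?  A two-second check:

```python
# Branches Z would have to prune for n prizes and n boxes: n^n - n!
from math import factorial

for n in range(3, 8):
    print(n, n**n - factorial(n))
# 3 21
# 4 232
# 5 3005
# 6 45936
# 7 818503
```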

Conditional Response

Aside from being entertaining, these DIRECTV commercials offer at least two important lessons about logic.

For starters, let’s name the propositions listed in the video:

  • q: your cable is on the fritz
  • r: you get frustrated
  • s: your daughter imitates
  • t: your daughter gets thrown out of school
  • u: your daughter meets undesirables
  • v: your daughter ties the knot with undesirables
  • w:  you get a grandson with a dog collar

So the ad takes us through the following sequence of conditional statements:

\begin{array}{lcl} q & \longrightarrow & r \\ r & \longrightarrow & s \\ s & \longrightarrow & t \\ t & \longrightarrow & u \\ u & \longrightarrow & v \\ v & \longrightarrow & w \end{array}

Let’s be generous and accept that each statement, individually, is true.  Then we’re led sequentially along a nice string of propositions, beginning at q and ending with w.  Actually, there’s one more tacit proposition, p: you have cable.  So the commercial’s (implicit + explicit) logic looks something like this:

p \longrightarrow q \longrightarrow r \longrightarrow s \longrightarrow t \longrightarrow u \longrightarrow v \longrightarrow w

And therein lies our first logic lesson: conditional statements respect transitivity.  We can follow an unbroken path of propositions all the way from p to w, which means we can replace that whole string of implications with the statement, “If you have cable, then you’ll get a grandson with a dog collar.”  Symbolically:

p \longrightarrow w

We’ve accepted all the statements along the way, so we accept this one as well, which is both funny and logically sound.  DIRECTV has successfully made fun of the cable companies, and we’ve had a chuckle.  And if the commercial were to end there, everything would be hunky dory.  But it doesn’t end there.  It ends on the line, “Don’t have a grandson with a dog collar; get rid of cable…”  Which is to say, “If you don’t have cable, you won’t have a grandson with a dog collar.”  Or…

\neg p \longrightarrow \neg w

But that’s incorrect!  And that’s our second lesson: the technical name for this fallacy is denying the antecedent, or the inverse error.  To give you a more intuitive example, consider the propositions:

  • p: you are a dog
  • q: you are a mammal

p \longrightarrow q: “If you are a dog, then you are a mammal.”  True.

\neg p \longrightarrow \neg q: “If you are not a dog, then you are not a mammal.”  Obviously false.

It might very well be true that having cable leads to a grandson with a dog collar, but that certainly doesn’t mean getting rid of cable is enough to avoid one.
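If the dog/mammal example doesn’t convince you, a brute-force truth-table check settles both lessons (a quick sketch in Python, nothing the commercial asks of its viewers):

```python
# Truth-table check: transitivity of the conditional is valid in every row,
# but "denying the antecedent" has a counterexample.
from itertools import product

def implies(a, b):
    return (not a) or b

# Lesson 1: whenever p -> q and q -> r both hold, p -> r holds too.
print(all(implies(p, r)
          for p, q, r in product([True, False], repeat=3)
          if implies(p, q) and implies(q, r)))  # True

# Lesson 2: a row where p -> q holds but (not p) -> (not q) fails.
print([(p, q) for p, q in product([True, False], repeat=2)
       if implies(p, q) and not implies(not p, not q)])
# [(False, True)] -- not a dog, but still a mammal
```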

Two Roads [Con]verged

Last week we started working with infinite geometric series, a topic I personally love.  First of all, it’s one of the few places in a high school curriculum where deep, genuine philosophical questions bubble all the way up to the surface of a mathematical discussion.  Second, it marks the place in my own academic life where I experienced a religious conversion to Orthodox Mathematicism:

In the beginning there was a single term.  And to that term the Teacher did add another of smaller magnitude.  Then a third term, smaller still, appeared upon the right hand side of the chalkboard, and it was revealed to me that the terms did decrease exponentially.  My heart saw that this shrinking and adding proceedeth forever and ever, terms without end, Amen.  And lo, when I beheld the sum, it was finite, and I knew that it was Good.

If my introduction to convergent series was a baptism, then using one to demonstrate that .999… = 1 was my confirmation.  Now, having done the same thing with my students, I think it might be even more interesting from this side of the desk.  In particular, two of their questions/comments highlight two very different understandings of infinity and the real numbers.
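The demonstration, for reference, is the standard geometric-series argument: treat .999… as a series with first term 9/10 and common ratio 1/10, and let the sum formula do the heavy lifting.

0.999\ldots = \frac{9}{10} + \frac{9}{100} + \frac{9}{1000} + \cdots = \sum_{k=1}^{\infty} \frac{9}{10^k} = \frac{9/10}{1 - 1/10} = 1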

First, the ingredients of a metaphor.  If you’ve ever been a runner, this is easy.  If not, I’m going to need you to go on a quick jog before you read any farther so you can appreciate the rest of this carefully crafted rhetorical device.  I’ll wait…

When you drive the same stretch of road over and over again, you tend to experience it dynamically.  You pass a landmark, anticipate a curve, accelerate over a little rise.  The road changes in front of your eyes.  You see the road as a process.  But when you run along the same route, it looks completely different.  There is just this monolithic expanse of concrete laid out over the landscape.  You can creep around and explore its different features, but you experience the road essentially as a static object.  In other words, you experience the road as it actually is.  Keep this in mind as you read the following two questions from my actual students.

D: “But Mr. Lusto, if .999… is exactly 1, then .999… plus .999… should equal exactly 2, but it doesn’t.  It’s 1.999…8.”

What a freaking fantastic argument!  Here’s a student who has accepted my proof, interpreted it, thought about it critically, and deduced a logical contradiction.  My heart swelled a little bit.  Unfortunately, the flaw in his reasoning highlights a fundamental misconception.  D is viewing .999… like a driver.  He sees it as a dynamic process, repeatedly appending a 9 to an ever-expanding sequence of 9s.  He might even accept that this can theoretically go on forever, but his point-of-view still gets him into some trouble.  When D mentally sums .999… and .999…, he’s suggesting that there are two “last 9s” that, when added, produce a trailing 8.  But of course there are no “last 9s.”  He’s implicitly terminated the process prematurely (which is to say, at all).  Hence his objection, though thoroughly beautiful, is ultimately illusory.

J: “But Mr. Lusto, if .999… equals 1, then doesn’t 1.999… equal 2?  Then can’t we write every number in two different ways?”

This student views .999… like a runner.  The reason that .999… and 1 can be meaningfully thought of as equal is because they represent the same static value.  They’re just two different names for the same object.  Here’s a student who sees .999… as it actually is.  And now, because of that, his concern is genuine.  The fact that many real numbers have two decimal representations (one with infinite trailing 0s, one with infinite trailing 9s) is a true mathematical/philosophical problem.  In fact, it’s an important result: those sorts of numbers turn out to be dense in the reals (in the topological sense).  J may never care about, or even get enough math under his belt to understand, that statement,  but his view of the nature of infinity is already more nuanced than D’s.

Something to think about next time you’re driving.  Better yet, next time you’re running.

Pretty Little Lies

I find myself lying to my students.  A lot.  I suppose it doesn’t much bother me on a moral level.  For one thing, my conscience is perhaps less muscular than it ought to be.  For another, I’m generally pretty open with my kids.  They know, for instance, that I’m divorced.  That I’m quitting smoking for my 30th birthday.  That for several years I was professionally violent.  I go out of my way to let them know that, within reason, I won’t shy away from their curiosity.  Still, I lie.


Money for Nothing

In the conversation following a recent Mathalicious blog post, one commenter said:

“If we have evidence that current teachers are ineffective (and don’t international math test scores provide this evidence?), then why not let the non-educators take a shot?”

In reading through the comments, this particular point focused my attention because I suspect its primary sentiment runs deep within the larger discussion about the state of U.S. math education.  I.e., our international math ranking is poor; teachers must not be doing such a good job; so why not let somebody else take the wheel?

On the surface, this doesn’t seem a particularly unreasonable argument.  It does, however, make some tacit assumptions that are questionable, and even downright strange.  Namely:

Teacher training is a negative-value-added process.

Any meaningful discourse about education has to address teacher preparation programs.  If teachers are truly failing en masse, then clearly something important and fundamental is lacking in their initial training.  Now of course these programs aren’t perfect.  They might not even be world-class.  But to suggest that non-educators would be better classroom teachers is to imply that I’m somehow a worse math teacher after a year-long stint in grad school and months of practicum work and student teaching than I was the day I left the Marine Corps in search of a new career.  Which is ridiculous.  How can exposure to current research in pedagogical content knowledge, educational psychology, and legal/policy decisions make me worse?  How can active observation time in great, good, bad, and awful classrooms make me worse?  Hands-on experience with real students?  Regardless of the level of helpfulness of any of those things, I know that it’s strictly non-negative.  And please let me know if you can prove otherwise, because I’d like to sue for my $25,000 back.

The countries with better outcomes are being taught by non-educators.

If international math test scores suffice as proof of U.S. teachers’ ineffectiveness, then those countries with higher scores, one supposes, must provide some sort of evidence about effective practices.  Is Finland spanking us because it got its teachers out of the picture somehow?  Not quite.  Over there “teacher” routinely rates as the most admired profession among high school graduates; after a rigorous screening process, the most qualified teaching candidates are educated at government expense; all teachers are required to hold, at a minimum, a master’s degree (source for all the above).  Finland isn’t great in spite of its teachers; it just does a much better job of screening and training them.  Which all sounds easy and practical, but in order to replicate that in the U.S., our society’s view of the teaching profession would have to change dramatically.  You must believe your tax dollars are well spent paying pre-service teachers’ tuition.  You must believe teaching is an important and prestigious position.  You must believe that teachers shouldn’t have to take a vow of poverty to educate our children.  In short, the answer seems not to be getting non-educators into the game, but forcing über-educators into the game.

People who devote all their professional time, energy, and resources to teaching can be called “non-educators.”

Like any other profession, education is vulnerable to a certain level of entrenchment.  Change can be slow and difficult, and new blood can’t hurt.  But if someone leaves a career in, say, mechanical engineering to spend all her working (and a whole lot of her non-working) life thinking, studying, worrying about, and practicing teaching math to high school students, then she has officially become an educator.  Because that’s what educators do.  But again, if the profession hopes to attract any of the best and brightest from other fields, then there is going to have to be a societal sea change that makes teaching a viable option for people who have gone to a whole lot of intellectual and financial trouble to become the best and brightest in the first place.  Those of us who have made those sacrifices in spite of the trouble/prestige ratio welcome you.

War Games

Back in my previous existence as an artillery officer, I participated in the war for a little while.  Our main job—my Marines and I—was to provide counter-fire support for units in and around the city of Fallujah, Iraq.  Basically, whenever our guys started taking rocket and/or mortar fire, radar would track the source of those rounds and send us their point of origin as a target.  Then we would shoot at it.  Simple.  Kind of.

By the time I got to Fallujah, all the dumb bad guys had been selected out of the gene pool; the ones who were left knew that what they were doing was extremely risky, and they took steps to minimize that risk.  They tried their best to make every opportunity count, and our goal was to make it just as costly as possible for them to shoot at us.  It was a deadly serious game-theoretical problem for both sides.  A game measured in seconds.


The (square) Root of Love

All right, fellas, huddle up.  We’re going to talk about the best way to find true love.  I mean, you can’t just go running around all willy-nilly hoping to bump into somebody great.  The world is a big place.  You need a strategy, man.  A dating plan of attack.

First, some ground rules, some general observations about romantic life, and a few restrictions in the interest of mathematical well-behavedness:

  1. You are only going to meet a finite number of datable women over the course of your lifetime.  It will be a depressingly low number.
  2. You are going to be an upstanding citizen and date only one woman at a time.
  3. You will date a woman for some finite period of time, at which point you’ll make a decision either to pull the trigger and propose, or cut her loose.  Or, more likely, she’ll dump you first.
  4. Once you propose, no takesies-backsies.  And once you cut a woman loose, you can’t ever reconsider her for marriage; she will hate you forever.
  5. You are able to perfectly rank the women you have dated according to a strict, unambiguous order of preference.  Tie goes to the blonde.
  6. You will encounter these women in random order.  That is, you are completely ignorant of where the next potential wife will stand in the overall rankings.
  7. You will date a certain number of women without really considering any of them for a proposal.  In other words, you’ll take some time getting a feel for who’s out there.  Setting the bar.

In the world of mathematics, this is what’s known as an optimal stopping problem.  You’re going to date, and date, and date…, and stop.  Hopefully on the woman of your dreams (hence the optimal part).  In fact, this is one of those problems that’s so famous it goes by several (mildly sexist) names: the secretary problem, the sultan’s dowry problem, the fussy suitor problem.  Because it’s Valentine’s Day, we’ll call it the marriage problem.
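If you want to play along at home before we get to the punch line, here’s a little Monte Carlo sketch of the rule-7 strategy (entirely mine, with a made-up lifetime supply of 20 candidates, and scoring a “win” only when you land the number-one woman, which is just one way to keep score): let the first r candidates go by to set the bar, then propose to the first one who beats everyone you’ve seen so far.

```python
# Monte Carlo sketch of the rule-7 strategy: skip the first r candidates to set
# the bar, then propose to the first one who beats everyone seen so far.
# "Success" here means landing the single best candidate (a strict way to score).
import random

def success_rate(n, r, trials=20000):
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))            # 0 is the best, n-1 the worst
        random.shuffle(ranks)             # rule 6: they arrive in random order
        bar = min(ranks[:r]) if r else n  # best candidate seen while setting the bar
        # First candidate after the cutoff who beats the bar; if nobody does,
        # rule 4 leaves you stuck with whoever comes last.
        chosen = next((x for x in ranks[r:] if x < bar), ranks[-1])
        wins += (chosen == 0)
    return wins / trials

n = 20  # a depressingly low lifetime supply of datable women (rule 1)
for r in [0, 2, 4, 6, 8, 10, 14]:
    print(r, round(success_rate(n, r), 3))
```

No spoilers about where the sweet spot lands; try a few cutoffs and see.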
