Someone and Anyone

“I walk slowly, like one who comes from so far away he doesn’t expect to arrive.”

Archive for the ‘Mathematics’ Category

Harry Vs Draco

with one comment

Harry Potter and Draco Malfoy are the frontrunners for the “Best Student of the Year” award at Hogwarts. Professor Dumbledore suggests that they play a certain game (invented by a great wizard) to determine the winner. Dumbledore describes the game to Harry and Draco as follows.

Consider a board with two distinct non-zero integers m and n “thrown” onto it. At the beginning of the game, Dumbledore, as the impartial referee, will provide m and n. Then, there will be a toss to decide who gets to make the first move. The winner of the toss can either make the first move himself or invite his opponent to make the first move.

When his turn comes, each player has to introduce onto the board a new positive integer that is the difference of two integers already on the board.

For example:

First move: Player 1 would introduce k = |m – n|

Second move: Player 2 would introduce either x = |m – k| or y = |n – k|

Third move: (Assuming that Player 2 introduced x = |m – k| in the second move) Player 1 introduces either u = |m – x| or v = |n – x| or w = |k – x| or y = |n – k|

Fourth move: Player 2 introduces…

and so on…

The game goes on like this and concludes only when one of the players finds it impossible to introduce any more new numbers to the board; the other player thus wins.

Well, now, the stage is set. Dumbledore has provided m and n. Harry has won the toss!

Can you help Harry ensure that he wins the game? Who should make the first move and why?
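If you would like to experiment before committing to an answer, here is a minimal brute-force sketch in Python (my own construction, not part of Dumbledore’s game). It relies on the observation that each move adds exactly one number to the board, and that the final, saturated board does not depend on the order of play, so the total number of moves is fixed by m and n alone:

    def total_moves(m, n):
        """Count the moves in a complete game starting from positive m and n.
        The final board is the closure of {m, n} under positive differences,
        and each move adds exactly one new number to it."""
        board = {m, n}
        while True:
            new = {abs(a - b) for a in board for b in board} - board - {0}
            if not new:
                break
            board |= new
        return len(board) - 2  # every number beyond the initial two took one move

    for m, n in [(4, 10), (3, 7), (6, 15)]:
        print(m, n, total_moves(m, n))

Since the player who cannot move loses, the parity of this count settles who should make the first move.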

Written by Anyone

May 29, 2008 at 10:12 pm

Posted in Mathematics


Analysis of Strategies for the n-Door Monty Hall Problem

with one comment

In an earlier post (see comments also), I had put forth a question about the winning strategy for the famous n-Door Monty Hall problem. Let me pose it again here, this time in a general way.

Of the following strategies, which has the highest chance of winning in the n-door problem (n > 3)?

  1. Stick with your initial choice till the very end without switching
  2. Stick with your initial choice till the end and switch only at the last stage
  3. Switch your choice at the very beginning and stick with this new door till the very end without switching again
  4. Switch at every possible stage

For those who are already familiar with the 3-door problem, strategy 4 might intuitively seem to be the best strategy. However, that is not the case!

Consider n doors as follows: C, G₁, G₂, …, Gₙ₋₁

Assertion 1: The door we choose at the first stage is C (the “car door”) with probability 1/n, while it is a Gₓ (one of the “goat doors”) with probability (n – 1)/n.

Strategy 1: Stick with your initial choice till the very end without switching

This strategy is equivalent to simply picking a door uniformly at random. Hence, it is trivially true that the probability of picking C (i.e. probability of winning the car) is 1/n.

Strategy 2: Stick with your initial choice till the end and switch only at the last stage

This strategy is interesting. At every stage except the last one, Monty Hall will go ahead and show us one of the goat doors. That is, out of a total of (n – 1) goat doors, we are shown (n – 2). Hence, the unopened door at the last stage and the door we chose at the first stage are the only two mystery doors to us! One of these contains the car, while the other contains a goat. As we can see, by waiting till the last stage, we maximized our own information content.

If the door we chose at the first stage is C, then according to this strategy, we definitely lose — because the other mystery door (to which we have to switch) then contains a goat.

However, if we chose a Gₓ at the first stage, then we always win, because the other mystery door (to which we have to switch) then contains the car!

Therefore, the probability of winning with this strategy is (probability of choosing a Gₓ at the first stage) * 1. From Assertion 1, this equals (n – 1)/n.

Strategy 3: Switch your choice at the very beginning and stick with this new door till the very end without switching again

According to this strategy, we choose a door at the first stage and switch our choice at the second stage. Thereafter, we stick to the door that we chose at the second stage, till the end of the game.

If the door we chose at the first stage is C, then we definitely lose — because we would switch to a goat door at the second stage and stick with it thereafter.

So, we can win only if we chose a Gₓ at the first stage. However, merely choosing a Gₓ at the first stage does not mean we always win. After the first stage, Monty shows us one of the goat doors. This still leaves (n – 2) doors (excluding our initial choice) as a mystery to us. We switch to one of these (n – 2) doors uniformly at random, so the probability of switching to C is 1/(n – 2).

Therefore, the probability of winning with this strategy is (probability of choosing a Gₓ at the first stage) * 1/(n – 2). It follows (from Assertion 1) that the probability of winning with this strategy is (n – 1)/(n (n – 2)), or (n – 1)/(n² – 2n).

Strategy 4: Switch at every possible stage

In this strategy, we keep hopping from one door to another at every stage. The only way to win is to switch to C at the last stage, i.e. the (n – 1)th stage, no matter which doors we have visited until then. And that can happen if and only if we are at a Gₓ at the (n – 2)th stage.

It is obvious that by the time we reach the last stage, Monty would have revealed all but one goat door. So, we’ll be left with two doors — one goat door (i.e. some Gₓ) and C. Hence, it is clear that we will win if we are at the solitary goat door at the end of the (n – 2)th stage.

Let us first compute the probability (P) of making it to some Gₓ at the last-but-one stage, given that we start with C at the first stage. In other words, having started with C at the first stage, what is the probability that we will have reached some Gₓ after exactly (n – 3) transitions (or switches)?

From C, we will always make a transition to some goat door other than the one that Monty reveals at the end of the first stage. So, we can make a transition to one of the remaining (n – 2) goat doors with probability 1. That leaves us with exactly (n – 4) transitions after which to reach some goat door. From here on, Monty is going to unveil one goat door at each stage except the last. So, the number of possible transitions at each stage, no matter what door we are at, keeps decreasing by 1. Also, note that there is a transition from every goat door to C.

Therefore, the probability of transition to a goat door at the third stage is (n – 4)/(n – 3); at the fourth stage, it is (n – 5)/(n – 4); … ; at the (n – 3)th stage, it is 2/3; and at the (n – 2)th stage, it is 1/2.

Therefore, with C as the initial choice, the probability that we are at some Gₓ at the end of the (n – 2)th stage is given by

P = 1 * (n – 4)/(n – 3) * (n – 5)/(n – 4) * … * 2/3 * 1/2.

This implies P = 1/(n – 3).

However, this holds only if we make no transition to C during these (n – 2) stages. What if we do switch to C at some stage m? Then, at the next stage, we switch back to one of the remaining (n – m – 1) goat doors with probability 1. Therefore, switching to C at some intermediate stage does not affect our calculation of P.

Now, let us compute the probability (Q) of making it to some Gₓ at the last-but-one stage, given that we start with a Gₓ at the first stage. In other words, having started with some Gₓ at the first stage, what is the probability that we will have reached another Gₓ after exactly (n – 3) transitions?

From a Gₓ, we make a transition either to C or to some goat door other than our own and the one that Monty reveals at the end of the first stage. So, at the second stage, the probability of moving to one of the remaining (n – 3) goat doors is (n – 3)/(n – 2); at the third stage, the probability of moving to one of the remaining (n – 4) goat doors is (n – 4)/(n – 3); … ; at the (n – 3)th stage, the probability of moving to one of the remaining 2 goat doors is 2/3; and at the (n – 2)th stage, the probability of moving to the one remaining goat door is 1/2.

Therefore, with some Gₓ as the initial choice, the probability that we are at another Gₓ at the end of the (n – 2)th stage is given by

Q = (n – 3)/(n – 2) * (n – 4)/(n – 3) * … * 2/3 * 1/2

This implies Q = 1/(n – 2).

Again, at any stage, if we make a transition to C, then we make a transition to a Gₓ at the next stage with probability 1. Therefore, switching to C at some intermediate stage does not affect our calculation of Q.

Thus, the probability of winning with the strategy of always switching is (probability of choosing C at the first stage) * P + (probability of choosing a Gₓ at the first stage) * Q. From Assertion 1, it follows that the probability of winning with this strategy is 1/(n * (n – 3)) + (n – 1)/(n * (n – 2)). This equals (n² – 3n + 1)/(n³ – 5n² + 6n).

In the order of probability of winning, the above four strategies compare as follows:

Strategy 2 > Strategy 4 > Strategy 3 > Strategy 1.
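These formulas are easy to sanity-check numerically. Below is a minimal Monte Carlo sketch in Python (the function and the strategy encodings are my own), under the usual assumptions that Monty opens a uniformly random goat door other than the player’s current pick, and that a switching player picks uniformly among the other closed doors:

    import random

    def play(n, should_switch, trials=200_000):
        """Estimate the win probability of one strategy in the n-door game.
        should_switch(stage) returns True if the player switches at that
        stage; the switch/stick stages run from 2 to n - 1."""
        wins = 0
        for _ in range(trials):
            car = 0                              # door 0 hides the car (WLOG)
            choice = random.randrange(n)         # uniform initial pick
            closed = set(range(n))
            for stage in range(2, n):
                # Monty opens a goat door that is neither the car nor our pick
                goats = [d for d in closed if d != car and d != choice]
                closed.remove(random.choice(goats))
                if should_switch(stage):
                    choice = random.choice([d for d in closed if d != choice])
            wins += (choice == car)
        return wins / trials

    n = 10
    print("Strategy 1 (never switch):      ", play(n, lambda s: False))
    print("Strategy 2 (switch at last):    ", play(n, lambda s: s == n - 1))
    print("Strategy 3 (switch once, early):", play(n, lambda s: s == 2))
    print("Strategy 4 (always switch):     ", play(n, lambda s: True))

For n = 10, for instance, the four estimates should hover around 0.1, 0.9, 0.1125 and 71/560 ≈ 0.1268, matching the formulas above.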

There are many other strategies for playing this game, of course. For example, you could switch and stick alternately (or in any other pattern, for that matter). Intuitively, though, it seems all but impossible to beat Strategy 2 — as n increases, its probability of winning approaches 1. Perhaps there exists a formal proof of the supremacy of Strategy 2, but I have not been able to find one on the Web. If you come across such a proof, please leave a comment.

PS: If you find any discrepancies in this article, please let me know. I am not too sure about the correctness of the proof for strategy 4.

Written by Anyone

May 21, 2008 at 1:42 am

The Birthday Problem

with 7 comments

The well-known birthday problem:

How many people do you need to assemble before the probability that some two of them have the same birthday is greater than 0.5?

Assumptions: (1) A year has 365 days (no leap year), and (2) By “same birthday”, we only mean the same day and month (not necessarily the same year).

My first intuition was 183 — the smallest number that is more than half the number of days in a year. But 183 is not even the answer to the different question of how many people you need before the probability that someone has the same birthday as you — a specific person — exceeds 0.5. That takes 253 people, since 1 – (364/365)ᵏ first exceeds 0.5 at k = 253.

Look closely. The question is not about someone matching a particular person’s birthday. It is about the birthdays of some two people matching. Enough said. What is the right answer then, and why? How do we generalize this to matching the birthdays of some N people?
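If you would rather experiment than peek at a closed form (fair warning: running this reveals the answer), here is a minimal Python sketch of the standard complement computation for the two-person case:

    def p_shared(k, days=365):
        """Probability that at least two of k people share a birthday."""
        p_all_distinct = 1.0
        for i in range(k):
            p_all_distinct *= (days - i) / days
        return 1.0 - p_all_distinct

    k = 1
    while p_shared(k) <= 0.5:
        k += 1
    print(k, round(p_shared(k), 4))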

If you are guessing or have worked out a solution now, please leave comments.

The point of asking such questions (even though they might seem trivial to some) is to further our own understanding, and more importantly, to enjoy the process of understanding! So, if you already know the answer, please defer commenting.

Written by Anyone

May 19, 2008 at 12:06 pm

The Multi-Stage Monty Hall Problem

with 7 comments

Some time back, Sids had a post on the Monty Hall problem/game, which is as follows (Source: Wikipedia).

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

In this case, there are three doors. So, the game is played in two stages — Stage 1: Choose a door; Stage 2: Switch or stick with your choice. Here, the probability that you win the car is 2/3 if you switch your choice. So, yes. It is to your advantage to switch. See this page for details.

Let us now generalize this game to N doors. That means, the game is played in (N – 1) stages.

Stage 1: Choose a door

For stages X = 2 through (N – 1):

Monty Hall shows you one of the “goat doors”. Then, you either switch to one of the remaining (N – X) doors or stick with your choice

Questions:

  1. What strategy has the highest probability of winning in this scenario?
  2. Why is this the best strategy?
  3. What is the probability of winning with this strategy?

For simplicity, let us consider N = 4. You get to choose a door at the first stage, and for the next two stages, you can either switch or stick. So the various permutations are:

  1. Choose, Stick, Stick
  2. Choose, Stick, Switch
  3. Choose, Switch, Stick
  4. Choose, Switch, Switch

Which of these four strategies is most likely to win? And why? Once you have figured this out, try the generalized problem with N doors.
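If you would like to check your answer for N = 4 by brute force, here is a sketch of mine that enumerates every random branch exactly, using rational arithmetic, under the assumptions that Monty opens a uniformly random goat door and that a switching player picks uniformly among the other closed doors:

    from fractions import Fraction
    from itertools import product

    def win_prob(n, plan):
        """Exact win probability in the n-door game. Door 0 hides the car and
        the initial pick is uniform; plan[i] says whether to switch at stage
        i + 2 (the switch/stick stages are 2 through n - 1)."""
        def expect(closed, choice, stage):
            if stage == n:                        # all stages played
                return Fraction(int(choice == 0))
            goats = [d for d in closed if d not in (0, choice)]
            total = Fraction(0)
            for opened in goats:                  # Monty opens uniformly at random
                rest = closed - {opened}
                if plan[stage - 2]:               # switch uniformly among the rest
                    options = [d for d in rest if d != choice]
                    branch = sum(expect(rest, d, stage + 1) for d in options)
                    branch /= len(options)
                else:                             # stick
                    branch = expect(rest, choice, stage + 1)
                total += branch / len(goats)
            return total
        doors = frozenset(range(n))
        return sum(expect(doors, c, 2) for c in doors) / n

    for plan in product((False, True), repeat=2):
        print(plan, win_prob(4, plan))            # e.g. (False, True) -> 3/4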

PS: If you already know the answer (or have found a link to it), kindly procrastinate on writing a comment about it. 😀

Written by Anyone

May 17, 2008 at 9:35 pm

On Domain Stretching, Conditional Convergence and Absolute Convergence

with one comment

Sometimes, an infinite series may not be as expressive as (or carry as much information as) the function it represents. In this post, we’ll mainly discuss this concept (and also look at conditional and absolute convergence briefly).

Consider the following infinite series: f(x) = 1 + x + x² + x³ + x⁴ + x⁵ + …. Does this series ever converge? Try substituting 1/2 for x and see what happens. It converges, as we saw here. So, we write this as f(1/2) ~ 2. The twiddle or tilde sign here indicates that f(x) asymptotically tends to 2 at x = 1/2. Similarly, it is possible to prove the following: f(-1/2) ~ 2/3; f(1/3) ~ 3/2; f(-1/3) ~ 3/4; and so on. It is trivially true that f(0) = 1.

Let us now substitute 1 for x. We get f(1) = 1 + 1 + 1 + 1 + …. The series diverges at x = 1. It is obvious that the series diverges for x = 2, 3, 4, 5, … too. Observe the behaviour of f(x) when x = -1. We get f(-1) = 1 – 1 + 1 – 1 + 1 – 1 + …. If you take an even number of terms, then f(-1) = 0, and if you take an odd number of terms, then f(-1) = 1. We see that the partial sum of this series is definitely not becoming infinitely large. However, it is not converging either. This is considered a form of divergence. We can verify that f(-2), f(-3), f(-4), … also diverge. Loosely speaking, they seem to oscillate and go off to positive infinity and negative infinity at once.

We see that f(x) seems to have values only when x is between -1 and 1, exclusive. In other words, we have:

Observation 1: The domain of the function f(x) is from -1 to 1, exclusive.

Now, let us rewrite f(x) here and simplify it a bit.

f(x) = 1 + x + x² + x³ + x⁴ + x⁵ + …

That is, f(x) = 1 + x (1 + x + x² + x³ + x⁴ + … )

This implies f(x) = 1 + x f(x)

Therefore, we have f(x) = 1 / (1 – x)

On simplification, we have:

Equation 1: 1 / (1 – x) = 1 + x + x² + x³ + x⁴ + x⁵ + …

Now, the question we ask is: Are the LHS and the RHS of this equation one and the same? Earlier in this post, we had discussed the value of the RHS for various values of x, viz. -1/2, -1/3, 0, 1/3, 1/2. You will see that the LHS concurs. However, they are not one and the same thing! They have different domains.

We can see that 1 / (1 – x) is defined everywhere except at x = 1. In contrast to Observation 1, the domain of 1 / (1 – x) is “stretched”: it includes the domain of the infinite series and more. This indicates that an infinite series sometimes defines only a part of a function. More precisely, an infinite series might define a function over only a part of that function’s domain.
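A quick numerical check makes the contrast vivid. Here is a minimal Python sketch (my own) comparing partial sums of the series with 1 / (1 – x), both inside and outside the interval (-1, 1):

    def partial_sum(x, terms):
        """Partial sum of the geometric series 1 + x + x^2 + ..."""
        return sum(x ** k for k in range(terms))

    for x in (0.5, -0.5, 1 / 3, 2.0):
        print(x, partial_sum(x, 60), 1 / (1 - x))

Inside the interval, the two columns agree to many decimal places; at x = 2, the partial sums explode while 1 / (1 – x) quietly evaluates to -1.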

So, that was about “domain stretching” to uncover hidden properties of a function. I was sorely tempted to discuss this beautiful concept here. So, I had to make some room for it. 🙂

————————-

At this point, you are free to jump to Equation 2 below, from where our actual discussion of conditional and absolute convergence begins. Or, you can just stay on and see why Equation 2 is true.

On integrating Equation 1, we get:

-log(1 – x) = x + x²/2 + x³/3 + x⁴/4 + x⁵/5 + …

Therefore, log(1 – x) = – x – x²/2 – x³/3 – x⁴/4 – x⁵/5 – …

At x = -1, we get:

Equation 2: 1 – 1/2 + 1/3 – 1/4 + 1/5 – 1/6 + … = log 2

Let us denote this series (the LHS of Equation 2) as S. So, S converges to log 2. However, for S to converge to log 2, there is a condition that needs to be satisfied: the terms have to be added in that order. If you add the terms in a different order, the series might either converge to a different quantity or diverge. For example, let us rearrange the terms in this series as follows:

1 – 1/2 – 1/4 + 1/3 – 1/6 – 1/8 + 1/5 – 1/10 – …

Or (1 – 1/2) – 1/4 + (1/3 – 1/6) – 1/8 + (1/5 – 1/10) – …

This is equivalent to 1/2 – 1/4 + 1/6 – 1/8 + 1/10 – …

This simplifies to 1/2 (1 – 1/2 + 1/3 – 1/4 + 1/5 – …) or 1/2 (S)

The rearranged series sums up to half of S!
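Here is a small Python sketch (mine) that sums a million terms of each arrangement, so you can watch the two limits emerge numerically:

    from math import log

    def alternating(terms):
        """Partial sum of S = 1 - 1/2 + 1/3 - 1/4 + ..."""
        return sum((-1) ** (k + 1) / k for k in range(1, terms + 1))

    def rearranged(blocks):
        """Partial sum of 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ..., block by block."""
        return sum(1 / (2*j - 1) - 1 / (4*j - 2) - 1 / (4*j)
                   for j in range(1, blocks + 1))

    print(alternating(10 ** 6), log(2))        # ~ 0.6931
    print(rearranged(10 ** 6), log(2) / 2)     # ~ 0.3466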

Such series, whose limit depends on the order in which their terms are arranged, are said to be conditionally convergent. Those series that converge to the same quantity, no matter what order they are summed in, are said to be absolutely convergent.

————————-

Other articles in this series: On Convergence, On Divergence

Written by Anyone

May 16, 2008 at 12:24 pm

On Divergence

with one comment

My earlier plan was to write a bit about the various types of convergence in this article. However, I think we would do better to understand divergence first. So, here goes.

In the previous post, we saw an infinite series that converged. In other words, it exhibited limiting behaviour. (That series was a geometric series with 1 as the first term and 1/2 as the common ratio. As a matter of fact, any geometric series whose common ratio r satisfies |r| < 1 converges, whatever its first term a; that is, the sequence of its partial sums has a limit. As we saw, for the geometric series in the previous post, that limit is 2.) But what of those infinite series that do not exhibit limiting behaviour towards any quantity? Such series (i.e. ones that are not convergent) are said to be divergent. The partial sums of a divergent series never settle down to a limit: most often they grow without bound, though, as we will see in a later post, they can also oscillate forever. Loosely speaking, a divergent series has no finite sum.

From the example infinite geometric series in the previous post, we can make the following observation:

Observation 1: If a series converges, then the individual terms of the series must tend to zero.

In that series, the tenth term is 0.001953125, the twentieth term is 0.0000019073486328125, the fiftieth term is (approx.) 0.00000000000000177636, and so on. Notice how the Nth term tends to zero as N increases. So, Observation 1 seems true enough. And therefore, we can conclude that if the individual terms of a series do not approach zero, then the series diverges. However, the converse of Observation 1 is not true. If the individual terms of a series tend to zero, the series does not necessarily converge — it may diverge. The classic example of this is the harmonic series: 1 + 1/2 + 1/3 + 1/4 + 1/5 + ….

In this harmonic series, the tenth term is 0.1, the hundredth term is 0.01, the millionth term is 0.000001, and so on. It is clear that as N increases, the Nth term in this series tends to zero. However, this series is known to diverge. And there happens to be a rather old but elegant proof of its divergence by Nicole d’Oresme, a French scholar (c. 1323 – 1382).

d’Oresme observed that (1/3 + 1/4) is greater than 1/2. Similarly, (1/5 + 1/6 + 1/7 + 1/8 ) is greater than 1/2. So is (1/9 + 1/10 + 1/11 + 1/12 + 1/13 + 1/14 + 1/15 + 1/16). And so on. By first taking two terms, then four terms, then eight terms, then sixteen terms, and so on, it is possible to group the series into infinitely many “blocks”, where each block adds up to a value greater than 1/2. No matter how many such blocks we consider, it is always possible to come up with the next, well-defined block. That is, there is always a value x > 1/2 waiting to be added, no matter how many blocks we have already added up. Loosely speaking, the sum of the entire series must therefore be infinite. That is, the sum of the series increases without limit. The series diverges. Quod Erat Demonstrandum.
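The proof is easy to watch in action. Here is a tiny Python sketch (mine) that sums the first ten of d’Oresme’s blocks:

    def block(j):
        """Sum of the j-th block: 1/(2^j + 1) + ... + 1/(2^(j+1))."""
        return sum(1 / k for k in range(2 ** j + 1, 2 ** (j + 1) + 1))

    for j in range(1, 11):
        print(j, block(j))    # every block exceeds 1/2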

This elegant proof by d’Oresme seems to have been lost to the world for several centuries. Pietro Mengoli proved the result all over again in 1647, using a different approach. Forty years later, Johann Bernoulli proved it with yet another approach. Shortly after, Jakob Bernoulli came up with yet another proof! Neither Mengoli nor the Bernoulli brothers seem to have known about d’Oresme’s fourteenth-century proof. John Derbyshire asserts that d’Oresme’s proof remains the most elegant of all the proofs of this result, and is the one given in textbooks today.

PS: Johann Bernoulli was the father of Daniel Bernoulli (of Bernoulli’s Principle fame). Jakob Bernoulli (of Bernoulli Trial and Bernoulli Numbers fame) was Johann Bernoulli’s elder brother. That is one super-cool family, eh? 😀

————————-

Other articles in this series: On Convergence, On Domain Stretching, Conditional Convergence and Absolute Convergence

Written by Anyone

May 15, 2008 at 11:52 am

On Convergence

with 2 comments

Currently, I’m reading John Derbyshire’s Prime Obsession. In this book, Derbyshire makes a very good effort to explain the distribution of prime numbers and the Riemann Hypothesis in layman’s terms. Mathematics is treated rather loosely in several places, but you can forgive Derbyshire that: he has deliberately tried to make the concepts less formal and more intuitive. The subject of the book is not central to this series of articles, though. In the next few articles, I am going to try and explain the concepts of convergence and divergence as simply as I can, using some examples from Derbyshire’s book. In this article, I’ll be talking about convergence.

Consider a finite series, for example 1 + 1/2 + 1/4 + 1/8. This series (essentially a sum) can be calculated precisely, because the number of terms in it is finite. The sum, in fact, is equal to 15/8 or 1.875. Any such finite series can be equated to a known quantity. However, when a series is infinite, i.e. it has infinitely many terms, computing the sum exactly is not possible — the sum computed up to any large N can always be extended by adding the (N + 1)th term. In other words, it is not possible to equate the sum of an infinite series to a known quantity. So, the question is: can it at least be “approximated”? In other words, does it exhibit limiting behaviour towards some quantity? To put it yet another way, does it tend toward some quantity? The answer is: yes, sometimes. (At other times, you cannot zero in on a quantity for an infinite series at all. More on that in a later post.)

Now consider the following infinite series: 1 + 1/2 + 1/4 + 1/8 + 1/16 + …. Both the finite series we saw earlier and this infinite series have the same pattern of terms (the same progression): each is a geometric series with 1 as the first term and 1/2 as the common ratio. Yet, the two series are entirely different in nature.

I know I have said that computing an infinite series exactly is not possible. Even so, let us just start adding up the terms of the above infinite series and see where it leads us. Up to four terms, the sum is 1.875, as we saw earlier. The mathematical term of art for this is: the partial sum up to four terms is 1.875. Up to five terms, the sum is 1.9375. Up to six terms, 1.96875. Up to ten terms, 1.998046875. If you keep adding terms like this, you will notice that the partial sum keeps growing. However, you will also notice that the improvement of the Nth partial sum over the (N – 1)th partial sum shrinks towards zero as N increases.

Let us now take a “metrological” perspective of this — let us trace this infinite series on an imaginary six-inch ruler. Assume that the Nth term of the series indicates the length (in inches, say) that we have to progress along the ruler. Starting at the zero mark, we move along the ruler by the value of each successive term. The first term is 1, so on reading the first term, we progress to the 1-inch mark. Since the second term is 1/2, we then move to the 1.5-inch mark. At the third term, we find ourselves at the 1.75-inch mark. And so on. Basically, the step we take at the Nth term is half of the step we took at the (N – 1)th term, so the steps become vanishingly small as N grows.

As we can verify on the imaginary ruler, as more and more terms are added, the partial sum of the series gets closer and closer to the quantity 2 without ever equalling it. At the same time, there is no limit to how close the partial sum can get to 2. For any N, the Nth partial sum is closer to 2 than the (N – 1)th partial sum; the larger N gets, the closer the Nth partial sum gets to 2. For no value of N, though, will the Nth partial sum equal 2. The mathematical term of art for this phenomenon is: the series asymptotically tends to 2. Loosely speaking, this means that the sum equals 2 “at infinity”. This is known as convergence. The series converges to 2.
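For the skeptical, here is a tiny Python sketch (mine) that prints those partial sums:

    partial = 0.0
    for n in range(1, 21):
        partial += 0.5 ** (n - 1)   # the nth term: 1, 1/2, 1/4, ...
        print(n, partial)           # creeps towards 2 without ever reaching it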

We know that PageRank converges to the principal eigenvector of the modified adjacency matrix (L) of the Web. What this means is that the PageRank vector, no matter how many power iterations we conduct, will get painfully close to the principal eigenvector of L, but it will never be the principal eigenvector of L. However, there is no limit on how close the PageRank vector can get to the principal eigenvector of L.

There exist variants of convergence as well — absolute convergence and conditional convergence. (There are also pointwise convergence and uniform convergence, but I’m not yet fully equipped to explain them well.) But I think we’ll discuss those in another post, partly because I feel they might, by themselves, warrant a separate discussion and partly because my body is begging for some sleep right now.

————————-

Other articles in this series: On Divergence, On Domain Stretching, Conditional Convergence and Absolute Convergence

Written by Anyone

May 14, 2008 at 11:46 pm