## Posts Tagged ‘**basics**’

## Harry Vs Draco

Harry Potter and Draco Malfoy are the frontrunners for the “Best Student of the Year” award at Hogwarts. Professor Dumbledore suggests that they play a certain game (invented by a great wizard) to determine the winner. Dumbledore describes the game to Harry and Draco as follows.

Consider a board with two distinct non-zero integers *m* and *n* “thrown” onto it. At the beginning of the game, Dumbledore, as the impartial referee, will provide *m* and *n*. Then, there will be a toss to decide who gets to make the first move. The winner of the toss can either make the first move himself or invite his opponent to make it.

When his turn comes, each player has to introduce a new positive integer onto the board, equal to the difference between two of the integers already on the board.

For example:

First move: Player 1 would introduce *k = |m – n|*

Second move: Player 2 would introduce either *x = |m – k|* or *y = |n – k|*

Third move: (Assuming that Player 2 introduced *x = |m – k|* in the second move) Player 1 introduces either *u = |m – x|* or *v = |n – x|* or *w = |k – x|* or *y = |n – k|*

Fourth move: Player 2 introduces…

and so on…

The game goes on like this and concludes only when one of the players finds it impossible to introduce any more new numbers to the board; the other player thus wins.

Well, now, the stage is set. Dumbledore has provided *m* and *n*. Harry has won the toss!

Can you help Harry ensure that he wins the game? Who should make the first move and why?
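For readers who would like to experiment before committing to an answer, here is a small sketch in Python (the names `closure` and `moves_in_game` are my own, purely illustrative). It closes the board under absolute differences and counts how many moves a full game takes; this assumes, as one can check by playing a few games, that the final board does not depend on the order of moves. Trying a few *(m, n)* pairs reveals the pattern Harry needs.

```python
def closure(m, n):
    """Final board: every positive integer reachable via |a - b| moves."""
    board = {m, n}
    while True:
        # All new positive differences not yet on the board.
        new = {abs(a - b) for a in board for b in board} - board - {0}
        if not new:
            return board
        board |= new

def moves_in_game(m, n):
    """Number of moves in a full game: the board grows by one number
    per move, starting from the two given numbers."""
    return len(closure(m, n)) - 2

print(sorted(closure(3, 5)))   # [1, 2, 3, 4, 5]
print(moves_in_game(3, 5))     # 3
```

Since the player who makes the last move wins, the parity of `moves_in_game(m, n)` is what decides whether Harry should move first or hand the first move to Draco.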

## On Domain Stretching, Conditional Convergence and Absolute Convergence

Sometimes, an infinite series may not be as expressive as (or carry as much information as) the function it represents. In this post, we’ll mainly discuss this concept (and also look at conditional and absolute convergence briefly).

Consider the following infinite series: *f(x) = 1 + x + x^2 + x^3 + x^4 + x^5 + …*. Does this series ever converge? Try substituting *1/2* for *x* and see what happens. It converges, as we saw here. So, we write this as *f(1/2) ~ 2*. The twiddle or tilde sign here indicates that *f(x) asymptotically tends to 2 at x = 1/2*. Similarly, it is possible to prove the following: *f(-1/2) ~ 2/3*; *f(1/3) ~ 3/2*; *f(-1/3) ~ 3/4*; and so on. It is trivially true that *f(0) = 1*.

Let us now substitute *1* for *x*. We get *f(1) = 1 + 1 + 1 + 1 + …*. The series diverges at *x = 1*. It is obvious that the series diverges for *x = 2, 3, 4, 5, …* too. Observe the behaviour of *f(x)* when *x = -1*. We get *f(-1) = 1 – 1 + 1 – 1 + 1 – 1 + …*. If you take an even number of terms, the partial sum is *0*; if you take an odd number of terms, it is *1*. The partial sums of this series are definitely not becoming infinitely large; however, they are not converging either. This is considered a form of divergence. We can verify that *f(-2), f(-3), f(-4), …* also diverge. Loosely speaking, their partial sums oscillate, swinging towards positive and negative infinity at once.
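These oscillating partial sums are easy to inspect numerically. A quick sketch (the helper `partial_sums` is my own name, for illustration):

```python
def partial_sums(x, terms):
    """The first `terms` partial sums of 1 + x + x^2 + ..."""
    sums, total = [], 0
    for k in range(terms):
        total += x ** k
        sums.append(total)
    return sums

# At x = -1 the partial sums flip between 1 and 0 and never settle:
print(partial_sums(-1, 6))   # [1, 0, 1, 0, 1, 0]

# At x = -2 they oscillate with ever-growing magnitude:
print(partial_sums(-2, 6))   # [1, -1, 3, -5, 11, -21]
```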

We see that *f(x)* seems to have values only when *x* is between *-1* and *1*, exclusive. In other words, we have:

**Observation 1:** *The domain of the function f(x) is from -1 to 1, exclusive.*

Now, let us rewrite *f(x)* here and simplify it a bit.

*f(x) = 1 + x + x^2 + x^3 + x^4 + x^5 + …*

That is, *f(x) = 1 + x (1 + x + x^2 + x^3 + x^4 + … )*

This implies *f(x) = 1 + x f(x)*

Therefore, we have *f(x) = 1 / (1 – x)*

In other words:

**Equation 1:** *1 / (1 – x) = 1 + x + x^2 + x^3 + x^4 + x^5 + …*

Now, the question we ask is: *Are the LHS and the RHS of this equation one and the same?* Earlier in this post, we had discussed the value of the RHS for various values of *x*, viz. *-1/2*, *-1/3*, *0*, *1/3*, *1/2*. You will see that the LHS concurs. However, they are *not* one and the same thing! *They have different domains.*

We can see that *1 / (1 – x)* is defined everywhere except at *x = 1*. In contrast to Observation 1, the domain of *1 / (1 – x)* is “stretched”: it includes the domain of the infinite series and more. This indicates that an infinite series sometimes defines only a part of a function. More precisely, *an infinite series might define a function over only a part of the function’s domain*.
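The domain-stretching observation lends itself to a quick numerical check. A sketch (the helper names are mine): inside *(-1, 1)* the partial sums of the series home in on *1 / (1 – x)*, while outside that interval *1 / (1 – x)* still has a value even though the partial sums run away.

```python
def series_sum(x, terms):
    """Partial sum of 1 + x + x^2 + ... up to `terms` terms."""
    return sum(x ** k for k in range(terms))

def closed_form(x):
    """The "stretched" function 1 / (1 - x)."""
    return 1 / (1 - x)

# Inside the series' domain (-1, 1), the partial sums approach 1/(1 - x):
for x in (0.5, -0.5, 1/3, -1/3):
    assert abs(series_sum(x, 200) - closed_form(x)) < 1e-12

# Outside that domain, 1/(1 - x) is still defined, but the series is not:
print(closed_form(2))                           # -1.0
print([series_sum(2, n) for n in (5, 10, 15)])  # [31, 1023, 32767]
```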

So, that was about “domain stretching” to uncover hidden properties of a function. I was sorely tempted to discuss this beautiful concept here. So, I *had* to make some room for it. 🙂

————————-

At this point, you are free to jump to Equation 2 below, from where our actual discussion of conditional and absolute convergence begins. Or, you can just stay on and see *why* Equation 2 is true.

On integrating Equation 1, we get:

*-log(1 – x) = x + x^2/2 + x^3/3 + x^4/4 + x^5/5 + …*

Therefore, *log(1 – x) = – x – x^2/2 – x^3/3 – x^4/4 – x^5/5 – …*

At *x = -1*, we get:

**Equation 2:** *1 – 1/2 + 1/3 – 1/4 + 1/5 – 1/6 + … = log 2*

Let us denote this series (the LHS of Equation 2) as *S*. So, *S* converges to *log 2*. However, for *S* to converge to *log 2*, there is a *condition* that needs to be satisfied: *the terms have to be added in that order*. If you add the terms in a different order, the series might either converge to a different quantity or diverge. For example, let us rearrange the terms in this series as follows:

*1 – 1/2 – 1/4 + 1/3 – 1/6 – 1/8 + 1/5 – 1/10 – …*

Or *(1 – 1/2) – 1/4 + (1/3 – 1/6) – 1/8 + (1/5 – 1/10) – …*

This is equivalent to *1/2 – 1/4 + 1/6 – 1/8 + 1/10 – …*

This simplifies to *1/2 (1 – 1/2 + 1/3 – 1/4 + 1/5 – …)* or *1/2 (S)*

The rearranged series sums up to half of *S*!

Such series, whose limit depends on the order in which their terms are arranged, are said to be *conditionally convergent*. Those series that converge to the same quantity, *no matter what order they are summed in*, are said to be *absolutely convergent*.
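The rearrangement above can be checked numerically. A sketch (helper names mine), comparing the natural order with the rearranged order, taken one block *(1/(2k–1) – 1/(4k–2) – 1/(4k))* at a time:

```python
import math

def natural_order(terms):
    """Partial sum of S = 1 - 1/2 + 1/3 - 1/4 + ... in its natural order."""
    return sum((-1) ** (k + 1) / k for k in range(1, terms + 1))

def rearranged(blocks):
    """Partial sum of the rearrangement, one block
    (1/(2k-1) - 1/(4k-2) - 1/(4k)) at a time."""
    return sum(1 / (2*k - 1) - 1 / (4*k - 2) - 1 / (4*k)
               for k in range(1, blocks + 1))

print(natural_order(10**6))   # close to log 2       (about 0.693147)
print(rearranged(10**6))      # close to (log 2) / 2 (about 0.346574)
print(math.log(2), math.log(2) / 2)
```

Same terms, different order, and the rearranged partial sums settle on half the original limit.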

————————-

**Other articles in this series:** On Convergence, On Divergence

## On Divergence

My earlier plan was to write a bit about the various types of convergence in this article. However, I think we would do better to understand *divergence* first. So, here goes. In the previous post, we saw an infinite series that *converged*. In other words, it exhibited limiting behaviour. (This infinite series was a *geometric series* with *1* as the first term and *1/2* as the common ratio. As a matter of fact, *any* geometric series, with a first term *a* and a common ratio *r* satisfying *|r| < 1*, converges. That is, the sequence of its partial sums has a limit. As we saw, for the geometric series in the previous post, the limit of the sequence of its partial sums is *2*.) But, what of those infinite series that do not exhibit limiting behaviour towards any quantity? Such series (i.e. ones that are *not* convergent) are said to be *divergent*. The partial sums of a divergent series never settle on a limit; in the simplest case, they go on increasing *without limit*. Loosely speaking, the sequence of partial sums of such a divergent series tends to infinity.

From the example infinite geometric series in the previous post, we can make the following observation:

**Observation 1:** *If a series converges, then the individual terms of the series must tend to zero*.

In that series, the tenth term is *0.001953125*, the twentieth term is *0.0000019073486328125*, the fiftieth term is (approx.) *0.00000000000000177636*, and so on. Notice how the *N*th term tends to zero as *N* increases. So, Observation 1 seems true enough. And therefore, we can conclude that if the individual terms in a series *do not* approach zero, then the series *diverges*. However, the converse of Observation 1 is *not true*. If the individual terms of a series tend to zero, the series does not necessarily converge; it may diverge. The classic example of this is the *harmonic series*: *1 + 1/2 + 1/3 + 1/4 + 1/5 + …*.

In this harmonic series, the tenth term is *0.1*, the hundredth term is *0.01*, the millionth term is *0.000001*, and so on. It is clear that as *N* increases, the *N*th term in this series tends to zero. However, this series is known to diverge. And there happens to be a rather old but elegant proof of its divergence by Nicole d’Oresme, a French scholar (*c.* 1323 – 1382).

d’Oresme observed that *(1/3 + 1/4)* is greater than *1/2*. Similarly, *(1/5 + 1/6 + 1/7 + 1/8)* is greater than *1/2*. So is *(1/9 + 1/10 + 1/11 + 1/12 + 1/13 + 1/14 + 1/15 + 1/16)*. And so on. By first taking two terms, then four terms, then eight terms, then sixteen terms, and so on, it is possible to group the series into infinitely many “blocks”, where each block adds up to a value greater than *1/2*. No matter how many such blocks we consider, it is always possible to come up with the next, well-defined block. That is, there is always a value *x > 1/2* waiting to be added, no matter how many blocks we have already added up. Loosely speaking, the sum of the entire series must therefore be infinite. That is, the sum of the series increases without limit. The series diverges. *Quod Erat Demonstrandum*.
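d’Oresme’s grouping is easy to verify numerically. A sketch (the name `oresme_block` is my own): the *k*th block runs from *1/(2^k + 1)* through *1/2^(k+1)*, contains *2^k* terms, each at least *1/2^(k+1)*, and so always exceeds *1/2*.

```python
def oresme_block(k):
    """Sum of the harmonic terms from 1/(2**k + 1) through 1/2**(k + 1)."""
    return sum(1 / n for n in range(2**k + 1, 2**(k + 1) + 1))

# Every block beats 1/2 ...
for k in range(1, 12):
    assert oresme_block(k) > 0.5

# ... so the partial sums grow without bound; the first ten blocks alone
# (plus the leading 1 + 1/2) already push the total past 8:
print(1 + 1/2 + sum(oresme_block(k) for k in range(1, 11)))
```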

This elegant proof by d’Oresme seemed to have been lost on the world for several centuries. Pietro Mengoli proved this result all over again in 1647, using a different approach. Forty years later, Johann Bernoulli proved it with yet another approach. Shortly after, Jakob Bernoulli came up with yet another proof! Neither Mengoli nor the Bernoulli brothers seemed to have known about d’Oresme’s fourteenth century proof. John Derbyshire asserts that d’Oresme’s proof remains the most elegant of all the proofs for this result, and is the one given in textbooks today.

**PS:** Johann Bernoulli was the father of Daniel Bernoulli (of *Bernoulli’s Principle* fame). Jakob Bernoulli (of *Bernoulli Trial* and *Bernoulli Numbers* fame) was Johann Bernoulli’s elder brother. That is one super-cool family, eh? 😀

————————-

**Other articles in this series:** On Convergence, On Domain Stretching, Conditional Convergence and Absolute Convergence

## On Convergence

Currently, I’m reading John Derbyshire’s *Prime Obsession*. In this book, Derbyshire makes a very good effort to explain the distribution of prime numbers and the Riemann Hypothesis in layman’s terms. Mathematics is treated rather loosely at several places in the book, but you can forgive Derbyshire that. He has, in fact, tried to make mathematical concepts less mathematical and more intuitive in his book. The subject of the book is not central to the subject of this series of articles, though. In the next few articles, I am going to try and explain the concepts of convergence and divergence as simply as I can, using some examples from Derbyshire’s book. In this article, I’ll be talking about the concept of convergence.

Consider a *finite* series, for example *1 + 1/2 + 1/4 + 1/8*. This series (essentially a sum) can be calculated precisely, because the number of terms in it is finite. The sum, in fact, **is equal to** *15/8* or *1.875*. Any such finite series can be equated to a known quantity. However, when a series is infinite, i.e. it has infinitely many terms, precisely computing the sum is not possible; the sum computed up to any large *N* can always be bettered by adding the *(N+1)*th term. In other words, it is not possible to *equate* the sum of an infinite series to a known quantity. So, the question is: can it be “approximated” at least? In other words, does it exhibit *limiting behaviour* towards some quantity? To put it in yet another way, does it *tend toward* some quantity? The answer is: yes, sometimes. (At other times, you cannot zero in on a quantity for an infinite series at all. More on that in a later post.)

Consider the following *infinite* series now: *1 + 1/2 + 1/4 + 1/8 + 1/16 + …*. Both the finite series we saw earlier and this infinite series have the same pattern of occurrence of terms (or *progression*). They are both *geometric series*, with *1* as the first term and *1/2* as the common ratio. Yet, these two series are entirely different in nature.

I know I have said that computing an infinite series is not possible. Even so, let us just start adding up the terms in the above infinite series and see where it leads us. Up to four terms, the sum is *1.875*, as we have seen earlier. The mathematical term of art for this is: *the partial sum up to four terms is 1.875*. Up to five terms, the sum is *1.9375*. Up to six terms, *1.96875*. Up to ten terms, *1.998046875*. If you keep adding more and more terms like this, you will notice that the partial sum *improves* with the addition of more and more terms. However, you will also notice that the improvement in the *N*th partial sum over the *(N-1)*th partial sum diminishes vanishingly as *N* increases.

Let us now take a “metrological” perspective of this and trace this infinite series on an imaginary six-inch scale/ruler. Let us assume that the *N*th term in the series indicates the length (in inches, say) that we have to progress on the ruler. Assuming that we are at the zero mark to start off with, let us start moving along the ruler according to the value of each successive term. The first term is *1*. So, on reading the first term, we progress to the 1-inch mark on the ruler. Since the second term is *1/2*, we now move to the 1.5-inch mark. At the third term, we find ourselves at the 1.75-inch mark. And so on. Basically, at the *N*th term, our progress on the ruler is half of that at the *(N-1)*th term. Hence, as *N* grows large, the distance we cover at each step becomes vanishingly small.
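The walk along the ruler can be sketched in a few lines (the helper name `position` is mine). Note how the partial sums reproduce the values quoted earlier, and how the leftover distance to *2* halves at every step:

```python
def position(n):
    """Our position on the ruler after reading n terms of 1 + 1/2 + 1/4 + ..."""
    return sum(0.5 ** k for k in range(n))

for n in (4, 5, 6, 10):
    print(n, position(n), 2 - position(n))
# 4 1.875 0.125
# 5 1.9375 0.0625
# 6 1.96875 0.03125
# 10 1.998046875 0.001953125
```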

As we can verify on the imaginary ruler, as more and more terms are added, the partial sum of the series gets closer and closer to the quantity *2* **without ever equalling it**. However, there is *no limit* to how close the partial sum of the series can get to *2*. For any *N*, the *N*th partial sum is closer to *2* than the *(N-1)*th partial sum. The larger *N* gets, the closer the *N*th partial sum gets to *2*. For no value of *N* will the *N*th partial sum be *equal* to *2*, though. The mathematical term of art for this phenomenon is: *the series asymptotically tends to 2*. Loosely speaking, this means that the sum equals *2* at infinity. This is known as **convergence**. The series *converges* to *2*.

We know that PageRank *converges* to the principal eigenvector of the modified adjacency matrix (*L*) of the Web. What this means is that the PageRank vector, no matter how many power iterations we conduct, will get painfully close to the principal eigenvector of *L*, but it will never *be* the principal eigenvector of *L*. However, there is no limit on how close the PageRank vector can get to the principal eigenvector of *L*.
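As an illustration only, and not the actual Web-scale formulation: a toy power iteration on a tiny, made-up column-stochastic link matrix shows the same behaviour, with the iterate creeping towards the principal eigenvector without ever equalling it. The three-page graph and all names here are invented for the sketch.

```python
def power_iteration(L, v, steps):
    """Repeatedly apply the matrix L to v, renormalising the entries
    so that they sum to 1."""
    for _ in range(steps):
        v = [sum(L[i][j] * v[j] for j in range(len(v)))
             for i in range(len(L))]
        s = sum(v)
        v = [x / s for x in v]
    return v

# A made-up three-page web: page 0 splits its weight between pages 1
# and 2, page 1 links to page 2, page 2 links back to page 0.
L = [[0.0, 0.0, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 1.0, 0.0]]

v = power_iteration(L, [1/3, 1/3, 1/3], 50)
print(v)   # creeps towards the principal eigenvector [0.4, 0.2, 0.4]
```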

There exist variants of convergence as well: *absolute convergence* and *conditional convergence*. (There are also *pointwise convergence* and *uniform convergence*, but I’m not yet fully equipped to explain them well.) But I think we’ll discuss those in another post, partly because I feel they might, by themselves, warrant a separate discussion and partly because my body is begging for some sleep right now.

————————-

**Other articles in this series:** On Divergence, On Domain Stretching, Conditional Convergence and Absolute Convergence