# LIMITS

If a certain quantity ‘has a limit’, one thing that is certain (if the statement is true) is that this limit cannot be exceeded: all the different definitions of ‘limit’ concur on this. However, the question of whether an increasing or decreasing quantity actually *attains* this limit (supposing it has one) is another matter altogether. In real life the question tends to be academic; the final destination (‘limit’) of an air trip to Paris is not, as it happens, Paris itself, since Charles de Gaulle Airport lies strictly speaking outside the city limits. On the other hand, London City Airport is well and truly in London. But who cares about such finicky issues?

However, in mathematics the question of limits arose quite early on and the controversy is still going on today. The fundamental concept of Greek mathematics was *ratio*, which originally had a clear-cut arithmetic meaning. If two quantities *A* and *B* were in the ratio of 4 to 7, this meant that there was some common unit which, duplicated four times, gave us *A* and, duplicated seven times, gave us *B*. The unit was obvious enough if we were comparing quantities of eggs, but it was assumed that quantities of flour or even water could be compared in this way even though the ‘unit’ was not immediately obvious and might need to be defined, e.g. in terms of cupfuls or basinfuls. Note, however, that the quantities being compared were *of the same kind* ─ it was not obvious that unlike quantities could be meaningfully compared in such a way.

Originally Greek mathematics seems to have been arithmetic rather than geometrical ─ although Pythagoras is remembered today for the geometric theorem named after him, his greatest achievement at the time was probably his intuition that sounds could be compared numerically, hence the terms ‘fifths’ and ‘fourths’ we still use today. But very soon geometry turned up a seemingly insurmountable problem. Clearly, the diagonal of a unit square was ‘of the same kind’ as the side ─ both were line segments composed of atoms ─ but apparently the diagonal did not share a common unit with the side! Today, we say that the length of the diagonal is an ‘irrational’ number, namely *√2*, but the Greeks didn’t put it like this: they said the diagonal and the side were ‘incommensurable’, i.e. lacked a common unit of measurement.

Once the geometry of circles and triangles took off, there was the serious question of the relation of an arc of a circle to the radius. Curved lines were seemingly ‘incommensurable’ with straight lines, but one could hardly do much geometry, let alone engineering, without comparing the two. A way of proceeding was found by extending the meaning of ratio to figures of slightly different types.

“The question, ‘What is the area of a circle?’ would have had no meaning to the Greek geometers. But the query, ‘What is the ratio of the areas of two circles?’ would have been a legitimate one, and the answer would have been expressed geometrically: ‘the same as that of squares constructed on the diameters of the circles.’” ─ Boyer, *The History of the Calculus*, p. 32

This way of proceeding was reasonable enough, but it meant that Greek mathematics in its official form (Euclid, Archimedes &c.) tended to be extremely long-winded. Archimedes does not actually give us the formula for the volume of a sphere; he simply states that *“judging from the fact that any circle is equal [in area] to a triangle with base equal to the circumference and height equal to the radius of the circle, I apprehended that, in like manner, any sphere is equal to a cone with base equal to the surface of the sphere and height equal to the radius”*. Note the phrase ‘*in like manner*’.

Because of what seems to modern authors to be squeamishness concerning irrationals, the Greek method of exhaustion, though very similar in approach to the Integral Calculus, is not identical with it. Archimedes and others used the method of inscribed and circumscribed polygons to pin down the area of a circle and thus extract a numerical value for *π*. But the Greeks did not view the area of the circle as the ‘limit’ of this process, because this would imply that the areas of two *dissimilar* figures, circle and polygon, could ‘ultimately’ be the same.
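Archimedes’ doubling procedure can be mimicked numerically. The following Python sketch is a modern reconstruction (not Archimedes’ own arithmetic, which avoided irrationals by careful rational bounds): it squeezes *π* between the half-perimeters of inscribed and circumscribed polygons, doubling the number of sides from a hexagon up to the 96-gon Archimedes actually used.

```python
import math

def archimedes_bounds(doublings):
    """Squeeze pi between the half-perimeters of inscribed and circumscribed
    regular polygons around a circle of radius 1, starting from a hexagon."""
    circ = 4 * math.sqrt(3)  # circumscribed hexagon perimeter: 12 * tan(pi/6)
    insc = 6.0               # inscribed hexagon perimeter:     12 * sin(pi/6)
    for _ in range(doublings):
        circ = 2 * circ * insc / (circ + insc)  # harmonic mean gives the doubled circumscribed perimeter
        insc = math.sqrt(insc * circ)           # geometric mean gives the doubled inscribed perimeter
    return insc / 2, circ / 2  # circumference is 2*pi, so halve each perimeter

lower, upper = archimedes_bounds(4)  # hexagon doubled 4 times = 96 sides
print(lower, upper)  # about 3.14103 < pi < 3.14271
```

The printed interval agrees with Archimedes’ famous bounds 3 10/71 < π < 3 1/7; the circle’s circumference is trapped between the two perimeters but, as the text notes, the Greeks would not have called it their ‘limit’.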

Newton found himself up against exactly the same problem when he came to write the *Principia* but, since his main concern was working out the orbits of heavenly bodies (rather than just doing fancy pure mathematics), he needed a more decisive approach than the one offered by Greek mathematics. Immediately after the Axioms, Newton has a Section entirely devoted to the question of limits, and he kicks off with Lemma I:

“*Quantities, and the ratios of quantities, which in any finite time converge continually to equality, and before the end of that time approach nearer to each other than by any given difference, become ultimately equal.”*

Such a statement would have horrified a Greek or indeed a modern mathematician but Newton needs it, or believes he does, in order to get out various results concerning orbits.

The modern approach to limits is almost the exact opposite of Newton’s, though also quite different from the Greek attitude. Modern analysis typically concerns itself with ‘infinite’ sequences which converge to a limit (or fail to do so), while neatly avoiding the vexed question of whether the sequence actually *attains* this limit. In this way it is possible to deal with functions that are not even defined at a particular point but which nonetheless ‘*have a limit*’ as they approach this point. For example, the function *(x² – 9)/(x – 3)* is not defined at the point *x = 3* because division by zero is not allowed. Nonetheless, such a function does have a limiting value (namely *6*) as *x → 3* either from below or from above, just so long as *x* is not exactly *3*. Similarly, the hyperbola *y = 1/x*, although to all intents and purposes zero for very large *x*, never actually attains the limit zero.
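A quick numerical sketch (in Python, purely as an illustration) of a function that is undefined at a point yet has a perfectly good two-sided limit there ─ here *(x² – 9)/(x – 3)*, which equals *x + 3* everywhere except at *x = 3*:

```python
def g(x):
    """(x**2 - 9)/(x - 3): undefined at x = 3, but tends to 6 as x -> 3."""
    return (x**2 - 9) / (x - 3)

for x in (2.9, 2.999, 3.001, 3.1):
    print(x, g(x))  # values close to 6 on both sides of 3
# g(3) itself raises ZeroDivisionError: the limit exists, the value does not.
```

The sequence of outputs closes in on 6 from both directions, yet the function never needs to be evaluated at 3 itself ─ exactly the evasion the text describes.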

But how do we know that zero *is* the limit of *1/x*? Because of the way a limit is defined in modern mathematics. The precise *(ε, δ)* definition is rather finicky, but the basic idea is that of a challenge between two persons. Person *A* claims that such and such a function has a limit *l*. Person *B* says, “In that case, whatever non-zero margin of error I name, you must give me an *x* (or other independent variable) such that *f(x)* for your value of *x* is closer to the limit *l* than my margin. Moreover, you must show that all subsequent *x*’s will also fall within this margin.”

In some cases, it is very easy to pick up the challenge. For example, if I claim that the limit of *1/x* as *x* increases without bound is zero, my challenger will say, “I want a margin of error smaller than *1/10⁴*.” That is easy enough, since I only have to choose a number *> 10⁴*, and *1/x*, along with all subsequent values, will differ from zero by less than *1/10⁴*. Moreover, *whatever number* my challenger gives me, I can produce an *x* that lies inside the margin. Therefore I win. But it is often not at all obvious whether certain algebraic expressions have limits or not, and even when there are good reasons to believe they do, we may be unable to say exactly what the limit is.
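The challenge game for *lim 1/x = 0* can be sketched in a few lines of Python (the function name `witness` is my own illustration, not standard terminology): given any margin the challenger names, it produces an *x* beyond which *1/x* stays inside the margin.

```python
def witness(eps):
    """Given any margin eps > 0, return an x beyond which |1/x - 0| < eps."""
    assert eps > 0
    return 1.0 / eps + 1.0  # any x strictly greater than 1/eps will do

# The challenger names a margin; we answer, and every later x also qualifies.
for eps in (1e-1, 1e-4, 1e-9):
    x = witness(eps)
    assert abs(1.0 / x) < eps
    assert abs(1.0 / (x + 1000)) < eps  # "subsequent" x's stay inside the margin too
```

Note that the game never requires *1/x* to equal zero ─ only to beat every margin offered ─ which is precisely how the modern definition sidesteps the question of attainment.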

Newton is inconsistent or, if you like, opportunistic in his use of limits. The trouble with Calculus is, of course, that it uses limits either implicitly or explicitly all the time: the derivative is itself a limit since it is the ratio of

*(increase in the dependent variable) / (increase in the independent variable) = (f(x + δx) – f(x)) / ((x + δx) – x)*

as *δx → 0*. Thus, the derivative of *f(x) = x²* is

*lim (δx → 0) of ((x + δx)² – x²) / ((x + δx) – x) = (2x δx + δx²) / δx = 2x + δx*

The limiting value of the R.H.S. is obviously *2x*, since we can make it as close to *2x* as required simply by diminishing *δx*. It is tempting to set *δx* to zero and have done with it, but this gets us into trouble on the L.H.S., since one is not allowed to divide by zero. What one wants is simultaneously to let *δx* go to zero on one side but not on the other. But this is hardly consistent.
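The algebra can be checked numerically. A brief Python sketch (function names are mine) of the difference quotient for *f(x) = x²*, which equals *2x + δx* exactly and so creeps toward *2x* as *δx* shrinks ─ without *δx* ever being set to zero:

```python
def diff_quotient(f, x, dx):
    """(f(x + dx) - f(x)) / dx -- for f(x) = x**2 this is exactly 2*x + dx."""
    return (f(x + dx) - f(x)) / dx

square = lambda t: t * t
for dx in (1.0, 0.1, 0.001, 1e-6):
    print(dx, diff_quotient(square, 5.0, dx))  # approaches 2x = 10 as dx shrinks
```

Passing `dx = 0.0` raises a `ZeroDivisionError`, which is the L.H.S. trouble the text describes in computational form.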

Newton, unlike Leibniz, is aware of the logical difficulty but never quite manages to dispose of it satisfactorily. On one and the same page he says “*There is a limit which the velocity at the end of the motion may attain, but not exceed*” and a little further on he speaks of “*limits to which the ratios of quantities decreasing without limit do converge… but never go beyond, nor in effect attain to till the quantities are diminished ad infinitum…*”. In effect, as Bishop Berkeley pointed out in Newton’s own time, the idea that a body has an ‘*instantaneous velocity*’ when at a particular point is nonsensical, since when it is actually at such a point it is, by definition, at rest. Modern Calculus, while freeing the subject from glaring inconsistency, has also succeeded in removing it from the domain of physical reality which gave rise to it in the first place.

Newton, who was a natural philosopher first and a pure mathematician second, would most likely have regarded the modern analytic definition of a limit, originally due to Weierstrass, as a fudge. The key proposition concerning limits in the *Principia* is Lemma I of Book I, Section I:

*“Quantities, and the ratio of quantities, which in any finite time converge continually to equality, and before the end of that time approach nearer to each other than by any given finite difference, become ultimately equal”*.

Newton’s ‘proof’ is brief and to the point:

*“If you deny it, suppose them to be ultimately unequal, and let D be their ultimate difference. Therefore, they cannot approach nearer to equality than by that given difference D; which is contrary to the supposition.”*

This is admirably frank: Newton is, as it were, putting his hands above the table and showing that he has nothing in them. Certainly, he needs such a proposition and appeals to it implicitly or explicitly all the time. But the question every enquiring mechanics student wants to pose is: “Is the ‘limit’ of a convergent sequence actually *attained*?” The modern definition of a limit artfully avoids the problem, since the phrase “*approaches nearer than any finite quantity*” applies equally well to a zero or a non-zero difference (provided we can always exhibit such a difference when challenged). This is *mathematically speaking* perfectly satisfactory ─ but not physically speaking.

Newton’s own naively pragmatic approach nonetheless leads to the strange-sounding lemma (vii):

“*I say that the ultimate ratio of the arc, chord, and tangent, any one to any other, is the ratio of equality”*.

This stretches credulity a shade too far but follows logically enough if you accept lemma (I).

To my mind, if you are a realist, you either have to take on board Newton’s Lemma with all that it entails or propose a counter Lemma on the following lines:

*“Quantities, and the ratio of quantities, which converge to equality do not ordinarily approach nearer to each other than a particular given difference, and thus do not become ultimately equal.”*

This Lemma would itself rely on a ‘finitist’ Axiom such as the following:

*“Every quantity such as length, time, force &c. has a minimum value which cannot be further diminished such as, for example, a smallest possible unit of length, the *stralda*, or the shortest possible interval of time, the *ksana*.”*

This Axiom has its own problems, but it enables one to avoid infinite regress and is surely more reasonable than belief in the ‘infinite divisibility of space and time’. Some contemporary physicists suggest that ‘space-time’ is ‘grainy’ ─ though few have worked out the considerable conceptual and physical consequences of such an approach. *SH* 18/8/19