§4a. The scholarly consensus is that there have been three big periods of crisis in the foundations of Western math. The first was the Pythagorean incommensurables. The third is the era (which we’re arguably still in) following Gödel’s Incompleteness proofs and the breakdown of Cantorian set theory.1 The second great crisis surrounded the development of calculus.
The idea now is going to be to trace out how transfinite math gradually evolves out of certain techniques and problems associated with calculus/analysis. In other words, to build a kind of conceptual scaffold for viewing and appreciating G. Cantor’s achievements.2 As mentioned, this means going diachronically back to where the original timeline left off at the end of §3a.
So then now we’re near the end of the seventeenth century, the time of the Restoration and the Siege of Vienna, of towering perukes and scented hankies, etc. Doubtless you know already that calculus was the most important mathematical discovery since Euclid. A seminal advance in math’s ability to represent continuity and change and real-world processes. Some of this has already been talked about. You probably also know that I. Newton or/and G. W. Leibniz are usually credited with its discovery.3 You might also know—or at least have been able to anticipate from §3a’s timeline—that the idea of exclusive or even dual credit is absurd, as is the notion that what’s now called the calculus comprises any one invention. By even the simplest accounting, royalties would need to be shared by a good dozen mathematicians in England, France, Italy, and Germany who were all busily ramifying Kepler and Galileo’s work on functions, infinite series, and the properties of curves, motivated (meaning the mathematicians were, as has also been mentioned) by certain pressing scientific problems that were also either math problems or treatable as same.
Here were some of the most urgent motivating problems: calculating instantaneous velocity and acceleration (physics, dynamics); finding the tangent to a curve (optics, astronomy); finding the length of a curve, the area bounded by a curve, the volume bounded by a surface (astronomy, engineering); finding the maximum/minimum value of a function (military science, esp. artillery). There were probably some other ones, too. We now know that these problems are all closely related: they’re all aspects of calculus. But the mathematicians working on them in the 1600s didn’t know this, and Newton and Leibniz do deserve enormous credit for seeing and conceptualizing the relations between, for example, the instantaneous velocity of a point and the area bounded by its motion’s curve, or the rate of change of a function and the area given by a function whose rate of change we know. It was N. & L. who first saw the forest—meaning the Fundamental Theorem that differentiation and integration are mutually inverse—and were able to derive a general method that worked on all the above-type problems. On the mystery of continuity itself. Although not without having to dance around some nasty crevasses in this forest, and certainly not without all sorts of other people’s preliminary arboreal results and discoveries. Those in addition to the ones already timelined include, e.g. : 1629—P. de Fermat’s method for finding the max. and min. values of a polynomial curve; 1635ish—G. P. de Roberval’s discovery that a curve’s tangent could be expressed as a function of the velocity of a moving point whose path composed the curve4; 1635—B. Cavalieri’s Method of Indivisibles for calculating the areas under curves; 1664—I. Barrow’s geometrical Method of Tangents.
Plus, c. 1668, there’s a great prescient ¶ in the preface to J. Gregory’s Geometriae Pars Universalis whose upshot is that the really important division of math is not into the geometrical v. the arithmetical but into the universal v. the particular. Why this is prescient: Various mathematicians from Eudoxus to Fermat had invented and deployed calc-type methods, but always geometrically and always in relation to specific problems. It’s Newton and Leibniz who combine the various methods of Latitudes and Indivisibles & c. into a single arithmetic technique whose breadth and generality (i.e., its abstractness) are its great strength.5 The two’s backgrounds and approaches are different, though. Newton comes to calculus via Barrow’s Method of Tangents, the Binomial Theorem, and Wallis’s work on infinite series. Leibniz’s route involves functions, patterns of numbers called ‘sum-’ and ‘difference-sequences,’ and a distinctive metaphysics6 whereby a curve could be treated as an ordered sequence of points separated by a literally infinitesimal distance. (In brief, curves for Leibniz are generated by equations, whereas quantities varying along a curve are given by functions (pretty sure we’ve mentioned that he copyrighted ‘function’).)
We’re not going to get too much into the Newton-v.-Leibniz thing, but the metaphysical differences in the way they viewed infinitesimal quantities are highly germane.7 Newton, at heart a physicist who thought in terms of velocity and rate of change, used infinitely tiny increments in his variables’ values as disposable tools in arriving at the derivative of a function. Newton’s derivative was basically a Eudoxian-type limit of these increments’ ratio as they got arbitrarily small. Leibniz, a lawyer/diplomat/courtier/philosopher for whom math was sort of an offshoot hobby,8 had an aforementionedly idiosyncratic metaphysics that involved certain weird, fundamental, infinitely small constituents of all reality,9 and he pretty much built his calculus around the relations between them. These differences had methodological implications, obviously, with Newton seeing everything in terms of rates of change and the Binomial Theorem and thus tending to represent functions10 as infinite series, v. Leibniz preferring what are known as ‘closed forms’ and avoiding series in favor of summations and straight functions, including transcendental functions when algebraics wouldn’t work. Some of these differences were just taste—e.g., the two used totally different notations and vocab, although Leibniz’s was better and mostly won out.11 For us, the important thing is that both men’s versions of calculus caused serious problems for mathematics as a deductive, logically rigorous discipline, and were vigorously attacked at the same time that they enabled all sorts of incredible results in math and science. The source of the foundational shakiness should be easy to see, whether the problem appears more methodological (as in Newton’s case) or metaphysical (as in G. W. L.’s). As has been mentioned in §2b and probably elsewhere (and is well known anyway), the trouble concerns infinitesimals, which all over again in the late 1600s force everybody to try to deal with the math of ∞.
The best way to talk about these problems is to sketch the way early calculus works. We’re going to do a somewhat nonstandard, quadrature-type derivation that manages to illustrate several different aspects of the technique at once so that you don’t have to sit through a bunch of different cases. We’re also going to sort of mix and match N. & L.’s different methods and terminology, since the aim here isn’t historical accuracy but clarity of illustration. For the same reason, we’ll eschew the usual how-to-find-the-tangent or how-to-go-from-average-speed-to-instantaneous-speed cases most textbooks use.12
Refer first to what we’ll call Exhibit 4a, which please note isn’t even remotely to scale but does have the advantage of making stuff easy to see. For the same reasons, the relevant ‘curve’ in Exhibit 4a is a straight line, the very simplest kind of curve, w/r/t which the calculations are minimally hairy.13 E4a’s curve here can be regarded either as a set of points produced by a continuous function on a closed interval or as the path of a moving point in 2D space. For the latter, Newtonian case (which is what most college classes seem to prefer), note that here the vertical axis indicates position and the horizontal axis is time, i.e. that they’re reversed from the axes in the motion-type graphs you’re apt to have had in school (long story; good reasons). So:
Exhibit 4a
First, posit that A, the area under the curve, is equal to x². (This will seem strange because E4a's curve is a straight line, so it looks like A really ought to be x²/2; but for most curves drawn exactly to scale x² is going to work, so here please just play along and pretend that y exactly equals 2x.) Meaning formally we assume that:
(1) A = x². Then posit that x increases by some infinitesimally tiny quantity t,14 with the area under the curve consequently increasing by the skinny sliver tz (z being the curve's height over that sliver). Given this, and given the equality in (1), we have:
(2) A + tz = (x + t)². Multiplying out (2)'s binomial, we get:
(3) A + tz = x² + 2xt + t². And since, by (1), A = x², we can reduce (3) to:
(4) tz = 2xt + t². Now watch close. We take (4) and divide through by t to get:
(5) z = 2x + t. Watch again: since we've defined t as infinitesimally tiny, 2x + t is equivalent, in finite terms, to 2x, so the relevant equation becomes:
(6) z = 2x. At which point fini. What-all this shows we'll see in a moment.
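IYI For readers who'd like to watch the derivation's arithmetic actually run, here's a minimal numerical sketch in modern Python (anachronistic, obviously, and the function names are ours, not N.'s or L.'s): with A = x², the ratio of the area-increment to t works out to exactly 2x + t, which crowds up against 2x as t gets tiny.

```python
# Numerical sketch of Exhibit 4a's derivation: A = x^2, so the
# area-increment ratio (A(x + t) - A(x)) / t should equal 2x + t.

def area(x):
    """Area under Exhibit 4a's curve from 0 to x, posited as x^2."""
    return x ** 2

def increment_ratio(x, t):
    """The z of equation (5): (A(x + t) - A(x)) / t, with t > 0."""
    return (area(x + t) - area(x)) / t

x = 3.0
for t in (1.0, 0.1, 0.001, 0.00001):
    print(t, increment_ratio(x, t))   # marches toward 2x = 6 as t shrinks
```

No actual t here is infinitesimal, of course, which is exactly the classical-calc problem: the ratio never quite equals 2x until t gets illegally tossed out.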
You will likely have noticed some serious shiftiness in this derivation’s treatment of infinitesimal quantity t. In the move from (4) to (5), t is sufficiently > 0 to be a legal divisor. In the move from (5) to (6), though, t appears to be = 0, since t added to 2x yields 2x. In other words, t is being treated as 0 when it’s convenient and as > 0 when it’s convenient,15 which appears to create the contradiction (t = 0) & (t ≠ 0), which—as you’ll recall from the previous discussion of reductio-type proofs—seems like ample grounds for going back and saying there’s got to be something wrong with using infinitesimal quantities like t. At the very least, the t thing looks like a notational trick, some math version of Cooking the Books in accountancy.16
Except here’s the thing. If you blink the apparent contradiction, or at least hold off on running a reductio on it, a derivation like E4a’s (which, notwithstanding its resemblance to Leibniz’s Characteristic Triangle, is actually a simplified version of the process Newton uses in De Analysi17) turns out to be a truly marvelous piece of mathematical ordnance, one that yields at least two crucial results. Result #1 is that the rate of change of x² can be shown to be 2x if you accept the computation as representing the change in x during the ‘instant’ t.18 Result #2 is that you can show the rate of change of area A to be the ‘curve’ (namely E4a’s y (remember that a straight line is a kind of curve)) that bounds A. To see this, compute (A + tz) − A = (x + t)² − x² and cancel to get tz = 2xt + t², then divide through by the suspiciously convenient t to get z, which remember is only ‘infinitesimally greater’ than y and so here can be regarded as = y.19 You end up with y = 2x, which happens to be the function that produces Exhibit 4a’s curve. Which means that packed into the result y = 2x is the basic principle of integral calc: the rate of change of the area bounded by a curve is nothing other than that very curve. Which in turn means that the integral of a function that has a given derivative is the function itself, which happens to be the Fundamental Theorem of the Calculus,20 viz. that differentiation and integration are inversely related21 the same way multiplication/division and exponents/roots are, which is why the calculus is so powerful and N. & L. deserve so much credit—the F.T.C. combines both techniques in one high-caliber package (… so long as you accept the equivocations about whether t = 0).
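IYI The F.T.C. can itself be spot-checked by brute arithmetic. Here's a hedged sketch (Python again; the rectangle-summing is our crude stand-in for quadrature, not anybody's official method): approximate the area under y = 2x with skinny rectangles, then compute that area-function's rate of change and watch it come back out as (approximately) y = 2x itself.

```python
# Sketch of the Fundamental Theorem on Exhibit 4a: the rate of change
# of the area under y = 2x, computed by brute-force summation, comes
# back out as (approximately) the curve y = 2x itself.

def curve(x):
    return 2 * x          # Exhibit 4a's 'curve'

def area_under(x, slices=100000):
    """Approximate area under the curve from 0 to x via thin rectangles."""
    dx = x / slices
    return sum(curve(i * dx) * dx for i in range(slices))

x, t = 3.0, 1e-4
rate = (area_under(x + t) - area_under(x)) / t
print(rate, curve(x))     # the two values nearly coincide
```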
This is, however, not the way most of us had these matters explained in school. If you took Calc I, chances are that you learned, via velocity-and-acceleration graphs, to ‘take the limit of Δy/Δx as Δx → 0,’ that being Leibniz’s notation and the limit concept being postcalc analysis’s subsequent way of finessing the whole problem of infinitesimals. You might, for example, know or recall that in most modern textbooks an infinitesimal is defined as ‘a quantity that yields 0 after the application of a limit process’. If you are an actual Calc I survivor, you surely can also remember how brutally abstract and counterintuitive the limits thing is to try to learn: almost nobody ever tells undergraduates the whys or whences of the method,22 or mentions that there’s an easier or at least more intuitive way to understand dx and Δx and t, namely as orders of 1/∞.
Most teachers instead try to distract students with snazzy examples of calculus’s ability to solve all kinds of complex real-world problems—from instantaneous velocity and -acceleration as 1st and 2nd derivatives, to Kepler’s elliptical orbits and Newton’s F = m(d²x/dt²), to the motions of sprung springs and bouncing balls, eclipses’ penumbrae, loudness as a function of a volume-knob’s rotation; not to mention the trigonometric vistas that open up when you learn that d(sin x) = cos x and d(cos x) = − sin x, that the tangent is the limit of the secant, etc. These are usually presented as the inducements for mastering the limits concept, a concept that is really no less abstract or algesic than trying to conceive of dx or t as just incredibly, mindbendingly tiny.
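IYI Apropos ‘the tangent is the limit of the secant’: here's a quick numerical sketch (Python; nothing here is official textbook notation) showing the slope of the secant through (x, sin x) and (x + h, sin(x + h)) creeping toward cos x as h shrinks, i.e. d(sin x) = cos x in slow motion.

```python
import math

# 'The tangent is the limit of the secant': the slope of the secant
# through (x, sin x) and (x + h, sin(x + h)) approaches cos x as h
# shrinks -- the numerical shadow of d(sin x) = cos x.

def secant_slope(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
for h in (0.1, 0.01, 0.0001):
    print(h, secant_slope(math.sin, x, h), math.cos(x))
```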
As should be understandable from the foregoing, the true motive behind the limits approach to calc was that Newton and Leibniz’s infinitesimal quantities and notational sleights of hand had opened up some nasty cracks in math’s foundations, given that the proposition ‘(x = 0) & (x ≠ 0)’ violates all sorts of basic LEM-ish axioms. Given our Story so far, the easiest thing to say appears to be that most of the supposed problems here were actually caused by math’s inability to handle infinite quantities—that, as with Zeno’s Dichotomy and Galileo’s Paradox, the real difficulty was that no one yet understood the arithmetic of ∞. It wouldn’t exactly be wrong to say this, but for our purposes it would be at least semi-impoverished.23 As with everything else about math after calculus, the real problems and stakes here are more complex.
§4b. Let’s restate and summarize a bit. The sheer power and significance of calculus presented early-modern math with the same sort of crisis that Zeno’s Dichotomy had caused for the Greeks. Except in a way it was worse. Zeno’s Paradoxes hadn’t solved any extant problems in math, whereas the tools of calculus did. The panoply of real-world results that calc enabled has already been detailed, as has the extraordinary timing—every kind of applied science is butting up hard against problems of continuous phenomena just as Newton and Leibniz and their respective cadres come up with a mathematical account of continuity.24 One that works. One that leads directly to the great modern decoding of physical laws as differential equations.
Except foundationally it’s a disaster. The whole thing’s built on air. The Leibnizians25 couldn’t explain or derive actual quantities that were somehow not 0 but still infinitely close to 0. The Newtonians,26 who claimed that calculus didn’t really depend on infinitesimal quantities but rather on ‘fluxions’—w/ fluxion meaning the rate of change of a time-dependent variable—foundered on the requirement that the ratios of these fluxions be taken just as they vanish into or emerge from 0, meaning really the infinitesimal first or last instant when they’re > 0, which of course is just trading infinitely tiny quantities for infinitely brief instants. And the Newtonians had no better account of these instantaneous ratios than the Leibnizians did of infinitesimal quantities.27 The only real advantage of the Newtonian version (for everyone but Calc I students) is that it already has the limit concept sort of implicitly contained in the idea of a vanishingly tiny first/last instant—it would be mostly A.-L. Cauchy, then later K. Weierstrass, who drew all this out. (Weierstrass, by the way, was a teacher of Cantor’s.)
Apropos continuity and infinitesimals and calculus, it’s worth looking quickly at one more of Zeno’s anti-motion Paradoxes. This one’s usually called the Arrow, because it concerns the time-interval during which an arrow is traveling from its bow to the target.28 Zeno observes that at any specific instant in this interval, the arrow occupies “a space equal to itself,” which he says is the same as its being “at rest.” The point is that the arrow cannot really be moving at an instant, because motion requires an interval of time, and an instant here is not an interval; it’s the tiniest temporal unit imaginable, and it has no duration, just as a geometric point has no dimension. And if, at each and every instant, the arrow is at rest, then the arrow does not ever move. In fact nothing whatsoever moves, really, since at any given instant everything is at rest.
There’s at least one implicit premise in Zeno’s argument, which schematizing helps make overt:
(1)At each and every instant, the arrow is at rest.
(2)Any interval of time is composed of instants.
(3)Therefore, during any interval of time, the arrow isn’t moving.
The covert premise is (2), which is just what Aristotle attacks in the Physics, dismissing the whole Z.P. on the grounds that “ … time is not composed of indivisible instants,”29 i.e. that the very notion of something being either in motion or at rest at an instant is incoherent. Notice, though, that it is precisely this idea of motion at an instant that N. & L.’s calculus is able to make mathematical sense of—and not just general motion but precise velocity at an instant, not to mention rate-of-change-in-velocity at an instant (= acceleration, 2nd derivative), rate-of-change-in-acceleration at an instant (= 3rd derivative), etc.
Arrow-wise, the fact that classical calc is able to handle precisely what Aristotle says can’t be handled is not a coincidence. First off, have another glance at the thing about an instant having “no duration” two ¶s back, and see that this term is somewhat ambiguous. It turns out that the kind of instant Zeno is talking about is, at least mathematically, not something of 0 duration, but an infinitesimal. It has to be. Consider again the hoary old jr.-high formula for motion, Rate × Time = Distance, or r = d/t. An arrow at rest has an r of 0 and covers 0d, obviously. But if, timewise, an instant = 0, then Zeno’s scenario ends up positing 0 as the divisor in r = d/t, which is mathematically illegal/fallacious in the same way the whole 0/0 thing is illegal/fallacious.
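IYI The illegality can be made vivid with a toy computation (Python; the arrow's 40-units/sec speed is wholly made up): average r = d/t is fine for any interval t > 0, but the Zeno-style t = 0 instant makes the arithmetic itself choke.

```python
# The r = d/t trouble in miniature: an arrow moving at a steady
# 40 units/sec. Average rate over any interval t > 0 is legal;
# plug in a 0-duration instant and you get 0/0.

def distance(t):
    return 40 * t          # hypothetical arrow, constant speed

def average_rate(t0, t):
    return (distance(t0 + t) - distance(t0)) / t   # r = d/t

print(average_rate(2.0, 0.5))      # a perfectly legal average rate

try:
    average_rate(2.0, 0.0)         # the 0/0 of Zeno's instant
except ZeroDivisionError as err:
    print("illegal:", err)
```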
Except here we have to be careful again, just as with the other Z.P.s. As has been mentioned in all kinds of different contexts already, there’s ‘handling’ something v. really handling it. Even if we grant that Zeno’s instant is an infinitesimal and thus ripe for treatment by Newtonian fluxion or Leibnizian dx, you can probably already see that a classical-calc-type ‘solution’ to Zeno’s Arrow is apt to be trivial in the same way that 1/2 + 1/4 + 1/8 + … = 1 is trivial w/r/t the Dichotomy. That is, the Arrow is really a metaphysical paradox, and it’s precisely a metaphysical account of infinitesimals that calc hasn’t got. Without such an account, all we can do is apply to the Arrow some sexy-looking formula that will depend on the same mysterious and paradoxical-looking infinitesimals that Zeno’s using in the first place; plus there will still be the unsettling question of how an arrow actually gets to the target over an interval comprising infinitely many 1/∞-size instants.30
The problem is that where the Arrow is metaphysical it is also extremely subtle and abstract. Consider for instance another hidden premise, or maybe a kind of subpremise that’s implicit in Zeno’s (1): is it really true that something’s got to be either moving or at rest? At first it certainly looks true, provided we take ‘at rest’ to be a synonym for ‘not moving’. Remember LEM, after all. Surely, at any given instant t, something is either moving or else not moving, meaning that it has at t either a Rate > 0 or a Rate = 0. That in truth this disjunction is not valid—that LEM doesn’t really apply here—can be seen by examining the difference between the number 0 and the abstract word ‘nothing’. It’s a tricky difference, but an important one. The Greeks’ inability to see it was probably what kept them from being able to use 0 in their math, which cost them dearly. But 0 v. nothing is one of those abstract distinctions that’s almost impossible to talk about directly; you more have to do it with examples. Imagine there’s a certain math class, and in this class there’s a fiendishly difficult 100-point midterm, and imagine that neither you nor I get even one point out of 100 on this exam. Except there’s a difference: you are not in the class and didn’t even take the exam, whereas I am, and did. The fact that you received 0 points on the exam is thus irrelevant—your 0 means N/A, nothing—whereas my 0 is an actual zero. Or if you don’t like that one, imagine that you and I are respectively female and male, both healthy and 20–40 years of age, and we’re both at the doctor’s, and neither of us has had a menstrual period in the past ten weeks, in which case my total number of periods is nothing, whereas yours here is 0—and significant. End examples.
So it’s simply not true that something’s always got to be either 0 or not-0; it might instead be nothing, N/A.31 In which case there’s a nontrivial response to Zeno’s premise (1), to wit: the fact that the arrow is not moving at t does not mean that its r at t is 0 but rather that its r at t is nothing. That this slipperiness in premise (1) is not spotted right away is due in part to the 0-v.-nothing thing and in part to the vertiginous, Level-Four abstractness of words like ‘movement’ and ‘motion’. The noun ‘motion,’ for example, is especially sneaky because it doesn’t look all that abstract; it seems straightforwardly to denote some single thing or process—whereas, if you think about it,32 even the simplest kind of motion is really a complicated relation between (a) one object, (b) more than one place, and (c) more than one instant. Upshot: The fallacy of the Arrow lies in Zeno’s assumption that the question ‘Is the arrow in motion or not at instant t?’ is any more coherent than ‘What was your grade in this class you didn’t take?’ or ‘Is a geometric point curved or straight?’ The right answer to all three is: N/A.33
Granted, this response to the Arrow is, strictly speaking, philosophical rather than mathematical. Just as a classical-calc-type solution will be philosophical, too, in the sense of having to make metaphysical claims about infinitesimals. Modern analysis’s own way of dealing with this Z.P. is very different, and purely technical. If, again, you ever did the Arrow in college math, you probably learned that Zeno’s specious premise is (1) but heard nothing34 about an instant as an infinitesimal. This (again) is because analysis has figured out ways to dodge both the infinitesimal and the 0-as-divisor problem in its representations of continuity. Hence, in a modern math class, premise (1) is declared false because the arrow’s r-at-instant-t can be calculated as ‘the limit of average r’s over a sequence of nested intervals converging to 0 and always containing t,’ or something close to that. Be apprised that the language of this solution35 is Weierstrassian: it’s his refined limit-concept36 that will allow calculus to handle the related problems of infinitesimals and Zeno-type infinite divisibility.
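IYI Here's a rough sketch of that nested-intervals business (Python, with a made-up position function; real Weierstrassian limits are defined, not computed, so this is strictly illustrative): average r's over a sequence of intervals shrinking down around t settle on a single value, which is what gets declared the r at t.

```python
# Sketch of the modern, Weierstrass-flavored dodge: the arrow's r at
# instant t as the limit of average r's over nested intervals
# converging to 0 and always containing t. Position fn is hypothetical.

def position(t):
    return t ** 2           # an arrow whose speed is itself changing

def rate_at(t, steps=20):
    half = 1.0
    rate = None
    for _ in range(steps):
        # average r over the interval [t - half, t + half]
        rate = (position(t + half) - position(t - half)) / (2 * half)
        half /= 2           # nest the next interval inside this one
    return rate             # the average r's have converged by now

print(rate_at(3.0))         # settles on the instantaneous r, here 2t
```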
The specific relations between these problems are intricate and abstract, but for us they’re totally apropos. However weird or foundationally corrosive infinitesimals are, it turns out that their disqualification from math/metaphysics creates some wicked little crevasses as well. Example: Without infinitesimals, it apparently makes no sense to talk about the ‘next instant’ or ‘very next split-second’—no two instants can be quite successive. Explanation: Without infinitesimals, then respecting any two supposedly successive instants t₁ and t₂, there are only two options: either there’s no (meaning 0) temporal interval between t₁ and t₂, or there’s some temporal interval > 0 between them. If there’s 0 interval, then t₁ and t₂ are clearly not successive, because then they’re the exact same instant. But if there is some temporal interval between them, then there are always other, tinier instants between t₁ and t₂—because any finite temporal interval can always be subdivided tinier and tinier, just like distances on the Number Line.37 Meaning there’s never going to be a very next t₂ after t₁. In fact, so long as infinitesimals are non grata, there must always be an infinite number of instants between t₁ and t₂. This is because if there were only a finite number of these intermediate instants, then one of them, tₓ, would by definition be the smallest, which would mean that tₓ was the instant closest to t₁, i.e. that tₓ was the very next instant after t₁, which we’ve already seen is impossible (because of course what about the instant (t₁ + tₓ)/2?).
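IYI The no-closest-instant argument can be mechanized, using exact rational arithmetic so there's no floating-point fudging: however close a candidate instant tₓ gets to t₁, the midpoint (t₁ + tₓ)/2 is a still-closer instant, forever.

```python
from fractions import Fraction

# No 'very next instant': for any candidate instant tx near t1,
# the midpoint (t1 + tx) / 2 is an instant strictly between them.
# Exact rationals, so the subdivision never bottoms out by accident.

t1 = Fraction(0)
tx = Fraction(1)
for _ in range(10):
    tx = (t1 + tx) / 2     # an instant strictly between t1 and tx
    print(tx)              # 1/2, 1/4, 1/8, ... never reaching t1
```

Run it for ten steps or ten million: tx keeps halving toward t₁ and never arrives, and at every stage the interval still holds infinitely many more such instants.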
If you’re now noticing a certain family resemblance among this no-successive-instant problem, Zeno’s Paradoxes, and some of the Real Line crunchers described in §2c and -e, be advised that this is not a coincidence. They are all facets of the great continuity conundrum for mathematics, which is that ∞-related entities can apparently be neither handled nor eliminated. Nowhere is this more evident than with infinitesimals. They’re riddled with paradox and can’t be defined, but if you banish them from math you end up having to posit an infinite density to any interval,38 in which the idea of succession makes no sense and no ordering of points in the interval can ever be complete, since between any two points there will be not just some other points but a whole infinity of them.
Overall point: However good calculus is at quantifying motion and change, it can do nothing to solve the real paradoxes of continuity. Not without a coherent theory of ∞, anyway.
1 Command Decision: We’re going to quit saying ‘see below’ all the time and simply assume that from here on it will be obvious when it applies.
2 IYI As mentioned, the rhetorical aim here is to rig the discussion so that it’s not grotesquely reductive but is simple and clear enough to be followable even if you’ve had no college math. It’s true that it would be nice if you’ve had some college math, but please rest assured that considerable pains have been taken and infelicities permitted in order to make sure it’s not required.
3 In fact there was a big row in European math over which one of them really invented it, specifically over whether Leibniz, whose first calc-related paper was in 1674, had plagiarized Newton, whose De analysi per aequationes numero terminorum infinitas circulated privately in 1669.
4 Sorry about the hideous syntax here; there’s no nice way to compress Roberval.
5 This is why trying to settle the credit question by saying that Newton invented differential calculus and Leibniz invented integral calculus (which some math teachers like to do) is confusing and wrong. The whole point is that N. & L. understood the Problem of Tangents (= instantaneous velocity) and the Problem of Quadratures (= areas under curves) to be two aspects of a single larger problem (= that of continuity) and thus treatable by the same general method. The whole reason N. & L. are math immortals is that they didn’t split calc up the way intro courses do.
6 Leibniz, like Descartes, being also of course a big-time philosopher, of whose ontology you may have heard terms like ‘individual substance,’ ‘transcreation,’ ‘identity of indiscernibles,’ and ‘windowless monad’.
7 IYI Some of the following might be a bit eyeglazing in the abstract, but it will make more sense shortly when we look at a simple example.
8 Surely we all hate people like this.
9 IYI these being the monads mentioned three FNs up.
10 IYI even functions involved in area problems.
11 IYI Among other Leibnizisms are ‘differential calculus,’ ‘integral calculus,’ ‘dx,’ and the good old vermiculate integral sign ‘∫,’ which latter (Gorisian factoid:) Leibniz originally meant as an enlarged S denoting “the sum of the [y-coordinates] under a curve.”
12 N.B. re possible IYI: If you’ve got a strong math background, feel free to skip the following Exhibit and gloss altogether—the simplifications may bother you more than it’s worth.
13 IYI For readers w/ strong backgrounds who nevertheless haven’t skipped all this but are noticing already that Exhibit 4a looks like a very simplified illustration of Leibniz’s “difference quotient,” and are maybe wondering why we don’t just go ahead and do his famous Characteristic Triangle, the answer is that using the C.T. would cause the problems’ explication to eat 6+ pages and subject everybody to too much calc-detail that ends up not being important.
14 Again, if E4a is treated as a rate-of-change problem, t is an infinitesimal instant. (IYI If you’ve encountered the somewhat unfashionable term infinitesimal calculus in connection with classical calc, you can now see that the term derives from the infinite tininess of quantities/durations like t.)
15 N.B. that according to Leibniz this is precisely what infinitesimals are—they’re critters you can do this with. See for example this excerpt from a letter to J. Wallis around 1690:
It is useful to consider quantities infinitely small such that when their ratio is sought, they may not be considered zero but which are rejected as often as they occur with quantities incomparably greater. Thus if we have x + [t], [t] is rejected. But it is different if we seek the difference between x + [t] and x ….
16 An even better analogy might be an experimental scientist Skewing his Data to confirm whatever hypothesis he wants confirmed.
17 IYI Newton’s examples in D.A. were messier and depended more on the Binomial Theorem, whereby an equivalence like z = rxⁿ (where r is a constant and the n may be a fraction or even negative) can be expanded to show that rxⁿ’s rate of change will always = nrxⁿ⁻¹. This is what allows for the theoretically infinite chain of higher derivatives in college math. As in the 1st derivative of, e.g., y = x⁴ is 4x³; the 2nd derivative is 12x², and so on, until any nth derivative can be found via the ratio dⁿy/dxⁿ—although you usually don’t get into anything higher than 2nd derivatives in regular calc.
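IYI A quick numerical check of FN17's nrxⁿ⁻¹ rule (Python, tolerances ours and arbitrary), including the fraction-and-negative-n cases the Binomial-Theorem expansion covers:

```python
# Checking the rule that r * x**n has rate of change n * r * x**(n-1),
# via a small symmetric difference quotient -- a stand-in for the
# infinitesimal t, not an infinitesimal itself.

def diff_quotient(f, x, t=1e-7):
    return (f(x + t) - f(x - t)) / (2 * t)

def rule_holds(r, n, x):
    numeric = diff_quotient(lambda v: r * v ** n, x)
    claimed = n * r * x ** (n - 1)
    return abs(numeric - claimed) < 1e-4

print(rule_holds(1, 4, 2.0))      # d(x^4)/dx = 4x^3
print(rule_holds(3, 0.5, 2.0))    # fractional exponent
print(rule_holds(2, -1, 2.0))     # negative exponent
```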
18 Observe, though, that you have to use the same sort of questionable accounting practices in this calculation:
(1) (x + t)² − x²
(2) which = 2xt + t²
(3) at which point you assume that t ≠ 0 and divide through to get:
(4) 2x + t, whereupon you assume that t = 0 and toss it out to get:
(5) 2x
19 Or you can get the same result by treating z as effectively equivalent to y in equation (6) of the original derivation.
20 IYI as first articulated by Leibniz in 1686.
21 IYI This is why syllabiphiles call integration ‘antidifferentiation’.
22 In this context, what the limits method really is is a metaphysical accounting trick that makes infinitude/infinitesimality a feature of the calculation process rather than of the quantities calculated. As should be evident by now, the regular laws of arithmetic don’t work on ∞-related quantities; but by basically restricting itself to partial sums through 99% of the calculation, limits-based calculus lets these rules apply. Then, once the basic calculation is completed, you ‘take the limit’ and let t or dx or whatever ‘approach 0,’ and extrapolate your result. In pedagogical terms, the math student is asked here to presume that certain quantities are finite and stable for calculation purposes but then vanishingly tiny and protean at the actual results stage. This is an intellectual contortion that makes calculus seem not just hard but bizarrely and pointlessly hard, which is one reason why Calc I is such a dreaded class.
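IYI FN22's accounting trick can be watched in action (Python; the geometric series here is just a stand-in example of a limit-type calculation): 99% of the work is ordinary finite arithmetic on partial sums, and only at the end do you ‘take the limit’ and extrapolate to the full sum.

```python
# The limits method in miniature: plain finite arithmetic on partial
# sums of 1/2 + 1/4 + 1/8 + ..., with the 'take the limit' step
# (extrapolating to the full sum, 1) saved for the very end.

def partial_sum(n):
    """Sum of the first n terms of the geometric series 1/2^k."""
    return sum(1 / 2 ** k for k in range(1, n + 1))

for n in (5, 10, 20, 50):
    print(n, partial_sum(n))   # finite, law-abiding arithmetic only

# the extrapolation step: the partial sums crowd up under 1
print(1 - partial_sum(50))     # the tiny leftover, 2**-50
```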
23 IYI (and maybe not even a good idea to mention) There is, as a matter of fact, a nontrivial way to say the same thing, but it involves nonstandard analysis, which is the invention of one A. Robinson in the ’60s and professes to rigorize infinitesimals in analysis via the use of hyperreal numbers, which themselves basically combine the real numbers and Cantorian transfinites—meaning the whole thing’s heavily set-theoretic and Cantor-dependent, plus controversial, and wildly technical, and well beyond this discussion’s limits … but nongrotesqueness appears to require at least mentioning it, and maybe commending w/r/t any burning further interest on your part Prof. Abraham Robinson’s Nonstandard Analysis, Princeton U. Press, 1996.
24 This probably needs to be explained instead of just asserted over and over. In classical calculus, continuity is treated as essentially a property of functions, and as more or less a package deal with differentiability: strictly speaking only one direction holds (if a function is differentiable at some point p, it’s continuous at p), but classical analysts tended to assume the converse, i.e. that a continuous function would be differentiable everywhere except maybe a few isolated points. This is why Bolzano and Weierstrass’s finding those continuous but nondifferentiable functions in the 1800s will be such a big deal, and why modern analysis’s theory of continuity is now a lot more complicated.
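What such a continuous-but-nondifferentiable function looks like computationally: a truncated Weierstrass-type sum W(x) = Σ aⁿcos(bⁿπx). A sketch in Python, with a = 0.5, b = 7, and the 30-term truncation all being arbitrary illustrative choices; evaluating at rational x lets the huge angles be reduced mod 2 exactly, so even the high-frequency terms stay accurate:

```python
# A truncated Weierstrass-type function: continuous everywhere but
# (in the untruncated limit) differentiable nowhere. The parameters
# a = 0.5, b = 7, terms = 30 are arbitrary illustrative choices.
import math
from fractions import Fraction

def W(x, a=0.5, b=7, terms=30):
    total = 0.0
    for n in range(terms):
        r = (b ** n * x) % 2  # exact angle reduction: x is a Fraction
        total += a ** n * math.cos(math.pi * float(r))
    return total

x = Fraction(1, 10)
for k in (2, 4, 6, 8):
    h = Fraction(1, 7 ** k)
    q = (W(x + h) - W(x)) / float(h)
    print(k, q)  # the difference quotients swing ever wider: no limit, hence no derivative
```

Meanwhile W(x + h) − W(x) itself shrinks as h does, which is continuity; it’s only the quotients that refuse to settle.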
25 IYI meaning mainly the two J. Bernoullis and J1’s son D., plus G. F. A. l’Hôpital, who’d been patron to one of the J.s (the Bernoullis are hard to keep straight), all flourishing in, say, the early 1700s.
26 IYI = primarily the U.K.’s E. Halley, B. Taylor, and C. Maclaurin, also early 1700s.
27 IYI It so happens that Bishop G. Berkeley (1685–1753, major empiricist philosopher and Christian apologist (and a world-class pleonast)) has a famous critique of classical calc along just these lines in an eighteenth-century tract whose 64- (yes, 64-) word title starts with “The Analyst ….” A representative snippet being:
Nothing is easier than to devise expressions or notations for fluxions and infinitesimals…. But if we remove the veil and look underneath, if, laying aside the expressions, we set ourselves attentively to consider the things themselves which are supposed to be expressed or marked thereby, we shall discover much emptiness, darkness, and confusion; nay, if I mistake not, direct impossibilities and contradictions.
Berkeley’s broadside is in some ways Christianity’s return-raspberry to Galileo and modern science (and it’s actually great cranky fun to read, though that’s neither here nor there). Its overall point is that eighteenth-century math, despite its deductive pretensions, really rests on faith no less than religion does, i.e. that “[H]e who can digest a second or third fluxion, a second or third [derivative], need not, methinks, be squeamish about any point in divinity.”
On the other hand, M. J. l.R. d’Alembert (1717–1783, big post-calc mathematician and all-around intellectual, plus one of the first proponents of the idea that “the true metaphysics of the calculus is to be found in the idea of a limit”) objects to infinitesimals on wholly logical LEM grounds in the famous Encyclopédie he coedited with D. Diderot in the 1760s, as in e.g.: “A quantity is something or nothing; if it is something, it has not yet vanished; if it is nothing, it has literally vanished. The supposition that there is an intermediate state between these two is a chimera.”
28 IYI The Arrow, like the Dichotomy, gets discussed in Book VI of Aristotle’s Physics; it also appears in fragmentary form in Diogenes Laërtius’s Lives and Opinions.
29 IYI It goes without saying that the compatibility of this claim with Aristotle’s time-series objections to the Dichotomy in §2b is somewhat dubious. There are ways to reconcile A.’s two arguments, but they’re very complicated, and < 100% convincing—and anyway that’s hardcore Aristotle-scholar stuff and well outside our purview.
30 IYI Feel free to review §2a’s harangue about applying formulas v. truly solving problems, which applies here in spades.
31 IYI Regarding d’Alembert’s objection to infinitesimals in FN 27 supra, it is just this third possibility that makes his argument unsound.
32 IYI Here’s a nice example of where some horizontal early-morning abstract thinking can really pay off. Once we’re up and about and using our words, it’s almost impossible to think about what they really mean.
33 IYI Observe, please, that this is not at all the same as Aristotle’s objection to ‘Is the arrow … instant t?’ What he thinks is really incoherent is premise (2)’s idea that time can be composed of infinitesimal instants, which is an argument about temporal continuity, under which interpretation the Arrow can be solved with a simple calc formula. As you’ve probably begun to see, Aristotle manages to be sort of grandly and breathtakingly wrong, always and everywhere, when it comes to ∞.
34 (not 0)
35 which solution, though 100% technical, at least has the advantage of recognizing that motion-at-an-instant is a concept that always involves more than one instant.
36 IYI As we’ll see in §5, what Weierstrass basically does is figure out how to define limits in a way that eliminates the ‘tends to’ or ‘gradually approaches’ stuff. Expressions like these had proved susceptible to Zenoid confusions about space and time (as in ‘approaches from where?’ ‘how fast?’ etc.), besides being just generally murky.
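What the Weierstrass-style definition looks like in static form: lim f(x) = L at c means that for every ε > 0 there is a δ > 0 such that 0 < |x − c| < δ guarantees |f(x) − L| < ε. No approaching, no tending; just a quantified condition you can check. A sketch for f(x) = x² at c = 3, L = 9, where the choice δ = ε/7 (an illustrative assumption, valid for ε ≤ 1) works because |x² − 9| = |x − 3|·|x + 3| < δ·7:

```python
# Checking a Weierstrass-style epsilon-delta condition by brute
# sampling: for f(x) = x^2 at c = 3, L = 9, the choice delta = eps/7
# suffices whenever eps <= 1, since |x^2 - 9| = |x - 3|*|x + 3| and
# |x + 3| < 7 on the sampled interval.
def delta_for(eps):
    return eps / 7.0

def condition_holds(eps, samples=10000):
    d = delta_for(eps)
    for i in range(1, samples):
        x = 3.0 - d + (2 * d) * i / samples  # sample the punctured interval
        if x != 3.0 and abs(x * x - 9.0) >= eps:
            return False
    return True

print(all(condition_holds(e) for e in (1.0, 0.1, 0.001)))  # True
```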
37 This is the rub, and why the relation between infinitesimals and Zenotype divisibility is sort of like that between chemo and cancer. The thing about quantities that are less than any finite quantity but still greater than 0 is that you cannot get to them by dividing over and over and over again—the same way you can’t get to a transfinite number by adding or multiplying finite numbers. ∞ and 1/∞ are uniquely exempt from all the paradoxes of infinite subdivision and expansion … even though they are in a sense the very embodiments of those paradoxes. So the whole thing is just very strange.
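The can’t-get-there-by-dividing point can be made concrete: halve 1 any finite number of times and you are at 1/2ⁿ, a perfectly ordinary positive rational. A sketch with exact rational arithmetic (so floating-point underflow doesn’t fake a ‘vanishing’):

```python
# Divide 1 in half as many (finite) times as you like and the result
# is 1/2^n -- still an ordinary positive rational, never an
# infinitesimal. Exact rational arithmetic avoids the floating-point
# underflow that would misleadingly round the value down to 0.
from fractions import Fraction

q = Fraction(1)
for n in range(5000):
    q = q / 2
print(q > 0, q == Fraction(1, 2 ** 5000))  # True True
```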
38 meaning interval in time, in space, or on the Real Line—all three are continuity’s turf.