$\begingroup$

I think most people believe that mathematical truths are logically necessary. The fact that $\sqrt{2}$ is irrational doesn't depend on who proved it, when they proved it, whether they liked it, or any other contingent historical facts.

However, which mathematical structures and theorems we choose to investigate, build other mathematical structures on, and use in formulating physical theories, could be historically contingent. For example, here are some ways that one could argue mathematics might have turned out differently.

  1. The so-called "foundational crisis" in the early 20th century might have been "won" by a different side. For instance, if intuitionism/constructivism had been victorious, modern-day mathematics might use constructive logic rather than classical logic by default.

  2. A more rigorous justification of infinitesimal numbers, along the lines of non-standard analysis, might have been noticed earlier. In this case, it might have had a chance of becoming the "usual" foundations of calculus and analysis in place of Cauchy's $\epsilon$-$\delta$.

  3. Joel David Hamkins has argued that one possible version of "early infinitesimals" could also have led to the continuum hypothesis being accepted as a fundamental axiom of set theory.

In any of these cases one can argue about how likely such an alternate history is, but at least one can imagine it happening (for instance, in a science fiction story).

What are some other ways in which it's possible to imagine that mathematics might have turned out differently?

I'm not interested in "cosmetic" differences such as the widespread use of base 10 numerals or the notation "$=$" for equality. I'm also not really interested in tweaks to the structure of definitions, like whether a "ring" has a multiplicative identity or whether a "manifold" has a boundary, unless they can be argued to go along with some more significant difference -- I'm interested in differences in which mathematical objects are studied, not in what we call them.

Edit: I am not interested in hypothetical mathematics, e.g. "if X currently-open problem had been solved long ago, then ...". I'm only interested in mathematics that we know now is true and correct, but which might have ended up playing a different role in the culture of mathematics for contingent historical reasons. See my examples: we know now that constructive mathematics is meaningful and that non-standard analysis is rigorous. I'm also not interested in hypotheticals about how the nature of mathematics could be different, e.g. if standards of proof were different.

Edit 2: I'm also not interested in hypothetical non-mathematics, e.g. "if X currently-solved problem hadn't been solved at all, then...". The scenarios I'm interested in are alternate worlds in which more or less all the same truths are known at the present day, but different axioms or formalizations are preferred for historical reasons. If in the real world theorem X was proven before theorem Y, but it could have been the other way around, then mathematics would certainly have looked somewhat different in the interregnum while Y was known but not X; but I would only consider that an answer to the question if you can argue that the inversion of order would cause a lasting effect on mathematics continuing after X was eventually also proven.

$\endgroup$
  • 10
    $\begingroup$ Perhaps non-Euclidean geometry could have been discovered much earlier, and been treated on an equal footing with Euclidean geometry? $\endgroup$ Commented Jul 9, 2025 at 21:14
  • 28
    $\begingroup$ Okay, here is a much more speculative suggestion. In the short story "Story of Your Life" by Ted Chiang (which the movie "Arrival" is based on) there is an alien species called heptapods who experience time differently than humans do. In particular this leads them to formulate basic physical laws not in terms of differential equations, as humans have, but in terms of least energy (or least distance/least time) principles. We can imagine heptapod math might have developed differently as a result, with e.g. more focus on calculus of variations. $\endgroup$ Commented Jul 9, 2025 at 21:19
  • 8
    $\begingroup$ Imagine if math has been completely bourbakized, what a nightmare that would have been. $\endgroup$ Commented Jul 9, 2025 at 21:56
  • 35
    $\begingroup$ SF scenario: Grothendieck stays in analysis in the late 1940's-early 1950's. By now we have a working definition of functional integrals, and rigorous QFT, and understand PDEs much better, but algebraic geometry still uses Weil's foundations. $\endgroup$ Commented Jul 9, 2025 at 23:12
  • 11
    $\begingroup$ While I think this is a fine question in the abstract, I think the answers/comments need to be policed heavily lest we get a bunch of half-baked speculations from people with little experience with serious mathematics. Somehow these sorts of semi-philosophical questions attract people who are just dying to tell us their "theories". $\endgroup$ Commented Jul 10, 2025 at 1:23

13 Answers

$\begingroup$

I think the notion of topological space could very easily not have existed, or at least have been a much more marginal notion.

Of course, there would probably still have been some notions of "space" and "continuous function". But there are many alternatives to our traditional concept of topology that could have been used instead, and I feel like once one of these becomes standard there are no strong incentives to really develop another.

  • There are many non-equivalent point-set notions of space, like convergence spaces, condensed/pyknotic sets, sequential spaces, compactological spaces, the sort of compactly generated spaces used to get convenient categories of spaces, etc...

  • There could be no "general" notion of space, but many more specific notions used in different contexts (manifolds, analytic sets, CW-complexes, pro-discrete spaces, ...)

  • A philosophical aversion to considering the continuum as a collection of isolated points could have led to something like our pointfree topology (where open subspaces and a notion of locality are taken as primitive, independently of an underlying set of points) being introduced before point-set topology and becoming the dominant notion.

  • There are probably some other alternatives we haven't even considered.

$\endgroup$
  • 4
    $\begingroup$ Further thoughts on pointfree topology as an alternative may be found in this MO answer. $\endgroup$ Commented Jul 10, 2025 at 12:10
  • 2
    $\begingroup$ This is a really interesting suggestion! I'm curious if anyone knows, historically, how it came to be that open-set-based topological spaces dominated. Was it entirely an accident, or were there reasons that at least seemed good to people at the time? $\endgroup$ Commented Jul 10, 2025 at 16:44
  • 5
    $\begingroup$ @Mike Shulman: For how the open set formulation of topological spaces came to dominate, see The emergence of open sets, closed sets, and limit points in analysis and topology by Gregory H. Moore (2008; especially Section 14, but note all the prior formulations discussed, not all of which by the way are actual "logical equivalences" of what is now called a topological space) and the mathoverflow question Why is a topology made up of 'open' sets? $\endgroup$ Commented Jul 10, 2025 at 17:18
  • 1
    $\begingroup$ +1, since this also answers “how would you like mathematics to be different?” $\endgroup$ Commented Jul 12, 2025 at 13:56
  • 2
    $\begingroup$ Obsessing over the "right" notion of topological space is pointless since (almost) no one actually studies topological spaces. Rather, they study much more geometrically rich and structured objects that happen to be topological spaces, which gives a convenient minimal language to talk about continuity and related notions. You have to choose some such language to get going, but once you start doing real work the point-set details become almost irrelevant. I guess for the purposes of communication it's nice that everyone uses the same formalism. $\endgroup$ Commented Jul 14, 2025 at 19:15
$\begingroup$

We can only speculate about how our modern mathematics could be different. But there is at least one case which can be studied "empirically": a historical example of two different ways mathematics can develop.

Namely, the difference between Greek and Babylonian mathematics. They developed in approximately the same epoch in two very different civilizations. While early Greek mathematics concentrated on geometry and rigorous proofs, Babylonian mathematics concentrated on computations and algorithms. In particular, the Babylonians invented a positional number system, including a notation for zero, but did not invent the axiomatic method, calculus, or advanced geometry.

One of the main applications of mathematics in both civilizations was astronomy, and here we have much more documented material. There is a well-developed Babylonian mathematical astronomy which is completely different from what we know from Greece. Despite the substantial interaction between the two after 300 BC, Babylonian astronomy survived as a separate science until the 19th century (in Tamil astronomy in India). There is an enormous difference in how the two systems treated the same phenomena, for example lunar and solar eclipses. And both systems were successful to some extent; one probably cannot say that Babylonian methods were much inferior to Ptolemy's in predictive power.

Refs. O. Neugebauer, The exact sciences in antiquity, Princeton, 1962.

B. L. van der Waerden, Ontwakende Wetenschap (Science awakening), Groningen 1950, there are English translations.

$\endgroup$
  • 1
    $\begingroup$ Can you expand on the "enormous differences", maybe in the specific example of eclipses? $\endgroup$ Commented Jul 11, 2025 at 11:38
  • 3
    $\begingroup$ @Alex Kruckman: To the best of our knowledge, the Babylonians did not have any geometric picture of how and why eclipses occur. They treated a sequence of phenomena like a time series in statistics, trying to find numerical patterns. Of course, this is only what we can infer from the clay tablets that have been found. We don't have a Babylonian treatise similar to Ptolemy's from which to judge their methods. $\endgroup$ Commented Jul 11, 2025 at 14:11
  • $\begingroup$ This is interesting but I don't think it answers the question. $\endgroup$ Commented Jul 11, 2025 at 19:49
  • 2
    $\begingroup$ @Michael Bachtold: Yes, to some extent. But the Greek point of view plays a much more prominent role. In fact already Ptolemy incorporates both points of view to some extent. There is a large literature on the influence of Babylonian astronomy on Hipparchus. $\endgroup$ Commented Jul 14, 2025 at 11:54
  • 3
    $\begingroup$ @Michael Bachtold: Not even mentioning that we all measure time and angles in Babylonian way. $\endgroup$ Commented Jul 14, 2025 at 11:57
$\begingroup$

Nowadays, we associate mathematical rigor with symbolic/algebraic reasoning. Geometric reasoning is considered rigorous only insofar as it can be backed up with symbolic/algebraic reasoning, and "picture proofs" are regarded with suspicion.

However, one can imagine an alternate universe in which the script is flipped. After all, for centuries, the paradigmatic example of mathematical rigor was Euclidean geometry. In A formal system for Euclid's Elements, by Avigad, Dean, and Mumma, it is shown that the diagrammatic reasoning in Euclid can be made completely rigorous. So perhaps one can imagine a world in which geometry continues to be the ultimate foundation of mathematics, with increasingly strong logical systems being characterized by increasingly strong geometric axioms, rather than increasingly strong set-theoretic axioms.

$\endgroup$
  • 2
    $\begingroup$ Can you give any examples of such "increasingly strong geometric axioms"? $\endgroup$ Commented Jul 10, 2025 at 1:14
  • 1
    $\begingroup$ Perhaps instead of "formal system" I should say "foundational system". My suggestion is that in a world where mathematical rigour is associated with geometry, it seems at least plausible that our foundations would also be different. (I don't intend to criticise your answer, but rather just raise a possibility.) $\endgroup$ Commented Jul 10, 2025 at 1:34
  • 7
    $\begingroup$ Your first paragraph is true of a lot of mathematicians, but definitely not all of them. Indeed, in eg geometric topology and geometric group theory, people are totally fine with proofs by pictures. I even sometimes hear people argue that pictorial proofs are better than symbolic/algebraic ones. $\endgroup$ Commented Jul 10, 2025 at 4:05
  • 1
    $\begingroup$ I for one am not convinced that the suspicion surrounding “picture proofs” is merely historically contingent. No doubt, we can gain tremendous insight into an argument through visual aids—the human visual system is remarkable—*but* our visual system is easily tricked as demonstrated by innumerable optical illusions. Moreover, in my experience, when a “picture proof” is especially challenging to formalize, this is often a sign that there are edge cases I had not considered and might not have on the basis of the visual reasoning alone. Perhaps we have sound reason to be suspicious. $\endgroup$ Commented Jul 10, 2025 at 9:29
  • 2
    $\begingroup$ @MattF. I agree that what they showed was how to make the diagrammatic reasoning rigorous, given our current ideas about rigor, and that they did not establish (nor were they attempting to establish) an alternative way to think about rigor. I'm just saying that, given that the bottom line is "the diagrammatic reasoning of the Elements is rigorous," that opens the door to an alternative history in which some kind of directly geometric reasoning is accepted as rigorous, and developed fully. $\endgroup$ Commented Jul 10, 2025 at 14:47
$\begingroup$

Joe Shipman has argued for an alternative history in which people came to accept the existence of a countably additive "measure" on all subsets of the reals. This "measure" would not be translation-invariant, of course. But the axiom has some nice consequences, including strong Fubini theorems. In this hypothetical world, the negation of the continuum hypothesis would be regarded as a theorem.

I'm not sure how plausible I think Joe Shipman's scenario is, but there are other scenarios surrounding the axiom of choice that one could imagine. People could have come to accept "all sets are Lebesgue measurable" along with the axiom of dependent choice. In this scenario, measure theory would better match probabilistic intuition (there would be no Banach–Tarski paradox, for example, because full AC would be rejected).

$\endgroup$
  • 8
    $\begingroup$ In Shipman's world, the negation of the continuum hypothesis would be a theorem. In fact the cardinality of the continuum would be greater than or equal to the first weakly inaccessible cardinal. $\endgroup$ Commented Jul 10, 2025 at 0:38
  • $\begingroup$ @AndreasBlass Oops, of course you are correct! $\endgroup$ Commented Jul 10, 2025 at 0:43
  • $\begingroup$ 'the existence of a countably additive "measure" on all subsets of the reals.' -- Is the non-atomicity condition missing here? Otherwise, consider e.g. a Dirac measure. $\endgroup$ Commented Jul 10, 2025 at 2:31
  • 2
    $\begingroup$ @IosifPinelis Yes, the scare quotes were my way of sweeping some details under the rug. Another way to put it is that the measure should extend Lebesgue measure. $\endgroup$ Commented Jul 10, 2025 at 2:46
  • 2
    $\begingroup$ I think people would not have developed a taste for countable additivity without tasting its fruits in a more concrete setting first. And then one would naturally want to have a concrete version that could be specified by something like a CDF. And we are back to the status quo. $\endgroup$ Commented Jul 10, 2025 at 5:39
$\begingroup$

Descriptive set theory was essentially born in 1917 because Souslin noticed a mistake in a 1905 paper of Lebesgue. To be more precise, Lebesgue "proved" that the projection to $\mathbb R$ of a Borel set in $\mathbb R^2$ is Borel (his mistake was in assuming that decreasing countable intersections commute with projections).
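
A concrete illustration of the failure (my example, not Souslin's): write $\pi$ for projection onto the first coordinate and take the decreasing sets $A_n = \mathbb{R} \times [n, \infty)$ in $\mathbb{R}^2$. Then

```latex
\[
\pi\Bigl(\bigcap_{n\ge 1} A_n\Bigr) \;=\; \pi(\varnothing) \;=\; \varnothing,
\qquad\text{while}\qquad
\bigcap_{n\ge 1} \pi(A_n) \;=\; \bigcap_{n\ge 1} \mathbb{R} \;=\; \mathbb{R},
\]
```

so projecting and then intersecting can give a strictly larger set than intersecting and then projecting.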

It's hard to guess exactly what would have happened if Lebesgue had noticed this issue himself, but the whole field of descriptive set theory could have developed in a different way from the one we know.

$\endgroup$
$\begingroup$

This has been talked about before on MathOverflow, but I suspect that there's a fair amount of historical contingency in the handling of size issues in category theory. In particular, I could imagine an alternate history in which the amount of effort put into addressing size issues is roughly the same as it is now but in which the communities of fields that regularly use inaccessible cardinals (i.e., Grothendieck universes) are willing to confidently assert that their results go through in $\mathsf{ZFC}$.

Part of the reason I suspect this is that this is essentially the case with size issues in model theory. Model theory becomes moderately easier under the assumption of the existence of many inaccessible cardinals, because it allows one to build saturated models of arbitrary theories. Despite this, model theorists culturally do not like assuming the existence of inaccessible cardinals and instead (sometimes implicitly) rely on certain approximations of inaccessible cardinals (and occasionally a more specific approximation of saturated models called 'special models').

The way I would envision this working is that (analogously to the situation in model theory), one would pick some cardinal $\kappa$ that one is reasonably sure is going to be larger than all of the small objects one actually cares about and then work in categories derived from or closely related to the category $\def\Set{\mathrm{Set}}\Set_{\beth_{\kappa^+}}$ of sets of size less than $\beth_{\kappa^+}$, which is closed under limits and colimits of size at most $\kappa$ (since $\kappa^+$ is a regular cardinal and $\beth_{\kappa^+}$ is a strong limit cardinal). If one finds that a bigger 'little universe' is needed, one could then pass to the category of sets of size less than $\beth_{(\beth_{\kappa^+})^+}$, then of size less than $\beth_{(\beth_{(\beth_{\kappa^+})^+})^+}$, and so on.
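
To spell out the closure claim (a sketch of mine, not part of the original answer): given a diagram of size at most $\kappa$ in $\mathrm{Set}_{\beth_{\kappa^+}}$, regularity of $\kappa^+$ gives a single $\alpha < \kappa^+$ such that every object has size at most $\beth_\alpha$, and we may take $\beth_\alpha \ge \kappa$. Since a colimit is a quotient of the disjoint union and a limit is a subset of the product,

```latex
\[
\Bigl|\operatorname{colim}_{i} X_i\Bigr|
  \le \sum_{i<\kappa} |X_i|
  \le \kappa \cdot \beth_\alpha
  = \beth_\alpha
  < \beth_{\kappa^+},
\]
\[
\Bigl|\lim_{i} X_i\Bigr|
  \le \prod_{i<\kappa} |X_i|
  \le (\beth_\alpha)^{\kappa}
  \le \bigl(2^{\beth_\alpha}\bigr)^{\kappa}
  = 2^{\beth_\alpha \cdot \kappa}
  = \beth_{\alpha+1}
  < \beth_{\kappa^+},
\]
```

so both stay inside $\mathrm{Set}_{\beth_{\kappa^+}}$.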

Obviously, just like with model theory, there is an (in my opinion pretty small) amount of extra conceptual burden here, in that $\Set_{\beth_{\kappa^+}}$ has two notions of 'smallness' (i.e., $\leq \kappa$ and $< \beth_{\kappa^+}$) rather than the single notion of smallness one gets with an inaccessible cardinal. And I think that on some level this explains the cultural difference with regard to size issues between model theory and fields outside of logic. Many of the people working in model theory in the early days were pretty familiar with set theory and would have been fine with the occasional cardinal arithmetic calculation needed to make the above kind of reasoning work carefully, and (in part because of this) most model theorists today are comfortable handwaving these kinds of size issues entirely while asserting that their theorems are theorems of $\mathsf{ZFC}$, even if they aren't particularly comfortable with set theory. This seems analogous to me to the way many people often handwave the precise bookkeeping of universes entirely, which is why I feel like the assertion in my first paragraph is reasonable.


Now, it should be said that I also don't think that this matters that much. I frequently use much larger large cardinals in my own papers. And while I do think that $\mathsf{ZFC}$ is a more natural theory than people seem to sometimes think, I also don't think its particular consistency strength is all that special.

In principle, one might be worried about 'universe creep' (i.e., the gradual increase in the number of universes one regularly assumes) as a result of building on earlier work. But in the 95 years since Sierpiński and Tarski and separately Zermelo wrote down the definition of (strongly) inaccessible cardinals, the extent of this creep in mathematics outside of set theory seems to be at most $3$ to $5$ universes. (If I recall correctly, somewhere in the work of Jacob Lurie he explicitly assumes $3$ universes. Moreover, in a recent conversation on the Lean Zulip, Mario Carneiro calculated the maximum number of universes actually used in Mathlib and it was somewhere between $3$ and $5$ depending on how one counts. I also just don't think I've ever seen the explicit assumption of an explicit finite number of universes greater than $3$ in a paper.) As someone who regularly interacts with set theorists, this is an amusingly weak assumption to me given that even a Mahlo cardinal (i.e., an inaccessible cardinal $\kappa$ such that the set of inaccessible cardinals smaller than $\kappa$ is stationary) is regarded by set theorists as puny and the first 'actually big' large cardinal is regarded as being something like a measurable.

$\endgroup$
  • 1
    $\begingroup$ This is different than your point about category theory, but your answer also makes me curious whether you think the "standard" set theory might plausibly have ended up having a different consistency strength than ZFC? $\endgroup$ Commented Jul 10, 2025 at 16:51
  • 2
    $\begingroup$ Perhaps connected with this are the Bourbaki seminars of the early 60s where universes and galaxies were discussed. The latter are smaller and exist in ZFC (and are more-or-less due to Samuel), and one could imagine that if Grothendieck and Bourbaki had been more sensitive to the relative consistency issues and pushed for these, we would be in the situation you outline, with SGA4 introducing galaxies instead of universes. $\endgroup$ Commented Jul 11, 2025 at 2:25
  • $\begingroup$ @DavidRoberts If a "galaxy" corresponds to a strong limit cardinal as suggested at your link, I'm a bit dubious that it would end up sufficing in place of a universe. I'd be more convinced by an argument for a worldly cardinal (which also exist in ZFC). $\endgroup$ Commented Jul 14, 2025 at 16:39
  • $\begingroup$ @MikeShulman Regarding your first question, I'm actually not sure. I regard all of the axioms of ZFC other than foundation, choice, replacement, and maybe power set as pretty fundamental to the intuitive concept of sets. Of these only replacement and power set change consistency strength and they're both pretty heavily involved in what was going on in early set theory. (For instance, Cantor wrote down $\aleph_\omega$ well before ZFC was formalized.) $\endgroup$ Commented Jul 14, 2025 at 18:53
  • 1
    $\begingroup$ @MikeShulman I'm not saying a galaxy would work in place of a universe, but the culture of just assuming a universe or two, rather than looking very carefully at cardinal bounds, is not inevitable. I was just trying to point at actual historical evidence that shows James' idea was a real possibility, but the dice fell out for just using universes. Colin McLarty is very clear in his talks/papers that for Grothendieck, universes were just a convenience, and not an indispensable aspect of his work. $\endgroup$ Commented Jul 14, 2025 at 23:48
$\begingroup$

If computer science and computability theory had been a thing much earlier, we might use computable numbers instead of the reals in many places, since they have the advantage of forming a countable set containing all "interesting" reals, like roots, $\pi$, $e$, and so on.
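
As a sketch of what "working with computable numbers" might look like in practice (my illustration, not part of the original answer): a computable real can be represented by a program that, given $n$, outputs a rational within $2^{-n}$ of the true value, and operations then act on such programs.

```python
from fractions import Fraction

# A computable real is modelled as a function that, given n, returns
# a rational within 2**-n of the true value.

def sqrt2(n):
    # Bisection on rationals: shrink [lo, hi] around sqrt(2) until the
    # interval is no wider than 2**-n.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

def add(x, y):
    # Sum of two computable reals: query each argument at one extra bit
    # of precision so the combined error stays below 2**-n.
    return lambda n: x(n + 1) + y(n + 1)

two_sqrt2 = add(sqrt2, sqrt2)(20)  # rational approximation of 2*sqrt(2)
```

A frequently cited obstacle to making such representations the default is that equality of computable reals presented this way is undecidable, so even basic comparisons must be handled approximately.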

$\endgroup$
  • 2
    $\begingroup$ Is it not more or less the same alternative as constructivism? $\endgroup$ Commented Jul 10, 2025 at 7:34
  • 9
    $\begingroup$ @FedorPetrov Not really. One is about dropping axioms like AC or excluded middle, the other about changing what you think is the important object of study from the set of real numbers to the set of computable real numbers. The only link between the two is that without excluded middle the statement "every real is computable" becomes consistent. But you can believe excluded middle holds and that computable reals and computable functions are more important than general ones, or not believe in excluded middle and still think that non-computable reals exist and are important. $\endgroup$ Commented Jul 10, 2025 at 8:08
  • 3
    $\begingroup$ @SimonHenry But a philosophical paradigm (like constructivism) precedes an axiomatic system. Thus, if we were thinking of the reals as computable reals, the mainstream axioms could also likely be different. $\endgroup$ Commented Jul 10, 2025 at 9:49
  • 1
    $\begingroup$ @FedorPetrov A plausible axiomatic system for "computable mathematics" is $\mathsf{RCA}_0$, which is based on classical logic (in particular, the law of the excluded middle), and therefore different from, say, Heyting arithmetic, which is arguably closer to what people mean by "constructivism." $\endgroup$ Commented Jul 10, 2025 at 12:14
  • 2
    $\begingroup$ I don't think there is a real difference between logical rules and axioms (for example, some systems like type theory don't have this distinction at all), so I'm not sure I agree with the beginning of your sentence (unless, when you say that constructivism is a philosophical paradigm, you are referring to the fact that there are various flavours of constructivism). In any case my point can be phrased without talking about axioms: you can talk about "computable" things in classical ZFC set theory, and you can work in a constructive system without the assumption that everything is computable. $\endgroup$ Commented Jul 10, 2025 at 13:00
$\begingroup$

Modern calculus as we know it is all about real numbers and the functions between them. However, in the times of Newton and Leibniz, it was all about the change of quantities in relation to the change of other quantities, often involving geometry. (Interestingly, this viewpoint is still quite prevalent in the way physicists understand their calculations.) It is only in the centuries afterwards that the viewpoint shifted.

I would attribute a good part of that shift to people having developed a good understanding of numbers in medieval times, through countless classic texts on calculation and elementary algebra and, crucially, through the adoption of the Arabic numeral system, allowing for things like decimal points in place of fractions. Another simple but crucial idea was the Cartesian coordinate system, which turned geometry problems into number problems.

Now people often argue that Archimedes was another founder of calculus. He certainly had a clear understanding of integration, in particular in terms of volume and area. So an alternative history in which he or one of his contemporaries also discovers differentiation and the fundamental theorem is not too far-fetched.

But since algebra and a flexible number system were missing, this inverted order of discovery would have led to centuries more of geometric, "quantity"-focused development of calculus. People might even have come up against the limits of that approach (e.g. the questions of differentiability and "what is a function") in a completely different way and solved them using different tools.

$\endgroup$
  • 7
    $\begingroup$ By the way, the Arabic number system originated in India, hence why numbers in Arabic are still read from left to right, although Arabic text itself is read from right to left. $\endgroup$ Commented Jul 11, 2025 at 1:55
  • 1
    $\begingroup$ @HollisWilliams oh, that's really cool! $\endgroup$ Commented Jul 11, 2025 at 2:17
  • $\begingroup$ @User2021 In principle, sure. $\endgroup$ Commented Jul 11, 2025 at 8:01
  • 1
    $\begingroup$ @User2021 As far as I know, Roman numerals had some support for fractions, but not with arbitrary precision, so one would need to first extend that to do the modern type of calculus on real numbers. But my point was that calculus can also be done geometrically, without numbers at all. I am no mathematical historian, but as far as I understand, ancient geometers didn't really think of lengths and their ratios as numbers (which are for counting discrete things), but rather as different objects that just happen to be expressible as numbers sometimes (e.g. AB being as long as two copies of CD). $\endgroup$ Commented Jul 11, 2025 at 8:57
$\begingroup$

The solution to FLT was historically contingent. If Frey had not made his observation about a counterexample to FLT leading to a possible counterexample to modularity of elliptic curves over $\mathbf Q$, which was put into a precise form by Serre and then proved by Ribet, Wiles would not have been inspired to spend $7$ years trying to prove modularity. In the BBC documentary on FLT, Coates said the general feeling in the 1980s was that modularity of elliptic curves over $\mathbf Q$ would not be proved “in our lifetime”.

So, arguably, without Frey's observation linking a counterexample to FLT with unusual elliptic curves, there would probably have been no solution to FLT in the $1990$s, which would have made the announcement in $2012$ about a solution to the $abc$ conjecture take on an additional significance, because the $abc$ conjecture implies FLT for all large exponents (nonconstructively), making the interest in a more explicit form of $abc$ more compelling.

[Edit] Another example is the Weil conjectures as the reason for Grothendieck's work in algebraic geometry. Logically speaking you don't need the Weil conjectures in order to develop modern algebraic geometry, but it is nevertheless true that solving those conjectures was the long-term goal inspiring Grothendieck (just as FLT was for Wiles). The comment by @algori to Shulman's post imagines a hypothetical past where Grothendieck never worked in algebraic geometry. If there had been no Weil conjectures, then that SF past would have been much more likely, which would have made some areas of math very different today.

$\endgroup$
  • 10
    $\begingroup$ I interpreted Shulman as asking for "large-scale" alternative histories. If we're talking about individual theorems then there are many candidates. I've heard it suggested that if Donaldson's work had preceded Freedman's proof of 4D Poincare, then 4D Poincare might have taken much longer to be resolved, because people might very well have conjectured that it was false. For example, 4D Poincare would imply that $\mathbb{R}^n$ has nonequivalent differentiable structures if and only if $n=4$, which is obviously ridiculous. :-) $\endgroup$ Commented Jul 10, 2025 at 3:01
  • $\begingroup$ @TimothyChow Shulman wrote “which mathematical structures and theorems we choose to investigate could be historically contingent”. Modularity of elliptic curves over $\mathbf Q$ was an important goal on its own, but it was the link to FLT that was the catalyst for Wiles to work on modularity (FLT has no logical role in that work except as an historically interesting corollary). The methods introduced in that work had more uses, e.g., proving the Sato-Tate conjecture, which otherwise would likely still be open. I now see Mike’s Edit 2, but I think it wasn’t there when posting my answer. Was it? $\endgroup$ Commented Jul 10, 2025 at 4:00
  • 11
    $\begingroup$ An amusing comment I heard in grad school was that in an alternate universe where the importance of modularity had been enough motivation to get it solved on its own merits without the input from Frey, one can imagine someone coming along years after modularity was a known result with a one-page proof of FLT based on already known mathematics, surprising everyone. $\endgroup$ Commented Jul 10, 2025 at 4:09
  • $\begingroup$ Yes, Edit 2 postdates your answer. Sorry that I didn't also comment on your answer -- this sort of thing is interesting, but not really what I'm going for. $\endgroup$ Commented Jul 10, 2025 at 4:36
$\begingroup$

KConrad mentioned FLT in an answer, in the context of the late 20th century. But its rôle in the development of number theory, commutative algebra, and algebraic/arithmetic geometry has been, one might argue, enormous from its very first appearance. And I think it might well be that, had Fermat not for whatever reason written his famous

... cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet ("I have discovered a truly marvellous proof of this, which this margin is too narrow to contain")

the whole extraordinarily rich cascade of breakthroughs in these fields might well have been delayed by many decades, and the current shape of those domains of mathematics might well have turned out entirely different from what we have now.

$\endgroup$
7
  • 9
    $\begingroup$ And on the other hand if the margin had been big enough to contain Fermat’s proof… $\endgroup$ Commented Jul 11, 2025 at 4:03
  • 6
    $\begingroup$ If not Fermat, Euler surely would have come to this equation himself, wouldn't he? $\endgroup$ Commented Jul 11, 2025 at 8:25
  • $\begingroup$ @liuyao Let me speculate too: he would hardly have created that much suspense around it, since he was admired for so many different things... Besides, that's already a century's distance in time $\endgroup$ Commented Jul 11, 2025 at 12:02
  • 1
    $\begingroup$ @HollisWilliams The tools we used need not be the only possible tools; the folklore, as I understand it, is that maybe Fermat had some really clever tool that's sitting just outside sight for all of us. I personally agree that it's unlikely, but not impossible. $\endgroup$ Commented Jul 14, 2025 at 20:31
  • 2
    $\begingroup$ @liuyao Coming up with it is one thing, but getting interested in it is another. Gauss said that he didn't find FLT very interesting because one could easily generate tons of similar intractable questions. $\endgroup$ Commented Jul 15, 2025 at 18:37
6
$\begingroup$

I keep wondering why probability theory, measure theory, and (naive) set theory did not develop earlier, especially in comparison to algebra; consider, for example, Galois theory. When I learnt Galois theory, I was surprised how old the results were. On the other hand, when I was taught the law of large numbers for pairwise independent, identically distributed random variables, I was surprised how recently the result had been obtained. I find this especially surprising since probability theory seems closer to applications than algebra, and, unlike in the case of physics, one cannot argue so easily that one had to wait for experimental technology to catch up.

$\endgroup$
5
  • 7
    $\begingroup$ Probabilistic thinking is so second nature to us moderns that indeed it seems hard to believe humans did not always reason probabilistically, but it seems to me that in the past people genuinely did not (with the Fermat-Pascal correspondence - see en.wikipedia.org/wiki/Problem_of_points - being an important milestone in the development of this kind of thought) $\endgroup$ Commented Aug 10, 2025 at 15:25
  • 7
    $\begingroup$ Along the same lines as @SamHopkins' comment, it is funny to learn and hard to believe that d'Alembert (yes, the same d'Alembert as in d'Alembert's theorem a.k.a. the fundamental theorem of algebra and d'Alembert's equation!) thought that the probability of getting tails twice when flipping two coins is 1/3… $\endgroup$ Commented Aug 10, 2025 at 16:08
  • 1
    $\begingroup$ Thank you both, I should look into these. I vaguely remember that our middle school teacher talked about the confusion of mathematicians regarding probability centuries ago, and I found it surprising then and still do. It was probably about the Fermat-Pascal correspondence, but I am not sure. I feel tempted to conclude that it is quite difficult to put oneself into the shoes of people living that long ago, not only with respect to mundane issues but also quite abstract ones. $\endgroup$ Commented Aug 10, 2025 at 16:20
  • 4
    $\begingroup$ @SamHopkins if you teach an undergraduate probability course then I think you will quickly be disabused of the impression that probabilistic thinking comes second nature to modern humans. :) $\endgroup$ Commented Oct 17, 2025 at 0:15
  • $\begingroup$ To augment Keith Conrad's point, the continuing solvency of casinos and the successful revenue generation from state lotteries relies on probabilistic thinking not being second nature. $\endgroup$ Commented Mar 31 at 22:50
4
$\begingroup$

Somehow the notion of a function (between sets) plays an important role and has contributed to forming concepts in mathematics. Sometimes I wonder how mathematics would have evolved if we had used relations instead of functions.

Often you see proofs that, under certain assumptions, something is well defined, and then you define a function mapping something to that element. If the assumptions do not hold, you cannot continue to work. In the setting of relations, we would instead map the source element to all of the choices.

$\endgroup$
0
$\begingroup$

I would like to propose another alternate history, this time about set theory. Set theory has, so to speak, like most things, two parents: analysis and logic. The impression I got is that the developments in these areas happened quite independently. Mathematicians were able to do number theory for millennia without resorting to axioms; in the case of set theory, they did so for not even half a century. I am afraid that I do not know enough about the history of geometry before Euclid, or in other intellectual traditions, to give an exposition here. But I find it perfectly imaginable that set theory could have been practised "naïvely" for a considerably longer period than it historically was.

Suppose that we are in the present, i.e. the year 1950, set theory has been practised for the better part of two centuries, but academics are not that interested in logical questions and Gödel stuck to studying physics. Then the continuum hypothesis would be regarded more like the question of whether there are any odd perfect numbers: old and interesting, but its remaining unresolved would be no more of an embarrassment to the field than the question of odd perfect numbers is to number theory.

I would stop short of claiming that set theory could have had a naïve period quite as long as number theory's, but one should keep in mind that we have seen a dramatic increase in the effort spent on science and mathematics in the period discussed, which distorts the comparison to number theory a bit. I believe that only a few things would have had to be a little different: a greater diversity of people influencing the field at an early stage, a closer relationship to analysis, less of an obsession with attaining a complete theory, and probably a few more things which I fail to think of at the moment.

$\endgroup$
2
  • 1
    $\begingroup$ In this history, how would mathematicians have responded to the paradoxes (eg Russell’s and Burali-Forti’s)? Making this history plausible requires reorienting not just Gödel and philosophers like Frege and Russell, but also mathematicians like Hilbert and von Neumann, who actually saw plenty of motivation for the axiomatic approach. $\endgroup$ Commented Aug 10, 2025 at 23:44
  • $\begingroup$ The hypothesis would be that the notion of sets emerged, more than it did in reality, from gathering mathematical objects rather than from thinking of them as extensions of properties. And I would argue that the reason the axiomatic approach was so popular had at least as much to do with what was fashionable at the time as with the subject matter. Cantor was aware that certain ways of arguing were in conflict with each other, and he seemed to have an idea which ones to prioritise. $\endgroup$ Commented Aug 11, 2025 at 1:49
