Question (101 votes)

For a long time I have been thinking about two problems that I have not been able to solve, and it seems that one of them was recently solved. I have thought a lot about its motivation and consequences, mostly because people used to motivate the problem with some very interesting implications. My conclusion, however, is that there is a mistake in the motivation of the problem, and that, while the result is really interesting, it does not make sense in the setting in which it is formulated. As my opinion is not relevant compared to that of the experts in the area, I have not said anything.

My question is whether you can provide some examples of conjectures that were believed to be interesting in the mathematical community for a specific reason, but where, once the proof was found, people realized that the reason motivating the problem was not truly related to its solution. In other words: the solution of the problem gives no clues about the original motivation.

Comments:

  • Mathematical logic in its modern form came into being basically as a response to Hilbert's illusions (is "wishful thinking" a better term?) about the deductive power of formal systems. (Jun 3, 2014)
  • Regarding the title, do you really mean a false illusion, or rather just an "illusion"? On my reading, a false illusion would not really be an illusion at all, but real. (Jun 3, 2014)
  • So this is really about bad/outdated goals in research rather than actual mistakes? (Jun 3, 2014)
  • It would be interesting to see how the P=?NP question would be judged if it were ever settled; to me it seems a good candidate for unfulfilled expectations. (Jun 4, 2014)
  • Nice question, but I dislike the vagueness of the initial paragraph. Why not cite the example? It sounds like you could be referring to an important open problem... (Jun 6, 2014)

8 Answers

Answer (105 votes)

The three-body problem is one of the most famous problems in the history of mathematics, and it also has an important application in science: it was supposed to explain the Moon's motion, among other things. Enormous effort was spent on this problem by many famous mathematicians of the 18th and 19th centuries. Since Newton's time it was clear that there was no simple closed-form solution. (The problem also had an important practical application in the 18th century, namely to navigation: if you could predict the motion of the Moon a few years ahead with sufficient accuracy, you could determine longitude at sea without a chronometer, just by observing the Moon's position with respect to the stars.)

Bruns proved in 1887 that the problem is not integrable (it does not have a complete system of algebraic integrals). At the end of the 19th century, an exact mathematical formulation of what was desired was achieved: to express the motions of the bodies in the form of convergent series of functions of time, valid for all times. This formulation is due to Weierstrass.

Poincaré was awarded a prize for his work on this problem, but he did not solve it. His results were mostly negative; he discovered what is called “chaos” in this work. The work of Bruns and Poincaré gave a strong indication that no solution in the form proposed by Weierstrass exists.

Few people remember nowadays that in this precise form the problem was actually solved by Karl Frithiof Sundman, in 1912. This solution can be found in Siegel's book on celestial mechanics.

But by that time it was already understood that this solution was useless for practical purposes, namely for predicting the Moon's motion over long time periods. It was also useless for understanding the qualitative features of the motion. It has been estimated that one needs about $10^{8{,}000{,}000}$ terms of Sundman's series to compute the Moon's position with the accuracy required in astronomy.
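To give a flavor of the idea behind Sundman's result, here is a rough sketch in modern terms, following the exposition in Saari's article cited at the end of this answer; the notation $s$, $\Omega$, $\omega$ is mine, and the analytic details are only hinted at.

```latex
\textbf{Sketch.} After regularizing binary collisions, one obtains a new
independent variable $s$ in which every solution (with nonzero angular
momentum) extends analytically to a strip $|\operatorname{Im} s| < \Omega$
about the real axis. The conformal map
\[
  \omega \;=\; \frac{e^{\pi s/(2\Omega)} - 1}{e^{\pi s/(2\Omega)} + 1}
\]
sends this strip onto the unit disk, so each coordinate of the bodies is
analytic in $\omega$ and has a power series
\[
  q_k \;=\; \sum_{n \ge 0} a_{k,n}\, \omega^{n},
\]
convergent for all real times $t$; this is exactly the form Weierstrass
asked for.
```

The astronomically slow convergence mentioned above comes from the map itself: a modest real time $t$ corresponds to a value of $\omega$ extremely close to $1$.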

This does not mean that the work of Sundman was useless: the methods developed there had a substantial influence on the further development of celestial mechanics.

Here is an excellent historical exposition, with many references:

J. Barrow-Green, The dramatic episode of Sundman, Historia Mathematica, 37 (2010), 164–203.

A modern mathematical exposition of the main idea can be found in

D. Saari, A visit to the Newtonian $N$-body problem via elementary complex variables, American Mathematical Monthly, 1990, Vol. 97, No. 2, 105–119.

Comments:

  • It is also useless, if I'm not mistaken, because the series converges extremely slowly. (Apr 13, 2016)
  • Yes. But the Taylor series of the exponential, which converges fine (as most series do), is also useless if you want, for example, to study the behavior as $x\to-\infty$. (Apr 13, 2016)
  • Have there been any modern attempts to find series for the three-body problem with faster convergence? Or is there an impossibility result in this direction? (Jul 22, 2020)
  • Some people prefer not to have links to Elsevier in their posts. I forget if you're one of them, but, just in case, here's a link to The dramatic episode of Sundman and, while I'm here, A visit to the Newtonian $N$-body problem via elementary complex variables. (Feb 2)
  • @LSpice: I am one of those people (I signed a boycott of Elsevier). (Feb 2)
Answer (102 votes)

Computer designers and programmers dreamed, from the earliest days of the computer, of a machine that could play chess and win. Even Alan Turing had that dream: he designed Turochamp, the first chess-playing algorithm (it was executed on paper by hand at first, since no device could yet implement the algorithm in 1948).

As researchers realized the difficulty of playing chess well, the chess challenge was taken on in earnest. The conventional view was that designing a computer that could play chess and win would partake in the essence of artificial intelligence, and in the 1970s computer chess was described as the Drosophila melanogaster of artificial intelligence, because the work on computer chess was thought to be laying the foundations of artificial intelligence.

The basic conjecture, that computers would play chess well, turned out to be true. But the way that computers played chess well was by brute-force methods, rather than with the kind of subtle intelligence that had been expected to be necessary, and so many artificial intelligence researchers were disappointed, and lost interest in chess.
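As a toy illustration of what "brute force" game-tree search means, here is a minimal negamax sketch, applied to a take-away game rather than chess so that it stays self-contained (the function name and the choice of game are mine; real chess engines add alpha-beta pruning, move ordering, and handcrafted or learned evaluation on top of this skeleton):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(stones):
    """Exact game value for the player to move in take-1-2-or-3 Nim:
    +1 if the side to move can force a win, -1 otherwise.
    Taking the last stone wins."""
    if stones == 0:
        return -1  # the previous player took the last stone; we have lost
    # Try every legal move; our value is the best of the negated replies.
    return max(-negamax(stones - take) for take in (1, 2, 3) if take <= stones)

# Known theory: multiples of 4 are losses for the player to move.
print([negamax(n) for n in range(9)])   # [-1, 1, 1, 1, -1, 1, 1, 1, -1]
```

The point of the example is exactly the disappointment described above: exhaustive search with memoization "solves" the game perfectly while containing no insight about it at all.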

Meanwhile, the situation has led to debate in the AI community, as some researchers have argued that AI research should in fact follow the computer chess paradigm.

Comments:

  • @StefanKohl I agree with that, but that issue is orthogonal to the point of the metaphor. If you follow the link, you'll see that Donald Michie was saying that research on computer chess was fundamental, like Mendel's genetic experiments on peas or the work on Drosophila. He says, "The use of chess now as a preliminary to the knowledge engineering and cognitive engineering of the future is exactly similar, in my opinion, to the work on drosophila. It should be encouraged in a very intense way, for these reasons." The subsequent computer chess developments reveal that he was probably wrong. (Jun 3, 2014)
  • @DavidFernandezBreton Computers are now merely "mediocre" at Go (Wikipedia lists a timeline of progress). To my knowledge, the best algorithms can reasonably compete against most amateurs now. Hilariously, the AI breakthrough that has allowed this is the progression from "brute force search" to "guess and check". (Jun 3, 2014)
  • @Alexis The recent breakthrough is a bit more than just "guess and check". A big part of it comes from studying the explore/exploit trade-off (applied to deciding which branches of the game tree you should think hardest about). (Jun 3, 2014)
  • John McCarthy, the inventor of LISP, has a good quote about this: "Computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies." (Jun 4, 2014)
  • Maybe it's time to update the comments on Go. (Apr 14, 2016)
Answer (89 votes)

The simplex method for linear programming was published by Dantzig in 1947. For years many variants were known to provide good performance in practice, but it was unknown whether any of these ran in polynomial time. Klee and Minty showed in 1972 that at least one such variant takes exponential time in the worst case, but the question of whether linear programs are solvable in polynomial time remained open. Given the wide-ranging applications of linear programs, this was viewed not just as a theoretical question, but an important practical one as well.
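To make the Klee–Minty phenomenon concrete, here is a small, hedged demonstration: a textbook tableau simplex with Dantzig's most-negative-reduced-cost pivot rule, run on one standard variant of the Klee–Minty cube (the scaling with right-hand sides $5^i$; the function names are mine, and this is an illustration, not a production solver). On this family the rule visits all $2^n$ vertices, so the pivot count is $2^n - 1$:

```python
from fractions import Fraction

def simplex_pivots(c, A, b):
    """Maximize c.x subject to A.x <= b, x >= 0 (all b[i] > 0), using the
    tableau simplex with Dantzig's rule. Exact rational arithmetic, so the
    pivot count is not an artifact of rounding. Returns (value, pivots).
    Assumes the LP is bounded (true for the Klee-Minty instances below)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; last row is the objective [-c | 0 | 0].
    T = [[Fraction(x) for x in A[i]]
         + [Fraction(int(i == k)) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    T.append([Fraction(-x) for x in c] + [Fraction(0)] * (m + 1))
    pivots = 0
    while True:
        # Dantzig's rule: enter the column with most negative reduced cost.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= 0:
            return T[-1][-1], pivots          # optimal
        # Ratio test chooses the leaving row.
        rows = [i for i in range(m) if T[i][col] > 0]
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        piv = T[row][col]
        T[row] = [x / piv for x in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col]:
                f = T[i][col]
                T[i] = [x - f * y for x, y in zip(T[i], T[row])]
        pivots += 1

def klee_minty(n):
    """max 2^(n-1) x1 + ... + xn  s.t.  2^i x1 + ... + 4 x_{i-1} + x_i <= 5^i."""
    c = [2 ** (n - 1 - j) for j in range(n)]
    A = [[2 ** (i - j + 1) if j < i else 1 if j == i else 0
          for j in range(n)] for i in range(n)]
    b = [5 ** (i + 1) for i in range(n)]
    return c, A, b

for n in range(1, 7):
    value, pivots = simplex_pivots(*klee_minty(n))
    print(n, value, pivots)   # pivot count grows as 2^n - 1; optimum is 5^n
```

The exponential growth in the pivot column is exactly the Klee–Minty worst case; on typical instances the same rule is fast, which is the gap that smoothed analysis (mentioned below) was invented to explain.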

In 1979 Khachiyan published a version of the ellipsoid method which solves linear programs in polynomial time. This was viewed as an enormous breakthrough, but it turned out that while it answered the theoretical question of polynomial-time solvability, it did so in a way which was completely impractical. It shed no light on the efficiency of the simplex algorithm, which almost always runs much faster than the ellipsoid algorithm. The ellipsoid algorithm typically requires many more iterations and is numerically unstable to boot. Instability can be handled by keeping track of more and more bits of precision at each iteration, but this makes the algorithm even slower, especially in comparison to the simplex method, which does not have these instability problems.

In the years since, a variety of so-called "interior point" algorithms have appeared which are efficient both in theory and in practice, but the simplex method remains competitive. There has also been work on smoothed analysis explaining why this is so.

I do not mean to diminish the theoretical importance of the ellipsoid method, but I think it is a good example of the type you ask for. While it technically answered the question at hand, it was in many ways a disappointment.

Answer (64 votes)

Before Erdős and Selberg found an elementary proof of the prime number theorem, G. H. Hardy had predicted that the discovery of such an elementary proof would be cause "for the books to be cast aside and for the theory to be rewritten." Although Erdős and Selberg's work was a major accomplishment, it did not revolutionize number theory in the way that Hardy envisaged.

See also this related MSE question.

Comments:

  • Then again, maybe we still don't know the "right" elementary proof of the PNT... (Jun 3, 2014)
  • You mean those people on the internet who have written a short elementary proof in half a page? :) (Nov 22, 2015)
Answer (36 votes)

Simplicial homology is defined not from a space alone but from a space together with a choice of simplicial decomposition. The Hauptvermutung ("main conjecture") asserted that any two simplicial decompositions of a space have a common refinement, which would have shown, purely combinatorially, that simplicial homology is an invariant of the space.

When Milnor disproved this conjecture in 1961, the invariance of simplicial homology had long since been proved using singular homology, and simplicial homology itself was regarded as obsolete by many.

Answer (21 votes)

Knot theory's motivation came from outside mathematics, but it is a good example of what you describe. It was first developed to help study atoms, following an atomic model of Thomson; this model was later completely discarded, but knot theory took on a life of its own and is now a thriving field of mathematics.

Answer (20 votes)

It was conjectured for quite a long time that all simple $D$-modules are holonomic (see comments on Is simple non-holonomic D-module a local concept? for example) until Stafford gave counterexamples. I have been told that after this was proved, people decided that there had been no particular reason to believe it in the first place, other than that "holonomic" meant having the smallest possible dimension in some sense, and "simple" things ought to be small.

Answer (19 votes)

A famous example of a "false illusion" is Hilbert's second problem: to prove that the axioms of arithmetic are consistent. Hilbert did not even care to state it as a question (are they consistent or not?), so sure was he that consistency could be proved.

Comment:

  • Similarly, in the $10$th problem, about Diophantine equations, Hilbert asked to give an algorithm, not whether an algorithm exists. (Feb 1)
