
I came across an interesting video by the philosopher Harjit Bhogal (see here), with a related paper here, in which he discusses the notion of a “robust” explanation.

He claims that when we come across facts that seem to “call out” for an explanation, such as a coin landing on heads 50 straight times, we prefer the design explanation over the chance explanation. One of the reasons for this, he claims, is that the chance explanation would have to appeal to all the micro-physical conditions before the coin tosses lining up just right to produce the observed sequence, whereas the design explanation (such as the coin being rigged) would not.

More specifically, it is easy to imagine how a tiny difference in the microphysical conditions of each coin toss would change the final series of outcomes so that it is no longer all heads. This makes the chance explanation “flimsy”. On the other hand, if the coin is rigged, tiny differences in microphysical conditions would still result in the same outcomes, since the coin is rigged to land on heads (say, because it is a two-headed coin). The design explanation is also more coarse-grained than the fine-grained microphysical explanation.
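The flimsy/robust contrast can be sketched with a toy model (entirely my own illustration: a single `spin_rate` parameter standing in for the microphysical conditions is an assumption, not something from the paper):

```python
def fair_toss(spin_rate):
    # Toy model of sensitivity to microphysical conditions: the outcome
    # flips whenever the total rotation changes by one half-turn, so
    # tiny perturbations of the initial conditions change the result.
    return "H" if int(spin_rate) % 2 == 0 else "T"

def rigged_toss(spin_rate):
    # A two-headed coin lands heads under any microphysical conditions.
    return "H"

print(fair_toss(100.0), fair_toss(101.0))      # outcomes differ: flimsy
print(rigged_toss(100.0), rigged_toss(101.0))  # outcomes agree: robust
```

The point is only that the rigged-coin explanation is insensitive to perturbations of the input, while the fair-coin explanation is not.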

Now, what I find interesting about this is that it seems to give a preference to design explanations over all “non-design” explanations in general. When there is a designer capable of, and intending to, design a certain thing, many different microphysical conditions would still result in the same designed outcome.

But can’t certain design explanations also be “flimsy”? For example, suppose Bob, a person historically known to be against cheating, is playing a game where he wins a prize of 5 dollars if he tosses 3 straight heads. He then tosses a coin 3 times and it lands on heads each time. Now, if he cheated and somehow rigged the coin, that explanation looks “robust” compared to the chance explanation, where any little difference in microphysical conditions would lead to different coin-toss outcomes.

But the conditions leading up to Bob’s motivation and ability to rig a coin could conceivably also be flimsy. Especially given his history, a lot of things would also have to be “just right” for Bob, a person known for not cheating, to bother rigging a coin for a measly 5-dollar prize in the first place. How, then, does one (no pun intended) robustly define the difference between a robust and a non-robust explanation?

  • 1
    I think that it is only descriptive, see Robustness. Commented Mar 28 at 18:33
  • While neither variance nor statistical significance is mentioned in the paper, I get the impression that the author has something like it in mind: if an event such as a particular series of coin-tosses is statistically improbable on the assumption of a fair coin, then we should consider that hypothesis to be flimsy, not robust... In your "Honest Bob" example, if we did find out he cheated, then we move from robust vs. flimsy to true vs. false; absent that knowledge, supposing that he did not cheat looks robust - unless there's a robust (plausible) reason to suspect he did cheat this time. Commented Mar 28 at 23:59
  • If we understand how a designer could do something, and there is a motive, then that makes sense. If a designer isn't needed for an explanation, that makes even more sense. An explanation which basically doesn't have to explain anything is best. Commented Mar 29 at 1:51
  • It seems like you're just describing Occam's Razor. The "robust" argument is the one that requires fewer assumptions. Commented yesterday

6 Answers

9

The phrase "robust explanation" does not have a particular definition in philosophy. One would have to ask any given speaker to clarify what they mean in the circumstance.

The ordinary connotation of "robust" though would suggest that a robust explanation is one that would withstand minor deviations of the relevant facts, but this would have to be based on some shared understanding of "minor" and "relevant" in the circumstances.

4

A coin landing on heads 100 times in a row might only "call out" for an explanation if one has some existing beliefs about the coin, creating the (failed) expectation that it would land on heads roughly half the time. Without such beliefs or expectations, that outcome doesn't "call out" for an explanation any more than any other sequence.


What's actually going on is that we have existing beliefs (or hypotheses), based on experience, that:

  • The coin is fair (based on coins usually, but not always, being fair).
  • Tossing a coin is chaotic enough to result in a random distribution.

If the above were true, it would be an unlikely outcome to get heads many times in a row (we can model this via confidence intervals). So we might suppose that either or both claims are false, and we can consider whether and how competing claims may better explain the evidence (be that the coin being weighted or a coin-toss not actually being as chaotic as believed).
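How unlikely that outcome is under the fair-coin hypothesis is easy to quantify; a minimal sketch (the streak length of 50 and the comparison to a conventional 0.05 threshold are illustrative choices):

```python
def p_streak(k, p_heads=0.5):
    # Probability of k heads in a row under a fair, independent-toss model.
    return p_heads ** k

p = p_streak(50)
print(p)          # about 8.9e-16
print(p < 0.05)   # far below any conventional significance level
```

A result this far into the tail is what licenses doubting one of the two background claims (fairness, or chaotic tossing) rather than accepting the streak as chance.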

Though note that a coin being fair and a coin being weighted are both designed, because essentially all coins are designed. Calling the latter "rigged" suggests intent (i.e. "more" design). In the case of a coin, being weighted may be intentional (whereas being fair may merely be a side effect of design*). But this isn't, as such, a principle we can generalise to determine whether something was designed (though it may relate to other such principles). Naturally-forming things could lean towards some outcomes above others due to simple asymmetries in how those things formed. Coins are usually designed to be roughly symmetrical (fair), but any given natural forming process may or may not produce symmetry. And some natural things (e.g. physical laws or physical constants) may not have formed at all - they may just always have been that way.

* If we compare this to e.g. dice, those are usually designed specifically to be fair, since producing each side with equal probability is the main use of a die (whereas the main use of a coin is as currency).

So, what is a "robust" explanation, then?

The concept of a "robust" explanation is not particularly robustly defined.

Though we could define it in terms of having high explanatory power, such as having greater predictive power, fewer surprising facts and fewer additional claims to retrofit the explanation to the evidence.

In the case of flipping a coin, the fair-chaotic-coin explanation would predict that we'll get a roughly even distribution between heads and tails across many flips, and it would be surprising to get either heads or tails exclusively or many times in a row. With a weighted coin, this is less surprising, and we may correctly predict this outcome going forward, if indeed it's weighted.

There may be additional claims required to explain how you ended up with a weighted coin, or a fair coin for that matter, in the first place. But both of these things are fairly common occurrences*, so neither would be too extraordinary of a claim to add to our explanation. If the existence of a weighted coin were near impossible, it may still be more likely that what actually happened was the exceedingly unlikely possibility that a fair coin landed on heads many times in a row.

* A coin being fair is more common than it being weighted. The former would often be the default explanation, the null hypothesis that we'd need to reject, based on the evidence, to accept that the coin is weighted. Though in certain settings, weighted coins may be more likely and there may be reasons to mitigate that possibility, e.g. a casino wouldn't let you bring your own coins (or dice, rather), because they'd lose a lot of money if it ends up being weighted. And you'd win a lot of money, which would make it more likely for someone to bring weighted dice.

Never mind that we also need to consider the ways we could end up with a weighted coin. For a coin in particular, we broadly have two main possibilities:

  • The coin was intentionally made to be weighted.
  • There was an inconsistency in the production process (or being weighted was a side effect of patterns on the coin).

The former may be more likely, but neither of these are particularly extraordinary. Note that the latter doesn't entail intent (at least not in the design of the coin, specific to it being weighted).

We wouldn't say e.g. the coin being fair is unlikely, therefore it was zapped into existence by a unicorn (unless you already have strong evidence that said unicorn exists and has a habit of zapping coin-like things into existence). We wouldn't say that even if we didn't know how weighted coins could come about.

For other things for which people say "this would be unlikely, therefore this other explanation [of it being designed this way] is correct" (in particular in relation to a god's existence, i.e. fine tuning, which has other issues):

  • We need to consider how extraordinary (or otherwise justified) this explanation is.
  • We also need to consider competing ways that this "weighting" could've happened.

Just because someone comes up with an explanation to avoid some improbability doesn't mean they're correct.

4

As a big fan of Quine’s web of belief theory, I see it this way.

Each of us has an interconnected system of beliefs, made up of links and self-reinforcing relationships. At the core of the web are self-evident, almost indubitable factual beliefs (I exist)... Moving outward toward the periphery are very well-corroborated theories (Mars and the Earth are planets), in the sense that they are not only strongly demonstrated within their own domain, but are also highly compatible with many other parts of the web of beliefs. These are very difficult to change, even though it is possible through a paradigm shift. But the effect can be potentially cascading; if I discovered that the Sun and the Earth are not planets, I would have to call into question my entire education, the reliability of everything I’ve read in school books, my map of the cosmos, and so on. At the periphery there is a mist of extremely fragile theories and beliefs, connected to the web by only a few thin links. These are very easy to change; it only takes a small shift in a variable. For example, I might think that a certain politician is competent, but I can quickly change my mind. I might think that Oppenheimer is a great movie, but a friend might point out some sharp observations that convince me it’s overrated.

Well, I’d say your example fits quite well into this framework. First of all, a theory is robust and well-reinforced when it is solidly embedded in the web of beliefs. It is justified by, and in turn justifies, other nodes and networks. In your example, one of your deep beliefs is that statistically rare events may exist, but that when they occur, especially when human tools and agency are involved, they are often not the product of pure randomness but of hidden variables or manipulation. However, you might also hold another belief that Bob is honest (and this belief too may be well justified, because maybe you know Bob personally, and have had the chance to verify many times that he is honest). So if Bob flips heads ten times in a row, you would in a sense find yourself in difficulty interpreting this new event: did he have incredible luck, or did honest Bob cheat? Maybe the coin had a tiny defect?... Maybe Bob lost his job and is desperate and forced to cheat... All of these are possible explanations, but they are not robust beliefs, because ultimately they are conflicting beliefs. None of them would be particularly convincing, and you would be inclined to modify them. On the other hand, if Bob were a well-known dishonest person, whom you had seen lie and deceive multiple times, and he flipped heads ten times in a row, suddenly the two things would strongly reinforce each other toward a new belief: there it is, Bob cheated. And it would take further and more convincing evidence to persuade you that it was actually pure chance.

5
  • I hadn't heard of this framework before. Do you have any particular references you could point me towards for it? It sounds intriguing. Commented yesterday
  • 1
    @Idran en.wikipedia.org/wiki/Hold_come_what_may and of course Quine's own book, the Web of Belief (1978) Commented yesterday
  • 2
    Most people don't consider the Sun a planet, in fact. Commented yesterday
  • @Obie2.0 Good point, I read over it because I (thought I) knew what the OP meant: Substitute "spherical celestial bodies". Commented 23 hours ago
  • 2
    Good explanation and application of the concept of "web of beliefs." What we think to be true is interrelated like a crossword puzzle. It is rare that we can "surgically" replace a single word (i.e., conviction) without endangering the cross-linked ones, with a ripple effect. The flat Earth theory is an excellent example. This is also related to the aphorism "bold claims need bold proof" (e.g., isolated observations may have different explanations). Commented 23 hours ago
3

The issues about scientific explanation are many.

An explanation must be nomological (involving laws) and causal.

But a deduction from scientific laws is not enough: it must be produced in a relevant context.

More specifically:

(page 685) many philosophers share the basic thought that explanations should be in a certain way ‘robust’ — they should apply to a wide range of possible situations and not only the very specific situation that actually occurred.

This is known as the rejection of ad hoc hypotheses: "an ad hoc hypothesis is a hypothesis added to a theory in order to save it from being falsified."

The issue is already in the title: coincidences (like miracles) are not "robust".

This is correct: they are neither nomological nor causal, and probably not relevant either.

See also G. Bamford, Popper's Explications of Ad Hocness: Circularity, Empirical Content, and Scientific Practice (1993).

1
  • 2
    Right, design explanations don't prove that there is a designer, because it is assumed in stating the problem. (and you know what happens when we ass/u/me...) Commented 2 days ago
2

I would say our most robust reasoning is about applying the continuous symmetries of conservation laws. This is even more robust than causal closure because it applies to quantum as well as classical phenomena.

Heads & tails of a coin have different printed volumes, and so different weights; similarly for the faces of a die marked with different numbers of hollows. But we assume these differences are smaller than the σ (standard deviation) of the initial conditions we would measure in order to predict from. The case of using lasers to predict, during its spin, where the ball in a roulette wheel will fall (Guardian newspaper article) further illustrates the difference between idealised & practical randomness. See the 3.8 σ anomaly in Superbowl coin tosses for another practical example of the divergence between our idealised expectation that coin tosses are random and the practical reality that highly motivated humans can systematically influence the outcomes of coins they toss: "Explore the intriguing trend of the Super Bowl coin flip and the NFC's unprecedented winning streak that challenges probability" (link to Discover magazine article).
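A figure like 3.8 σ can be reproduced by converting a streak probability into an equivalent z-score; a sketch assuming a 14-toss streak (my reading of the NFC run — the exact streak length is an assumption here, not taken from the article):

```python
from statistics import NormalDist

def streak_sigma(k):
    # Probability that either side wins k fair tosses in a row (two-sided),
    # converted to the equivalent standard-normal z-score.
    p = 2 * 0.5 ** k
    return NormalDist().inv_cdf(1 - p / 2)

print(streak_sigma(14))  # roughly 3.8 under these assumptions
```

So a streak that long sits well outside what the idealised fair-toss model comfortably accommodates.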

We use idealisations and the physics of solid bodies and statistical distributions inferred from truly random inputs acting on them, to model our randomness generators, like discs and cubes and balls spun in wheels. But if you think something is truly random, I'd say, how much would you bet..?


This is Lavarand, a wall of lava lamps used by Internet security company Cloudflare to seed random number generators for internet security cryptography. According to W3Techs, Cloudflare is used by approximately 21.3% of all websites on the Internet as of January 2026. The difference between nearly random and even a tiny bit predictable has significant implications for digital security, and they use this and also a wall of double-pendulums and a Geiger counter, to reduce issues of predictability of their strings.

See e.g. Randomness and Pseudorandomness for the issues with generating relatively random strings from effectively classical systems, and the wider philosophical issues. In information theory and the physics of entropy, it may be the case that better measurements or models of microstructures yield substantially different inferences and insights.

5
  • 1
    It is interesting that adding the dimples to a golf ball made it fly more true. So randomness can increase predictability. The temperature of a roomfull of air is pretty even. Maybe adding dimples to the Roulette ball would make it more predictable? Turbulence usually makes noise and drag, but for owl feathers, it makes it quieter, and for shark skin, reduces drag. Commented 2 days ago
  • @ScottRowe Add dimples and shark skin to an owl? Commented 11 hours ago
  • @keshlam you'd get a cute owl that flies true. Penguins are birds that fly underwater. Commented 5 hours ago
@ScottRowe: I entered a yearly penguin song challenge last year about this time, writing We Will Swim Fast to the tune of Fred Small's I Will Stand Fast. Unfortunately I lost one of the required words in a late edit and invalidated myself. Commented 3 hours ago
  • @keshlam maybe it should be "I Will Fail Fast"? Or, We Will Drive CriglCragl Batty :-) This site needs more friction. Too noisy. Commented 3 hours ago
2

The author is using 'robust' in a particular way. Initially, he says that:

[for explanations to be robust] they should apply to a wide range of possible situations and not only the very specific situation that actually occurred (pg 686)

then later he develops that into a principle of robustness, saying:

[robustness means] that explanations are better if in more of (that is, a higher proportion of) the physically possible worlds where the explanandum is true, the explanans explains the explanandum.

In both cases, robustness entails the idea that an explanation is more acceptable if it explains a wider range of phenomena, not just the particular phenomena being observed. So, to take the original example, say we see a coin flip heads fifty times in a row. Now we have a choice:

  1. we can assert that this is a fair coin that just happened to flip fifty heads in a row, or…
  2. we can assert that some manner of cheating is occurring.

Now in the infinite possible worlds in which we flip a fair coin 50 times, only 1 in 2^50 will produce fifty sequential heads. Therefore this explanation only works as an explanation in 1 in 2^50 cases. But in the infinite possible worlds in which people try to cheat, far more than 1 in 2^50 cases will create fifty sequential heads (because that is the intention of cheating). The 'cheating' explanation works as an explanation in more cases, therefore it is more robust than the 'fair coin' explanation.
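This proportion argument can be made concrete with a back-of-the-envelope sketch (the 0.9 success rate for cheaters is a made-up illustrative figure, not from the paper):

```python
# In what fraction of the relevant possible worlds does each hypothesis
# actually explain fifty sequential heads?
fair_fraction = 0.5 ** 50   # fair coin: 1 in 2^50 worlds
cheat_fraction = 0.9        # assumed: a cheater engineers fifty heads in most worlds

# The cheating explanation covers vastly more worlds, so on this
# criterion it counts as the more robust explanation.
print(cheat_fraction / fair_fraction)
```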

Of course, we can bring anything into our analysis that we like, simply by modifying our set of 'possible worlds'. For instance (using your last example) when we say that Bob is disinclined to cheat, we mean that (as a matter of Bob's character) there are few possible worlds in which he would cheat. Then we have a metaphysical function: the possible worlds based on the statistics of a fair coin intersected with the (reduced) possible worlds in which Bob would cheat. This would give us a new set of possible worlds, and thus a new calculation of robustness. If in fact we can assert that there are no possible worlds in which Bob would cheat, then the 'fair coin' explanation is suddenly the most robust because we've excluded all the cases in which Bob got his results by cheating; all that's left is 'fair coin' cases.
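Treating "few possible worlds in which Bob would cheat" as a small prior makes the intersection explicit; a Bayesian sketch with purely illustrative numbers (every probability below is an assumption of mine):

```python
def posterior_cheat(prior_cheat, k=3):
    # P(cheat | k heads) via Bayes' rule over two hypotheses.
    p_heads_given_cheat = 0.95 ** k   # assumed: rigging almost guarantees heads
    p_heads_given_fair = 0.5 ** k     # fair coin
    num = prior_cheat * p_heads_given_cheat
    return num / (num + (1 - prior_cheat) * p_heads_given_fair)

print(posterior_cheat(0.5))    # neutral prior: cheating is strongly favoured
print(posterior_cheat(0.001))  # "honest Bob" prior: the fair coin still wins
```

Shrinking the set of cheating-worlds (the prior) is exactly what flips which explanation ends up most robust.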

Robustness (like statistical power) is dependent both on the assumptions we are willing to make about our explanations, and on the correctness of those assumptions.

5
  • Maybe we should just stick with the actual world? Commented 2 days ago
  • 1
@ScottRowe: It's the nature of an 'explanation' to say why things went the way they did and not some other way. If we don't care to consider how things might have gone wrong (in some non-actual world), then we can't ever explain anything; everything 'just is'. Careful of the pre/trans fallacy here; the transcendental awareness of the present moment is not at all the same thing as animalistic moment-by-moment existence. Commented 2 days ago
  • Ok. I guess to me an explanation means, "this is the sequence of events that happened." So possible worlds just sounds like probable rubbish. Alternate worlds, many worlds and so on are basically like trying to explain things using Astrology. You can say anything that way. Tell me what did happen, or say, "I don't know." I respect not having an answer more than hand waving. Commented 2 days ago
  • 1
@ScottRowe: Well, I suppose I'd call that a description, not an explanation. An explanation has to have some effort to get at the 'why' or 'how' of things. I mean, if I roll a die and I get a 5, I have to acknowledge (1) that I could have gotten a different number, and (2) that there's a reason I got a 5 instead of one of the others. A description just says "I got a 5"; an explanation starts to consider (1) and (2). 'Alternate worlds' is just a jargony way of saying "other things could have happened" (e.g., an alternate world in which I got a 4, or one in which I got a 3). Commented 2 days ago
  • 3
    @ScottRowe: Just think of 'alternate worlds' or 'possible worlds' as the sample space of a (not necessarily probabilistic) event, like saying there are alternate worlds where you might have had a chicken salad sandwich or tuna casserole for lunch. Commented 2 days ago
