
Second-level Evidence for Future-proof Science?

Mario Alai
In press

Abstract

According to the pessimistic meta-induction, none of our current theories, hypotheses, or assumptions is true, and none will be preserved in the future. On some hyper-optimistic outlooks (Doppelt 2007, 2011), by contrast, practically all current science, unlike past science, is true and (save minor adjustments) will remain so forever. A much more plausible view is that not all, but some (perhaps many) of our scientific claims are at least approximately true and “future-proof”, in the sense that they will never be rejected. The problem, however, is telling which ones. On the one hand, it seems that in order to identify them we should be able to anticipate future scientific progress, which is impossible. On the other hand, this question is becoming crucial today, not only for philosophers and historians of science, but also for policymakers and the general public: the Covid-19 pandemic has shown how important it is that even individual laypersons be able to distinguish between mere scientific opinions and established scientific facts. In a forthcoming book (Identifying Future-Proof Science, Oxford University Press) Peter Vickers maintains that, given the current level of specialization and the interdisciplinary nature of many issues, no individual, not only philosophers or laypersons but even scientists, can possibly examine all the relevant first-level evidence in order to identify future-proof claims. However, he argues that such claims can be identified by a second-level criterion: if the relevant scientific community is sufficiently large and diverse, and at least 95% of its members believe that a claim C describes an established scientific fact, then C is future-proof. This, of course, runs against the received wisdom that consensus may be due to purely sociological reasons, and that much science unanimously accepted in the past was subsequently rejected in “scientific revolutions” (Kuhn 1962).
However, Vickers holds that this criterion is borne out by the history of science: no claim fulfilling it has ever been rejected. Yet, in spite of many interesting and insightful observations and arguments, he falls short of giving a fully principled explanation of how scientists may reach such a 95% consensus, and of why it should be so reliable. Here I suggest some further steps toward answering these questions, starting from the “no miracles argument” from novel predictions. While the probability of a hypothesis H given old evidence e is given by Bayes’ theorem, the probability that a false hypothesis implies a true novel prediction ne merely by chance equals the logical probability lp(ne). Hence, the probability that H is true given ne is p(H/ne) = 1 − lp(ne). Thus, in the ideal case a future-proof statement might be recognized by just one piece of evidence. For less risky predictions (with higher lp) this will not be the case, but typically H licenses various independent novel predictions e1…en, whose conjunctive probability diminishes with their number. Thus, p(H/e1…en), i.e., 1 − lp(e1…en), may still be quite high. Even old evidence confers some probability on H, which grows with the number of empirical data e1…ek accounted for by H, the number of auxiliary hypotheses required to entail them, and therefore the number of theories with which H must be consistent. In fact, when these numbers rise, it becomes improbable that H was found just by puzzle-solving skill, and more probable that the theoretician (first and foremost) searched for a true hypothesis (which as such entails true consequences), and actually found one. This might account for the confirmatory power of the convergence of independent theories, of measurements based on independent theories, of non-ad hoc explanations, etc. (Alai 2014b).
Yet, I argue that in this way we may be confident that a claim is future-proof only in the weaker sense that some of its parts are going to be preserved forever (Alai 2021).
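The probabilistic reasoning in the abstract, that for independent novel predictions p(H/e1…en) = 1 − lp(e1…en), where lp(e1…en) is the product of the individual lp(ei), can be illustrated with a small numerical sketch. The helper function and the lp values below are hypothetical, chosen purely for illustration; they do not come from Vickers or Alai:

```python
# Sketch of the "no miracles" confirmation measure: if a false hypothesis
# could yield a true prediction e_i only by chance, with logical probability
# lp(e_i), then for independent predictions e_1...e_n the chance that all
# come true by luck is the product of the lp(e_i), so
# p(H / e_1...e_n) = 1 - lp(e_1) * ... * lp(e_n).
from math import prod

def confirmation(lps):
    """Hypothetical helper: p(H/e1...en) = 1 - product of lp(ei),
    assuming the predictions are independent."""
    return 1 - prod(lps)

# One very risky prediction (low lp) suffices for high confirmation:
print(confirmation([0.01]))                 # 0.99
# Several less risky predictions jointly achieve a similar effect:
print(confirmation([0.5, 0.5, 0.5, 0.5]))   # 1 - 0.0625 = 0.9375
```

This makes vivid the point about riskiness: a single prediction with lp = 0.01 confirms H as strongly as several independent predictions each with lp = 0.5, since what matters is the joint improbability of all of them coming true by chance.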

Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2705171