How should we judge current scientific theories?

Mario Alai
In press

Abstract

The scientific realism-antirealism debate concerns theories in general. However, as soon as the discussion draws arguments from the historical development of science, some issues emerge concerning how we should regard current theories in particular, as opposed to past and future ones. Positions here range between two extremes. On the one hand, a radical version of the pessimistic meta-induction (PMI) would have it that since all past theories older than 100-150 years have been proven radically false and rejected, all present and future theories will be rejected within 100-150 years, and there can be no truth in science (at most, empirical adequacy). At the opposite extreme, blunt optimists like Doppelt (2007, 2011, 2014) or Park (2017a, “On treating differently ...”) hold that, given the astonishing qualitative and quantitative technical and methodological progress of research in the last century or so (Fahrbach 2011), current best theories are almost completely and exactly true, so that further progress, besides adding new knowledge, can at best correct minor details of present-day theories: they will not be refuted, because they have a “unique status” in the history of their discipline, distinguishing them from other theories, since they “stand alone at the pinnacle of the entire field of inquiry” (Doppelt 2014, p. 285). But the evidence tells against both extremes. Radical pessimism cannot explain why even ancient and now rejected theories were predictively successful: some of their predictions were as precise as they were utterly unforeseen and unforeseeable, so that they couldn’t have been gotten right except by a miracle. An explanation of those successes is available, however: those theories had some true components which by themselves were sufficient to derive their predictions, and which have actually been located by recent research. Therefore past theories were not completely false; hence even the PMI cannot show that current theories are completely false. Again, radical pessimism cannot explain the rapidly increasing rate of success of science, which in turn is plainly explained by the fact that the true components of older theories are typically preserved in current ones. On the other hand, it is a priori implausible that just now we have reached the “end of history” in scientific research, a sort of promised land of pure truths, or Peirce’s ideal limit of research, and that our science is infallible. We see the mistakes of past science, while we obviously cannot see those of present science, and this engenders the illusion that there are none. But the same illusion has arisen in the past, age after age, only to be dispelled: in the 18th century people thought Newton had definitively opened up for us God’s blueprint of the Universe, and in 1874 Philipp von Jolly advised the young Max Planck against studying physics, because “In this field, almost everything is already discovered, and all that remains is to fill a few holes”. In addition, we positively know that there are mistakes in current theories, because even two of the most successful ones, quantum mechanics and relativity, are at variance with one another and are beset by unsolved riddles. No doubt contemporary science has made astonishing empirical and methodological progress with respect to past science; but so had modern science with respect to medieval science, and so on.
Nonetheless, even in the past empirical knowledge and scientific methodology had improved steadily: for instance, they improved a great deal from 1000 AD to 1700 AD, yet many wrong theories were still held at that date, and even thereafter. So it is hard to think that any improvement of our background empirical knowledge and methods can at some point make scientists practically infallible, and even harder to think that this point has already been reached. Brad Wray (2016) pointed out that, just as at any time there are unconceived alternatives to current theories, there are also unconceived methods (and instruments) by which those theories could be overthrown: the introduction of the astronomical telescope helped overthrow geocentrism, the discovery of the microscope jeopardized theories of spontaneous generation, and so on. Such new methods and instruments are produced all the time by the very progress of science, so the very advancements of contemporary science allow us to suppose that many more will be discovered in the near future, which will undermine today’s theories and open the way to new revolutions. The arguments against both extremes are all sound and, note, they are not mutually incompatible. Together, therefore, they lead to a moderate intermediate position: current theories are partly true, in fact more (perhaps much more) largely true than past ones, yet they probably still include important false components which might be replaced in future revolutionary changes, more or less as Newton’s absolute space and time and his gravitational theory were replaced by Einstein’s spacetime and its curvature, or as the dichotomy of matter and energy, still surviving in late 19th-century electromagnetism and statistical mechanics, was replaced by the continuum of quantum mechanics and special relativity. Contrary to Kuhn’s view, revolutions and progress go hand in hand. Of course current science stands on much firmer ground than pre-20th-century science, yet there are continuities which justify some inductive inferences from past to present and future: we are still humans, basically using the same cognitive tools, reason and the five senses, and subject to the same cognitive limits; scientific method is basically the same; above all, Nature is still very complex, in fact unfathomably complex: it works in different ways at different scales and at different locations in space or time. For instance, it is (roughly) deterministic at large scales but indeterministic at small scales; the physical laws today are probably different from those a few instants after the Big Bang; entropy increases over time in the universe as a whole, but it may decrease in local areas or over short time spans; and so on (Alai 2017, 3282). Every great advancement in science has shown us unsuspected deeper and more basic layers of the structure of Nature, and we don’t know how many of those still lie ahead. Each of those discoveries brought out some basic mistake in our understanding of some of Nature’s mechanisms, and so spurred some kind of revolution. Induction is a correct inference pattern in general, and it may correctly be applied to past science, on condition that a correct image of past science is taken as a premise. If this is done, the conclusion is a more balanced judgment of current and future theories, neither completely pessimistic nor implausibly optimistic.
What we observe in past science is that (1) every single theory has been found to be mistaken and replaced; yet (2) mistaken but predictively successful theories had some true components (those essential to deriving their successful predictions); and (3) those components were typically preserved in the replacing theories, which were therefore more largely true. Since there is no reason to think that this trend has ceased, we should conclude that current theories are more largely true than earlier ones, but still partly false. Even if not all the content by which current theories exceed old ones is probably true and will be preserved in the future, still we can appreciate one by one the new pieces of information and the corrections brought in by current theories, and we see that they are really many. However, we cannot tell the percentage of truth vs. falsity in our theories. A fortiori, therefore, we cannot tell how much more largely true than past theories they are, i.e., measure the difference in the respective percentages of truth. Even less, of course, can we tell what percentage of the whole truth on its particular subject a theory has gotten, since we don’t know what the whole truth is. It might be suggested that if the assumptions which were essential in deriving novel successful predictions are most probably true, as selectivists believe, then we should be able to discriminate what is true and what is false in our theories. However, this is not the case, for two reasons. First, even assumptions which have not been so essentially employed might be true: only, we don’t have the acid test of it. Second, essential hypotheses typically appear in the derivation of novel predictions as undistinguished parts of stronger hypotheses which play the official role in the derivation; the latter are not essential, because the prediction might equally well have been derived from their essential part alone, even if this typically goes unnoticed. Furthermore, it can be argued that the essential content of a hypothesis can often be distinguished from its non-essential content only negatively, retrospectively and in hindsight (Alai 2021). That is, suppose H is an assumption which we now believe to be true because we believe it was used essentially in deriving a successful prediction. If in the future we discover that H is false, that will be a sign that it had not actually been essential, and our discovery will also show which part H' of H is wrong, hence not essential. This, however, will not yet guarantee that all of the remaining part H'' was essential and is true. Therefore, looking at current theories we are entitled to believe that there is some truth in them, and more precisely that there is some truth in those hypotheses which appear to have been essential to certain novel predictions; however, we cannot be certain of what exactly is true in them. If we could, that would make our heuristics much easier than they are, for in the face of any experimental failure of a theory we would know precisely and in advance which parts of it should not be modified and substituted. A further difficulty is that my talk of “parts” of theories and hypotheses here is vague and rather metaphorical, and there may be different ways (none easy, anyway) to rephrase it literally. So this adds uncertainty to our outlook.
For instance, suppose we formalize theories as collections of sentences, as in the classical “statement view”: then it might be the case that 95% of the empirical atomic sentences of a theory T are correct, that 50% of its middle-level theoretical atomic sentences are correct, but that 90% of its few very basic atomic theoretical sentences are wrong. If so, it might be a matter of taste whether to call T largely true or largely false, but it would certainly be correct to call a “revolution” the substitution of T by a theory T' which preserved most of the empirical and middle-level theoretical sentences of T while replacing 90% of its very basic theoretical sentences. Now, nothing allows us to exclude that a number of our best theories today are in a position like T’s. From this and the previous arguments it follows that it cannot be excluded, and in fact it is rather probable, that our science, successful and largely true as it is, will undergo a number of revolutions in the future.
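To make the arithmetic of this toy example concrete, here is a minimal sketch under the statement view. Only the truth percentages (95%, 50%, and 10% true at the three levels) come from the example above; the counts of atomic sentences assumed at each level are hypothetical, chosen merely to show that the overall verdict on T depends on how the levels are weighted.

# Hypothetical illustration of the statement-view example above.
# Only the truth fractions come from the text; the counts of atomic
# sentences at each level are invented for the sake of the sketch.

levels = {
    # level name: (assumed number of atomic sentences, fraction of them that are true)
    "empirical":          (1000, 0.95),
    "middle theoretical":  (200, 0.50),
    "basic theoretical":    (10, 0.10),  # 90% of the few basic sentences are wrong
}

total_sentences = sum(n for n, _ in levels.values())
true_sentences = sum(n * t for n, t in levels.values())

print(f"overall fraction of true sentences: {true_sentences / total_sentences:.2f}")
# Prints about 0.87: by sheer sentence count T looks "largely true", even
# though most of its few basic principles are false, so that replacing
# them would still amount to a revolution.

On these assumed counts T comes out roughly 87% true by sentence count, which illustrates why calling it largely true or largely false is partly a matter of how one weighs the different levels.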
Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2705170