Fine-tuning and the probability distribution on the space of physical constants

Introduction

The motivation and the starting point for the following post is the argument from fine-tuning to the existence of God. Fine-tuning can be described as the fact that some physical constants (and initial conditions) lie within a relatively small range that allows for the development of any organized life.¹ Many have argued that such a coincidence is a priori extremely improbable, some proposing the multiverse theory as a solution, others employing the hypothesis of a cosmic designer.

The inference from the fact of fine-tuning to its low a priori probability has been challenged on the grounds that there is no obvious probability distribution on the space of physical constants. I have discussed this problem on some occasions with my friend Tymoteusz Miara; the problem was also raised during my talk at the Apotheosis Society on 11th May 2018. I think it is a fair objection, at least on its face.

My usual response was to argue that although we cannot justify any concrete probability distribution, we can at least roughly estimate the order of magnitude of the probability of fine-tuning by comparing the life-permitting range with the value of the constant, or with the range of values consistent with the theory. This approach takes for granted that the probability distribution is "smooth enough", which enables us to deduce the order of magnitude of the probability of fine-tuning. Any sharp peak within the life-permitting range would be so unnatural, given the structure of the theory, that it would surely require the kind of explanation that the designer hypothesis offers. I think an honest man will agree that postulating sharp peaks in the probability distribution amounts to sweeping the problem under the rug.
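To make this concrete, here is a minimal sketch of the two rough estimates just mentioned, with purely hypothetical numbers (the actual figures depend on which constant one considers):

```python
# Invented illustration values: a constant with measured value C, a
# life-permitting window of width delta around it, and a range R of
# values consistent with the underlying theory.
C = 1.0e2        # measured value of the constant (hypothetical units)
delta = 1.0e-3   # width of the life-permitting window (hypothetical)
R = 1.0e6        # range of values consistent with the theory (hypothetical)

# For any reasonably smooth prior, the probability of fine-tuning is of
# the order of one of these ratios:
print(f"window / value:        {delta / C:.0e}")   # ~ 1e-05
print(f"window / theory range: {delta / R:.0e}")   # ~ 1e-09
```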

But I think we can do better than this. First I will sketch my own argument, and then I will try to explain a somewhat similar argument from an article in the European Journal for Philosophy of Science.²

Probability distribution is necessary for measurements

Whenever a physicist puts forward a theory, it might contain some parameters that aren't there from the start but have to be measured experimentally. Consider Newton's theory of gravity. According to Newton, two massive bodies (under certain conditions) will attract each other with a "force" equal to F = GMm/r^2. The constant G was not predicted by Newton – it had to be measured experimentally. (There is really no way to guess the value of G without looking at the world.) So suppose the necessary experiment was performed and the experimenters' observations match the predictions of Newton's theory with G = 6.7×10^-11 SI units. Should we then conclude that Newton's theory is true and that G = 6.7×10^-11 SI units?

It might seem obvious to you that the answer is yes. But (leaving aside the question of whether we should accept Newton's theory as true) I claim that we are justified in accepting the measured value of the constant G only if we can say something about its prior probability distribution.

The reason for this interesting conclusion is that scientific experiments are of a probabilistic nature – in the following sense. In the usual cases, we have every reason to suppose that the experimenters did not make any serious mistakes and that the conclusions were drawn correctly. Nevertheless, we do not have absolute certainty – the most that a particular measurement can accomplish is to increase the plausibility of a certain fact. (There have been famous situations where it actually made more sense to suppose that the experimenters had made a mistake.) Although sometimes this increase is so significant that a single experiment can settle the matter, it can never become an absolute proof of the purported fact. Since experiments, at best, merely increase the plausibility of a certain fact, the final plausibility will necessarily depend on the initial (prior) plausibility, according to Bayes' theorem. (Even if two physicists assign different prior probabilities to a certain theory, they often agree on the conclusions – this is because the experimental evidence is sufficiently powerful.)
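As a minimal illustration of this dependence on the prior, consider the following toy Bayesian update (the numbers are invented; only the structure matters):

```python
def posterior(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' theorem for a binary hypothesis H given evidence E:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1.0 - prior))

# The same evidence (a 100:1 likelihood ratio in favor of H) yields
# very different posteriors depending on the prior:
for prior in (0.5, 0.01, 1e-6):
    print(f"prior {prior:g} -> posterior {posterior(prior, 0.99, 0.0099):.4g}")
# 0.5   -> ~0.99   (strong evidence settles the matter)
# 0.01  -> ~0.50   (evidence helps, but the matter stays open)
# 1e-6  -> ~1e-4   (an extreme prior still dominates the conclusion)
```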

Since experiments have this probabilistic nature, our interpretation of the experiment that measured G will depend on our presuppositions about the probability distribution of G. Suppose, for example, that our prior probability distribution is heavily concentrated around G ~ 10^-5, while values near G ~ 6.7×10^-11 are assumed to be extremely improbable a priori. After the experiment, then, we should conclude that the experimenters probably made a mistake – for any experiment, there exists a prior distribution which makes such an explanation much more plausible than the alternative! But this is clearly absurd. We see, therefore, that in order to perform meaningful measurements of the kind we are discussing, we have to assume some sort of "smooth" probability distribution on the space of physical constants.
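Here is a deliberately crude sketch of this situation (all numbers are invented, and the "mistake" hypothesis is modeled in the simplest possible way):

```python
P_ERROR = 1e-3  # prior probability that the experiment simply went wrong

def p_report_is_right(p_band: float) -> float:
    """Posterior probability that a reported value G = 6.7e-11 is correct,
    where p_band is the prior probability that G really lies in the small
    band around the reported value. A botched experiment is assumed to be
    roughly as likely to output this value as any comparable one, so its
    likelihood factor is taken to be of order 1."""
    good = p_band * (1.0 - P_ERROR)   # G is really there and the experiment worked
    bad = P_ERROR                     # the report is just an error
    return good / (good + bad)

print(p_report_is_right(1e-2))    # smooth prior: report almost certainly right (~0.91)
print(p_report_is_right(1e-15))   # prior peaked near 1e-5: "a mistake" wins (~1e-12)
```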

The point of all this is that no one can escape the necessity of establishing some kind of probability distribution on the space of physical constants. Therefore, those who object to the fine-tuning argument on the grounds that there is no such probability distribution are shooting themselves in the foot. This is also one of the conclusions of the article I mentioned at the beginning.

Probability distribution is necessary for a theory to be considered scientific

The author of this article goes even further. He argues that a probability distribution over the space of constants is needed in order to assess the validity of the theory itself. Below is a summary of his argument (emphasis mine):

A physical theory, to be testable, must be sufficiently well-defined as to allow probabilities of data (likelihoods) to be calculated, at least in principle. Otherwise, the theory cannot tell us what data we should expect to observe, and so cannot connect with the physical universe. If the theory contains free parameters, then since the prior probability distribution of the free parameter is a necessary ingredient in calculating the likelihood of the data, the theory must justify a prior. In summary, a theory whose likelihoods are rendered undefined by untamed infinities simply fails to be testable. In essence, it fails to be a physical theory at all.²

Let me explain it in my own words. When someone puts forward a physical theory that predicts new data, the predictions are given, by necessity, in a probabilistic manner. (For example, the Higgs boson theory predicted that the LHC would be likely to observe a resonance near 120 GeV.) Then the correspondence between experimental values and predicted values constitutes experimental evidence for the theory (experiment being the epitome of "verification"), where the strength of the evidence depends on how much likelihood the theory gave to this particular data.

The point is that the theory cannot produce empirically accessible statements – statements about the likelihoods of the data – if it is completely agnostic about the probability distribution of its undetermined constants. Note that these are conditions for being a viable, testable physical theory at all. We haven't even raised the matter of fine-tuning yet!
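In Bayesian terms, the likelihood of data D under a theory T with a free parameter θ is the marginal likelihood P(D|T) = ∫ P(D|θ,T) p(θ|T) dθ, so the prior p(θ|T) is an unavoidable ingredient. A numerical sketch (with an invented Gaussian likelihood standing in for a real experiment) makes the point:

```python
import numpy as np

# Marginal likelihood P(D|T) = integral of P(D|theta,T) * p(theta|T) d(theta),
# computed on a grid. The Gaussian below is invented: it stands for
# "the experiment favored theta near 4.2". Only the structure matters.
theta = np.linspace(0.0, 10.0, 10_001)
dtheta = theta[1] - theta[0]
likelihood = np.exp(-0.5 * ((theta - 4.2) / 0.1) ** 2)   # P(D | theta, T)

def marginal_likelihood(prior: np.ndarray) -> float:
    prior = prior / (prior.sum() * dtheta)               # normalize p(theta|T)
    return float((likelihood * prior).sum() * dtheta)

print(marginal_likelihood(np.ones_like(theta)))          # flat prior on [0, 10]

# With no normalizable prior at all (say, "theta is uniform on the whole
# of (0, infinity)"), the normalization diverges, P(D|T) is undefined,
# and the theory assigns no likelihood to any data: it is untestable.
```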

In conclusion, the article shows that, technically, any fundamental theory needs to specify a distribution on the space of its undetermined constants. This is usually not required in practice, because the details of this distribution do not affect the final assessment of experimental data as long as we are talking about an "honest", reasonably smooth distribution. Therefore, if we want the science of physics to stand firm, we have to admit that it is justified to assume the existence of a particular ("sufficiently smooth") probability distribution, and in consequence, it is justified to make the inference from

The values of fundamental constants lie within a small range that allows for the development of any organized life.

to

Given pure chance, it is extremely improbable that the values of fundamental constants would lie within a small range that allows for the development of any organized life.
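To see why the details of the distribution do not matter here, consider a sketch with hypothetical ranges, showing that the fine-tuning probability stays tiny under two quite different "smooth" priors:

```python
import numpy as np

# Hypothetical setup: the theory allows a constant C anywhere in
# [1e-12, 1e+12] (in some units), while life requires C to lie in the
# narrow window [1.0, 1.0001]. All numbers are invented.
lo, hi = 1e-12, 1e12
w_lo, w_hi = 1.0, 1.0001

p_uniform = (w_hi - w_lo) / (hi - lo)             # prior uniform in C
p_log = np.log(w_hi / w_lo) / np.log(hi / lo)     # prior uniform in log C

print(f"uniform prior:     {p_uniform:.0e}")      # ~ 1e-16
print(f"log-uniform prior: {p_log:.0e}")          # ~ 2e-06

# The two smooth priors disagree about the exact order of magnitude, but
# both make the life-permitting window extremely improbable; only a prior
# with a sharp peak inside the window could avoid that conclusion.
```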


¹ Above all, the Higgs coupling constant has this property – see the article cited below.

² L. A. Barnes, "Fine-tuning in the context of Bayesian theory testing", European Journal for Philosophy of Science, vol. 8, no. 2 (2018).
