
Ensembles all the way down

TL;DR: Experimental reproducibility requires that ensembles be the fundamental objects of our physical theories

One of the reasons many people have trouble understanding quantum theory is a fundamental misunderstanding of what scientific theories describe. The typical understanding of an evolution \(s(t)\) is that it describes the state \(s\) at each instant of time \(t\) for a particular system. That is, \(s\) represents the full description of a particular system at a particular time. This understanding becomes problematic in quantum mechanics, but the problem is not quantum mechanics. It’s that understanding that, on closer inspection, does not work for physics in general.

First, we should realize that \(s(t)\) does not apply to a single realization of a process for a particular system. It applies to all realizations of all similarly prepared systems. That is, \(s(t)\) is not a single filmstrip, but a collection of infinitely many similar filmstrips. The laws of physics, in fact, do not tell us what happens in specific instances. They are relationships of the type “whenever I prepare a system of this type in this particular way under these conditions, I will find it prepared in this other way after this amount of time.” This means that, necessarily, the state of the system \(s\) is always an ensemble: it represents the collection of outputs of a preparation procedure.
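
As a toy illustration of this point (the decay law, the spread, and all the numbers below are invented for the example), here is a minimal sketch in which individual realizations of a preparation differ from run to run, while the ensemble-level statistics that \(s(t)\) actually tracks are reproducible:

```python
import numpy as np

rng = np.random.default_rng(0)

def prepare(n_copies, spread=0.01):
    """Toy preparation: nominally x = 1, with a small, unavoidable spread."""
    return 1.0 + spread * rng.standard_normal(n_copies)

def evolve(x0, t, gamma=0.1):
    """Toy law: exponential decay applied to every realization."""
    return x0 * np.exp(-gamma * t)

# Two separate runs of the same preparation procedure.
run_a = evolve(prepare(100_000), t=3.0)
run_b = evolve(prepare(100_000), t=3.0)

# Individual realizations differ from run to run...
print(run_a[0], run_b[0])
# ...but the ensemble statistics are reproducible.
print(run_a.mean(), run_b.mean())   # nearly identical
print(run_a.std(),  run_b.std())    # nearly identical
```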

In general, the outputs of a preparation procedure will not all be exactly the same. There will be some variability. Entropy in physics is there to quantify that variability, which is therefore a property of an ensemble. Pure quantum states are ensembles as well: they are all assigned the same entropy, zero. They couldn’t be assigned an entropy at all if they weren’t ensembles. Classical mechanics assumes that this variability can be made arbitrarily small and, in the limit, infinitesimal. But given that it is not possible, even conceptually, to have a preparation procedure that is not affected by anything else (e.g. you cannot shield a system from gravity), at some point this assumption has to fail. Quantum mechanics, instead, concedes that there is always a finite variability in our preparations. We can still map an ensemble with minimal variability to another ensemble with minimal variability, but we cannot relate the instances of the initial ensemble to the instances of the final ensemble. That is, we have a deterministic and reversible map at the level of the ensembles, which is decoupled from the internal variability of the ensembles. The subensemble dynamics is inaccessible.
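
To make the entropy statement concrete, recall the standard von Neumann formula: the entropy is a function of the state \(\rho\), that is, of the ensemble; every pure state gets the same value, zero; a mixed ensemble gets a positive value; and a reversible (unitary) map at the ensemble level leaves it unchanged:

\[
S(\rho) = -\operatorname{Tr}(\rho \ln \rho), \qquad
S(|\psi\rangle\langle\psi|) = 0, \qquad
S\!\left(\tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|1\rangle\langle 1|\right) = \ln 2, \qquad
S(U \rho U^\dagger) = S(\rho).
\]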

Ensembles should be thought of as capturing the parameters of the preparation procedure, not the full description of the system. How instances are grouped within an ensemble, in fact, depends on the characteristics of the preparation procedures and the processes we are able to study. When studying a fluid, for example, the preparation, processes, and measurements may not be sensitive to the position and momentum of each molecule, but only to averages over a large number of molecules. In this setting, describing the system as a continuum is perfectly satisfactory. If we become able to manipulate molecules individually, the appropriate type of description changes.
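
As a sketch of what “only sensitive to averages” means in practice (the distribution and the numbers are made up), we can coarse-grain a cloud of molecular positions into a density profile; re-running the “same” preparation reproduces the density even though the individual molecular configuration is never the same:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "fluid": positions of many molecules in a 1D box of length 1.
positions = rng.beta(2.0, 5.0, size=1_000_000)

# If our procedures only resolve averages over many molecules,
# the accessible description is a coarse-grained density field...
density, edges = np.histogram(positions, bins=20, range=(0.0, 1.0), density=True)
print(np.round(density, 2))

# ...and re-preparing the "same" fluid reproduces that density,
# even though every individual molecular configuration is different.
positions2 = rng.beta(2.0, 5.0, size=1_000_000)
density2, _ = np.histogram(positions2, bins=20, range=(0.0, 1.0), density=True)
print(np.round(np.abs(density - density2).max(), 3))  # small
```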

The difficulty here is that one is led to ask “ensembles of what?” How can I define an ensemble without defining the elements of which the ensemble is made? For example, to define a probability distribution for a coin flip, I first have to define what heads and tails are. Similarly, if the pure state of an electron is an ensemble, I may want to define what the elements of the probability distribution are. The point is that these elements are themselves ensembles. We can define heads and tails because if we put a coin on the table, it remains heads or tails for a finite amount of time. If the coin kept flipping more than 100 times a second, we wouldn’t perceive heads or tails. We would see a blur. Heads and tails, then, are properties of the coin that remain stable for a finite duration each time we prepare them. The same will be true for an electron: there will be some properties that a preparation procedure will keep stable for a short but finite amount of time. We trap the electron within a region of space. We polarize it in a particular direction. All preparations and measurements interact with the system for a finite time; therefore, they must be understood as preparing and measuring time-averaged properties. The final description of the system, then, is in terms of those time averages, since we cannot resolve it further under those circumstances.
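
A cartoon of the “blur” (the flip rate, window, and time step are all invented for the example): a measurement that integrates over a window much longer than the flip period can only report the time average, not heads or tails:

```python
import numpy as np

flip_rate = 200.0      # flips per second (faster than we can resolve)
window = 0.5           # seconds each "measurement" integrates over
dt = 1e-4              # simulation step

t = np.arange(0.0, window, dt)
# Toy signal: the coin switches between heads (1) and tails (0)
# every 1/flip_rate seconds.
state = np.floor(t * flip_rate).astype(int) % 2

# A measurement that interacts for the whole window only sees the time average:
print(state.mean())    # ~0.5, the "blur", not heads or tails
```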

Even time should be understood as an ensemble property. The way we define time, Coordinated Universal Time (UTC), is by having multiple metrology institutes around the world run their own atomic clocks. The readings from all of these are collected, the outliers are discarded, and the rest are averaged. UTC is literally defined as an ensemble average of multiple clocks, and each local clock is synchronized to that ensemble average. This is what the \(t\) in our equation maps to experimentally.
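
Purely as a cartoon of that averaging step (the numbers are made up, and the real BIPM procedure uses weighted clock comparisons, steering corrections, and much more), a trimmed mean over a set of clock readings already shows the idea:

```python
import numpy as np

def ensemble_time(readings, n_discard=1):
    """Toy ensemble time scale: drop the most extreme readings,
    then average the rest. Only a cartoon of how UTC is built,
    not the actual BIPM algorithm."""
    readings = np.sort(np.asarray(readings, dtype=float))
    trimmed = readings[n_discard:-n_discard] if n_discard > 0 else readings
    return trimmed.mean()

# Hypothetical clock readings (offsets from a nominal second, in nanoseconds).
clock_offsets = [2.1, 1.9, 2.0, 2.2, 15.7, 1.8]   # one outlier
print(ensemble_time(clock_offsets))  # ~2.0: the outlier does not drag the ensemble time
```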

Those interested in ontological questions may be totally unsatisfied by this picture, but physics can only describe what is experimentally accessible. The rest is outside its realm: science has its limitations. The point is that understanding this limitation makes many problems in quantum mechanics simply disappear. Ensembles depend on the preparation procedure: they are contextual. They will have some properties that are stable while others are indeterminate: they exhibit complementarity. And so on. Trying to attribute these features to a full description of the system is, in retrospect, a category error.

But this also clarifies what we should expect from an ultimate physical theory. The quest to find the one true description of the full state of the universe at an instant is experimentally ill-posed, so that can’t be what the laws of physics will describe. What must happen is that, at some point, our preparation and measurement procedures will hit a limit: they won’t be able to prepare and measure a system for time periods shorter than a critical duration. The best we can do, then, is describe those properties that are stable in that regime. At that scale, intervals of time cannot be understood as points anymore, but as small durations. Those durations may overlap, but we cannot resolve the overlap. The only things we are going to be able to describe are the ensemble averages within those different overlapping durations. It’s ensembles all the way down.