Discarding random variables in favor of a domain-less definition?

Probabilists don't care what exactly the domain of a random variable is. Here is an extreme comment that exemplifies this: "You should soon see, if you learn more stochastic stuff, that specifying the underlying probability space is a BAD IDEA (what happens when you add a new head/tails?), and quite useless."
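To make the quoted objection concrete, here is a minimal sketch (my own illustration, not part of the quote). A single coin toss can be modeled on $\Omega = \{H,T\}$ with

$$X(\omega) = \mathbf{1}_{\{\omega = H\}};$$

but as soon as a second toss is added, the space must be rebuilt as $\Omega' = \{H,T\}^2$ and $X$ redefined as

$$X'(\omega_1, \omega_2) = \mathbf{1}_{\{\omega_1 = H\}},$$

even though nothing about the distribution of $X$ has changed.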

If specifying the underlying probability space of a random variable (its "domain" from now on) is such a useless, bad idea in most scenarios, I wonder why no one in the long history of probability theory and statistics has come up with a better, slicker definition of random variables that avoids this inelegant we-have-a-domain-but-we-won't-talk-about-it situation.

It seems the only reason to keep the domain $\Omega$ is to enable a coupling of random variables, so that we can speak of their independence. But can't such a coupling be realized in a more elegant way than via a space that we don't want to define in the first place?
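For instance (a standard construction, sketched here only to fix ideas): given $X$ on $(\Omega_1, \mathcal{F}_1, P_1)$ and $Y$ on $(\Omega_2, \mathcal{F}_2, P_2)$, the product space

$$\bigl(\Omega_1 \times \Omega_2,\ \mathcal{F}_1 \otimes \mathcal{F}_2,\ P_1 \otimes P_2\bigr), \qquad \tilde X(\omega_1, \omega_2) = X(\omega_1), \quad \tilde Y(\omega_1, \omega_2) = Y(\omega_2),$$

carries copies of both variables that are independent by construction. This product construction is exactly the coupling role that $\Omega$ plays.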

As soon as I read texts that go beyond very elementary probability, it seems to me that such domains are treated like the crazy uncle at family parties: we never show him, but we know he's there.


Solution 1:

That specifying a probability space is a bad idea in many cases does not imply that the definition of a probability space lacks formal elegance. On the contrary: precisely the fact that it can stay "undercover" while we handle probability problems is, in my view, very elegant, and in a sense evidence that things cannot be made better. The quote in your question states a very recognizable fact. The modeling that goes along with solving a problem can always start with something like: "Let $X,Y,Z$ be random variables on the same probability space, such that ...." Which probability space? Who cares. The only thing that matters is that such a space can be constructed, and that all such spaces are "isomorphic" once we restrict attention to the relevant issues.

How to construct such a model/space must be part of a course on probability, but merely to make certain that it is possible. Once that knowledge has landed, we can switch over to "faith": from that moment on we may simply believe in the space, in the comfortable certainty that we believe in something that is true. I very much like that comfort.
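For concreteness, here is one standard way such a space can be constructed (a sketch of the canonical construction, assuming only that the intended joint distribution $\mu$ of $(X,Y,Z)$ on $\mathbb{R}^3$ is given): take

$$\Omega = \mathbb{R}^3, \qquad \mathcal{F} = \mathcal{B}(\mathbb{R}^3), \qquad P = \mu,$$

and let the random variables be the coordinate projections,

$$X(\omega_1, \omega_2, \omega_3) = \omega_1, \qquad Y(\omega_1, \omega_2, \omega_3) = \omega_2, \qquad Z(\omega_1, \omega_2, \omega_3) = \omega_3.$$

Then $(X,Y,Z)$ has joint distribution $\mu$ by construction; Kolmogorov's extension theorem does the same job for infinite families of random variables.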

Solution 2:

For a more synthetic approach, the key insight is to highlight events (random truth values, or equivalently $\{0,1\}$-valued random variables) as the fundamental objects.

One can see this already from the measure-theoretic point of view: there is a bijective correspondence between events and measurable sets.
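Concretely (a routine observation, spelled out for emphasis): a measurable set $A \in \mathcal{F}$ corresponds to its indicator $\mathbf{1}_A$, a $\{0,1\}$-valued random variable, and conversely an event $E$ corresponds to the measurable set $E^{-1}(\{1\})$; the two assignments

$$A \longmapsto \mathbf{1}_A, \qquad E \longmapsto E^{-1}(\{1\})$$

are mutually inverse.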

(There is a good account of how to start from the events and develop a more general universe of random sets; e.g. the real-valued random variables would be precisely the real numbers of this universe.)

Already, this suggests looking at the notion of a measure algebra.
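For readers who have not met the term (a standard definition, recalled here): the measure algebra of a probability space $(\Omega, \mathcal{F}, P)$ is the quotient Boolean algebra

$$\mathcal{A} = \mathcal{F}/\mathcal{N}, \qquad \mathcal{N} = \{A \in \mathcal{F} : P(A) = 0\},$$

together with the measure that $P$ induces on $\mathcal{A}$. The points of $\Omega$ are invisible in $\mathcal{A}$; only the events survive, which is exactly the spirit of the synthetic approach.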

There is a subject called locale theory, which studies a fairly general notion of space, but in such a way that spaces don't have an inherent notion of point. (Instead, the notion of point is a derived construction: a point is a continuous map into the locale from the locale that plays the role of the one-point space. And some nonempty locales don't have any such points at all!)
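In symbols (a standard definition from locale theory, included for orientation): a locale $X$ is presented by its frame of opens $\mathcal{O}(X)$, and a point of $X$ is a map $p : 1 \to X$ from the one-point locale, which by definition is a frame homomorphism in the opposite direction,

$$p^* : \mathcal{O}(X) \longrightarrow \mathcal{O}(1) \cong \{0,1\}.$$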

And there have been some developments towards redefining measure theory in terms of locales. For example, see Dmitri Pavlov's index of posts about measurable locales, or the nLab page on the subject.

There are some hints that this approach may have nice features that are awkward or nonexistent in the traditional point-set approach.

But AFAIK, the fact of the matter is that the point-set approach has done a perfectly adequate job of laying the foundations of this subject; there simply is no intrinsic impetus for overhauling the foundations of probability theory. These efforts come mainly from people with a prior interest in such modern machinery who want to see how probability theory would be expressed in it.