Does randomness exist in computers and in nature?
In the programming language Python, you can import random
and then with
random.random()
you can get a random number between $0$ and $1$.
But is it truly random, or are there constraints in how computers are built that make them not truly random number generators? For instance, could there be a bias around the first value generated, or around a value already in memory? How would one figure out the difference?
Solution 1:
Pseudorandom number generators are based on well-defined mathematical sequences and are perfectly deterministic. Given the same seed, they will always generate the same sequence.
This is actually often a desirable property, as it allows you to re-run tests at will under identical conditions.
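For example, Python's own random module (a Mersenne Twister PRNG) reproduces the exact same sequence when re-seeded with the same value; a minimal sketch:

```python
import random

# Seeding the generator fixes the entire pseudorandom sequence.
random.seed(42)
first_run = [random.random() for _ in range(3)]

# Re-seeding with the same value reproduces it exactly.
random.seed(42)
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # True: the generator is fully deterministic
```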
Sources of true randomness are present in a computer because many independent (asynchronous) tasks are running and events are occurring all the time. RAM contents at well-chosen addresses or the clock time are indeed unpredictable, and you can use this data to seed your generator every now and then.
Pseudorandom generators are designed to simulate a uniform distribution and they are validated by theory and by statistical tests to ensure that.
By contrast, other sources of randomness may exhibit different kinds of bias and non-uniformity, so in a way they will look less random. There are probably some sources of randomness (like those described above) that you could detect by statistical tests or by discovering patterns, but this is likely a difficult exercise.
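As a rough illustration of such a statistical check (a minimal sketch, far from a rigorous test suite), one can bin a large sample and compare the bin counts against what a uniform distribution predicts:

```python
import random

# Crude uniformity check: bin 100,000 samples into 10 equal-width bins
# and compare the observed counts against the count expected per bin.
N, BINS = 100_000, 10
counts = [0] * BINS
for _ in range(N):
    counts[int(random.random() * BINS)] += 1

expected = N / BINS
# Pearson chi-square statistic; for a genuinely uniform source it stays
# small (on the order of the number of bins) most of the time.
chi_square = sum((c - expected) ** 2 / expected for c in counts)
print(counts)
print(f"chi-square statistic: {chi_square:.2f}")
```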
Solution 2:
But is it truly random, or are there constraints in how computers are built that make them not truly random number generators?
The CPU of an ordinary computer operates deterministically: given the same input, it will always compute the same output. So by itself it is not able to come up with a random number.
What you typically see are pseudorandom number generators (PRNGs), which during their deterministic run step through a sequence of numbers with a very long cycle and other favourable properties, so that the output looks random and can be used as an approximation of random numbers.
E.g. you could mix the system time into the seed of the PRNG to make the generated numbers seem more random, but if you had an exact copy of the first system with the same system time, it would generate the same numbers as the first system.
However, modern computers are not isolated machines; they have sensors and network connections, and these sources can be used to "harvest" entropy.
An example of this is the system behind https://www.random.org/, which measures atmospheric noise and provides it via the internet.
Whether there is true randomness in nature is open to debate.
In the realm of classical mechanics one assumes deterministic behaviour, but one may lose track of it quickly due to the huge number of interacting partners and the sometimes extreme sensitivity to initial conditions and amplification of errors. That is a kind of practical randomness.
In the realm of quantum mechanics there is the interesting middle ground that individual events can appear to occur at random (e.g. radioactive decay), but nonetheless the distribution of probabilities evolves according to strict laws, like Schrödinger's wave equation.
How would one figure out the difference?
A simple test is to use the random numbers as coordinates of points in 2D or 3D space and plot them.
The expectation is that the viewing plane or volume fills up uniformly. However, if patterns form, that is an indication of regularity in the sequence (a small plotting sketch is given after the links below). E.g. see https://www.random.org/analysis/
Or see this interesting article: Strange Attractors and TCP/IP Sequence Number Analysis.
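A minimal sketch of that 2D plot test, assuming matplotlib is installed:

```python
import random
import matplotlib.pyplot as plt

# Pair up consecutive pseudorandom values as (x, y) coordinates.
points = [(random.random(), random.random()) for _ in range(5000)]
xs, ys = zip(*points)

# A good generator should fill the unit square uniformly,
# with no visible stripes, lattices or clusters.
plt.scatter(xs, ys, s=1)
plt.title("Consecutive pairs from random.random()")
plt.show()
```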
Solution 3:
Many computer 'pseudo' random number generators are just calculated number sequences. Some of the early ones simply multiplied a number by a prime, then took a modulus, and kept the answer as the state for the next generation. They often repeated after ~64,000, ~16 million or ~2 billion numbers were generated. The newer ones usually have much longer sequences, but they are still 'pseudo' generated values: you can actually pick a seed and get the same sequence again. That is useful for simulation, where you might want to use the same random numbers every time while testing.
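A minimal sketch of such an old-style linear congruential generator; the constants here are of the kind used in some classic C libraries and are chosen purely for illustration:

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Toy linear congruential generator: x -> (a*x + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale the integer state into [0, 1)

gen = lcg(seed=1)
print([next(gen) for _ in range(5)])

# Re-creating the generator with the same seed reproduces the sequence,
# and the state must repeat after at most m steps.
gen2 = lcg(seed=1)
print([next(gen2) for _ in range(5)])
```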
Anyway, pseudorandom numbers are fairly useless for encryption, or anything where we don't want the user to 'guess' the value we used. So there are better versions that use computer 'entropy' (processor ticks, mouse movements, time at start-up, times at which processes were submitted, etc.) to add what people hope is genuine 'randomness' to the values.
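In Python, the operating system's entropy pool is exposed through os.urandom and the secrets module, which are the usual route when values must be unguessable; a minimal sketch:

```python
import os
import secrets

# Bytes drawn from the operating system's entropy pool (a CSPRNG),
# suitable for keys, tokens and other values an attacker must not guess.
key_bytes = os.urandom(16)
token = secrets.token_hex(16)

print(key_bytes.hex())
print(token)
```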
A good random number generator should be free from bias; it should produce results that appear statistically random. An example of non-randomness is when 0 and 1 are generated with too many or too few runs of 0s and 1s.
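As a toy illustration of that kind of check (a sketch only, nowhere near a full test battery such as Diehard or the NIST suite), one can count the runs in a stream of random bits and compare against the expected value:

```python
import random

# Generate a stream of random bits and count the "runs"
# (maximal blocks of consecutive identical bits).
N = 100_000
bits = [random.getrandbits(1) for _ in range(N)]
runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)

# For unbiased, independent bits the expected number of runs is (N + 1) / 2;
# a value far from that hints at bias or correlation between bits.
print(f"observed runs: {runs}, expected ~{(N + 1) / 2:.0f}")
```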