Successful approaches to modeling "randomness"
Solution 1:
One way to interpret your motivating examples is not that the word random is ill-defined (all of probability theory would disagree with that), but that you want a mathematically natural characterization and generalization of the notion of a uniform distribution. In that case, the answer could be the Haar measure on Lie groups (among other things). This is a measure that is invariant under the action of the group, and if you restrict it to a compact set you can normalize it to form a probability distribution.
For example, the real numbers form a Lie group under addition, and the corresponding Haar measure is nothing but the usual uniform measure on $\mathbb R$, which, restricted to $[0,100]$, gives the uniform distribution on that interval. We can tell that the distribution produced by uniformly picking a number in $[0,10]$ and squaring it is not uniform, because it is not invariant under addition (the probability of $[20,30]$ is not equal to the probability of $[20,30]+40 = [60,70]$).
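As a quick numerical check of this invariance argument, here is a minimal Python sketch (the intervals and sample sizes are just illustrative choices, not part of the argument above):

```python
import random

N = 1_000_000

# Uniform distribution on [0, 100]: translation-invariant within the interval.
uniform = [random.uniform(0, 100) for _ in range(N)]

# Squaring a uniform sample from [0, 10] also lands in [0, 100],
# but the resulting distribution is *not* translation-invariant.
squared = [random.uniform(0, 10) ** 2 for _ in range(N)]

def prob(sample, lo, hi):
    """Empirical probability that a draw falls in [lo, hi]."""
    return sum(lo <= x <= hi for x in sample) / len(sample)

# For the uniform sample, both intervals get probability ~0.10.
print(prob(uniform, 20, 30), prob(uniform, 60, 70))

# For the squared sample, [20, 30] gets ~0.100 while [60, 70] gets ~0.062:
# translating the interval by 40 changes the probability.
print(prob(squared, 20, 30), prob(squared, 60, 70))
```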
Similarly, when dealing with lines in the plane, the relevant Lie group is the Euclidean group of rigid motions of the plane, which comes equipped with a Haar measure. This induces a measure on the space of lines that is invariant under translations and rotations. When restricted to the lines that intersect a given circle, it gives you something you could objectively call "the" uniform distribution over chords of the circle. This corresponds to picking the chord's angle and its distance from the center uniformly, and matches Jaynes' solution of Bertrand's paradox using the principle of maximum ignorance.
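As an illustration, here is a short simulation sketch of this "uniform angle and distance" measure on chords. The benchmark question (borrowed from the usual statement of Bertrand's paradox, not from the text above) is the probability that a random chord is longer than the side of the inscribed equilateral triangle; under this measure it comes out to $1/2$:

```python
import math
import random

N = 1_000_000
r = 1.0                      # circle radius
side = r * math.sqrt(3)      # side of the inscribed equilateral triangle

count = 0
for _ in range(N):
    # The invariant measure: distance from the center uniform on [0, r].
    # (The chord's angle, uniform on [0, 2*pi), does not affect its length.)
    d = random.uniform(0, r)
    chord_length = 2 * math.sqrt(r * r - d * d)
    count += chord_length > side

print(count / N)  # ~0.5, matching Jaynes' resolution of Bertrand's paradox
```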
The field of integral geometry deals with exactly this sort of thing: the properties of geometric objects under measures that are invariant under the symmetry group of the ambient space. It has many interesting results, such as the Crofton formula, which states that the length of any curve is proportional to the expected number of times a "random" line intersects it. Of course, this could not be a theorem without precisely formalizing what it means for a line to be random.
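To make this concrete, here is a Monte Carlo sketch of the Crofton formula (the segment, disk radius, and parameterization are my own illustrative choices). Lines are parameterized as $x\cos\theta + y\sin\theta = p$, so the invariant measure is $dp\,d\theta$, and the formula reads $\text{Length} = \frac12 \int_0^\pi \int_{\mathbb R} n(p,\theta)\, dp\, d\theta$, where $n$ counts intersections with the curve:

```python
import math
import random

# Segment whose length we estimate via the Crofton formula.
A = (0.0, 0.0)
B = (1.0, 1.0)
true_length = math.hypot(B[0] - A[0], B[1] - A[1])  # sqrt(2)

R = 2.0          # the segment lies inside the disk of radius R
N = 1_000_000

hits = 0
for _ in range(N):
    # A "random line": theta uniform on [0, pi), signed offset p uniform on [-R, R].
    # This is the measure dp dtheta, invariant under rotations and translations.
    theta, p = random.uniform(0, math.pi), random.uniform(-R, R)
    c, s = math.cos(theta), math.sin(theta)
    # The line x*cos(theta) + y*sin(theta) = p crosses the segment
    # iff the endpoints lie on opposite sides of it.
    fa = A[0] * c + A[1] * s - p
    fb = B[0] * c + B[1] * s - p
    hits += (fa * fb) <= 0

# Crofton: Length = (1/2) * integral of n(p, theta) dp dtheta.
# The sampling box has measure pi * 2R, so the integral is (2*pi*R) * mean(n).
estimate = 0.5 * (2 * math.pi * R) * (hits / N)
print(estimate, "vs", true_length)  # ~1.414
```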
Solution 2:
When someone says 'random', there should be a distribution that goes along with it. In your example, to pick a random $x$ from $[0,100]$, it is implied that you pick $x$ from a uniform distribution. Of course, as you pointed out, using a different distribution will give you a different result.
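For instance, here is a tiny sketch (the threshold 50 is an arbitrary choice for illustration) showing that "the probability that a random $x$ from $[0,100]$ is below 50" depends entirely on which distribution "random" refers to:

```python
import random

N = 1_000_000

# "Random x from [0, 100]" under the implied uniform distribution:
p_uniform = sum(random.uniform(0, 100) < 50 for _ in range(N)) / N

# The same question under a different distribution on [0, 100]
# (the square of a uniform draw from [0, 10]):
p_squared = sum(random.uniform(0, 10) ** 2 < 50 for _ in range(N)) / N

print(p_uniform)  # ~0.50
print(p_squared)  # ~0.71  (= sqrt(50) / 10)
```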
The point is, 'random' needs a distribution to define it.
Solution 3:
A common abuse of language is to say "let $x$ be a random foo" when one really means "let $x$ be a random variable uniformly distributed over all foo".
It is also a common abuse of language to use "random" to mean "something that appears too hard to predict".
The fundamental rationale behind applying probability distributions to real world observations is really a matter of metaphysics, not mathematics.
Solution 4:
If we look at dice or a pseudo-random number generating algorithm, we need to know the laws (the physics or the algorithm) and the initial conditions in order to predict the result.
Based on that, here is my attempt to define randomness:
If the value of a function $f\left( x_1, x_2, \ldots \right)$ cannot be predicted even when the values of all the variables $x_1, x_2, \ldots$ are known, then the value of the function is a random number.
Any comments on this are welcome.
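To illustrate the "laws plus initial conditions" point for algorithms, here is a minimal sketch of a linear congruential generator (the constants are the classic ones from Numerical Recipes; any would do): anyone who knows the update rule and the seed can reproduce every "random" output exactly.

```python
def lcg(seed, n):
    """Linear congruential generator: x -> (a*x + c) mod m.
    Constants from Numerical Recipes."""
    a, c, m = 1664525, 1013904223, 2**32
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)  # scale to [0, 1)
    return out

# Knowing the "laws" (the update rule) and the "initial condition" (the seed)
# makes the sequence completely predictable:
print(lcg(42, 3))
print(lcg(42, 3))  # identical: nothing random once seed and algorithm are known
```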
Truly random numbers in real life
I disagree that a truly random number is impossible.
It would be hard to believe that a deterministic algorithm could produce random results. But an algorithm is not the only way to produce a number: you just need to assign numbers to the possible outcomes of some random process to get random numbers. In quantum physics, the result of an experiment is random.
Example 1
The state of a particle is described by a wave function $\psi \left( q \right)$. The probability of finding the particle in a region $\delta$ is $\int\limits_\delta \left| \psi \left( q \right) \right| ^2 \, dq$. If you perform many experiments, you'll get results distributed according to $\left| \psi \left( q \right) \right| ^2$, but the outcome of a single experiment is a random number drawn from that distribution.
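As a concrete (and entirely hypothetical) instance, take the ground state of a particle in a unit box, $\psi(q) = \sqrt{2}\sin(\pi q)$ on $[0,1]$, so $|\psi(q)|^2 = 2\sin^2(\pi q)$; a rejection-sampling sketch produces "measurement outcomes" distributed accordingly:

```python
import math
import random

def measure_position():
    """One simulated position measurement for the ground state of a
    particle in a unit box: |psi(q)|^2 = 2 * sin(pi*q)**2 on [0, 1].
    Rejection sampling works because the density is bounded above by 2."""
    while True:
        q = random.uniform(0, 1)
        if random.uniform(0, 2) < 2 * math.sin(math.pi * q) ** 2:
            return q

# A single measurement is a random number...
print(measure_position())

# ...but many measurements reproduce the |psi|^2 distribution;
# e.g. the sample mean should approach 0.5 by symmetry.
samples = [measure_position() for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~0.5
```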
Example 2
Let's look at another experimental situation: light travels along the $z$ axis and is polarized along the $x$ axis. It falls on a polarizer whose axis of polarization is not aligned with the $x$ axis. For simplicity, let's place the polarizer so that the angle between its axis and the $x$ axis is $\pi/4$. In that case half of the light goes through and half is absorbed. For individual photons this means that each photon will randomly either be absorbed or transmitted, with equal probability.
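A per-photon simulation of these statistics is straightforward (this sketches only the statistics, not the underlying physics): by Malus's law each photon passes with probability $\cos^2\theta$, which is $1/2$ for $\theta = \pi/4$.

```python
import math
import random

theta = math.pi / 4            # angle between the photon and polarizer axes
p_pass = math.cos(theta) ** 2  # Malus's law: transmission probability, here 0.5

N = 1_000_000
passed = sum(random.random() < p_pass for _ in range(N))

# Each individual photon's fate is unpredictable, but the aggregate
# fraction transmitted approaches cos^2(theta) = 1/2.
print(passed / N)  # ~0.5
```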
The second experiment is almost the same as a coin toss, but when tossing a coin one could think:

if I knew the initial speed, the position, and everything else precisely enough, I could predict its final position without any randomness

In the case of photons, however, there are no such underlying variables: the result of a single experiment is fundamentally unpredictable (random), yet the results of many (infinitely many) experiments approach the 50/50 distribution.
Furthermore, the processes in the atmosphere are very unstable, so even a tiny amount of quantum randomness might be enough to make a macroscopic result, such as a real coin toss, random.