A formal definition of a variable.

There are three different notions mentioned in your question.

Variables in logic

A variable in logic is just a symbol that can be used in existential or universal quantification. In English we can say "Not everything is perishable." but in logic we'd have to say something like:

$\neg \forall x \in Things\ ( Perishable(x) )$.

Here "$x$" is merely a symbol that appears in two places, the quantification as well as the quantified statement, in order to precisely describe the range of all entities that are claimed to be perishable.

Depending on the kind of logic system you are working in or talking about, you might have free variables and bound variables, the former being unquantified and the latter being quantified. Personally, I find it better in practice not to have free variables at all. Why do I say so? Consider "$x^2 \ge 0$". It is usually understood correctly because of the context, but it is not actually precise. Better is "$\forall x \in \mathbb{R}\ ( x^2 \ge 0 )$", assuming that "$\ge$" and "$^2$" have previously been defined for real numbers.
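If it helps, the bound/free distinction has a close analogue in programming. As a rough sketch, the comprehension variable in the Python snippet below plays the role of a bound variable: it exists only inside the quantified expression, just as the "$x$" above exists only inside the scope of "$\forall x$". (The finite range is purely illustrative, not a quantification over all of $\mathbb{R}$.)

```python
# "forall x in {-3..3} (x^2 >= 0)" rendered as a bounded check over a
# finite sample.  The generator's x is a bound variable: it is not
# visible outside the expression, just like a quantified variable.
claim = all(x**2 >= 0 for x in range(-3, 4))
print(claim)  # True
```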

Variables in calculus

Variables here are totally different from the variables in logic. This was the original usage of variables, and the name was apt, since each variable referred to a varying quantity, usually varying with respect to time. If, say, we had an object moving in a circle around the origin, and we expressed its position in Cartesian coordinates, then over time its coordinates $x,y$ would be variables. We could then define the following:

At any time $t$, let $Δx$ be the change in $x$ from that time.

Note that when $t$ changes to $t+Δt$, $x$ changes to $x+Δx$. For nonzero $Δt$, the ratio $\frac{Δx}{Δt}$ is interesting to consider, since it captures how much $x$ changes with respect to $t$. Generalizing, if $Δx$ tends to (but is never equal to) $0$ as $Δt$ tends to $0$, we can also consider the ratio $\frac{Δy}{Δx}$, and then define the rate of change of $y$ with respect to $x$ as follows:

If $\frac{Δy}{Δx}$ eventually stabilizes at a value $r$ as $Δt$ tends to $0$, then let $\frac{dy}{dx} = r$, otherwise we say that $\frac{dy}{dx}$ is undefined.
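You can watch this stabilization numerically. The sketch below uses the circular-motion example with the concrete choice $x = \cos t$, $y = \sin t$ (my choice for illustration; any smooth motion would do). As $Δt$ shrinks, $\frac{Δy}{Δx}$ settles toward $\frac{\cos t}{-\sin t}$, the value the definition assigns to $\frac{dy}{dx}$:

```python
# Watching the ratio dy/dx stabilize as dt shrinks, for a point moving
# on the unit circle: x = cos(t), y = sin(t), evaluated at t = 0.5.
import math

t = 0.5
for dt in [0.1, 0.01, 0.001, 0.0001]:
    dx = math.cos(t + dt) - math.cos(t)  # the change in x
    dy = math.sin(t + dt) - math.sin(t)  # the change in y
    print(f"dt={dt:<7} dy/dx = {dy / dx:.6f}")

# the value the ratio stabilizes at: cos(t) / (-sin(t))
print(f"limit:     {-math.cos(t) / math.sin(t):.6f}")
```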

This kind of reasoning was what originally led to calculus, and in this setting it is very useful to treat variables as varying quantities. Notice that this viewpoint explains trivially why we can do the following:

Take any variables $x,y$ varying with time, and let $z = x^2 y$. Then $z$ also is a variable, and if $\frac{dx}{dt},\frac{dy}{dt}$ exist then $\frac{dz}{dt} = 2xy \frac{dx}{dt} + x^2 \frac{dy}{dt}$.
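That product-rule identity can be sanity-checked numerically. The sketch below again picks $x = \cos t$, $y = \sin t$ (an illustrative choice, not part of the claim) and compares a finite-difference approximation of $\frac{dz}{dt}$ against the formula:

```python
# Numerical check of d(x^2 y)/dt = 2xy (dx/dt) + x^2 (dy/dt)
# for the concrete choice x = cos(t), y = sin(t).
import math

t = 1.2
h = 1e-6

def z(t):
    x, y = math.cos(t), math.sin(t)
    return x * x * y

# central-difference approximation of dz/dt at t
numeric = (z(t + h) - z(t - h)) / (2 * h)

# the product-rule formula, with dx/dt = -sin(t) and dy/dt = cos(t)
x, y = math.cos(t), math.sin(t)
formula = 2 * x * y * (-math.sin(t)) + x * x * math.cos(t)

print(abs(numeric - formula) < 1e-6)  # True: they agree to high precision
```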

Variables in probability theory

The third kind of variable is the random variable in probability theory, which is different from the earlier two kinds of variables. A random variable is best thought of as a generator of values from its probability distribution rather than a single value.

We also define basic arithmetic on random variables and between random variables and ordinary values, so that we can do things like:

Take any real random variables $X,Y$, and let $Z = \frac{1}{2}(X+Y)$. Then $Z$ is also a random variable, and $\mathbb{E}(Z) = \frac{1}{2}(\mathbb{E}(X)+\mathbb{E}(Y))$.

Comments

You can see that random variables and variables in calculus are handled in similar ways, because this treatment follows intuition. Of course, if one wants absolute rigour, some care has to be taken in setting up the necessary foundations, but it can be done, which is why most mathematicians manipulate these variables in this manner instead of doing everything in set theory over first-order logic. There is no need to worry that these manipulations are non-rigorous, as long as one strictly follows the rules governing the notation.

It is akin to our use of the standard axiomatization of the real numbers rather than the use of an actual structure satisfying those axioms. If you know a bit of programming, the axiomatization is just an interface (contract) while the actual structures are implementations (concrete instances).
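Here is a minimal sketch of that analogy in code. All the names (`OrderedFieldLike`, `Rational`, `square_plus_self`) are hypothetical, invented for illustration: the abstract class plays the role of the axiomatization, a concrete class plays the role of one particular structure, and the "client" function only ever uses the interface, just as a theorem only ever uses the axioms:

```python
# "Axioms as interface": client code depends only on the abstract
# operations, never on which concrete structure implements them.
from abc import ABC, abstractmethod
from fractions import Fraction

class OrderedFieldLike(ABC):       # the "axiomatization" (interface)
    @abstractmethod
    def add(self, other): ...
    @abstractmethod
    def mul(self, other): ...

class Rational(OrderedFieldLike):  # one "implementation" (structure)
    def __init__(self, p, q=1):
        self.value = Fraction(p, q)
    def add(self, other):
        return Rational(self.value + other.value)
    def mul(self, other):
        return Rational(self.value * other.value)

# A "theorem": written purely against the interface.
def square_plus_self(x: OrderedFieldLike) -> OrderedFieldLike:
    return x.mul(x).add(x)

print(square_plus_self(Rational(3, 2)).value)  # prints 15/4
```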

Code written against an interface is more robust than code tied to a particular implementation, and hence preferable. This is not just a philosophical viewpoint but a practical one. Should we want to translate our existing mathematical knowledge into a different underlying formal system, such as type theory instead of set theory, the translation would be far easier for those structures that we access only through axiomatizations or abstract notation.


Here is an analogy with programming, in case you have some experience with that. A variable in a programming language is not an "object"; it is a "name" for an object, seen only by the compiler. Variable names make it possible for a programmer to refer to multiple data objects coherently, so it is clear which data object each part of the code refers to. Once a program is fully compiled into machine code, there are, so to speak, no variable names left. Unless the compiler includes "debugging info", it is not possible to tell what name was originally used for a data object solely by inspecting the compiled machine code.
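The name/object split is directly visible in a language like Python, where it holds even before compilation. In this sketch, two names refer to one object, and the object itself keeps no record of the names used for it:

```python
# A variable is a name bound to an object, not the object itself.
a = [1, 2, 3]
b = a             # "b" is a second name for the very same object
b.append(4)       # mutate the object through one name...
print(a)          # prints [1, 2, 3, 4] -- visible through the other name
print(a is b)     # prints True: one object, two names
```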

Similarly, syntactic variables in mathematics -- expressions such as "x", "t", "Q", etc. -- are not mathematical objects, they are names that mathematicians use in their writing to refer to mathematical objects. Just as a compiled program no longer has variable names, the mathematical objects themselves don't have variable names.

The definition of a variable is, foremost, a definition within the language of mathematics. Most mathematics is carried out in natural language, which has no formal grammar. In logic, we sometimes study formalized languages, which do have formal grammars. In such a language, a certain collection of expressions is chosen at the outset to be the "variables", and the semantics of the formal language allow these variables to refer to various mathematical objects.