Were "real numbers" used before things like Dedekind cuts, Cauchy sequences, etc. appeared?
See e.g.:
- Leonhard Euler, Elements of Algebra (3rd ed., English translation by John Hewlett, 1822), pages 1-2:
ARTICLE I
Whatever is capable of increase or diminution, is called magnitude, or quantity.
[...] §4. The determination, or the measure of magnitude of all kinds, is reduced to this: fix at pleasure upon any one known magnitude of the same species with that which is to be determined, and consider it as the measure or unit; then, determine the proportion of the proposed magnitude to this known measure. This proportion is always expressed by numbers; so that a number is nothing but the proportion of one magnitude to another arbitrarily assumed as the unit.
§5. From this it appears, that all magnitudes may be expressed by numbers; and that the foundation of all the Mathematical Sciences must be laid in a complete treatise on the science of Numbers, and in an accurate examination of the different possible methods of calculation. This fundamental part of mathematics is called Analysis, or Algebra.
And page 39:
§128. There is therefore a sort of numbers, which cannot be assigned by fractions, but which are nevertheless determinate quantities; as, for instance, the square root of $12$: and we call this new species of numbers, irrational numbers. They occur whenever we endeavour to find the square root of a number which is not a square; thus, $2$ not being a perfect square, the square root of $2$, or the number which, multiplied by itself, would produce $2$, is an irrational quantity. These numbers are also called surd quantities, or incommensurable.
Yes, mathematicians used the concept of a real number long before rigorous definitions arose, just as they used complex numbers before the Argand plane was described, and the Dirac delta function before distribution theory made it rigorous. Intuition almost always arises before rigour.
In modern times, the style has become to model every mathematical discipline within the theory of sets. Before the late 19th century, set theory was virtually nonexistent. Newton could not possibly have come up with Dedekind cuts, Cauchy sequences, and the like, for one needs some intuition about set theory even to interpret these constructions. A rigorous treatment in his day would instead have proceeded from an axiom system in the style of Euclid (though not a system as rigorous as those in the mathematical logic of today).
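For concreteness, here is a sketch of one modern construction (a Dedekind cut; conventions vary between authors, so take this as illustrative): a real number is a downward-closed set $A \subseteq \mathbf{Q}$ with $\emptyset \neq A \neq \mathbf{Q}$ and no greatest element, e.g.

$$\sqrt{2} \;=\; \{\, q \in \mathbf{Q} : q < 0 \ \text{ or } \ q^2 < 2 \,\}.$$

Every clause here quantifies over infinite sets of rationals, precisely the apparatus that was unavailable before the late 19th century.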
All constructions of the real numbers above are meant to construct a "completion" of the rational line $\mathbf{Q}$. Most of the time, we prove things about $\mathbf{R}$ from abstract field axioms with a completeness axiom included (the least upper bound principle, etc.). Newton could have proceeded along this route, but if we look at his work we do not find this style of axiomatics. Most of Newton's proofs are not analytical -- they do not involve numbers. If you read the Principia, you will find that most proofs proceed by geometrical diagrams, like the Greeks. Newton's main method is to treat diagrams infinitesimally, using intuition to obtain results about the limits of diagrams (for instance, a line between two points on a circle becomes, when the points are taken infinitesimally close, tangent to the circle, i.e. perpendicular to the radius). In geometry, one uses much more intuition than rigour.

In particular, I imagine that at some point in the Principia Newton applies (via intuition) a disguised geometric form of the intermediate value theorem -- a theorem which, if taken axiomatically, implies the completeness of the real numbers, and thus implies that Newton really was using the reals. This theorem is 'obvious' to the uninitiated, but in introductory analysis courses one finds the idea is much more subtle. Remember that for these kinds of principles to be scrutinized, one needs paradoxes that challenge thought, which space-filling curves and nowhere-differentiable functions later provided in ample amount.
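To make that last claim precise, here is the modern statement (a sketch; the equivalence with completeness is meant over an ordered field): if $f : [a,b] \to \mathbf{R}$ is continuous and $f(a) < 0 < f(b)$, then $f(c) = 0$ for some $c \in (a,b)$. Without completeness this fails; over $\mathbf{Q}$,

$$f(x) = x^2 - 2, \qquad f(1) = -1 < 0 < 2 = f(2),$$

is continuous yet has no rational root. Any geometric argument that tacitly lets a curve cross zero is therefore leaning on the completeness of the reals.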
Of course, there was still controversy at the time. The philosopher George Berkeley in particular criticized the method:
It must, indeed, be acknowledged, that [Newton] used Fluxions, like the Scaffold of a building, as things to be laid aside or got rid of, as soon as finite Lines were found proportional to them. But then these finite Exponents are found by the help of Fluxions. Whatever therefore is got by such Exponents and Proportions is to be ascribed to Fluxions: which must therefore be previously understood. And what are these Fluxions? The Velocities of evanescent Increments? And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the Ghosts of departed Quantities?
Nonetheless, the results Newton obtained were correct, so this backlash never gained much traction.