Mathematical ideas that took long to define rigorously
Solution 1:
The notion of probability has been in use since the Middle Ages, or perhaps earlier. But it took quite a while to formalize probability theory and give it a rigorous basis, and that happened only in the first half of the 20th century. According to Wikipedia:
There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation, sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as usually understood.
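For concreteness, here is the Kolmogorov formulation in its usual textbook shape (a standard summary, not part of the quoted passage). A probability space is a triple $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a set of outcomes, $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$ (the events), and $P : \mathcal{F} \to [0,1]$ satisfies
$$P(\Omega) = 1, \qquad P\left(\bigcup_{n=1}^{\infty} A_n\right) = \sum_{n=1}^{\infty} P(A_n) \quad \text{for pairwise disjoint } A_1, A_2, \ldots$$
Everything else (conditional probability, independence, random variables as measurable functions) is then defined in terms of these three objects.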
Solution 2:
Natural transformations are a "natural" example of this. Mathematicians knew for a long time that certain maps--e.g. the canonical isomorphism between a finite-dimensional vector space and its double dual, or the identifications among the varied definitions of homology groups--were more special than others. The desire to have a rigorous definition of "natural" in this context led Eilenberg and Mac Lane to develop category theory. As Mac Lane allegedly put it:
"I didn't invent categories to study functors; I invented them to study natural transformations."
Solution 3:
Euclidean geometry. You think calculus lacked a rigorous foundation? Non-Euclidean geometry? How about plain old Euclidean geometry itself? You see, even though Euclid's Elements invented rigorous mathematics, even though it pioneered the axiomatic method, even though for thousands of years it was the gold standard of logical reasoning - it wasn't actually rigorous.
The Elements is structured to seem as though it openly states its first principles (the infamous parallel postulate being one of them), and as though it proves all its propositions from those first principles. For the most part, it accomplishes that goal. In notable places, though, the proofs make use of unstated assumptions. Some proofs are blatant non-proofs: to prove side-angle-side (SAS) congruence of triangles, Euclid tells us to just "apply" one triangle to the other, moving it so that the vertices end up coinciding. There is no axiom about moving one figure onto another! Other proofs have more insidious omissions. Proposition 1 constructs an equilateral triangle on a segment $AB$ by drawing two circles of radius $AB$, one centered at each endpoint; does there exist any point where the circles intersect? It is "visually obvious", and Euclid assumes so, but the assumption does not follow from the axioms.
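For the record, the missing principle can be stated as an explicit continuity axiom (one standard modern formulation, not Euclid's words): if one circle passes through a point inside a second circle and through a point outside it, then the two circles intersect. In the Proposition 1 configuration this hypothesis is satisfied: the circle centered at $A$ passes through $B$, which lies inside the circle centered at $B$, and through the point diametrically opposite $B$, which lies at distance $2\,AB > AB$ from $B$ and hence outside it. An axiom of this shape is exactly what licenses the step Euclid takes for granted.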
In general, the Elements pays little attention to issues of whether things really intersect in the places you'd expect them to, or whether a point is really between two other points, or whether a point really lies on one side of a line or the other, etc. We all "know" these concepts, but to avoid the trap of, say, a fake proof that all triangles are isosceles, a rigorous approach to geometry must address these concepts too.
It was not until the work of Pasch, Hilbert, and others in the late 1800s and early 1900s that truly rigorous systems of synthetic geometry were developed, with the axiomatic treatment of "betweenness" being a key new fundamental idea. Only then, millennia after the journey began, were the elements of Euclidean geometry truly accounted for.
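To give the flavour of what those later systems supply, here is Pasch's axiom in roughly Hilbert's formulation (quoted from memory, so take the wording as approximate): let $A$, $B$, $C$ be three points not lying on a line, and let $\ell$ be a line in their plane passing through none of them; if $\ell$ passes through a point of the segment $AB$, then it also passes through a point of the segment $AC$ or of the segment $BC$. Statements of this kind, about betweenness and sidedness, are exactly the ones the Elements uses without ever writing down.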
Solution 4:
Following on from the continuity example, in which the $\epsilon$-$\delta$ formulation eventually became ubiquitous, I submit the notion of the infinitesimal. It took until Robinson's work in the early 1960s before we had "the right construction" of infinitesimals via ultrapowers, one that made manipulating infinitesimals fully rigorous as a way of dealing with the reals. They had been a very useful tool for centuries before then: Cauchy, for example, used them regularly and attempted to formalise them without success, and Leibniz's calculus was defined entirely in terms of infinitesimals.
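A thumbnail of the ultrapower construction, with the details suppressed: fix a nonprincipal ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and define
$${}^{*}\mathbb{R} = \mathbb{R}^{\mathbb{N}} / \sim, \qquad (a_n) \sim (b_n) \iff \{\, n : a_n = b_n \,\} \in \mathcal{U}.$$
Arithmetic and order are defined coordinatewise "$\mathcal{U}$-almost everywhere", Łoś's theorem says ${}^{*}\mathbb{R}$ satisfies exactly the same first-order statements as $\mathbb{R}$, and the class of the sequence $(1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots)$ is a positive element smaller than every standard positive real, i.e. a genuine infinitesimal.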
Of course, there are other systems which contain infinitesimals - for example, the field of formal Laurent series, in which the variable may be viewed as an infinitesimal - but e.g. the infinitesimal $x$ doesn't have a square root in this system, so it's not ideal as a place in which to do analysis.
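The obstruction is a one-line valuation argument: a nonzero Laurent series $f$ has an integer order $v(f)$ (the exponent of its lowest-degree term), and $v(f^2) = 2\,v(f)$ is always even, while $v(x) = 1$ is odd, so no $f$ satisfies $f^2 = x$. Enlarging to Puiseux series (fractional exponents) repairs that particular defect, but the hyperreals deal with it, and much more, in one stroke.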