There are two meanings of "foundational research".

If you just mean mathematical logic (comprising computability, set theory, model theory, and proof theory), there is a lot of ongoing research in those fields. Of course the cutting-edge results are usually technical, but the same can be said for every other well-developed area of mathematics. Nobody would read a paper by Galois and think that it is reflective of cutting-edge work in algebra, or read work by Cauchy and think that it is reflective of current research in analysis. Similarly, it's a mistake to read papers in mathematical logic from the first half of the 20th century and think that they are reflective of current research in the field. If you want to see current work, you could look at the Journal of Symbolic Logic or the Journal of Mathematical Logic, both of which are well-regarded research journals in the field.

Sometimes "foundational research" is used in a different sense, to mean work that is supposed to provide some sort of philosophical foundation for mathematics. For better or worse this is not the direct aim of most researchers in mathematical logic, although they are happy if their work does help provide insight into foundational issues. The idea that there is some "universal foundation" on which all of mathematics is built is much more difficult to defend in light of what we currently know, compared to what people knew in 1900 or 1930.

One recent example of the interplay between technical research and foundational insight is in algorithmic randomness. This field was initiated in the 1960s, but in the 2000s there was an explosion of new work, much of which is documented in the recent 855-page book Algorithmic Randomness and Complexity by Downey and Hirschfeldt. While many of the results appear technical to outsiders, they do provide a much clearer foundational picture of randomness than anyone had in 1995. They do this in the modern style, by deeply exploring and comparing the mathematics of multiple notions of effective randomness.


I wouldn't take the response there as particularly close to reality. Unfortunately, it may well reflect a consensus in a large part of academia.

The problem is that non-logicians tend to view logic as a rather bizarre and esoteric subject. But it is not. A great example is large cardinal axioms. I suspect that people outside the circle of set theorists think what the poster in that forum message said: that large cardinal axioms are things set theorists invent to pass the time. But the reality is otherwise. Measurable cardinals were proposed essentially as an axiom that could settle many undecidable but "expected" results that are independent of ZF (or ZFC, or ZFC+CH).

What kind of results are these? One of them is that for a certain 2-player game, involving a set of reals that is not very complicated (in a specific sense, called analytic), one of the players must have a winning strategy. Why is this result expected? To find a game in which neither player has a winning strategy, one needs to invoke the axiom of choice, which usually means a really complicated set is involved. In turn, this has further applications in other areas, since "natural" and "intuitive" sets are usually analytic or similar.
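To make the game above concrete, here is the standard formulation of infinite games on the naturals and the determinacy statement involved (the notation $G(A)$ is the usual one from descriptive set theory, not from the original post):

```latex
% Players I and II alternate choosing natural numbers:
%   I:  n_0        n_2        n_4   ...
%   II:      n_1        n_3         ...
% together producing an infinite sequence x = (n_0, n_1, n_2, ...).
% Fix a payoff set A \subseteq \omega^\omega; player I wins the game G(A)
% iff the resulting sequence x lies in A.
\[
  A \subseteq \omega^{\omega} \text{ is \emph{determined} iff some player
  has a winning strategy in } G(A).
\]
% Martin (1970): if a measurable cardinal exists, then every analytic set
% A is determined.  Using the axiom of choice one can build a set A that
% is not determined, but any such set is necessarily complicated
% (in particular, not analytic).
```

This is exactly the pattern described above: the undetermined sets produced by choice are far from analytic, so determinacy for "simple" sets is the expected behavior, and a measurable cardinal is what makes it provable.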

Then there are other areas of logic, about which I cannot say as much (I'm a graduate student in set theory). I do know that model theory has a lot of ongoing research connected with "more popular" parts of math such as algebraic geometry. There is also work combining lambda calculus, set theory, and computer science, or recursion theory and randomness. There are quite a lot of groups at good universities working in logic, and there is no reason for it to be thought of as a dead subject.