Do non-mathematical fields use the appropriate level of analytic/probabilistic rigor?
Talking to students in different areas, and taking different classes in math, physics, and electrical engineering, I have been struck by the differing levels of rigor in use.
I know little about economics, but I'm told that economists use measure theory and prove rigorous theorems. Yet in physics, perhaps the most mathematical of the sciences, they solve differential equations and whatever else they do without (usually, as far as I know) having taken real analysis. Is this for purely historical reasons, or do economists really have more need for, say, existence theorems?
Another question I have is about probability. It is taught in different ways to undergraduates (with separate formulae for discrete and continuous random variables) and to graduate students (measure-theoretically). Is there some great reason for this? I guess to do Brownian motion and the like you need measure theory (is this true?), but does measure theory give you better theorems or proofs than the undergraduate approach where both are possible? In information theory they seem happy with undergraduate-level stochastic processes in their textbooks. Would they be improved by using measure theory?
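To make the Brownian motion point concrete, here is the undergraduate-style picture I have in mind: a rescaled coin-flip random walk. This is only a toy sketch of my own (the rigorous statement that such walks converge to Brownian motion is Donsker's theorem, which is presumably where the measure theory comes in):

```python
# Brownian motion as the scaling limit of a simple random walk: the
# measure-theory-free picture. Making the limit precise (Donsker's
# theorem) is where the measure theory enters.
import numpy as np

rng = np.random.default_rng(0)

def approximate_brownian_path(n_steps, t_max=1.0):
    """Random-walk approximation of W_t on [0, t_max]."""
    dt = t_max / n_steps
    steps = rng.choice([-1.0, 1.0], size=n_steps)        # fair coin flips
    path = np.concatenate([[0.0], np.cumsum(steps)]) * np.sqrt(dt)
    times = np.linspace(0.0, t_max, n_steps + 1)
    return times, path

t, w = approximate_brownian_path(10_000)
print(w[-1])  # approximately N(0, 1) across independent runs
```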
So, as mathematicians (or perhaps physicists, economists, information theorists, etc., if that is who you are), do you think the different fields have it right?
I think the shortest answer is that if these other fields don't have enough rigor, the mathematicians will make up for it. In fact, a large number of important mathematical problems amount to just that: mathematicians working to fill in the gaps physicists have left in their theories.
On the other hand, if an economist tried to publish some grand result that used flawed mathematics, it certainly wouldn't pass through the economics community unnoticed. That being said, I have read some (applied) computer science papers that spin a result to sound much grander than it is, specifically by appealing to a lot of semi-relevant mathematical abstraction.
As noted in the comments, a random theoretical physics PhD might not know measure theory, but there are certainly many mathematicians without mastery of physics working on physics equations. Similarly, an economist is unlikely to know group theory, while a (quantum) physicist must know it. The point is that as a community we can achieve greater results.
As to the reason measure theory isn't taught to undergraduates: it's hard! Many undergraduates struggle with real analysis, and even the basic proofs underlying rigorous measure theory require mastery of a first course in real analysis, which is a stretch at a lot of universities, especially for non-mathematics majors. (Of course, at some prestigious schools undergraduate calculus is taught with Banach spaces, so I'm talking about the general undergraduate populace.)
In every field more rigor is better than less, but there are costs and tradeoffs.
More mathematics means discouraging some students who could otherwise learn the subject and contribute to it. Or it means replacing parts of the subject's curriculum with mathematics, teaching less physics/economics/engineering/etc. Or it means prolonging the training time.
Use of a complicated formalism makes it harder to communicate results. A published paper using concrete probability will be read by more people than a paper using measure theory.
For software, a formal correctness proof is great, but the time and effort involved in attaining that rigor could have been spent on securing against other failure modes, such as hacking or disrupted communication between nodes.
In engineering, simulations, prototypes, stress tests, and other forms of modeling count for more than a perfect mathematical analysis of a simpler case. Both can be trumped by "lack of rigor" in executing the blueprints (e.g. structures collapsing from the use of cheap materials, or an airplane crash caused by ice left on the wings). As impressive as it was for a mathematician to have detected the Intel Pentium bug, no system failure was ever attributed to that error, yet a lot of engineering simulations were run on those faulty Pentia.
Classical physics describes systems that already exist and satisfy uniqueness (causality). So technical theorems about differential equations are less important than getting a model that works. Similar statements can be made about probabilistic theories such as quantum or statistical mechanics (which are also causal, but in a different sense). Of course physicists will grab whatever looks useful from mathematics, but the test is applicability rather than formal correctness.
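To illustrate that workflow, here is a minimal sketch (the pendulum model and the particular integrator are my own illustrative choices): the physicist writes down the model and integrates it numerically, taking existence and uniqueness of the solution for granted.

```python
# Typical physics workflow: write down the model, integrate it numerically,
# compare with experiment. Existence/uniqueness of the solution is assumed,
# not proved.
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0  # gravity (m/s^2), pendulum length (m)

def pendulum(t, y):
    """Nonlinear pendulum: theta'' = -(g/L) * sin(theta)."""
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

sol = solve_ivp(pendulum, (0.0, 10.0), y0=[0.5, 0.0])
print(sol.y[0, -1])  # angle at t = 10 s
```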
Theoretical economics studies toy models of complicated systems. It is a given that the models will not describe reality, and rigor allows the researchers to claim that they have accomplished something, isolating concepts from the simplified models that might be important for application to "real" economics. Many economists think that highly mathematical approaches to their field are misguided or harmful (such as creating overconfidence in models whose assumptions are rarely satisfied in practice). There have been suggestions to stop awarding economics Nobel prizes for mathematical work, or to cancel the prize because it has been promoting the mathematization of the field. The Black-Scholes-Merton formalism of continuous-time arbitrage-free trading is an elegant theory that led to Nobel prizes and a spectacular hedge fund disaster (the LTCM bailout), as well as a drastic expansion of the derivatives market implicated in the 2008 failure of the financial system. A lesser impression of mathematical rigor or correctness might have reduced the problems.
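For reference, the closed-form call price the Black-Scholes-Merton formalism produces is strikingly tidy; here is a transcription of the standard formula (the numbers in the last line are arbitrary):

```python
# Black-Scholes price of a European call: the standard closed-form output
# of the continuous-time arbitrage-free framework discussed above.
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

print(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # ~10.45
```

The tidiness is part of the trap: the formula is exact only under the model's assumptions (constant volatility, frictionless continuous trading), which is precisely the overconfidence described above.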
Because of specialization, it largely depends on what the student wants to do or know and what job they intend to get. I spent three years in an applied mathematics PhD program, and we were required to take a full year of graduate-level measure theory and topology in the Math department. We also had our own year-long sequence in measure-theoretic probability theory and applied functional analysis.
Statistics is one area where I feel that most departments do an inadequate job, including math departments. I work in financial research now, and there is a really widespread misunderstanding of things like p-values and the pitfalls of frequentist methods in many fields. Bayesian methods, and in particular the rigorous functional analysis behind Markov chain methods, are under-appreciated among practitioners.
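As a concrete instance of the kind of misunderstanding I mean, here is a toy simulation of my own: screen many true null hypotheses at alpha = 0.05 and about 5% of them come out "significant" anyway.

```python
# Classic p-value pitfall: testing many true null hypotheses at
# alpha = 0.05 still flags about 5% of them as significant.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(42)
n_tests, n_obs, alpha = 1_000, 50, 0.05

false_positives = 0
for _ in range(n_tests):
    sample = rng.normal(loc=0.0, scale=1.0, size=n_obs)  # the null is true
    _, p_value = ttest_1samp(sample, popmean=0.0)
    false_positives += p_value < alpha

print(false_positives / n_tests)  # ~0.05 by construction
```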
However, machine learning is widely considered an interdisciplinary field, involving probabilists, statisticians, electrical engineers, economists, control theorists, and even philosophers. And in that domain, to publish widely acclaimed papers, you really do need a deeply rigorous understanding of analysis.
On the flip side, I often find that it is the mathematicians who are most lacking, even within mathematics. I know folks who are experts in arcane corners of group theory, or category theory, etc., yet don't understand very basic ideas about eigenvalues or the computational complexity of important algorithms. Likewise, many people drawn to "continuous math" end up with a real lack of topology/algebra knowledge, not even enough to appreciate basic important theorems like the Sylow theorems.
For the general student going to work in a technical field with some connection to research, I think it is important to get up to at least one graduate-level analysis/functional analysis class -- in particular one covering the analysis of complex functions. Beyond that, they should probably spend time focusing heavily on how mathematics has been applied to their chosen domain or sub-domain, and let that guide them in choosing which upper-level math subjects to focus on.
I will talk about both the role of measure theory in probability and the use of advanced mathematics in economics, but I will keep the two separate.
Probability is a field whose history goes back long before measure theory and which developed to a large extent independently of it. There is a probabilistic kernel (pun not intended) that is independent of the measure-theoretic machinery. There is a beautiful book by Emmanuel Lesigne, Heads or Tails: An Introduction to Limit Theorems in Probability, that demonstrates that one can go quite far without employing measure-theoretic concepts. There are also still approaches to probability competing with the measure-theoretic one. Edward Nelson wrote a terse little book called Radically Elementary Probability Theory, in which he gives very general versions of standard probability results in terms of infinitesimals. There is also an approach to probability based on gambling ideas, due to Glenn Shafer and Vladimir Vovk, for which they have a nice homepage.
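To illustrate how far the elementary viewpoint reaches, here is a toy check of my own (not taken from any of the books above) of the de Moivre-Laplace theorem, the prototype of the limit theorems Lesigne treats:

```python
# De Moivre-Laplace for coin flips, stated with elementary tools only:
# the standardized number of heads in n tosses is approximately N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10_000, 5_000

heads = rng.binomial(n, 0.5, size=trials)
z = (heads - n * 0.5) / np.sqrt(n * 0.25)  # mean np, variance np(1-p)

print((z > 1.0).mean())  # compare with P(Z > 1) ~ 0.1587 for Z ~ N(0, 1)
```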
The level of mathematical rigor in economics is not uniform. It is true that the standards of rigor in theoretical economics are essentially the same as in pure mathematics. But it is not hard to find non-rigorous arguments in top-level economics journals, and it is not entirely clear that this is all bad. An example is the modelling of "idiosyncratic risk". Large populations of agents are often modelled as a continuum, and if all agents face some private, independently and identically distributed risk, we can assume that the risk "cancels out in the aggregate". This idea is guided by a sound intuition but is very hard to make rigorous. The first satisfactory approach to a law of large numbers for a continuum of random variables is in a 2006 paper of Yeneng Sun. To show that an appropriate framework exists at all, Sun had to employ non-trivial machinery from non-standard analysis. The first paper that showed how to construct the appropriate spaces with standard machinery was published in 2010 (an older working paper version can be found here). I don't think it would have been reasonable to avoid the intuitive idea of individual risk cancelling out in the aggregate just because there was, for a time, no way to make it completely rigorous.
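The finite-population version of the intuition is easy to see numerically; here is a toy sketch of my own (what Sun's framework addresses is the genuine continuum limit, which no finite simulation touches):

```python
# Finite-population intuition behind "idiosyncratic risk cancels out in
# the aggregate": the per-capita average of n iid shocks has standard
# deviation sigma / sqrt(n), so it vanishes as n grows.
import numpy as np

rng = np.random.default_rng(7)
for n_agents in (100, 10_000, 1_000_000):
    shocks = rng.normal(loc=0.0, scale=1.0, size=n_agents)  # iid private risk
    print(n_agents, shocks.mean())  # per-capita aggregate risk -> 0
```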
The mathematics economists use is quite special. Sophisticated functional-analytic concepts are used, but all vector spaces are real vector spaces. A good overview of the kind of mathematics employed in economic theory is the book Infinite Dimensional Analysis by Roko Aliprantis and Kim Border. Many fields of mathematics are completely foreign to most economic theorists; don't expect an economic theorist to know any complex analysis (econometricians may). What is true is that a lot of mathematics in economics is qualitative rather than quantitative: "An equilibrium exists. It is efficient. It is locally unique..." It is very hard to do this in a heuristic way.
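To make the qualitative/quantitative contrast concrete: once a qualitative existence argument (here just the intermediate value theorem applied to a continuous excess-demand function) guarantees an equilibrium, computing it numerically is routine. The demand and supply curves below are made up for illustration:

```python
# Toy market: an equilibrium price exists qualitatively because excess
# demand is continuous and changes sign; finding it is routine root-finding.
from scipy.optimize import brentq

def excess_demand(p):
    """Made-up curves: demand 10/p minus supply 2p."""
    return 10.0 / p - 2.0 * p

p_star = brentq(excess_demand, a=0.1, b=10.0)  # bracket with a sign change
print(p_star)  # sqrt(5) ~ 2.236
```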
I think it largely depends on the field, the researcher, and what level they're at. Because remember, outside of mathematics, math is a tool, not an end in itself. So the definition of "appropriate" changes somewhat.
Consider many topics in statistics, especially applied statistics rather than research statistics. There are results that probably have an analytic proof, or at least the potential for one, but since they can be demonstrated by simulation, few people bother. This is especially true if you are learning how to use the tool: a master's-level public health student, for example, probably doesn't need to know the full mathematical underpinnings of many of the things they do.
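For example, the fact that the sample variance with the (n-1) denominator is unbiased has a short analytic proof, but an applied course can just as well exhibit it by simulation; a toy sketch (all parameters arbitrary):

```python
# Checking by simulation what also has an analytic proof: dividing by
# (n-1) makes the sample variance unbiased; dividing by n biases it down.
import numpy as np

rng = np.random.default_rng(3)
n, reps, true_var = 10, 100_000, 4.0

samples = rng.normal(loc=0.0, scale=2.0, size=(reps, n))
print(samples.var(axis=1, ddof=1).mean())  # ~4.0 (unbiased)
print(samples.var(axis=1, ddof=0).mean())  # ~3.6, i.e. (n-1)/n * 4.0
```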
On the other hand, some researchers absolutely do approach things with mathematical rigor.
It should also be noted that mathematicians often approach non-mathematical fields with a lower level of subject-matter rigor than many researchers in those fields do. I've seen applied mathematicians make "simplifying assumptions" in pursuit of interesting math that would make a biologist or epidemiologist cringe.