Struggling with "technique-based" mathematics, can people relate to this? And what, if anything, can be done about it?

I'm a 3rd year undergraduate, majoring in pure mathematics. I've done well in the "proof-based" subjects I've taken, and I think that's because I understand the "rules of the game." That is, predicate logic + how to write a coherent proof. Furthermore, people are explicit about what they mean, stating their premises, quantifying explicitly (for all $x$, there exists $y$ such that...), distinguishing between "A implies B" and "A iff B", etc. This obviously really helps.

Recently, however, I've been finding that "technique-based" (as opposed to "proof-based") subjects like complex analysis, vector calculus, differential equations etc. are beginning to frustrate me, and I'm starting to get bad marks, too. It's like when I'm sitting in these lectures, the "logic" of math suddenly becomes opaque. I can never tell what the premises are. I often don't know whether we're trying to show that "A implies B", or whether we're trying to show that "A iff B". Stuff is happening on the board, but the "rules of the game" just aren't clear to me.

Does anyone else have a similar problem with "technique-based" math? And if so, what can be done about it?

Let me give an example. Below, I've copied part of a worked problem from Wikipedia, and I have inserted my own thoughts in italics.


A separable linear ordinary differential equation of the first order must be homogeneous and has the general form $$(1)\qquad \frac{dy}{dt}+f(t)y=0,$$

where $f(t)$ is some known function.

I can't tell if (1) is being taken as a premise or not.

We may solve this by separation of variables (moving the $y$ terms to one side and the $t$ terms to the other side).

$$(2)\qquad\frac{dy}{y}=-f(t)dt$$

Are you asserting that (2) follows from (1), or are you saying they're logically equivalent? And I still don't know whether equation (1) is a premise, or what our premises are.

Since the separation of variables in this case involves dividing by $y$, we must check if the constant function $y=0$ is a solution of the original equation. Trivially, if $y=0$ then $y'=0$, so $y=0$ is actually a solution of the original equation. We note that $y=0$ is not allowed in the transformed equation.

Clearly, if $y$ is everywhere zero, then equation (1) holds. But what's all this "we must check" nonsense? Are you trying to say that the statement "the function $y$ is everywhere zero, or equation (2) holds" is logically equivalent to the statement "equation (1) holds"? If that's what you mean, why don't you just say so? If the argument were laid out in a coherent fashion, nonsense like "we must check" simply wouldn't appear.

We solve the transformed equation with the variables already separated by integrating, $$(3) \qquad \ln y = \left(-\int f(t)\,dt\right)+C$$ where $C$ is an arbitrary constant.

Are you trying to say that if (2) holds, then there exists $C$ such that (3) holds? Then why don't you just say so? Or maybe you're trying to say that for all $C$, (3) holds iff (2) holds. I honestly can't tell.
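For what it's worth, when I get this confused I sometimes try to pin down what's actually being claimed by testing it mechanically. Here's a minimal numerical sketch, taking $f(t)=t$ (my own choice, purely for illustration), which checks that every member of the claimed solution family $y=Ce^{-t^2/2}$ satisfies equation (1):

```python
import math

# Concrete instance of equation (1) with f(t) = t:  y' + t*y = 0.
# Claimed general solution: y(t) = C * exp(-t**2 / 2), for any constant C.
def y(t, C):
    return C * math.exp(-t**2 / 2)

def residual(t, C, h=1e-6):
    # Central-difference approximation of y'(t) + t*y(t);
    # should be (numerically) zero if y really solves the equation.
    dy = (y(t + h, C) - y(t - h, C)) / (2 * h)
    return dy + t * y(t, C)

# The check passes for every C, including C = 0 -- the constant solution
# y = 0 that the quoted text insists "we must check" separately.
for C in (0.0, 1.0, -3.5):
    for t in (-2.0, 0.0, 1.7):
        assert abs(residual(t, C)) < 1e-5
```

This doesn't settle the iff/implies question, of course, but it at least makes the claim concrete and falsifiable.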


Well you get the general gist. So my question is, can other people relate to this, and what can be done about it?


I can actually relate fairly well to this. Please don't take this the wrong way, but my main advice would be:

Stop worrying!

By this I don't mean that you shouldn't be concerned about your grades getting worse. But while it appears that you have trained yourself in solid, rigorous thinking for proof-making and such, you have not yet learned to simply apply techniques and worry later. You need to learn to:

(a) live with imprecision,
(b) accept temporary uncertainty, and
(c) focus, at least for a while, on doing simple exercises you might find dull, over and over.

I hope this doesn't strike you as a silly self-help program, or a pat on the back that all will be well. This isn't easy, and, when much younger, I found myself in rather a similar situation. In my math undergrad studies, everything was proofs. Then I studied in France and was blown away by the skill of the average student at simply getting stuff done, fast and well (I had similar experiences with theoretical physicists everywhere). The French educational elite system tends to produce such students, but more than anything it's the result of their having been forced to do the same series-majorization exercises over, and over, and over again (to the point that some of my fellow students had to recover from mental breakdowns); and, in the case of physicists, of having grown up in a world of approximation. At my old school the saying was also that the best mathematicians were theoretical physicists.

From what you write, you seem well-equipped to handle what you face. But where you see an equation and wonder where it fits into a big scheme, others will just solve it. So stop that. It takes much repetition to get there. If you are not naturally the type for this, get yourself, e.g., some

Schaum's Outlines, or
GRE math preparation material.

The stuff covered there is largely dull and repetitive, but it provides you with much training. It takes time and effort now; but if you put that in, and learn not to over-think everything, I think your chances are good of changing your results fairly fast.

From how you describe yourself, it will probably take longer to genuinely internalize this than to raise your grades again. If you need extra motivation: even for successful proof-making you need to learn to deal with gaps. A (simple) paper might start as some lines written down that you feel might be true, trusting yourself to fill in the details later. You don't worry about those for now. My main thesis paper had a gap in the middle that I couldn't solve for two years. I kept writing, and eventually, by brainstorming with someone much better than me, we saw how it could be done. This isn't so dissimilar to your problem: solve the ODE; then, later, when you have time, think about it and read some theory.

You should also keep in mind that manipulations like the above were what Leibniz, the Bernoullis, etc. did on a regular basis, often without having a proof that would live up to today's standards. Deep insights can derive from mastering simple techniques, so throw yourself behind those in the near future. Good luck.


I think I can relate a little bit, being a physicist who finds that some problems presented in physics texts are a bit opaque, while pure mathematics tends to be much more precise about statements and definitions. Taking more pure math has given me greater insight into the physics problems I work with.

I think one way to help overcome this issue is simply exposure. Let me try to illuminate what's going on with this Wikipedia problem.

Admittedly, I'm not well-versed in formal logic, so I can't evaluate what is and is not a "premise." Still, I will try to clarify what's going on.

First, let us assume there exists a function $y$ obeying the differential equation (1); our task is to construct a means of recovering $y(t)$ given $f(t)$.

If $y(t) \neq 0$ for all $t$ in the domain of $y$, then equation (2) is equivalent to equation (1). Equation (2) is equivalent to equation (3), for all possible constants $C$. This recovers $y(t)$ whenever $y(t) \neq 0$ for all $t$. (Note that equation (3) doesn't actually finish recovering $y(t)$: you still need to get rid of the logarithm, but this is such a trivial step that the writer clearly didn't consider it worth getting into.)

If $y(t) = 0$ for some $t = t_0$, then equation (2) is not equivalent to equation (1). However, $y(t_0) = 0$ and equation (1) together imply that $dy/dt = 0$ at $t_0$. To be honest, I find a lot of the logic here incomplete; they want to jump straight to considering $y(t) = 0$ everywhere without considering $y(t_0) = 0$ at only an isolated point $t_0$. I would argue the domain can be partitioned around such a point: if $y(t_0) = 0$ but $y(t) \neq 0$ in a punctured neighborhood of $t_0$, you can solve for $y(t)$ to the left and right of $t_0$ by equation (2), and since you know $y=0$ at $t_0$, you then know $y(t)$ everywhere in the interval. Only then can you finally consider the case that $y=0$ everywhere, which is the trivial solution and does satisfy equation (1).

The essence of the approach here is to take (1) as given and then consider a set of mutually exclusive and exhaustive cases, each of which admits different avenues toward reconstructing $y(t)$ in terms of $f(t)$. It's key that the cases considered are collectively exhaustive--they must cover all possibilities.
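If it helps, a computer algebra system carries out exactly this case analysis internally. Here's a small sketch with sympy (assuming you have it available), keeping $f$ as an arbitrary symbolic function:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
f = sp.Function('f')

# Equation (1): y' + f(t)*y = 0, with f left completely general.
ode = sp.Eq(y(t).diff(t) + f(t) * y(t), 0)

# dsolve performs the separation-of-variables steps (2)-(3) and returns a
# one-parameter family: an arbitrary constant times exp of -(integral of f).
sol = sp.dsolve(ode, y(t))
print(sol)

# Substituting the family back into the ODE confirms it.  Note that the
# "checked" case y = 0 is just the member of the family with the constant
# equal to zero, so the case split collapses into the constant.
assert sp.checkodesol(ode, sol)[0]
```

The exhaustive-cases reasoning is still there; it's just hidden inside the solver's bookkeeping rather than written out.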

One of the things that hung you up here is that "we must check" business. There, the writer had to consider the case $y=0$ everywhere, and he pedantically chose to verify that this case was consistent with the original equation (1). It trivially was, in this case, but it's nevertheless common to consider cases that may or may not be consistent with the original problem--perhaps because it is simpler not to exclude such candidates until a later time.

Part of the difficulty may be that in proofs you often know the answer you're supposed to arrive at--there is a clear goal, and the focus is on the logical consistency between steps. Here, the focus was on following logical steps to construct a solution for $y(t)$--or, equivalently, to recover it given only that $y(t)$ obeyed equation (1) and that $f(t)$ was known. Part of the general solution technique was to break the possibilities for $y(t)$ into a set of cases, each of which was easier to analyze and solve individually than the problem considered as a whole.


I would not call equation (1) a "premise"; it is a type of equation that is being given a name.

The logic behind manipulating differential equations (and the corresponding lack of "iff" vs. "implies") is no different from that of manipulating ordinary algebraic equations in the intermediate/"college" algebra you might have taken in high school. Sometimes a manipulation is reversible, in which case there is a tacit iff underneath (say, adding $1$ to both sides), and sometimes it is not reversible (like squaring both sides), in which case there is a tacit implies. For the experienced, and in some areas of math, keeping track of the logic is less enlightening and less challenging than the bigger task of finding the manipulations required to do what is desired, which is why it is omitted as often as it is. At any rate, the only thing you need to do is consider what type of manipulation is being done and whether or not it is reversible.

Of course, another carry-over from high school algebra: when you're solving an equation, you are indeed taking it as a given. If you want to solve $x^2-2x-3=0$, you assume by hypothesis that it is a true statement about some number $x$, then find a chain of implications that tells you what $x$ can be. If you complete the square first and then isolate $x$, you will end up taking a square root, which introduces $\pm$ signs. Note that $x=\pm\mathrm{blah}$ means $x\in\{+\mathrm{blah},\,-\mathrm{blah}\}$. Or in propositional logic (ish), we'd say $(x-1)^2=4\implies (x-1=2\vee x-1=-2)$, and so on.
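To make the reversible case concrete (a throwaway sketch, nothing deep):

```python
# Completing the square: x**2 - 2*x - 3 = 0  <=>  (x - 1)**2 = 4
#                                            <=>  x - 1 = 2 or x - 1 = -2.
# Every step is reversible (the square root kept *both* branches), so the
# chain is a genuine iff and no candidate needs to be discarded.
candidates = [1 + 2, 1 - 2]   # the two branches of x - 1 = +/-2
solutions = [x for x in candidates if x**2 - 2*x - 3 == 0]
assert solutions == candidates == [3, -1]
```

Both candidates survive the check against the original equation precisely because the chain was an iff throughout.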

You will notice that in going from $(1)$ to $(2)$, we had to divide by $y$. This is not possible when $y=0$, so implicitly we have bifurcated into two cases: $y=0$ and $y\ne0$. Oftentimes people think hastily; we might first think to divide, and then on second thought realize that this isn't always possible and that the case where it isn't needs to be checked separately. The style and arrangement of mathematical discussion is determined not exclusively by cold logical considerations; it has also developed so as to illustrate and mirror natural human thought processes, as with any sort of discussion people have with each other.

Just because you can't make sense of something does not make it nonsense, by the way. It is, though, a tad too hastily written. If you're writing and have the opportunity to revise, it is generally a good idea to move thoughts around so that they follow the thought process of someone just being introduced to the material, rather than streaming them out as they occurred to you.

In my opinion, the damage is nonetheless small. The issue you're having is not being able to detect the framework behind these sorts of problem-solving tasks, and the framework is a very basic one that most prepared students are familiar with. As I have said a couple of times now, the idea of applying manipulations, either reversible or irreversible, and splitting into multiple cases based on when certain manipulations are applicable, goes all the way back to high school algebra. The reason the author does not explicitly say some things is that they are the sort of things that very widely and very typically go without saying in mathematical writing.

The therapy for this is to get into the mindset of problem-solving, not logic. For problem-solving, especially in introductory differential equations (which reads mostly like a large grab-bag of tricks, I think), achieving your goal will involve symbol-pushing, so it is much like chess, where you need to move things around according to certain rules in order to reach one of a number of desired forms. Intermediate goals emerge, like "isolate the $y$'s," "reduce the order of the derivatives present," or "group like terms." With practice, you can attach the logic to the moves after you have found the ones you need or want.


A few quick thoughts on your problem:

  1. If you're finding that you can't intuitively follow what is going on, you might just not have enough examples in your head. Instead of trying to understand what is being proved in a "technique course," to use your phraseology, just apply the technique several times (as many times as necessary!) and you will probably get an intuitive understanding of what is happening.

  2. Often, there is much theoretical precision that could be supplied but isn't, because it is too complicated at that point. In undergraduate differential equations there really isn't much theory that can be used, compared to algebra, because much of the theorem-proof material is complicated analysis. Again, concentrate on doing tons of examples. Differential equations does branch off into many subdisciplines later, and some of them are rather nicely theoretical, like microlocal analysis and D-modules.

  3. Point (2) also holds, in a different vein, for complex analysis. Some of analysis isn't as "structured," in some sense, as algebra. However, technique-type courses exist more to give you an intuitive feel for the objects, so that you can later apply more structural techniques to them. Sometimes you just have to get your hands dirty, so that later, when you do learn the theory, you'll have a good idea of what to expect.

  4. If you find a subject not up to your standards of rigorous precision, that's fine; different people need varying amounts of rigour to keep them comfortable. Personally, I find it tremendously difficult to understand imprecise statements. This is a good opportunity for you: rephrase the statements more precisely. If you can't figure out how, write down something precise that you think might be true and then see if you can prove it. If it looks hard, ask the instructor if you did this part correctly.

  5. In case I didn't say it clearly: do more examples.


I can see your confusion and frustration, but this is something you should work on getting used to and learning to translate into your preferred style. Historically, most mathematics was (and in a lot of fields still is) done in an informal style. It was only in the early 20th century that there was any kind of convincing logical foundation for mathematics, and a lot of amazing (pure and applied) mathematics was done before that.

This is especially true in calculus: a number of operations are justified here by theorems that aren't stated, because it's assumed you know them. You might be frustrated by this, but the point is that the operations were being used successfully long before they were justified to current levels of rigor and abstraction. An intuitive meaning of derivatives as rates of change, and the belief that the notation works, lets you work efficiently; you can then go back and check that everything works precisely.

As a pure mathematician you should concentrate more than most on fixed definitions and rigorous theorems, while also realizing that the main use of calculus like this is to solve real world problems.

As to the specifics of the problem (these are only my interpretations; the point is to find all solutions to an equation, which doesn't have to be stated as a theorem):

  1. We are trying to find all solutions to a differential equation. If you wish, you can state this as a theorem: for all continuous $f\colon \mathbb{R}\to\mathbb{R}$ and $F\colon \mathbb{R}\to\mathbb{R}$ with $F'=f$, a differentiable $y\colon \mathbb{R}\to\mathbb{R}$ satisfies $$\frac{dy}{dt}+f(t)y=0$$ if and only if $y=Ae^{-F(t)}$ for some $A\in \mathbb{R}$.
  2. There is a theorem that says that if $y\neq 0$, then any solution to (1) is also a solution to (2) with integral signs added. This theorem, which justifies 'separation of variables', might be in your textbook/lecture notes, or you might have to prove it yourself: it's basically the chain rule. Once you have that, together with the fact that $y=0$ is a solution, you can see that any solution to (1) must be $0$ or a solution to (2).
  3. There are further definitions and propositions that say all the solutions of (2) are of the form (3). Actually, there really should be absolute value signs around the $y$, as you see in example 2 of separation of variables. Solving this modified (3) gives solutions $y=\pm Ae^{-F(t)}$ where $A=e^C$ for any $C\in \mathbb{R}$, so this, together with the $y=0$ solution, gives the desired theorem.
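If you want to check the "if" direction of the theorem in point 1 symbolically, here's a quick sketch with sympy (assuming you have it), leaving $f$ and $F$ abstract except for the relation $F'=f$:

```python
import sympy as sp

t, A = sp.symbols('t A', real=True)
f = sp.Function('f')
F = sp.Function('F')

# Candidate from the theorem: y = A * exp(-F(t)), where F' = f.
y = A * sp.exp(-F(t))
residual = sp.diff(y, t) + f(t) * y

# Impose F'(t) = f(t); the residual then vanishes identically, for *every*
# real A -- positive, negative, or zero.  This is how the +/- e^C branches
# and the separately-checked y = 0 solution merge into one constant.
residual = residual.subs(sp.Derivative(F(t), t), f(t))
assert sp.simplify(residual) == 0
```

The "only if" direction is where the case analysis and the uniqueness argument actually live; the computation above only confirms that the family really does solve the equation.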

Going through all this has reminded me that converting informal calculus-style reasoning into precise mathematics is slightly tricky (my discussion above is still lacking some details, I think; corrections welcome) and is a worthwhile exercise. So do this as necessary, but also try sometimes to understand the material the way it's presented, in a slightly more intuitive way.

More random comments:

  • If you want to state this as a theorem you need to know the answer. It seems easier to just find the answer and then stop writing.
  • Thinking about algebraic equations might give you some understanding of what people mean by differential equations. Solving something like $\sqrt{x+2}=-x$ would proceed by assuming the equation is true, getting somewhere, and then checking whether the solutions work. Try thinking about this in terms of implications, subsets, etc.; that might help before getting into calculus, too.
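To spell out that last example (a quick sketch; the arithmetic is the whole point): squaring $\sqrt{x+2}=-x$ gives $x+2=x^2$, whose roots $x=2$ and $x=-1$ are only candidates, because squaring is irreversible.

```python
import math

# Candidates: roots of x**2 - x - 2 = 0, obtained by squaring both sides.
candidates = [2, -1]

# Check each against the original equation sqrt(x + 2) = -x.
genuine = [x for x in candidates if math.isclose(math.sqrt(x + 2), -x)]

# x = 2 is extraneous (sqrt(4) = 2, but -x = -2); only x = -1 survives.
assert genuine == [-1]
```

That final filtering step is exactly the tacit "implies" in action: the forward chain narrows $x$ to two candidates, and the original equation decides which of them are genuine solutions.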