Is there a profitable way to read mathematical proofs?

Yes, there is a way to profitably read mathematical proofs, but it takes time. Here is an excerpt from the "note to the reader" in an excellent topology book:

"It is a basic principle in the study of mathematics, and one too seldom emphasized, that a proof is not really understood until the stage is reached at which one can grasp it as a whole and see it as a single idea. In achieving this end, much more is necessary than merely following the individual steps in the reasoning. This is only the beginning. A proof should be chewed, swallowed, and digested, and this process of assimilation should not be abandoned until it yields a full comprehension of the overall pattern of thought." - George Simmons - Introduction to Topology and Modern Analysis

And here is a (well-known) quote from the "Automathography" of Paul Halmos on how to read mathematics:

"Don't just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis necessary? Is the converse true? What happens in the classical special case? What about the degenerate cases? Where does the proof use the hypothesis?" -Paul Halmos

Other suggestions:

  • Read several sources.
  • Don't just read; ask someone to help explain it.
  • If you do understand it, try to reprove it or explain it to others.

There are a few types of proofs. For this post I organize them as follows (I know it's somewhat arbitrary to label proofs this way; I'm just using these "types" to break down different characteristics of proofs):

1) Proofs for classes that you're expected to create

2) Proofs for classes that you aren't expected to create

3) Research level proofs

I don't think the first type is what's giving you a difficult time. There is a bag of tricks, and you just need to practice those tricks to create proofs. The tricks get more complicated and more technical, but most of the time they're still not unreasonable. A lot of these proofs are definition chasing, applying theorems, algebra, and so on. I think you're stuck at 2 or 3.
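To make "definition chasing" concrete, here is a tiny example of that first type (my own illustration, not from the original post): the composition of two injective functions is injective. If $f\colon A \to B$ and $g\colon B \to C$ are injective, then

$$g(f(a)) = g(f(a')) \;\Longrightarrow\; f(a) = f(a') \;\Longrightarrow\; a = a',$$

where the first implication is injectivity of $g$ and the second is injectivity of $f$. Nothing clever happens; you just unwind the definition of "injective" twice.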

The second type is the proofs that appear in your math textbook. Every analysis book on Earth proves that $\Bbb{R}$ is uncountable by Cantor's diagonalization argument. Every algebra textbook on Earth proves Lagrange's theorem. Every number theory textbook on Earth proves Fermat's little theorem. Every topology book on Earth proves Urysohn's lemma. Notice something about the theorems I've chosen? They all carry the name of a very smart mathematician. These very smart mathematicians spent considerable time thinking about these things.

The truth about these theorems and proofs is that yes, they are inspired. There's no way around it. You don't have the time (or intelligence) to think of these theorems or their proofs yourself. Brilliant mathematicians spent years coming up with them. So first, just accept that they are inspired. Okay, now that we accept that, how do we understand these proofs? Here's where Andrew Kelley's excellent answer comes into play. You have to play with it. You have to see what happens when you drop assumptions, add assumptions, and try to modify the proof.

I find a good way to play with a new theorem/proof is to try to find flaws. One way to do this is to check stupid cases. If they say "a set", did they specify non-empty? What happens if the set is empty? If they say "a function", did they mean $f\colon \Bbb{R} \to \Bbb{R}$? What happens if it is defined on an exotic space? Try to find stupid counterexamples. Try to find hidden assumptions. Every time there is division, do they specifically address that the division is defined? Is literally every symbol defined, or at least conventional? Do letters come up out of nowhere? In short, a good way to approach proofs is to be a pedantic jerk. Be ruthless; act like you're trying to disprove it. This isn't the only way to play with a theorem; you need to find ways that work for you personally. One thing that helps me is to find explicitly where each assumption is used and why it's necessary.
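As one small worked instance of that kind of nit-picking (my own example, not from the original question): the least upper bound property of $\Bbb{R}$ is usually stated as "every non-empty subset of $\Bbb{R}$ that is bounded above has a least upper bound." Drop "non-empty" and it fails, because every real number is vacuously an upper bound of the empty set, so

$$s = \sup \varnothing \ \text{would have to satisfy}\ s \le x \ \text{for every}\ x \in \Bbb{R},$$

and no real number is below every real number. Spotting which hypotheses are quietly doing this kind of work is exactly the point of the exercise.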

Although this is all assuming you understand what the hell the author is saying in the first place. What does the theorem even say? For more complicated theorems, you won't necessarily have an intuition. You don't need an intuition; just understand what the words mean. Intuition will come later. Make sure you know what the theorem is saying. Do you know what every word and symbol means?

So you're playing around with a theorem; what's the goal? When can you call a proof "learned"? Again, as Andrew's answer says, proofs should be turned into one idea. I think this is where the divergence between proofs in classes and research proofs lies. Proofs in textbooks are usually refined and compact. This is not the case with research proofs. Research proofs are usually very long and, if someone really wanted to, could often be significantly shortened.

Take Lagrange's theorem, for example: if $H$ is a subgroup of a finite group $G$, then the order of $H$ divides the order of $G$. There are four short proofs of it on ProofWiki: https://proofwiki.org/wiki/Lagrange%27s_Theorem_%28Group_Theory%29 . But there is an entire history to this theorem, with little bits being proven at a time. Wikipedia says:

Lagrange did not prove Lagrange's theorem in its general form. He stated, in his article Réflexions sur la résolution algébrique des équations,[2] that if a polynomial in $n$ variables has its variables permuted in all $n!$ ways, the number of different polynomials that are obtained is always a factor of $n!$. (For example, if the variables $x$, $y$, and $z$ are permuted in all $6$ possible ways in the polynomial $x + y - z$, then we get a total of $3$ different polynomials: $x + y - z$, $x + z - y$, and $y + z - x$. Note that $3$ is a factor of $6$.) The number of such polynomials is the index in the symmetric group $S_n$ of the subgroup $H$ of permutations that preserve the polynomial. (For the example of $x + y - z$, the subgroup $H$ in $S_3$ contains the identity and the transposition $(x\,y)$.) So the size of $H$ divides $n!$. With the later development of abstract groups, this result of Lagrange on polynomials was recognized to extend to the general theorem about finite groups which now bears his name.

In his Disquisitiones Arithmeticae in 1801, Carl Friedrich Gauss proved Lagrange's theorem for the special case of $(\Bbb{Z}/p\Bbb{Z})^{*}$, the multiplicative group of nonzero integers modulo $p$, where $p$ is a prime.[3] In 1844, Augustin-Louis Cauchy proved Lagrange's theorem for the symmetric group $S_n$.[4]

Camille Jordan finally proved Lagrange's theorem for the case of any permutation group in 1861.[5]

https://en.wikipedia.org/wiki/Lagrange%27s_theorem_%28group_theory%29#History
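As a quick sanity check in the spirit of "playing with it", here is a minimal Python sketch (my own, not part of the answer or the Wikipedia article) that redoes the $x + y - z$ computation from the quote: it counts the distinct polynomials obtained by permuting the variables, finds the subgroup $H$ of permutations that preserve the polynomial, and confirms that the two counts multiply to $3! = 6$.

```python
from itertools import permutations
from math import factorial

# The polynomial x + y - z, encoded by its coefficients on the variables (x, y, z).
# Permuting the variables just permutes the positions of these coefficients.
coeffs = (1, 1, -1)
n = len(coeffs)

# All polynomials obtained by permuting the variables, as coefficient tuples.
orbit = {tuple(coeffs[i] for i in sigma) for sigma in permutations(range(n))}

# Permutations that leave the polynomial unchanged (the subgroup H in the quote).
stabilizer = [sigma for sigma in permutations(range(n))
              if tuple(coeffs[i] for i in sigma) == coeffs]

print(len(orbit))        # 3 distinct polynomials
print(len(stabilizer))   # 2: the identity and the swap of x and y
print(len(orbit) * len(stabilizer) == factorial(n))  # True, so 3 divides 3! = 6
```

Changing `coeffs` to the coefficients of another polynomial, say `(1, 1, 1)` for $x + y + z$, is a quick way to poke at the claim with more examples (there you get $1$ polynomial and a stabilizer of size $6$).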

Lagrange's theorem went from being proven piecemeal, across many papers over roughly 90 years, to four short proofs.

My point is that in research-level math, proofs sometimes aren't as close to "one simple idea" as they are in your classes. It doesn't quite make sense to try to break them down to that when first encountering them. However, that is a large goal of math: once a result is published, everyone tries to refine it. This was seen in the recent developments on the twin prime conjecture. Yitang Zhang bounded the gap at some ridiculous number, then people like Terence Tao quickly modified the proof to bring the gap down much further. Zhang's proof was not ideal. In 100 years we might present the proof in textbooks, and who knows, maybe it will only be a couple pages? (Be forewarned: I know nothing of his paper. It was just a "hot" example.)

So say you want to understand Zhang's proof, or some other research-level paper. What do you do? It depends on whether you're new to the field or not. If you're new to the field, you have to painstakingly go through every line of every argument to understand what's going on. If you're not new to the field, you can take a more "vague" approach. This leads me to my final point.

It's okay not to understand every line of every proof, or even every proof. Honestly, sometimes it is okay to just accept that certain results are true. In a lot of proofs the algebra can be skipped. It suffices to understand it as "some algebra", even if you don't know what the specific steps are. Same thing with evaluating integrals. If you read a proof that says "this integral is ____", fine, just take their word for it, because often it really doesn't matter. You have to do this much more often as you get into research-level math. When you first learn a field you have to go through some training, but once you learn it you can skip a lot.

Even in your classes, sometimes knowing the proof of Lagrange's theorem isn't necessary. You just need to know how to apply it.

I wrote more than I intended to and said less than I intended to, so please let me know if you have any questions or comments.