Understanding mathematics imprecisely
Solution 1:
Full understanding is illusory. If you pursue it, you will find yourself trying to say what a number is, or a set, and digressing into the problem of making language, which for math is a meta-language, precise. And, of course, that can't be done.
So regarding your first question, it might help to observe how futile that innate wish of yours is, and how much you understand without full understanding (or compunction) in all other aspects of your life.
Imagine trying to learn biology and studying the chemical processes in the body, then asking "what is a chemical". You are given an answer that has to do with molecules, a term which you then inspect for precision's sake. Atoms come up, then electrons. Eventually you are learning quantum physics when all you wanted to do was understand how allergens work, or some such thing.
You must operate at the appropriate level for a specific problem. It's no use to reinvent the wheel and do everything from first principles. That would be like writing every program in machine code.
One day, our brains may be augmented with enhancements that allow us to have enough knowledge to understand everything down to our "axioms". Until then, it is a matter of becoming comfortable with our limitations and trying to work with what we have to be awesome.
In terms of knowing which questions are interesting, I think that is one of the harder parts of research. One almost has to be prescient.
And as for getting the important ideas of a proof, my first answer is that sometimes you really can't. Some proofs are just a confluence of numerical estimates and limit results and don't give any real insight into what is going on. Since you seem to be a probabilist, I would point to the proof that the simple random walk on $\mathbb{Z}^n$ is recurrent for $n=1,2$ and transient for $n\ge 3$ (Pólya's theorem). One feels there should be an intuitively understandable reason, but all one gets is Stirling's formula.
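For readers who haven't seen it, the Stirling computation goes roughly like this (a sketch, not the full argument). In one dimension, the probability of being back at the origin after $2k$ steps is
$$P(S_{2k}=0)=\binom{2k}{k}2^{-2k}\sim\frac{1}{\sqrt{\pi k}}$$
by Stirling's formula, and in dimension $n$ one similarly gets $P(S_{2k}=0)\asymp k^{-n/2}$. The walk is recurrent exactly when the expected number of returns $\sum_k P(S_{2k}=0)$ diverges, which happens precisely for $n\le 2$. The asymptotics tell you *that* it is true, but not really *why*.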
For other proofs it is a matter of becoming comfortable enough with the terminology and techniques used in the proof (by re-reading) to see the forest for the trees. In Kung Fu one talks about "learning to forget": you learn the movements carefully so that you can perform them without thinking about them when the time comes. You do the same when you learn to integrate or differentiate - you don't want to be working from the limit definition when crunch time (an exam, say) comes.
Solution 2:
I observed how my closest colleague did it back in the days of my PhD and postdocs, and I adopted his technique. What he did was sit down, read the paper superficially, and then try to work out on his own the simple parts he understood. Then he would build his own version of what he got from the paper, often without fully understanding everything that had been going on. He just had a general idea of the gist of the paper and tried to rebuild it in his own words, his own math, etc.
I remember proceeding to do the same later when studying an ecological model that we were trying to cast into mathematical form. I felt that the existing work was not very rigorous, or even incomplete. So I rebuilt the model for myself, superficially imitating others at first but gradually abandoning their approach for my own - and this without ever fully learning the standard machinery of Markov processes, stochastic equations, etc. I feel that by doing this work my understanding of the material is much deeper than it would have been had I read a book about it or followed a standard course.
What also helped were the countless conversations and presentations about my work that forced me to put my thoughts into words understandable to others. They might not have gotten much out of it, but it has certainly been very beneficial to me.
Solution 3:
Sometimes I take solace in:
"Young man, in mathematics you don't understand things. You just get used to them."
- John von Neumann
It seems to me that some of the art is "if-this-then-that" kind of stuff, but there's a whole lot more that comes from the intuition you build by just solving problems.
Solution 4:
I hope the original poster will see that this thread is not dead! I wanted to reply to your post not only to answer your questions, but also to describe where I personally am, and to see if you might be able to give me some advice as well (the post is four and a half years old; I'm hoping you've gained some insight by now!).
So assuming you've read the paper by Tao pointed to elsewhere among the replies, I will refer to the "pre-rigorous," "rigorous," and "post-rigorous" stages of learning math that Tao describes, and see if I can elaborate on them with personal (concrete) examples and relate them to your questions.

The first time I "learned" calculus, I completely skipped the chapter on limits. I was learning on my own, from my father's college textbook (he went to school for engineering, so there were plenty of application problems). At this point I was absolutely "pre-rigorous." I didn't really understand the proofs (unless they were purely computational in nature), and I honestly didn't really care (I was, like, 13). I did, however, notice that the further I went on, the bigger the gaps were. Eventually I could not progress; I could not really say I was learning any more, rather just trying to memorize strange formulas. This happened when I started trying to pass from calculus on functions $\mathbb{R}^3 \to \mathbb{R}$ to more general functions (what might be called "vector calculus"). I simply got stuck. Once I finally (and quite recently) came to school, now at the ripe old age of 26, I knew that while learning the basics I would have to keep an eye out for the mysteries. I had two goals: I wanted to understand the Fundamental Theorem of Calculus, and I wanted to understand what the hell a Taylor series was.
There was a moment of epiphany for me, I believe when I was reviewing the definition of a Cauchy sequence (in $\mathbb{R}$). It was such a drastic, intense thought: I realized that I did not understand any of what I was reading, whatsoever. It sounds melodramatic, but I'm not joking. It was as if there were a switch in my brain; when it was off I didn't understand logical reasoning, the purpose of definitions, or how the material was progressing, and when it was on I could. I'm not saying that I understood everything at once, but I was able to go back to the beginning and learn it correctly. I don't know if this puts me in the "rigorous" phase of learning, but I believe it did. It was also during this stage that I realized an evil trick played by my analysis teachers: the Hausdorff separation axiom, as manifest in $\mathbb{R}$, was "proven" at some point early in the exercises... If I ever teach a class in analysis, I will make it very clear early on that this is a particular property of $\mathbb{R}$ that is intuitively clear, but not something that is simply "true."
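For concreteness, here are the two definitions in play. A sequence $(a_n)$ in $\mathbb{R}$ is Cauchy if
$$\forall \varepsilon>0\ \exists N\ \forall m,n\ge N:\ |a_m-a_n|<\varepsilon,$$
and the Hausdorff property says that any two distinct points $x\neq y$ have disjoint open neighborhoods (in $\mathbb{R}$, take intervals of radius $|x-y|/2$ around each). The point is that in $\mathbb{R}$ the second follows from the metric, while in a general topological space it is an extra axiom that may fail.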
I'm not sure when (or frankly if) I've made the transfer from "rigorous" to "post-rigorous" mathematics, and I would rather like to think of this stage as being less well defined. For me, I think I started making the crossing when I asked the question "what is an open set?" I had only learned about topological properties in relation to $\mathbb{R}$, and had never read anything about topology on its own. But nonetheless, I wanted to see, in my own head, what it was about a set that made it open, so I could just look at it and know that it's open... I wanted that "good picture" that Jänich talks about in his book on topology... Then it hit me: what is a set? It took a class on logic, followed by some personal study of set theory, to see that sets are some sort of approximation made by mathematicians to bridge the gap between logic and formalism (this is how I see it, at least). Now, with some grasp of how logical systems work and what sets are, I only needed to read the first page of the first chapter of a book on topology to get a massive amount of insight into what the heck I'd been doing in $\mathbb{R}$. Open sets are open because we say so. But $\mathbb{R}$ isn't just "what we say"; it has some sort of... maybe "Platonic" reality that we can all "observe." So $\mathbb{R}$ has much more structure than just that of being a topological space (it's a metric space, from which the topology derives; it's a group, which forces an ungodly amount of structure on it; and it's complete, and separable, and uncountable, and it's...).
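To spell out "open because we say so": a topology on a set $X$ is nothing more than a collection $\tau$ of subsets of $X$ satisfying
$$\emptyset, X \in \tau, \qquad \bigcup_{i\in I} U_i \in \tau, \qquad U_1\cap\cdots\cap U_m \in \tau$$
for any family $\{U_i\}_{i\in I}\subseteq\tau$ and any finite list $U_1,\dots,U_m\in\tau$. The members of $\tau$ are the open sets, by definition and nothing more; on $\mathbb{R}$, the metric $|x-y|$ picks out one particular such $\tau$ among many possible ones.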
So now, when I look at $\mathbb{R}$, I know that I'm dealing with some natural beastie, and when I start putting structures on it, I don't let the intuition and the formalism develop separately; I get them going at the same time. When I need to prove a fact, I see what I can do on my own. I look to the knowledge and intuition that is already there to get me partway, and only when I get stuck, only when I can't progress any further, do I go look at what the author did. And then that "formal trick," that simple "symbolic manipulation," is truly a machine manipulating properties of the space I'm looking at, or uncovering some interesting fact or insight that I hadn't noticed before. Or maybe it's just a general strategy being applied in a new way, and I get some insight into new ways to apply it. In the most extreme cases, it shows me that my intuition about the space, my "view" of the space, is fundamentally flawed, and I need to rebuild it in my mind with different properties affecting the different objects.
As for making connections to outside fields, even if they are fictitious, I get this feeling as well. I indulge them, because I hope they might make my understanding of the math more "indestructible," but I always remind myself that they are indeed fake. I don't want to end up looking like a crackpot...
Anyways, that's what I've got. Be forewarned, I'm just an undergrad, so maybe everything here is worthless. But, I'd love to hear what you think, and any insight you may have gained, and progress you've made.