Debug vs. Release performance
Solution 1:
Partially true. In debug mode, the compiler emits debug symbols for all variables and compiles the code as written. In release mode, it applies some optimizations:
- unused local variables are not compiled at all
- the compiler hoists some loop variables out of the loop when it can prove they are invariant
- code guarded by the #if DEBUG preprocessor directive is not included (see the sketch after this list), etc.
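As a minimal illustration of that last point, here is a sketch using the #if DEBUG directive and the related [Conditional("DEBUG")] attribute; the Log helper is a hypothetical name, not anything from the question:

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
#if DEBUG
        // Compiled only when the DEBUG symbol is defined (the default
        // for debug builds); a release build contains no trace of this
        // block in the emitted IL.
        Console.WriteLine("Debug build");
#endif

        // Methods marked [Conditional("DEBUG")] behave similarly:
        // in a release build, the call site itself is removed.
        Log("starting work");
    }

    [Conditional("DEBUG")]
    static void Log(string message) => Console.WriteLine(message);
}
```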
The rest is up to the JIT.
A full list of the optimizations is here, courtesy of Eric Lippert.
Solution 2:
There is no article which "proves" anything about a performance question. The way to prove an assertion about the performance impact of a change is to try it both ways and test it under realistic-but-controlled conditions.
You're asking a question about performance, so clearly you care about performance. If you care about performance, then the right thing to do is to set some performance goals and then write yourself a test suite which tracks your progress against those goals. Once you have such a test suite, you can easily use it to test for yourself the truth or falsity of statements like "the debug build is slower".
And furthermore, you'll be able to get meaningful results. "Slower" is meaningless because it is not clear whether it's one microsecond slower or twenty minutes slower. "10% slower under realistic conditions" is more meaningful.
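As a concrete sketch of such a measurement, here is a bare-bones Stopwatch harness; DoWork is a hypothetical stand-in for the code under test. Compile and run it once as a Debug build and once as a Release build, and the ratio of the two numbers is your answer. (A library such as BenchmarkDotNet handles warm-up and statistics more rigorously; this is only the idea.)

```csharp
using System;
using System.Diagnostics;

class Benchmark
{
    // Hypothetical workload standing in for the code under test.
    static long DoWork(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += i * i;
        return sum;
    }

    static void Main()
    {
        // Warm up so the JIT has compiled DoWork before we time it.
        DoWork(1_000);

        const int iterations = 100;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            DoWork(10_000_000);
        sw.Stop();

        Console.WriteLine(
            $"{sw.Elapsed.TotalMilliseconds / iterations:F3} ms per iteration");
    }
}
```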
Spend the time you would have spent researching this question online on building a device which answers the question. You'll get far more accurate results that way. Anything you read online is just a guess about what might happen. Reason from facts you gathered yourself, not from other people's guesses about how your program might behave.