How much footprint does C++ exception handling add?
This issue is especially important for embedded development. Exception handling adds some footprint to the generated binary. On the other hand, without exceptions, errors need to be handled in some other way, which requires additional code and eventually also increases binary size.
I'm interested in your experiences, especially:
- What is the average footprint added by your compiler for exception handling (if you have such measurements)?
- Is exception handling really more expensive (many say so), in terms of binary output size, than other error handling strategies?
- What error handling strategy would you suggest for embedded development?
Please take my questions only as guidance. Any input is welcome.
Addendum: Does anyone have a concrete method/script/tool that, for a specific C++ object/executable, will show the percentage of the loaded memory footprint that is occupied by compiler-generated code and data structures dedicated to exception handling?
Solution 1:
When an exception occurs there will be a time overhead that depends on how you implement your exception handling. But, anecdotally, an event severe enough to warrant an exception will take just as much time to handle with any other method. Why not use the well-supported, language-based way of dealing with such problems?
The GNU C++ compiler uses the zero-cost model by default, i.e. there is no time overhead when exceptions don't occur.
Since information about exception-handling code and the offsets of local objects can be computed once at compile time, such information can be kept in a single place associated with each function, but not in each ARI. You essentially remove exception overhead from each ARI and thus avoid the extra time to push them onto the stack. This approach is called the zero-cost model of exception handling, and the optimized storage mentioned earlier is known as the shadow stack. - Bruce Eckel, Thinking in C++ Volume 2
The space overhead isn't easily quantifiable, but Eckel cites an average of 5 to 15 percent. This will depend on the size of your exception-handling code relative to the size of your application code: if your program is small, exceptions will be a large part of the binary. With a zero-cost model, exceptions take more space in order to remove the time overhead, so if you care about space and not time, don't use zero-cost compilation.
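As a rough illustration of what that ratio means, here is a minimal sketch (the function names read_sensor_rc and read_sensor_ex are made up for the example) of the same failure reported in the error-code style and in the exception style; compiling each variant with your own toolchain and comparing the binary sizes is one way to see the trade-off for yourself.

// Minimal sketch, hypothetical names: the same failure reported two ways.
#include <cstdio>
#include <stdexcept>

// Error-code style: every caller must check the return value.
int read_sensor_rc(int channel, int *out) {
    if (channel < 0) return -1;  // report failure through the return value
    *out = 42;                   // pretend this came from hardware
    return 0;
}

// Exception style: the success path has no check at the call site;
// with the zero-cost model the cost is paid in tables, not instructions.
int read_sensor_ex(int channel) {
    if (channel < 0) throw std::runtime_error("bad channel");
    return 42;
}

int main() {
    int v = 0;
    if (read_sensor_rc(3, &v) != 0)
        std::puts("error (return code)");
    try {
        v = read_sensor_ex(3);
    } catch (const std::exception &) {
        std::puts("error (exception)");
    }
    std::printf("%d\n", v);
    return 0;
}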
My opinion is that most embedded systems have plenty of memory, to the extent that if your system has a C++ compiler, you have enough space to include exceptions. The PC/104 computer that my project uses has several GB of secondary memory and 512 MB of main memory, so there is no space problem for exceptions - though our microcontrollers are programmed in C. My heuristic is: "if there is a mainstream C++ compiler for it, use exceptions, otherwise use C".
Solution 2:
Measuring things, part 2. I have now got two programs. The first is in C and is compiled with gcc -O2:
#include <stdio.h>
#include <time.h>

#define BIG 1000000

int f( int n ) {
    int r = 0, i = 0;
    for ( i = 0; i < 1000; i++ ) {
        r += i;
        if ( n == BIG - 1 ) {
            return -1;
        }
    }
    return r;
}

int main() {
    clock_t start = clock();
    int i = 0, z = 0;
    for ( i = 0; i < BIG; i++ ) {
        if ( (z = f(i)) == -1 ) {
            break;
        }
    }
    double t = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf( "%f\n", t );
    printf( "%d\n", z );
}
The second is C++, with exception handling, compiled with g++ -O2:
#include <stdio.h>
#include <time.h>

#define BIG 1000000

int f( int n ) {
    int r = 0, i = 0;
    for ( i = 0; i < 1000; i++ ) {
        r += i;
        if ( n == BIG - 1 ) {
            throw -1;
        }
    }
    return r;
}

int main() {
    clock_t start = clock();
    int i = 0, z = 0;
    for ( i = 0; i < BIG; i++ ) {
        try {
            z += f(i);
        }
        catch( ... ) {
            break;
        }
    }
    double t = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf( "%f\n", t );
    printf( "%d\n", z );
}
I think these answer all the criticisms made of my last post.
Result: execution times give the C version a 0.5% edge over the C++ version with exceptions, not the 10% that others have talked about (but not demonstrated).
I'd be very grateful if others could try compiling and running the code (it should only take a few minutes) to check that I have not made a horrible and obvious mistake anywhere. This is known as "the scientific method"!
Solution 3:
I work in a low-latency environment (sub-300 microseconds for my application in the "chain" of production). Exception handling, in my experience, adds 5-25% execution time depending on how much of it you do!
We don't generally care about binary bloat, but if you get too much bloat then you thrash like crazy, so you need to be careful.
Just keep the binary reasonable (depends on your setup).
I do pretty extensive profiling of my systems.
Other nasty areas:
- Logging
- Persisting (we just don't do this one, or if we do it's in parallel)
Solution 4:
I guess it'd depend on the hardware and toolchain port for that specific platform.
I don't have the figures. However, for most embedded development, I have seen people chucking out two things (for the VxWorks/GCC toolchain):
- Templates
- RTTI
Exception handling does make use of both in most cases, so there is a tendency to throw it out as well.
In those cases where we really want to get close to the metal, setjmp/longjmp are used. Note that this probably isn't the best or most powerful solution, but it's what we use.
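For what it's worth, here is a minimal sketch of that setjmp/longjmp pattern (the names error_context and do_work are invented for the example); the usual caveat is that longjmp will not run C++ destructors on the way out.

// Minimal setjmp/longjmp error-handling sketch, hypothetical names.
// Caveat: longjmp does not unwind C++ objects with destructors.
#include <setjmp.h>
#include <stdio.h>

static jmp_buf error_context;           // where control returns on error

static void do_work(int n) {
    if (n < 0)
        longjmp(error_context, 1);      // "throw": jump back to the setjmp point
    printf("worked on %d\n", n);
}

int main(void) {
    if (setjmp(error_context) == 0) {   // "try": returns 0 on the initial pass
        do_work(5);
        do_work(-1);                    // triggers the longjmp
    } else {                            // "catch": reached via longjmp
        puts("recovered from error");
    }
    return 0;
}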
You can run simple tests on your desktop with two versions of a benchmarking suite with/without exception handling and get the data that you can rely on most.
Another thing about embedded development: templates are avoided like the plague -- they cause too much bloat. Exceptions drag in templates and RTTI, as explained by Johann Gerell in the comments (I assumed this was well understood).
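If you do want one code base to build both with and without exceptions, one possible approach (sketched below; the feature-test macro __cpp_exceptions is assumed to be provided by a GCC-style toolchain when exceptions are enabled, and REPORT_ERROR/parse_value are made-up names) is to test at compile time and fall back to an error code otherwise.

// Sketch: report an error with or without exceptions, depending on how
// the translation unit was compiled (e.g. -fexceptions vs -fno-exceptions).
#include <cstdio>

#if defined(__cpp_exceptions)      // defined when exception support is enabled
#include <stdexcept>
#define REPORT_ERROR(msg) throw std::runtime_error(msg)
#else                              // no exceptions: print and return an error code
#define REPORT_ERROR(msg) do { std::fputs(msg, stderr); return -1; } while (0)
#endif

int parse_value(int raw) {
    if (raw < 0)
        REPORT_ERROR("negative raw value\n");
    return raw * 2;
}

int main() {
    std::printf("%d\n", parse_value(21));
    return 0;
}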
Again, this is just what we do. What is it with all the downvoting?