> Whyever **not** declare a function to be `constexpr`?

Functions can only be declared constexpr if they obey the rules for constexpr: no dynamic casts, no memory allocation, no calls to non-constexpr functions, etc.
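
As a rough illustration (the names are made up), here is a function that satisfies these rules next to one that, under the C++11 rules, cannot:

```cpp
// Satisfies the rules: pure arithmetic, calls only constexpr code.
constexpr int square(int x) { return x * x; }

static_assert(square(4) == 16, "evaluated entirely at compile time");

// Cannot be constexpr under the C++11 rules: `new` allocates memory.
int* make_buffer(int n) { return new int[n]; }
```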

Declaring a function in the standard library as constexpr requires that ALL implementations obey those rules.

Firstly, this requires checking, for each function, that it can be implemented as constexpr, which is a long job.

Secondly, this is a big constraint on implementations, and it would outlaw many debugging implementations (for example, an implementation that logs each call for diagnostic purposes could no longer do so). It is therefore only worth doing if the benefits outweigh the costs, or if the requirements are sufficiently tight that the implementation pretty much has to obey the constexpr rules anyway. Making this evaluation for each function is again a long job.


I think what you're referring to is called partial evaluation. What you're touching on is that some programs can be split into two parts - a piece that requires runtime information, and a piece that can be done without any runtime information - and that in theory you could just fully evaluate the part of the program that doesn't need any runtime information before you even start running the program. There are some programming languages that do this. For example, the D programming language has an interpreter built into the compiler that lets you execute code at compile-time, provided that it meets certain restrictions.
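
C++11's constexpr is a restricted version of the same idea. A minimal sketch (the identifiers are illustrative) of splitting the compile-time piece from the runtime piece:

```cpp
// The piece that needs no runtime information: the compiler runs it.
constexpr int area(int w, int h) { return w * h; }

int main() {
    static_assert(area(4, 8) == 32, "computed before the program runs");
    int pixels[area(4, 8)];  // array bound fixed at compile time
    // The piece that needs runtime information would fill `pixels` here.
    (void)pixels;
}
```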

There are a few main challenges in getting partial evaluation working. First, it dramatically complicates the logic of the compiler, because the compiler needs the ability to simulate, at compile time, every operation you could put into an executable program. In the worst case this requires a full interpreter inside the compiler, taking an already difficult problem (writing a good C++ compiler) and making it orders of magnitude harder.

I believe that the reason for the current specification of constexpr is simply to limit the complexity of compilers. The cases it's limited to (in C++11, a body that is essentially a single return statement) are fairly simple to check. There's no need to implement loops in the compiler, which would open a whole other slew of problems, such as what happens if you hit an infinite loop inside the compiler. It also avoids the compiler potentially having to evaluate statements that could cause segfaults at runtime, such as following a bad pointer.
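
A sketch of what this restriction looks like in practice: under C++11, iteration in a constexpr function has to be written as recursion, whose depth implementations can cap:

```cpp
// No loops allowed in a C++11 constexpr body, so iteration becomes
// recursion; implementations cap the recursion depth, which sidesteps
// the infinite-loop-inside-the-compiler problem.
constexpr unsigned factorial(unsigned n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

static_assert(factorial(5) == 120, "unrolled by the compiler");
```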

Another consideration to keep in mind is that some functions have side effects, such as reading from cin or opening a network connection. Functions like these fundamentally can't be evaluated at compile time, since doing so would require knowledge that is only available at runtime.
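
For instance, a hypothetical function whose result depends on runtime input, and which therefore can never be a constant expression:

```cpp
#include <iostream>

// The result depends on what arrives on standard input at runtime,
// so no compiler could evaluate this ahead of time.
int read_count() {
    int n = 0;
    std::cin >> n;
    return n;
}
```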

To summarize, there's no theoretical reason you couldn't partially evaluate C++ programs at compile time. In fact, people do this all the time: optimizing compilers are essentially programs that try to do it as much as possible. Template metaprogramming is one instance where C++ programmers try to execute code inside the compiler, and it's possible to do some great things with templates partly because the rules for templates form a functional language, which the compiler has an easier time implementing. Moreover, if you think of the tradeoff between compiler-author hours and programmer hours, template metaprogramming shows that if you're okay with making programmers bend over backwards to get what they want, you can build a pretty weak language (the template system) and keep the core language simple. (I say "weak" as in "not particularly expressive," not "weak" in the computability-theory sense.)
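
A sketch of the classic compile-time factorial, with templates playing the role of that weak functional language:

```cpp
// Templates as a (weak) functional language: specialization plays the
// role of pattern matching, instantiation the role of function calls.
template <unsigned N>
struct Factorial {
    static const unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    static const unsigned value = 1;  // base case stops the recursion
};

static_assert(Factorial<5>::value == 120, "evaluated inside the compiler");
```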

Hope this helps!


If the function has side effects, you would not want to mark it constexpr. Example:
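
A minimal sketch, assuming a side effect like printf inside a function declared constexpr (consistent with the gcc 4.5.1 observation below):

```cpp
#include <cstdio>

// printf is a side effect; using f() where a constant expression is
// required is ill-formed in C++11, though early compilers with partial
// constexpr support were known to let it through.
constexpr int f() {
    return printf("a side effect!\n");
}

int main() {
    char a[f()];  // f() used where a constant expression is required
    printf("%u\n", static_cast<unsigned>(sizeof a));
}
```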

I can't get any unexpected results from that; actually, it looks like gcc 4.5.1 just ignores constexpr here.