So why is i = ++i + 1 well-defined in C++11?

I've seen the other similar questions and read the defect report about it. But I still don't get it. Why is i = ++i + 1 well-defined in C++11 when i = i++ + 1 is not? How does the standard make this well-defined?
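
For concreteness, here are the two statements side by side (a minimal snippet of my own; the comments just restate the status described above):

    int i = 0;
    i = ++i + 1;   // well-defined in C++11 (the subject of this question); i ends up as 2
    i = i++ + 1;   // undefined behaviour in C++11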

By my working out, I have the following sequenced before graph (where an arrow represents the sequenced before relationship and everything is a value computation unless otherwise specified):

i = ++i + 1
     ^
     |
assignment (side effect on i)
 ^      ^
 |      |
☆i   ++i + 1
      ^    ^
      |    |
    i+=1   1
      ^
      |
★assignment (side effect on i)
  ^      ^
  |      |
  i      1

I've marked a side effect on i with a black star and value computation of i with a white star. These appear to be unsequenced with respect to each other (according to my logic). And the standard says:

If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined.

The explanation in the defect report didn't help me understand. What does the lvalue-to-rvalue conversion have to do with anything? What have I gotten wrong?


Solution 1:

... or a value computation using the value of the same scalar object ...

The important part is "using the value". The left-hand side does not use the value of i for its value computation. What is being computed is a glvalue. Only afterwards (sequenced after that value computation) is the value of the object touched and replaced.

Unfortunately this is a very subtle point :)
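
As a rough illustration of the point (my own example, not from the answer above): the value computation of an assignment's left operand only determines which object is the target; it never reads that object's stored value.

    int a = 0, b = 0;
    bool cond = true;
    // The left operand's value computation yields a glvalue: it picks the
    // object to assign to (here, a), without reading any stored value.
    (cond ? a : b) = 42;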

Solution 2:

Well, the key point here is that the result of ++i is an lvalue. In order to participate in binary +, that lvalue has to be converted to an rvalue by lvalue-to-rvalue conversion. Lvalue-to-rvalue conversion is basically the act of reading the variable i from memory.

That means that the value of ++i has to be obtained as if it were read directly from i. In other words, conceptually the new (incremented) value of i must be ready (must be physically stored in i) before the evaluation of binary + begins.
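
To illustrate (a conceptual sketch of my own, not actual generated code), i = ++i + 1 must behave as if it were carried out in these steps:

    int i = 0;
    // Conceptual expansion of  i = ++i + 1;  under the C++11 sequencing rules:
    i += 1;           // side effect of ++i, sequenced before its value computation
    int tmp = i + 1;  // lvalue-to-rvalue conversion reads the updated i, then 1 is added
    i = tmp;          // the assignment's own side effect, sequenced after the
                      // value computations of both operands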

This extra sequencing is what inadvertently made the behavior defined in this case.

In C++03 there was no strict requirement to obtain the value of ++i directly from i. The new value could have been predicted/precalculated as i + 1 and used as an operand of binary + even before it was physically stored in the actual i. (Although it can reasonably be claimed that the requirement was implicitly there even in C++03, and that C++03's failure to recognize it was a defect of C++03.)


In the case of i++, the result is an rvalue. Since it is already an rvalue, there's no lvalue-to-rvalue conversion involved, and there's no requirement at all to store the new value in i before the evaluation of binary + begins.
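
That difference in value category is something a compiler will confirm; here is a small sketch of my own using decltype, which yields a reference type for lvalue expressions and a non-reference type for prvalues:

    #include <type_traits>

    void check(int i)
    {
        // ++i is an lvalue, so decltype yields int&; i++ is a prvalue, so decltype yields int.
        // (decltype does not evaluate its operand, so i is not actually modified here.)
        static_assert(std::is_same<decltype(++i), int&>::value, "++i is an lvalue");
        static_assert(std::is_same<decltype(i++), int>::value,  "i++ is a prvalue");
    }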

Solution 3:

Long after asking this question, I wrote an article on visualising C++ evaluation sequencing with graphs. This question is identical to the case of i = ++i, which has the following sequenced-before graph:

[Image: sequenced-before graph for i = ++i]

The red nodes represent side effects on i. The blue node represents an evaluation that uses the value of i. The reason this expression is well-defined is that no two of these nodes are unsequenced with respect to each other. The use of i on the left of the assignment doesn't use the value of i, so it is not a problem.

The main part that I was missing was what "uses the value" means. An operator uses the value of its operand if it expects a prvalue expression for that operand. When the operand is the name of an object, which is an lvalue, and a prvalue is expected, the lvalue has to undergo lvalue-to-rvalue conversion, which can be seen as "reading the value of the object". Assignment only requires an lvalue for its left operand, which means that it doesn't use that operand's value.
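
For comparison, here is the same reasoning applied to both self-assignment forms (my own summary, restricted to C++11):

    int i = 0;
    i = ++i;   // well-defined in C++11: the side effect of ++i is sequenced before
               // the value computation of ++i, which is sequenced before the
               // assignment's side effect
    i = i++;   // undefined in C++11: the side effect of i++ is unsequenced relative
               // to the assignment's side effect on i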