Unit testing: why is the expected argument always first in equality tests?

It seems that most early frameworks put the expected value before the actual one (for no documented reason; a dice roll, perhaps?). Yet as programming languages evolved and code became more fluent, that order got reversed. Fluent interfaces usually try to mimic natural language, and unit testing frameworks are no different.

In an assertion, we want to ensure that some object matches some condition. That matches the natural-language form: if you were to explain your test code, you'd probably say

"In this test, I make sure that computed value is equal to 5"

instead of

"In this test, I make sure that 5 is equal to computed value".

The difference may not seem huge, but let's push it further. Consider this:

Assert.That(Roses, Are(Red));

Sounds about right. Now:

Assert.That(Red, Are(Roses));

Hm? You probably wouldn't be too surprised if somebody told you that roses are red. The other way around, "red are roses", raises suspicious questions. Yoda, anybody?

"That doesn't sound natural at all."

Yoda's making an important point: reversed order forces you to think.

It gets even more unnatural when your assertions are more complex:

Assert.That(Forest, Has.MoreThan(15, Trees));

How would you reverse that one? More than 15 trees are being had by forest?

This claim (fluency as a driving factor) is somewhat reflected in the change NUnit has gone through: the classic model (Assert.AreEqual) put expected before actual, while the fluent extensions (or, in NUnit's terminology, the constraint-based model, Assert.That) reversed that order.
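To make the constraint-based shape concrete, here is a minimal sketch in Java (hypothetical names, not NUnit's or any real framework's API) of an assertion that takes the actual value first and a constraint second, so the call reads like the sentence "assert that computed is equal to 5":

```java
import java.util.Objects;
import java.util.function.Predicate;

public class ConstraintAssert {

    // A constraint bundles a check with a description used in failure messages.
    record Constraint<T>(Predicate<T> check, String description) {}

    static <T> Constraint<T> equalTo(T expected) {
        return new Constraint<>(actual -> Objects.equals(actual, expected),
                                "equal to " + expected);
    }

    // Reads left to right as natural language: "assert that <actual> is <constraint>".
    static <T> void assertThat(T actual, Constraint<T> constraint) {
        if (!constraint.check().test(actual)) {
            throw new AssertionError(
                "expected " + actual + " to be " + constraint.description());
        }
    }

    public static void main(String[] args) {
        int computed = 2 + 3;
        assertThat(computed, equalTo(5)); // "computed is equal to 5"
        System.out.println("ok");
    }
}
```

Note that nothing in the mechanism forces this order; putting the expected value first would work just as well technically. The actual-first order is chosen purely so the call site reads naturally.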


I think it is just a convention now, and as you said, it is adopted by "every unit testing framework (I know of)". If you are used to one framework, it would be annoying to switch to another that uses the opposite convention, so (if you are writing a new unit testing framework, for example) it would be preferable to follow the existing one. I believe it originally comes from the way some developers prefer to write their equality tests:

if (4 == myVar)

This avoids an unwanted assignment caused by mistakenly writing one "=" instead of "==": with the constant on the left, the compiler will catch the error, and you will avoid a lot of trouble chasing a weird runtime bug.


Nobody knows, and it is a source of never-ending confusion. However, not all frameworks follow this pattern (adding to the confusion):

  1. FEST-Assert uses normal order:

    assertThat(Util.GetAnswerToLifeTheUniverseAndEverything()).isEqualTo(42);
    
  2. Hamcrest:

    assertThat(Util.GetAnswerToLifeTheUniverseAndEverything(), equalTo(42));
    
  3. ScalaTest doesn't really make a distinction:

    Util.GetAnswerToLifeTheUniverseAndEverything() should equal (42)
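The FEST-Assert style above chains constraints off the actual value instead of passing them as a second argument. A minimal sketch in Java of how such a wrapper can work (hypothetical class, not the real FEST or AssertJ API):

```java
import java.util.Objects;

public class FluentAssert {
    private final Object actual;

    private FluentAssert(Object actual) { this.actual = actual; }

    // Wrap the actual value first; constraints become method calls on the wrapper.
    static FluentAssert assertThat(Object actual) {
        return new FluentAssert(actual);
    }

    // Each constraint returns `this`, so checks can be chained.
    FluentAssert isEqualTo(Object expected) {
        if (!Objects.equals(actual, expected)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
        return this;
    }

    FluentAssert isNotNull() {
        if (actual == null) {
            throw new AssertionError("expected a non-null value");
        }
        return this;
    }

    public static void main(String[] args) {
        assertThat(40 + 2).isNotNull().isEqualTo(42);
        System.out.println("ok");
    }
}
```

A side effect of this design is that the actual-first order is not a mere convention here: it is baked into the API, since the chain has to start from the value under test.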
    

I don't know, but I've been part of several animated discussions about the order of arguments to equality tests in general.

There are a lot of people who think

if (42 == answer) {
  doSomething();
}

is preferable to

if (answer == 42) {
  doSomething();
}

in C-based languages. The reason for this is that if you accidentally put a single equals sign:

if (42 = answer) {
  doSomething();
}

will give you a compiler error, but

if (answer = 42) {
  doSomething();
}

might not, and would definitely introduce a bug that might be hard to track down. So who knows, maybe the person/people who set up the unit testing framework were used to thinking of equality tests in this way -- or they were copying other unit testing frameworks that were already set up this way.
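Incidentally, how much this safety net matters depends on the language. A small Java sketch for illustration: in Java, `if (answer = 42)` is a compile error regardless of order, because an `int` is not a `boolean`, so the C-style pitfall only survives for boolean variables:

```java
public class YodaConditions {
    // In C, `if (answer = 42)` compiles and the condition is always true.
    // In Java the equivalent line is a compile error, with or without
    // the Yoda order:
    //     if (answer = 42) { }   // error: incompatible types: int cannot
    //                            // be converted to boolean
    //
    // The pitfall remains only for booleans:
    static boolean buggyCheck(boolean done) {
        // Meant to be `done == true`; the single `=` assigns, so the
        // condition is always true regardless of the argument -- a bug.
        if (done = true) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(buggyCheck(false)); // true, despite passing false
        // Yoda order restores the compile-time check even here:
        // `if (true = done)` is rejected, since you cannot assign to a literal.
    }
}
```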