How do you unit test a unit test? [closed]

First, testing is like security -- you can never be 100% sure you've got it, but each layer adds more confidence and a framework for more easily fixing the problems that remain.

Second, you can break tests into subroutines, which themselves can then be tested. When you have 20 similar tests, extracting a (tested) subroutine means your main test becomes 20 simple invocations of that subroutine, which is much more likely to be correct.
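As a sketch of that idea in Python (calculate_discount is a hypothetical stand-in, since the question's actual code isn't shown):

```python
def calculate_discount(price, discount):
    return price - discount

def check_discount(price, discount, expected):
    # The shared logic lives in one tested place...
    assert calculate_discount(price, discount) == expected

def test_typical_discounts():
    # ...so the main test is just simple invocations of the helper.
    check_discount(100, 20, 80)
    check_discount(50, 5, 45)
    check_discount(30, 0, 30)
```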

Third, some would argue that TDD addresses this concern. That is, if you just write 20 tests and they pass, you're not completely confident that they are actually testing anything. But if each test you wrote initially failed, and then you fixed it, then you're much more confident that it's really testing your code. IMHO this back-and-forth takes more time than it's worth, but it is a process that tries to address your concern.


A test being wrong is unlikely to break your production code. At least, not any worse than having no test at all. So it's not a "point of failure": the tests don't have to be correct in order for the product to actually work. They might have to be correct before it's signed off as working, but the process of fixing any broken tests does not endanger your implementation code.

You can think of tests, even trivial tests like these, as a second opinion on what the code is supposed to do. One opinion is the test, the other is the implementation. If they don't agree, then you know you have a problem, and you look closer.

It's also useful if someone in future wants to implement the same interface from scratch. They shouldn't have to read the first implementation in order to know what Discount means, and the tests act as an unambiguous back-up to any written description of the interface you may have.

That said, you're trading off time. If there are other tests you could write in the time saved by skipping these trivial ones, maybe they would be more valuable. It really depends on your test setup and the nature of the application. If the Discount is important to the app, then you're going to catch any bugs in this method in functional testing anyway. All unit testing does is let you catch them at the point you're testing this unit, when the location of the error is immediately obvious, instead of waiting until the app is integrated together, when it might be much less obvious.

By the way, personally I wouldn't use 100 as the price in the test case (or rather, if I did then I'd add another test with another price). The reason is that someone in future might think that Discount is supposed to be a percentage. One purpose of trivial tests like this is to ensure that mistakes in reading the specification are corrected.
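To illustrate (a sketch assuming the Discount is a flat amount; the function name is hypothetical):

```python
def calculate_discount(price, discount):
    return price - discount

def test_discount_is_an_amount_not_a_percentage():
    # With a price of 100, "minus 20" and "minus 20%" both give 80,
    # so this case alone cannot expose a percentage misreading...
    assert calculate_discount(100, 20) == 80
    # ...but a second price can: 20% off 50 would be 40, not 30.
    assert calculate_discount(50, 20) == 30
```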

[Concerning the edit: I think it's inevitable that an incorrect specification is a point of failure. If you don't know what the app is supposed to do, then chances are it won't do it. But writing tests to reflect the spec doesn't magnify this problem, it merely fails to solve it. So you aren't adding new points of failure, you're just representing the existing faults in code instead of in waffle documentation.]


> All I see is more possible points of failure, with no real beneficial return. If the discount price is wrong, the test team will still find the issue, so how did unit testing save any work?

Unit testing isn't really supposed to save work, it's supposed to help you find and prevent bugs. It's more work, but it's the right kind of work. It's thinking about your code at the lowest levels of granularity and writing test cases that prove that it works under expected conditions, for a given set of inputs. It's isolating variables so you can save time by looking in the right place when a bug does present itself. It's saving that suite of tests so that you can use them again and again when you have to make a change down the road.

I personally think that most methodologies are not many steps removed from cargo cult software engineering, TDD included, but you don't have to adhere to strict TDD to reap the benefits of unit testing. Keep the good parts and throw out the parts that yield little benefit.

Finally, the answer to your titular question "How do you unit test a unit test?" is that you shouldn't have to. Each unit test should be brain-dead simple. Call a method with a specific input and compare it to its expected output. If the specification for a method changes then you can expect that some of the unit tests for that method will need to change as well. That's one of the reasons that you do unit testing at such a low level of granularity, so only some of the unit tests have to change. If you find that tests for many different methods are changing for one change in a requirement, then you may not be testing at a fine enough level of granularity.


Unit tests are there so that your units (methods) do what you expect. Writing the test first forces you to think about what you expect before you write the code. Thinking before doing is always a good idea.

Unit tests should reflect the business rules. Granted, there can be errors in the code, but writing the test first allows you to write it from the perspective of the business rule before any code has been written. Writing the test afterwards, I think, is more likely to lead to the error you describe because you know how the code implements it and are tempted just to make sure that the implementation is correct -- not that the intent is correct.

Also, unit tests are only one form -- and the lowest, at that -- of the tests you should be writing. Integration tests and acceptance tests should also be written, the latter by the customer if possible, to make sure that the system operates as expected. If you find errors during this testing, go back and write unit tests that fail because of the bug, then change your code to make them pass. Now you have regression tests that capture your bug fixes.
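A sketch of that workflow in Python (the names and the bug are hypothetical; suppose acceptance testing revealed that a discount larger than the price produced a negative total):

```python
def apply_discount(price, discount):
    # The fix: clamp at zero. Before the fix this returned price - discount,
    # which could go negative -- the bug found during acceptance testing.
    return max(price - discount, 0)

def test_discount_never_makes_total_negative():
    # Written to fail against the buggy version; it now passes and stays
    # in the suite as a regression test for this fix.
    assert apply_discount(10, 25) == 0
```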

[EDIT]

Another thing I have found with doing TDD: it almost forces good design by default, because highly coupled designs are nearly impossible to unit test in isolation. It doesn't take long using TDD to figure out that interfaces, inversion of control, and dependency injection -- all patterns that will improve your design and reduce coupling -- are really important for testable code.
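For instance, here's a minimal Python sketch (all names hypothetical) of how injecting a dependency makes a unit testable in isolation:

```python
class OrderService:
    def __init__(self, price_lookup):
        # The collaborator is injected rather than constructed internally,
        # so a test can hand in a stub instead of a real database.
        self._price_lookup = price_lookup

    def total(self, item, quantity):
        return self._price_lookup(item) * quantity

def test_total_uses_injected_prices():
    fake_prices = lambda item: {"widget": 4}[item]  # stub standing in for a database
    service = OrderService(fake_prices)
    assert service.total("widget", 3) == 12
```

Had OrderService built its own database connection internally, this test would be impossible to write without a live database.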


How does one test a test? Mutation testing is a valuable technique that I have personally used to surprisingly good effect. Read the linked article for more details and links to further academic references, but in essence it "tests your tests" by modifying your source code (changing "x += 1" to "x -= 1", for example) and then rerunning your tests, checking that at least one test now fails. Any mutations that don't cause test failures are flagged for later investigation.

You'd be surprised at how you can have 100% line and branch coverage with a set of tests that look comprehensive, and yet you can fundamentally change or even comment out a line in your source without any of the tests complaining. Often this comes down to not testing with the right inputs to cover all boundary cases, sometimes it's more subtle, but in all cases I was impressed with how much came out of it.
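To make the idea concrete, here is a minimal Python sketch of a surviving mutant (the names are hypothetical): the test achieves 100% line and branch coverage of is_adult, yet a one-character mutation passes it unnoticed. Tools such as mutmut (for Python) or PIT (for Java) automate this mutate-and-rerun loop.

```python
def is_adult(age):
    if age >= 18:        # mutant: change ">=" to ">"
        return True
    return False

def test_is_adult():
    assert is_adult(30) is True    # covers the "if" branch
    assert is_adult(5) is False    # covers the fall-through path
    # The boundary input, is_adult(18), is never checked: under the
    # mutant ">" both assertions above still pass, so the mutation survives.
```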