Let me begin by plugging a source: Pragmatic Unit Testing in Java with JUnit (there's a C#/NUnit version too, but I have this one; it's language-agnostic for the most part). Recommended.

Good tests should be A TRIP (the acronym isn't sticky enough; I had to pull out my printout of the book's cheat sheet to make sure I got this right):

  • Automatic: Invoking the tests and checking the results for PASS/FAIL should both be automatic.
  • Thorough: Coverage. Bugs tend to cluster around certain regions of the code, but make sure you test all key paths and scenarios; use a coverage tool if you need help finding untested regions.
  • Repeatable: Tests should produce the same results each time, every time. Tests should not rely on parameters you can't control.
  • Independent: Very important.
    • Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem.
    • Tests should not rely on each other; keep them isolated, with no assumptions about the order of test execution. Ensure a 'clean slate' before each test by using setup/teardown appropriately (see the sketch after this list).
  • Professional: In the long run you'll have as much test code as production code (if not more), so hold it to the same standard of good design: well-factored methods and classes with intention-revealing names, no duplication, tests with good names, etc.

  • Good tests also run Fast. Any test that takes over half a second to run needs to be worked on. The longer the test suite takes to run, the less frequently it will be run, and the more changes the developer will try to sneak in between runs; if anything breaks, it will take longer to figure out which change was the culprit.
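To make Independent and Repeatable concrete, here's a minimal NUnit-style sketch; the Account class and its numbers are invented for illustration. Each test gets a fresh fixture from SetUp, so nothing depends on execution order:

  using System;
  using NUnit.Framework;

  // Minimal class under test, invented purely for this sketch.
  public class Account
  {
      public decimal Balance { get; private set; }
      public Account(decimal initialBalance) { Balance = initialBalance; }
      public void Withdraw(decimal amount)
      {
          if (amount > Balance) throw new InvalidOperationException("Insufficient funds");
          Balance -= amount;
      }
  }

  [TestFixture]
  public class AccountTests
  {
      private Account _account;

      [SetUp]
      public void CreateFreshAccount()
      {
          // Runs before every test: each test starts from a clean slate,
          // so no test depends on what another test did or on execution order.
          _account = new Account(100m);
      }

      [Test]
      public void Withdraw_ReducesBalanceByAmount()
      {
          _account.Withdraw(40m);
          Assert.AreEqual(60m, _account.Balance);
      }

      [Test]
      public void Withdraw_MoreThanBalance_Throws()
      {
          Assert.Throws<InvalidOperationException>(() => _account.Withdraw(200m));
      }
  }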

Update 2010-08:

  • Readable: This can be considered part of Professional, but it can't be stressed enough. An acid test is to find someone who isn't part of your team and ask them to figure out the behavior under test within a couple of minutes. Tests need to be maintained just like production code, so make them easy to read even if it takes more effort. Tests should be symmetric (follow a pattern) and concise (test one behavior at a time). Use a consistent naming convention (e.g. the TestDox style). Avoid cluttering the test with "incidental details"; become a minimalist.
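To illustrate the "minimalist" point: hide incidental setup behind an intention-revealing helper, so the reader only sees what matters to the behavior. The Order/discount names below are invented for the example:

  using NUnit.Framework;

  // Minimal class under test, invented purely for this sketch.
  public class Order
  {
      public decimal Total { get; set; }
      public decimal ApplyDiscount() => Total > 100m ? Total * 0.9m : Total;
  }

  [TestFixture]
  public class OrderDiscountTests
  {
      [Test]
      public void Should_Apply_Ten_Percent_Discount_For_Orders_Over_100()
      {
          // Only the detail relevant to this behavior (the total) is visible;
          // in real code the helper would also fill in customer, dates, shipping
          // and other noise that this behavior doesn't care about.
          var order = CreateOrderWithTotal(150m);

          Assert.AreEqual(135m, order.ApplyDiscount());
      }

      // Intention-revealing helper keeps incidental construction details out of the test body.
      private static Order CreateOrderWithTotal(decimal total)
      {
          return new Order { Total = total };
      }
  }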

Apart from these, most of the rest are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (such as third-party DLLs), and don't bother testing trivial getters and setters. Keep an eye on the cost-to-benefit ratio and the probability of defects.


  1. Don't write ginormous tests. As the 'unit' in 'unit test' suggests, make each one as atomic and isolated as possible. If you must set up preconditions, do it with mock objects rather than recreating too much of the typical user environment by hand (see the sketch after this list).
  2. Don't test things that obviously work. Avoid testing the classes from a third-party vendor, especially the one supplying the core APIs of the framework you code in. E.g., don't test adding an item to the vendor's Hashtable class.
  3. Consider using a code coverage tool such as NCover to help discover edge cases you have yet to test.
  4. Try writing the test before the implementation. Think of the test as more of a specification that your implementation will adhere to. Cf. also behavior-driven development, a more specific branch of test-driven development.
  5. Be consistent. If you only write tests for some of your code, it's hardly useful. If you work in a team, and some or all of the others don't write tests, it's not very useful either. Convince yourself and everyone else of the importance (and time-saving properties) of testing, or don't bother.
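For point 1, here is a sketch of supplying a precondition with a test double instead of recreating the real environment. Every name below is invented for illustration; a mocking library such as Moq would achieve the same with less code:

  using NUnit.Framework;

  // The collaborator the code under test depends on.
  public interface IExchangeRateService
  {
      decimal GetRate(string fromCurrency, string toCurrency);
  }

  // Hand-rolled stub: supplies the precondition (a known rate) without
  // hitting a real web service, database, or config file.
  public class FixedRateService : IExchangeRateService
  {
      public decimal GetRate(string fromCurrency, string toCurrency) => 1.5m;
  }

  public class PriceConverter
  {
      private readonly IExchangeRateService _rates;
      public PriceConverter(IExchangeRateService rates) { _rates = rates; }
      public decimal Convert(decimal amount, string from, string to) => amount * _rates.GetRate(from, to);
  }

  [TestFixture]
  public class PriceConverterTests
  {
      [Test]
      public void Convert_MultipliesAmountByCurrentRate()
      {
          var converter = new PriceConverter(new FixedRateService());

          Assert.AreEqual(15m, converter.Convert(10m, "USD", "EUR"));
      }
  }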

Most of the answers here seem to address unit testing best practices in general (when, where, why and what), rather than actually writing the tests themselves (how). Since the question seemed pretty specific on the "how" part, I thought I'd post this, taken from a "brown bag" presentation that I conducted at my company.

Womp's 5 Laws of Writing Tests:


1. Use long, descriptive test method names.

   - Map_DefaultConstructorShouldCreateEmptyGisMap()
   - ShouldAlwaysDelegateXMLCorrectlyToTheCustomHandlers()
   - Dog_Object_Should_Eat_Homework_Object_When_Hungry()

2. Write your tests in an Arrange/Act/Assert style.

  • While this organizational strategy has been around for a while under various names, the recent introduction of the "AAA" acronym has been a great way to get it across. Making all your tests consistent with the AAA style makes them easy to read and maintain.
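For example, in NUnit terms (a sketch; the ShoppingCart class is invented for illustration):

  using System.Collections.Generic;
  using NUnit.Framework;

  // Minimal class under test, invented purely for this sketch.
  public class ShoppingCart
  {
      private readonly List<string> _items = new List<string>();
      public int ItemCount => _items.Count;
      public void AddItem(string name) => _items.Add(name);
  }

  [TestFixture]
  public class ShoppingCartTests
  {
      [Test]
      public void AddItem_IncreasesItemCount()
      {
          // Arrange: set up the object under test and its inputs.
          var cart = new ShoppingCart();

          // Act: perform the single behavior being tested.
          cart.AddItem("book");

          // Assert: verify the expected outcome.
          Assert.AreEqual(1, cart.ItemCount);
      }
  }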

3. Always provide a failure message with your Asserts.

Assert.That(x == 2 && y == 2,
    "An incorrect number of begin/end element processing events was raised by the XElementSerializer");
  • A simple yet rewarding practice that makes it obvious in your test runner what has failed. If you don't provide a message, you'll usually get something like "Expected true, was false" in the failure output, which forces you to go read the test to find out what's actually wrong.
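If your framework supports it, constraint-style asserts carry the expected and actual values into the failure output on top of your message. A hedged reworking of the snippet above, assuming x and y are the begin/end event counters:

  // x and y stand for the begin/end event counters from the snippet above.
  Assert.That(x, Is.EqualTo(2),
      "An incorrect number of begin element processing events was raised by the XElementSerializer");
  Assert.That(y, Is.EqualTo(2),
      "An incorrect number of end element processing events was raised by the XElementSerializer");

Splitting the compound boolean also means a failure pinpoints which counter was wrong.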

4. Comment the reason for the test – what’s the business assumption?

  /// A layer cannot be constructed with a null gisLayer, as every function 
  /// in the Layer class assumes that a valid gisLayer is present.
  [Test]
  public void ShouldNotAllowConstructionWithANullGisLayer()
  {
  }
  • This may seem obvious, but this practice will protect the integrity of your tests from people who don't understand the reason behind the test in the first place. I've seen many tests get removed or modified that were perfectly fine, simply because the person didn't understand the assumptions that the test was verifying.
  • If the test is trivial or the method name is sufficiently descriptive, it can be permissible to leave the comment off.

5. Every test must always revert the state of any resource it touches.

  • Use mocks where possible to avoid dealing with real resources.
  • Cleanup must be done at the test level, as sketched below. Tests must not have any reliance on the order of execution.
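When a test genuinely has to touch a real resource, put the cleanup in the per-test teardown so it runs whether the test passes or fails. A sketch (the temp-file scenario is invented for illustration):

  using System.IO;
  using NUnit.Framework;

  [TestFixture]
  public class ReportWriterTests
  {
      private string _tempFile;

      [SetUp]
      public void CreateTempFile()
      {
          _tempFile = Path.GetTempFileName();
      }

      [TearDown]
      public void DeleteTempFile()
      {
          // Runs after every test, pass or fail, so the file system is left
          // exactly as we found it and no other test can be affected.
          if (File.Exists(_tempFile))
              File.Delete(_tempFile);
      }

      [Test]
      public void Write_CreatesNonEmptyFile()
      {
          File.WriteAllText(_tempFile, "report body");   // stands in for the code under test

          Assert.That(new FileInfo(_tempFile).Length, Is.GreaterThan(0));
      }
  }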

Keep these goals in mind (adapted from the book xUnit Test Patterns by Meszaros)

  • Tests should reduce risk, not introduce it.
  • Tests should be easy to run.
  • Tests should be easy to maintain as the system evolves around them.

Some things to make this easier:

  • Tests should only fail because of one reason.
  • Tests should only test one thing.
  • Minimize test dependencies (no dependencies on databases, files, UI, etc.).

Don't forget that you can do integration testing with your xUnit framework too, but keep integration tests and unit tests separate.
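One lightweight way to keep them separate with NUnit is a category attribute, so the slow integration suite can be filtered out of the everyday run. A sketch (the repository test is invented, and the exact filter syntax depends on your runner):

  using NUnit.Framework;

  [TestFixture]
  public class OrderRepositoryIntegrationTests
  {
      [Test]
      [Category("Integration")]   // lets the runner include/exclude this group, e.g. nunit3-console --where "cat != Integration"
      public void Save_PersistsOrderToDatabase()
      {
          // ...would talk to a real (test) database here, which is why it lives in the slow suite...
          Assert.Pass();
      }
  }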