Best Practices of Test Driven Development Using C# and RhinoMocks

Solution 1:

Definitely a good list. Here are a few thoughts on it:

Write the test first, then the code.

I agree, at a high level. But, I'd be more specific: "Write a test first, then write just enough code to pass the test, and repeat." Otherwise, I'd be afraid that my unit tests would look more like integration or acceptance tests.
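For illustration, here's a minimal sketch of one such red/green cycle (the PriceCalculator type and NUnit-style test are hypothetical):

using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_AppliesTenPercentDiscount()
    {
        // Written first; it fails until Total exists.
        var calculator = new PriceCalculator();
        Assert.AreEqual(90m, calculator.Total(100m));
    }
}

// Just enough production code to make the test pass - and no more.
public class PriceCalculator
{
    public decimal Total(decimal price)
    {
        return price * 0.9m;
    }
}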

Design classes using dependency injection.

Agreed. When an object creates its own dependencies, you have no control over them. Inversion of Control / Dependency Injection gives you that control, allowing you to isolate the object under test with mocks/stubs/etc. This is how you test objects in isolation.
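To sketch the idea (hypothetical types), constructor injection hands the dependency in from outside, so a test can substitute a stub:

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class Greeter
{
    private readonly IClock _clock;

    // The dependency is injected, not created internally.
    public Greeter(IClock clock)
    {
        _clock = clock;
    }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}

// A test substitutes a fixed clock to control the time.
public class FixedClock : IClock
{
    public DateTime Now { get { return new DateTime(2009, 1, 1, 9, 0, 0); } }
}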

Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter.

Agreed. Note that even the presenter/controller can be tested using DI/IoC, by handing it a stubbed/mocked view and model. Check out Presenter First TDD for more on that.
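As a rough sketch (hypothetical view and presenter), the presenter depends only on a view interface, so a test can drive it with a stub and never touch real UI:

public interface IGreetingView
{
    string Message { get; set; }
}

public class GreetingPresenter
{
    private readonly IGreetingView _view;

    public GreetingPresenter(IGreetingView view)
    {
        _view = view;
    }

    public void Show(string name)
    {
        _view.Message = "Hello, " + name;
    }
}

// Stub view for the presenter's unit tests - no WinForms/WebForms needed.
public class StubGreetingView : IGreetingView
{
    public string Message { get; set; }
}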

Do not write static methods or classes.

Not sure I agree with this one. It is possible to unit test a static method/class without using mocks. So, perhaps this is one of those Rhino Mocks-specific rules you mentioned.
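For example, a pure static method (hypothetical) can be exercised directly, with no mocks at all:

using NUnit.Framework;

public static class Slug
{
    // Pure function: no state, no dependencies to fake out.
    public static string From(string title)
    {
        return title.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}

[TestFixture]
public class SlugTests
{
    [Test]
    public void From_ReplacesSpacesWithHyphens()
    {
        Assert.AreEqual("hello-world", Slug.From("Hello World"));
    }
}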

Program off interfaces, not classes.

I agree, but for a slightly different reason. Interfaces give the software developer a great deal of flexibility - well beyond support for the various mock object frameworks. For example, it is very difficult to support DI properly without interfaces.
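A small illustration (hypothetical types): the same interface can back very different implementations, which matters for production flexibility as much as for testing:

using System;
using System.Collections.Generic;

public interface IMessageSender
{
    void Send(string to, string body);
}

// Production can swap implementations without touching callers...
public class ConsoleMessageSender : IMessageSender
{
    public void Send(string to, string body)
    {
        Console.WriteLine("To {0}: {1}", to, body);
    }
}

// ...and tests can substitute a recording fake.
public class RecordingMessageSender : IMessageSender
{
    public readonly List<string> Sent = new List<string>();

    public void Send(string to, string body)
    {
        Sent.Add(to + ": " + body);
    }
}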

Isolate external dependencies.

Agreed. Hide external dependencies behind your own facade or adapter (as appropriate) with an interface. This will allow you to isolate your software from the external dependency, be it a web service, a queue, a database, or something else. This is especially important when your team doesn't control the dependency - which is what makes it "external" in the first place.
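A sketch of the idea (hypothetical payment gateway): the rest of the system depends on your interface, and only the adapter knows the vendor's API:

public interface IPaymentGateway
{
    bool Charge(decimal amount, string cardToken);
}

// The adapter is the only code that talks to the external service.
public class VendorPaymentGatewayAdapter : IPaymentGateway
{
    public bool Charge(decimal amount, string cardToken)
    {
        // The actual vendor web service call would live here,
        // isolated from everything that consumes IPaymentGateway.
        throw new System.NotImplementedException();
    }
}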

Mark as virtual the methods you intend to mock.

That's a limitation of Rhino Mocks: because it generates dynamic proxies, it can only intercept virtual members when mocking a concrete class (interface members don't have this restriction). In an environment that prefers hand-coded stubs over a mock object framework, that wouldn't be necessary.
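To illustrate (assuming Rhino Mocks 3.5's AAA syntax and a hypothetical TaxService), a concrete class's method must be virtual for the proxy to override it:

using NUnit.Framework;
using Rhino.Mocks;

public class TaxService
{
    // virtual, so Rhino Mocks' generated proxy can intercept it
    public virtual decimal RateFor(string region)
    {
        return 0.20m;
    }
}

[TestFixture]
public class TaxServiceMockingTests
{
    [Test]
    public void StubbedRateIsReturned()
    {
        var taxService = MockRepository.GenerateStub<TaxService>();
        taxService.Stub(t => t.RateFor("EU")).Return(0.25m);

        Assert.AreEqual(0.25m, taxService.RateFor("EU"));
    }
}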

And, a couple of new points to consider:

Use creational design patterns. This will assist with DI, but it also allows you to isolate that code and test it independently of other logic.
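For instance, a simple factory (hypothetical) pulls creation logic into one testable, swappable place:

using System.Data;
using System.Data.SqlClient;

public interface IConnectionFactory
{
    IDbConnection Create();
}

// Creation logic lives here, isolated from the code that uses connections.
public class SqlConnectionFactory : IConnectionFactory
{
    private readonly string _connectionString;

    public SqlConnectionFactory(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IDbConnection Create()
    {
        return new SqlConnection(_connectionString);
    }
}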

Write tests using Bill Wake's Arrange/Act/Assert technique. This technique makes it very clear what configuration is necessary, what is actually being tested, and what is expected.
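A quick sketch of the shape (hypothetical Account type):

using NUnit.Framework;

public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance) { Balance = openingBalance; }

    public void Withdraw(decimal amount) { Balance -= amount; }
}

[TestFixture]
public class AccountTests
{
    [Test]
    public void Withdraw_ReducesBalance()
    {
        // Arrange: set up the object under test.
        var account = new Account(100m);

        // Act: perform the one behavior being tested.
        account.Withdraw(30m);

        // Assert: verify the expected outcome.
        Assert.AreEqual(70m, account.Balance);
    }
}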

Don't be afraid to roll your own mocks/stubs. Often, you'll find that using mock object frameworks makes your tests incredibly hard to read. By rolling your own, you'll have complete control over your mocks/stubs, and you'll be able to keep your tests readable. (Refer back to previous point.)
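A hand-rolled stub can be as simple as this (hypothetical interface):

public interface IExchangeRates
{
    decimal RateFor(string fromCurrency, string toCurrency);
}

// Trivially readable: no framework syntax for the reader to decode.
public class StubExchangeRates : IExchangeRates
{
    private readonly decimal _rate;

    public StubExchangeRates(decimal rate)
    {
        _rate = rate;
    }

    public decimal RateFor(string fromCurrency, string toCurrency)
    {
        return _rate;
    }
}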

Avoid the temptation to refactor duplication out of your unit tests into abstract base classes, or setup/teardown methods. Doing so hides configuration/clean-up code from the developer trying to grok the unit test. In this case, the clarity of each individual test is more important than refactoring out duplication.

Implement Continuous Integration. Check in your code on every "green bar." Build your software and run your full suite of unit tests on every check-in. (Sure, this isn't a coding practice per se, but it is an incredible tool for keeping your software clean and fully integrated.)

Solution 2:

If you are working with .NET 3.5, you may want to look into the Moq mocking library - it uses expression trees and lambdas to remove the non-intuitive record/replay idiom of most other mocking libraries.

Check out this quickstart to see how much more intuitive your test cases become; here is a simple example:

// ShouldExpectMethodCallWithVariable
int value = 5;
var mock = new Mock<IFoo>();

// The expectation is expressed as a lambda - no record/replay step.
// (Later Moq versions renamed Expect to Setup.)
mock.Expect(x => x.Duplicate(value)).Returns(() => value * 2);

// mock.Object is the generated IFoo implementation.
Assert.AreEqual(value * 2, mock.Object.Duplicate(value));

Solution 3:

Know the difference between fakes, mocks, and stubs, and when to use each.

Avoid over-specifying interactions with mocks. This makes tests brittle.
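A rough sketch of the difference, using Moq's later Setup/Verify syntax (the Checkout type and its collaborators are hypothetical):

using Moq;
using NUnit.Framework;

public interface ILogger { void Info(string message); }
public interface IGateway { void Charge(decimal amount, string token); }

public class Checkout
{
    private readonly ILogger _logger;
    private readonly IGateway _gateway;

    public Checkout(ILogger logger, IGateway gateway)
    {
        _logger = logger;
        _gateway = gateway;
    }

    public void Pay(decimal amount, string token)
    {
        _logger.Info("Charging " + amount);
        _gateway.Charge(amount, token);
    }
}

[TestFixture]
public class CheckoutTests
{
    [Test]
    public void Pay_ChargesTheCard()
    {
        var logger = new Mock<ILogger>();
        var gateway = new Mock<IGateway>();
        var checkout = new Checkout(logger.Object, gateway.Object);

        checkout.Pay(100m, "token");

        // Brittle: pinning down incidental logging couples the test
        // to implementation details:
        //   logger.Verify(l => l.Info(It.IsAny<string>()), Times.Once());

        // Focused: verify only the interaction this test is about.
        gateway.Verify(g => g.Charge(100m, "token"), Times.Once());
    }
}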

Solution 4:

This is a very helpful post!

I would add that it is always important to understand the context and the System Under Test (SUT). Following TDD principles to the letter is much easier when you're writing new code in an environment where the existing code follows the same principles. But when you're writing new code in a non-TDD legacy environment, you'll find that your TDD efforts can quickly balloon far beyond your estimates and expectations.

For those of you who live in an entirely academic world, timelines and delivery may not be important, but in an environment where software is money, making effective use of your TDD effort is critical.

TDD is highly subject to the law of diminishing marginal returns. In short, your effort invested in TDD is increasingly valuable until you hit a point of maximum return, after which subsequent time invested in TDD has less and less value.

I tend to believe that TDD's primary value is in boundary (black-box) testing, as well as in occasional white-box testing of mission-critical areas of the system.