Unit Testing File I/O
Reading through the existing unit-testing threads here on Stack Overflow, I couldn't find one with a clear answer about how to unit test file I/O operations. I have only recently started looking into unit testing; I was already aware of the advantages, but I'm having difficulty getting used to writing tests first. I have set up my project to use NUnit and Rhino Mocks, and although I understand the concepts behind them, I'm having a little trouble understanding how to use mock objects.
Specifically, I have two questions that I would like answered. First, what is the proper way to unit test file I/O operations? Second, in my attempts to learn about unit testing, I have come across dependency injection. After getting Ninject set up and working, I was wondering whether I should use DI within my unit tests, or just instantiate objects directly.
Solution 1:
There isn't necessarily one thing to do when testing the file system. In truth, there are several things you might do, depending on the circumstances.
The question you need to ask is: What am I testing?
That the file system works? You probably don't need to test that unless you're using an operating system which you're extremely unfamiliar with. So if you're simply giving a command to save files, for instance, it's a waste of time to write a test to make sure they really save.
That the files get saved to the right place? Well, how do you know what the right place is? Presumably you have code that combines a path with a file name. This is code you can test easily: Your input is two strings, and your output should be a string which is a valid file location constructed using those two strings.
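For example, a path-building method like this can be covered by an ordinary unit test with no disk access at all. The class and test below are an illustrative sketch (names are made up, not taken from the question), and the expected value assumes Windows-style separators:

using System.IO;
using NUnit.Framework;

public class SavePathBuilder
{
    public string Build(string directory, string fileName)
    {
        // Pure string manipulation: nothing here touches the file system.
        return Path.Combine(directory, fileName);
    }
}

[TestFixture]
public class SavePathBuilderTests
{
    [Test]
    public void Combines_directory_and_file_name_into_a_full_path()
    {
        var builder = new SavePathBuilder();

        Assert.AreEqual(@"C:\exports\report.txt",
                        builder.Build(@"C:\exports", "report.txt"));
    }
}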
That you get the right set of files from a directory? You'll probably have to write a test for your file-getter class that really tests the file system. But you should use a test directory with files in it that won't change. You should also put this test in an integration test project, because this is not a true unit test, because it depends on the file system.
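As a rough illustration (the path and category name here are hypothetical), such an integration test might look like this:

using System.IO;
using NUnit.Framework;

[TestFixture]
[Category("Integration")] // so it can be excluded from fast unit-test runs
public class FileGetterIntegrationTests
{
    [Test]
    public void Finds_the_known_files_in_the_fixed_test_directory()
    {
        // TestData\KnownFiles is a directory kept under source control
        // with exactly three files that never change.
        var files = Directory.GetFiles(@"TestData\KnownFiles");

        Assert.AreEqual(3, files.Length);
    }
}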
But, I need to do something with the files I get. For that test, use a fake for your file-getter class; the fake should return a hard-coded list of files. If you use a real file-getter and a real file-processor, you won't know which one caused a test failure. So your file-processor class should depend on the file-getter interface: in real code you pass in the real file-getter, and in test code you pass in a fake file-getter that returns a known, static list.
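A rough sketch of that arrangement (all of the names below are illustrative, not from the question):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public interface IFileGetter
{
    IEnumerable<string> GetFiles(string directory);
}

public class FileProcessor
{
    private readonly IFileGetter _fileGetter;

    public FileProcessor(IFileGetter fileGetter)
    {
        _fileGetter = fileGetter;
    }

    public int CountLogFiles(string directory)
    {
        // The logic under test; it never touches the real file system.
        return _fileGetter.GetFiles(directory).Count(f => f.EndsWith(".log"));
    }
}

public class FakeFileGetter : IFileGetter
{
    public IEnumerable<string> GetFiles(string directory)
    {
        // Hard-coded, known list returned to the class under test.
        return new[] { "a.log", "b.log", "c.txt" };
    }
}

[TestFixture]
public class FileProcessorTests
{
    [Test]
    public void Counts_only_log_files()
    {
        var processor = new FileProcessor(new FakeFileGetter());

        Assert.AreEqual(2, processor.CountLogFiles("any directory"));
    }
}

With Rhino Mocks you could generate the stub instead of hand-writing FakeFileGetter, but the idea is the same: the processor's logic is tested against a known list, and the real file system is never involved.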
The fundamental principles are:
- Use a fake file system, hidden behind an interface, when you're not testing the file system itself.
- If you need to test real file operations, then:
  - mark the test as an integration test, not a unit test;
  - have a designated test directory, set of files, etc. that will always be there in an unchanged state, so your file-oriented integration tests can pass consistently.
Solution 2:
Check out Tutorial to TDD using Rhino Mocks and SystemWrapper.
SystemWrapper wraps many of the System.IO classes, including File, FileInfo, Directory, DirectoryInfo, and more. You can see the complete list.
In this tutorial I show how to do the testing with MbUnit, but it's exactly the same for NUnit.
Your test is going to look something like this:
[Test]
public void When_try_to_create_directory_that_already_exists_return_false()
{
    var directoryInfoStub = MockRepository.GenerateStub<IDirectoryInfoWrap>();
    directoryInfoStub.Stub(x => x.Exists).Return(true);

    Assert.AreEqual(false, new DirectoryInfoSample().TryToCreateDirectory(directoryInfoStub));
    directoryInfoStub.AssertWasNotCalled(x => x.Create());
}
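For context, the class under test might look roughly like this. This is a sketch, not the tutorial's exact code, and the SystemWrapper namespace is an assumption:

using SystemWrapper.IO; // assumed namespace for IDirectoryInfoWrap

public class DirectoryInfoSample
{
    public bool TryToCreateDirectory(IDirectoryInfoWrap directoryInfo)
    {
        if (directoryInfo.Exists)
        {
            return false; // already there, so Create() must not be called
        }

        directoryInfo.Create();
        return true;
    }
}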
Solution 3:
Q1:
You have three options here.
Option 1: Live with it.
(no example :P)
Option 2: Create a slight abstraction where required.
Instead of doing the file I/O (File.ReadAllBytes or whatever) in the method under test, you could change it so that the IO is done outside and a stream is passed instead.
public class MyClassThatOpensFiles
{
    public bool IsDataValid(string filename)
    {
        var fileBytes = File.ReadAllBytes(filename);
        return DoSomethingWithFile(fileBytes);
    }
}
would become
// File IO is done outside prior to this call, so at the level
// above, the caller would open a file and pass in the stream
public class MyClassThatNoLongerOpensFiles
{
    public bool IsDataValid(Stream stream) // or byte[]
    {
        return DoSomethingWithStreamInstead(stream); // can be a MemoryStream in tests
    }
}
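A test can then feed the method an in-memory stream. This is a sketch, assuming IsDataValid returns true for well-formed data:

using System.IO;
using System.Text;
using NUnit.Framework;

[TestFixture]
public class MyClassThatNoLongerOpensFilesTests
{
    [Test]
    public void IsDataValid_accepts_well_formed_data()
    {
        var bytes = Encoding.UTF8.GetBytes("well-formed contents");

        // No file on disk: the "file" lives entirely in memory.
        using (var stream = new MemoryStream(bytes))
        {
            var sut = new MyClassThatNoLongerOpensFiles();

            Assert.IsTrue(sut.IsDataValid(stream));
        }
    }
}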
This approach is a tradeoff. Yes, it is more testable, but the testability comes at the cost of a little extra complexity: it can hurt maintainability and add to the amount of code you have to write, and you may simply move your testing problem up one level.
However, in my experience this is a nice, balanced approach as you can generalise and make testable the important logic without committing yourself to a fully wrapped file system. I.e. you can generalise the bits you really care about, while leaving the rest as is.
Option 3: Wrap the whole file system
Taking it a step further, mocking the filesystem can be a valid approach; it depends on how much bloat you're willing to live with.
I've gone this route before; I had a wrapped file system implementation, but in the end I just deleted it. There were subtle differences in the API, I had to inject it everywhere, and ultimately it was extra pain for little gain, since many of the classes using it weren't hugely important to me. If I had been using an IoC container, or writing something critical where the tests needed to be fast, I might have stuck with it, though. As with all of these options, your mileage may vary.
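For reference, the wrapped approach usually boils down to something like this minimal sketch (illustrative names, not an existing library):

using System.IO;

public interface IFileSystem
{
    bool FileExists(string path);
    byte[] ReadAllBytes(string path);
    void WriteAllBytes(string path, byte[] contents);
}

// Thin pass-through used in production; tests substitute a fake or a mock.
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path)
    {
        return File.Exists(path);
    }

    public byte[] ReadAllBytes(string path)
    {
        return File.ReadAllBytes(path);
    }

    public void WriteAllBytes(string path, byte[] contents)
    {
        File.WriteAllBytes(path, contents);
    }
}

Every class that touches files then has to take an IFileSystem, which is exactly the injection overhead described above.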
As for your IoC container question:
Inject your test doubles manually. If you have to do a lot of repetitive work, just use setup/factory methods in your tests. Using an IoC container for testing would be overkill in the extreme! Maybe I am not understanding your second question, though.
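For instance, manual injection with a setup method might look like this (reusing the illustrative FileProcessor and FakeFileGetter sketch from Solution 1):

using NUnit.Framework;

[TestFixture]
public class ManualInjectionTests
{
    private FileProcessor _processor;

    [SetUp]
    public void CreateSystemUnderTest()
    {
        // New up the dependencies by hand; no container involved.
        _processor = new FileProcessor(new FakeFileGetter());
    }

    [Test]
    public void Processor_is_built_without_a_container()
    {
        Assert.AreEqual(2, _processor.CountLogFiles("any directory"));
    }
}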