How I Structure Unit Tests
I've been asked how I structure my unit tests. Since the style I've converged on through practice is a bit unusual in the .NET world, I thought my answer warranted a blog post.
My philosophy is that tests are a form of living documentation of your code. If I need to learn about some code, the tests are the first place I go to. I want them to be simple, communicative and representative of real-world usage. When I'm thinking about a chunk of code that I want to test, I want to ensure that the tests describe the intent of the code, without being tightly coupled to a particular implementation. I struggled with this notion literally for years - how can you do that?!
Let's dive into an example. At the moment I'm building an event cache for a view model, which is hooked up to an underlying event source, such as a web service. When the view model is on-screen, the cache simply forwards events from the source. When the view model is off-screen and an event is produced, the cache remembers it and forwards it as soon as the view model is re-shown.
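To make that behaviour concrete, here's a minimal sketch of the kind of cache I'm describing. The type and member names (EventCache<TEvent>, OnEvent, Activate, Deactivate) are illustrative assumptions for this post, not the real API:

using System;

public sealed class EventCache<TEvent> {
    private readonly Action<TEvent> _forward;
    private bool _isActive;
    private bool _hasPending;
    private TEvent _pending;

    public EventCache(Action<TEvent> forward) {
        _forward = forward;
    }

    // Called by the underlying event source whenever it produces an event.
    public void OnEvent(TEvent e) {
        if (_isActive) {
            _forward(e);            // on-screen: forward immediately
        } else {
            _pending = e;           // off-screen: remember only the most recent event
            _hasPending = true;
        }
    }

    // Called when the view model is shown again.
    public void Activate() {
        _isActive = true;
        if (_hasPending) {
            _hasPending = false;
            _forward(_pending);     // replay the cached event as soon as we're re-shown
        }
    }

    // Called when the view model goes off-screen.
    public void Deactivate() {
        _isActive = false;
    }
}

None of the tests below depend on these internals - they only exercise the public surface.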
So how do we communicate the intent? We treat the thing under test as a black box and test its public API. This keeps our tests representative of real-world usage, but also limits our knowledge to what is publicly exposed: we test by controlling inputs and making assertions about the outputs. In English, this translates to:
- When source fires event and view model is activated
  - Cache forwards event immediately
- When source fires event and view model is deactivated
  - Cache does not forward event
  - And view model is then reactivated
    - Cache forwards event produced while deactivated immediately
    - Cache forwards further events from source immediately
- When source fires multiple events and view model is deactivated
  - Cache does not forward event
  - And view model is then reactivated
    - Cache forwards most recent event produced while deactivated immediately
    - Cache forwards further events from source immediately
What's appealing about this is that it reads like a specification document. We're using scenarios as a way of grouping pieces of behaviour, and this can be directly translated into a suite of unit tests! The mapping is simple: a scenario becomes a test class, a behaviour becomes a test method. NUnit and MSTest both support the notion that a public nested class, annotated appropriately, will be picked up by the test runner and executed.
So in code this will look like:
[TestFixture]
public class CacheSpec {
    [TestFixture]
    public class When_Source_Fires_Event_And_View_Model_Is_Activated {
        [Test]
        public void Cache_Forwards_Event_Immediately() { ... }
    }

    [TestFixture]
    public class When_Source_Fires_Event_And_View_Model_Is_Deactivated {
        [Test]
        public void Cache_Does_Not_Forward_Event() { ... }

        [TestFixture]
        public class And_View_Model_Is_Then_Reactivated {
            [Test]
            public void Cache_Forwards_Event_Produced_While_Deactivated_Immediately() { ... }

            [Test]
            public void Cache_Forwards_Further_Events_From_Source_Immediately() { ... }
        }
    }

    [TestFixture]
    public class When_Source_Fires_Multiple_Events_And_View_Model_Is_Deactivated {
        [Test]
        public void Cache_Does_Not_Forward_Event() { ... }

        [TestFixture]
        public class And_View_Model_Is_Then_Reactivated {
            [Test]
            public void Cache_Forwards_Most_Recent_Event_Produced_While_Deactivated_Immediately() { ... }

            [Test]
            public void Cache_Forwards_Further_Events_From_Source_Immediately() { ... }
        }
    }
}
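To give a feel for what lives inside those method bodies, here's one possible implementation of the "most recent event" test, written against the hypothetical EventCache<TEvent> sketched earlier and shown as a standalone fixture for brevity. It drives inputs through the public API and asserts only on what comes out:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class And_View_Model_Is_Then_Reactivated {
    [Test]
    public void Cache_Forwards_Most_Recent_Event_Produced_While_Deactivated_Immediately() {
        // Observe everything the cache forwards by collecting it in a list.
        var forwarded = new List<string>();
        var cache = new EventCache<string>(forwarded.Add);

        // Produce two events while the view model is off-screen.
        cache.Deactivate();
        cache.OnEvent("first");
        cache.OnEvent("second");
        Assert.That(forwarded, Is.Empty);

        // Re-show the view model: only the most recent event should be replayed.
        cache.Activate();
        Assert.That(forwarded, Is.EqualTo(new[] { "second" }));
    }
}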
The first thing I do when refactoring legacy code is write a new set of tests in this style, and every time I've done it I've noticed that the original test code was missing test cases. If you write your tests according to the typical MethodUnderTest_Scenario_Expectation style, it's very easy to forget a test case - perhaps because you're grouping tests by MethodUnderTest, not by scenario. In the style described above, you can also use your IDE's code folding to collapse away the scenarios you don't care about at the moment. Indeed, folding everything down to the method declarations gives you, with a bit of C# syntax sprinkled in, exactly the same bulleted list I described above!
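For contrast, the same behaviour written in the conventional naming style might look something like this (the method names are purely illustrative):

[TestFixture]
public class EventCacheTests {
    [Test]
    public void OnEvent_WhenViewModelActivated_ForwardsEventImmediately() { ... }

    [Test]
    public void OnEvent_WhenViewModelDeactivated_DoesNotForwardEvent() { ... }

    [Test]
    public void Activate_WhenEventsProducedWhileDeactivated_ForwardsMostRecentEvent() { ... }
}

It works, but the flat list of methods grouped by the method under test gives you no scenario structure to fold away, and nothing prompts you to ask what else should happen in each scenario.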
Don't just rush into your enterprise codebase and start making this change. I've seen plenty of hand-wringing over this approach in Java and .NET land, although it seems pretty common in languages like Ruby thanks to RSpec. That said, there are .NET tools such as SpecFlow that follow this line of thinking, just presented in a slightly different way. Try it - your mileage may vary, but it works well for me.