While reading a colleague’s unit tests recently, I noticed some odd choices of test case, like a test called “testConstructor” that confirms no exception is thrown when you call the default constructor. It struck me that the names of their test methods revealed something about their motivation, their philosophy of testing.
If your motivation is simply coverage, and your goal is to ensure that you have a test for every possible method, then you’re going to end up with a test that checks that a default constructor works, and a test for a setter that uses 50 lines of reflection to verify it.
Motivated by coverage, that “testConstructor” method or the “testSetX” method will pass through your “do I need this test?” filter.
Those names, like “testConstructor”, “testSetX”, “testSomeMethodName”, all scream “coverage”. The author’s motivation was to write at least one test for each method. Coverage, coverage, coverage.
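A minimal sketch of what this coverage-motivated style tends to look like. The Widget class and its methods are invented purely for illustration, and plain Java stands in for a real test framework so the example is self-contained:

```java
// Hypothetical class under test, invented for illustration only.
class Widget {
    private int x;
    public Widget() { }
    public void setX(int x) { this.x = x; }
    public int getX() { return x; }
}

public class WidgetTest {
    // "One test per method" thinking produces names like these.
    static void testConstructor() {
        new Widget(); // passes as long as no exception is thrown
    }

    static void testSetX() {
        Widget w = new Widget();
        w.setX(42);
        if (w.getX() != 42) throw new AssertionError("setX failed");
    }

    public static void main(String[] args) {
        testConstructor();
        testSetX();
        System.out.println("all tests passed");
    }
}
```

Both tests will bump the coverage numbers, but neither name tells a reader anything about what the class is supposed to do.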
What’s wrong with that? Isn’t coverage the whole point of a unit test?
Well, no, not in my philosophy.
Lots of people would agree that the purpose of a unit test is to ensure that the test subject works.
But what does “work” mean?
Something “works” if it behaves correctly. What does that mean? What determines “correctness”?
That’s the point of the tests. The tests define what correct behaviour means for the test subject. The tests are the specification for all the things the test subject should do and all the different circumstances that expose that behaviour.
As a specification, a good unit test does two things. First it is executable, so it can be used to verify and protect that correct behaviour. Second, it is readable, so that it communicates the specification to future maintainers, or your code review buddy.
If you think of tests this way, then “testConstructor” will never pass through your “do I need this test?” filter, because it says nothing about the required behaviour of the test subject, and it does nothing to communicate the specification.
If you apply a naming pattern that matches your “clearly communicated, executable specification” goal, then instead of “testConstructor” you’d have typed something like “shouldNotThrowExceptionWhenConstructed”.
That makes for a fluent expression of the intended behaviour in a particular scenario, but it also makes it more obvious how redundant such a test is. Is this really an important part of the specification for this component that we want our readers to notice and understand?
No. It really isn’t.
If you think about tests in terms of coverage, you’ll choose names like “testConstructor” and write tests that serve only to enhance coverage statistics.
If you think that tests should be a clearly communicated and executable expression of the required behaviour of the test subject in particular scenarios, then you’re more likely to choose names like “shouldDoXWhenY”, and you’ll write tests that explore, expose and express what it means for something to “work”.
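To make the contrast concrete, here is a sketch of the same idea in the specification style. The BoundedQueue class and its behaviour are invented for illustration, and plain Java again stands in for a test framework:

```java
// Hypothetical class under test, invented for illustration only.
class BoundedQueue {
    private final int capacity;
    private int size;
    public BoundedQueue(int capacity) { this.capacity = capacity; }
    public boolean offer(String item) {
        if (size == capacity) return false; // full: reject the item
        size++;
        return true;
    }
}

public class BoundedQueueSpec {
    // Each name states a scenario ("when...") and the required behaviour.
    static void shouldAcceptItemWhenNotFull() {
        BoundedQueue q = new BoundedQueue(1);
        if (!q.offer("a")) throw new AssertionError("expected offer to succeed");
    }

    static void shouldRejectItemWhenFull() {
        BoundedQueue q = new BoundedQueue(1);
        q.offer("a");
        if (q.offer("b")) throw new AssertionError("expected offer to fail when full");
    }

    public static void main(String[] args) {
        shouldAcceptItemWhenNotFull();
        shouldRejectItemWhenFull();
        System.out.println("all tests passed");
    }
}
```

Read the method names alone and you already have a rough specification of the component; the bodies merely make it executable.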
Come back to that test method name, “shouldNotThrowExceptionWhenConstructed”, or think of what that test was actually doing: calling the constructor. If you write your code test-first, would you ever think of writing such a test? Would you ever think “OK, what’s the next test I need to make this thing pass? Oh, I know, let’s check that you can call the constructor”?
“Test-last” fans tend to focus on coverage, guide their next testing steps by looking at which methods haven’t been hit yet, write test names that describe coverage, and produce test classes that don’t express the specification. You end up with a test method whose name says which method it wanted to call (or at least, the name of that method as it was at one time in the past), but no explanation of the intent of the test: no explanation of the scenario the tester was trying to create, or the behaviour they thought was required in that scenario.
TDDers tend to write tests that express the specification, with names that reveal the intended scenario and behaviour. In other words, better.