Saturday, February 12, 2011

Code coverage isn't everything. But it's not nothing, either.

Recently I've been spending a lot of time writing tests.

I'm not a fanatical Test-First Development practitioner, though I think there's a lot of value in that practice. I'd say I:

  • occasionally write my tests first,

  • usually write my tests simultaneously with my code (often with two windows open on the same screen), jumping back and forth and adding each new test case as soon as I think of it,

  • rarely write my tests afterwards.



I'm comfortable with that distribution; it feels about right to me.

Interestingly, my behavior changes dramatically when I'm fixing a bug, as opposed to working on new feature development. When I embark on a bug fix, I nearly always write the test first; I think the proportion may be as high as 95% of the time. I do this because:

  • it's extremely comforting to make a bug fix, and watch the test case flip from "failing" to "passing", while all the other test cases continue to pass,

  • but more importantly, I've found over the years that writing and refining the test case for the bug is just about the best way to isolate and sharpen my ideas about what precisely is wrong with the code, and how exactly it should be fixed (a rough sketch of what such a test looks like follows this list).
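
Here's roughly what I mean, as a minimal sketch in C. The parse_port() function and its bug ("parse_port() happily accepts 99999") are invented for illustration; the point is that the test gets written first, fails, and then flips to passing once the fix lands.

    #include <assert.h>
    #include <stdlib.h>

    /* Hypothetical function under test.  Imagine the bug report was
     * "parse_port() happily accepts 99999"; before the fix, the range
     * check below was missing. */
    static int parse_port(const char *s)
    {
        long n = strtol(s, NULL, 10);
        if (n < 1 || n > 65535)       /* the fix */
            return -1;
        return (int)n;
    }

    /* Written before the fix: the first assertion failed, pinning down
     * exactly what "wrong" meant.  After the fix it flips to passing,
     * while the existing case keeps passing. */
    static void test_parse_port_rejects_out_of_range(void)
    {
        assert(parse_port("99999") == -1);   /* the bug */
        assert(parse_port("8080") == 8080);  /* must not regress */
    }

    int main(void)
    {
        test_parse_port_rejects_out_of_range();
        return 0;
    }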



So, anyway, I've been writing a lot of tests recently, and as part of that effort I've been spending some time studying code coverage reports. I've been using the built-in gcov toolset that is part of the GNU Compiler Collection, and also using the nifty lcov tools that build upon gcov to provide useful and easy-to-read reports and displays.
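
For reference, the basic workflow looks roughly like this; it's a sketch assuming GCC on a single-file program, with the usual lcov invocations noted in the comment (exact file names and flags vary by version):

    /* example.c: a trivial program to exercise the coverage toolchain.
     *
     * Typical commands (adjust for your setup):
     *
     *   gcc --coverage -O0 -o example example.c    (emits coverage notes, .gcno)
     *   ./example                                  (emits coverage data, .gcda)
     *   lcov --capture --directory . --output-file coverage.info
     *   genhtml coverage.info --output-directory coverage-html
     *
     * The genhtml pages are the easy-to-read reports mentioned above. */
    #include <stdio.h>

    int main(void)
    {
        printf("hello, coverage\n");
        return 0;
    }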

I take a very pragmatic view when it comes to tests:

  • Testing is a tool; writing and running tests is one way to help ensure that you are building great software, which should be the only sort you even try to build.

  • Code coverage is a tool for helping you write tests. If you care about writing tests (and you should), then you should care about writing the best tests you can. Code coverage is something that can help you improve your tests.



I don't have any sort of religion about code coverage. I don't think that tests with higher coverage are mandatory; I don't think that there is some magic level of coverage that you must achieve. I think that anybody who is spending any time thinking about code coverage should immediately go read Brian Marick's excellent essay on how to use code coverage appropriately: How To Misuse Code Coverage.

However, I do think that, all things being equal, higher code coverage is better than lower code coverage, to wit:

  • If I add a new test case, or suite of cases, and overall code coverage goes up, I am pleased. The test suite is more comprehensive, and therefore more useful.

  • If, however, code coverage tells me that I've already got a lot of coverage in this area, then I need to think about other ways to improve my tests.



In my experience, there are often large gaps in test coverage, and there is often a lot of low-hanging fruit: writing a small number of simple, cheap-to-run tests can quickly bring a much larger portion of your code under test.

Furthermore, studying your code coverage reports can help you think about new test cases to write. A good coverage tool (like lcov) will show you not just line coverage, but branch coverage, function coverage, and other ways of looking at how your tests drive your code. Just sitting down and staring at coverage reports, I find that ideas for new tests seem to leap off the screen.
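
To make that concrete, here's a tiny sketch in C (the function and its numbers are invented) of the kind of gap branch coverage catches and line coverage doesn't:

    #include <stdio.h>

    /* A single line can hide an untested branch.  A test that only calls
     * average() with count > 0 executes every line below, so line coverage
     * reports 100%; but the count == 0 outcome of the ternary is never
     * taken, and branch coverage flags it, which is a nudge to go write
     * the empty-input test. */
    static double average(const int *values, int count)
    {
        int sum = 0;
        int i;
        for (i = 0; i < count; i++)
            sum += values[i];
        return count ? (double)sum / count : 0.0;
    }

    int main(void)
    {
        int v[] = { 2, 4, 6 };
        printf("%f\n", average(v, 3));  /* lines: 100%; branches: not quite */
        return 0;
    }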

And that's what I'm really looking for when I pull up the code coverage tool: inspiration. Writing tests is hard, but there are always more tests to write, and always ways to make my tests better, so any tool which helps me do that is a tool which will have a prominent place on my shelf.

So, no: code coverage isn't everything. But it's not nothing, either.

1 comment:

  1. I think one of the key things is to think about how to test stuff as you go along, not as an afterthought. Ideally this should happen when APIs are defined and when standards are written: "how will I know this feature works?" is something everyone should consider. After that, exactly when the tests are written is a detail, as long as the tests get written. The argument against write-tests-after is that we get distracted by other issues, and never get back to them.
