I was listening to a presentation on testing, and the question on coverage came up. I was asked my opinion, and this interesting thought occurred to me. So I tweeted:
New Rule: At least 80% coverage on 60% of the app. 70% of the time, it works every time.
Mark Clearwater (me)
So what do I mean by this? Basically, 100% test coverage overall is not a good idea. Trust me, I've tried, and the diminishing returns of chasing such high coverage are very clear. On top of that, the value of those last few percent is negligible. It is barely coverage testing at all, more like box ticking.
Having said that, there are key parts of the code base that clearly should have very high, or even complete 100%, coverage: your business rules, complex logic, control flow. You want verification over this stuff. Aim for a comfortable 80%+ on these parts of the system: first, to ensure you have enough coverage for the success cases, and second, to avoid chasing a time-wasting 100%.
From experience, and just pulling a nice round number, I would say that kind of code makes up about 60% of your application. The other 40% is plumbing: wiring up, framework integration, networking or OS integration, database infrastructure, and other hard-to-test functionality.
So that's why I think we should all aim for the 80/60 rule: 80% coverage over 60% of the application. And the other 40%? Realistically, you're probably best off leaving that to integration and user acceptance tests.
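As a sketch of how you might actually encode this split, here is a hypothetical coverage.py configuration for a Python project: the plumbing modules are excluded from measurement via `omit`, and `fail_under` enforces the 80% bar on what remains. The paths are invented for illustration; adjust them to your own layout.

```ini
# .coveragerc -- hypothetical project layout
[run]
# Measure only application code; skip plumbing, wiring, and integration glue
omit =
    */migrations/*
    */framework_glue/*
    */db_infrastructure/*

[report]
# Fail if the remaining (measured) code drops below 80% coverage
fail_under = 80
```

With a setup like this, "80% of 60%" stops being a slogan and becomes a build gate: the threshold applies only to the code you actually intend to unit test.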
Remember, there are two main reasons for writing tests: verifying that complex logic actually works, and providing working examples of how the code should be, or actually is, used. Coverage is not the goal of testing; it is just another metric in your Information Radiator for tracking the quality and accuracy of your system.
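To make those two purposes concrete, here is a minimal Python sketch around a hypothetical business rule (the function name and the discount threshold are invented for illustration):

```python
def order_total(unit_price, quantity):
    """Hypothetical business rule: 15% off for quantities of 10 or more."""
    total = unit_price * quantity
    if quantity >= 10:
        total *= 0.85
    return round(total, 2)

# 1. Verification: the discount logic works, especially at the boundary.
assert order_total(10.0, 9) == 90.0    # no discount below 10 items
assert order_total(10.0, 10) == 85.0   # discount kicks in at exactly 10

# 2. Documentation: the test doubles as a usage example for callers.
assert order_total(19.99, 3) == 59.97
```

The same assertions do double duty: they pin down the boundary behaviour of the rule, and they show a new reader exactly how the function is meant to be called.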