That coverage problem
Thanks mostly to the BBST classes that the AST runs, I’ve come to understand and appreciate the impossibility of complete testing and the challenges surrounding ‘complete test coverage’.
I find “100% test coverage” a pretty meaningless phrase. Imagine trying to calculate the amount of water in a swimming pool by multiplying the length and width of the pool. Ha! We’d snort and sniff at the stupidity of the idea. But when test managers, project managers and testers ask for “complete test coverage”, that is in essence what they are asking testers to do.
In fact, getting to grips with complete testing is even harder than the pool problem, because at least with a pool, once you know the width, length and depth, the volume can be calculated.
In testing, though, the number of models we use to design our tests is limited only by our imagination. Trying to measure test coverage is a bit like trying to cover a black hole with a tea towel and saying you’ve measured it.
Trying to test like this in a short space of time is really stressful, and it seems the agile solution is to automate regression tests. But really, is this the best solution we can come up with?
It seems to me this desire to cover as much functionality as possible ends up as a game of Whac-A-Mole, with testers frantically hitting features as fast as possible in order to be able to say it’s done.
Well, I’m trying a thought experiment at the company where I’m consulting. I’m challenging everyone to drop the idea of coverage and instead focus on meaningful, valuable testing.
I’m doing this because it seems to me we testers are far too hung up on this coverage problem, and we bias our testing to try and solve it. Yes, this may mean that we fail to test all the functionality – quelle horreur! But get this: when do we *ever* test all the functionality?
Dropping the desire to “test everything” is very liberating. We no longer have to fret about trying to test all the functionality in a week (we do weekly releases with multiple hotfix releases). Instead, it’s freed our minds up to reflect on what the most beneficial testing is to perform, given we only have a few days to test. It’s also freed up our toolsmith, allowing them to spend time solving testing problems through improved testability instead of frantically creating regression tests.
I’m fully expecting some bugs will get released to production, but you know what? That’s nothing new! We’re finding bugs in production all the time; they’re called customer support issues.
Time will tell whether this approach proves beneficial. It’s a little scary, dropping the regression test safety net, but when a safety net starts obstructing your work, it’s time to question its value.