Are we there yet?
Ever been on a long car journey packed with young kids? I remember those eight-hour drives, six kids crammed into a car on the way to our holiday destination. My younger brother, in particular, was annoying, constantly asking questions and irritating his sisters. By far the most irritating question (especially to our parents) was: Are we there yet?
This line of questioning would normally begin half an hour into the journey and would be repeated constantly for the rest of the trip.
This potent cocktail of repetitive questioning usually resulted in one of my parents (usually my Dad) exploding in frustration and yelling at us all to be quiet. This was typically followed by a threat of being dumped on the side of the road. (He actually carried out his threat once, dumping my brother, who, unfazed, promptly hid in a nearby field, resulting in the whole carload having to search for him.)
But it was a fair question for us kids. We had no real sense of time or distance to help gauge how far we had come or how far we had to go. We were also totally bored, with no iPods or gadgets to entertain us. Singing (very Von Trapp-like) took us only so far. Counting number plates helped a little. And remember, we were going on holidays; the mere thought conjured up more excitement than our poor little bodies could hold. The journey was always going to be arduous when faced with a destination that held so much promise.
As a parent myself, I have a little more sympathy for my parents. Of course, we have the luxury of allowing our kids to be immersed in some tacky iPod game, distracting them to the point that they forget they are going on holidays. But I get it. I get that it's really hard to explain to someone with little understanding of distance or time how long something is going to take.
When testers ask me how we know when we are done in Exploratory Testing, I am faced with a similar challenge. How do I help a tester understand when they are done? Pointing them to the excellent list of Stopping Heuristics on Michael Bolton's blog helps, but how do you apply them? Some are easier to apply than others. The "Time's up!" heuristic, for example, is pretty simple. But take something like the Flatline Heuristic, which tells us to stop when "No matter what we do, we're getting the same result." As Michael points out, there are hidden risks here: it may be that testing really is yielding no new information, or it may mean we haven't explored the application in enough depth.
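To make that contrast concrete, here is a minimal sketch of how differently the two heuristics behave. It is purely illustrative and not part of Michael Bolton's material: the function names, the timebox, and the idea of counting new findings per session are all assumptions of mine. A timebox check can be mechanical; a flatline check can only ever prompt a question.

```python
# Illustrative sketch only: invented names and thresholds, not a real stopping rule.
from datetime import datetime, timedelta


def times_up(session_start: datetime, timebox: timedelta) -> bool:
    """'Time's up!' is easy to apply: the timebox either has or hasn't expired."""
    return datetime.now() - session_start >= timebox


def looks_flatlined(new_findings_per_session: list[int], window: int = 3) -> bool:
    """A naive 'flatline' signal: no new findings in the last few sessions.

    This can only prompt a conversation, not make the decision. A flat count
    may mean the product is stable, or it may mean our test ideas have gone
    stale and we haven't explored deeply enough.
    """
    recent = new_findings_per_session[-window:]
    return len(recent) == window and all(count == 0 for count in recent)


if __name__ == "__main__":
    start = datetime.now() - timedelta(minutes=95)
    print(times_up(start, timebox=timedelta(minutes=90)))  # True: an easy call
    print(looks_flatlined([5, 3, 0, 0, 0]))  # True: but stable product, or shallow testing?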
In this situation, how do we know what to do?
Like many things in testing, there is no clear-cut answer. A considered answer requires an understanding of the context around the testing and the implications of the decision being made. Hence the word heuristic.
I’ve found a conversation with stakeholders around stopping heuristics *before* testing starts to be a useful exercise. Knowing that you have a time limit on your testing goes a long way towards preventing tester angst about when to stop. Include in that conversation a discussion of what ‘done’ means, including the impossibility of complete testing. Stakeholders who understand that bugs may be missed can influence which stopping heuristics you use.
Like kids in the back seat, as we test we need to repeatedly ask ourselves, “Are we done yet?” It may be useful to use our emotions to trigger this question. For example, if I’m bored, does that mean I’m done, or does it mean I need to change something in my testing? If I’m anxious, does that mean I’m about to hit some constraint, such as time? If I’m angry, does that mean my information is being ignored, and perhaps I need to address that instead of raising a tsunami of bugs? If I’m confused, does that mean I need to explore more? Emotions like these can be useful indicators of when to ask if you’re done. Again, Michael Bolton has done a lot of work in this area.
I’ve also found that in times of deep uncertainty it’s always a good idea to play the “phone a friend” card and ask someone more experienced than you for their opinion. There’s no shame in this; these are tough questions to answer. An outsider may have insight or additional knowledge that you’ve overlooked.
Building relationships and credibility with those around you will also go a long way towards helping you when that significant bug is missed. And as with most of testing, articulating your process and your decision-making helps to demonstrate diligence and considered testing.