
Crossing the bridge over no-man’s land


Indiana Jones in the Temple of Doom has a “moment of indecision” when, halfway across a bridge, he realises he’s been cornered by Mola Ram (the baddy) and his henchmen.

He looks back and there’s a horde of lusty savages baying for his blood, so no luck there. He looks ahead only to find an equal challenge. What is poor Indy to do?

Anyone who has introduced change into a corporate environment will empathize with his situation. You know the decision to break away from the past is the right one, you run eagerly and embrace change, only to find half way through the journey, your path to success becomes blocked.

I’ve been working with my test team for a while now moving from a process driven approach to a more of an Exploratory one.

It’s not been without its challenges. Some concepts I’ve introduced have been welcomed warmly, but the reception to others has been a little icy. In particular, I’ve tried to move the team away from a test case management system. This was met with real concern and quite a bit of resistance.

This troubled me as while I understood their concerns, I knew the system was limiting the generation of new testing ideas.

But how could I overcome this resistance? And really was it worth it? Perhaps the changes I had already made would be enough? The company was already more than impressed with the changes I had made so far.

I felt like Indy at the foot of the rope bridge: how the hell was I going to solve this one?

So I stood at my crossroads and dithered. Oh God, did I dither. I ummed and ahhed and pondered what to do. But worse, I knew my indecision was making the situation worse, and that the more I dithered, the harder it would be to rid ourselves of the dust bag of tired and well-worn ideas.

Indy at this point, decides his only move is to cut the bridge leaving everyone to hang on for their lives.

Fortunately, unlike Indy, I had a reliable and trustworthy sidekick. Together, we set up a task force within the team to attack the problem. After some discussion we decided our approach needed four cornerstones. They were:

1) Creativity.

However we tested, our approach needed to enable us to foster and encourage creativity. With creativity come new ideas, new models and new tests, and so new bugs are discovered.

We’re covering this one with a number of approaches. One is to improve tester skill through practice and coaching. I’ve also created a folder of ideas for people to draw upon to help trigger new ideas.

2) Visibility

We wanted to be able to provide reporting on any testing we do. The reporting has to be simple yet with sufficient detail to ensure that our stakeholders understand what we have tested and why.

We have our trusty whiteboard which mostly hits the spot. We need to be able to pull up our actual testing including results in an easy to manage way. We’re looking into BBExplorer to handle that.

We will also track any essential test results on the wiki, in the form of a test report at the end of each iteration.
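As a concrete sketch of what rolling session notes up into an end-of-iteration report might look like, here is a small Python example. The note format (AREA/CHARTER/BUG lines) and all the names are illustrative assumptions, not the team’s actual template or tooling:

```python
# Hypothetical roll-up of exploratory session notes into a short
# end-of-iteration summary, grouped by product area.
from collections import defaultdict

def summarise_sessions(notes):
    """Group session charters and bugs by the AREA named in each note."""
    report = defaultdict(lambda: {"charters": [], "bugs": []})
    for note in notes:
        area, charters, bugs = "unspecified", [], []
        for line in note.splitlines():
            key, _, value = line.partition(":")
            key, value = key.strip().upper(), value.strip()
            if key == "AREA":
                area = value
            elif key == "CHARTER":
                charters.append(value)
            elif key == "BUG":
                bugs.append(value)
        report[area]["charters"].extend(charters)
        report[area]["bugs"].extend(bugs)
    return dict(report)

# Illustrative session notes, not real ones.
notes = [
    "AREA: login\nCHARTER: explore the password reset flow\n"
    "BUG: reset link expires too early",
    "AREA: login\nCHARTER: probe the account lockout rules",
]
summary = summarise_sessions(notes)
```

The point is only that a report like this answers “what did we test and why” per area, which is the level of detail the stakeholders above are assumed to want.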

3) Coverage

We wanted to have some way of ensuring that key functionality/key features are always tested.

We most likely will rely on our test case management system for this, but we’re cleaning out all the dead wood and making the tests lighter and less authoritative.

4) Knowledge

We wanted to create a knowledge base. Our system is complex and it requires in-depth knowledge to test some areas. We want to store that information and knowledge. We also have a serious amount of test data we want everyone to be able to access, modify and improve.

We’ll use our internal wiki for this.

What I really like about what’s happened here is that the team came up with a solution to solve the problem. It’s a team decision which has got to mean easier implementation.

I think a couple of really powerful things have come out of this. I’m listing them here:

1) Change can be scary. Not changing is worse. Get on with it.

2) Use people around you to help bring about change.

3) Never lose sight of your goal. This reminds me of Scott Barber’s email signature: “If you can see it in your mind… you will find it in your life.”

I feel good. I hope my team does too. We faced a challenge. We examined it, questioned it and overcame it and we’ve all come out sharper, enlightened and positive about the changes ahead.

Now that’s what Exploratory Testing is all about.


6 replies on “Crossing the bridge over no-man’s land”

Thanks for sharing this Anne-Marie. It’s great to read about good hints and tips, but even better to read it as a case study.
It would be great to work in the type of environment that you obviously create for your staff.

One day hopefully…

Hi David,
Thanks for your comments. In my experience change challenges us all in different ways. Some find it hard to see a better future (the pragmatist), some find it hard to implement the change (the dreamer). What I’ve learnt from this is that by working together we can make use of our different skills and create something special.

I read some of your blog, and I like your blend of CDT & communication. I’m sure with those goals one day you will achieve what you set out to do.

Hi Anne-Marie,very interesting summary about your work with your team.

I’m in the process of moving our team towards the ET route as well and found that change is scary. Some welcome it, some are more wary. I really like the idea of your four cornerstones.

With those some things are the same, others slightly different, probably enough for a new blog.

I’d be interested to know how you deal with the coverage question.

I’m not sure what you mean by the coverage question. It’s a vast area; ping me on Skype and we can go through it. In brief though, it’s impossible to test everything, ergo complete coverage is a myth – don’t spread the myth.

You mention your test case management system. I haven’t found a balance between classic test scripts and check lists that also satisfy the visibility part.

Anne-Marie: visibility of what? the results, of the testing done?

ET sessions are easier; we use templates that show what was covered. It gets interesting when they’re NOT used.

Anne-Marie: ah, methinks this is the crux of the problem no?

We don’t use the test case management system to ensure we provide coverage. We have some tests that are just too darn important to be left out. We want to make sure that these areas are tested (yes tested, not validated) each time. The closest I can come to describing how we use the TCM is to call it a store of really important ideas. We use it to store ideas for tests. So I guess we should really name it the “think tank” or something.

The parts that we always test I now approach through automation to free up testers from the repetitive tasks. We just started with that and so far it looks very promising.
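A minimal sketch of what automating those always-run checks might look like, so they execute on every build. The `checkout_total` function is a stand-in for whatever key behaviour must be verified each time; every name here is illustrative, not the commenter’s actual system:

```python
# Toy stand-in for the system under test: total a basket of prices
# with an optional fractional discount.
def checkout_total(prices, discount=0.0):
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)

def run_smoke_checks():
    """The repetitive checks a tester no longer has to repeat by hand
    on each build; a CI job could run these automatically."""
    assert checkout_total([]) == 0.0, "empty basket must total zero"
    assert checkout_total([10.0, 5.5]) == 15.5, "plain sum of prices"
    assert checkout_total([100.0], discount=0.1) == 90.0, "discount applied"
    return "smoke checks passed"
```

Automating the stable, must-test-every-time behaviours like this leaves the slightly-different-each-build work, mentioned next, to human testers.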
But there are tasks that are always slightly different with each build that we need to cover and, equally important, make visible to show what we covered.

Anne-Marie: Why do you need to show that you covered them? What’s the purpose of the ‘showing’?

So two questions, how do you use your test case management system; to what level do you document your test cases/checklists/etc?

Anne-Marie: The philosophy is as follows: the TCM is there to store key ideas for tests that are too important and/or too complex to be left to our memory. They’re not tests, they’re ideas for possible tests, a springboard into a pool of possible tests.

Remember the TCM is only one possible springboard; we have plenty of other sources of information to work with. The TCM, to be honest, is probably the 1 metre board: handy to learn on.

And how do you make visible the effort that isn’t specifically covered as part of the slimmed down test cases but isn’t yet ET?

What I have in the TCM are simply ideas, springboards for Exploratory Testing. I don’t distinguish between the two. We’re not “passing” or “failing” them, they are merely a source of ideas.

I’m dubious about the idea of ‘visibility’. I mean, just because you take a snapshot of a screen, why does that in any way indicate a test has passed or has been executed? There’s an assumption that producing a list of snapshots at the end of a testing iteration proves that testing has been completed. I think it’s a bizarre concept.

It goes back to basics. What do your stakeholders really want to see, and why? If I were a product manager, a snapshot or a “pass” in a TCM would do little to reassure me that testing has been performed. What I would want is a one or two page report on what was tested, how it was tested and why those tests were performed. Now that would be of interest to me. I’d probably create such a report based on some model they were interested in, either by feature or by amount of time.


Anne-Marie: No problem Thomas.

Anne-Marie, I learned something new, there I was thinking what I wrote was clear and concise and on reading it again after your comments it’s actually not clear at all. So I can improve a bit…
I’m aware that we can’t cover everything and don’t attempt to do so. What I meant is that I’d like to make it visible to other people in the business what it is that we tested and highlight a few important (to me) things that we have not covered.

Your experience shows through when you nailed it by asking “What’s the purpose of the ‘showing’?”

I could say that it’s the same reason you’d create a one or two page report “on what was tested, how it was tested and why I performed those tests”

I’m discovering more questions than answers though, because it depends on the IT and testing knowledge level of the person reading the report to make a decision on it, i.e. do we ship now or not.

I believe that more often than not people want to hear that testing is “finished” and are not interested in the details which is a scary thought. So all the reporting is more out of professional pride and covering our backs, even if we do it “right”, i.e. don’t use meaningless metrics.

Can’t say anything further but we may continue this in private.

Feel free to release this section or just have it for your info.


