How I use modelling in software testing

My testing can look like “fly by the seat of my pants” testing at times. There’s nothing I like more than to grab a new product and jump straight into testing. No namby-pamby reading of requirements for me! No, I want to get straight to the source of truth. I want to find out what the product *actually* does. (This approach may seem rash, but it’s not. It’s a considered decision; read on.)
But this type of testing only takes me so far. I reach a point where my testing starts to feel limited: the information I’m getting offers nothing new, and my learning about the product takes a nosedive.

I take this as a sign to stop and take a step back. I’ve obtained as much information as I can from playing around, but now it’s time to get my hands really dirty. It’s time to start studying and researching the product. I start learning more about the product’s structure: the database, the architecture, the interfaces. I explore the intent of the product and find out who its users are. I start talking to product owners and developers to gain information about the product.

I then go back and test more, but this time my testing takes a different turn. With new information and ‘new eyes’, I’m looking at the product in a different way. I start learning new things again, and the curve of learning and finding new information climbs once more.

All this time I’ve been modelling and testing. In software testing, I model a product to understand it. This might be a little different to the way architects model a building: they model to demonstrate to others what the final product will look like, though I can imagine that creating a physical model helps to clarify their thinking too.

Modelling isn’t always explicit. We all have a mental model – a representation of the world in our heads – and testing makes heavy use of it. Sometimes I find it helpful to make my models explicit, though. I do this to help me reason through the information: ordering information by drawing it or writing it down helps me recognise gaps in my thinking.
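To make that concrete, here’s a minimal sketch in Python of what writing a model down can look like. The login feature, its states and its transitions are all invented for illustration; the point is that once a model is explicit, even a few lines of code can surface gaps, such as states with no way out.

```python
# A hypothetical, explicit model of a login feature's states.
# Every state and transition here is invented for illustration.
model = {
    "logged_out":      {"submit_valid": "logged_in", "submit_invalid": "error_shown"},
    "error_shown":     {"submit_valid": "logged_in", "dismiss": "logged_out"},
    "logged_in":       {"logout": "logged_out", "timeout": "session_expired"},
    "session_expired": {},  # no transitions defined yet -- is that right?
}

# Writing the model down lets simple checks expose gaps in my thinking.
states = set(model)
reachable = {target for transitions in model.values() for target in transitions.values()}

print("Dead ends (states with no way out):", [s for s, t in model.items() if not t])
print("Unreachable states:", states - reachable)
print("Transitions into undefined states:", reachable - states)
```

Running this flags `session_expired` as a dead end, which immediately raises a question I hadn’t asked: how does a user recover from an expired session? That’s exactly the kind of gap a squiggly whiteboard diagram surfaces, too.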

When I jump in and test, I’m actually creating a model of what the product does. I prefer to model the product first, *before* reading requirements and other documentation, so I have a good understanding of what the product really does. That way my understanding is unfettered by the biases and assumptions I might pick up from speaking to people or reading requirements. Having a solid model of what the product does grounds my testing in the reality of what is. As a tester, I want to bring a different perspective to the table.

Once I’ve modelled what the product does, it’s time to find out more. I model how people perceive the product to be. I read the requirements and any other documentation I can find. I talk to people and create squiggly, messy diagrams on whiteboards that normally only I can read (and sometimes even I struggle to understand!).

All the time I’m modelling to understand the product better.

I’m still testing, though. I don’t perform modelling in isolation from other cognitive activities. In fact, I test and model, model and test. This might appear counter-intuitive. After all, how can you test without knowing what people want? How will you know if there’s a problem without requirements?

That goes back to oracles (you know, those things that help you recognise problems). When I test, I purposefully use a diversity of oracles to help me recognise different problems. When I use the “Plunge In & Quit”* heuristic, I am testing. My oracles of choice are Previous Testing Experience, Knowledge of Similar Products and World Experience. You don’t have to have explicit requirements to recognise problems.

So I model and test, test and model. For example, as I’m creating models, I’m testing them. Think of a whiteboard scenario where you are formulating models with a developer: as the model is being created, it’s being tested. That’s how gaps get recognised. And when I’m testing, I’m challenging my models to see if there’s a problem.
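This, too, can be made concrete. Continuing the invented login example from earlier, here’s a toy sketch of challenging a model against the product, transition by transition. `LoginPage` is a hypothetical stand-in for code (or a person) driving the real product; I’ve deliberately made its behaviour imperfect so the check has something to find.

```python
# Challenge the explicit model against the product, one transition at a time.
model = {
    "logged_out":  {"submit_valid": "logged_in", "submit_invalid": "error_shown"},
    "error_shown": {"dismiss": "logged_out"},
    "logged_in":   {"logout": "logged_out"},
}

class LoginPage:
    """Hypothetical stand-in for driving the real product; deliberately imperfect."""
    behaviour = {
        ("logged_out", "submit_valid"):   "logged_in",
        ("logged_out", "submit_invalid"): "logged_out",   # no error message shown!
        ("error_shown", "dismiss"):       "logged_out",
        ("logged_in", "logout"):          "logged_out",
    }

    def do(self, state, action):
        return self.behaviour.get((state, action), state)

page = LoginPage()
for state, transitions in model.items():
    for action, expected in transitions.items():
        actual = page.do(state, action)
        if actual != expected:
            # Either the model is wrong or the product is -- both are worth knowing.
            print(f"From {state!r}, {action!r} gave {actual!r}; the model said {expected!r}")
```

A mismatch here doesn’t automatically mean a bug: sometimes the product is right and the model needs updating. Either way I’ve learned something, which is the point of modelling and testing together.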

I’ve consolidated my thinking on models into this short video.

Here’s what I’ve learned so far:

1) Modelling is integral to testing, whether it’s performed consciously or unconsciously.

2) Models can be mental, formal or physical.

3) Creating formal models can help you reason through a product.

4) Modelling a product by first “playing around” can help bias your testing in a good way.

5) Modelling and testing take place simultaneously.

6) Different models are a source of new questions to ask the product.

I’m going to wrap up with George Box’s advice:

“All models are wrong, but some are useful.”

Creating this model of models has proved useful to me, but it's not complete. What are your ideas on modelling and software testing?

*The “Plunge In & Quit” heuristic was identified and named by James Bach. It’s an approach many experienced testers use to quickly learn about a product.