The Experiment Result Canvas

Don’t get tricked by your own experiments

How startups and innovators can make sense of their experiment results in a workshop

Erik van der Pluijm
2019-06-17 | 3 min read

It’s hard!

In theory, running experiments to test your ideas makes total sense. In practice, however, it can be quite difficult to make sense of the results. Especially in the early stages, when you are running qualitative experiments, it is hard. First of all, you need a way to collect the information you gathered and share it with your team. And, even more important, how are you going to draw conclusions from the results?

One method that is used a lot to make sense of experiment results in a workshop setting is dot voting (see "Dot voting" on Wikipedia). To see how this works, or rather, how it totally doesn't work, here is an example.

(Note: dot voting is arguably a flawed method in itself, but that's not what this article is about.)

Example

A startup wants to develop a new app that will help self-employed people make sense of their finances. They have only just started out, and are in the early stages of their journey.

Although initial responses (from their own crowd of friends and acquaintances) are positive, they have barely begun formulating the problem they want to solve for their customers.

They are at a crucial stage for their startup: they are hard at work trying to move from a mindset where they just want to build the thing that is in their heads, to building something real customers actually want to pay for.

According to lean startup thinking, the best way to make this mental leap is to confront the ideas living in your head with real people out in the real world. Following this advice, the team goes out on an exercise where each team member interviews 10 self-employed entrepreneurs. They ask them how they deal with their finances, and how they keep tabs on things like cashflow and invoicing. When the team members come back, together they have gathered around 50 interview results.

To save time, they decide to print the results, stick everything on a wall, and then go over it as a team. They plan to look at all the results and then use dot voting to mark what they find interesting.

On the surface, this looks like a valid approach. Reviews like this can work great in a workshop setting, where the information is on the wall, clear for everyone to see. The dot voting process is fast and gives you a clear result.

It’s broken!

However, when I have used this and similar methods in the past, I noticed that they really don't work.

People tend to focus on the interviews they have conducted themselves. They notice the things that they already agree with, or have noticed before. This leads to a strong confirmation bias.

With this method, you’ll most likely reinforce any bias that was there to begin with. Clearly, what’s needed is another approach. But there are some constraints.

Initial interviews are necessarily exploratory in nature. At this point, the team can’t really know in detail what they are looking for. There won’t always be a clear script to follow, and the answers are varied.

You simply can't take the approach you would use for a large-scale survey and apply statistics: the low number of responses and the unstructured nature of the results make that impossible. That makes it very difficult to reach a clear 'validated or invalidated' decision.

From a statistical point of view, your results are a complete waste of time. But at the same time, they are a treasure trove of information about your customers.
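To put a rough number on that: even if you could boil every answer down to a clean yes or no (with open interviews, you usually can't), 50 respondents would give you a 95% margin of error of about 1.96 × √(0.5 × 0.5 / 50) ≈ 14 percentage points, and the 10 interviews a single team member did would give roughly ±31 points. Any 'validated or invalidated' threshold drowns in that kind of noise.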

How to fix it

So, what can be done? How can we explore qualitative interview results in a workshop setting in a way that is meaningful and as objective as possible? And how can we extract as much useful information from the interviews as we can?

Homework

To do that, you need to go over the results in detail, and that takes more time than you generally have in a workshop. So, first of all: homework.

Doing time-consuming and sometimes expensive interviews and then failing to dive into the results is a total waste. If team members are prepared to interview 10 people each, they should also be prepared to read all the results before going into the workshop. Reading only your own results opens the door to extra confirmation bias.

Framework

Second, a framework is needed to organize the results. The Experiment Result Canvas below was created as one (highly effective) way of doing that.

The Experiment Result Canvas can be downloaded for free from WRKSHP.tools

Steps

The feedback you received from the interviews is split into four big categories (from top to bottom):

- Quotes and Stories

- Perceived problem, perceived needs, and behaviour

- Your observations and conclusions

- Next steps

1. Quotes and Stories

The first category is filled with raw quotes and stories selected from the interviews.

2. Perceived problem, perceived needs, and behaviour

The second category splits the results into information about the respondent's perceived problem (how they experience the problem you want to solve), their perceived needs (what they tell you they think they need), and their actual behaviour (what they have already done in the past to deal with the problem).

This distinction is important, because it is so easy to pick up only on the answers you'd like to hear. It's so easy to hear that they like your solution, or that they really need it. But that information is close to worthless. (They're probably lying, or just being polite.) Solutions and opinions volunteered by respondents, telling you how they might solve the problem in the future, are also close to worthless. People don't know what they will or won't do in the future, and they have a very difficult time predicting their own feelings.

The thing to look for is behaviour. Have they experienced the problem in the past, and did it bother them enough that they actually found, or tried to find, a solution for it? That is what you need to hear from early interviews. It's much harder to 'be polite' about actual behaviour. It's the actions that count, not the words and opinions.

Sticking this information in separate boxes means it is all there, but organized in an 'evidence pecking order', with the behaviour box as the most important one. If a lot of people say the same things or hold the same opinions, you might want to run a separate experiment based on that and see whether their actions reflect those opinions.

3. Your observations and conclusions

The third category can be filled with the interviewer's notes. What observations did the interviewer make? What conclusions did they draw?

It is important to keep this information separate so that it won’t get mixed with the results coming from the respondents.

It's great to collect these observations, and they may help you a lot, but they are your observations. They are something you added, and therefore based on what you already knew before you conducted the interview. They reflect your view of the world and your biases more than anything else.

4. Next steps

Finally, there is space for next steps. What follow-up questions would you like to ask? What other things would you like to know?
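To make the structure concrete, here is a minimal sketch in Python of how you could model the canvas digitally: the categories become labelled boxes, and collected snippets are sorted by the evidence pecking order described above. All names, weights, and sample data here are hypothetical illustrations, not part of the canvas itself.

```python
from dataclasses import dataclass, field

# Hypothetical weights reflecting the 'evidence pecking order':
# actual behaviour outranks what people say about problems and needs,
# and the interviewer's own observations are kept separate at the bottom.
EVIDENCE_WEIGHT = {
    "behaviour": 3,
    "perceived_problem": 2,
    "perceived_need": 2,
    "quote_or_story": 1,
    "observation": 0,
}

@dataclass
class Snippet:
    respondent: str
    category: str  # one of the keys in EVIDENCE_WEIGHT
    text: str

@dataclass
class ExperimentResultCanvas:
    snippets: list[Snippet] = field(default_factory=list)
    next_steps: list[str] = field(default_factory=list)

    def box(self, category: str) -> list[Snippet]:
        """Everything filed in a single box of the canvas."""
        return [s for s in self.snippets if s.category == category]

    def ranked(self) -> list[Snippet]:
        """All snippets, strongest evidence first."""
        return sorted(self.snippets,
                      key=lambda s: EVIDENCE_WEIGHT[s.category],
                      reverse=True)

# Hypothetical usage with made-up interview data:
canvas = ExperimentResultCanvas()
canvas.snippets += [
    Snippet("R1", "behaviour", "Built a spreadsheet to track invoices."),
    Snippet("R1", "perceived_need", "Says she needs an app for cashflow."),
    Snippet("R2", "quote_or_story", "'I only look at my finances at tax time.'"),
    Snippet("R2", "observation", "Interviewer: seemed stressed about admin."),
]
canvas.next_steps.append("Ask R1 how often she updates that spreadsheet.")

for s in canvas.ranked():
    print(f"[{s.category:>17}] {s.respondent}: {s.text}")
```

Sorting this way floats reported behaviour to the top of the review, which is exactly where the pecking order says your attention should go; opinions and quotes still show up, just below the actions.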

-- Keep experimenting!