- Qualitative / Exploration Experiments
- Quantitative / Validation Experiments
When you’re preparing to validate assumptions and looking for an experiment that will give you the results you need, it can be difficult to make a good choice.
Should you create a landing page? Conduct interviews? Run a detailed user test? Do you want quantitative data to inform your next decision, or are you looking for rich feedback?
In this post, I present the free Experiment Cookbook Cheat Sheet, which helps you make that choice based on your startup stage, your riskiest assumption, and what you are trying to validate.
(The Experiment Cookbook is an online repository of 25 experiment recipes with detailed step-by-step guides, in the form of a 12-module online course.)
In essence, there are two groups of validation experiments to choose from, and the distinction comes down to what you are trying to validate. Are you trying to validate a ‘known unknown’, or an ‘unknown unknown’?
There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.
- Donald Rumsfeld
Especially at the beginning of the innovation journey, when you’re still exploring how your customers respond to the problem and have a lot of assumptions but very few facts, you should be looking for ‘unknown unknowns’.
In that stage, assuming you already know what will make your product successful is very dangerous: you’d be sailing blind on your assumptions. And although you may acknowledge that you need more facts about, for example, your customers (a known unknown), you should also acknowledge that there may be aspects of the problem that aren’t even on your radar yet.
That calls for explorative experiments: experiments designed to bring new and unexpected facts to the table.
Such experiments are often more difficult to define precisely, simply because you can’t always predict what kind of information you’ll find. You’re working with rich, ambiguous data, rather than clean numbers. You need to think like a designer more than like a scientist.
These are usually qualitative experiments. They deal with language, stories, and open questions, and the outcomes are often not (easily) measurable and are more difficult to interpret. You still benefit from a well-defined experiment setup, but you’re not so much looking to validate or invalidate your assumptions; rather, you’re trying to uncover assumptions and biases.
Qualitative experiments can help you find out more about the assumptions you’re trying to validate. By asking open questions, being curious, and exploring, you can learn more about the world around you. This will lead you to new, hidden assumptions. Often, the findings of an explorative, qualitative experiment help define a subsequent quantitative experiment.
Quantitative experiments give you a clear numerical outcome based on an objective measurement. Think counting the number of people buying a product, clicking a button, or entering a store. Most of the more ‘scientifically’ based methodology to run experiments, with a clear, falsifiable hypothesis, is geared toward a quantitative approach. To run these experiments, think more like a scientist.
For quantitative experiments, it’s relatively easy to define how and what to measure, and it can be relatively easy to get a clear outcome. The catch is that it can sometimes be difficult to translate your assumption into a simple countable metric. To be able to define a quantitative experiment, you need to know very precisely what you need to know.
You’ll need to be very precise and strict about how you design, run, and interpret your experiment to get good quantitative results.
Quantitative experiments deal with the ‘known unknowns’. They can’t really tell you much about the ‘unknown unknowns’.
The Experiment Cookbook Cheat Sheet shows all of the experiment recipes in the Experiment Cookbook organised by innovation stage, Business Model Canvas building block, and Pirate Metrics stage (see below). It also gives typical Riskiest Assumptions for each stage and defines what you should be validating (problems, solutions, features, growth, and pricing).
Each recipe is colour coded as qualitative (red), quantitative (blue), or both (yellow).
During the problem-market fit or idea validation stage, most research will be qualitative. The nature of experiments here is more explorative, because you have less information to go on. You need to cast a wide net and get rich information, and the best way to do that is to talk to people in person.
If you are familiar with the Business Model Canvas, you’ll be validating your Customer Segment first in this stage.
During this stage, you’ll also start to use more quantitative methods. You’ll be able to use data and run your experiments with more people to test your assumptions.
In terms of your Business Model Canvas, you are now looking at the Value Proposition for your Customer Segment.
Although you’ll still be running some qualitative experiments (think of User Tests for instance) to get rich information, running quantitative experiments is key in this stage. Data is the most important thing. As the number of (potential) customers grows, you’ll have more opportunities to present them with experiments and gather data.
For your Business Model Canvas, you’ll validate the Channels, Value Proposition (in more detail), Customer Relationships, and Revenue Streams in this stage.
When you are working towards Product-Market Fit, it can be very useful to use Pirate Metrics as a framework besides the Business Model Canvas. (See: the Pirate Metrics Canvas) In the Pirate Metrics framework, you are validating Acquisition, Activation, Retention, Referral, and Revenue.
Sign up for free now and receive a weekly email with one new innovation tool straight to your inbox. 🚀