Have you ever heard someone say, “Don’t just throw spaghetti on the wall and see what sticks”?
Well, obviously that’s not a good strategy to understand priorities and inform a future course of action. It’s also messy and a little disgusting…
A much better approach is to understand problems and drill down to root causes, identifying cause-and-effect relationships, and then formulating a set of hypotheses about how to influence those root causes. But let’s start from the beginning…
The spaghetti approach
Here is how I know when a PM interview doesn’t go well:
Me: “Interesting problem. How did you find out how to solve it?”
Aspiring PM: “I did A/B testing and looked at the results.”
Me: “Sounds cool, how did you know what to test?”
Aspiring PM: “Well, we tried out a bunch of things, and then picked the one that showed the best results.”
That’s not experimentation, at least not in a scientific sense; that’s classic throwing spaghetti on the wall and seeing what sticks. It’s expensive. With this method, you will find the right solution only by brute force or sheer luck. More often than not, the true solution and needle mover will remain elusive.
If you want to find the best global solution through experimentation, you need a plan first!
Drop any preconceived notions of ‘the right solutions’. In fact, burn your list. Instead, start by identifying the root causes and focus your experiments on understanding what drives them.
The scientific method
Experimentation is like throwing pebbles. If you have a plan for where to throw them, you will likely hit your targets within a few throws. If you don’t, you will need a LOT of pebbles to hit anything worthwhile.
Here’s how you develop a plan before you start throwing your precious stack of pebbles:
Step 1: Root causes – What is the problem?
Start with identifying the problem. Then ask yourself what causes that problem. List all the drivers that you can identify from the data and observations that you have available.
Check for causation. Are those drivers really causing the root problem, or are they just correlated with it? Drill all the way down until, based on the data you have available, you can no longer draw clear cause-effect relationships.
Step 2: Hypothesis – Enter the unknown!
Up to this point, every cause-effect relationship was directly supported by existing data and observations. Beyond it, they no longer are, and you need to find ways to fill your data and knowledge gaps. This is where you start making a plan for throwing your pebbles.
Start developing hypotheses for the cause-effect relationships for which you don’t have clear data. Check whether there are any drivers you might have missed. Where do you have hunches (informed guesses) but no data?
Step 3: Experimentation – Closing the data gaps.
You have several brilliant but untested hypotheses. Now it’s time to put them to the test: design experiments that can validate your hypotheses and provide you with the missing data.
Be clear about what data, specifically, you need from an experiment to validate your root-cause hypothesis. You can get a lot of data out of experiments, but not all of it will relate to the specific needle you want to move.
Think creatively and broadly as you get into designing your experiment. Not every experiment needs to be a big engineering project.
There are many ways to get data. Experiments can be product implementations, but they can also be simple manual tests with small groups of users, or user research studies. Of course, the closer your experiment is to a large-scale production roll-out, the more precise your data will be. However, you don’t always need that precision for the initial validation of an idea that will inform the next steps in a project.
The faster you can get results, the better. Sometimes you need to build something out in scale to get the right data; more often, you don’t. There are no bonus points for expensive and slow tests.
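To make the “be clear what data you need” point concrete, here is a minimal sketch (all conversion numbers invented) of evaluating an A/B experiment against a hypothesis you stated up front. A simple two-proportion z-test is one common way to decide whether an observed lift clears a significance bar:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: control (A) vs. variant (B); numbers invented.
z = two_proportion_z(conv_a=120, n_a=2400, conv_b=165, n_b=2400)

# |z| > 1.96 roughly corresponds to p < 0.05 (two-sided).
validated = abs(z) > 1.96
print(f"z = {z:.2f}, hypothesis validated: {validated}")
```

The point is not the statistics machinery; it’s that the success criterion (which metric, which threshold) was fixed before the pebble was thrown, so the result either validates the hypothesis or it doesn’t.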
Step 4: Refinement – What have you learned?
Look at the data. See what hypotheses are validated and which ones are not.
Don’t stop at that simple checklist, though. Reflect on how your cause-effect framework might have changed with the new data and insights. Does the experiment’s data point to root causes you were previously unaware of?
Finally, ask yourself whether you have answered enough of your root-cause questions to build your MVP, or whether you need more experimentation and data to make sure you’re heading in the right direction.
- Experimentation is great!
- More specifically, targeted experimentation is invaluable to get missing data and understand your space.
- Just trying out stuff, on the other hand, is wasteful and will likely increase confusion instead of reducing it!
Did you like this article? Want to read more?
I will keep posting articles here, and I have them lined up way into summer 2020. However, if you want it all in one comprehensive, structured, and grammar-checked (!) package, check out our new book:
Put On Your Own Oxygen Mask First
A practical guide to living healthier, happier and more successful in 52 weekly steps
By Alfons and Ulrike Staerk
If you like what you’re reading, please consider leaving a review on Amazon. If you don’t, please tell us what we can do better next time. As self-published authors, we don’t have the marketing power of big publishing houses; we rely on word-of-mouth endorsements through reader reviews.