How to be Data-Driven: 5 Things Your Startup Should Start Doing
I originally wrote this post in 2017, on a different blog. I received a load of positive feedback, so I thought it would be worth republishing here.
The idea of ‘data-driven’ product development is one that crops up often in tech talks, books and blogs. We’re told to look at evidence, to validate our assumptions and to avoid relying on our gut instincts. But for all the talk, very few startups seem to be doing this.
This article aims to be a clear and concise overview of the sorts of things data-driven companies do. (For the purposes of this post, the term ‘data’ includes both qualitative and quantitative data.)
There are five types of activity you should be doing:
1. Validation
Carry out analyses & experiments to assess whether the features you plan to build will have the impact you expect.
Build fewer ineffective features.
Developer time is one of your most precious resources. One way to use it more productively is to carry out preliminary work to assess whether you’re planning to invest in the right features.
The types of things you’ll want to validate include:
- Whether users actually experience the problem;
- Whether users find your proposition compelling;
- Whether your solution solves the problem satisfactorily.
Techniques that can help include:
- Data mining: Analyse your existing data to find evidence that indicates whether users need your proposed feature.
- 404 testing: Build a link to a non-existent feature and measure how many people actually click on it. (Case study: SongKick)
- Bare bones prototype: Build a feature-poor prototype of your functionality to test demand. Where appropriate, leverage 3rd-party services instead of coding it. (Case study: Green socks)
- Concierge prototype: Emulate your proposed feature set, by delivering a completely manual, hands-on service. This allows you to gauge demand (and learn about your market) without having to build anything. (Case study: Food on the Table)
- Wizard of Oz tests: Similar to the concierge approach — but disguised as a technical product. For example, build a website for your users to interact with, but manually handle all the functions in the background (to make your product appear automated). (Case study: Zappos)
- Presales: Engage with target users and see if any will pre-order your product before you even build it.
- Scored problem interview: Analyse interview responses to produce a quantitative assessment of how painful a problem is. Questions you can assess with this technique include: Did your interviewees list this issue as one of their top problems? Are they actively trying to solve this problem, or have they done so in the past? Did they offer to pay you immediately for your solution? (More details here)
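As a rough sketch of the scored-interview idea, each yes/no signal from an interview can contribute a weight to an overall “pain score”. The criteria names and weights below are hypothetical, not a standard scoring scheme:

```python
# Hypothetical scoring of problem interviews: each confirmed signal
# contributes a weight to an overall "pain score" for the problem.
WEIGHTS = {
    "listed_as_top_problem": 3,  # named unprompted as a top problem?
    "actively_solving": 2,       # trying (or tried) to solve it already?
    "offered_to_pay": 5,         # offered to pay immediately?
}

def pain_score(answers: dict) -> int:
    """Sum the weights of every signal the interviewee confirmed."""
    return sum(w for key, w in WEIGHTS.items() if answers.get(key))

interviews = [
    {"listed_as_top_problem": True, "actively_solving": True, "offered_to_pay": False},
    {"listed_as_top_problem": False, "actively_solving": False, "offered_to_pay": False},
]
avg = sum(pain_score(a) for a in interviews) / len(interviews)
print(avg)  # → 2.5
```

Comparing average scores across several candidate problems gives you a crude but explicit way to rank which pain is worth solving first.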
2. Feedback loops
Set up a framework that informs you when your product or proposition isn’t having the expected impact. Then investigate why this is the case.
Take every opportunity you can to learn from your errors of judgment. If your users aren’t responding to or interacting with your product as expected, there’s probably a good reason: either something unexpected has happened, or you’ve misunderstood something about your users and how they use your product. Both offer you an opportunity to gain a valuable new insight.
- KPI dashboard: Build a dashboard showing how your KPIs are performing and review it every week. It should be set up to highlight when your KPIs perform differently from what you’d expect (e.g. big improvements or deteriorations). If your product isn’t performing as expected, there’s probably something you can learn from this. (Template: Andy Young’s KPI review template)
- Customer advisory board: Set up a group of passionate users to give feedback on your recent development and planned features. (Case study: IMVU)
- Success criteria: For each planned feature, assign a quantitative criterion against which you can assess the impact of the change. A good success criterion should — if unmet — help you learn something new about your users or your decision making processes. (More details here)
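The weekly KPI review above can be sketched in a few lines: flag any metric that deviates from what you expected by more than a tolerance, so it gets investigated. All the metric names, values and the 15% threshold here are hypothetical:

```python
# Hypothetical weekly KPI review: flag metrics that moved more than
# `tolerance` (relative) away from what you expected them to do.
def flag_surprises(expected: dict, actual: dict, tolerance: float = 0.15) -> list:
    """Return (kpi, relative change) pairs that exceeded the tolerance."""
    surprises = []
    for kpi, exp in expected.items():
        change = (actual[kpi] - exp) / exp  # relative deviation from expectation
        if abs(change) > tolerance:
            surprises.append((kpi, round(change, 2)))
    return surprises

expected = {"signups": 400, "activation_rate": 0.30, "weekly_churn": 0.05}
actual = {"signups": 280, "activation_rate": 0.31, "weekly_churn": 0.08}
print(flag_surprises(expected, actual))
# → [('signups', -0.3), ('weekly_churn', 0.6)]
```

The point of writing the expectation down *before* the week starts is that every flagged surprise is, by construction, an error of judgment you can learn from.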
3. Optimisation
Find an element of your product that is underperforming. Then experiment to see if you can improve its metrics.
Target the ‘low-hanging fruit’ — namely areas of friction within your product. If you have a lot of visitors or active users, then small incremental improvements can have a big impact. So you should focus on finding and removing areas of friction that hinder your users from taking important actions.
- Funnel Analysis: Investigate your conversion & drop-off rates to identify where the least efficient parts of your product are. (Case study: OfficeDrop)
- Usability tests: Ask target users to carry out tasks, while you observe or record them. The aim is to see where they encounter problems or experience confusion. (Case study: Airbnb)
- Onsite feedback: Ask visitors specific questions about the usability of the site. (Case study: OfficeDrop)
- Customer support analysis: Look through your customer support requests to find common complaints that relate to the usability of your product.
- Heatmaps and Visitor recordings: Use an insights tool to track how real-life users behave on your site or in your app. (Recommended tool: Hotjar)
- Split testing: Carry out controlled experiments to compare the performance of different variants of the same feature. (Case studies: 10 sites using VWO)
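A funnel analysis like the one described above boils down to comparing how many users reach each step. This sketch uses made-up step names and counts to show where the biggest relative drop-off sits:

```python
# Hypothetical funnel: number of users reaching each step, in order.
funnel = [
    ("visited_landing_page", 10000),
    ("signed_up", 1200),
    ("completed_onboarding", 600),
    ("made_first_purchase", 90),
]

def dropoff_rates(steps):
    """Relative drop-off between each consecutive pair of funnel steps."""
    rates = []
    for (prev, prev_n), (step, n) in zip(steps, steps[1:]):
        rates.append((f"{prev} -> {step}", round(1 - n / prev_n, 3)))
    return rates

for transition, rate in dropoff_rates(funnel):
    print(f"{transition}: {rate:.1%} drop off")
```

In this invented example the onboarding-to-purchase step loses 85% of users, which would make it the first candidate for usability tests and experiments.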
4. Understanding your users
Carry out research & analyses to help you interpret why and how your users use your product.
Make decisions based not only on how your users behave, but also on why they do what they do. You won’t fully understand your users if you only know what they do. To make informed decisions, you really need to know (for example) who your users are, why some are more active than others and why some of them stop using your product.
- User interviews: Interview your target and actual users to understand their motivations, pain-points, habits and behaviours. You should compare and contrast the responses from users in different segments and cohorts.
- Exit interviews: Interview users who have cancelled their subscription or just stopped using your product. The goal of the interview is to understand why they’ve left and what you could do to get them to return.
- Co-creation workshops: Work together with users to design mock-ups for how they would solve their own problems. (More details here)
- Shadowing users: Accompany a user to see how they behave in their day-to-day life and passively observe them using your product within their natural environment. (More details here)
5. Exploration
Carry out research and analyses to help you uncover new opportunities and creative solutions.
Stack the odds in your favour by constantly innovating. Real innovation depends on serendipitous discoveries, and to increase your chances of making them you need to carry out activities that get you thinking about your users in new ways.
- Exploratory data analysis: Examine your existing data to get completely new perspectives on your users and their behaviour. Here are some questions you might ask: Are some user segments much more valuable than others? How does the on-boarding experience differ between active and inactive users? Are people using the product in any unintended ways? (Case study: Circle of Moms)
- “Mom test” interviews: If you ask your mother whether your business is a good idea, she’ll lie to you… But so will everybody else. “Mom tests” encourage you to ask questions that even your mother would be unable to lie about. So don’t talk about your product. Instead, ask questions about your interviewees’ behaviour, their past decisions and the problems they experience. (More details here)
- Shadowing users: Accompany a user to see how they behave in their day-to-day life — with a specific focus on the problems and pain-points they encounter. (More details here)
- Customer support analysis: Review your customer support requests to identify previously unknown pain-points.
- Encouraging feature requests: Invite your users to suggest ideas for features they’d like to see. One way of doing this could be through a competition.
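The first exploratory question above — are some user segments much more valuable than others? — is easy to answer from raw records. Everything in this sketch (segment names, revenue figures) is invented for illustration:

```python
from collections import defaultdict

# Hypothetical user records: acquisition segment and revenue generated so far.
users = [
    {"segment": "organic", "revenue": 120},
    {"segment": "organic", "revenue": 80},
    {"segment": "paid_ads", "revenue": 15},
    {"segment": "paid_ads", "revenue": 5},
    {"segment": "referral", "revenue": 200},
]

def avg_revenue_by_segment(records):
    """Average revenue per user, grouped by segment."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [revenue sum, user count]
    for u in records:
        totals[u["segment"]][0] += u["revenue"]
        totals[u["segment"]][1] += 1
    return {seg: total / count for seg, (total, count) in totals.items()}

print(avg_revenue_by_segment(users))
# → {'organic': 100.0, 'paid_ads': 10.0, 'referral': 200.0}
```

In this made-up data, an organic user is worth ten times a paid-ads user — exactly the kind of surprise that exploratory analysis exists to surface.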