In this part of the Lean Startup in a Nutshell series, let us look at how to take the interactions between customers and our code and turn them into valuable data about these customers. Each technique described here is designed to help us become more data-driven and ease decision making by favoring facts over fiction.
Split testing
Split testing (or A/B testing) is the core technique required to learn about user behavior.
In a split test, we deliver a reference experience to some of our users and an alternative experience to the rest of our users—while measuring the impact of the change within one group as compared to the other.
Split tests should be micro in their scope but macro in their impact measurement. The former means that a split test should test an isolated aspect of the experience, such as adding a feature, changing the appearance of a button, consistently changing a design element across the site, etc.
The latter means that the impact of the change implemented within a split test should be measured in terms of the overall metrics relevant to the business—such as the global signup rate or the revenue per customer—and not in terms of a local click-through or conversion rate.
Eric Ries offers many examples of counter-intuitive—but extremely powerful—insights derived from split testing. Let me cite only one: When a mere link indicating a premium experience (such as a V.I.P. club) was added to the navigational interface at Ries's company IMVU, this increased the overall revenue per customer even for those customers who never clicked on the link. The mere presence of the link primed the users to make them willing to spend more on the website.
Key practices for successful split testing include:
- easy one-line implementation for developers
- easy reporting to render all test results understandable
- starting simple and getting more complex over time
- making concrete predictions and testing against these
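The "easy one-line implementation" in the list above can be sketched, for example, as a deterministic hash-based bucketing function. The function name, the experiment name, and the 50/50 split below are illustrative assumptions on my part, not Ries's actual implementation:

```python
import hashlib

def split_test_variant(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to "control" or "treatment".

    Hashing the user id together with the experiment name keeps each
    user's assignment stable across visits without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 100 < 50 else "control"

# The "one-line" call a developer sprinkles through the code base:
if split_test_variant("user-42", "vip-club-link") == "treatment":
    pass  # render the alternative experience here
```

Whatever the bucketing scheme, the impact should then be measured per group against the overall business metrics—not against a local click-through rate.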
They were adding new features, improving quality, and generally executing against the product roadmap. Each month, their gross numbers moved up and to the right. So, they said, they must be on the right track.
Then I asked them this question: what would happen to the company if the entire product development team took a month off and went on vacation? The sales staff would keep signing up new customers. The website would continue to get new traffic from word of mouth. Could they be sure that they wouldn’t—as a business—be making just as much “progress” as they claim to be making now?
In one scenario, they’ve been working overtime, putting in crazy hours, and in the other, they’d be on vacation. If both scenarios lead to the same result, how can the product development team claim to be making progress? To be doing effective work?
Cohort analysis
Cohort analysis means looking at the new customers who join every day as a distinct group. This enables an organization to ask how today's customers compare to yesterday's—whether measured improvements stem from the most recent changes rather than from the already well-working system. It also enables an organization to detect "fake improvements": features or changes that actually worsen the user experience.
For each cohort, you may ask:
- What fraction of users signed up?
- What fraction of users bought the product?
- What fraction of users upgraded to the premium account?
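The per-cohort fractions above can be computed with a few lines of code. The event log and field names below are hypothetical; the point is only to show the grouping by signup date:

```python
from collections import defaultdict
from datetime import date

# Hypothetical in-memory event log: (user_id, signup_date, action).
events = [
    ("u1", date(2011, 9, 1), "signup"),
    ("u1", date(2011, 9, 1), "purchase"),
    ("u2", date(2011, 9, 1), "signup"),
    ("u3", date(2011, 9, 2), "signup"),
    ("u3", date(2011, 9, 2), "upgrade"),
]

def cohort_fractions(events, action):
    """For each signup-date cohort, the fraction of users who did `action`."""
    cohort_users = defaultdict(set)   # signup date -> all users in cohort
    acting_users = defaultdict(set)   # signup date -> users who did `action`
    for user, signup_date, act in events:
        cohort_users[signup_date].add(user)
        if act == action:
            acting_users[signup_date].add(user)
    return {day: len(acting_users[day]) / len(cohort_users[day])
            for day in cohort_users}
```

Comparing these fractions across cohorts—instead of looking at one gross total—is exactly what separates recent changes from the already well-working system.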
Cohort analysis is also perfect for killing features. Just remove a feature and see what happens. If the relevant overall business metrics don't change, you just improved the product by making it simpler.
Conversion funnels
Sales funnels and customer acquisition funnels are old and time-tested concepts. The key to building meaningful and trustworthy conversion funnels is person-based analytics (or per-customer metrics) instead of global analytics (or vanity metrics) such as the total number of views or visitors.
There is a great talk by Mike McDerment explaining what he calls the Google Analytics line and how to go beyond it by connecting marketing and customer account databases. Going beyond the line of vanity metrics (such as the total number of page views) is a prerequisite for collecting the data necessary to analyze and build conversion funnels.
I highly recommend the above talk as well as the related article by Eric Ries in order to get started with person-based analytics. Always remember, "metrics are people, too".
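To make the difference concrete, here is a minimal sketch of a person-based funnel. The step names and the idea of storing each person's furthest step are my own illustrative assumptions—the point is that every count refers to individual people, not to page views:

```python
# Hypothetical per-person data: the furthest funnel step each user reached.
FUNNEL = ["visited", "signed_up", "activated", "paid"]

users = {"u1": "paid", "u2": "signed_up", "u3": "visited", "u4": "activated"}

def funnel_counts(users):
    """Count the individual people who reached each step of the funnel."""
    return {
        step: sum(1 for furthest in users.values()
                  if FUNNEL.index(furthest) >= i)
        for i, step in enumerate(FUNNEL)
    }
```

A vanity metric would only report total visits; the per-person view above lets you see where real people drop out between steps.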
Net Promoter Score and product/market fit
NPS is a methodology that comes out of the service industry. It involves using a simple tracking survey to constantly get feedback from active customers. It is described in detail by Fred Reichheld in his book The Ultimate Question: Driving Good Profits and True Growth. The tracking survey asks one simple question:
"How likely are you to recommend Product X to a friend or colleague?"
The answer is then put through a formula to give you a single overall score that tells you how well you are doing at satisfying your customers.
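That formula is simple: answers are given on a 0–10 scale, customers answering 9–10 count as promoters, 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A sketch:

```python
def net_promoter_score(ratings):
    """Reichheld's formula: % promoters (9-10) minus % detractors (0-6),
    over answers on a 0-10 scale. Answers of 7-8 count as passives."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

net_promoter_score([10, 9, 8, 7, 6, 3])  # 2 promoters, 2 detractors -> 0.0
```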
The Net Promoter Score is a powerful concept to get a birds-eye holistic view of your business. It is designed to measure the overall customer satisfaction a product or service yields.
For Sean Ellis, a slightly different question is at the root of determining whether a startup has reached product/market fit:
"Would you be very disappointed if you could no longer use the product?" or
"Do you consider the product a must-have"?
Sean Ellis holds that a startup has reached product/market fit only when at least 40% of users answer "yes" to one of the above questions.
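Expressed as code, the Ellis test reduces to a single threshold check. The function name and answer labels below are my own illustrative choices:

```python
def has_product_market_fit(answers, threshold=0.4):
    """Sean Ellis test: product/market fit when at least `threshold`
    of surveyed users would be very disappointed without the product."""
    very_disappointed = sum(1 for a in answers if a == "very disappointed")
    return very_disappointed / len(answers) >= threshold

# 4 of 10 users (40%) would be very disappointed -> fit reached.
has_product_market_fit(["very disappointed"] * 4 +
                       ["somewhat disappointed"] * 6)
```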
Constantly measuring the NPS and product/market fit ensures that a startup never loses sight of the overall picture—improving customer happiness—while split testing and working on conversion optimization. These holistic metrics are probably also the best way to measure product development progress in the long run.
User testing
If Product Development is simply going to start building the product without any customer feedback, why have anyone talk to customers at all? Why don’t you just build the product, ship it, and hope someone wants to buy it? The operative word is "start" building the product. The job of Customer Development is to get the company’s customer knowledge to catch up to the pace of Product Development—and in the process, to guarantee that there will be paying customers the day the product ships.
User testing is a cheap way to "get out of the building" and talk to customers. User testing means having a bunch of individual users interact with your product or service, giving you qualitative feedback. There are a number of user testing service providers on the web, usually providing a screen-sharing video and a written report for each user doing the test.
Good measurement relies on good and reasonable metrics. It requires an actual understanding of what constitutes progress and how to document it. It puts science ahead of vanity. And it recognizes that metrics are people, too.
The Lean Startup in a Nutshell series
If you liked this post or if it gave you new food for thought, then please be so kind as to leave a comment below (no registration required) or share it with your network. Your feedback is what keeps me going. Thanks!

Thursday, September 15, 2011 at 03:00PM | David Link