When you have heaps of data at your fingertips, it is tempting to start asking every question there is. Even more so when you can outsource the answers to an analytics team who run the data for you. But looking at data that is irrelevant to the goal you are trying to achieve won’t get you anywhere, and a dashboard isn’t of much use if you don’t know where to look. As it turns out, prioritizing data is as important as prioritizing features to build.
When launching a new feature or experiment, it is essential to understand what change in behavior you intend to drive. What do you want users to start doing? What do you absolutely not want to happen when you launch your feature? Outlining core metrics, supporting metrics and counter metrics helps you define the questions you need answered to understand what success looks like and when you have achieved it.
Core metrics? It would probably have been better to say “core metric” - because there should preferably be only one. This metric will tell you whether the launch is a success or not and might even be the reason you launched the feature or experiment in the first place.
The selected core metric should have an immediate impact on your business. “Time on page” could be a good metric if your business is ad-supported - but it would only act as a proxy for success in a subscription model or in e-commerce.
Examples of core metrics are % of users signing up, % of users upgrading to a paid product and cart value.
Supporting metrics help describe why the core metric was impacted or not. Here it’s more than OK to list several metrics, as you will want to track how users move - or don’t - through your product.
If your core metric is % of users upgrading, you would want to measure how many viewed the upgrade form, how many clicked the upgrade button, how many cancelled immediately and so on. These metrics will help you understand why the core metric was - or wasn’t - impacted.
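A funnel like this reduces to simple ratios over event counts. Below is a minimal sketch of how the supporting metrics relate back to the core metric; the event names and counts are hypothetical, not taken from any real product.

```python
# Compute conversion rates through a hypothetical upgrade funnel.
# Each step is (event_name, number_of_users_who_reached_it).

def funnel_rates(steps):
    """Return each step's conversion rate relative to the first step."""
    base = steps[0][1]
    return {name: count / base for name, count in steps}

upgrade_funnel = [
    ("saw_banner", 10_000),
    ("clicked_upgrade", 1_200),
    ("viewed_form", 900),
    ("completed_upgrade", 300),
]

rates = funnel_rates(upgrade_funnel)
# rates["completed_upgrade"] is the core metric (% of users upgrading);
# the intermediate rates are the supporting metrics that explain it.
```

If the core metric drops, the supporting rates show where in the funnel users fell away - for example, a healthy click rate but a collapsing form-view rate points at the form, not the banner.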
“But if you add a big banner telling users to upgrade, they will get annoyed and stop using the app completely!” It’s to counter arguments like these that we use counter metrics. Counter metrics keep track of unwanted changes. They are not intended to be impacted by the change (if they improve, it’s a pleasant surprise), but are in place to make sure nothing breaks.
In this example, we want to make sure people keep using the app even though we add new touch points for upgrading. The counter metrics could be session time, to help us understand if people spend less time in the app after having seen the banner - which would be an indication of annoyance - and retention, to understand if people are less likely to use the app again after having been shown the banner.
If the counter metrics are negatively impacted, it is not necessarily a no-go. If, in our example, the subscriber uplift is worth more than the value of the users we lost who didn’t upgrade, it could be worth rolling out the banner anyway. It all depends on your long-term growth strategy.
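That trade-off is just an expected-value comparison. The sketch below makes it explicit; every number here is a hypothetical assumption, and in practice the lifetime-value figures would come from your own finance or analytics models.

```python
# Hedged sketch: weigh the value of extra subscribers from the banner
# against the value of the free users it drove away.
# All inputs are hypothetical placeholders.

extra_subscribers = 80        # users who upgraded because of the banner
subscriber_ltv = 120.0        # assumed lifetime value of one subscriber
churned_free_users = 300      # free users who stopped using the app
free_user_ltv = 5.0           # assumed value of one free user (e.g. ad revenue)

uplift_value = extra_subscribers * subscriber_ltv   # value gained
churn_cost = churned_free_users * free_user_ltv     # value lost

net_value = uplift_value - churn_cost
# net_value > 0 suggests the banner may be worth rolling out anyway -
# subject, as the text notes, to your long-term growth strategy.
```

With these placeholder numbers the uplift outweighs the churn, but the point is the structure of the comparison, not the result: the same arithmetic with different LTV assumptions can flip the decision.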
It’s easy to believe that the more we measure and monitor, the more we can learn and improve. However, finding yourself launching an experiment or feature with a long list of metrics you consider important to track is a sign that you don’t really know what behavior you want and expect to drive. Prioritize your data and metrics to avoid distraction and to clarify what you intend to change. Less is more, and relentless focus is everything.