In my last newsletter I wrote about the most common trap data teams fall into in an attempt to deliver value.
Any analytics function focused on just answering questions or building dashboards is doomed to become a low-value cost center in the organization, which eventually leads to the dreaded "what's the ROI of the data team?" question.
But answering questions isn’t the only trap. Another, more insidious one is focusing on “insights,” especially “actionable insights.” Insights are often seen as la crème de la crème of data team value, but unfortunately they’re also a trap.
Let me show you why.
The problems with insights
Insights are subjective. What’s insightful for me could be common sense for you and vice versa. We could even be looking at the exact same results and still interpret them differently based on our knowledge, experience, mental models, etc.
Insights don’t automatically lead to decisions even if you think they’re actionable. Due to their subjective nature, what I consider actionable might not be actionable for you and vice versa.
The value of insights is often unknown. Say you work for an insurance company and you notice an anomaly in driving data in a particular area. Is it a temporary blip, or will it require adjusting premiums for drivers who live there? One of those would be considered an insight and the other wouldn’t, but you often can’t tell which is which at the time.
Many so-called insights end up confirming existing biases or reinforcing existing assumptions and processes. Say you discovered that the sales conversion funnel is too long. Is that an inherent behavior of your leads or just how the funnel was designed? You don’t know, and you can’t find out without running an experiment with a shorter funnel and gathering fresh data.
Just because you think an insight is actionable doesn’t mean action will be taken. You spend days cleaning the data, exploring it, and finding some interesting or even insightful things to present. You deliver the insights in a beautiful presentation and recommend the best course of action. The executive team thanks you, but they don’t follow through.
Imagine a manufacturing process that randomly produces either very profitable gadgets or duds. Who would want to own that plant? Even an engineering team that sometimes ships beautiful code and sometimes messy code would be booted.
So why does this happen?
The root cause of the problem has to do with a fundamental misunderstanding of the value produced by data teams.
Understanding the nature of value for the data team
For the longest time the work of data teams has been treated like production and engineering. Data teams are taught to think of their work as a product and to use agile methods to deliver value.
This causes a lot of problems, because very little of the work of data teams is product-like. For something to be considered a product, there needs to be an element of predictability.
Initially, that predictability comes from the value delivered by the product. This is easy to see with physical products (smartphones, laptops, TVs) and with many software products (Excel, Google Chrome, email).
After the value has been established, the next element of predictability is quality: once the value of smartphones was clear, we came to expect them to work correctly and consistently.
So what does value mean for data teams?
I’ve mentioned before that the real value of the data team lies in driving operational performance that can be measured directly in the bottom line, based on actions taken. But how do you do that?
You have to understand that there are three tiers of value for data teams:
Measuring current performance accurately
Research & development
Data-driven automation
Tier 1 value - Measuring performance
Just as a doctor assesses your health by measuring key indicators like blood pressure and cholesterol, there’s incredible value in assessing the performance of a business.
Upon joining a company as head of data, you’ll feel immense pressure to answer questions, but as I’ve said before, this is a trap. So the first thing you should focus on is measuring the performance of the business. Say, for example, you join a SaaS company.
Start by making a list of all the metrics that a SaaS company needs to measure. SOMA has a very comprehensive list you can use. Use this as a checklist to interview all the executives and figure out what exists already and what needs to be built. For the metrics that do exist, check their definitions and formulas against SOMA.
Next prioritize the metrics that your executive team needs built out first. These could be financial metrics, sales metrics, marketing metrics, etc. This will give you a roadmap. Get buy-in from them and hand off the roadmap to your team to start building.
Keep in mind that the number of metrics, while large, is finite, which means the artifacts (like dashboards or metric trees) will also have a limited scope and shouldn’t be changing constantly. This lets you build a data infrastructure from scratch, or take over an existing infrastructure and improve it.
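The inventory-and-prioritize step above can be sketched as a simple coverage check. This is a minimal illustration in Python; the metric names, owners, and statuses are hypothetical placeholders, not the actual SOMA checklist.

```python
# Hypothetical metric inventory built from executive interviews.
# For each metric: does it exist, and does its definition match
# the reference checklist (e.g. SOMA)? All entries are illustrative.
checklist = {
    "MRR": {"owner": "finance", "exists": True, "definition_matches": True},
    "Net Revenue Retention": {"owner": "finance", "exists": True, "definition_matches": False},
    "CAC": {"owner": "marketing", "exists": False, "definition_matches": False},
    "Pipeline Coverage": {"owner": "sales", "exists": False, "definition_matches": False},
}

def triage(checklist):
    """Split metrics into build / fix / done buckets for the roadmap."""
    to_build = [m for m, v in checklist.items() if not v["exists"]]
    to_fix = [m for m, v in checklist.items()
              if v["exists"] and not v["definition_matches"]]
    done = [m for m, v in checklist.items()
            if v["exists"] and v["definition_matches"]]
    return to_build, to_fix, done

to_build, to_fix, done = triage(checklist)
```

The three buckets map directly onto the roadmap you take to the executive team: what to build, what to reconcile against the reference definitions, and what already works.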
This type of value is well understood, predictable, and tangible, and it fits nicely with the production paradigm, so you can use agile methods to deliver it. The artifacts could be curated data sets, dashboards, or metric trees.
Tier 2 value - Research & Development
This is where the nature of value for data teams diverges completely from the production metaphor. R&D by its very nature is unpredictable so the moment you frame it as production, you doom your team.
Many of the questions you get from stakeholders are answered through the metrics and data sets delivered as Tier 1 value, but there are some questions that are best framed as research.
This is a key distinction because it sets up the proper expectations with stakeholders. The moment you reframe questions this way, any stakeholder will understand that the work might not deliver anything significant.
So what fits into R&D?
First, discovering the causal or correlational drivers of the key output metrics from Tier 1 should occupy a good portion of this work. Amazon calls these controllable input metrics.
For example, you might discover that reducing the time to contact a lead (from the moment they submit a request) to under one hour increases the conversion rate by 10%. That lift in conversion rate can be tied directly to revenue, and your team must take credit for it.
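A first pass at an analysis like this is just comparing conversion rates across response-time buckets. A minimal sketch, with entirely made-up lead records; note that a gap between buckets is only correlational, and, as discussed earlier, confirming causation would require an experiment.

```python
# Toy lead records: (minutes_to_first_contact, converted).
# The numbers are invented to illustrate the analysis, not benchmarks.
leads = [
    (15, True), (40, True), (55, False), (90, False),
    (30, True), (200, False), (45, True), (120, True),
]

def conversion_rate(records):
    """Fraction of leads in the bucket that converted."""
    return sum(1 for _, converted in records if converted) / len(records)

fast = [r for r in leads if r[0] <= 60]   # contacted within an hour
slow = [r for r in leads if r[0] > 60]    # contacted later

# Observed gap between buckets; suggestive, not proof of causation.
lift = conversion_rate(fast) - conversion_rate(slow)
```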
Here are a few other types of projects that fit this category:
LTV modeling
Multi-touch attribution modeling
Marketing mix modeling
Lead scoring
Identity matching from anonymous data
Marketing segmentation
Recommender systems
etc.
Each of these projects has tangible value that can be measured in bottom-line results, but they don’t fit the production paradigm as easily, so be careful about shoving this type of work into an agile framework.
Tier 3 value - Data-Driven Automation
The third type of value delivered by data teams fits into a framework I call “data-driven automation,” and it’s often not counted as value delivered by data teams at all. I think this is a missed opportunity.
Many projects take metrics or other artifacts the data team produces and feed them into business processes in order to automate actions. Even if the data team doesn’t build the actual automation, producing the modeled data or the statistical model output definitely counts.
For example, if you’ve done lead scoring or LTV modeling and then used that output as input to a process that decides whether to call or email a lead, you’ve done data-driven automation.
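The routing step in that example can be sketched in a few lines. The threshold value and channel names below are illustrative assumptions, not a prescription; the point is that the model score produced by the data team is what drives the action.

```python
# Hypothetical data-driven automation: route leads into different
# follow-up channels based on a lead-scoring model's output.
# The threshold and channel names are illustrative assumptions.
HIGH_VALUE_THRESHOLD = 0.7

def route_lead(lead_score: float) -> str:
    """Decide the follow-up action from a lead score in [0, 1]."""
    if lead_score >= HIGH_VALUE_THRESHOLD:
        return "call"   # high-scoring leads get a sales call
    return "email"      # everyone else enters the email nurture funnel

# Example: route_lead(0.9) sends the lead to a sales call,
# route_lead(0.3) sends them into the email funnel.
```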
Let me mention a few types of projects that can fit here:
Sending automated emails for abandoned shopping carts
Sending automated emails with customized promotions
Sending automated reminder emails/texts for appointments
Automatically routing high- and low-LTV leads into separate funnels
Raising an alert when sales are too low for the day
Generating a mailing/email list of customers for a special promotion
Anomaly detection automation
etc.
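To make one of the items above concrete, here is a minimal sketch of the “sales too low for the day” alert. The flat 50% threshold against a trailing average is a hypothetical simplification; a real check might compare against a seasonal forecast instead.

```python
# Minimal sketch of the "sales too low" alert from the list above.
# The baseline and threshold are hypothetical assumptions.
def sales_alert(todays_sales: float, trailing_avg: float,
                threshold: float = 0.5) -> bool:
    """Alert when today's sales fall below a fraction of the trailing average."""
    return todays_sales < threshold * trailing_avg
```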
I hope this helps you clarify the value of your data team. If you enjoyed it, let me know so I can write more about this topic in the future.
Until next time.
Hi, thank you for the article, I enjoyed reading it.
I'm glad to share that my data team is on its way to creating value of all three types, especially Tier 3.
However, I still find Tier 2 projects hard to manage, both in terms of progress and stakeholder expectations. You said to "be careful about shoving this type of work into the agile framework," so are there any other methods you recommend for this project type? (I'm working on something similar to marketing mix modeling)
Thanks for the article Ergest. It reflects well how I think about value.
Regarding Tier 2, i.e. R&D, I would like to add the term “hypothesis testing,” in which you might test (weak) relationships in your KPI tree(s) - which you described in terms of finding causal relationships. Together with the organisation’s strategy, this should inform what to work on next.
Finally, I would not disregard the value of “answering questions,” in case you find value in them. I like to see my colleagues as subject matter experts with good ideas and questions. Also, you can decide whether to deliver them a workable prototype with reduced effort or a production-ready “data product.”