Hello and welcome to the latest issue of Data Patterns. If this is your first one, you’ll find all previous issues in the archive.
In a recent issue we mapped the problems of analytics onto a logical tree diagram to trace them down to a root cause. By connecting them through simple cause and effect relationships, we could identify the core problem.
That gave us a clearer picture of what the current reality looks like and answered the question of “what to change.” The next logical question is “what to change to?” What does it look like when analytics is used effectively?
In the title of this post I made a pretty bold claim. There’s a “secret” weapon in analytics that we all know about but very few can actually wield. I further claimed that this “weapon” would help analytics win.
Have you figured it out yet?
Let me ruin the surprise and tell you. It’s the law of cause and effect.
We all know that the “holy grail” of analytics is when you discover direct cause and effect relationships in data. I’m not talking about correlations here. Correlations are useful but they will not help analytics win.
When analytics is used effectively, the discovery of cause and effect relationships can drive incredible employee engagement and satisfaction, add millions of dollars to the bottom line and make for very happy data analysts and scientists.
I will illustrate this claim with an example from the book Smarter Faster Better by Charles Duhigg.
“In 1997, executives running the debt collection division of Chase Manhattan Bank began wondering why a particular group of employees in Tampa, Florida, [led by Charlotte Fludd] were so much more successful than their peers at convincing people to pay their credit card bills”
“Chase knew from internal surveys that debt collectors didn’t especially like their jobs, and executives had grown accustomed to lackluster performance”
“Employees were sent to training sessions and given daily memos with charts and graphs showing the success of various collection tactics. But almost none of the employees, Chase found, paid much attention to the information they received”
“However, Fludd’s group was collecting $1 million more per month than any other collection team, even as they were going after some of the most reticent debtors. What’s more, Fludd’s group reported some of Chase’s highest employee satisfaction scores. Even the debtors they collected from, in follow-up surveys, said they had appreciated how they had been treated”
What was going on there? Why was this group of debt collectors not only very satisfied with their jobs but also highly engaged and effective, collecting $1 million more per month than other groups? What was their secret weapon?
As it turns out it had to do with our old friend, the law of cause and effect.
It all started with a very clear and simple objective (collect more money from credit card debtors) and an even simpler metric (collection rate). Fludd’s team would come up with hypotheses. In fact they’d get together during lunch and kick around ideas to test.
“One day, I came up with this idea that it would be easier to collect from younger people, because I figured they’re more eager to keep a good credit score,” she said.
Then they’d put the ideas to the test immediately.
“So the next day, we started calling people between the ages of twenty-one and thirty-seven.” At the end of the shift, employees reported no noticeable change in how much they had convinced people to pay.
Now here comes the key piece. The next day they changed a single variable. Age range. They didn’t change the script, didn’t change when they called, didn’t change the balance on the card or anything else.
“So the following morning, Fludd changed one variable: She told her employees to call people between the ages of twenty-six and thirty-one. The collection rate improved slightly. The next day, they called a subset of that group, cardholders between twenty-six and thirty-one with balances between $3,000 and $6,000. Collection rates declined. The next day: Cardholders with balances between $5,000 and $8,000. That led to the highest collection rates of the week”
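Fludd’s loop — change a single variable, hold everything else constant, measure, repeat — can be sketched as a daily filter over the call list. The records and field names below are hypothetical, purely to illustrate the discipline of isolating one variable per day:

```python
# Hypothetical call records: one dict per cardholder contacted.
calls = [
    {"age": 28, "balance": 5500, "paid": True},
    {"age": 45, "balance": 2000, "paid": False},
    {"age": 30, "balance": 7200, "paid": True},
    {"age": 24, "balance": 4100, "paid": False},
    {"age": 27, "balance": 6500, "paid": True},
    {"age": 29, "balance": 3500, "paid": False},
]

def collection_rate(records):
    """Share of called debtors who agreed to pay."""
    return sum(r["paid"] for r in records) / len(records) if records else 0.0

# Day 1: change ONE variable -- the age range -- and nothing else.
day1 = [r for r in calls if 21 <= r["age"] <= 37]

# Day 2: tighten that same single variable.
day2 = [r for r in calls if 26 <= r["age"] <= 31]

# Day 3: hold the age range fixed, vary the next single variable (balance).
day3 = [r for r in day2 if 5000 <= r["balance"] <= 8000]

for label, segment in [("21-37", day1), ("26-31", day2), ("26-31 & $5-8k", day3)]:
    print(label, round(collection_rate(segment), 2))
```

Because each day differs from the last by exactly one filter, any change in the collection rate can be attributed to that one variable — which is the whole point of Fludd’s method.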
At the end of the day they’d all get together to review the results and speculate why they succeeded or failed. They’d share ideas with other workers on what worked and what didn’t. Soon they started to notice certain patterns they hadn’t seen before.
What does this do? It triggers human curiosity and powers the desire for exploration and learning. It’s the same feeling inventors get when they finally figure out how to get their machine working or scientists get when they discover something new. It’s the same feeling you get when you finally solve a difficult puzzle.
That feeling of discovery is the most powerful positive motivator in existence. With a tight, daily feedback loop, clear objectives and easily measured outcomes, Fludd turned all her employees into scientists and explorers. No wonder they became enthusiastic.
But we all know this right? We know that the secret to using analytics effectively is to find cause and effect relationships. So why do I claim that very few can wield this power?
Discovering causal relationships between key metrics is not easy. As Abhi Sivasailam argued recently, you need enough of the right metrics in place, a well-established weekly business review process, and great care in how you run experiments.
“Charlotte’s peers would generally change multiple things at once,” wrote Niko Cantor, one of the consultants, in a review of his findings. “Charlotte would only change one thing at a time. Therefore she understood the causality better.”
I’ve seen this story play out more than once. Impatient managers would set up multiple experiments instead of just one.
The objectives would be unclear and the target metric would often have no correlation to the objective. Then they’d look at the results before enough data had been collected, and finally they’d change multiple variables at once for the next round.
Instead of triggering feelings of curiosity, discovery and learning, they’d trigger frustration — both in themselves and in the data team, which ends up feeling powerless to drive any meaningful impact on the business.
It doesn’t take much to get started. To illustrate, let me give another example from early on in my career.
About a decade ago I was working as a business analyst for a travel company. They designed, marketed and operated their own tours across the world.
The executive team was on a mission to improve the repeat rate, which is the rate at which travelers completing their first tour would take another tour in the future. This was desirable of course because it reduced customer acquisition costs and increased revenue.
We had discovered that the post-tour NPS (net promoter score) was a key driver of the repeat rate (satisfied travelers are more likely to repeat), so the next logical question was “what drives NPS?” After weeks of correlational analysis, the strongest driver I could find was the quality of the hotels where travelers stayed.
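That kind of correlational screen is simple to run. The sketch below uses made-up per-departure numbers (the real data isn’t in this story) and computes the Pearson correlation from scratch:

```python
# Hypothetical per-departure data: average hotel rating vs. post-tour NPS.
hotel_rating = [3.2, 3.5, 3.8, 4.0, 4.2, 3.4, 3.9, 4.1]
nps          = [38,  42,  55,  60,  64,  40,  58,  62]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strong correlation flags hotel quality as a candidate driver, but
# only an experiment can show it actually causes higher NPS.
print(round(pearson(hotel_rating, nps), 2))
```

A high coefficient here is exactly where the analysis in the story stood: a promising candidate, not yet a cause — hence the hotel-swap experiment that follows.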
Since the finding was not causal, we decided to run an experiment. We took several departures of some popular tours and changed the hotels to better ones. These weren’t bad hotels to begin with; they were all 3–4 stars, but there is variation among them, so you can always do slightly better for a slightly higher cost.
When we got the data back, we noticed a decent increase in overall tour NPS, but not large enough to warrant the increase in costs. Keeping the tours affordable was important for overall growth, but at least we knew what it would take.
That’s it for now. I’ll write more about this topic because it’s of great interest to me and, I hope, to you as well.
Until next time.