A few years ago, we started an initiative to track corporate and region goals. Our corporate scoreboard and weekly accountability meetings ultimately taught us the value of sharing the right measures, no matter how clear and consistent we think our message is.
One of our key initiatives was to increase the number of customers actively using our new website by 100. The goal was to sign up new accounts and keep them engaged to ensure adoption. If a customer went 30 days without a login, they would fall off the standings and we would no longer count them as an active user.
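That 30-day rule is easy to express in code. Here is a minimal sketch of how such a filter might look; the customer names, dates, and the `active_customers` helper are all hypothetical, not part of our actual reporting system:

```python
from datetime import date, timedelta

# Hypothetical customer records: (name, date of last login).
customers = [
    ("Acme Co", date(2014, 11, 1)),
    ("Birch LLC", date(2014, 9, 15)),
    ("Cedar Inc", date(2014, 10, 28)),
]

def active_customers(customers, today, window_days=30):
    """Keep only customers with a login within the last `window_days` days."""
    cutoff = today - timedelta(days=window_days)
    return [name for name, last_login in customers if last_login >= cutoff]

print(active_customers(customers, today=date(2014, 11, 10)))
# → ['Acme Co', 'Cedar Inc']  (Birch LLC fell off the standings)
```

The net score is then just the count of that filtered list compared against the target.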
With six weeks left to reach the goal, our net score began to drop. We shared this decline in both the corporate meeting and all of the region meetings. We seemed to be better at signing up customers than at sustaining adoption, as new customers started to abandon the site.
With our goal date fast approaching, and a financial incentive on the line, our message was clear: we needed to help trailing customers adopt the site.
On the data visualization side, we drilled down to a lower level of detail, reporting net scores that highlighted the change in customer count by salesperson. The feedback was the same across all regions: “We don’t feel we can control whether customers use the site.”
Closing in on the goal date, we decided to add a secondary measure that connected customers to the average number of days without a login. This measure was shared across all existing, trailing, and abandoned customers.
This measure sparked new interest; it was like a light bulb went on. Region managers and salespeople started asking for a list of customers sorted by days since last use and felt confident that they could contribute toward meeting the goal.
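That sorted list is a simple transformation of the same data. Here is a rough sketch of what such a worklist might look like; the records, field names, and the `worklist` helper are illustrative assumptions, not our actual implementation:

```python
from datetime import date

# Hypothetical customer records, including the owning salesperson.
customers = [
    {"name": "Acme Co", "salesperson": "Dana", "last_login": date(2014, 10, 1)},
    {"name": "Birch LLC", "salesperson": "Dana", "last_login": date(2014, 11, 5)},
    {"name": "Cedar Inc", "salesperson": "Lee", "last_login": date(2014, 9, 20)},
]

def days_since_last_use(customer, today):
    """Number of days since this customer's last login."""
    return (today - customer["last_login"]).days

def worklist(customers, today):
    """Customers sorted most-neglected first: days since last login, descending."""
    return sorted(customers, key=lambda c: days_since_last_use(c, today), reverse=True)

for c in worklist(customers, today=date(2014, 11, 10)):
    print(c["salesperson"], c["name"], days_since_last_use(c, today=date(2014, 11, 10)))
```

Sorting descending puts the customers closest to falling off the standings at the top, which is exactly the “where do I start?” answer the salespeople were asking for.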
All of a sudden it became clear that big numbers separated from individuals fall flat. People don’t take ownership of a number unless their name is next to it. Also, details matter. Sometimes upper management wants to see the overall scoreboard, but those who get the work done often need specific data in order to enact change.
How did this one measure add urgency to our data story?
I’ve heard it called a fallacy to think that data should speak for itself, but this time our fallacy was thinking that our weekly dialogue was enough to promote action.
Our objective was clear: we were losing website customers and needed to help them adopt. But our data visuals fell short of supporting this message and prescribing a starting point.
We knew the outcome, and we even knew how to help, but the elephant in the room was “Which customers first?” It was the missing link, and during the group calls no one would raise a hand and go on camera to admit they were unsure of where to start.
Rather than pose the question, people found it easier to mentally fill in the gaps with personal assumptions and bias. This pattern can overshadow the need to seek more information and end up as a lost opportunity.
Good data stories should be descriptive, build momentum, and create a desire to participate. They should also align facts with a feeling of empowerment, but that only works if we share information that qualifies our results and helps guide us to the next step.
When we brought in the second measure, we peeled back a layer from the customer total and revealed specific, attainable opportunities, leaving less to the imagination and providing motivation to act.
We learned to highlight the big totals for upper management and to be more thoughtful about how we detailed and shared data at the region and salesperson level.
We were fortunate to discover this in time, and in the end we exceeded our goal. We learned not to depend too heavily on our dialogue to uncover all the necessary questions and insights. Investing in measures that answer the unknown or unspoken questions can help fill in the gaps, and sometimes turn a silent lost opportunity into a reason to act.
Thanks for visiting and have a wonderful Thanksgiving! – Mark