Be Aware of the Power You Wield as a Data Person
(Photo: Randy Au)
I oftentimes joke that working at a startup, especially an earlier stage one (first ~100ish employees), is like being in an F1 race where we’re building the car as we’re driving it. And if that’s not crazy enough, there’s constant debate over whether we should build a steering wheel, seat belts, or racing stripes next sprint.
But one thing that’s not quite accurate about the analogy is that it implies you know where you’re going, which is debatable. Sure, the founders have a vision that’s on par with “we’re going to get to that finish line over there by driving fast along this track”, but there are lots of more tactical decisions that need to be made to prevent crashing.
The situation is more like what I imagine navigating a submarine would be like. You’re in an airtight cylinder, and the only way to orient and steer involves using tools and technology as your eyes. If the tools are broken and flawed, you’re dead.
And that’s where you, the Data Person, come in. By defining metrics, dashboards, reports, and doing ad-hoc analysis to answer questions people have, you are providing a view into what your organization is doing that is more comprehensive than anyone else has access to. Effectively, you are the eyes.
This is a very unique position to be in, and we should really understand what we’ve been entrusted with.
Your view isn’t the only view of the world, but it’s one of the broadest
Let’s be clear here — you’re not the only person with a view of the world. Everyone has a window to the world. Salespeople will be speaking to people outside the company, executives will be hearing from investors and hearing from teams, marketers will see how ads perform and what resonates with users, support folk will hear feedback and see bugs, accountants will be seeing the detailed financials. Everyone has a detailed view of their specific realm of responsibility.
The one key difference is that, being the “Data person” of the organization, ALL of the above are data sources that are within your purview, if you choose to leverage them. No one else (except perhaps the executive team) usually has a job that gives such broad access.
If you take the position that you are a force multiplier that will have the most impact by making everyone around you more effective, providing keen insight into what they’re doing and how they are improving should be one of your top priorities.
Building Eyes for Seeing the Bigger Picture
One reason the data person in an org often has the most comprehensive view is that they have the skills to use disparate data systems and bring them together. Very few people can simultaneously work with data from Salesforce, the production database, server logs, accounting records, spreadsheets from management, and customer reports. It’s a unique skillset to translate both quantitative and qualitative data into business-relevant insights. It may be painful and a huge amount of work, but at least it’s possible.
Even if you ultimately only need a single data source to define your success metrics, the fact that you have access to all these other sources means you had more opportunity to evaluate and triangulate during the creation process. That should make for a stronger world view overall.
The trick is to not invest too much into automating the use of a particular data source until you are confident that it will be useful. It is usually much cheaper to do a one-off analysis to check for usefulness/feasibility. It also gives you a chance to experience the quirks of the data source first-hand, which you will need to effectively automate it anyway.
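As a rough illustration, a throwaway script is usually enough to test whether a source is worth wiring into anything permanent. Something like the sketch below (the file names, columns, and join key are all hypothetical placeholders) surfaces a source’s quirks — duplicate keys, missing IDs, poor coverage — before you commit to automating it.

```python
# One-off feasibility check: can the CRM export even be joined to product data?
# File names, column names, and the join key are hypothetical placeholders.
import pandas as pd

crm = pd.read_csv("salesforce_accounts_export.csv")    # manual export from the CRM
usage = pd.read_csv("product_usage_extract.csv")       # ad-hoc pull from the prod DB

# Quirk-hunting before any automation: duplicates, missing keys, coverage.
print("CRM rows:", len(crm), "unique account_id:", crm["account_id"].nunique())
print("Usage rows with no account_id:", usage["account_id"].isna().sum())

joined = crm.merge(usage, on="account_id", how="outer", indicator=True)
print(joined["_merge"].value_counts())  # how many accounts exist in only one system?
```

If the overlap turns out to be tiny or the keys don’t line up, you’ve learned that for the cost of an afternoon instead of a pipeline.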
Defining Metrics is a Big Responsibility
Data scientists and related functions are often asked to define metrics: they take input from other stakeholders and then create measurements that best approximate those goals. The whole organization will then make decisions and judge itself by how that metric performs.
I often half-joke that part of a data person’s job is to literally define what reality is for the org.
Take a moment to reflect on how metrics are used in an org. If the metric is very flawed (all are flawed, but some are worse than others), then the org can waste time and resources trying to move a metric that doesn’t actually help anyone accomplish their goals.
What doesn’t get measured and reported on won’t get talked about and changed. Metrics that are measured poorly will lead people astray. And unless you have sharp, skeptical people looking at the data, these issues can go unnoticed until something horrible pops up.
It’s a surprising amount of responsibility to give to what could be a very junior team member. When I was just a couple of years out of grad school, I was helping to define critical metrics for product launches that were viewed throughout the org. I, of course, had help and mentoring from more senior folk, but the work (and mistakes) were largely mine to own.
Some ways to avoid disaster
Metric setting is an art backed by science, so there are no firm rules to follow that will guarantee correct results. But here are some strategies to employ to lower the chances of creating an incorrect and potentially dangerous view of reality.
Avoid vanity metrics, prefer ratios
There are lots of charts that get shown at board meetings that go up and to the right, e.g. total users signed up. They often don’t offer anything besides making people feel good with big numbers. The problem is that these vanity metrics tend to mask a lot of issues underneath because they typically don’t (or can’t) go down.
Most good metrics are in the form of ratios or durations: revenue per user, cost per sale, time to purchase, and so on. These sorts of metrics are more useful because they can actually change over time: they go up, down, or sideways, and people often want to move them in a particular direction.
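To make the contrast concrete, here’s a minimal sketch on invented numbers (the column names are made up too) comparing a cumulative signup count, which can only grow, against revenue per active user, which is free to move in either direction and therefore tells you something month to month.

```python
# Vanity metric vs. ratio metric on a toy monthly dataset (all numbers invented).
import pandas as pd

monthly = pd.DataFrame({
    "month": ["2023-01", "2023-02", "2023-03"],
    "new_signups": [400, 350, 300],
    "active_users": [1000, 1200, 1100],
    "revenue": [5000.0, 5400.0, 5600.0],
})

# Total signups to date: up and to the right by construction, masks the slowdown.
monthly["total_signups"] = monthly["new_signups"].cumsum()

# Revenue per active user: can rise, fall, or flatten, so it's worth watching.
monthly["revenue_per_active_user"] = monthly["revenue"] / monthly["active_users"]

print(monthly[["month", "total_signups", "revenue_per_active_user"]])
```

The cumulative line grows even while new signups shrink; the ratio wobbles, which is exactly what makes it discussable.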
Show potential levers to be pulled
A good metric should be something the org can reasonably affect; it can’t be something completely external, like the weather. People should be able to look at the metrics and think about how their individual efforts may help move that needle.
For example, imagine if your main metric is “number of people who are signed up for our newsletter” because you’ve proven it helps the business somehow. A developer can decide to put signups more prominently on the site, a marketer can run a campaign around it, and support folk can try to encourage people to sign up (or not leave) during their interactions. Maybe there are some layers of abstraction between the teams and the metric, but if you lay them out, the teams can use them to generate ideas to try.
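One way to lay those layers out is a simple decomposition. The sketch below is hypothetical (the factors and numbers are invented), but it shows how newsletter signups can be split into traffic times a signup rate, so each team can see which factor their work actually moves.

```python
# Hypothetical decomposition of the newsletter metric into levers:
#   signups = site_visitors * visit_to_signup_rate
site_visitors = 20_000          # marketing campaigns mostly move this lever
visit_to_signup_rate = 0.015    # placement/UX changes mostly move this lever

signups = site_visitors * visit_to_signup_rate
print(f"Expected signups: {signups:.0f}")

# A 20% lift in either lever moves the shared metric by the same amount.
print(f"With +20% traffic:     {site_visitors * 1.2 * visit_to_signup_rate:.0f}")
print(f"With +20% signup rate: {site_visitors * visit_to_signup_rate * 1.2:.0f}")
```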
Check for second, third order effects
If your metric is a good one, a change in it should reflect in changes in many other parts of the system. If the top of your acquisition funnel goes up 5000%, you’d expect conversions, revenue, even renewals and support tickets to go up too.
When a metric doesn’t do this, there’s a problem either in the metric or in your understanding of the business. It’s a major red flag.
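A quick way to operationalize that check, sketched below with made-up metric names and numbers, is to compare how much the headline metric moved against how much the downstream numbers moved. If the top of the funnel explodes while conversions stay flat, either the metric or your model of the business is off.

```python
# Rough sanity check on second-order effects; metric names and values are hypothetical.
last_month = {"funnel_top": 1_000, "conversions": 50, "support_tickets": 20}
this_month = {"funnel_top": 50_000, "conversions": 52, "support_tickets": 21}

def pct_change(metric: str) -> float:
    """Relative change in a metric between the two periods."""
    return (this_month[metric] - last_month[metric]) / last_month[metric]

headline = pct_change("funnel_top")
for downstream in ("conversions", "support_tickets"):
    change = pct_change(downstream)
    # A multi-thousand-percent jump at the top with near-flat downstream movement
    # is exactly the red flag described above.
    if headline > 1.0 and change < 0.10:
        print(f"Warning: funnel_top up {headline:.0%} but {downstream} only moved {change:.0%}")
```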
Name things sensibly and clearly
People don’t read manuals, and they definitely don’t read 50 caveats on a slide deck about what exactly a metric means. So a metric’s name needs to make it reasonably clear what it measures on its own. Don’t ever say “Users” if you actually mean “Users who paid within the past 90 days”. People can’t remember that and they will jump to the wrong conclusions.
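One small habit that helps is baking the definition into the name wherever the metric is computed. The snippet below is a hypothetical illustration (the function and field names are invented), not a prescription.

```python
# Hypothetical example: let the metric's name carry its definition.
from datetime import datetime, timedelta, timezone

def count_users_paid_last_90d(payments: list[dict]) -> int:
    """Distinct users with at least one payment in the past 90 days.

    Each payment record is assumed to have a "user_id" and a timezone-aware
    "paid_at" datetime. Calling this metric just "users" would invite every
    wrong conclusion at once.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)
    return len({p["user_id"] for p in payments if p["paid_at"] >= cutoff})
```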
Make sure people are aware of important blind spots
There’s never enough time or resources to measure and report on everything; there WILL be blind spots, things that aren’t measured or visible for all sorts of reasons. Depending on how critical they are to how the org operates, people may need to incorporate these “known unknowns” into their decision-making process. Make sure to be explicit about them when that happens.
Iterate
You’re almost guaranteed to get metrics wrong on the first try. Be prepared to iterate. Make sure everyone else knows that this is a process. Oftentimes it takes a few cycles to realize that a number that seemed important is actually quite boring and useless.
Ultimately, some metrics HAVE to be chosen, even if they are flawed and imperfect. The organization needs to achieve its goals regardless of whether it is flying blind or not. Once you accept that nothing can be perfect, it is easier to have the conversations required to keep improving over time.