I wish all my dashboards could run on split-flap displays
Dashboards aren’t supposed to be a big part of my job. It seems impossible to fully get away from them, but they’re not supposed to even come close to becoming the main focus. Most days, it works out that way, but some days, it feels quite the opposite.
I remember one of the first career conversations I had soon after I took up my current position as a Quantitative UX Researcher. Among the many bits of advice the research director gave, one of the bigger ones was that I was not supposed to spend all my time making and maintaining dashboards. Coming from a long history of analytics, I of course knew what that sort of workload looked like and was more than happy to be given the mandate to avoid such a situation.
The reasoning for calling out this specific anti-pattern is sound—there’s constant pressure from all sorts of people for more access to data and dashboards, especially if I were the only data person available for a team. It’s easy to be accommodating and create whatever dashboard gets requested.
But at the same time, if the majority of my time is taken up with handling dashboard work, there wouldn’t be room to work on bigger, more impactful research questions that would not only help the company accomplish its goals, but also help my long term career. I’d just forever be the guy stuck maintaining a massive array of dashboards that people may not even be looking at.
This trap is also amplified because there aren’t that many Quant UXRs around. Higher UX management hadn’t specified a need for research skills, programming, and statistics knowledge in the job description just to hire someone to babysit dashboards; there are plenty of knowledgeable folks from analyst/BI backgrounds who can do that without a strong research background. It would be a massive waste of money, which wouldn’t bode well for my continued employment.
What do you do instead of dashboarding?
Mostly two things. First is ad-hoc research work answering the most pressing product questions of the moment. These come and go with the dev cycle. Sometimes the questions are simple and just take a quick(ish) query on existing data. Other times we find out that the data we want flat out doesn’t exist, and now we need to spend eng resources to collect it. Once in a while, these ad-hoc analyses hit upon something we find useful and want to run on a more regular basis.
Another large chunk of work involves setting up processes for teams to follow so that they have good metric definitions and data collection in place for their product work. While it sounds like abstract work, a large component is hand-holding the team through actually executing the process a couple of times so that they incorporate it into their habits. Usually, the metrics defined at the end wind up in one or more dashboards =\.
There are other random things that pop up to fulfill immediate needs. One funnier example: I was once asked to gather up every dashboard that our teams were using, and I somehow wound up with a huge list of dashboards made by various people. A handful were in continuous use, and about half of those had broken due to a recent major database change. But apparently no one had noticed the damage because the dashboards weren’t actually being used.
So how do you get stuck with dashboarding?
Sometimes, dashboards are the right tool for the job. =\ Even the tasks mentioned above sometimes involve materializing some way for people to know what a given metric value is, and a dashboard is often good enough for that.
For example, helping make sure teams have good metrics/data setups for their work means showing them how to figure out what their key metrics should be. That usually involves a lot of ad-hoc work on my part to pull and twist candidate metrics to examine. But after all that exploration, we finally agree on a set of numbers. The team needs a way to refer to the numbers without bothering me, hence, yet another dashboard is born.
Exactly who winds up implementing the dashboard involves a certain amount of negotiation. I don’t really want to build and babysit yet another dashboard, even if only for a month or two, but the alternative is the team needs to find someone willing to devote time to doing something they’ve never done before. Maybe that person knows a bit of SQL, maybe not, maybe they’ve created data pipelines before, or not…
You can probably guess how painful such an indirect process can be sometimes.
Despite often having to set up the initial dashboard, it’s important to note that maintaining a dashboard takes significantly less work than speccing, analyzing, and setting one up. So in the name of getting things done, I’ve had slightly better luck making the thing and handing it over than forcing teams to do it on their own.
The key point is that while dashboarding is largely an inevitable result of the work that’s done, the most important result of the work wasn’t the dashboard itself. It was the whole process of teaching and making sure the team knows what their metrics are measuring, what the numbers mean for them, and why those numbers are important. The dashboard request was just a forcing function, a side effect of doing things.
If teams understand all those details that surround their dashboard, they’re more likely to pay attention to it. It means something to them besides just “number goes up and to the right.” They’ll also be more skilled at interpreting the numbers because of the familiarity they have with how it was defined and settled upon.
Dashboarding as a Conversation Starting Point
Every researcher I know who works with product teams has learned to ask “so what are you actually trying to do?” when someone approaches them with a study request. In order to maximize their own effectiveness, they use the request as an opportunity to start a conversation. It’s very rare that someone who comes with a research study request in mind has a fully-baked, well-designed idea. They’re not an expert in doing research, so that’s to be expected.
Dashboards can be considered the same sort of work process for analyst-type roles. People who come looking for “a simple dashboard of X” usually haven’t thought through all the important decisions that go into the dashboard. All they know is that they want to (or have been instructed to) keep track of a couple of numbers over time.
This is the perfect opportunity to try to steer them away from making Yet Another Dashboard No One Looks At™ by checking their actual goals first. Just that process alone winds up setting a relatively high bar for doing work—by dashboard-making standards anyways. It gives room to say “No”, and direct attention to something related but more in line with what people actually want to know. Anything that actually winds up clearing the bar probably needed enough work to be done surrounding the dashboard that it’s a decent investment of time.
By consistently applying this process of adding “useful friction”, it becomes easier to avoid becoming trapped into dashboard creation/maintenance. It still won’t completely solve the eternal problem of dashboards multiplying like rabbits, but it helps.
“With this one clever trick”
So you may have noticed that my primary strategy for not working on dashboards is to… work on tons of the stuff that would normally build up to a useful dashboard. So depending on how much sophistry you’re up for swallowing today, I have either fulfilled my goal of avoiding dashboarding work as much as possible, or I have utterly failed.
It’s not that I inherently abhor the work necessary to keep people informed about the key metrics of a system they’re responsible for. Being their eyes and ears in a giant opaque system is very important work. It just can’t be my only work. So this is the funky balance I wind up striking.