Since Quant UX Con is coming up next month, and I honestly don't write about quant UX topics much on here, I'm going to take a couple of weeks to expand on the very limited writing about my job role.
I get asked fairly often “what sort of problems do quantitative UX researchers work on?”. The answer is, of course, it depends on all sorts of stuff. But speaking in extremely broad strokes, one chunk of the work will look very familiar to someone with a data science background, while another chunk would be familiar to people from various branches of the social sciences. The specific mix depends largely upon what the teams you’re supporting need at the time.
In the end, since UX is interested in the intersection of users and our products, there are lots of ways our quantitative methods can help out with that overarching mission. In that spirit, I’m going to run through a bunch of problems that I’ve either worked on myself, or have seen others working on. It’s not going to be an exhaustive list, but hopefully it gives a good taste of the breadth of questions involved.
The data science-y types of work
This batch of work will be largely familiar to people doing data science work in industry.
Tracking, instrumenting, data pipelines
There’s a lot of data collection infrastructure that needs to be in place for user research to happen, and much of it will seem familiar to a data scientist. You’ll need various tracking tools, a data warehouse to run analyses from, frameworks for running surveys and experiments; the list goes on and on.
For the most commonly used pieces, like the data warehouse, product instrumentation, and experimentation frameworks, engineering teams also want the benefits and can share in the work. But there’s going to be bespoke, research-oriented stuff that we wind up taking on solo.
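To make that concrete, here’s a minimal sketch of what a bespoke research event logger might look like. Everything here is hypothetical: the field names, the example event, and the print-as-JSON output standing in for whatever queue or log collector a real pipeline would actually write to.

```python
import json
import time
import uuid

# Hypothetical event schema for a bespoke research logging pipeline.
# Field names are illustrative, not from any specific product.
def make_event(user_id: str, event_name: str, properties: dict) -> dict:
    return {
        "event_id": str(uuid.uuid4()),   # dedupe key for the pipeline
        "user_id": user_id,
        "event_name": event_name,        # e.g. "search_submitted"
        "timestamp": time.time(),        # unix seconds, UTC
        "properties": properties,        # free-form, validated downstream
    }

def log_event(event: dict) -> None:
    # In practice this would write to a message queue or log collector;
    # printing JSON lines stands in for that here.
    print(json.dumps(event))

log_event(make_event("u_123", "search_submitted", {"query_len": 14}))
```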
Who is using our stuff?
While it often seems obvious at first, I’ve noticed that every product has to re-ask this question of themselves on a fairly regular basis. User needs change, market and competitive environments change, technology changes. Understanding this often requires a wide mix of methods that include both quantitative and qualitative aspects.
But for the quant side, there’s lots of work that needs to be done around clustering and segmenting users into meaningful groups. Sometimes this winds up becoming a data mining exercise to generate personas (except there are people who have issues w/ how personas are used). Other times you’re running a big survey and analyzing that data to figure this out.
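As a rough illustration of the clustering side, here’s a minimal k-means sketch using scikit-learn. The features and the choice of four clusters are made up; real segmentation work spends most of its effort on picking meaningful features and checking that the resulting groups are interpretable to the team.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy per-user feature matrix: [sessions/week, avg session minutes, features used].
# Real segmentations would draw on far more behavioral and survey features.
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=2.0, size=(500, 3))

# Standardize first so no single feature dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)

# k=4 is arbitrary here; in practice you'd compare silhouette scores and,
# more importantly, sanity-check that segments mean something to stakeholders.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)
print(np.bincount(kmeans.labels_))  # users per segment
```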
What, exactly, are users doing?
Yeah, we know anecdotally that our users come to the website, browse around, put some stuff into a cart, and then give us money to buy the things in their cart. But what are users actually doing?
It’s extremely useful to be able to break that process down in detail to understand things. Knowing that 15% of users go straight to their item and never browse, while 40% seem to search around and never find the thing they want (even though we should have it) has massive implications.
So working with qualitative researchers to understand what users are actually doing, and to what extent, is very important. Handling this usually involves making sense of a ridiculous number of data points, as well as stretching your analytical and visualization skills.
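For a flavor of what the simplest version of this looks like, here’s a toy funnel computed with pandas. The event names and the tiny log are invented; the real thing runs over warehouse tables with millions of rows and far messier event taxonomies.

```python
import pandas as pd

# Toy event log; real logs would be pulled from a warehouse.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "event":   ["visit", "search", "add_to_cart",
                "visit", "search",
                "visit", "search", "add_to_cart", "purchase"],
})

# Classic funnel: what share of users reach each step at least once?
steps = ["visit", "search", "add_to_cart", "purchase"]
total_users = events["user_id"].nunique()
for step in steps:
    reached = events.loc[events["event"] == step, "user_id"].nunique()
    print(f"{step:<12} {reached / total_users:.0%}")
```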
Launching stuff, A/B Testing, experiments of all sorts
The original poster child of Data Science, before ML/AI started taking over the whole conversation. The basic premise is “we want to see if this new thing is an actual improvement over the old thing”. There are also variants of “we want to know whether this completely new thing is actually being used”.
There’s a lot of basic work to be done in this area: making sure tracking is in place, designing the experiments, and finally evaluating the results. But there can be a lot of unexpected depth once you start diving into whole research programs that use experiments to find generalizable truths about your product.
Then, if you get bored of the basic stuff, you can take experimentation to all sorts of lengths as you go and search for causality within noise. There’s more than enough room here to build an entire career on.
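At the simple end, the evaluation step often boils down to something like the two-proportion z-test sketched below (all numbers invented). The statistics are the easy part; the depth comes from experiment design, metric choice, and all the ways real-world data violates your assumptions.

```python
from statsmodels.stats.proportion import proportions_ztest

# Toy A/B readout: conversions and sample sizes for control vs. treatment.
conversions = [420, 468]
samples = [10_000, 10_000]

# Two-sided two-proportion z-test. In a real program you'd also
# pre-register the metric, check power, and guard against peeking.
stat, p_value = proportions_ztest(conversions, samples)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```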
Did we do the thing?
It’s typically hard to judge success. Imagine you built a product that helps people do their work faster. How do you know that you’ve done it? How do you set up all the measurement, metrics, and definitions needed to determine whether you’ve achieved the result or not?
Not only is it hard to judge the initial success, you also have to understand enough about humans to deal with unexpected side effects. Imagine if your tool was so effective at helping people work faster that… they wind up spending the same amount of time but completing more work. Is that a successful and desirable outcome?
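To make that side effect concrete, here’s a toy pandas sketch (columns and numbers invented) showing how “minutes per task” can drop while “tasks per user” rises, leaving total time roughly flat. Both views belong in the readout.

```python
import pandas as pd

# Hypothetical task logs before and after the launch.
tasks = pd.DataFrame({
    "period":  ["before"] * 4 + ["after"] * 6,
    "user_id": [1, 1, 2, 2, 1, 1, 1, 2, 2, 2],
    "minutes": [30, 28, 35, 33, 20, 22, 21, 24, 19, 23],
})

# Two readings of "faster": per-task time fell, but tasks per user rose,
# so total time spent may not have moved at all.
summary = tasks.groupby("period").agg(
    avg_minutes_per_task=("minutes", "mean"),
    tasks_per_user=("user_id", lambda s: len(s) / s.nunique()),
)
print(summary)
```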
Post-hoc analysis work
Sometimes there’s a need to understand our users and products in less-than-ideal situations.
For example, oh no, we introduced a bug and 50k users had horrible recommendations for a week. Who was affected, what was the actual effect, can we learn something from the whole situation?
Other times some external event happens like our product gets mentioned on TV and we suddenly have a ton of new users. Or sometimes we just botched an experiment and need to salvage things as best we can.
Working with text
While we spend a lot of time collecting data from people that’s numeric in nature, there are also lots of places where text is a major factor. Most surveys include at least one open-ended question for people to write in their thoughts. Customers often send in messages to customer support folk. Or there might be an online forum of users, or there’s a significant social media presence. Whatever the source, there’s usually so much text around that no human can handle all of it. Therefore, using tools to analyze and/or summarize large amounts of text can be important.
And don’t think that it’s as simple as pulling a tool off the shelf and applying it. Usually, due to the unique context where the data was collected, a lot of the well-studied methods fall apart.
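As an example of a crude first pass, here’s a TF-IDF sketch over a few invented open-ended survey responses. It’s deliberately simple, and the caveat above stands: short, domain-specific text like this usually needs far more adaptation than an off-the-shelf vectorizer provides.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# A handful of made-up open-ended survey responses.
responses = [
    "search never finds the item I want",
    "checkout is slow and the app crashes",
    "love the recommendations, found new things",
    "the search filters are confusing to use",
]

# TF-IDF surfaces distinctive terms per response; a starting point
# for summarizing text volumes no human could read end to end.
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(responses)
terms = vec.get_feature_names_out()
for i, row in enumerate(tfidf.toarray()):
    print(f"response {i}: top term = {terms[row.argmax()]!r}")
```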
The more social-science-y stuff
This class of stuff involves understanding more intangible aspects of user experience. I don’t normally see these topics covered in data science discussions.
What people think about something
How satisfied are users with our product? Do they think it’s fast or slow? Do they find it useful? Do they think our ad is disruptive? Do they think our recommendations are on point? Is this video “fun”? There are endless questions we can ask about users’ perceptions. More importantly, some of these perceptions can be correlated with important things like purchasing decisions. This creates an appetite to know what these important constructs are.
But there is a ton of work involved in figuring out a definition of an abstract concept, developing a method for measuring such a concept and validating it, and then finally correlating that concept with a desired end state.
It’s a lot of work that traces its methodological roots back to social sciences like psychology, sociology, and econometrics.
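For instance, one small piece of the validation work is checking that the survey items intended to measure a construct actually hang together. Here’s a sketch of Cronbach’s alpha, a common internal-consistency check, run on simulated Likert data (all numbers invented).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a multi-item scale.

    `items` is respondents x items, e.g. five 1-7 Likert questions
    all intended to measure the same construct, like "satisfaction".
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated data: 200 respondents, 5 items sharing a latent factor.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
items = np.clip(np.round(4 + latent + rng.normal(scale=0.8, size=(200, 5))), 1, 7)
print(f"alpha = {cronbach_alpha(items):.2f}")  # > 0.7 is a common rule of thumb
```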
Eye tracking stuff
Back in the early/mid 2000s, HCI was hot stuff in academia and eye tracking was on the cutting edge. Some of that has cooled over the years as methods for tracking usability metrics within software matured, but eye tracking is still relevant in certain areas. When studying how users interact with certain types of user interfaces, eye trackers are sometimes indispensable.
Imagine trying to study how air traffic controllers read their complex equipment as a non-expert without the help of some of those techniques. Humans are pretty unreliable at expressing what they’re looking at when in the middle of a complex task.
What’s important to users?
Every product gets feature requests from users. The problem is figuring out which feature to work on next given limited resources. Sometimes, people within the org have an idea or vision for what should be next, but other times (especially for fairly mature products) it’s not entirely clear which properties users place importance on.
There are methods like conjoint analysis that are used to help tease out user preferences that users themselves are unable to clearly articulate. Depending on what you find, these sorts of analyses can kick off a new round of high level strategy work that pushes the product forward.
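For a flavor of the mechanics, here’s a heavily simplified ratings-based conjoint sketch: dummy-code the attribute levels and fit a regression to recover part-worth utilities. The attributes, levels, and simulated preferences are all invented, and real conjoint studies (especially choice-based ones) involve considerably more design work.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy ratings-based conjoint: respondents rate product profiles that
# vary on two attributes. Attribute names and levels are illustrative.
rng = np.random.default_rng(2)
n = 300
profiles = pd.DataFrame({
    "price":    rng.choice(["$5", "$10", "$15"], size=n),
    "shipping": rng.choice(["standard", "express"], size=n),
})
# Simulated ratings with known preferences baked in.
rating = (
    5
    - 1.5 * (profiles["price"] == "$15")
    - 0.5 * (profiles["price"] == "$10")
    + 1.0 * (profiles["shipping"] == "express")
    + rng.normal(scale=1.0, size=n)
)

# Dummy-code attributes and fit OLS; coefficients are the part-worth
# utilities relative to each attribute's baseline (dropped) level.
X = sm.add_constant(pd.get_dummies(profiles, drop_first=True, dtype=float))
print(sm.OLS(rating, X).fit().params.round(2))
```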
How can anyone know all this stuff?!?
You can’t!
The breadth of relevant questions is so much broader than any single person can hope to master in a lifetime. What winds up happening is that you eventually specialize in a couple of skill sets while picking up bits and pieces of the others. So long as you’re aware that these other methods exist for certain questions, it’s not too hard to look up how to execute a specific method later, when the need actually arises.
For example, I obviously lean heavily into the logs data analysis/engineering side of things thanks to my data science background. This means I tend to liberally copy ideas and methods from my more survey/academic-oriented colleagues when I need to understand what users are thinking. If I were alone and didn’t have my colleagues as resources, then I’d either ask my network of researcher friends, or drag up my old social science methodology textbooks for ideas.
What’s actually interesting is that people don’t cleave cleanly into data science-y and social science-y groups. Many people in the field straddle both areas. Instead, their individual experiences and preferences wind up tuning a unique toolbox that they use to approach problems in certain ways, while another researcher will use entirely different tools for the same problem. In the end, the multiple approaches usually don’t wind up being an issue, because the “truth” (if such a thing exists) will triangulate across multiple methodologies.