Learning to push back
This weekend I'm fighting off the very unpleasant effects of undercooking some meat in haste. Heat your meats properly, people.
It's been a heck of a week, and it’s only MONDAY. The world continues to be a slowly unfolding disaster movie. The US in particular is a raging dumpster fire right now. Things are moving too fast for me to even process, and I’m liking almost none of it. I simply hope everyone is as safe as can be during this painful but apparently necessary part of the process for finding true justice for all. I also hope there’s a coming dawn at the end of this night.
Holiday decoration lights, somewhere in Tokyo, 2010
A common blind spot to a common situation
Recently I was speaking to a junior analyst about quant UXR work and we got around to talking about random situations that come up on the job. Among the topics discussed, a classic hypothetical case came up — what do you do when you have a Project Manager of a brand new product come to you and they want to run an A/B test to optimize some issues, despite there being only a few hundred people seeing the page a week.
I'm willing to bet money that ALL researchers and analysts who've worked for 1-3 years in the field can think of a similar scenario from their career. People come with preconceived solutions/methods and ask for help executing them, without understanding that their solution is inappropriate for their specific case.
In this hypothetical, an A/B test is usually not appropriate. For small changes, it would take weeks or months to get enough of a sample to find a statistical difference. For large changes, you might find a difference sooner, but still take weeks of time. If you try to find a way to juice the sample size (by say, running a giant ad campaign or something), you skew the sample and ruin the whole point.
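To make the "weeks or months" claim concrete, here's a quick back-of-the-envelope sketch using the standard two-proportion sample-size formula. The numbers (a 5% baseline conversion rate, hoping to detect a lift to 6%, roughly 300 visitors a week) are hypothetical, purely for illustration:

```python
import math
from statistics import NormalDist


def ab_sample_size(p_base, p_new, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    delta = p_new - p_base
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)


# Hypothetical page: 5% baseline conversion, hoping to detect a lift to 6%,
# with ~300 visitors a week split evenly across two variants.
n_per_arm = ab_sample_size(0.05, 0.06)
weeks_needed = math.ceil(2 * n_per_arm / 300)
print(n_per_arm, weeks_needed)  # thousands per arm, on the order of a year
```

With only a few hundred visitors a week, even a sizable relative lift needs roughly a year of traffic to detect, which is exactly why the request deserves push-back rather than execution.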
The best response to such a request is some form of push-back: change what is actually done so that it still accomplishes the PM's actual goal.
The PM doesn't (shouldn't) care that an A/B test was used, all they want is to get more signups, more revenue, better whatever. They want to make the page better so the business does better. You can do all sorts of things, like usability testing to observe users, or interview potential customers, or send out surveys to marketing targets, etc. It’s up to us experts in research methodologies to provide the best methods for the situation at hand.
We can, of course, be nice about it when we push back. Many people I’ve spoken to integrate a diplomatic “so what are you actually trying to accomplish?” when someone comes to them with a request that has a method already picked out. This is a pretty well-known issue in this space, and it seems most people will eventually wind up adopting similar strategies.
But the junior analyst I was talking to didn't have the same reaction. Their first instinct was to think through what would happen if they ran the requested A/B test, and realize that there would be sample size issues and that it would take too long. Then they started brainstorming ideas on how to increase the sample size in an effort to make the A/B test work out. It never really occurred to them that they should be saying "no no, let's do something else!". Once I brought up the possibility, they started thinking of ideas along those lines, but it was an obvious blind spot.
So how did I learn this?
I’m fairly sure that I learned how to do this via three major vectors.
Through error
First, the actual experience of just how painful it is to blindly agree to do such work, only to realize weeks later that things aren’t working out and we need to completely rethink what we’re doing because it’s hopeless to continue. As a junior analyst, I didn’t know better and deferred to the relatively more experienced PMs. Except the PMs also weren’t as experienced as they thought (maybe they came from a place where that method worked but moved to a new area), which led to disaster.
On the bright side, being the junior analyst and deferring to more experienced people meant that I didn't have to take much heat when things went wrong. But the company and everyone involved paid a cost of time and resources for my failure to stop the problem early on. Sure, there were structural failures around that allowed such a situation to happen, but it would've been entirely within my power to put a stop to it if I had the experience to do so.
With experience, mostly from making this mistake a bunch of times, I had a strong enough prior in my head to sound an alarm bell. That would then give me the motivation to put on the brakes before things went too far. Since I also had to come up with the "Plan B" alternate solution anyway, I got practice thinking through methods that worked under the constraints.
Obviously, this could be considered a form of "learning the hard way". The whole point of education, however, is to minimize this sort of slow learning where possible, so I can't exactly say "go do it this way".
From watching others
The second thing that contributed to learning how to handle these situations was being able to watch other people push back. I got to see managers question why something was done and get everyone to agree to alternate methods. I also got to see other researchers redirect conversations and tease out what people really wanted to achieve. That's where I picked up the more diplomatic ways of saying certain things.
This of course requires having either good managers, or other researchers to watch and learn from, as well as time to see examples come up and be dealt with. That’s something of a luxury, and involves being lucky enough to have such people in your life. I’ve had horrific managers before, so it really can be the luck of the draw.
Learning new tools
Finally, the third thing that helped me figure this out was getting exposed to more methods over time. Qualitative methods especially, but having more methods in my toolbox in general helped.
If we're facing a sample size issue, running a handful of usability tests or interviews might uncover enough issues to find useful product improvements without the need for statistical significance. Learning how to use fat confidence intervals as bounds for describing metrics also helps: if the lower bound of a 95% confidence interval says it takes 25 minutes to make a sale, we probably have a problem.
The more tools and hacks I became familiar with, the easier time I had suggesting alternate methods that could yield useful information. It's not enough to just know the names and the situations where a method is most commonly used. You ideally need to have some experience using it, or at least know enough to be able to specify the costs, timing, and potential outputs ahead of time.
So how should someone new to the field learn this stuff faster?
Ask someone for advice once in a while
For various personal reasons, I'm not good at asking people for things, but the data community is very open to interaction, so even a recluse like me highly recommends talking to more experienced folk when you hit situations that don't feel right. One great resource is this big list of helpful people:
https://www.datahelpers.org/
Every so often a #datarant event goes up, or there's a data-related meetup going on; it's good to get together with more experienced folk and talk shop.
Learn (of) more methods
It’s not easy to learn new methods of doing things, especially if you’re not sure what the name of the appropriate method is. While you can definitely ask a more experienced person, you can also search online for common methods used at various parts of a development process.
If the junior data analyst had realized that 3-5 moderated user testing sessions might uncover enough big issues for the PM to fix to improve the page, there wouldn't be a need to wait 4 weeks to get a "no difference between versions" result. Making that recommendation takes knowing about the method beforehand.
Starting out, you can look up lists of research methods and get overloaded by just how many there are. Then you can look up research methods in other fields, like UX, HCI, anthropology, econometrics, sociology, etc., to get even more overloaded. See what those methods are used for and what they get you. Just from scanning those lists, you can sometimes find a few that seem like they might work for your situation. Look into those a bit more, and if it sounds doable, try using it! Here's a quick overview of common methods used in quant-UXR work by the Nielsen Norman Group.
What's important to remember is you want to learn the easiest, most basic, most established methods first. The stuff that's in the "intro to research methods" grad 101 textbook. Older stuff is usually easier to understand and adapt to your existing situation. They became the old standards for a reason. These methods usually work in many situations, are fairly robust to broken assumptions, and are often rooted in familiar things like t-tests/ANOVA/ethnography/etc.
You want to actively avoid the bleeding-edge fancy stuff, because those methods were always developed to overcome specific problems found in older methods, problems which probably don't apply to your situation. Save them for when you've mastered the basics (if you ever even need to get that far).
And while you don't have to be an expert user of all these methods, you should have a working idea of how to execute them. I don't have great confidence in my UX interview moderation skills, but I'll reluctantly do them if no one else can. I also haven't written a survey in years and am likely to make mistakes, but it's still better than letting a completely random person with no experience at all do it.
It’ll take time to broaden your toolbox, but as you put more in there, it gets easier to push back because you’ll have some ideas for what might be usable.
There's no need to confront: ask and listen
While “stopping a bad experiment plan from happening” sounds like a confrontation in the making, it almost never should come to that.
The most important thing to remember is that people are usually bad at explaining their thinking processes and goals, so when they say “I want to run this experiment” it’s usually code for “I want to accomplish a thing, I heard experiments can accomplish this, so I want to do one of those”.
This is why it's very common to hear the sentence "So what are you actually trying to accomplish, what problem are you trying to solve?". It doesn't take any magical expertise to ask this question; just listen and work with them to articulate what they actually want to do.
Usually, that conversation will provide you with ideas for better ways to help them get an answer, even if only an approximate answer. So if you put nothing else in your early toolbox, put this conversation into it.
I’ve already spent a bunch of hours on this and the steady drumbeat of horrific news is just sapping my energy to write more on this topic tonight. Stay safe. Good night.