Last post of the year! This year I went from 52 posts a year to… even more and I’m surprised that everything is just working out. 2022 has been a wild ride, and all I can hope is that 2023 is simply less eventful for everyone. I could do with a few years of peaceful calm. In the meanwhile, I’m writing this when I really should finish packing for a brief trip with the family, so we’re gonna end this year with an (editorially) YOLO post! 🙃
I wish everyone the best 2023 that we can get!
Looking at my work email, I've noticed that I'm getting more cold emails from various data and research vendors lately. I guess I've been put on some kind of list. I normally just skip over them, but then one subject line caught my eye.
“Get ahead in 2023 with 100% verified insights”
Wat?!
I remember stopping for a moment and just trying to figure out what the sentence was trying to say. What the heck is a “verified” insight? Whatever it is, how can someone even claim 100% on it? Where are the error bars?!
Yes, I get that I'm completely overanalyzing marketing BS — “puffery” as the lawyers call it. But at the same time, I'm curious to what degree that ridiculous statement is the result of some non-technical copywriter typing nonsense, or whether there's some obnoxious half-truth of a badly named “method” involved. It certainly seemed like they were trying to get inattentive/uninformed readers to think that their insights are 100% correct, but what was their out? What weird linguistic loophole would get them out of promising the utterly impossible?
So I quietly visited the vendor's web site, which I'm obviously not going to link to. They're a market research firm based in NYC trying to pitch me on giving them market research projects. I assume they do UX research related work too? It appears that the “100% verified” schtick exists because one of the things they do is build panels of people for focus groups and other studies, and they “100% verify” that the people on the panels are relevant to the study. The implication being that if you're trying to sell to people who read data newsletters, they build a panel that is supposed to be made up entirely of people like you and me. I guess it's a jab at how panels are sometimes formed by intercepting people on the street, or recruited from Craigslist by people cutting corners on recruiting requirements.
In addition to the wildly misleading framing of “100% verified” in the email, it annoys me because in any decent research bidding/proposal process, the acceptable types of people to be recruited would already be specified in the contract. If I want to research 50 people who work in the fast fashion purchasing industry, I'd damn well get that put in the contract. There shouldn't be a big difference between major research vendors on this basic aspect.
I guess, if that's all you got to differentiate yourself from what other market research companies do, then, sure?
Note: If anyone happens to have interesting/bewildering marketing copy from places offering to do research/analytics work for hire, send them my way! I’m very curious how ridiculous it gets.
Appropriate-level Bragging vs Complete BS
The marketing example above is silly in its own right, but since we're at the end of the year, a lot of people have been going through their annual review process at work. In fact, the two final talks of Normconf, “Data-driven promotions” and “Don’t do invisible work”, both touched upon the topic of keeping track of your work and achievements over the course of a year, because if no one, not even you, remembers the work that you did when performance review and promo time roll around, you’re setting yourself up for disappointment.
Central to both of those talks, and a generally good idea for solving this problem, is to create a form of “brag document”, a term coined (I think? most people think so) by Julia Evans. The basic idea is simple: keep a document somewhere where you regularly (often daily or weekly) write down things that you did or achieved. That way, when review season comes along and you need to list out all the work and achievements you’ve made over the years, you have a record to reference and objectively base your answer on.
At Google, most of us were more or less trained to keep some form of brag doc early on in order to ease the rather painful process of going through the Perf review system. It was a MASSIVE undertaking involving listing our significant achievements over the past 6 or 12 months, complete with links to supporting artifacts. It gets EVEN MORE ridiculously intense if you’re submitting a giant portfolio packet of work to go up for promotion.
I regularly tack completed work items onto my doc at the end of the week, and this year I was completely floored that some projects I felt had been done years ago were actually completed in February. Memory failures and salience bias are powerful things.
But despite years of training and experience with brag docs and summarizing all those work items into a compact list of achievements with demonstrated impact and links to work artifacts and such, there’s something I still massively suck at — making it sound good.
How bad am I at that? Here’s an example:
[Very lightly dramatized for effect]
Me: Walked through a product, found a bunch of UX issues and presented findings to the PM team.
Manager: Uh, you learned Kubernetes from scratch, found stuff, showed the whole PM team, and like half of them unmuted to get more details out of you.
Me: Walked through k8s, found a bunch of UX issues, presented findings to the PM team and they were really interested in the feedback?
Manager: ... [facepalming ensues] ...
Having been a researcher and analyst my entire career, and generally a very “See the flaws in everything to improve next time” person, I’m TERRIBLE at putting focus on the positive aspects of stuff. We’re not even talking about getting to the level of “borderline deceptive marketing BS” here — merely giving “the list of things that objectively went well” is already a big stretch.
It’s something that I’m actively, albeit very slowly, working on improving. I don’t expect I’ll be very good at it for a very long time.
As researchers and analysts, we’ve all learned to be very careful with our results. We’re very clear about the caveats and assumptions, and about which situations our findings will and will not apply to. It’s part of the nature of this work. It also ingrains bad habits that come back to bite us come performance review season. Because someone out there doesn’t even have to lie or BS to look better than the rest — much of the field is unconsciously depressing their own scores.
We need help calibrating our achievements
Amongst all the talk about brag docs and listing out achievements, this was one point that was sorta mentioned but not particularly emphasized. Everyone needs help calibrating their achievement scale to other people’s scales. I might think my work is boring and uninteresting because I pulled some simple data in SQL, threw it into Excel, and made a chart about some user behavior. That same chart could have been a massive eye-opening insight that changes the whole company’s 10-year strategy. But unless I knew that was the end result, and how it was perceived by the stakeholders downstream, I would likely rate it as a quick and uninteresting work item.
Our own perceptions of the value of work have nothing to do with anyone else’s perceptions, and any performance review situation needs to reflect that. I frankly get confused when people sometimes tell me I did cutting-edge work on a thing, simply because I had a problem I hadn’t seen before and merely took some reasonable steps to solve it. It just so happened that no one (publicly) had talked about working on it before.
How do you get this calibration? Well, it sucks, but you have to ask people. Ask them what they used the data you gave them for. Did they like it, and do they need more work in that area? Who’s using it? Then ask managers and skip-level managers how to accurately describe the work in a way that realistically reflects what it was for.
While most of us want to avoid making unsupported BS claims about our work, I don’t think it’s possible to find a good balance without some outside calibration. It’s just like how it’s not possible to tell how accurate your ruler is without comparing it to another one.
If you’re looking to (re)connect with Data Twitter
Please reference these crowdsourced spreadsheets and feel free to contribute to them. Given the recent scare over the weekend, it’s good to have a backup plan.
A list of data hangouts - Mostly Slack and Discord servers where data folk hang out
A crowdsourced list of Mastodon accounts of Data Twitter folk - a big list, contributed by the community, of data folk who are now on Mastodon, which you can import and auto-follow to reboot your timeline
Standing offer: If you created something and would like me to review or share it w/ the data community — my mailbox and Twitter DMs are open.
New thing: I’m also considering occasionally hosting guest posts written by other people. If you’re interested in writing a data-related post, whether to show off work, share an experience, or because you need help coming up with a topic, please contact me.
About this newsletter
I’m Randy Au, Quantitative UX researcher, former data analyst, and general-purpose data and tech nerd. Counting Stuff is a weekly newsletter about the less-than-sexy aspects of data science, UX research and tech. With some excursions into other fun topics.
All photos/drawings used are taken/created by Randy unless otherwise credited.
randyau.com — Curated archive of evergreen posts.
Approaching Significance Discord — where data folk hang out and can talk a bit about data, and a bit about everything else. Randy moderates the Discord.
Support the newsletter:
This newsletter is free and will continue to stay that way every Tuesday, so share it with your friends without guilt! But if you like the content and want to send some love, here are some options:
Share posts with other people
Consider a paid Substack subscription or a small one-time Ko-fi donation
Tweet me with comments and questions
Get merch! If shirts and stickers are more your style — there’s a survivorship bias shirt!