The 10x data scientist is luckily not "a thing"; let's all work to keep it that way

We seriously don't need that sort of nonsense

This weekend, Medium's Byzantine article recommendation algorithm decided to recommend me an article about being a "10x data scientist". My initial reaction was "what in the absolute f---?". I'm not linking to it since that would be rewarding the bad behavior of a self-aggrandizing twit (it takes a whole new level of hubris to essentially declare yourself to be a 10x anything on the open internet). Search and use the Google cache if you must read it.

I later realized that the article was published in October of 2019, thankfully not super recent but still less than 5 months from this writing. It’s also only months after the initial viral spike of “10x engineer” in July. Searching for the phrase on Google actually turned up a relatively small number of posts that use the term, scattered throughout the past 3 years or so. That was something of a relief. It hasn't metastasized in the community at large; it's just some blogger trying to score cheap hits.

The whole 10x engineer myth, at least as understood in the 2019 zeitgeist (the meaning has evolved over time), is a crystallization of the toxic stereotypes that can exist within software engineering, and we need to keep that crap out of data science. If stupid myths go viral and take hold outside of the field, among prospective students, recruiters, and hiring staff, we're all going to suffer for it.

For those who aren't as familiar with this 10x thing, we'll get into the history and detail later in this article. The gist is that some very old studies observed that some programmers performed orders of magnitude better than others on measures like execution speed of the code written, lines of code produced, etc. That observation was then simplified in later studies to roughly "10-to-1": high performers will produce 10x the amount of code of the worst performers.

By itself, the observation that there are higher and lower performers within a group of people isn’t particularly controversial; power law distributions are a dime a dozen. What sent the whole thing viral was how the concept got associated with the caricature of the "lone asshole programmer who is better than everyone else". To make things worse, it was declared by some that these “10x engineers” were desirable hires and should be actively sought out. Things went crazy from there.


The core concept of deliberately seeking out toxic personalities because they supposedly code better, while everyone around them pays the price, is mind-boggling enough. It makes EVEN LESS sense in the context of data science, because the field is so ridiculously cross-functional. A typical data person works directly with people all across an organization, often multiple levels above and below. You definitely do not want someone with that kind of broad reach to be a toxic individual. It's already an atrocity to inflict such a person on a relatively isolated dev team, where you could at least theoretically seal them away on a team of one. But to put them in close contact with an entire org? Might as well toss a bomb into the open office plan.

To be clear, the base truth of the original 10x concept, that there are large individual differences in performance between engineers (and data scientists), isn’t really disputed. Some people will inevitably perform an order of magnitude better than the worst performers on any metric you care to measure.

What the internet latched onto and rejected was the idea that everyone else in an organization should put up with any toxic crap that comes along with these more-productive individuals simply on the basis that they’re high-producing.

Hint: there are highly productive individuals who aren’t jerks; go hire those people.

The data community is awesome, and it's our shared responsibility to keep it that way

The data community has been one of the most inclusive and positive communities I know of. We have people participating from every walk of life, from PhDs to high school students. The arts, social sciences, and the hard sciences are all represented. There are hard-core engineers, hard-core researchers, jacks/jills-of-all-trades, and casual weekenders and students, all interested in making sense of the world with data. Just take a look at all the great people on datahelpers.org.

The cluster of data folk on Twitter is generally amazing and open. You can easily strike up conversations with people practically anywhere in the field. There's plenty of good-natured joking around, links and discussions of papers and topics.

There are also wonderful broader communities. There's the gigantic community built around R, spanning user groups, meetups, and conferences. I don't even know how to use R to any real degree and still think highly of the amazing people in that community. Remember that R-Ladies was instrumental in bringing to light the giant DataCamp scandal in 2019. It was a credit to the community that we were united and loud enough to stand up and say we weren't going to accept that behavior.

That's not to say everything's perfect, but you'd be very hard pressed to find a better group in tech. There are still plenty of gatekeeping jerks who seem more interested in declaring what "real" data science is than in being helpful to anyone. I'm also certain that somewhere out in the big, ugly world, sketchy harassment is happening right now that doesn't have whatever unique circumstances allowed the DataCamp scandal to go viral.

It's up to us to be loud and clear about what is unacceptable. Social norms are one of the best ways for a society to maintain steady control and moderation of itself. We've got a good thing going and can't take it for granted. This stuff falls apart fast.

Now, onto poking at the whole 10x thing

References to the 10x engineer concept generally attribute the origin to a 1968 paper by Sackman, Erikson, and Grant, "Exploratory experimental studies comparing online and offline programming performance" (Communications of the ACM, January 1968). It's a fairly short paper, and you have to wrap your brain around the historical context. The main motivation given in the intro was a then-hot debate over whether traditional batch systems or time-sharing systems were better. Would programmers become lazy if they had constant access to a computer and could debug their code iteratively on the more expensive hardware that enabled online access? Shouldn't they reason out their code by hand on paper before submitting their programs?

“KIDS THESE DAYS!” —Eventually Everyone

The paper describes how a time-sharing system works, in terms that sound utterly mundane to users of more modern computers:

User programs are stored on magnetic tape or in disk file memory. When a user wishes to operate his program, he goes to one of several teletype consoles; these consoles are direct input/output devices to the Q-32. He instructs the computer, through the teletype, to load and activate his program. The system then loads the program either from the disk file or from magnetic tape into active storage drum memory. All currently operating programs are stored in drum memory and are transferred, one at a time, in turn, into core memory for processing. Under TSS [Time-Sharing System] scheduling control, each program is processed for a short amount of time (usually a fraction of a second) and is then replaced in active storage to await its next turn. A program is transferred to core only if it requires processing; otherwise it is passed up for that turn. Thus, a user may spend as much time as he needs thinking about what to do next without wasting the computational time of the machine.

The experiment was a 2x2 design. Subjects (12 experienced programmers) were asked to write and debug two programs: an "Algebra" problem, where they wrote a program to solve algebraic equations entered via the teletype, and a "Maze" problem, where they wrote a program that printed the single route through a 20x20 grid maze. All subjects were "referred to a published source for suggested workable logic to solve the algebra problem."

The problems were given under an "online" condition for debugging, where subjects used the TSS with full access to everything on the system, and a simulated "offline" condition, which used the same TSS but required subjects to submit their work in batch and wait 2 hours for results. The study noted that 2 hours was shorter than a typical batch turnaround, but long enough that most subjects complained about the delay. Six subjects were given the Algebra problem under online conditions and then the Maze problem offline; the remaining six were given the Maze problem online and the Algebra problem offline.

The results showed that time-sharing reduced the man-hours spent debugging. It's interesting to note that the Algebra problem had a mean of 34.5 debugging hours (SD 30.5) online versus a mean of 50.2 hours (SD 58.9) offline. An average of 35 man-hours to work on a single tough problem! Also note the gargantuan standard deviations.

Here’s Table III from that paper, the one that planted the seed of the 10x nonsense:

It’s the observation that the difference between the best programmer and the worst could be massive: as much as 28-to-1 in terms of hours spent debugging the Algebra problem. The researchers also found similarly large individual differences among 9 “trainee” programmers given a similar experiment with simpler problems.
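To make the arithmetic concrete, here's a minimal sketch of how such a best-to-worst ratio is computed. The debug times below are made up for illustration; they are not the paper's actual per-subject numbers, which I don't have in front of me.

```python
# Hypothetical per-subject debug times in hours -- illustrative only,
# not the actual Sackman, Erikson, and Grant data.
debug_hours = [6, 9, 14, 25, 40, 170]

# "28-to-1" just means the slowest subject took ~28x as long as the fastest.
ratio = max(debug_hours) / min(debug_hours)
print(f"best-to-worst ratio: {ratio:.1f} to 1")  # prints "best-to-worst ratio: 28.3 to 1"
```

Note how a single extreme straggler (170 hours here) dominates the ratio, which is also why the standard deviations in the paper dwarf the means.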

Then, in his 1996 book “Rapid Development”, Steve McConnell says the following:

“Since the late 1960s, study after study has found that the productivity of individual programmers with similar levels of experience does, indeed vary by a factor of at least 10 to 1 (Sackman, Erikson, and Grant 1968, Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Valett and McGarry 1989).”

The book positively hammers on the 10-to-1 idea: its entire reason for existing is to teach readers (managers of large software projects) how to move their teams from the 1x end of the spectrum toward the 10x end.

The most recent paper cited in that paragraph, Valett and McGarry (1989), looked at tons of project data collected by NASA engineers across various projects and showed (roughly) a 10x difference between the worst and best performers. But note that the metric is pretty sketchy by today’s standards: source lines of code (SLoC) per hour. So, to sum up, some engineers could churn out 10x the number of lines of code as other engineers. I’ll have to look deeper into the history of the SLoC metric in a future article because it’s too big to cover here.

The history of the whole 10x thing has more twists and turns than I can even comprehend, as researchers published paper after paper over 30-odd years. You can check here, and here, for some more overviews and citations to chase; I used them as my jumping-off points.

From dry academic/management concept to ridiculous meme

Every so often since 2010 (according to Google Trends), the phrase “10x engineer” would pop up, but it was never a very popular term. I guess it floated around quietly in the software management literature and didn’t make its way outside much.

Knowyourmeme links to a number of instances where the 10x engineer concept had been written about or mentioned. Hell, here’s an article I found saying the concept of the 10x engineer is (happily) going away… published in 2014.

Until one fateful day: July 11, 2019.

An investor, Shekhar Kirani, tweeted that startups should hire 10x engineers to increase their odds of success… How do you find this 10x engineer? Apparently you look for some kind of ’90s cartoon sketch of an antisocial tech savant who: hates meetings, works strange hours, uses black screen backgrounds (?!), knows every line of code in production, is full-stack, can bang out code in hours, rarely uses documentation, learns new frameworks/languages quickly, is a poor mentor, doesn’t hack stuff, and rarely switches jobs… It’s almost insane enough that you’d think it was satire, except there aren’t any signs that the tweet wasn’t made in all seriousness.

The sarcastic and derisive replies came fast and furious. Then the fire kept burning as Shekhar doubled down:

Bloggers of course couldn’t resist the controversy and started writing their own takes on 10x engineers, some digging into the history as well. Pretty soon it had spread far and wide, finding its way onto Hacker News and beyond.

The situation has cooled off now, settling into a thoroughly ridiculed meme. But the concept still exists out in the broader world. People who are less experienced and informed, people not in on the meme, still ask questions about it. I searched Quora for posts in the past month and there are still a couple of 10x engineer questions:

Data science has a chance to learn from this situation. We really should listen.