I surprisingly don’t have any useful photos of Ikea… so my own hilarious creations must speak for themselves. For those wondering, this was testing the side leg frames w/ a scrap center.
It’s 11:45pm on a Friday, and much of my body aches because I slapped together 2 narrow Billy bookshelves (which are ALWAYS so much heavier than they look) and 2 small 2x2 Kallax storage shelves in the span of about 2 hours… Then came a ton of cleanup and moving of stuff around in a fit of reorganization. All this because the growing 1yo kid likes books and Mom has obliged by ordering approximately 60kg of kids’ books from China.
This of course means that the assembly of Ikea furniture is dominating my tired thinking processes right now. There are plenty of jokes on the internet about how Ikea is the place where relationships go to die, and the frustration of flat-pack furniture is a time-honored meme that I’m pretty sure predates the internet.
Luckily, I don’t seem to have many issues assembling their stuff, except for the heavy lifting part. So I started pondering the UX research that must have gone into creating those famous line-drawing instructions. The amount of money and research that has gone into those instructions over the years must be staggering, and yet a seemingly large group of people hate them with a passion.
Searching around on the internet, I found that a ton of UX articles have examined various aspects of Ikea, but very few involve people from Ikea actually talking. I found this one interview here. (I find it fascinating that the instruction designer role is called a “Communicator”.) The UX and industrial design worlds are obviously fascinated with the most meticulously engineered collection of dowels, screws, chipboard and laminate on the planet. If Ikea hadn’t done so much work on the instructions and design, your assembly nightmare probably would have been much worse.
The Ikea Effect
In 2011, a study was published that established what the authors themselves called the “Ikea Effect”. It’s actually a really clean, readable paper, so I highly recommend flipping through it.
The gist of the findings: students were willing to bid more money for a piece of origami they folded themselves, or an Ikea box they assembled, than similar students would bid when simply presented with the completed item. In the case of origami, students actually bid about as much (20-30ish cents, give or take) for their own creations as other students would bid for origami pieces made by skilled, expert folders.
The paper goes through a bunch of other experiments to pin down the actual effect. They specifically checked to see if this effect appeared only for people who like DIY (because people usually self-select into buying unassembled furniture) and found the effect holds even for people who said they’re not very interested in DIY.
They also found that building to successful completion is necessary to see the effect. Failure to complete, either because the task was too difficult, or because they were stopped by an experimenter mere steps before finishing, would NOT cause people to value the object more.
Since this paper got released and essentially went viral, people have been trying to take advantage of it in various ways. Behavioral economics people added it into their list of various irrational cognitive biases that people exhibit. People who think about workplaces and organizations started applying it to how to structure work.
Meanwhile, UX designers and such try to apply the Ikea effect by getting users to do some work up front before making a purchasing decision. You may see it when you’re asked to meticulously put in all the details of your vacation itinerary before being asked to pull out the credit card to book the entire trip. The idea is that by the time you’re done, the labor you did during setup has made you invested in the completed trip, so you’re more likely to pay for it.
I suspect this sort of effect had been hinted at before in various sales strategies, but I’m not familiar enough to know how to prove it =\
So what’s this nonsense about dashboards in the post title?
Over the course of my career, I’ve been asked to make a lot of dashboards. Especially during my earlier analyst days, though even now I’ll make a fair number when the situation calls for it. But if you make enough dashboards, you soon realize that a horrible fate awaits many of them: they get ignored.
I was once asked at work to find all the metrics dashboards that existed, just to get a sense of what the company was doing with metrics and whether we should consolidate anything. I wound up finding over 50 different dashboards hanging around in various places. At the same time, a large data warehouse schema update had recently gone through that caused breaking changes in a large number of those dashboards… Only a handful had been updated back to a functional state.
So I started wondering: what causes certain dashboards to be used over and over, while others fall into obscurity within a matter of days?
My cold analytic brain would say that it’s merely a function of utility. People have no reason to return to a dashboard if it’s not useful to them, so the ones that fade into obscurity are the ones that hold no real relevant information. Or the dashboard does hold useful information, but it’s hard to use, and people don’t want to spend the energy unpacking it.
But humans aren’t purely rational beings, so all this talk about the Ikea effect has started to make me wonder. What other factors are at play here?
As someone who actually has the skills and knowledge to make “official” dashboards whenever I need to (unlike many colleagues), I know that my preference for my own dashboards stems primarily from the memory of having made them. I honestly don’t want to make any dashboards at all, because I don’t want to maintain them. I sometimes wind up making duplicate dashboards because the company is so huge that I’m not aware of all the dashboards that already exist.
Many of the dashboards I’ve made that stayed in use for more than a week seem to share (in my confirmation-biased memory) an interesting property: I built them in conjunction with the end user, who would often be sitting next to me at my desk. We would go back and forth over multiple iterations and feedback cycles. Eventually we’d create something that got used for a fair amount of time, often until whatever we were measuring stopped being relevant.
There are a couple of potential explanations I can think of for why those particular dashboards wound up being used more often than others.
First and most obvious: since there had been feedback and multiple iterations, the dashboard wound up fitting the user’s needs better. Things that were unclear got fixed. Useless things were removed. Items were shuffled around until they made narrative sense to the user. The user had time to become familiar with the dashboard, learn the details involved, and learn how to interpret the numbers. They also started forming a habit by repeatedly visiting the dashboard.
One could also argue that dashboards that were created in a semi-vacuum, that didn’t involve the end user so much, were ignored because the end user never expended the time and energy figuring out how to use the dashboard because it didn’t match their mental models.
But in addition to the feedback loop itself, could the toil of having to work with me to build the new dashboard have caused a little bit of the Ikea effect? It would be really interesting if this were true. The question is whether there is a way to tease the two effects apart.
I suspect that any Ikea effect that does exist for self-serve data analysis is fairly small, but it could have interesting implications for “democratizing data science” efforts, including building out self-service analysis and dashboarding tools like Looker or Tableau. If anything, such large-scale BI programs would probably be the best place to test this hypothesis, since they’d have data on whether people tend to use dashboards they made themselves even when viable alternatives exist. Detecting “viable alternatives” is left as an exercise for the reader ;).
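As a rough illustration of what such a test could start with: given a view log and dashboard ownership data (all names and fields below are hypothetical, not from any real BI tool’s API), you could compute, per user, what fraction of their dashboard views land on dashboards they created themselves. This is only a sketch of the descriptive first step; a real analysis would also have to establish that a viable alternative dashboard actually existed for each view.

```python
from collections import defaultdict

# Hypothetical view log: (viewer, dashboard_id) events,
# plus a mapping from each dashboard to its creator.
views = [
    ("alice", "d1"), ("alice", "d1"), ("alice", "d2"),
    ("bob", "d2"), ("bob", "d2"), ("bob", "d2"), ("bob", "d1"),
]
creator = {"d1": "alice", "d2": "bob"}

def self_view_share(views, creator):
    """Fraction of each user's dashboard views that land on
    dashboards they created themselves."""
    own = defaultdict(int)
    total = defaultdict(int)
    for user, dash in views:
        total[user] += 1
        if creator.get(dash) == user:
            own[user] += 1
    return {u: own[u] / total[u] for u in total}

print(self_view_share(views, creator))
# alice: 2 of her 3 views are on her own d1; bob: 3 of his 4 are on his own d2
```

A consistently high self-view share across users, even where overlapping dashboards exist, would be weak evidence in the direction of the effect; it obviously can’t separate attachment from the fitting-their-needs-better explanation above.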
While the effect can be a positive force if people keep returning to a dashboard to use its data, it’s also possible that the effect can cause people to over-value their own dashboards, which may contain mistakes, over other “expertly created” dashboards. This is one of the fears of broad data access projects, where an inexperienced analyst may use the data to draw the wrong conclusions. It’s one thing if people are willing to switch to using dashboards created by data experts, but another entirely if they keep using broken ones because they’re attached to them.
As with many psychological effects found in lab situations, it’s very hard to tease this one out from all the simultaneous effects that exist in the real world. It’s probably best to keep it on the list of cognitive biases we deal with every day, like confirmation bias, and try to take it into account when you work.
Though, if anyone out there manages to come up with a more robust experiment that reconfirms it for dashboards, I’d be super interested =)