By Daniel Schwarz

A list of cognitive biases to watch out for in user research

Understanding the psychology behind our decision making can lead to more informed choices. Here are the top cognitive biases to avoid in user research.


Illustrations by {name}


Cognitive biases are thought patterns that cause errors in judgment and can lead us to behave irrationally. Being aware of these biases and how they work can help us as designers create better user experiences.


However, designers can be victims of bias too, especially when it comes to user research. This article will take a look at a few biases that might cause designers to make bad design decisions.


Before we dive in, let’s first mention the status quo bias. It may not be the most important bias, but failing to recognize it can hinder all our other efforts.


The status quo bias is a reluctance to explore improvement or change. It can occur when the current state of affairs doesn’t seem all that bad, but also when we’re demotivated by a design brief or a certain working environment, or are simply experiencing design burnout.


If you’re ready to uncover your own cognitive biases (yes, we all have them!) and make your research more reliable, then read on.



Bandwagon effect


The bandwagon effect is the tendency to do or believe something simply because others do.


To avoid falling victim to this effect, resist voicing agreement with stakeholders out loud. Instead, let research findings do the talking and allow stakeholders to express their stance via anonymous voting.



The bandwagon effect: A single person with arrows pointing in the direction of a large group of people.


Groupthink


Groupthink is a similar but more dangerous effect where the desire for harmony results in the suppression of dissenting viewpoints. The result is often an awkward mishmash of viewpoints that doesn’t actually represent anyone’s honest, independent opinion.



Ambiguity effect


The ambiguity effect is the tendency to avoid options whose outcomes feel unknown.


It’s natural to favor investing our time and resources where we believe the risk will be worth the reward. When the likelihood of a reward is more ambiguous, we become more hesitant to investigate.


However, in design, the majority of outcomes are unknown simply because we’re not our users. This means that almost every decision we make involves taking a leap into the unknown. It’s scary and there can be a lot of dead-ends, but luckily there are things we can do to lower the risks when deciding which opportunities to explore.


  1. Stakeholder voting: The more votes an option receives, the more confidence we can have in exploring it.

  2. Strategic research: Start off with research methods that favor speed over accuracy (for example, test low-risk, low-fidelity mockups before testing high-fidelity ones).



The ambiguity effect: three amorphous shapes labeled "ambiguity" and one perfect circle labeled "clarity"
The ambiguity effect is the tendency to avoid the unknown and prefer the familiar.


Curse of knowledge


The curse of knowledge is when those who are better informed find it difficult to empathize with the less informed.


A common scenario relates to user experience. As designers, we often assume that our experience with the product mirrors our users’ experience, when in fact we’ve spent hours and hours inadvertently mastering it. What seems well-executed to us might be unfamiliar to users at first.


In order to work around the curse of knowledge and to truly validate a solution, we must take our own personal experience with a grain of salt and listen, empathize, and test.



Hard–easy effect


The hard-easy effect is when we overestimate the user’s ability to carry out difficult tasks and underestimate their ability to carry out simple tasks. This relates to the curse of knowledge, where we’re at a disadvantage by being the masters of our own designs.


Similarly but more specifically, the planning fallacy is the tendency to underestimate the time required to complete a task.


We can overcome these cognitive biases with a type of UX research called performance testing, where users are assigned specific tasks and the results yield a mix of quantitative and qualitative insights. Maze and Useberry are two useful tools for conducting performance testing.
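To make the quantitative side concrete, here’s a minimal Python sketch using entirely hypothetical session data. It shows the kind of summary metrics a performance test yields (tools like Maze and Useberry compute these for you):

```python
from statistics import median

# Hypothetical results from a performance test.
# Each record: (participant, completed the task?, seconds on task)
sessions = [
    ("p1", True, 48), ("p2", True, 62), ("p3", False, 120),
    ("p4", True, 55), ("p5", False, 97),
]

# Quantitative insights: completion rate and typical time on task.
completion_rate = sum(completed for _, completed, _ in sessions) / len(sessions)
success_times = [secs for _, completed, secs in sessions if completed]

print(f"Task completion rate: {completion_rate:.0%}")                # 60%
print(f"Median time on task (successes): {median(success_times)}s")  # 55s
```

The qualitative side (where and why participants struggled) comes from observing the sessions themselves.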



Additional reasons to conduct UX research:


  • Overconfidence effect: When we feel highly confident in our answers, we actually tend to be wrong a lot of the time. Logic can be deceptive and dangerous!

  • Reactive devaluation: There’s always the risk of devaluing ideas because of how we feel about the person who came up with them.


Adapting our mindset for research means being objectively mindful about the things we learn, and encouraging open, honest, free-thinking dialogue rather than steering it.


Next, let’s discuss biases to avoid during research.



Cognitive biases to avoid during research



Insensitivity to sample size


Insensitivity to sample size causes us to underestimate variations in small samples.

Over the years we’ve become accustomed to the notion that the minimum sample size for research is five, and that “ten is better.”


While it’s true that a larger sample size means more reliable data, we also have to strike a balance between reliable data and budget constraints.


It takes a mixture of intuition and experience to know how many user testers to recruit on a test-by-test basis, but considering that we tend to underestimate variations in small samples, it’s always worth recruiting a few more testers just in case.
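To see why small samples mislead, here’s a minimal Python sketch assuming a hypothetical “true” task success rate of 70%. It simulates many studies and compares how widely the observed success rate swings with five participants versus fifty:

```python
import random

random.seed(42)  # for reproducibility
TRUE_SUCCESS_RATE = 0.70  # assumed "real" task success rate

def observed_rate(sample_size: int) -> float:
    """Simulate one study: the fraction of participants who succeed."""
    successes = sum(random.random() < TRUE_SUCCESS_RATE
                    for _ in range(sample_size))
    return successes / sample_size

# Run 1,000 simulated studies at each sample size.
for n in (5, 50):
    rates = [observed_rate(n) for _ in range(1000)]
    print(f"n={n:>2}: observed success rates ranged from "
          f"{min(rates):.0%} to {max(rates):.0%}")
```

With five participants, the observed rate can easily land anywhere from 0% to 100%; with fifty, it stays close to the true 70%.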



Insensitivity to sample size: Graphs marking the number of users being researched, and an enlarged selection of just a few users within that group
Insensitivity to sample size is the tendency to underestimate variations in small sample sizes.


Observer-expectancy effect


The observer-expectancy effect is when our own expectations of an experiment influence the actions of recruited research subjects.

  • Example: “We think you’ll do this and that.”

  • Outcome: With this expectation in mind, the recruit is then (subconsciously) influenced to carry out the task in this way.


If we’re biased toward our own beliefs, then we actively look for information that confirms them while disregarding contradictory information (also known as confirmation bias). In addition, we run the risk of unintentionally influencing the results.


To avoid falling victim to these effects, refrain from projecting your expectations onto research subjects.



Anchoring/focusing effect


Anchoring is the tendency to over-focus on one piece of information (usually the first we come across) and frame everything we learn afterward in relation to it. This can cause us to misconstrue the value of all the information we gather.


When we over-focus on any snippet of information, we run the risk of failing to understand the information holistically.


Although we should be mindful of the things that we learn while conducting user interviews so that we can ask the right follow-up questions, we should also avoid overthinking the feedback until we’re ready to synthesize it objectively.


A nice way to avoid falling down this rabbit hole is to set an agenda: script the interview beforehand and follow it loosely.



Gambler’s fallacy


Gambler’s fallacy is the dangerous mistake of thinking that past results affect the probability of future results. For instance, if five out of five research studies lead to a unanimous result, that doesn’t mean that five additional studies won’t completely turn the tables.
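A quick worked example (with hypothetical numbers) shows why a unanimous streak proves less than it seems: if 70% of users genuinely prefer a design, five participants agreeing in a row happens about one time in six, and the next participant is still 30% likely to disagree.

```python
# Hypothetical numbers: suppose 70% of users genuinely prefer option A.
p = 0.70

# Probability that five independent participants in a row all prefer A:
print(f"P(5/5 unanimous) = {p ** 5:.1%}")  # ~16.8%, about one run in six

# Each participant is independent, so even after a unanimous streak
# the chance that the *next* participant disagrees is still 1 - p:
print(f"P(next disagrees) = {1 - p:.0%}")  # 30%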


Never quit a research study prematurely. If you recruit ten test subjects, make sure to test all ten of them.



Compassion fade


When dealing with user feedback, large amounts of anonymous feedback can seem less important than small amounts of feedback from identifiable users. This is known as compassion fade.


While it’s important to empathize with identifiable users, all data should be weighed objectively. Otherwise, we might end up spending disproportionate amounts of time solving issues that affect only a few individuals while neglecting those that affect many more people.



Law of the instrument


The saying goes, “If all you have is a hammer, everything looks like a nail.”


The law of the instrument refers to the bias towards familiar tools or methods, even if they aren’t right for the task at hand.