9 biases to keep in mind when conducting user research

Loïs Gauthier
October 25, 2022

When conducting research, we must always remember that we are human beings studying other humans’ attitudes and reactions. In this context, our biggest enemy is ourselves: our biases can negatively impact our results. In addition to our own biases, we have to study our testers’ to understand their decision-making process and derive better insights.

Becoming aware of our own biases is one of the first steps we can take to reduce their impact on our research.

To build your ability to recognize them, we will first identify the most common and share examples of how they can impact your work in the context of UX research.

Then, we will provide some tips on how to effectively reduce bias in your work and enhance your ability to overcome it.

Dozens of different biases have been studied. In this article, we share the most common ones and those that can impact your UX research the most.

For each bias, we provide a definition, an explanation, and a daily-life example followed by two UX examples.

Biases that can negatively impact your UX research

Confirmation bias

Probably one of the most common and dangerous biases, confirmation bias is our tendency to remember, favor, and interpret information that confirms our existing beliefs.

It means: we like to be right and our brain will trick us into thinking we are.

Also called cherry picking, this is what pushes you to read and like articles that showcase the success of the candidate you voted for in the last election, and to ignore articles portraying your favorite singer as a businesswoman with no artistic vision.

Examples:

  1. You are testing a new homepage for your company’s website, and you worked really hard on a new hamburger menu that you like very much. You are convinced it is what your users need, and you read an article that said it worked great for many websites. When testing the menu with your testers, you are receptive to users saying it felt easy to navigate. 3 users had no particular comments about it but needed your assistance to navigate to their personal account through the menu. You decide this is because they are the least digitally native users in your pool and validate the current design based on these tests. However, once the feature is live, you see that usage of personal accounts is going down - and you realize, a little late, that you paid more attention to feedback that agreed with your work than to feedback challenging it.
  2. You work for a nursery product website and are conducting an exploratory study with young moms to identify their main pain points in their search for baby formula. You are yourself a mother of 2 and always favored organic formula when your girls were newborns. During a focus group, one mom talks about this specific criterion and how important it is to her. You find yourself nodding and verbally acknowledging her opinion before realizing other moms felt uncomfortable because they didn’t agree. As a result, you have a hard time getting them to open up about their own point of view.

False consensus effect

The false consensus effect consists of thinking that our beliefs are shared by others.

It means: we think we are like everyone else, and we think everyone else is like us.

This is why you probably have this one friend who’s always giving you advice because “you’ll see, it works wonders!” - even though it might not apply to you at all.

Examples:

  1. You are creating a new banner to increase newsletter subscriptions. You personally dislike banners with a picture of a person inviting you to subscribe because you think “it’s too fake”. Instead, you decide an email icon and a catchy phrase will be better. You could have tested it, but you decided to use your preference as a reference.
  2. You are studying completion rates of a lead generation form and notice that step 4 has the highest drop rate. You are not surprised because you always thought it would be better to use a dropdown field here instead of a radio button field. You decide to transition to that new field type to increase completion rates. Asking users or A/B testing doesn’t even cross your mind, as you are convinced you have the solution.

Primacy and recency biases

Primacy and recency biases describe our tendency to remember more about the first and last pieces of information we encounter.

It means: if your memory cannot store everything, it will focus on the beginning and the end.

This is when you precisely remember the first and last slides of a presentation, the first tip in a list of 5, or the last ingredient on your shopping list, while the rest disappears in a blur.

Examples:

  1. You are conducting a round of evaluative interviews with business developers to test the prototype of a prospecting tool you have been working on. You test with 7 users over 4 days. When it’s time to consolidate the results of your research, you have a very clear idea of the insights you derived from the 2 tests that you conducted on the last day. You have a harder time remembering those you conducted on the first day of interviews.
  2. You are conducting usability tests with several users. The first one has a very fluid experience. The second one has a harder time and struggles to finish the task. You think to yourself that since it was easy for the first one, the second tester is not really skilled. However, if you had met them in the opposite order, how would that have changed your impression of them?

Implicit bias

Implicit biases are the ones we are the least conscious and aware of. They can even go against our values because they are similar to reflexes in the sense that they cannot be controlled.

It means: they are unconscious prejudices that mostly only other people can detect.

They can pertain to age, gender, ethnicity, weight, sexuality, social background etc.

Examples:

  1. You are conducting tests on the accessibility of a scheduling feature within your app. You have gathered a wide variety of testers that should represent your audience. After the test, you realize you are surprised that people over 60 years old have a success rate similar to that of other age ranges. You had assumed they would be much less tech-savvy than younger people.
  2. You are conducting an exploratory survey about driving. When analyzing results, you see that, on average, female respondents have self-rated their driving ability to be similar to men’s. On the other hand, when rating the opposite gender, men rated women as driving less well, whereas women rated men as driving better. There is an implicit bias at play here - how will you take it into account in your results?

Sunk cost fallacy

Sunk cost fallacy describes our tendency to keep engaging with failing projects because we have invested resources in them and feel that giving up would waste those resources.

It means: we’d rather keep wasting resources than admit that we have wasted resources.

This is what pushed people to watch Game of Thrones until the final episode, even though they started hating the show as early as the end of season 5.

Examples:

  1. You and your team have been working on a new feature for the last 2 months. You were very excited about it but all tests conducted so far came to the conclusion that it is not wanted by users. You still decide to dedicate another run to it and launch it in the next release.
  2. You are conducting moderated tests and, during a session, you realize a user has been dishonest about their ownership of a juicer, a criterion that is very important for your research. You still decide to finish the test because you are already halfway through it.

Anchoring bias

The anchoring bias describes our tendency to take as a reference the first piece of information we have been in contact with.

In the context of UX research it means: we cannot forget what was said first and will unwillingly react to it when making decisions.

This is why in negotiations, the one who states their price before the other party has the advantage.

Examples:

  1. You are interviewing 5 users about their experience of an event booking website. The first one talks extensively about how long and discouraging it is to register for an event. None of the following interviewees gives similar feedback; they are actually very positive about their booking journey. However, when aggregating your results, you can’t help but focus on the negative experience the first one shared.
  2. You conduct two focus groups with 6 users each and ask each participant to estimate how much they would be comfortable paying for a money management app you just presented to them. In the first focus group, the 1st participant to answer said they would be comfortable paying €5/month for the service. The average of the 6 answers in this focus group was €6.40/month. In the second focus group, the 1st participant to answer said they would be comfortable paying €8/month. The average of the 6 answers in this 2nd focus group was €8.70/month. In the end, there is no way to know what each participant would have really thought on their own.

Peak end rule

The peak end rule describes our tendency to rely mostly on the peak and end of a specific event to forge memories and opinions.

It means a great show is all about a big surprise in the middle and another one at the end.

This is what makes you give a mediocre rating to a restaurant where you really enjoyed the starter and main course but were disappointed with the dessert.

Examples:

  1. You conduct usability tests on a movie rating app. Overall, testers are lost in the app, have a hard time doing the actions they are asked to and seem pretty upset at first. However, they really like the end of the rating process which allows them to rank the movie with others of the same category. When asked to assess their overall experience, they give relatively good ratings and comments, mostly talking about that ranking feature.
  2. You conduct usability tests for a plant buying app with 8 testers. Most of them go seamlessly from one step to the next. On the last screen, however, most testers wonder whether they have completed the transaction or not. They complain about how confusing it is and rate their overall experience badly.

Social desirability

Social desirability bias describes our tendency to present ourselves in a way that will positively affect other people’s perception of us.

In the context of UX research, it means your user might lie to you because they want to be nice and/or be a “good person”.

Examples:

  1. In a screener, you ask potential testers if they are eco-conscious. 85% of respondents answer Yes to this question. During the interviews you conduct later with them, you realize many of them do not show characteristics of eco-conscious people (all they do is recycle, and they only do it because it’s mandatory).
  2. You conduct evaluative tests on an app you are developing. When you ask 5 testers who needed extensive help with the navigation whether they find the user flow intuitive, 3 answer that it is “Very intuitive”. You realize they feel uncomfortable telling you the truth.

Clustering illusion

The clustering illusion creates a wrong perception of statistical data: we expect random data to look a specific way, although at a small scale, most configurations are roughly equally likely.

It means: we find patterns where there aren’t any.

This is what makes us perceive animal shapes in clouds or in stars.

Examples:

  1. You conduct tests on low-fidelity wireframes for a food ordering app. The first 3 testers have a very smooth experience and rate the usability of the app positively. You have 12 other testers left to work with, but you have already told your team the user flow won’t need any major changes.
  2. When chronologically reviewing the results of a survey answered by 230 users, you are surprised that 12 users in a row chose the same answer to a yes/no question.
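To get a feel for how unremarkable that streak actually is, here is a quick simulation (a sketch: it assumes a fair yes/no split, and the 230 responses and 12-answer run come from the example above):

```python
import random

def has_run(flips, run_len):
    """Return True if `flips` contains `run_len` identical answers in a row."""
    streak = 1
    for prev, cur in zip(flips, flips[1:]):
        streak = streak + 1 if cur == prev else 1
        if streak >= run_len:
            return True
    return False

def run_probability(n_answers=230, run_len=12, trials=20_000, seed=42):
    """Estimate how often a yes/no survey of n_answers random responses
    contains a streak of run_len identical answers."""
    rng = random.Random(seed)
    hits = sum(
        has_run([rng.randint(0, 1) for _ in range(n_answers)], run_len)
        for _ in range(trials)
    )
    return hits / trials

print(f"P(streak of 12 in 230 answers) = {run_probability():.3f}")
```

On a typical run this lands around 0.05 - roughly one survey in twenty would show such a streak by pure chance, which is far more often than intuition suggests.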

Reduce bias with hands-on tactics and experiments

When preparing for user research

  • Based on your past work, make a list of what you think your most common biases are, with examples of where they might have impacted your judgment
  • Make a list of what you think you will find, what you would like to find, and what else you might find (and why that would bother you)
  • Carefully write your screener: phrase questions as neutrally as possible and avoid hinting at your preferred answer (we have a full guide on how to achieve that here)

During the research

  • Test with enough users to increase statistical significance and reduce bias in the process (🇫🇷 read our article about the 5 testers rule for more info on how to choose the number of testers you need)
  • Test one persona at a time
  • Stay neutral and work on reducing your body language during tests and interviews - avoid showing off your company’s logo, avoid agreeing or disagreeing with participants, etc.
  • Be very attentive to non-verbal cues when meeting participants for tests or interviews: furrowed brows, long pauses, eyes clearly scanning the page to find the information, etc.
  • Have more than one person conducting interviews/tests.
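The “enough users” bullet above rests on a simple model: if a usability problem affects a fraction p of your users, the probability that at least one of n testers runs into it is 1 - (1 - p)^n. A minimal sketch, assuming the p ≈ 0.31 average per-tester discovery rate reported by Nielsen and Landauer, which underlies the classic 5-testers rule:

```python
def discovery_rate(p: float, n: int) -> float:
    """Probability that at least one of n testers encounters a problem
    that affects a fraction p of users: 1 - (1 - p)**n."""
    return 1 - (1 - p) ** n

# p = 0.31 is the average per-tester discovery rate from Nielsen & Landauer's
# study - the basis of the "5 testers find ~85% of problems" heuristic.
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} testers -> {discovery_rate(0.31, n):.0%} of such problems seen")
```

With p = 0.31, five testers surface about 84% of such problems; rarer problems (smaller p) need proportionally more testers, which is why the rule is a starting point rather than a guarantee.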

After the research

  • Use a combination of qualitative and quantitative data to make decisions (🇫🇷 For instance, the way Payfit uses both quantitative and qualitative insights is very inspiring)
  • Let your team get access to the raw data, and as much as possible, when giving examples extracted from research, give them verbatims. The less of your own interpretation you add, the less bias you introduce.
  • Don't analyse the data alone, do it together with other members of your team.
  • Learn to read your participants’ body language to ensure you do not miss any non-verbal cues that might reveal their true feelings.
