Biases. Biases are everywhere. I have many biases. So do you and your customers. How can you limit the influence of cognitive biases on your user research?
Take customer interviews. A great method for finding out what your customers think. It's relatively easy to perform, and you can get a lot of useful feedback. However, this method is full of biases.
The least obvious one is the method itself. When you ask someone a question, that person feels obliged to give you an answer. Even if they would never have thought about the thing you asked. What's wrong with that? Importance!
You talk about things that are important to you, right? Even if no one asks. You care about the topic enough to express your opinion. Especially when it comes to using a product or a service. But when you don't have strong feelings about a particular subject, you might still give your opinion on it when asked. A whole lot of psychological pressures can influence a person during an interview.
- No one likes to appear stupid
- No one wants to admit to not having an opinion
- Very few people would give a stranger honest, tough feedback
It becomes apparent when you think about it.
"If I've been asked this question, it must be important," a person thinks. "I'm the kind of person who has an opinion on important subjects."
Such thinking leads to feedback that appears valuable but actually isn't. Or at least not as important as it looks.
How about other types of studies?
Not much better. Focus groups, for example, can be even more biased than individual interviews: all the personal biases are compounded by group pressure.

Surveys? Tough! It's hard to control the quality of answers. To get proper quality, you need to ask lots of questions. And no one likes long polls.
Data analysis? Better on paper. It tells you how someone uses your product. It doesn't tell you why. And without an explicit why, your own biases will shape the conclusions you draw from the data.
What else is there? Observations. At first glance, this method looks similar to data analysis. True, you see how someone uses your product, and you still don't know why. Yet this "how" makes a difference. It's a bit harder for biases to distort your outcomes when instead of numbers you see a real person using your product. You see how they click through, what takes time to read, how certain features get misused. It can be a real eye-opener for you and your design and dev teams.
The observation technique has drawbacks like any other method. The most important one is whether the observation is overt or hidden. When people know they're being observed, chances are high they'll behave differently than when they believe no one is watching. This is a widespread problem for many research efforts, and not one that can be solved easily. Not informing users they're being observed while using your product raises all sorts of ethical and legal concerns, while informing them might alter their behaviour and muddy the learnings. A real vicious circle.
How do you break this vicious circle?
One way is to combine research methods. Your data gives you leads and shapes your questions. A survey might quantify the importance of a problem. An observation might show you whether you've found the right solution.

The takeaways
- Relying on just one research method is risky because of the multiple unavoidable biases any method carries
- Combining and cross-referencing feedback from different methods can significantly improve the quality of the information you receive