Are surveys effective for collecting user research data?
Surveys have always been contested in the field of user research. Despite their ubiquity, quite a few voices caution against using (or over-using) them. Given that your research is only as good as your data, it is important to establish the effectiveness of the tools you use to collect it. So let's talk about the tool each of us has probably used at one time or another — do surveys universally suck? Or can they actually be used to collect actionable, unbiased data?
There's no denying that surveys have several advantages over other data collection methods. These qualities are what make them so pervasive in research. Let's examine the most obvious ones:
While surveys innately suffer from sampling bias (which can be mitigated with the right incentives; more on this shortly), they are actually less prone to the Hawthorne effect, or observer effect, than some other forms of data gathering.
The Hawthorne effect is a type of reactivity in which individuals modify an aspect of their behavior in response to their awareness of being observed.
As researchers, we have probably all seen this in action — when users are being observed, they are more likely to give us answers that they think we want to hear. In the case of surveys (especially anonymous surveys), individuals feel less pressure to modify their responses because in most cases they're responding from the comfort of their environment without the fear of being judged.
Perhaps the main reason surveys are so ubiquitous is that they are among the cheapest ways of gathering user research data. Most surveys cost a grand total of nothing to send out. Even if you incentivize them (we'll discuss later why incentivized surveys are preferred), the relative cost is minimal compared to what it takes these days to recruit participants and conduct interviews or contextual inquiries. As a researcher, being able to gather data without allocating a big part of your budget to it is always good.
This is another area where surveys outshine most other research data collection methods. Active data collection methods like interviews and ethnographic studies get dramatically more expensive, both in time and resources, as you start scaling them. Imagine synthesizing just 20 interviews with 20 free-form answers each. Creating an affinity diagram to bubble up important concepts from 400 responses can quickly become unmanageable. Unless, of course, you're using UserBit's tag analysis. 😉
Surveys, on the other hand, can easily be sent out to a larger audience, and if used appropriately, their responses can be quantified quickly. In fact, most survey tools these days automatically create charts from your data as responses come in.
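To make that concrete, here's a minimal sketch of how quickly closed-ended answers can be quantified. The responses are made-up Likert-scale values (1 = strongly disagree, 5 = strongly agree), purely for illustration:

```python
from collections import Counter

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

counts = Counter(responses)                 # tally per answer choice
mean_score = sum(responses) / len(responses)
# Share of respondents who agree (4) or strongly agree (5)
agree_share = sum(1 for r in responses if r >= 4) / len(responses)

print(dict(sorted(counts.items())))  # {1: 1, 2: 1, 3: 1, 4: 4, 5: 3}
print(round(mean_score, 2))          # 3.7
print(agree_share)                   # 0.7
```

Three lines of arithmetic turn 10 responses into chart-ready numbers — which is exactly why survey tools can render these charts automatically as responses arrive.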
So why do so many people complain about surveys as a user research method? Let's look at some of the shortcomings of this data collection method:
The fact of the matter is, no one likes user research surveys. Unlike Buzzfeed quizzes, there's no instant gratification at the end of a product survey telling you which Harry Potter character you are. This means that, by default, survey respondents fall into 'love you' or 'hate you' buckets: you will get a proportionally higher number of responses from people who really like your product or really hate it. In statistics, this is called sampling bias, and it is a problem. You need your data to be unbiased so the synthesized insights don't lead you toward building products that only a small subset of your users want (or don't want).
The fix - Incentivize the survey with a reward. A gift card or some form of credit goes a long way toward getting people to take your survey not because they feel a certain way about you, but for a material benefit.
A lot of surveys contain questions with free-form answers. The hope here is that the user will take time out of her busy life to write essays about how she feels about your product. I hope that sounded as ridiculous to read as it was for me to type. In truth, you will most likely get one-word answers (one sentence if they really like you), which gives you little to no qualitative data.
The fix - Don't do it. Don't use surveys for collecting qualitative data. There are other methods much more suited for that (in-person interviews, contextual inquiries, ethnographic studies). Surveys should be used to get short quantifiable responses via Likert scale or binary answers.
Creating surveys is easy, which would explain why we encounter a ton of terrible ones. A bad question in a survey is just as problematic as a bad question in an interview; in fact, it's worse. Every leading or biased survey question is compounded, and more damaging, because of the very scalability that surveys offer. And yet, because creating surveys is so inexpensive, they often don't go through the same scrutiny as interview questions.
The fix - Surveys should be subjected to the same level of scrutiny as interview questions. Don't ask leading questions, don't ask biased questions, don't expect your users to write essays, etc.
Now that we've examined both the good and the bad of surveys, let's talk about when it makes sense to use them in our research process. Because surveys are so easy to create and use, it might be tempting to send one out at every opportunity. However, given the shortcomings above, surveys are not a good fit for every situation. For example, they cannot be relied upon for good qualitative feedback, so they should not be used as a primary data collection method. On the other hand, surveys make one of the most effective supplemental tools for corroborating existing hypotheses.
Let's say you wanted to know if your target audience would pay for the app you're making. You'd start off by conducting user interviews as your primary method of validation. You would ask your interviewees things like — what was the last app they paid for? How much did they pay for it?
The idea is to validate that the given problem is something users would indeed pay to solve and, if so, how much they would be willing to pay. You've gone ahead and conducted your interviews, and now you want to corroborate your findings. A survey can help you do this at scale! Specifically, you can send out something like the Van Westendorp price sensitivity questions as a survey to your bigger user pool.
Not only does this help you validate your hypothesis, it also corroborates it quantifiably and at scale.
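For illustration, here's a minimal sketch of how such survey responses could be analyzed. It uses made-up numbers and simplifies the method to two of the four Van Westendorp questions ("At what price would this be too cheap to trust?" and "At what price would this be too expensive to consider?"). The price where those two curves cross is commonly read as the Optimal Price Point:

```python
# Hypothetical responses: each tuple is one respondent's answers (in dollars)
# to "too cheap to trust" and "too expensive to consider".
answers = [(3, 9), (5, 8), (2, 12), (6, 7), (4, 10), (5, 6)]

def pct_too_cheap(price):
    # Share of respondents who would find this price suspiciously cheap
    return sum(1 for cheap, _ in answers if price <= cheap) / len(answers)

def pct_too_expensive(price):
    # Share of respondents who would find this price prohibitive
    return sum(1 for _, expensive in answers if price >= expensive) / len(answers)

# The Optimal Price Point is where the two curves intersect; on a discrete
# price grid, take the price where the curves come closest.
prices = [p / 2 for p in range(0, 41)]  # $0.00 to $20.00 in $0.50 steps
opp = min(prices, key=lambda p: abs(pct_too_cheap(p) - pct_too_expensive(p)))
print(f"Optimal Price Point: ${opp:.2f}")
```

This is the payoff of quantifiable survey answers: with a few hundred respondents instead of six, the same dozen lines of arithmetic still work unchanged.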
If you do use surveys as a supplementary data collection method, you are in good company. Siddhi Sundar, Sr. UX Researcher at Netflix, also talks about how triangulating data with surveys helps them uncover users' needs:
"Survey research and behavioral data, combining the results from a comprehensive, multimarket attitudinal and behavioral survey with analysis of Netflix behavioral data."
It is important to remember that surveys are just tools. They are not intrinsically good or bad, but like any other tool, if used incorrectly you can expect suboptimal results. They do have shortcomings when it comes to collecting qualitative data, but nothing beats them as a scalable supplemental method for corroborating your insights.
UserBit is a new platform for user research and team collaboration. Your home for interview management, text analysis, personas, journey mapping, visual sitemaps and more.