Guidelines for Designing Effective Customer Surveys

By Jeff Sauro

Measuring customer attitudes usually means writing a lot of surveys. Although nearly anyone can pose a question to a customer, crafting the right questions with the right types of response options takes time.

When creating surveys, keep the following in mind:

  • Pretest: Instead of relying on a single grand planned survey, consider several iterations that collect data on a few questions. Instead of waiting until the end to find that a question wasn’t helpful, or was redundant or misleading, a few smaller surveys can help identify problems before time runs out.

  • Think shorter: Instead of launching a bloated survey with complex questions annually, it’s better to ask a smaller core set of questions related to satisfaction, loyalty, and a few other items. Be sure to put as much effort into fixing issues identified in surveys as you do into crafting your surveys.

  • Customer centric, not company centric: Surveys are usually commissioned when there is a clear and present need for a decision in a business or organization. In addition to having good hypotheses and clear research questions that you want answered, the survey questions should not reflect the organization’s internal plans. Customers don’t think in terms of sales cycles, marketing funnels, value propositions, unique selling points, or content hierarchy. They think more in terms of the steps needed to obtain more stuff for less money with less effort. It’s your job to ask the questions in your customers’ language and to translate the answers into actions for the business.

  • Focus more on whom and how you survey than how many you survey: As with sample sizes in general and survey sample sizes in particular, there are a lot of misconceptions about what “small” and “large” sample sizes can tell you. There is a perception that a small sample size (which could be 30, 100, 500, or 2,000, depending on whom you ask) isn’t “representative.” While representativeness is different from precision, the concern is that the results from a smaller sample size are misleading, biased, or just plain useless.

    Inaccurate responses are much more likely to come from surveying the wrong people than from not surveying enough of the right people.

  • Remove confusing and challenging questions: Forced-rank questions help identify what really matters to respondents. However, ranking more than a few items gets challenging fast — especially when respondents don’t have strong opinions about more than a few items. A little pretesting with qualified participants will help identify the challenging questions. For ranking more than a few items in particular, consider using the top-task approach.

  • Don’t make it too hard: Using a mix of open-ended comment boxes helps provide some of the “why” between predetermined rating scale questions. However, open-ended questions take more effort and time to respond to. Surveys shouldn’t bring back memories of college exams. If you must have several open-ended questions, consider making them optional, or at least focus the questions so respondents aren’t being asked to write an essay on customer satisfaction.

  • Watch for non-mutual exclusivity: When creating response choices in which respondents are asked to pick only one choice, be sure the choices are mutually exclusive — each respondent should fit exactly one. For example, with age and income brackets, it’s easy to overlook overlapping categories, as the following example demonstrates:

    Please select your age:

    • 20-30

    • 30-40

    • 40-50

    • 50+

    This is an example of non-mutually exclusive age groups. A respondent who is 30 could pick either of the first two groups. The options should instead read 20-30, 31-40, 41-50, and 51+.
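    If your response options are generated programmatically, this check can be automated. The sketch below (a hypothetical helper, not from the article) flags numeric brackets whose boundaries overlap, using the age groups above:

    ```python
    # Minimal sketch: detect overlapping "pick one" numeric brackets.
    # Bracket lists are illustrative examples, not from any real survey tool.

    def has_overlap(brackets):
        """Return True if any adjacent brackets overlap.

        Each bracket is a (low, high) tuple sorted by lower bound;
        high=None means an open-ended top bracket such as 51+.
        """
        for (_, high), (low, _) in zip(brackets, brackets[1:]):
            if high is not None and low <= high:
                return True
        return False

    # The overlapping example from the text: a 30-year-old fits two groups.
    bad = [(20, 30), (30, 40), (40, 50), (50, None)]
    good = [(20, 30), (31, 40), (41, 50), (51, None)]

    print(has_overlap(bad))   # True
    print(has_overlap(good))  # False
    ```

    The same comparison works for income brackets or any other ordered numeric categories.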