Surveys in Evaluation

Using open-ended questions in a survey

Surveys are often a missed opportunity for gathering useful qualitative data. Some surveys conclude with a ‘catch-all’ question that asks respondents to provide ‘any other thoughts’ they might have about an issue. Such questions rarely produce data of sufficient quality, consistency or focus to be used in an evaluation.

A better approach is to ask one or two well-worded open-ended questions at the start of the survey. Respondents can give their views before they have tired of the survey, and before they have been primed by the rest of the survey and the fixed-option responses that make it up.

For example, a survey looking at barriers and enablers to switching to active transport options in peak hour might ask about factors such as the availability of end-of-trip facilities and the potential use of dedicated infrastructure for bike routes. But there might be factors you have not thought of and therefore have not provided as choices in the fixed-option questions.

To deal with this, you might ask an opening question such as:

Can you list 1 or 2 things that would support you to walk or cycle to work during the morning peak, rather than drive? If walking or cycling would never be an option can you indicate why?

This kind of question can draw to your attention factors that are newly emerging and therefore have not shown up in previous research or been considered as response options.

But as with other types of qualitative data, such questions can be difficult and time-consuming to analyse.

Box 1: Do’s and don’ts in survey research
  • Understand the language of your survey target groups. Don’t assume that your survey respondent will understand terms and concepts in the same way that you do, or know what technical terms mean. You should try to clarify terms in language familiar to them. 

  • Get demographic data from other systems if you can link them to your survey respondents. You can minimise the burden on your respondents if you can link individual survey responses to existing systems or project data that you are also collecting. Sharing the ‘data load’ across these sources can ensure your survey is as short as it can be. For example, if you are already collecting lots of demographic information about people through project records, and you can include a question in the survey that links each respondent to their individual project records, then you don’t need to ask for that demographic information again in your survey. 

  • Ask demographic questions at the end of the survey. These are usually relatively easy for people to answer so are best left to the end. The more difficult questions which you are more interested in should come first before the respondent gets fatigued or is interrupted. The exception is where you need to ‘screen’ respondents based on a characteristic (e.g. type of transport they regularly use to work) to determine the survey questions they are subsequently asked. 

  • Choose the right mode of administration. Think about whether it is best to have surveys administered by an interviewer, either face-to-face or by phone, or self-completed by the respondent. Sensitive issues are best handled in self-report surveys, which are also cheaper and quicker. But interviewer-assisted surveys can help the interviewer explain difficult questions, or provide complex travel scenarios to respondents for which you want their views. 

  • Use more than just words. Surveys don’t just have to be words; with digital survey tools you can provide other material, such as images, audio, video, and websites to stimulate responses. 

  • Avoid agreement/satisfaction scales. One of the most widely used response options in surveys is to ask people how much they agree with a statement, with options ranging from Strongly Disagree to Strongly Agree. A similar scale measures satisfaction, ranging from Strongly Dissatisfied to Strongly Satisfied. There is almost always a better way to ask such survey questions. These scales have a number of serious methodological limitations that are well documented in the research design literature, not least of which is that they invite acquiescence bias, whereby people agree with a statement simply because they don’t like to disagree!  

  • Don’t use neutral mid-points unless it is meaningful to do so. Many survey questions automatically include mid-points that transition the scale from negative to positive responses. Examples include ‘Neither agree nor disagree’ and ‘Neither satisfied nor dissatisfied’. Unless such scale points have a meaningful interpretation for your evaluation, they should be avoided. Ask yourself: what does it mean to be neither satisfied nor dissatisfied? If you can’t make sense of these responses in terms of the concept they are trying to measure, then leave them out. 

  • Don’t force people to make up an answer. You need to provide respondents with ‘non-responses’ where appropriate. These can be options such as Don’t know, Can’t say, or Not applicable. 

How to select a sample 

If you don’t get responses from everyone in your population of interest, then your survey has gathered data from a sample.  

There are two broad approaches to taking a sample: random and non-random selection.  

Random sampling is usually associated with large surveys. In a simple random sample, each member of the population has the same chance of being included in the sample as any other member. To achieve a random sample, you need to avoid sampling-related errors that cause your survey to systematically under-represent or over-represent some groups within the population of interest. For example, if you do a community survey by door-knocking during the day, your results will be biased because the type of person likely to be home during the day is skewed toward certain groups. Similarly, specific groups, such as people who do not speak English, will have higher non-response rates to online surveys. If we administered the survey again in the same way, we would get the same kind of bias in our results. The only way to deal with these kinds of bias is to adopt a different sampling strategy, or else accept these problems and discuss how they might limit the evaluation findings. 
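A simple random sample can be sketched in a few lines of Python. The population here is just a list of made-up member IDs, and the sample size is invented for illustration; the point is that drawing without replacement from the full sampling frame gives every member the same chance of selection.

```python
import random

# Hypothetical sampling frame: ID numbers for 10,000 community members
# (population size and sample size are invented for illustration).
population = list(range(10_000))

random.seed(42)  # fixed seed so the draw is reproducible

# random.sample draws without replacement, so each member of the
# population has the same chance of inclusion as any other member.
sample = random.sample(population, k=500)

assert len(set(sample)) == 500          # no member selected twice
assert set(sample) <= set(population)   # everyone drawn is in the frame
```

In practice the hard part is not the draw itself but building a sampling frame that actually covers the whole population of interest; any group missing from the frame cannot appear in the sample, which is exactly the kind of systematic bias described above.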

Even if you do eliminate sampling-related error, your results may still be affected by random sampling error. The operation of chance means that even a well-designed sample may not reflect what you would have obtained had you surveyed the whole population. Your sample can, purely by chance, include some people who are not typical and thereby ‘throw out’ your results. Because this kind of error is random, we would not expect another survey administered the same way to show the same bias. The only way to limit this kind of error is to take a larger random sample. 
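The effect of sample size on random sampling error can be shown with a short simulation. The population of commute times below is entirely made up; the point is only that, averaged over many repeated draws, larger random samples land closer to the true population mean than smaller ones.

```python
import random
import statistics

random.seed(0)

# Made-up population: 100,000 commute times in minutes, roughly normal.
population = [random.gauss(30, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

def mean_abs_error(sample_size, draws=200):
    """Average gap between a sample mean and the population mean,
    over many repeated random samples of the given size."""
    gaps = []
    for _ in range(draws):
        s = random.sample(population, sample_size)
        gaps.append(abs(statistics.mean(s) - true_mean))
    return statistics.mean(gaps)

small_sample_error = mean_abs_error(25)
large_sample_error = mean_abs_error(400)

# Larger random samples sit closer to the population mean on average.
assert large_sample_error < small_sample_error
```

Note that any single sample of either size can still be unlucky; the larger sample only shrinks the typical size of the error, which is why it is described here as limiting, not eliminating, random sampling error.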

Non-random sampling approaches come into play when a random (or probability) sample is not appropriate or feasible. This is especially relevant when you are gathering qualitative data. Table 1 gives a brief description of such non-random sampling approaches.

Table 1: Non-random/purposeful sampling approaches and what they are

  • Opportunistic. Following new leads during fieldwork and taking advantage of the unexpected; flexible; includes snowball sampling. 

  • Non-selection based on overburden. When you have a number of equally valid options, prioritising sites or communities that have not recently been consulted for other evaluations. 

  • Selection based on convenience. Going with whatever is quickest and easiest to access; saves time, money and effort; low credibility for findings, but good for pilot testing your instruments. 

  • Typical case. Selecting ‘average’ cases to describe the most common position or experience (e.g. a typical household in a typical house on a typical street for that area). 

  • Extreme or deviant case. Unusual or special cases, e.g. outstanding successes, notable failures, award-winning services, extreme cases, exceptions to the rule. 

  • Maximum variation. Purposefully picking a wide range of variation on dimensions of interest, e.g. lovers, haters and people who really don’t care. 

  • Homogeneous groups. Matching like with like in each group to reduce variation within the group, while still allowing plenty of variation between groups; simplifies analysis. 

  • Discontinuity groups. Intentionally bringing together groups where people have had different experiences or are likely to disagree.