Questionnaire Design: Keep It Short and Simple

General Considerations

The first rule is to design the questionnaire to fit the medium. Phone interviews cannot show pictures. People responding to mail or Web surveys cannot easily ask "What exactly do you mean by that?" if they do not understand a question. Intimate, personal questions are sometimes best handled by mail or computer, where anonymity is most assured.

KISS - keep it short and simple. If you present a 20-page questionnaire, most potential respondents will give up in horror before even starting. Ask yourself what you will do with the information from each question. If you cannot give yourself a satisfactory answer, leave it out. Avoid the temptation to add a few more questions just because you are doing a questionnaire anyway. If necessary, place your questions into three groups: must know, useful to know, and nice to know. Discard the last group, unless the previous two groups are very short.

Start with an introduction or welcome message. In the case of mail or Web questionnaires, this message can be in a cover page or on the questionnaire form itself. If you are sending emails that ask people to take a Web page survey, put your main introduction or welcome message in the email. When practical, state who you are and why you want the information in the survey. A good introduction or welcome message will encourage people to complete your questionnaire.

Allow a "Don't Know" or "Not Applicable" response to all questions, except to those in which you are certain that all respondents will have a clear answer. In most cases, these are wasted answers as far as the researcher is concerned, but are necessary alternatives to avoid frustrated respondents. Sometimes "Don't Know" or "Not Applicable" will really represent some respondents' most honest answers to some of your questions. Respondents who feel they are being coerced into giving an answer they do not want to give often do not complete the questionnaire. For example, many people will abandon a questionnaire that asks them to specify their income, without offering a "decline to state" choice.

For the same reason, include "Other" or "None" whenever either of these is a logically possible answer. When the answer choices are a list of possible opinions, preferences, or behaviors, you should usually allow these answers.

On paper, computer-direct, and Internet surveys, these four choices should appear as appropriate. You may want to combine two or more of them into one choice if you have no interest in distinguishing between them. You will rarely want to include "Don't Know," "Not Applicable," "Other" or "None" in a list of choices being read over the telephone or in person, but you should allow the interviewer to accept them when respondents volunteer them.

Question Types
Researchers use three basic types of questions: multiple choice, numeric open end and text open end (sometimes called "verbatims"). 
Rating scales and agreement scales are two other common question types; some researchers treat them as multiple choice questions and others treat them as numeric open-end questions.
Questions can take many forms:

  • Open-ended: Designed to prompt the respondent to provide you with more than just one or two word responses. These are often "how" or "why" questions. For example: "Why is it important to use condoms?" These questions are used when you want to find out what leads people to specific behaviors, what their attitudes are towards different things, or how much they know about a given topic; they provide good anecdotal evidence. The drawback to using open-ended questions is that it's hard to compile their results.
  • Closed-ended (also sometimes referred to as forced choice questions): Specific questions that prompt yes or no answers. For example: "Do you use condoms?" These are used when the information you need is fairly clear-cut, i.e., if you need to know whether people use a particular service or have ever heard of a specific local resource.
  • Multiple choice: Allow the respondent to select one answer from a few possible choices. For example: "When I have sex, I use condoms... a) every time, b) most times, c) sometimes, d) rarely, e) never." These allow you to find out more detailed information than closed-ended questions, and the results can be compiled more easily than open-ended questions.
  • Likert scale: Each respondent is asked to rate items on a response scale. For instance, they could rate each item on a 1-to-5 response scale where:
    • 1 = strongly disagree
    • 2 = disagree
    • 3 = undecided
    • 4 = agree
    • 5 = strongly agree
    • If you want to weed out neutral and undecided responses you can use an even-numbered scale with no middle "neutral" or "undecided" choice. In this situation, the respondent is forced to decide whether he or she leans more towards the "agree" or "disagree" end of the scale for each item. The final score for the respondent on the scale might be the sum of his or her ratings for all of the items.
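The scoring described above (summing a respondent's ratings across all items) can be sketched as follows. This is a minimal illustration, not part of the original text: the function name and the convention of coding skipped or "Don't Know" answers as `None` are assumptions.

```python
# Hypothetical sketch: score one respondent on a Likert scale by summing
# item ratings, on the 1-5 scale above (1 = strongly disagree ... 5 = strongly agree).

def likert_score(ratings, scale_max=5):
    """Sum the item ratings, skipping answers coded as None ("Don't Know")."""
    answered = [r for r in ratings if r is not None]
    for r in answered:
        if not 1 <= r <= scale_max:
            raise ValueError(f"rating {r} is outside the 1-{scale_max} scale")
    return sum(answered)

respondent = [4, 5, 3, None, 2]   # None = question skipped or "Don't Know"
print(likert_score(respondent))   # 14
```

Whether to skip missing answers, substitute a midpoint, or drop the respondent entirely is a design choice; the sketch simply ignores them.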

Question and Answer Choice Order
Two broad issues to keep in mind:

  • How the question and answer choice order can encourage people to complete your survey.
  • How the order of questions or the order of answer choices could affect the results of your survey.

Ideally, the early questions in a survey should be easy and pleasant to answer. These kinds of questions encourage people to continue the survey. In telephone or personal interviews they help build rapport with the interviewer. Grouping together questions on the same topic also makes the questionnaire easier to answer.

Whenever possible leave difficult or sensitive questions until near the end of your survey. Any rapport that has been built up will make it more likely people will answer these questions. If people quit at that point anyway, at least they will have answered most of your questions.

Answer choice order can make individual questions easier or more difficult to answer. Whenever there is a logical or natural order to answer choices, use it. Always present agree-disagree choices in that order. Presenting them in disagree-agree order will seem odd. For the same reason, positive to negative and excellent to poor scales should be presented in those orders. When using numeric rating scales higher numbers should mean a more positive or more agreeing answer.

Question order can affect the results in two ways:

  • One is that mentioning something (an idea, an issue, a brand) in one question can make people think of it while they answer a later question, when they might not have thought of it if it had not been previously mentioned. In some cases you may be able to reduce this problem by randomizing the order of related questions. Separating related questions with unrelated ones can also reduce this problem, though neither technique will eliminate it.
  • The other way question order can affect results is habituation. This problem applies to a series of questions that all have the same answer choices. It means that some people will usually start giving the same answer, without really considering it, after being asked a series of similar questions. People tend to think more when asked the earlier questions in the series and so give more accurate answers to them.

If you are using telephone, computer direct or Internet interviewing, good software can help with this problem. Software should allow you to present a series of questions in a random order in each interview. This technique will not eliminate habituation, but will ensure that it applies equally to all questions in a series, not just to particular questions near the end of a series.
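The per-interview randomization that such software performs can be sketched in a few lines. This is an illustrative sketch, assuming questions are plain strings; the function name and sample questions are invented for the example.

```python
import random

# Hypothetical master list of similar agreement-scale questions.
questions = [
    "My supervisor gives me positive feedback.",
    "I have the tools I need to do my job.",
    "My workload is reasonable.",
]

def interview_order(questions, seed=None):
    """Return a fresh random ordering for one interview,
    leaving the master list untouched."""
    rng = random.Random(seed)     # seedable, so an order can be reproduced
    order = list(questions)       # copy: every interview starts from the same list
    rng.shuffle(order)
    return order

for q in interview_order(questions):
    print(q)
```

Because each interview gets an independent shuffle, any habituation is spread evenly across the whole series rather than concentrated on the questions that would otherwise always come last.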

Another way to reduce this problem is to ask only a short series of similar questions at a particular point in the questionnaire. Then ask one or more different kinds of questions, and then another short series if needed.

A third way to reduce habituation is to change the "positive" answer. This applies mainly to level-of-agreement questions. You can word some statements so that a high level of agreement means satisfaction (e.g., "My supervisor gives me positive feedback") and others so that a high level of agreement means dissatisfaction (e.g., "My supervisor usually ignores my suggestions"). This technique forces the respondent to think more about each question. One drawback is that you may have to recode some of the data after the results are entered, because analysis is much easier when the higher levels of agreement always mean a positive (or always a negative) answer. However, the few minutes of extra work may be a worthwhile price to pay for more accurate data.

The order in which the answer choices are presented can also affect the answers given. People tend to pick the choices nearest the start of a list when they read the list themselves on paper or a computer screen. People tend to pick the choices they heard most recently when a list is read to them.

As mentioned previously, sometimes answer choices have a natural order (e.g., Yes, followed by No; or Excellent - Good - Fair - Poor). If so, you should use that order. At other times, questions have answers that are obvious to the person who is answering them (e.g., "Which brands of car do you own?"). In these cases, the order in which the answer choices are presented is not likely to affect the answers given. However, there are kinds of questions, particularly questions about preference or recall, or questions with relatively long answer choices that express an idea or opinion, in which the answer choice order is more likely to affect which choice is picked. If you are using telephone, computer direct, or Web page interviewing, have your software present these kinds of answer choices in a random order.
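Randomizing answer choices differs slightly from randomizing questions: choices like "Other" or "None of these" should usually stay anchored at the end of the list. A sketch under those assumptions (the function name and choice labels are invented for the example):

```python
import random

def randomized_choices(choices, anchored=("Other", "None of these"), seed=None):
    """Shuffle the substantive choices while keeping anchored ones
    (e.g., "Other", "None of these") at the end, in their original order."""
    rng = random.Random(seed)
    movable = [c for c in choices if c not in anchored]
    fixed = [c for c in choices if c in anchored]
    rng.shuffle(movable)
    return movable + fixed

choices = ["Brand A", "Brand B", "Brand C", "Other", "None of these"]
print(randomized_choices(choices))
```

Each respondent then sees the substantive options in a different order, which spreads any position bias evenly across them.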