Stage 1: Research Design

1.0 Choose an appropriate data collection method

Rationale

Choosing the most appropriate data collection method is central to attaining a good response rate. The merits of each method must be considered within the context of the target population, the survey objectives, the type of information to be collected, the research budget and the time constraints.

Best Practices

1.0.1 Select the most appropriate survey method.

Because this document focuses on the telephone as a survey method, the question here will be limited to the following: When is a telephone survey the most appropriate data collection method? Here are a few guidelines to consider.

  • Target population: Consider conducting a telephone survey when the sampling population is the general public–that is, the national adult population of Canada. Online data collection has presented a few problems for surveys of the entire Canadian adult population, particularly coverage and sample frame issues.Footnote 2 Internet access, while widespread, is still less widespread than telephone access, and Internet use among certain types of audiences is more limited than among others. In addition to these coverage problems, there is no complete sample listFootnote 3 available for Internet users (see BP 1.1.1). While the Internet is an excellent data collection method for certain audiences and research objectives, at the time of this report (March 2007), it has greater coverage and sampling limitations than some other methods.
  • Sample frame: The completeness of the sample frame is critical to limiting coverage error. Telephone surveys currently provide the best coverage of the general population, but organizations surveying smaller target populations may wish to consider the quality and composition of available email lists. For example, online data collection might be appropriate and advisable when surveying special audiences, such as scientists and academics. Their email addresses are attainable, and it is often difficult to contact such people by telephone because they travel, rely on voice mail or have gatekeepers, for example.
  • Size of budget: Collecting data by telephone typically costs more than using self-administered data collection methods. When the budget is limited, consider using online, mail, fax or email surveys rather than telephone data collection.
  • Length of time available for fieldwork: If data are required very quickly–that is, overnight or within a few days–and the response rate is not an issue, consider using telephone or online data collection. However, use the latter only where coverage issues are not factors in the decision-making.
  • Level of data precision needed: A telephone survey is appropriate when a high degree of accuracy or statistical reliability is required for general public surveys.Footnote 4 Currently, there is no method for selecting random samples from general email addresses, which means that probability sampling is not yet possible using online data collection. This concern may not be important if the research sponsor only requires "directional" information–that is, data that provide approximate magnitudes, rather than precision. While there are studies that show Internet panels have produced results similar to those of comparable telephone samples (Berrens et al., 2003), online surveying is a contentious issue among survey researchers. It is evolving quickly, based on new data and learning.
  • Research objectives: Telephone surveys are appropriate for questionnaires shorter than 20 minutes and studies where respondents can respond to questions quickly. Avoid conducting a telephone survey when the questionnaire lasts over 30 minutes; when respondents need to check things, such as financial information in their files; or when interviewers need to use visual aids to explain concepts or complex scales. Under these circumstances, a self-administered survey might be more appropriate.

While many factors will influence the choice of an appropriate data collection method, choosing the most suitable method will increase the likelihood of achieving a higher response rate.

1.0.2 Consider alternative methods to contact hard-to-reach respondents.

Examples of hard-to-reach respondents
  • Corporate executives
  • Elected officials
  • Physicians
  • Farmers
  • Technology executives

After selecting the data collection method, consider strategies for contacting "hard-to-reach" respondents. Depending on the target audience and subject of the survey, some respondents may be much harder to contact than other segments of the population. These people include members of low-incidence populations–those defined by quite narrow demographic (or other) specifications. Instead of relying solely on the telephone, consider using a mixed-mode approach to contact or obtain data from hard-to-reach respondents. In fact, survey organizations are increasingly using mixed-mode survey designs to maximize response rates.

Possible mixed-mode approaches
  • Telephone plus mail, fax or email
  • Telephone plus online
  • Telephone plus in-person

A mixed-mode approach increases the likelihood of contacting hard-to-reach respondents and can offer them response methods they might find more convenient than the telephone. Use of a mixed-mode approach assumes that alternate contact information is available for the target segment of the population. A mixed-mode approach may increase the cost of data collection and the length of the data collection period. However, it can also shorten the time required to conduct the fieldwork and can reduce the costs of achieving the target number of completes (such as the costs needed to make numerous callbacks or refusal conversions to complete interviews with hard-to-reach respondents). Impact on cost and timing aside, a mixed-mode approach does tend to yield higher response rates.

The impact of a mixed-mode approach on survey accuracy must also be weighed against the potential bias of not hearing from these respondents at all. For instance, the use of different data collection methods can result in data that are not entirely comparable, depending on the types of questions asked.

Consider a question with a long list of responses. In a telephone survey, the interviewer reads the list to respondents and can rotate the possible answers to account for primacy/recency effects–that is, the tendency of respondents to pick the first or last response presented. However, it is not as easy to vary the order of answers when using a paper-based, self-administered questionnaire. Multiple versions of the questionnaire with randomized ordering are needed.
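CATI systems typically handle this rotation natively, but the underlying idea is simple to script. A minimal sketch, assuming a question with a nominal list of answer options (the options shown are illustrative) and using the respondent ID as a random seed so each ordering can be reproduced:

    import random

    # Illustrative answer options for a "Where did you hear about X?" question.
    OPTIONS = ["Television", "Radio", "Newspapers", "Internet", "Word of mouth"]

    def rotated_options(respondent_id: int) -> list[str]:
        """Return the answer options in a per-respondent random order."""
        rng = random.Random(respondent_id)  # seeding makes the order reproducible
        options = OPTIONS.copy()
        rng.shuffle(options)
        return options

    print(rotated_options(1042))  # respondent 1042 always sees the same order

For a paper-based questionnaire, the same routine could be used once to generate the handful of fixed questionnaire versions mentioned above.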

As another example, in a telephone survey, interviewers may ask respondents an open-ended question and use a pre-coded list of answers (which are not read to the respondent) and the "other/specify" option to record responses. This approach facilitates coding and data comparability. However, this type of question does not work at all in online surveys. Replacing it with a truly open-ended question is not a good option because of the high non-response rate for open-ended questions in online surveys.

In short, a mixed-mode approach may introduce a new variable that must be considered during the analysis: whether people responded differently to self-administered questions than to interviewer-administered ones. Consider a mixed-mode approach when the potential for non-response error outweighs concerns related to measurement error.

1.0.3 Consider allowing proxy respondents.

Proxy respondents

Data are collected from one person who acts as a proxy for another individual or the entire household.

There is a general consensus in the research literature that proxy respondents should not be used when the research is designed to measure attitudes, opinions or knowledge. Current evidence suggests that data from proxy respondents sometimes differ systematically from data obtained from the respondents themselves (Groves et al., 2004). Nevertheless, under the right circumstances, proxy respondents can increase response rates by enabling survey organizations to reach respondents who otherwise would not be able to take part in the survey. For some studies, using proxy respondents is better than obtaining no response at all.

Develop a clear set of criteria to determine which sorts of studies are suitable for the use of proxy respondents. Proxy respondents can be viable for surveys that collect factual or experience-based information. If the information being collected is not opinion-based, it is reasonable to assume that people other than the intended respondent could answer, as long as they possess the needed information. Proxy respondents may also be useful when the intended respondent does not speak either official language or has a relevant disability, such as a hearing impairment. Interviewers should clearly flag proxy interviews in the data set so that tests can be run during the analysis to look for variations between proxy and non-proxy interviews; a rough sketch of such a check follows.
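As an illustration of such a test, assuming the data set is exported with an is_proxy flag and a factual question column (both names hypothetical), a chi-square test of independence can flag systematic differences between proxy and non-proxy answers:

    import pandas as pd
    from scipy.stats import chi2_contingency

    data = pd.read_csv("survey_completes.csv")  # hypothetical data export

    # Cross-tabulate a factual question against the proxy flag.
    table = pd.crosstab(data["is_proxy"], data["q1_program_used"])
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # a small p suggests proxy answers differ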

1.0.4 Collect the data at the most appropriate time of year.

Examples of time considerations

Surveying the general public in July and August, when Canadians typically take vacations, will generally result in lower response rates due to these absences. Likewise, avoid surveying accountants during tax season, or public servants during the March 31 fiscal year-end period.

Ideally, data should be collected at the most appropriate time of year to achieve the highest response rate possible. Avoid surveying the target population during times of the year when members are hard to reach or less willing to participate in research. Such times will depend on the specific audience, but try to avoid interviewing during major holidays, audience-specific events, three-day weekends and vacation seasons. During such periods, more call attempts go unanswered. After a specified number of callbacks–attempts to re-contact people who were not available when first called–telephone numbers are retired and new numbers attempted to achieve the required number of completed surveys. Retiring valid numbers–for example, those where the interviewer got a busy signal, no answer or an answering machine–and adding new contacts will decrease the response rate.

If one cannot avoid collecting data during these times, build a longer field period into the project timelines. Unless a longer interviewing window is scheduled, the response rate is likely to be lower and the sample of respondents might be biased (if survey respondents differ systematically from non-respondents). To make sure the sample is representative of the target population, the interviewing will invariably take longer to complete. That is the trade-off for conducting POR telephone surveys at less appropriate times of the year.

1.0.5 Allow adequate time to collect the data.

The length of the data collection period can have a direct impact on response rates. It will depend on the sample size, interview length and interviewing supplier capacity. Such factors aside, the field period should be sufficient to achieve a good response rate. A general rule is that the longer a study remains in field, the higher the response rate (although there is a point when the return on invested time and budget will diminish).Footnote 5 Telephone surveys with short data collection periods tend to suffer from lower response rates because the telephone numbers may not receive as many callbacks before being retired, or the callbacks are not as varied in terms of time of day or day of the week. As well, a person refusing one day may be in a different situation or frame of mind a few weeks later, and more amenable to being interviewed.

The length of the interviewing period is an essential factor in maximizing response rates to telephone surveys (Halpenny and Ambrose, 2006).Footnote 6 A longer field time increases the chances of reaching a respondent and improves the chances of finding that respondent in a situation conducive to taking part in the survey.Footnote 7 The following table provides an approximate indication of the range of response rates that can be expected from a general public RDD telephone survey, depending on the length of the field period.


Response rate      Field timeFootnote *
7% to 15%          2 to 6 days
20% to 35%         1 to 4 weeks
35% to 60%         6 to 12+ weeks

Footnotes

Footnote *

Times assume sufficient field resources are available, such as budget, computer-assisted telephone interviewing stations and interviewers.


The time allotted for data collection should also reflect incidence level, target audience and research objectives. All things being equal, a survey of a low-incidence population or one of hard-to-reach elected officials will require more time to complete than a survey of the general population.

In addition to these considerations, the type of information being collected can influence the field time required. Should it be necessary to capture a reflection of the target population's attitudes or behaviours at a specific moment in time, a longer field period might compromise these objectives. An example of this type of study is a "recall" survey following an event such as an advertising campaign. If the organization does not take measures to mitigate the effects of the time lapse, prolonged data collection may not yield accurate data. As time passes, the likelihood of respondents recalling the advertisement decreases. Other types of POR studies where this might apply include mailout recalls, assessments of recent service interactions and time-use studies (such as diary studies where respondents must record an activity or behaviour at a specific point in time).

1.1 Ensure adequate population coverage

Rationale

The response rate is one indicator of survey quality. Sampling and non-sampling errors can also affect the quality of a survey. No research design is perfect, but efforts should be made to minimize sources of error, independent of the response rate.

Best Practices

1.1.1 Define the research population.

Topic interest plays a role in achieving high response rates. Generally, the more interesting the topic, the more likely people will be to respond (Groves, 2004).

In survey research, the population or universe refers to the target audience or the group of people of interest–for instance, the general public, private sector executives or seniors. The population to be included in the survey must be relevant to the research objectives. Properly defining the population will determine who should be included in the sample and who should not. This step is essential to conducting good quality research and has an indirect impact on response rates. The more important that potential respondents perceive the research to be, and the more relevant it is to them, the more likely they are to respond and take part in the survey. When the target population has no direct link to the survey topic, an effective introduction is critical. Consider how best to frame the research as relevant to these potential respondents (see BP 1.3.2).

1.1.2 Select an adequate sample size.

Select a sample size that relates to the target population, the research budget, the intended data analyses and the required degree of accuracy. Sample size does not have an impact on response rates. Rather, it affects the accuracy of the results. The larger the sample size, the smaller the margin of error and the more reliable the results. Choosing the right sample size will help minimize unnecessary sampling error. It will not help to increase the response rate per se.

To determine the appropriate sample, consider the following factors.

  • Target audience: The size of the survey population, in part, will influence the sample size. Typically, the marketing research industry uses a 10:1 sample-to-completion ratio as a guide. In other words, contact information for 10 potential respondents is needed to achieve one completed interview.
  • Budget: Conducting telephone interviews costs money. Data collection costs are based, in part, on the length of the interview, the number of completed interviews and the incidence level of the population.
  • Data analyses: If there are sub-groups that require analyses, the sample size needs to be large enough to support these analyses with sufficient reliability.
  • Accuracy: The larger the sample size, the smaller the sampling error. The intended use of the data will help guide the size of the sample. If the results need to be highly accurate, a smaller margin of error will be required (a worked example follows this list).
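As a worked example of the accuracy point, the familiar margin-of-error formula for a simple random sample shows how precision scales with sample size. A minimal sketch, assuming a 95% confidence level (z = 1.96) and maximum variability (p = 0.5):

    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """Approximate margin of error for a simple random sample."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (400, 1000, 2000):
        print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} percentage points")
    # n = 400: +/- 4.9; n = 1000: +/- 3.1; n = 2000: +/- 2.2

Note that doubling the sample size does not halve the margin of error; quadrupling it does, which is why precision gains diminish as samples grow.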

1.1.3 Reduce coverage error.

The sample frame is like a map that determines who is eligible to participate in the survey–for example, members of the general public, Ontario teachers or users of a certain government program. It is important to put in place a sample frame that effectively corresponds to the population of interest. After doing so, develop a sample list that includes all elements of the research population and constitutes the source from which survey respondents will be drawn. Coverage error occurs when this list does not include all segments of the target population. Consider a telephone survey of the general public. RDD samples generally include only landlines, not cell telephone lines. In Canada, approximately 94% of households have landlines, 4.8% have cell phones only, and 1.2% do not have any telephone. As a result, an RDD survey of Canadians will have minimal, but still some, coverage error. Minimizing coverage error will increase the likelihood that the information collected accurately reflects the target population.

A high response rate to a survey based on a flawed or incomplete sample frame may not produce valid data. Consider a telephone survey of the general population that results in a high response rate but uses local telephone directories as its sample frame. Given that approximately 10 to 20% of the population has an unlisted or newly listed telephone number, not everyone has an equal chance of being contacted for the survey. As a result, the survey data may not reflect the opinions or attitudes of the segment of the population with unlisted telephone numbers. If this segment of the population differs demographically or attitudinally from people with listed telephone numbers, the survey findings may not be valid.

Common sampling methods for telephone surveys include RDD, sample lists purchased from list brokers and in-house lists (such as lists of clients, members or employees). Regardless of the sampling method used, survey organizations should consider the following.

  • Ensure the sample frame and sample lists are appropriate and relevant to the survey objectives, questions and areas of investigation.
  • Try to obtain a good sample–for example, a sample that has been pre-screened for out-of-scope telephone numbers. A poor-quality sample can have an impact on the field budget because interviewers will spend time trying to call inappropriate numbers (such as not-in-service, fax or modem numbers) or non-eligible respondents (such as business numbers for a household survey).
  • Make sure the sample frame and lists are up to date and accurate. Consider how and when they were updated. To illustrate this point, take a sample frame developed from the previous year's client list for a government program. Not capturing the entire target population–that is, new clients–may introduce coverage bias into the data.
  • Try to include as much demographic data as possible when pulling a sample from established lists. Not only will doing so reduce the length of the interview (see BP 1.2.1), but it will also provide information for non-response analysis, if necessary (see BP 3.0.3).
  • Identify under- or over-represented population segments in your sample lists before fieldwork begins. For instance, studies indicate that telephone coverage rates tend to be lower among low-income households and young people. Use of quotas or targets is a common way to address any such deficiencies. In other words, organizations make more of an effort to reach under-represented groups and set limits related to over-represented groups.
  • Ensure that the sample does not include duplication. People may appear on more than one program or client service list, for example. Cross-check lists before beginning fieldwork; a simple cross-check is sketched after this list.
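A minimal sketch of such a cross-check, assuming each list is a CSV file with a "phone" column (the file names, column name and digits-only normalization are illustrative):

    import csv

    def load_phones(path: str) -> set[str]:
        """Read a contact list and return its telephone numbers, digits only."""
        with open(path, newline="") as f:
            return {"".join(ch for ch in row["phone"] if ch.isdigit())
                    for row in csv.DictReader(f)}

    program_a = load_phones("program_a_clients.csv")
    program_b = load_phones("program_b_clients.csv")
    duplicates = program_a & program_b  # numbers appearing on both lists
    print(f"{len(duplicates)} numbers appear on both lists")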
Cell telephones

The increased use of cell telephones among some segments of the Canadian population presents a growing problem. As more households rely only on cell telephones, telephone coverage error may increase. In December 2005, Statistics Canada reported that 4.8% of Canadian households have only a cell telephone (as compared to 1.9% in 1993). That number rises to 7.1% in B.C. and to 7.7% among low-income households (Statistics Canada, 2005).

Research undertaken in the United States has found that cell-only Americans differ from those with a landline (Purcell, 2006; Pew, 2006; Tuckel et al., 2006). Cell-only users tend to be younger (18 to 29 years) and are more likely to be single, lower income and renters (rather than homeowners). Currently, evidence suggests that the cell-only phenomenon has not undermined national polls (Pew, 2006). Nevertheless, as the proportion of cell-only households increases, it may become prudent to augment RDD samples with cell samples to provide a more representative final sample (Purcell et al., 2006)–for example, one that includes young people, who tend to be underrepresented in RDD surveys.

Research suppliers typically handle sampling issues. Research clients might consider asking their suppliers the following questions.

  • What is the source of the sample frame and when was it drawn? This information should be included in the methodological section of the final report.
  • Was the sample pre-screened for out-of-scope telephone numbers and checked for duplication?
  • Does your RDD sample frame include unlisted or cell telephone numbers? Currently, RDD sample frames do not typically include cell numbers. Interviewing people on a cell telephone presents several unique issues. There are safety risks–for example, a respondent might be driving while completing a survey. Respondents might be distracted or in a public location, which may limit their attentiveness and candour, leading to data quality problems. Also, it may not be ethical to conduct surveys this way, since cell telephone users will be required to pay their provider for the air time needed to conduct the interview. If cell telephones are included in the sample frame, the research supplier should note this fact in the methodology section of the report.

1.2 Minimize respondent burden

Rationale

Respondent burden has a direct bearing on survey participation. The longer and more demanding the interview, the less likely people are to take part or to complete it. Keeping the burden placed on respondents to a minimum is therefore central to maximizing response rates.

Best Practices

1.2.1 Keep the interview as short as possible.

The longer the survey, the less likely people are to take part or to complete the full interview.

Response burden is an unavoidable part of survey research, but efforts to limit it can help maximize response rates. Shorter questionnaires can improve response rates, particularly if interviewers inform respondents that the interview will be short.Footnote 8 In practical terms, surveys of 10 minutes or less are considered relatively short and not overly burdensome. Surveys of 15 minutes are common in federal government POR and do not tend to place an undue burden on respondents. Telephone surveys of 20 minutes or more are less common and can be expected to result in lower response rates, other factors such as survey topic and target audience being equal. Unless a longer interview is necessary, avoid interviews longer than 15 minutes. Keeping the questionnaire as short as possible, while still achieving the research objectives, will help yield higher response rates.

Before designing a questionnaire, it is a good idea to review what is already known about the target population in relation to the study objectives. Assess current information needs, determine whether some of this information is available elsewhere, and prioritize issues and questions to make it easier to manage questionnaire length. This review will help to ensure that departments and agencies collect essential information only. The result will be a focused questionnaire. Not only will a shorter questionnaire increase the response rates of individual studies; limiting response burden will also help to cultivate more favourable perceptions of survey research generally and may increase the likelihood of Canadians agreeing to an interview when contacted for future surveys.

1.2.2 Design a well-structured questionnaire.

A well-structured questionnaire ensures that the data collected satisfy the objectives of the research and minimizes the burden placed on respondents. A good questionnaire collects only the information that is essential to the survey objectives. Consider the following guidelines when developing the questionnaire for a study.

  • Make the survey content as relevant as possible to the respondent.
  • Ensure the introduction is well written (see BP 1.3.2). The introduction is the only opportunity interviewers have to get a potential respondent to agree to an interview. Most people decide in the first seconds of a telephone call whether they will respond to the survey.
  • Pay careful attention to screener wording when interviewers must establish people's eligibility to participate in the survey. Research has shown that when potential respondents know the eligibility criteria, response rates decrease. In other words, people report themselves as ineligible when external eligibility data, such as census data, suggest that they meet the inclusion requirements (Shreffler et al., 2006).
  • Include definitions, explanations and instructions for respondents and interviewers, if necessary. If respondents are unlikely to understand a term used in the questionnaire, give them the definition, if doing so will not compromise the data.
  • Frame each question so that it is as relevant to the respondent as possible. Check question relevance by pre-testing the questionnaire (see BP 1.2.4).
  • Ensure that questionnaire transitions are well positioned and sufficient to guide the respondent through the survey. Unlike online or paper-based questionnaires, telephone surveys are not affected by page layout. The only guides for the respondent are the interviewer and the topic transitions.
  • Write questions that are clear, simple and free of jargon. Ensure the language is appropriate to the target population. Try to keep questions short, replace long words with shorter ones and make sure questions are as direct as possible. Questions that are difficult to understand when read out loud will compromise data quality and make it harder for the respondent to answer. Anything that increases the response burden may decrease response rates.
  • Make sure respondents can answer the questions. Pay attention to skip patterns (a minimal sketch of skip logic follows this list). An interviewer should not ask respondents questions they cannot be expected to answer. For example, if respondents have not used a service, the interviewer should not ask them to rate their satisfaction with the service. Such questions will frustrate respondents, which can lead them to terminate the interview before it is complete.
  • Keep to a minimum the number of repetitive questions, including long batteries of questions (such as lists in which a respondent is instructed, "Please rate the extent to which you agree or disagree with the following 20 items."). While it is tempting to try to cover as much content as possible, doing so can result in respondents providing automatic or less thoughtful responses that do not discriminate among the issues being explored. This tendency may compromise data accuracy. In addition, respondents may be more likely to terminate the interview before it is complete if they view it as repetitive.
  • Avoid repetitive response options, for similar reasons. If the same rating scale is used for each question, do not repeat it at the end of each question unless the respondent asks. Repeating scales after each question lengthens the interview and frustrates respondents who can remember the scale.
  • Use scales that respondents can easily understand. Measurement error can result when each respondent interprets the scales differently.
  • Avoid repetitive question patterns. For instance, avoid using a series of "yes/no" questions that lead to additional questions if the respondent says "yes." Respondents may quickly catch on to this pattern and might begin to say "no" to move through the interview more quickly.
  • Avoid overusing open-ended questions, or consider conducting qualitative research first to address this information need. Closed or semi-closed questions are easier for respondents to answer, require less coding, are easier to track over time and typically provide more meaningful survey data.
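On the skip-pattern point above, routing rules are easiest to verify when they are written down explicitly. A minimal sketch of CATI-style skip logic, using hypothetical question IDs (Q5: service use; Q6: satisfaction; Q7: next topic):

    def next_question(current: str, answer: str) -> str | None:
        """Return the next question ID, honouring the skip pattern."""
        if current == "Q5":                           # "Have you used the service?"
            return "Q6" if answer == "yes" else "Q7"  # non-users skip the rating
        if current == "Q6":                           # satisfaction rating, users only
            return "Q7"
        return None                                   # end of the section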

1.2.3 Review the translated questionnaire.

Closely review the translation of the questionnaire. The language must be as clear and simple as that in the original document. Pay particular attention to the accuracy and appropriateness of the translation. The "correct" translation of a text might not always reflect the popular vocabulary of the target audience. Efforts to produce a well-designed questionnaire in one language will be undermined if the translation is not subject to the same level of scrutiny.

1.2.4 Pre-test the questionnaire.

A well-designed survey will reduce the response burden, which can improve the response rate.

Pre-testing the questionnaire is an excellent way to work out any potential problems with the research instrument before the fieldwork.Footnote 9 A pre-test will help determine the length of the survey and ensure that the questionnaire is measuring what it is designed to measure, that respondents understand the questions and can provide the information requested, and that interviewers understand the questionnaire and the computer-assisted telephone interviewing (CATI) programming. In short, it is an important step in the development of the research instrument.

Consider the following when pre-testing the questionnaire for a study.

  • Pre-test the questionnaire in English and French, if the survey will be conducted in both languages. Not only are the words used in the English and French versions of the questionnaire different, but there can also be differences in the way anglophones and francophones interpret information. If the translated version of the questionnaire is not available in time for the pre-test, conduct the pre-test in one official language, and then re-do it in the second official language once that version of the questionnaire is available.
  • Listen to the pre-test interviews and report any concerns to the research supplier, who should also be listening to the interviews. Pay attention to the flow of the interview, respondents' comprehension and feedback, patterns in respondents' requests to have some questions repeated, and the interviewers' technique and ability to pronounce words in questions. Adjustments to question wording may help elicit the right information from respondents and make their experience easier and more pleasant. If the research instrument is adjusted significantly, think about pre-testing it again.
  • Ask the research supplier to debrief the interviewers after the pre-test. This step can serve two purposes. First, it is an opportunity for interviewers who conducted the pre-test to state whether they noticed any additional issues–such as those related to comprehension, wording and survey flow–that did not arise in the pre-test interviews.Footnote 10 The interviewers are the front-line delivery staff for the survey and are closest to the respondents, so they might have additional insights to share. Second, the debriefing is an opportunity to provide interviewers with direction that might help them take respondents through the interview more professionally or efficiently, based on adjustments desired by the research client and supplier. Pre-tests often result in adjustments to the way in which interviewers phrase questions or code answers. As noted, efforts to reduce the response burden can improve the response rate. Data resulting from the pre-test should not be used as part of the sample if substantial changes are made to the questionnaire.
  • For federal government projects, the current standing offer requires pre-tests with 15 interviews in each language. Additional pre-test interviews may be required if substantial changes are made to the questionnaire.
  • For lower-incidence populations–such as smokers, who comprise 20% of the population–it can be a more efficient use of time to record the pre-test interviews and then distribute the recordings to research team members for their review. This approach eliminates the "down time" otherwise required to reach another respondent on the telephone.

As an additional quality control measure following the pre-test, but before the survey gets well into field, it can be helpful to have the top-line frequencies run after 50 to 100 completed surveys. Reviewing the frequencies can help determine whether people are being routed through the questions they should be asked. This approach not only helps ensure that respondents are asked the questions they should be asked, thus minimizing potential frustration; it is also an excellent check on data quality, conducted at a time when adjustments to the questionnaire are still possible.
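As a rough sketch of that check, assuming the early completes are exported with one column per question (the file and column names, and the Q5/Q6 skip rule, are hypothetical and match the skip-logic sketch above):

    import pandas as pd

    completes = pd.read_csv("first_100_completes.csv")

    # Top-line frequencies for every question, counting skipped (missing) answers.
    for col in completes.columns:
        print(completes[col].value_counts(dropna=False), "\n")

    # Routing check: only respondents who used the service (Q5 = "yes")
    # should have an answer recorded for the satisfaction question (Q6).
    misrouted = completes[(completes["Q5"] != "yes") & completes["Q6"].notna()]
    print(f"{len(misrouted)} respondents were asked Q6 in error")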

1.3 Incorporate methods to encourage participation

Rationale

Reaching a potential respondent is just the first step in the interview process. Once a potential respondent has answered the telephone, he or she needs to agree to take part in the survey. Incorporating strategies designed to encourage participation is critical to achieving high response rates.

Best Practices

1.3.1 Notify potential respondents in advance of the fieldwork, where possible.

Advance notification is more common and practical in special-audience research than in telephone surveys conducted with the general public.

Using an advance information letter can help enlist survey participation and improve response rates (de Leeuw et al., 2006;Footnote 11 Link and Mokdad, 2005). An advance letter explains the background of the study, encourages participation and legitimizes the research. It can be used to position the research as a consultative exercise, particularly among special audiences such as stakeholder groups; respondents may be more interested in participating in a consultation than in simply doing a survey. Likewise, giving clients advance notice of a survey shows respect, which can improve the response rate.

List-based sample frames–such as client, employee or other types of stakeholder lists–provide the best opportunity to send advance notification. Advance notification is less feasible in RDD and most other telephone surveys of the general public, where an accurate list of respondents is not available to the researcher. In some instances, only a subset of potential respondents for whom mailing addresses are available will receive a letter before the telephone call. While the overall response rate may benefit, the subset who received the advance letter may be over-represented in the final sample, which may introduce bias into the survey data.

If the research design incorporates advance notification of the target population, the federal department or agency should send the letter on official stationery, where possible, rather than having the research firm send it on its letterhead. This approach can increase the credibility and perceived importance of the research. A good letter should be kept short–no longer than one page as a general rule–and include the following elements.

  • A personal salutation, when possible: Studies indicate that addressing potential respondents by name will help increase the likelihood that they will agree to an interview.
  • Information on the background and objectives of the research, including the way the results will be used: State any direct or indirect benefits the respondent might receive as a result of the survey, such as improved service. In some instances, it might not be possible to reveal the objectives and goals of the survey, because doing so might influence potential respondents before the interview (see BP 1.3.2).
  • A department or agency contact person whom respondents can call to verify the legitimacy of the research: Not only does this lend credibility to the study; it also helps to minimize any concerns people might have about agreeing to participate (see BP 1.3.6).
  • Assurances of anonymity and confidentiality: When survey responses will remain confidential, it is important to make potential respondents aware of this fact. The knowledge that their privacy will be safeguarded if they participate can encourage those who would otherwise decline participation to take part in an interview (see BP 1.3.3).
  • A senior official signatory: Similar to including the department or agency contact for surveys, including a senior official's signature on a letter will emphasize the importance of the research and encourage participation. The level and position of the signatory should be decided on a case-by-case basis and will vary according to the research objectives and the target audience.
  • A one-sentence introduction identifying the research company conducting the research.
  • Encouragement to participate in the research and thanks for considering doing so.

When it is not possible to use advance letters, consider the following strategies on a case-by-case basis.

  • Promote awareness of the telephone survey through department and agency Web site notices: This approach could be as simple as including a brief notice about the upcoming survey in the "What's New" (or equivalent) section of the Web site. Doing so will help lend legitimacy to the study, as well as prepare potential respondents for the telephone call inviting their participation. Use of this type of approach is most often reserved for special POR initiatives. Time does not typically permit this level of advance coordination.
  • Advertise the survey in local government service centres, if the target population is program clients or benefit recipients (such as Employment Insurance recipients being asked to participate in a client satisfaction survey): While not all clients or recipients will visit a service centre, the department or agency could use the same type of advertising on its interactive voice response (IVR) system. This type of publicity can help to raise awareness among these Canadians that this type of research is taking place and that they might be called to take part.
  • Work with department and agency stakeholders, and other interest groups, to obtain their endorsement or support before the survey: Use of this type of strategy depends on the nature of the research subject and target population. This caveat aside, response rates for research among special audiences can be expected to improve if the key association endorses the survey and encourages members to participate. For example, a telephone survey of family physicians would likely benefit from the support of the Canadian Medical Association.

1.3.2 Use effective survey introductions.

Survey research needs to distinguish itself from telephone marketing and solicitation calls. Professional survey introductions can help accomplish this goal.

Effective introductions are necessary to increase the likelihood that the person will take part in the research. Since most telephone refusals occur before the interviewer has an opportunity to request an interview, an effective introduction should be short and appeal directly to people. Studies have found that the majority of refusals occur during the first minute of the call (Groves, 1990). While survey introductions need to convey a number of points, they should try to do so in the most efficient manner. In the introduction, the interviewer should do the following:

  • properly identify himself or herself;
  • address the respondent by name, if that information is available (consider this approach on a case-by-case basis–it can help the interviewer bypass the gatekeeper, but it can also cause potential respondents to question their anonymity);Footnote 12
  • identify the organization sponsoring the survey, when this information can be revealed (see BP 1.3.5);
  • briefly describe the survey, when the research objectives will not be compromised by revealing this information;
  • inform potential respondents that their help is both important and useful to achieving the research objectives;
  • mention that confidentiality and anonymity will be protected (see BP 1.3.3), if this is the case, but do so quickly and in plain language (otherwise, this information may make people feel less comfortable, not more comfortable, about taking part in the survey);
  • refer to any incentive available to respondents (see BP 1.3.4);
  • estimate the interview length, particularly for short surveys of 10 minutes or less, since this can boost response rates (if interviewers do not provide this information in the introduction, they should be instructed to offer it to the respondent if asked); and
  • ask whether this is a convenient time for the respondent to conduct the interview and, if it is not, ask for a good time to call him or her back.

Informing potential respondents of the topic of the survey may increase response rates in some instances–for example, when it is an interesting topic or one especially relevant to the respondent. People cooperate at higher rates on surveys of interest to them, and this tendency is most evident when the topic is stated in the introduction. However, revealing the topic may compromise the research objectives. For example, in policy studies, an organization may want to hear from all segments of the general public, not only those most interested in the specific policy area. Identifying the topic can sometimes lead to unwanted self-selection, where certain types of respondents opt into a survey and others opt out. If unsure about the impact on the research objectives of identifying the topic, use topic-neutral language in the survey introduction. For example, the interviewer could say he or she is "calling to discuss current issues of interest to Canadians."

1.3.3 Offer assurances of confidentiality.

Example of privacy language

Your participation in the survey is completely voluntary and will not affect any dealings you may have with the Government of Canada. Your privacy is protected by law.

Assurances of confidentiality may allay concerns that potential respondents might have about survey participation. All surveys conducted by the federal government must contain privacy language, although the specific language included in survey introductions varies from department to department. While the specifics vary, privacy and confidentiality language should be appropriate to the survey, its objectives and its target population.

In addition, departments and agencies should not request personal information from respondents that is not relevant or essential to the survey. If such information is essential to the survey analysis, interviewers should, when necessary, explain to respondents why the information is important and how the data will be used. Personal information might include any demographic information not absolutely required for analytical purposes, such as details about racial background, religion or sexual orientation that the respondent might view as sensitive.

1.3.4 Consider using incentives, where possible.

Incentives are logistically difficult to use in telephone surveys.

There is a general consensus among survey researchers that monetary and non-monetary incentives are an effective way to increase the response rate of a study (Fahimi et al., 2006). Incentives are particularly useful in surveys where the response burden is high–that is, where the respondent has to make an exceptional effort. Where possible, offer the incentive to respondents when first contacting them to take part in survey research (Church, 1993). Compared to no incentives at all, incentives provided after the survey is completed do not significantly improve response rates (Singer et al., 2000).

Incentives have some drawbacks. As well as increasing costs, they may increase the public's expectation of payment, induce perceptions of inequity (if, for example, they are used only to convert refusals), and affect sample composition or responses to specific questions, with those receiving incentives potentially answering more positively. These concerns, especially those related to optics, are only amplified in the context of conducting research for the Government of Canada.

Despite these weaknesses, incentives may be appropriate for some Government of Canada telephone surveys. Incentives may be useful if a survey has one or more of the following aspects.

  • There is potential for bias due to non-response.
  • The survey places a significant burden on the respondent.
  • The target population is low incidence–for instance, youth smokers ready to quit.
  • The target population is hard to reach–for instance, physicians.
  • The study is a complex one, such as a longitudinal study or a panel.
  • Incentives may lead to net cost savings–for instance, by reducing the number of callbacks.
  • The overall research budget is substantial.

Distributing a research summary to special-audience respondents is a valuable and relatively common type of non-monetary incentive. Individuals taking part in such research tend to be stakeholders or other professionals who can benefit from, and attribute value to, the findings. Stakeholders are often interested in the outcome of the study to which they contributed, while other professionals see value in the competitive intelligence afforded them by a summary of the findings. Use of this form of incentive is appropriate and effective within the federal government context to increase response rates.

Other common incentives include monetary awards, gift certificates and entries in prize draws. Some literature suggests that the amount of a monetary incentive is less important to respondents than the fact that they receive an incentive–in other words, the incentive need only be symbolic.

1.3.5 Reveal survey sponsorship.

Sponsor identification

Identifying the sponsor of a survey can increase favourable opinion of the sponsor. Attitudinal results from sponsor-identified surveys should not be compared to surveys where the sponsor is not identified, such as general omnibus surveys.

Revealing the sponsor of a survey can increase response rates, depending on the legitimacy and public perceptions of the organization.Footnote 13 Research suggests that government-sponsored or -conducted surveys achieve higher response rates than those of most other organizations (Heberlein and Baumgartner, 1978; Groves and Couper, 1998).Footnote 14 As such, identifying the Government of Canada, or a department or agency, as the sponsor of the survey can increase the response rate. Except for surveys such as awareness studies, where disclosing the sponsor would compromise the survey objectives, the practice should be to reveal sponsorship. The Government of Canada should be emphasized as the study sponsor; the name of the contractor conducting the research on behalf of the government should only be provided after the government is identified. If the department or agency is not well known, the survey introduction should identify the Government of Canada as the study sponsor, either with or without the department or agency name. For example, the interviewer could say, "XYZ Canada, an agency of the Government of Canada, is sponsoring this study."

1.3.6 Offer a validation source.

Government of Canada telephone surveys should offer potential respondents the name and telephone number of a validation source for the study, if they ask for it. This source should be a contact at the sponsoring department or agency–typically, the POR buyer or end client who commissioned the research. The level of the individual is far less important than his or her knowledge of the POR study, including why it is being conducted, how the research firm obtained individuals' contact information and how the government will use the data collected through the survey. Either one bilingual contact person or one person fluent in each official language is required. In addition, all surveys should be registered with the Marketing Research and Intelligence Association's (MRIA's) Survey Registration System, so that potential respondents can call a toll-free MRIA number to determine that the survey is legitimate.

Another effective validation approach in some instances is to refer potential respondents to a relevant toll-free number in the government blue pages of their telephone directory, such as the telephone number for the Employment Insurance Program or the Canada Pension Plan. This approach is particularly useful when surveying seniors, because they are often the target of telephone scams and can be more cautious in dealing with unsolicited telephone calls. This approach may not be practical for all POR telephone surveys; however, it is worth considering, depending on the scope of the research and the target audience. A related approach is to offer potential respondents the telephone number of the media relations office of the sponsoring department or agency.

1.3.7 Inform relevant government call centres or offices about the survey.

Related to the previous point, call centres and other relevant offices of the sponsoring department or agency should be informed of the survey in advance. Even if interviewers do not directly refer potential respondents to the call centre or other office to validate the survey, people may call anyway, asking about the research. This point is particularly relevant to client and stakeholder surveys. POR officials should notify relevant officials about the survey and provide them with information they can use to respond to enquiries. Consider developing a brief Q&A document for the media relations officer in the communications area of the department or agency undertaking the study.

Footnotes

Footnote 2

For a good discussion of online sampling concerns, see Guha (2006).


Footnote 3

RDD can be used to help overcome the lack of complete telephone listings, but no equivalent to RDD is available for online surveys.


Footnote 4

Tavassoli and Fitzsimons (2006) found that people respond differently to the same question when typing an answer rather than saying it. Response modes that require written, not spoken, answers (such as online surveys) change the representation of attitudes and behaviours. The implication drawn from this study is that online surveys may not be useful in discerning changes in attitudes over time.


Footnote 5

Using data on response rates for 205 telephone surveys, McCarty et al. (2006) found that even a one-day increase in the length of the field period (per 100 cases) resulted in a 7% increase in the response rate.


Footnote 6

Studies undertaken by Keeter et al. (2000) and Halpenny and Ambrose (2006) found that response rates for identical surveys improved substantially the longer the surveys remained in field.


Footnote 7

Gallagher et al. (2006) found that maintaining consistently high response rates over time in parallel RDD surveys required an increasing number of field hours and call attempts per completed interview.


Footnote 8

See, for example, McCarty et al. (2006) or Dillman et al. (1993). While the literature examining the impact of survey length on response rates is not conclusive (see Bogen, 1996), logic and practical experience suggest that longer questionnaires will result in lower response rates.


Footnote 9

The importance of conducting a pre-test is reflected in the Office of Management and Budget standards, which make pre-tests mandatory (unless the instrument has previously been used successfully in the field, such as in a tracking survey).


Footnote 10

If the research team is monitoring the pre-test interviews live at the field house, it is generally not possible for the team to hear all of these interviews, since they usually run concurrently.


Footnote 11

This recent meta-analysis of the impact of advance letters on response rates for telephone surveys concluded that pre-notification is an effective strategy. Average response rates went from 58% (no letter) to 66% (advance letter).


Footnote 12

See ZuWallack (2006) for an RDD household respondent selection method designed to increase response rates and reduce survey costs.


Footnote 13

Beebe (2006) found that familiarity with a survey sponsor increases the likelihood of participation.


Footnote 14

Harris-Kojetin and Tucker (1999) found that during times when public opinion of the government was favourable, cooperation rates on a major government survey were higher.


Document "Improving Respondent Cooperation for Telephone Surveys" Navigation