Survey Methodology
2. Design and Methods

2.1 Sample Design

Telephone Frame: Universe of All Possible Residential Telephone Numbers

The survey used list-assisted RDD techniques to select a nationally representative set of telephone numbers within all valid telephone exchanges in the United States. This process involved restricting the sampling frame to all 100 banks[6] with at least one residential number listed in a published telephone directory. A stratified systematic sample of telephone numbers was selected from the frame with a random start. This approach allowed inclusion of unlisted telephone numbers in the sample.

The study used GENESYS databases for the list-assisted RDD sampling. The Marketing Systems Group[7] (MSG) generates and updates the GENESYS hundred-series banks twice each year using the Donnelley DQI2 Database. This Donnelley database contains approximately 65 million listed residential telephone numbers nationwide and is updated continuously as new White Page directories are published. The telephone numbers are collapsed to the hundred-series level, providing a count of listed households for each bank. This frequency provides the basis for defining the standard database (1+ banks) as well as its complement (i.e., those hundred-series banks with zero listed households). The GENESYS sample generation methodology produces an equal-probability (epsem) RDD sample of telephone numbers in the 1+ banks. The final step in the preparation of the GENESYS database is the imposition of a strict geographic hierarchy. This underlying hierarchy creates twenty implicit strata: a combination of ten divisions (the 9 Census divisions plus Hawaii and Alaska) and a metropolitan/non-metropolitan split within each. The purpose of ordering the GENESYS database with such a strict geo-metro hierarchy is to ensure strict geographic representation, especially within larger geographic sample frames. Imposing even this implicit stratification on the RDD sampling process tends to reduce the expected sampling variation relative to that of a simple random sample (SRS) of the same size.[8]
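
To make the selection mechanics concrete, the sketch below draws a systematic sample with a random start from a frame of 1+ banks ordered by the division/metro hierarchy. It is a minimal Python illustration under stated assumptions; the Bank type, field names, and toy frame are inventions for the example, not GENESYS internals.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Bank:
    division: int   # one of ten divisions (9 Census divisions plus HI/AK)
    metro: int      # 1 = metropolitan, 0 = non-metropolitan
    area: str       # 3-digit area code
    exchange: str   # 3-digit exchange
    bank: str       # first two of the last four digits (the "100 bank")

def systematic_sample(frame, n):
    """Equal-probability (epsem) systematic sample with a random start."""
    k = len(frame) / n                  # sampling interval
    start = random.uniform(0, k)        # random start in [0, k)
    return [frame[int(start + i * k)] for i in range(n)]

# Toy frame of 1+ banks; a production frame holds hundreds of thousands.
banks = [Bank(d, m, "301", str(e), str(b))
         for d in (1, 2) for m in (0, 1) for e in (315, 316) for b in (59, 60)]

# Sorting by the geo-metro hierarchy imposes the implicit stratification...
banks.sort(key=lambda x: (x.division, x.metro, x.area, x.exchange, x.bank))

# ...and expanding each bank to its 100 numbers defines the sampling frame.
numbers = [f"{b.area}-{b.exchange}-{b.bank}{d:02d}" for b in banks for d in range(100)]
print(systematic_sample(numbers, 8))
```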

Business and Nonworking Numbers

Once the sample of telephone numbers was selected, the next task was to determine whether sample numbers were residential, business, or nonworking. Two methods for reducing the cost of identifying nonresidential numbers were used in this survey. One method was a computer match of all the sampled telephone numbers against a file of Yellow Page listings of business numbers. Any telephone number identified in this matching process as being only in the Yellow Pages was classified as nonresidential and excluded from dialing. The second method was an automated procedure that dialed all the sampled telephone numbers prior to the start of the field period to detect a tritone message (the distinctive three-tone signal heard when a nonworking number is reached). Phone numbers with a tritone message were classified as nonworking and excluded from the sample. Out-of-scope numbers missed by these two methods were later identified by Telephone Research Center (TRC) interviewers.
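
A minimal sketch of this purging step appears below, assuming the two exclusion lists (Yellow Pages-only matches and tritone hits) are already in hand; the function name and number formats are illustrative only.

```python
def purge_sample(sampled, yellow_pages_only, tritone_hits):
    """Drop numbers flagged as business-only (Yellow Pages match) or
    nonworking (tritone detected by the pre-field autodialer); numbers
    missed here are left for TRC interviewers to resolve by hand."""
    out_of_scope = set(yellow_pages_only) | set(tritone_hits)
    return [n for n in sampled if n not in out_of_scope]

sampled = ["301-315-5901", "301-315-5902", "301-315-5903"]
print(purge_sample(sampled, yellow_pages_only={"301-315-5902"},
                   tritone_hits={"301-315-5903"}))   # -> ['301-315-5901']
```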

The RDD sample frame did not include any cellular numbers, because cellular 100 banks are excluded during the first sampling stage.

TTY/TDD Telephone Numbers

The process the study used to select the sample of list-assisted RDD numbers did not exclude any TTY/TDD telephone numbers. Using TTY/TDD telephones, trained staff called the numbers flagged as fax/modem lines to identify which were TTY/TDD lines and whether they belonged to households or to businesses/organizations. Staff then arranged to survey these households through alternative methods (Internet or mail).

Numbers in Institutions and Group Quarters

The sample frame included telephone numbers for institutions and group quarters, which were not eligible for the survey. The interviewers were trained to interrupt these interviews and classify the phone number as ineligible.  

Sample Selection

CATI screening was used to identify households with one or more occupants of any age with a disability (see CATI questionnaire items B2a.-B2e.). In all such households, the study randomly selected one person with a disability; if there was more than one such person, the study used the birthday rule, selecting the person with the nearest impending birthday. CATI screening was also used to identify persons without disabilities from a subsample of the households in the sample. The subsampling was designed to achieve roughly equal numbers of interviewed persons with and without disabilities; interviewing all persons without disabilities in all screened households would have yielded more interviews than the required target. Approximately one in three sampled households was used to screen and select one person without disabilities (independent of the presence of persons with disabilities in the household). Following a procedure similar to that used to select persons with a disability, one person without a disability was randomly selected among all persons without disabilities within the household. An extended interview was then conducted with the sampled person(s), with and without disabilities, either by self-report or by proxy, depending on age (under 16 years old, or 16 and 17 years if parents would not allow a direct interview) and condition (respondent not able to respond for themselves).
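
The within-household selection can be sketched as follows. This is an illustrative Python rendering of the birthday rule and the one-in-three subsampling, not the study's production code; the household records, field names, and the per-household random draw are assumptions for the example.

```python
import random
from datetime import date

def days_until_birthday(birthday, today):
    """Days from today to the next (impending) birthday.
    Leap-day birthdays are ignored for brevity."""
    nxt = birthday.replace(year=today.year)
    if nxt < today:
        nxt = nxt.replace(year=today.year + 1)
    return (nxt - today).days

def select_respondents(household, today):
    """One person with a disability is always selected; in roughly one
    in three households, one person without a disability is selected as
    well. Both draws use the nearest-impending-birthday rule and are
    made independently of each other."""
    nearest = lambda people: min(
        people, key=lambda p: days_until_birthday(p["birthday"], today))
    chosen = []
    with_dis = [p for p in household if p["has_disability"]]
    if with_dis:
        chosen.append(nearest(with_dis))
    if random.random() < 1 / 3:            # the one-in-three subsample
        without = [p for p in household if not p["has_disability"]]
        if without:
            chosen.append(nearest(without))
    return chosen

household = [
    {"name": "A", "has_disability": True,  "birthday": date(1960, 3, 14)},
    {"name": "B", "has_disability": False, "birthday": date(1985, 11, 2)},
]
print(select_respondents(household, date(2002, 7, 12)))
```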

Given the choice of an RDD survey of the household population, estimates of the transportation patterns of persons with a disability were likely to be somewhat understated due to the exclusion of those who do not have telephone service. The study adjusted the weights of respondents to reduce the bias due to exclusion of the household population without telephone service (see discussion in Section 4).

In case the eligibility rate was less than expected, additional randomly selected phone numbers were drawn as a reserve sample at the same time as the original sample. A total of 40,000 phone numbers were released for this study; this figure included businesses, non-working numbers, and other cases that were ultimately purged as ineligible for the survey. Of the 10,327 completed household screener interviews, the survey identified 2,531 households with at least one person with a disability, or 24.5 percent of the households (see Table 3.3, below). This was only slightly lower than the original estimate of 27 percent from the analysis of the National Health Interview Survey on Disability (see Introduction).

The original estimate assumed the need for 31,000 phone numbers for the RDD sample. This was based on assumptions of a residential rate of 43 percent among the RDD sample, an overall response rate of 60 percent, and the 27 percent rate of households with at least one person with a disability. Early results and forecasts from the survey indicated that each of these figures would come in slightly lower than assumed. For this reason, another 9,000 RDD phone numbers were released from the reserve sample, for a total of 40,000 (see Section 3, Response Rates, below). These numbers were released at least six weeks before the end of the survey to allow them to be fully worked, including opportunities for call backs, refusal conversions, and other steps to enhance response rates (see Data Collection Methodology, below).
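
The planning arithmetic can be reproduced directly from the stated assumptions; the short check below is illustrative only.

```python
# Worked check of the planning assumptions described above.
released        = 31_000   # RDD numbers originally assumed
residential     = 0.43     # assumed residential rate among released numbers
response_rate   = 0.60     # assumed overall response rate
disability_rate = 0.27     # assumed share of households with a disability

screeners = released * residential * response_rate
print(f"expected completed screeners:   {screeners:,.0f}")                    # ~7,998
print(f"expected disability households: {screeners * disability_rate:,.0f}")  # ~2,160

# Each rate ran slightly below assumption, so the release grew to 40,000
# numbers; the realized counts were 10,327 screeners, of which 2,531
# (24.5 percent) had at least one person with a disability.
```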

2.2 Data Collection Methodology

Survey Instrument Development

A draft of the survey instrument was developed by the BTS and its partners within the DOT. An Expert Panel reviewed this instrument and made recommendations for changes to help enhance intelligibility among the respondents and ease of administration by the interviewers.

To ensure that the questions flowed logically, the survey instrument was organized so that items on similar topics were grouped within one section (e.g., all questions about motor vehicle use were grouped together).

Cognitive interviews were conducted to ensure that respondents would clearly understand the questions and respond appropriately to the response alternatives. Cognitive testing has become increasingly popular over the last decade as a technique for testing survey instruments. It involves semi-structured administrations of the instrument designed to yield insights into the cognitive sources of potential response errors. Cognitive testing addresses concerns such as the following:

  • Do participants in the cognitive testing adequately comprehend the instrument questions?
  • Do these respondents recall information that is necessary for answering them?
  • Are the response choices understood?
  • Are the choices mutually exclusive and exhaustive?

Forty-one respondents participated in the cognitive testing, 20 with and 21 without disabilities. To obtain an understanding of experiences with transportation in different parts of the U.S., interviews were conducted with persons living in rural North Carolina and in metropolitan areas such as San Francisco, Cleveland, New York, and Washington, DC. In addition, two of the interviews were conducted with proxies: one for a person with and the other for a person without disabilities. Each respondent received $25 for participating in the cognitive interviews. Six of the interviews were conducted in person, while the remainder were conducted by telephone.

The following enhancements were recommended based on the results of the cognitive interviews:

  • Include a question in the screener to desensitize respondents and emphasize that this is a study about the use of transportation, including transportation by persons with disabilities;
  • Ask the extended interview respondents to answer the disability screening questions (if another person responded to the screener), to verify the accuracy of the information given by the screener respondent;
  • Place sensitive demographic questions at the end of the interview. Some respondents might be discouraged from participating by the income and race questions. Placing them at the end gives interviewers a chance to build rapport with respondents;
  • On the survey instrument, precode the most frequently occurring cognitive testing responses for ease of recognition and data entry by the interviewers (e.g., reasons for ceasing to drive);
  • Standardize the way in which the number of trips is articulated and coded (e.g., the number of round trips versus one-way trips); and
  • Include transition statements when the topic changes within a section to help the respondents interpret the questions within the intended context.

In addition to the cognitive interviews, the survey instrument was pre-tested on persons with and without disabilities using the CATI test version of the instrument. This allowed simulation of the actual conditions of interviewing and confirmed the administration time for completion. This process also allowed testing of the CATI instrument to identify any errors in skip patterns and proper recording of responses in the CATI data file.

Pre-Screening Letter

The mailing of notification letters prior to the first telephone call to households has been shown to improve overall cooperation rates.[9] After the list-assisted RDD sample was developed and purged of business telephone and non-working numbers, the residential cases were passed through the databases of multiple vendors to append mailing addresses for the sampled telephone numbers.

Address matches were obtained for approximately 77 percent of all in-scope telephone numbers. The address match rate was 88 percent for all completed interviews. Households for which mailing addresses were obtained were sent a prenotification package. The package contained a cover letter with the names and toll-free numbers for staff members at the BTS and for other members of the study team for the household to contact with questions and comments. The advance mailing also included a brochure describing the study. These materials were developed by BTS staff.

Interviewer Recruiting, Hiring, and Training

The study team selected interviewers for the survey from a pool of more than 4,700 experienced interviewers. Personnel who had worked on similar studies in the past and had proven themselves were given a high priority in the selection process. The study team maintains a computer database listing these experienced telephone interviewers who are available for continued work. These files also contain performance evaluations to aid in the selection of qualified candidates.

In addition to this pool of experienced interviewers, the study team had the ability and experience to hire new interviewers when it was necessary to do so. In general, the study used the following criteria when evaluating interviewers:

  • Communication skills to interact with respondents on the telephone;
  • Reading skills to follow instructions and pay attention to detail;
  • Motivation to produce high quality work; and
  • Availability to work the hours needed to perform the necessary tasks.

To assess these characteristics in potential personnel, the study team relied on personal interviews, contact with personal references, an assessment of previous experience, and observations during training sessions.

The study used the following techniques to ensure that the process of hiring interviewing staff yielded the best candidates possible:

  • If a candidate had worked on previous studies, the candidate's former supervisor was contacted for an evaluation of the candidate's performance. Reference checks were also conducted for each candidate considered;
  • All candidates were required to complete a standardized form requesting detailed information about their educational and work histories, their specific data collection experience, references, and availability;
  • Personal interviews were conducted with each candidate. The candidates were administered a standardized practice interview, to judge their reading abilities, pacing, and voice quality; and
  • Finally, each data collector received formal training for the survey. If performance during this training session was inadequate, interviewers were retrained or dismissed before starting work on the survey. New staff would also be subject to a 30-day probationary period when they began work on the survey, with retraining or dismissal for inadequate performance. However, all the interviewers on this survey were experienced interviewers.

A total of 84 interviewers were used for the survey, all of whom were current or former employees with experience in telephone interviewing; so there were no new hires for this study. All interviewers were under the direct supervision of an experienced group of supervisors, and the ratio of supervisors to interviewers was 1 to 5, or 20 percent.

Training of Interviewing Staff

The Telephone Research Center (TRC) interviewer training followed a structured process to prepare interviewers to conduct interviews in a professional, controlled, and consistent manner. The main purpose of the training was to familiarize interviewers with all interview-related terms, every question on the survey and related screener, and all answer categories and answer-dependent skip patterns. Thorough training contributes to increased response rates because interviewers who are familiar with the survey instrument sound confident on the telephone and can easily answer questions respondents may have about the survey.

Generic Training

All interviewers hired by Westat receive 4 hours of General Interview Technique (GIT) training before they are assigned to a survey. The GIT training includes:

  • An introduction to survey research;
  • The basics of telephone interviewing;
  • Samples of types of survey questions and recording conventions;
  • Interviewer roles and responsibilities including refusal avoidance techniques;
  • Suggestions for specific probes to help interviewers clarify answers;
  • Confidentiality;
  • A review of the monitoring that is done by telephone center supervisors; and
  • An interactive training session on the use of the Computer Assisted Telephone Interviewing (CATI) system.

Study-Specific Training

Every interviewer assigned to this project received 16 hours of training designed specifically for this survey. The main training document was a comprehensive interviewer's manual, which described all survey procedures for the interviewer. It provided an overview of the survey, and question-by-question specifications for each item in the questionnaire. For this study, a significant portion of the training involved sensitivity to the needs of persons with disabilities, such as interviewing those with hearing, other physical, or mental conditions. Interviewers received copies of respondent materials such as the introductory letter and brochure to review during training to ensure they were fully prepared for any questions respondents asked.

Conduct of Training

Training began with an introduction to the survey and an interactive lecture, during which the specifications for each question in each of the data collection instruments (screener and extended questionnaires) were reviewed. This lecture was followed by a group role play, in which the trainer took on the role of a respondent while the trainees took turns being interviewers. During this exercise, the interviewers were encouraged to raise questions about areas of confusion. Ways of handling these areas and "problem" responses were discussed during this exercise. Interviewers who were having problems were identified during the group role-plays and were followed more closely and given special assistance, if needed, during the rest of the training sessions.

Just prior to beginning live interviews, trainees participated in dyad role plays. One trainee acted as the respondent, using a script provided by the trainers. The other trainee acted as the interviewer and had to decide how to code responses, practice probing, and use refusal avoidance techniques. After completing one role play, trainees switched roles. These role plays were designed to further familiarize the interviewer with the wording and skip patterns in the questionnaire, and they also allowed telephone center supervisors to observe the interviewing skills of the trainees. Trainees needing further help received additional coaching. No trainees were allowed to conduct live interviews until the telephone center staff had observed them successfully completing the role-play interviews. Each dyad was observed closely by a member of the training staff.

The interviewers were also thoroughly trained in the survey contact procedures and in refusal avoidance techniques to help with the more difficult participants.

Techniques for Interviewing People with Disabilities

Of equal importance for this study was training interviewers to be sensitive to the needs of people with disabilities. Interviewers were trained on issues related to interviewing persons living with disabilities, including those with hearing or other physical disabilities and/or cognitive or mental health disabilities. During training, the following topics were discussed:

  • Methods to accommodate individuals' needs (e.g., use of an interpreter, use of proxies when appropriate, breaking interviews up into two or more sessions, speaking slowly, communicating with people who have difficulty concentrating or communicating, repeating questions, etc.);
  • How various disabilities affect the person's ability to communicate, and that a difficulty communicating does not suggest problems with intelligence or understanding;
  • That these interviews are really no different from other telephone interviews. Everyone who participates in a survey should be treated with respect;
  • To attempt to interview the person directly, and to not assume that a difficulty in communicating would require an interpreter or a proxy; and
  • Guidelines for determining when an interpreter or a proxy would be necessary, with the final decision left to supervisory personnel. Proxy interviews were allowed under specific limited circumstances, and when the respondent was a child under the age of 16. All interviewers were carefully monitored throughout the data collection period to ensure that they were conducting themselves in an appropriate manner.

Interviewers were trained to be sensitive to the need for calling back for an interview, or calling back to complete an interview in another session.

The types of disabilities interviewers were told they might encounter were:

  • Cerebral Palsy (person may have speech impairments);
  • Traumatic Brain Injury (person may have short- or long-term memory impairments);
  • Blindness/Visual Impairment (most people who are "blind" do have some sight);
  • Stroke (may have speech, memory, and processing impairments);
  • Deafness/Hearing Impairment (may need an interpreter);
  • Cognitive Impairment (may need an interpreter); and
  • Paralysis due to illness or injury (may need to have the interview broken into more than one session, depending on how the paralysis has affected the respondent).

Interviewers were trained in how to provide appropriate accommodations when requested by respondents.

Training Dates and Agenda

Training was conducted at the telephone facility just prior to the start of data collection. Training was held on July 8 to 11, 2002, and again with a second group of interviewers on July 15 to 18, 2002. Both training sessions were conducted in the evening. The last session of each interviewer training program involved on-line interviewing with actual respondents under the close observation of trainers and supervisors. The first group of interviewers began work on live interviews on July 12, 2002, and the second group began on July 19, 2002.

The agenda for the training sessions (four evening sessions of four hours each) was:

Day 1: Introduction; Voice quality demonstration; Screener interactive; Group interactives (2)

Day 2: Contact procedures; Group interactive

Day 3: Contact procedures exercise; Sensitivity training; Refusal avoidance; Problem sheet review; Interviewer questions

Day 4: Dyad role plays; Contact procedure role plays

Refusal Conversion Training

Approximately two weeks after the start of interviewing, refusal conversion training was conducted. The first step in refusal conversion was to collect information about the refusal at the time it occurred. Each refusal was documented using a non-interview report form that was part of the CATI system. The form recorded the date and time of the refusal; the point at which the subject terminated the contact; what, if anything, the respondent said when terminating the interview; and the interviewer's assessment of who the respondent was (i.e., male or female, young or old). This information was useful to the interviewer assigned the refusal conversion.

Regular monitoring tracked the performance of the interviewers (on at least a weekly basis) using reports that indicated each interviewer's response rate. Interviewers with the highest response rates were asked to take on refusal conversion responsibilities. These interviewers received special refusal conversion training, focusing on what motivates subjects to respond. They were also trained to review carefully the circumstances that led to the original refusal, as documented in the non-interview report form. They listened as experienced refusal conversion interviewers described the techniques they use to convince reluctant subjects to cooperate. Finally, they practiced refusal conversion extensively before receiving their first assignments.

The progress of the refusal conversion effort was monitored carefully. Supervisors reviewed the results of the effort as they were documented in the weekly reports. Even interviewers who had done refusal conversion on other studies were given refusal conversion training specific to this study. This assisted the interviewers in responding to objections that were particular to this study.

Addition of the Internet and Mail Options

This study had a unique offer to make to respondents who refused: during refusal conversion, if the respondent still refused to do the interview by telephone, the refusal converter was told to offer an Internet or mail version of the survey (see the attached questionnaires). These versions of the survey were initially developed to address the possibility that some of our respondents' disabilities might prevent them from completing the survey by telephone.

Interviewers were told to proceed as follows, if persons had refused a second time to respond by telephone:

  • To offer the Internet as an alternative, followed by the mail option;
  • If the respondents said they had Internet access, the interviewers were told to ask for the respondent's email address so that a password and a link to the survey could be sent;
  • If the respondents refused to give their email address, the interviewer asked for their mailing address to send them the Web site and password for the survey. A copy of the mail survey was also included, in case the respondent was unable to complete the survey on the Internet. Respondents were asked to either complete the Internet or the mail version, but not both;
  • If the respondents refused to complete the telephone survey and did not have Internet access, the interviewers offered to mail them a questionnaire; and
  • If the respondent refused, the interviewer thanked them and ended the interview. The results of the above refusal conversion efforts were recorded into the CATI software for tracking purposes.

Training of Text Telephone/Telecommunications Device for the Deaf (TTY/TDD) callers

The study also tried to identify sampled numbers that might be TTY/TDD phones. TTY machines were then used to determine which of these numbers belonged to households that had only these devices.

Possible TTY numbers were defined as numbers that were identified prior to the interviewers making calls as fax or modem lines. In addition, any numbers that were identified by the interviewers as possible fax or modem lines during CATI calls were also defined as possible TTY numbers. The study used this definition because the sound a TTY machine makes can be mistaken for a fax or modem line (even though the sounds are somewhat different from each other).

Training for making the calls to the possible TTY/TDD numbers was conducted with a group of four interviewers on August 23, 2002. The training was conducted by a hearing-impaired Westat employee who used a TTY machine to demonstrate the use of this device to communicate with hearing-impaired respondents. This trainer included information about specific TTY "shorthand" used, such as typing "R" for "are" and "GA" for "go ahead" to signal the respondents when it was their turn to respond.

Interviewing and Data Collection Methods

Computer Assisted Telephone Interviewing (CATI) began on July 12, 2002. Because this survey was a telephone study of persons with disabilities, there was a concern that people who had communication difficulties (i.e., speech or hearing difficulties) would not be able to participate in the study, even with the use of interpreters or, in limited circumstances, proxies.

Therefore, an Internet version and a mail version of the survey were developed. The Internet version mimicked the CATI version as closely as possible; its many skip patterns and questions with long lists of possible answers were easily administered on the Internet. For the mail version, the questionnaire was shortened to include only some of the questions (see attached questionnaire). For example, questions that had long lists of response items in CATI were asked as open-ended questions in the mail version. This was actually closer to the CATI administration, because CATI interviewers asked those questions without reading the categories aloud and then coded the respondent's answers into the available categories. In the Internet version, by contrast, the categories were displayed and the respondent was asked to select the specific answer categories, rather than provide an open-ended response for later coding.

Refusal Conversion

Initially, the Internet and mail versions of the survey were only offered during refusal conversion efforts. This was done to avoid having respondents agree to complete the survey via the Internet or by mail just to avoid the interview during the initial call into the household. Refusal conversion efforts were begun on July 30, 2002, approximately two weeks after the start of data collection.

Mail Follow-Up

On August 30, 2002, cases that had a final result code indicating the interview could not be completed because of a language or hearing problem, and cases that were coded as a mild refusal, were sent the mail version of the survey, if there was a matching mailing address for them.

The study also sent a letter via FedEx on September 13, 2002, to 648 households where interviewers had reached an answering machine, but had not reached a household member to complete the survey (and for which there was a mailing address). This was done to encourage the households to participate in the survey.

TTY/TDD Calls

The study identified 1,032 possible fax/modem numbers prior to the start of interviewing, and interviewers identified another 1,128 such numbers during CATI calling. On August 28, 2002, interviewers began calling these numbers. At least two calls were made to each number. Any number re-identified during these calls as a fax/modem line was called at least once more to verify that it was indeed not a TTY. All other numbers were called at least twice, and possible residences (e.g., where interviewers found an answering machine, a busy signal, or a ring with no answer) were called three times. Calls were made to these numbers through September 29, 2002 (the close of data collection).

When a TTY machine was encountered, the respondent was offered the Internet or mail version of the survey. However, all 16 TTY respondents refused to complete the survey.

Summary of Data Collection Dates and Times

The interviews were conducted at two locations, one on the East Coast and one on the West Coast. Calls were made to respondents from 9:00 AM to 9:00 PM (respondent's time) Monday through Friday, 10:00 AM to 6:00 PM on Saturday, and 2:00 PM to 9:00 PM on Sunday. Approximately 40 interviewers were trained at each of the two locations, ensuring a large available pool of interviewers. Enough interviewers were scheduled at each location to cover the available work. The following table shows the hours the telephone facilities were open.

Telephone Research Center (TRC) Operating Days and Times

East Coast Operating Days and Hours (EST)
Monday through Friday: 9:00 AM to 10:00 PM
Saturday: 10:00 AM to 6:00 PM
Sunday: 2:00 PM to 10:00 PM

West Coast Operating Days and Hours (PST)
Monday through Friday: 7:00 AM to 9:00 PM
Saturday: 7:00 AM to 6:00 PM
Sunday: 11:00 AM to 9:00 PM

Source: 2002 Transportation Availability and Use Study

The following tables summarize the data collection dates. The dates shown are the recorded dates when each data collection mode was exercised.

CATI

West Coast: Calls made July 12, 2002, through September 29, 2002
East Coast: Calls made July 19, 2002, through September 29, 2002

Mail

Refusal conversion: Mail surveys sent July 30, 2002, through September 9, 2002 (completes accepted through September 29, 2002)
Language, hearing, and mild refusals: Mail surveys sent August 30, 2002 (completes accepted through September 29, 2002)
TTY/TDD: Mail surveys sent August 30, 2002, through September 23, 2002 (completes accepted through September 29, 2002)

Internet

Refusal conversion: Offered August 5, 2002, through September 12, 2002 (completes accepted through September 29, 2002)
TTY/TDD: Offered September 10, 2002, through September 18, 2002 (completes accepted through September 29, 2002)

Source: 2002 Transportation Availability and Use Study

Call Attempts and Callback Methods

After reaching a household, the interviewer read the study introduction, then asked whether the telephone was in a household (versus a business) and whether the person on the phone was at least 18 years old. If not, the interviewer asked for a household member who was at least 18 years old or made an appointment to call back when one would be available. Once the interviewer had a household member on the phone who was at least 18 years old, the interviewer attempted to complete the screener interview with that person. During the screener, a respondent was selected for the extended interview using the following guidelines:

  • If the household had a person with a disability, that person was selected for an extended questionnaire interview (if there was more than one person with a disability, the one with the nearest impending birthday was selected);
  • In addition to screening all sampled households for persons with a disability, approximately one-third of the full sample was randomly selected and also screened for persons without a disability (if there was more than one person without a disability, the person with the nearest impending birthday was selected for an extended questionnaire interview). In some of these households two persons (one with and one without a disability) were selected for an extended interview. In these cases, the selection was done separately (independently), for persons with and without a disability;[10] and
  • If the interviewer was not able to complete the survey, several interim and final result codes were assigned to identify the reason for non-response.

Refusals

An interim refusal code was assigned when the person on the phone refused to complete the survey (at any point in the interview). These cases were called back after a two-week period. If they refused again, they were coded as a final refusal. If the refusal was a mild refusal that occurred prior to August 30, 2002 (and there was a mailing address), a mail survey was sent to the household on August 30, 2002. Mild refusals included persons who hung up without responding to the calls and persons who politely refused to respond even after the interviewers asked why and attempted to address the respondents' concerns.
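
A compact sketch of this refusal protocol appears below; the result-code labels and case fields are hypothetical stand-ins for the study's actual CATI codes.

```python
from datetime import date, timedelta

MAIL_CUTOFF = date(2002, 8, 30)   # mild refusals before this date got the mail survey

def handle_refusal(case, today, mild):
    """Apply the refusal protocol described above to one case."""
    if case.get("prior_refusal"):
        case["result"] = "FINAL_REFUSAL"          # a second refusal is final
    else:
        case["prior_refusal"] = True
        case["result"] = "INTERIM_REFUSAL"
        case["callback_on"] = today + timedelta(weeks=2)   # two-week rest period
    # Mild refusals before the cutoff got the mail version, provided an
    # address match existed for the number.
    if mild and today < MAIL_CUTOFF and case.get("address"):
        case["mail_survey_on"] = MAIL_CUTOFF
    return case

case = {"address": "123 Main St"}
print(handle_refusal(case, date(2002, 8, 10), mild=True))
```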

Appointments

Interviewers had the option of scheduling a call back to the household if the respondent indicated they would be able to complete the interview at another time. Interviewers could schedule call backs for a specific time, for a time period (e.g., afternoon or evening; weekend or weekday), or for a general call back at any time during the hours of operation.

Language and Hearing Problems

Cases were coded as language or hearing problems if the interviewer was not able to communicate with the respondent in English, or if either party had a hard time understanding the other. Cases coded as language problems were given to Spanish-speaking interviewers to call. Cases with hearing or communication problems were called back at a different time in an attempt to reach someone in the household who could communicate over the telephone. Any cases that could not be completed in either English or Spanish by telephone and were identified prior to August 30, 2002 (and for which there were addresses) were sent a mail questionnaire.

Data Quality Control Measures

The study implemented quality control measures during every phase of data collection. To develop the sample frame, obtain addresses, and conduct automated tritone and business screening, the study used experienced vendors for drawing the RDD sample, as described in Section 2.1, Sample Design, above. A single individual interacted with these vendors to ensure that specifications and procedures were consistent and unambiguous. Survey methodologists reviewed the screening questions to ensure that the terminology used reduced the incidence of under- or over-coverage of persons with disabilities. The CATI and Internet software underwent thorough testing to ensure that the programs mimicked the hard-copy questionnaire specifications. The quality control procedures during the prescreener mailing ensured that each household for which an address was available was mailed a letter prior to receiving a telephone call.

Interviewers were monitored by management and supervisory staff throughout the data collection period. Interviewers were unaware of the monitoring while it occurred. Their handling of contacts, administration of the questionnaire, probing, and demeanor were assessed. Each monitoring session was recorded on a monitoring form. After monitoring, interviewers were apprised of their strengths and areas needing improvement. General adjustments or specific instructions for the interviewing process were made as a result of the monitoring findings. As appropriate, individuals were retrained or released from the study. Once data collection began, close coordination was essential to maintain consistency across interviewers. The telephone center operations manager conducted a daily conference to discuss ad hoc issues with the lead supervisors. The supervisors disseminated the information to the interviewers at the start of each shift.

The study devoted substantial resources to the training of interviewers to ensure an ability to effectively screen for the correct respondent for the full study questionnaire. Interviewers were monitored to ensure that they were implementing strategies for refusal avoidance, recording information accurately, and adhering to the study's protocol.

Quality Control of CATI Responses

The study team checked the CATI responses for consistency. During data collection, data preparation staff continuously monitored the data. Interviewer comments and problem sheets were reviewed daily and updates made as necessary. In addition, frequencies of responses to all data items were reviewed to ensure that appropriate skip patterns were followed. Each item was checked to make sure that the correct number of responses was represented. When a discrepancy was discovered, the problem cases were identified and reviewed.

Some checking of data items occurred within the CATI system during interview administration. Range checking is one example of the edits that were applied while the respondent was on the phone. The ranges of responses for closed-ended items in the CATI survey were determined by the permissible response codes. For continuous variables (such as age), a specific set of response items was not available. Therefore, reasonable ranges were defined and applied to these items, and the CATI system queried the interviewers for implausible responses (e.g., a respondent whose age is 105). The CATI software also identified inconsistent responses, based on answers to previous questions.
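
The sketch below illustrates this kind of range edit using the age example; the limits and labels are illustrative assumptions, and the hard/soft distinction is defined in the following paragraph.

```python
def check_range(value, soft=(0, 99), hard=(0, 120)):
    """Classify an entry against soft and hard limits (limits here are
    illustrative). A soft failure makes CATI query the interviewer to
    re-ask and confirm; a hard failure can never be stored."""
    if not hard[0] <= value <= hard[1]:
        return "HARD_FAIL"   # reject outright; value must be re-entered
    if not soft[0] <= value <= soft[1]:
        return "SOFT_FAIL"   # implausible (e.g., age 105): confirm first
    return "OK"

for age in (34, 105, 250):
    print(age, check_range(age))   # OK, SOFT_FAIL, HARD_FAIL
```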

Hard and soft range limits were defined in the CATI system. A hard range could not be overridden by the interviewer. A soft range, on the other hand, required that the interviewer ask the question of concern a second time before the CATI system allowed the response to be entered into the database. The CATI software enabled the interviewer to correct erroneous entries, regardless of whether they immediately preceded the current question or were several questions back. The CATI software provided great flexibility in correction and annotation, and either could be used to streamline questionnaire administration. For example, interviewers could back up through the questionnaire to correct information and then move forward through the corrected paths. If the erroneous data were collected at the beginning of the interview, however, backing up to where the error occurred was too time-consuming. At that point, the interviewer relied on two other options for entering the corrections:

  • The CATI software had a built-in utility for collecting interviewer comments. The comments were written to a file, where they were reviewed by data preparation staff who subsequently updated the data as necessary; and
  • The interviewers also completed a CATI Update Sheet, explaining the circumstances and providing the correct data. These sheets were reviewed nightly by the interviewer supervisor, and appropriate information was then transmitted to the data preparation staff for update. The CATI software's updating utility was simple to use and provided a journal of all update transactions that could be queried whenever necessary.

These interviewer comments were reviewed by data and project staff. When necessary (e.g., the interviewer conducted the interview with the wrong respondent), the erroneous data were removed from the CATI database, and the case was re-released to be worked. Interviewers also entered comments for "other, specify" responses, which helped guide the coding decisions, e.g., put a response into an existing category if applicable, or create additional categories of responses, consistent with the new information.

Quality Control for Paper Questionnaires

Paper questionnaires were tracked by a case identification number (from the CATI system). Cases that were referred for a mail questionnaire were coded in the CATI system as "referred for mail out." The name and address were sent to project staff, who mailed the questionnaire and a copy of the advance letter to the respondent. Each time a mail questionnaire was sent out, the date was recorded next to the case identification number.

Once a completed mail questionnaire was received, it was checked to verify that the respondent had clearly marked all answers and that skip patterns were followed. Sometimes respondents answered questions they did not need to answer, or wrote comments on the questionnaire. Project staff did not change answers, but they did remove responses that did not follow the specified skip patterns. In addition, respondents often answered "other" when the answer they gave could be coded into one of the existing answer categories for the question; in these cases, the editors re-coded the items accordingly. When respondents did not write clearly, project staff deciphered the handwriting before data entry. Questions that should have been answered but were not were coded as missing (-9).
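
These editing rules can be summarized in a short sketch; the item names, skip-pattern handling, and recode map are hypothetical.

```python
MISSING = -9   # code for required items left unanswered

def edit_mail_case(raw, required_items, other_recodes):
    """Edit one returned paper questionnaire (names are illustrative).

    raw            : item -> response as keyed from the paper form
    required_items : items the skip patterns say should be answered
    other_recodes  : verbatim 'other' text -> existing category code
    """
    edited = {}
    for item in required_items:
        value = raw.get(item)
        if value is None:
            edited[item] = MISSING                 # should have answered, did not
        elif isinstance(value, str) and value in other_recodes:
            edited[item] = other_recodes[value]    # fold 'other' into a category
        else:
            edited[item] = value
    # Items outside the respondent's path are dropped, mirroring the rule
    # that off-path answers were removed rather than kept.
    return edited

raw = {"Q1": 2, "Q3": "paratransit van", "Q4": 1}   # Q4 is off-path here
print(edit_mail_case(raw, ["Q1", "Q2", "Q3"], {"paratransit van": 5}))
# -> {'Q1': 2, 'Q2': -9, 'Q3': 5}
```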

Once the questionnaire had been checked and verified, project staff entered the responses into the CATI system. Mail questionnaire data was then cleaned and verified with other data in the CATI database.

Quality Control for the Internet Survey

Internet questionnaires were also tracked by a case identification number (from the CATI system). Cases that were referred for the Internet questionnaire were coded in the CATI system as "referred for Internet." Respondents who told the interviewer they would complete the Internet survey were sent the access information for the survey via email or regular mail (if the respondent provided that contact information). If the Internet information was mailed to the respondent, a paper survey was also sent, to give the respondent another option should the Internet connection not allow satisfactory completion of the survey. The study also tracked whether the respondent had completed the survey on the Internet or by mail, and verified that the respondent had indeed completed the entire survey. The data from the completed Internet questionnaire were entered into the CATI system, and cleaned and verified with other data in the CATI database.

Internet survey data were entered manually by skilled staff, using the CATI software. This allowed application of all the logic, range, and internal consistency checks of the CATI software. Given the limited capacity of home computer Internet systems, it was not feasible to incorporate all the CATI checks into the Internet version. This was especially true for the complex CATI internal consistency checks, which required comparing an individual response with prior responses in a database. To have done so would have inordinately slowed the response time for the Internet version, especially for respondents with relatively slow dial-up Internet connections over regular telephone lines.

[6] A "100 bank" is the set of phone numbers with the same area code, exchange and all but the final two digits identical to each other. For example, all phone numbers 301-315-59xx constitute a 100 bank.

[7] The Marketing Systems Group (MSG) of Fort Washington, PA, is the commercial firm that developed the GENESYS Sampling Systems and provides the sampling frame of listed banks used for drawing our list-assisted telephone samples.

[8] www.genesys-sampling.com

[9] Groves, R.M. (1989). Survey Errors and Survey Costs. New York: John Wiley and Sons.

[10] See Section 3, Response Rates, below, for a description of how often a person with a disability and a person without a disability in the same household were selected.