Methodology

Sample Design

According to the 2000 decennial census, approximately 19 percent of the U.S. resident (noninstitutionalized) population aged 5 years or older has a disability.4

The disability prevalence rate among children under the age of 5 years is approximately 3 percent.5 In addition, analysis of the 1995 National Health Interview Survey on Disability (NHIS-D), using disability measures similar to those used in the decennial census, indicated that among households with disabled people, 79 percent contained only one person with a disability, 18 percent contained two disabled persons, and the remaining 3 percent contained three or more people with a disability.6 This information was used to estimate how many households would need to be screened in order to interview at least 2,000 people with disabilities for this survey.
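The screening estimate described above can be sketched as a back-of-the-envelope calculation. All numeric inputs below are illustrative assumptions for demonstration only, not the figures the survey designers actually used:

```python
# Illustrative screening estimate: how many households must be dialed to
# yield a target number of completed interviews with disabled respondents.
# Every rate below is an assumed placeholder, not a figure from the survey.
target_interviews = 2_000      # disabled respondents sought
hh_disability_rate = 0.25      # assumed share of households with >=1 disabled member
screener_response = 0.65       # assumed screener cooperation rate
extended_response = 0.85       # assumed extended-interview cooperation rate

households_to_dial = target_interviews / (
    hh_disability_rate * screener_response * extended_response
)
print(round(households_to_dial))  # -> 14480 under these assumed rates
```

In practice the designers would also inflate this figure for nonworking and business numbers purged from the RDD frame, which is why far more than this many numbers are typically released.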

The sample design employed list-assisted random-digit dialing (RDD) techniques to select a nationally representative set of telephone numbers within all valid telephone exchanges in the United States. Because this was a residential survey, all business numbers were excluded. Also, because calls to cellular phones may be billed to the respondent at a per-minute rate, the sample frame excluded all cell numbers. However, households having only cell phones (that is, no "land line" in the household) were accounted for during the weighting process. Weighting attempts to make the estimates from a survey representative of the total population that was sampled, including making adjustments for imperfections in the sample frame.

Because of the importance of including people with disabilities in the universe from which this survey sample was selected, the sample selection process did not exclude any telephone numbers identified as teletype (TTY) or teledata (TDD) that would be used by people with hearing impairments.

A total of 40,000 phone numbers was released for this survey; this figure included businesses, nonworking numbers, and other cases that were ultimately purged as ineligible for the survey.

The sampling frame called for a two-stage respondent selection process. At the first stage, also called the "screener" interview phase, the household phone number was dialed and any eligible household member was asked questions pertaining to the household characteristics and whether anyone in the household had a disability. The first stage resulted in selection of a respondent for the second stage, called the "extended" interview phase. At the second stage, the selected respondent verified his or her disability status and then answered the remaining questions for the survey.

Of the 10,327 completed household "screener" interviews, the survey identified 2,531 households with at least one person with a disability.

Survey Preparations

The BTS survey design called for a household-level "screening" questionnaire (to identify whether or not a disabled person lived in the household), followed by an "extended" questionnaire for each selected person. Persons of any age (including children) were eligible to be interviewed. Proxy interviews with knowledgeable respondents were required for people under the age of 16 years, for those aged 16 to 17 years who lived with adults, and for those who were unable to complete the interviews for themselves due to the type or severity of their impairments.

In order to ensure full access to the interview by all respondents, the extended (person-level) questionnaire was available by computer-assisted telephone interviewing (CATI), by mail, and by Internet. Interviewers trained in the use of teletype (TTY) or teledata (TDD) communication devices were available to interview persons with hearing difficulties.

According to Census 2000, about 5 percent of residents in this country speak only Spanish. Therefore, again to promote full access to the interview for each person selected, a select group of Spanish-speaking interviewers was available to administer the CATI questionnaire to respondents who spoke Spanish but not English.

Questionnaire Development

BTS provided a questionnaire draft to its contractor, Westat, a private survey research firm located in Rockville, Maryland. The contractor conducted cognitive interviews with 41 paid respondents (20 with disabilities) to ensure that the questionnaire content, flow, and response categories would yield the highest-quality survey data.

Once the CATI questionnaire was final, it was coded for Internet response and printed for mail response in case either of these data collection modes was requested by respondents. Ultimately, the Internet and mail versions were also offered to households that gave mild refusals and to all likely residential households where interviewers reached answering machines. During the editing phase of the survey, the Internet and mail questionnaires were keyed into the CATI system, which resulted in a single survey database for analysis.

Interviewers

The contractor employed a total of 84 interviewers for the survey, all of them current or former employees with telephone interviewing experience; no new hires were used for this survey. All interviewers worked under the direct supervision of an experienced group of supervisors, at a supervisor-to-interviewer ratio of 1:5, or 20 percent.

Interviewer Training

Because only experienced interviewers were used on the project, each received 4 hours of general interviewing skills training when first hired and, in addition, had on-the-job experience with at least one CATI survey prior to the BTS survey. Every interviewer working on this survey received 16 hours of survey-specific training to become familiar with all interview-related terms, every item on the household and extended questionnaires, and all answer categories and answer-dependent skip patterns. A significant portion of the training involved sensitivity to the needs of persons with disabilities, such as interviewing those with hearing impairments or with other physical or mental conditions.

Respondent Materials

Mailing introductory letters prior to the first telephone call to households has been shown to improve overall cooperation rates.7 After the list-assisted RDD sample of 40,000 numbers was drawn and purged (of nonresidential or nonworking numbers), the 10,327 remaining residential cases were passed through the databases of multiple vendors to append mailing addresses for the sampled telephone numbers.

Address matches were obtained for approximately 77 percent of all in-scope telephone numbers, and for 88 percent of all the completed interviews. Households for which mailing addresses were obtained were sent a pre-screening package containing a cover letter signed by the Director of the BTS, asking for survey cooperation, and a brochure describing the survey.

Data Collection and Response Rates

Interviewing began on July 12, 2002, and closed out on September 29, 2002.

Computer-assisted telephone interviewing (CATI) screening was used to identify households with one or more occupants of any age with a disability. In households where only one occupant had a disability, that person was selected for the survey. In households with two or more disabled people, one person was selected using the "next birthday" method. CATI screening was also used to select respondents without disabilities from a subsample of nondisabled households, again using the "next birthday" method to identify the selected respondent. The subsampling was designed to achieve roughly equal numbers of interviews for people with disabilities and people without disabilities. This was an important component of the survey design because it provides a basis to compare the two groups and identify common transportation uses and problems as well as uses and problems unique to each group.
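The "next birthday" method described above selects, among eligible household members, the person whose birthday falls soonest after the interview date. A minimal sketch follows; the household data and field names are hypothetical, and leap-day birthdays are not handled:

```python
from datetime import date

def days_until_next_birthday(birthday: date, today: date) -> int:
    """Days from `today` to this person's next birthday (0 if it is today)."""
    nxt = birthday.replace(year=today.year)
    if nxt < today:
        nxt = birthday.replace(year=today.year + 1)
    return (nxt - today).days

def select_next_birthday(members, today: date):
    """Select the household member whose birthday falls soonest after `today`."""
    return min(members, key=lambda m: days_until_next_birthday(m["birthday"], today))

# Hypothetical two-person household screened on the survey's first day.
members = [
    {"name": "A", "birthday": date(1960, 3, 14)},
    {"name": "B", "birthday": date(1955, 11, 2)},
]
print(select_next_birthday(members, date(2002, 7, 12))["name"])  # -> B
```

The appeal of the method is that it approximates a random draw without requiring the screener respondent to enumerate the full household roster.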

Each person selected in the screening phase was asked to complete a CATI "extended questionnaire" to provide detailed information about his or her transportation availability, use, and satisfaction.

The target for completed "extended interviews" was 4,000 persons: 2,000 with disabilities and 2,000 without disabilities.

Respondents who initially refused participation in the survey were recontacted several times. FedEx letters, mail questionnaires, and Internet instructions were also sent to 648 households where interviewers reached an answering machine, but had not reached a household member to complete the interview.

Prior to or during interviewing, over 2,000 numbers were identified as possible fax/modem lines. Interviewers specially trained to use TTY/TDD machines called all of these numbers and, when a TTY/TDD was encountered, offered the respondent an Internet or mail version of the survey.

Interviewers actually completed 5,019 interviews: 2,321 with disabled people (who self-identified as such through any of the disability questions asked: Census 2000, ADA, and special education) and 2,698 with nondisabled people. Please note that, although this report uses only the Census 2000 questions as the disability indicator, the public use data files and documentation include many different disability measures, allowing analysts to construct their own definition of disability from the multiple items in the survey.

A separate variable, CDISABLD, identifies whether a respondent reported one or more of the several Census disabilities. The purpose of this variable is to assist users who may want to compare the results of this survey with Census 2000 data, according to a common set of disability items. However, the decennial census collected disability information about those aged 5 years or older, while the BTS survey included persons of all ages. Therefore, when comparing the data, users should select only persons who are aged 5 years or older from the BTS dataset.

In addition to the 2000 Census disability items, this survey asked two questions about a disability related to the Americans with Disabilities Act (ADA) and one question about the receipt of special education services, which are designed for children with disabilities. A second constructed disability variable, TDISABLD, identifies whether a respondent reported any of the ADA items, the special education item, or the Census disability items. The purpose of this composite measure is to give the user a variable that identifies respondents reporting any of the disabilities in this survey.

The overall response rate for this survey was 56.03% and was calculated in accordance with the standards defined by the Council of American Survey Research Organizations (CASRO, 1982). This survey design required that the overall rate be computed as the product of the screener interview response rate (64.25%) and the extended interview response rate (87.21%).
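Under the CASRO convention described above, the overall rate is simply the product of the two stage rates; the report's figures reproduce as follows:

```python
# Overall CASRO response rate = product of the stage-level rates
# (both rates below are taken directly from the report).
screener_rate = 0.6425   # screener interview response rate
extended_rate = 0.8721   # extended interview response rate

overall = screener_rate * extended_rate
print(f"{overall:.2%}")  # -> 56.03%
```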

Data Quality

The survey implemented quality control measures at every phase. Survey methodologists reviewed the screening questions to ensure that the wording would reduce the incidence of under- or over-coverage of people with disabilities. The CATI and Internet software underwent thorough testing to ensure that the programs mimicked hardcopy questionnaire specifications. The quality control procedures during the prescreener mailing ensured that each household where an address was available was mailed a letter prior to receiving a telephone call.

Interviewers were monitored, and retrained when necessary, by management and supervisory staff from the first to the last day of data collection. Questionnaire keying (for those cases not collected by CATI) was 100% verified.

The CATI system automatically edited for correct ranges, response numbers, and inconsistencies. CATI files were repeatedly checked for consistency during the data collection period. Interviewer comments and problem sheets were reviewed daily and updated as necessary. Frequencies of responses to all data items were reviewed to ensure that appropriate skip patterns were followed.

During post-processing, BTS mathematical statisticians and data analysts carefully reviewed the data files for consistency and accuracy, and worked closely with the contractor to make any needed corrections.

Weighting and Variance Estimation

Weighting is a process to make the estimates from the survey representative of the total population from which the sample was drawn. It does this by accounting for the chances of selecting units into the sample and making adjustments for imperfections in the sample frame.

Survey weights were developed to reduce the bias introduced by:

  • nonresponse cases,
  • unknown residential status,
  • nontelephone households,
  • multiple telephone line households, and
  • subsampling for disability status.

The weighting process began with a base weight that was adjusted to account for nonresponse and undercoverage. The base weight is the inverse of the probability of selection of the sampled unit. During the weighting process, additional information from external sources, such as the Census, was used to benchmark the weights and achieve consistency between totals from the survey data and the external sources. Weights were then applied to the sample data to produce aggregate estimates. In particular, survey data were weighted to accomplish the following objectives:

  • compensate for differential probabilities of selection for households and persons;
  • reduce biases occurring because nonrespondents may have different characteristics from respondents;
  • adjust, to the extent possible, for undercoverage in the sampling frames and in the conduct of the survey; and
  • reduce the variance of the estimates by using auxiliary information.

Each final weight is the result of a series of sequential adjustments made to the base weights. As part of the weighting process, a household weight is created for all households that completed the screener interview. The household weight is the base weight computed as the inverse of the probability of selection of the sample telephone number adjusted for:

  • unknown residential status,
  • screener interview nonresponse,
  • multiple telephone numbers,
  • subsampling for disability status, and
  • household poststratification.
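The sequential, multiplicative construction of the household weight can be sketched as follows. The selection probability and every adjustment factor below are hypothetical placeholders chosen for illustration, not values from the survey:

```python
# Sketch of the sequential multiplicative weight construction described above.
# All numeric values are hypothetical placeholders, not survey figures.
p_selection = 1 / 40_000        # illustrative probability of selecting the number
base_weight = 1 / p_selection   # inverse-probability base weight

adjustments = {
    "unknown_residential_status": 1.04,
    "screener_nonresponse": 1.25,
    "multiple_phone_lines": 0.95,   # down-weights households reachable on >1 line
    "disability_subsampling": 1.10,
    "poststratification": 1.02,
}

household_weight = base_weight
for factor in adjustments.values():  # each adjustment multiplies the running weight
    household_weight *= factor
print(round(household_weight, 1))
```

The order of the factors mirrors the bulleted list above; because the adjustments are multiplicative, the final weight is the base weight times the product of all factors.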

The poststratified household-level weight is adjusted to create an individual-level (person) weight for each extended interview. The adjustments incorporate the within-household probability of selection of the sampled person and account for nonresponse. Similar to the creation of the household-level weights, each of the adjustments corresponds to a multiplicative weighting factor applied to the individual-level weight. For the individual-level weights the following factors are included:

  • probability of selection of the person,
  • extended interview nonresponse adjustment,
  • trimming, and
  • raking to person-level control totals.
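Raking, the last step in the person-level list above, iteratively scales the weights so that weighted totals match external control totals on each margin. A minimal sketch of the procedure (iterative proportional fitting) follows; the margins, categories, and control totals are purely illustrative:

```python
# Minimal raking (iterative proportional fitting) sketch. The respondents,
# margins, and control totals below are invented for illustration only.
respondents = [
    {"sex": "F", "age": "<65", "w": 100.0},
    {"sex": "F", "age": "65+", "w": 100.0},
    {"sex": "M", "age": "<65", "w": 100.0},
    {"sex": "M", "age": "65+", "w": 100.0},
]
controls = {
    "sex": {"F": 510.0, "M": 490.0},   # external (e.g., Census) totals by sex
    "age": {"<65": 830.0, "65+": 170.0},  # external totals by age group
}

for _ in range(50):                       # iterate margins until convergence
    for dim, targets in controls.items():
        totals = {}
        for r in respondents:             # current weighted total per category
            totals[r[dim]] = totals.get(r[dim], 0.0) + r["w"]
        for r in respondents:             # scale each weight toward the control
            r["w"] *= targets[r[dim]] / totals[r[dim]]

f_total = sum(r["w"] for r in respondents if r["sex"] == "F")
print(round(f_total, 1))  # matches the 510.0 sex control total
```

After convergence, the weighted totals reproduce every control margin simultaneously, which is the benchmarking to external sources described in the text.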

A full discussion of the weighting and variance estimation procedures is available as part of the survey documentation and will be available at the BTS website when the public use data file is posted.

4. U.S. Census Bureau, Census 2000. Summary File 3. For information on confidentiality protection, sampling error, nonsampling error, and definitions, see www.census.gov/prod/cen2000/doc/sf3.pdf.

5. National Center for Health Statistics. Health United States, 2002. Hyattsville, MD: 2002.

6. 1994/95 National Health Interview Survey on Disability, original tabulations from public use data files.

7. Groves, R.M. (1989). Survey Errors and Survey Costs. New York: John Wiley and Sons.