Methodology of the 2024 National Public Opinion Reference Survey

Summary

SSRS conducted the National Public Opinion Reference Survey (NPORS) for Pew Research Center using address-based sampling and a multimode protocol. The survey was fielded from February 1, 2024, to June 10, 2024. Sampled households first received a mailed invitation to complete an online survey; a paper survey was later sent to those who did not respond. The mailings also invited respondents to call a toll-free number and complete the survey over the phone with a live interviewer. In total, 2,535 respondents completed the survey online, 2,764 completed the paper survey and 327 completed the survey by telephone (total N=5,626). The survey was administered in English and Spanish. The response rate was 32% under the American Association for Public Opinion Research (AAPOR) Response Rate 1 (RR1) definition.
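For reference, RR1 is the most conservative of the AAPOR response rate definitions: completed interviews divided by all eligible and potentially eligible cases. The standard formula is shown below; the letters denote the usual AAPOR disposition categories, not counts from this survey (those appear in the dispositions table at the end).

```latex
% AAPOR Response Rate 1: complete interviews (I) over completes plus
% partials (P), eligible non-interviews (refusals R, non-contacts NC,
% other O), and cases of unknown eligibility (UH, UO).
\mathrm{RR1} = \frac{I}{(I + P) + (R + NC + O) + (UH + UO)}
```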

Sample definition

The sample was drawn from the US Postal Service Computerized Delivery Sequence File and was provided by Marketing Systems Group (MSG). Occupied residential addresses (including “drop points”) in all US states (including Alaska and Hawaii) and the District of Columbia had a nonzero probability of selection. The draw was a national, stratified random sample with differential probabilities of selection across mutually exclusive strata. SSRS designed the sampling plan shown in the table below.
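To illustrate how a stratified draw with differential selection probabilities works, here is a minimal sketch in Python. The stratum names, frame sizes, and sampling rates are hypothetical placeholders, not the actual NPORS design, which is described in the table below.

```python
import random

# Hypothetical strata with illustrative frame sizes and sampling rates;
# the real NPORS strata and rates are given in the sampling plan table.
strata = {
    "high_response": {"frame": 60_000, "rate": 0.0002},
    "low_response":  {"frame": 40_000, "rate": 0.0004},
}

sample = []
for name, s in strata.items():
    # Addresses within a stratum share one selection probability;
    # probabilities differ across strata (differential sampling).
    frame = [f"{name}-addr-{i}" for i in range(s["frame"])]
    k = round(s["frame"] * s["rate"])
    drawn = random.sample(frame, k)
    # The base weight later inverts this probability: w = 1 / rate.
    sample.extend((addr, 1.0 / s["rate"]) for addr in drawn)

print(len(sample), "addresses drawn")
```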

Mail protocol

SSRS sent the initial mailing in a 9-by-12-inch window envelope via first-class mail to the 18,834 sampled households. Each package contained two $1 bills (visible through the envelope window) and a letter asking a household member to complete the survey. The letter provided a URL for the online survey; a toll-free dial-in number; a password that respondents could enter on the online survey's landing page or read to a telephone interviewer if they chose to call in; and a frequently asked questions section printed on the back. If the household included two or more adults, the letter asked the adult with the next birthday to complete the survey. Households that did not respond later received a reminder postcard and then a reminder letter via first-class mail.

After the web portion of the data collection period ended, SSRS mailed non-responding households with a deliverable address a 9-by-12-inch Priority Mail window envelope. The Priority Mail envelope contained a letter with a frequently asked questions section on the back, a visible $5 bill, a paper copy of the survey, and a postage-paid return envelope. The paper survey was a folded booklet measuring 11 by 17 inches. The within-household selection instructions were identical to those in the earlier online survey request. The same households later received a second envelope containing another copy of the paper questionnaire via first-class mail.

The first mailing went out in two separate launches: a soft launch and a full launch. The soft launch comprised 5% of the sample and was mailed several days before the full launch, which comprised the remaining 95% of the sample.

Development and testing of questionnaires

Pew Research Center developed the questionnaire in consultation with SSRS. The online questionnaire was tested on both desktop and mobile devices. The test data was analyzed to ensure the logic and randomizations worked as intended before the survey was launched.

Weighting

The survey was weighted to support reliable inferences from the sample to the target population of US adults. The weight was created in a multi-step process that includes a base-weight adjustment for differential selection probabilities and a raking calibration that aligns the survey with population benchmarks. The process starts with the base weight, which accounts for the probability of selecting the address from the US Postal Service Computerized Delivery Sequence File frame and for the number of adults living in the household, and includes an adaptive-mode adjustment for cases that responded in an offline mode.
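In symbols, a base weight of this form can be written as follows; the notation is ours, introduced for illustration, not taken from the report.

```latex
% Base weight for respondent i: inverse of the address-selection
% probability, scaled by the number of adults in the household (the
% selected adult answers for a household of that size), times an
% adaptive-mode adjustment factor a_i for offline respondents.
w_i^{\text{base}} = \frac{n_i^{\text{adults}}}{\pi_i^{\text{addr}}} \times a_i
```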

The base weights are then calibrated to population benchmarks using raking, also known as iterative proportional fitting. The raking dimensions and the sources of the population parameter estimates are shown in the table below. All raking targets are based on the US non-institutionalized adult population (ages 18 and older). The weights are trimmed at the 1st and 99th percentiles to reduce the loss of precision caused by variance in the weights.
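Raking itself is a simple iterative algorithm: weights are scaled so that each dimension's weighted margins match its benchmarks, cycling through the dimensions until all margins agree. A minimal sketch, using two hypothetical dimensions and made-up targets in place of the actual raking variables:

```python
import numpy as np

# Toy respondent data: two raking dimensions (hypothetical placeholders
# for the actual raking variables used in the survey).
sex = np.array([0, 0, 1, 1, 1, 0, 1, 0])    # 0 = male, 1 = female
educ = np.array([0, 1, 1, 0, 1, 1, 0, 0])   # 0 = no degree, 1 = degree
w = np.ones(len(sex))                       # start from the base weights

# Benchmark shares the weighted sample must match (illustrative numbers).
targets = {"sex": {0: 0.49, 1: 0.51}, "educ": {0: 0.62, 1: 0.38}}
dims = {"sex": sex, "educ": educ}

for _ in range(50):                         # cycle until margins converge
    for name, x in dims.items():
        total = w.sum()                     # fix the total for this pass
        for level, share in targets[name].items():
            mask = x == level
            # Scale the group so its weighted share equals the benchmark.
            w[mask] *= (share * total) / w[mask].sum()

for name, x in dims.items():
    for level in targets[name]:
        print(name, level, round(w[x == level].sum() / w.sum(), 3))
```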

Design effect and margin of error

Weighting and survey design features that deviate from simple random sampling typically increase the variance of survey estimates. This increase, known as the design effect, or “deff,” should be incorporated into the margin of error, standard errors, and tests of statistical significance. The total design effect for a study is usually approximated as one plus the squared coefficient of variation of the weights.
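Written out, with cv denoting the coefficient of variation of the weights:

```latex
% Kish's approximation of the overall design effect from the weights w_i
\mathrm{deff} \approx 1 + cv^2,
\qquad
cv = \frac{\operatorname{sd}(w_i)}{\operatorname{mean}(w_i)}
```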

For this study, the margin of error (the half-width of the 95% confidence interval), incorporating the design effect, is plus or minus 1.8 percentage points for full-sample estimates of 50%. Estimates based on subgroups have a larger margin of error. It is important to remember that random sampling error is only one possible source of error in a survey estimate. Other sources, such as question wording and inaccurate reporting, can introduce additional error. A summary of the weights and the associated design effect can be found in the table below.
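As a consistency check (our arithmetic, not a figure from the report), the design-effect-adjusted margin of error for a proportion of 50% is:

```latex
% 95% margin of error for p = 0.5, inflated by the design effect
\mathrm{MOE} = 1.96 \sqrt{\mathrm{deff} \times \frac{0.5 \times 0.5}{n}}
```

With n = 5,626, the unadjusted margin of error is about plus or minus 1.31 points, so the reported plus or minus 1.8 points implies a design effect of roughly (1.8 / 1.31)^2 ≈ 1.9.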

The following table shows the unweighted sample size and the error attributable to sampling that would be expected at the 95% confidence level for different groups in the survey.

Sample sizes and sampling errors for other subgroups are available upon request. In addition to sampling error, one should keep in mind that question wording and practical issues in conducting surveys can introduce errors or biases into poll findings.

A note on the Asian adult sample

This survey includes a total of 231 Asian adults. The sample includes primarily English-speaking Asian adults and therefore may not be representative of the overall Asian adult population. Despite this limitation, it is important to report the views of Asian adults on the topics in this study. As always, responses from Asian adults in this report are included in the general population figures.

Dispositions

The table below shows the final disposition of all sampled households in the survey.