Survey Example: Thinking through sample size limitations

Sam Larson
6 min read · Oct 2, 2023


A walk-through guide for journalists and discerning news consumers on spot-checking question and response sample sizes, margins of error, and reliability.

As a data reporter, I’ve seen many examples of misused, misconstrued, or sensationalized survey results. My goal is to help people understand the strengths and weaknesses of survey data, and provide insight into the many factors that go into building, launching, and interpreting surveys.

In this series, we are exploring the use of public opinion polls and surveys in news media. Topics include responsible usage, methodology, and fact-checking with real-world examples.

Finding information about the sample and methodology, and following that to a full report

When it comes to survey research, rigorous methodology evaluation is essential to ensure the collected data’s accuracy, reliability, and representativeness of the targeted population. In this post, we’ll go through a detailed survey on consumer sentiment, and apply sample size evaluation to various questions and responses from the report.

When writing or reading a news article containing survey data, it’s important to approach the findings with a critical eye. Here, we have a market research article referencing findings from a new survey report by a marketing and communications firm. The process for uncovering sample details is shown in the image above.

In some cases, the original report is provided through a link, while in other cases, you will need to conduct a search for it. As we’ve discussed before, understanding the survey’s purpose and its sponsor can greatly assist in assessing the practical limitations of the survey and the reliability of the findings.

The usefulness of any survey depends on the specific insights we intend to report. In this instance, we are examining a niche segment of a specific industry — this population and their interests may not be covered by more authoritative research organizations. Let’s dig deeper!

Highlights:

  • This is a unique attempt to gain insights into a niche population known as “early food adopters,” who are interested in dining out, engaging with food content online, and open to trying new foods. However, the use of qualifying questions to limit respondents prevents this from being a truly random sample.
  • The article reporting on the survey results includes an interview with someone familiar with the data, which is great! Readily available contact information and numerous direct quotes in the article allow a glimpse into the motivations behind the research design.

Considerations:

  • The subjectivity of the subgroup and its source raises questions about the group’s authenticity; we are unable to verify the behavioral qualifiers for those allowed to take the survey. How do we know the respondents engage online? How are respondents interpreting an “interest” in dining out? This is a common challenge associated with self-report data and online panel surveys.
  • Most importantly, the small sample size may hinder reliability as we look for subgroup-specific takeaways, especially since this is not a random sample and our ability to calculate “true” error margins is limited. Theoretical “credibility intervals” will be estimated going forward.

Other thoughts:

  • The market research article concludes with recommendations for retailers on how to possibly boost sales. While this survey data does suggest possible consumer motivations, businesses and journalists should consider comparing this opinion data with sales trends or other observable data, as these sources can provide a more concrete reflection of consumer behavior. Sometimes a survey isn’t the best choice!
Best Practices for Survey Research: is a survey the best choice? | American Association for Public Opinion Research

Vetting the Methods

When reporting on survey findings from a sample of 700 people, we need to be extremely cautious because our subgroup slices will get very small, very quickly. It’s noteworthy that the demographic information is readily available in the full PDF report, revealing that there are fewer than 50 respondents in the Gen Z age group. The theoretical credibility interval for a sample of that size is approximately +/- 14 percentage points.

Subgroup sample size breakdown
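The arithmetic behind that ±14-point figure is easy to check yourself. Here’s a minimal sketch (the 1.96 z-score and the conservative p = 0.5 are standard choices for a 95% interval, not values taken from the report; and since this isn’t a random sample, the result is a theoretical credibility interval rather than a true margin of error):

```python
import math

def moe_95(n, p=0.5):
    """Approximate 95% margin of error for a simple random sample.
    p=0.5 is the most conservative choice (it maximizes the interval)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(f"full sample, n=700:  +/- {moe_95(700):.1%}")  # ~ +/- 3.7 points
print(f"Gen Z subgroup, n=50: +/- {moe_95(50):.1%}")  # ~ +/- 13.9 points
```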

For instance, one takeaway from the report highlights that 31% of Gen Z individuals claim TikTok has the most influence on their food curiosity. However, with only 15 respondents in this category, the theoretical margin of error would be about +/- 25 percentage points: anywhere from 6% to 56% of Gen Z. Newsrooms should establish editorial standards that prevent reporting on such minuscule samples with such wide margins of error.
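Applying the same arithmetic to the TikTok takeaway shows just how wide that range is. A sketch (n = 15 and the 31% figure come from the report; the conservative p = 0.5 is my assumption, and it’s what yields the roughly ±25-point interval):

```python
import math

observed = 0.31  # 31% of Gen Z respondents cited TikTok, per the report
n = 15           # respondents in this category, per the report

moe = 1.96 * math.sqrt(0.25 / n)  # conservative p=0.5 -> ~0.253
low, high = max(0.0, observed - moe), min(1.0, observed + moe)
print(f"{observed:.0%} +/- {moe:.0%} -> {low:.0%} to {high:.0%}")
# 31% +/- 25% -> 6% to 56%
```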

American Association for Public Opinion Research | Margin of sampling error chart and table

When a survey employs a series of 5-point ratings (from “not at all curious” to “extremely curious”, below), grouping “very” and “extremely” curious respondents can enhance reliability by creating larger sample subgroups. In the report, it’s noted that 78% of respondents are very or extremely curious about global/cultural flavors and cuisines, which is a solid approach to bolster reliability.

Full response percentages for a 5-point response scale
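To make the grouping concrete, here’s a sketch of a top-two-box calculation. The five response counts below are invented for illustration (they sum to the survey’s 700 respondents and reproduce the 78% figure, but they are not the report’s actual distribution):

```python
# Hypothetical counts for a 5-point curiosity scale (not the report's data).
responses = {
    "not at all curious": 30,
    "slightly curious": 60,
    "moderately curious": 64,
    "very curious": 280,
    "extremely curious": 266,
}

total = sum(responses.values())
top2 = responses["very curious"] + responses["extremely curious"]
print(f"very/extremely curious: {top2}/{total} = {top2 / total:.0%}")  # 546/700 = 78%
```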

In the subsequent “select all that apply” statistics (below), even the most selected option has a sizable theoretical credibility interval, ranging from 30% to 42%. As reporters, we must be cautious when presenting such data, using language like “about 1 in 3 individuals are curious about this item.” Because the top option is so close in share to the next two or three topics, we can’t confidently say it’s the most important item: those margins of error will overlap. Ranking these takeaways wouldn’t be advised.

Select-all-that-apply chart implying a ranked order that might not exist
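A quick way to test whether two options are really distinguishable is to compare their intervals directly. In this sketch, the 36% top option matches the 30–42% range quoted above; the 33% runner-up share and the subgroup size of 267 are my assumptions, chosen so the conservative interval comes out to about ±6 points:

```python
import math

def interval_95(p, n):
    """Conservative 95% interval endpoints for an observed share p from n respondents."""
    moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
    return p - moe, p + moe

n = 267                        # assumed subgroup size (yields a ~+/-6-point interval)
top = interval_95(0.36, n)     # ~ (30%, 42%), matching the range quoted above
runner_up = interval_95(0.33, n)  # hypothetical second-place option

overlap = top[0] <= runner_up[1] and runner_up[0] <= top[1]
print(f"top: {top[0]:.0%}-{top[1]:.0%}, runner-up: {runner_up[0]:.0%}-{runner_up[1]:.0%}")
print("Intervals overlap -> don't report a ranking" if overlap else "No overlap")
```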

As researchers, it’s essential to acknowledge when sample sizes become too small to provide reliable data. Similarly, as journalists, we must ensure that the statistics we report are trustworthy and substantively different enough to merit attention. In election polls, for instance, small percentage differences often become overblown storylines when they could simply be due to chance, since the margins of error overlap.

As for the Food Curiosity Report mentioned above, it offers a unique perspective on consumer sentiment that’s probably not readily available elsewhere. While referencing the broad statistics is likely fine, we will want to exercise caution when dealing with smaller subgroups, as they may not offer a reliable picture.

How to apply the Sample Size Lens to takeaways:

While the overall survey sample size is often readily available in the methodology section of a survey article or in the lead paragraph with the first takeaway, it’s important to continue checking sample sizes for every question and response item.

When dealing with survey findings from a sample of 700 people as we just did, it’s crucial to be cautious, as subgroup analyses can lead to small sample sizes. Understanding the number of respondents being analyzed is pivotal in determining the significance of the takeaways. Establish editorial standards for minimum sample sizes for subgroup analyses, and always double-check for overlapping margins of error.
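One way to bake both checks into an editorial workflow is a simple audit over the demographic breakdown. A sketch (the subgroup names and counts are hypothetical, and the 100-respondent floor is an example threshold, not an industry standard):

```python
import math

MIN_N = 100  # example editorial floor; pick your newsroom's own threshold

# Hypothetical subgroup counts summing to a 700-person sample.
subgroups = {"Gen Z": 48, "Millennials": 230, "Gen X": 215, "Boomers": 207}

for name, n in subgroups.items():
    moe = 1.96 * math.sqrt(0.25 / n)  # conservative 95% margin of error
    flag = "TOO SMALL to report on its own" if n < MIN_N else "ok"
    print(f"{name}: n={n}, +/- {moe:.0%} -> {flag}")
```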

Additional thoughts: Question Types and Wording

Launching a survey with many “select all that apply” questions can be limiting for the respondents and may end up reflecting the survey writer’s bias. There are many methods for developing survey questionnaires to help reduce different types of biases, and we will cover those later!

Stay connected: Upcoming content will delve into responsible usage and methodological considerations for survey data to explore these sources’ strengths and weaknesses, featuring more vetting methods and practical walk-throughs of real-world examples. Let me know what topics you would like to explore!

Written by Sam Larson

Survey enthusiast. Data journalist. Former butcher.