Does sample source matter for theory? Testing model invariance with the influence of presumed influence model across Amazon Mechanical Turk and Qualtrics Panels

https://doi.org/10.1016/j.chb.2022.107416

Highlights

  • Online data collection is common in communication; do different online data sources affect the consistency of theory?

  • Data consistency was tested in the context of fake news using the influence of presumed influence model.

  • Model fit and statistical significance were compared between data collected from Amazon Mechanical Turk and Qualtrics Panels.

  • Theoretical predictions were generally the same across both sources, with neither source exhibiting consistent differences.

  • These results suggest that theory from mass communication may be durable across different online convenience samples.

Abstract

Online data collection services are increasingly common for testing mass communication theory. However, how consistent are a theory's tenets when tested across different online data services? A pre-registered online survey (N = 1546) examined the influence of presumed influence (IPI) model across subjects simultaneously recruited from Amazon Mechanical Turk and Qualtrics Panels. Results revealed that model parameters were mostly consistent with IPI theory regardless of data source. Methodological implications are discussed.

Section snippets

Sampling

Scholars have long argued that sampling methods and sampling populations are critical factors that can affect the external validity of a study's results and conclusions (Erba et al., 2018; Landers & Behrend, 2015). For example, only probability sampling can yield a representative sample, because it incorporates random selection into the recruitment process (Feild et al., 2006). However, as social science researchers are rarely able to employ true random sampling, the field has seen…

Methods

The researchers conducted an online survey in December 2020 with subjects recruited from two data sources: Amazon Mechanical Turk (MTurk) and the Qualtrics Panels service. Both groups of participants completed a survey that measured variables from the IPI model, including perceived self-exposure, perceived other exposure, perceived influence on others, personal attitude, and behavioral intention. The full questionnaire instrument along with the study's pre-registration plan (including…
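
The IPI constructs listed above map onto a path model. As a minimal, non-authoritative sketch (not the authors' AMOS specification), the Python code below shows how one plausible IPI-style path structure could be specified and estimated with the semopy package, assuming the canonical chain from own exposure to presumed others' exposure to presumed influence and then to attitude and behavioral intention (Gunther & Storey, 2003); the column names and data file are hypothetical.

    # Sketch only: a plausible IPI-style path model in Python with the semopy package.
    # Not the authors' AMOS specification; variable names are hypothetical.
    import pandas as pd
    import semopy

    # Lavaan-style syntax: each line is one regression path in the presumed-influence chain.
    IPI_MODEL = """
    presumed_other_exposure ~ self_exposure
    presumed_influence ~ presumed_other_exposure
    attitude ~ presumed_influence
    behavioral_intention ~ presumed_influence + attitude
    """

    def fit_ipi(df: pd.DataFrame) -> pd.DataFrame:
        """Fit the sketched IPI path model and return the parameter estimate table."""
        model = semopy.Model(IPI_MODEL)
        model.fit(df)           # maximum-likelihood estimation on the observed variables
        return model.inspect()  # paths with estimates, standard errors, and p-values

    # Usage (hypothetical survey file with one row per respondent):
    # estimates = fit_ipi(pd.read_csv("ipi_survey.csv"))
    # print(estimates)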

Results

To examine RQ1, the researchers analyzed the data using path analysis via the IBM AMOS statistical package. The model was estimated with 5000 bootstrapped samples and 95% bias-adjusted confidence intervals. Missing data were replaced via expectation maximization prior to analysis. The researchers conducted multiple-group analyses to compare the invariance of model fit across the two sample populations. Specifically, the researchers compared the fit of two path models: a freely estimated model, …
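
As a rough illustration of the comparison logic only (not the AMOS multiple-group invariance procedure itself), the Python sketch below bootstraps a single hypothetical IPI path separately within each recruitment source and checks whether the coefficient's sign and significance agree across the MTurk and Qualtrics subsamples; the column names, file name, and chosen path are assumptions.

    # Conceptual sketch: per-group bootstrapped estimate of one IPI path, asking whether
    # direction and significance agree across MTurk and Qualtrics respondents.
    # This is NOT the authors' AMOS multiple-group test; all names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def bootstrap_path(df, formula, term, n_boot=5000, seed=2020):
        """Bootstrap one regression path; return its estimate and 95% percentile CI."""
        rng = np.random.default_rng(seed)
        point = smf.ols(formula, data=df).fit().params[term]
        boots = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(df), len(df))  # resample rows with replacement
            boots.append(smf.ols(formula, data=df.iloc[idx]).fit().params[term])
        lo, hi = np.percentile(boots, [2.5, 97.5])
        return {"estimate": point, "ci": (lo, hi), "significant": not (lo <= 0.0 <= hi)}

    def compare_sources(df):
        """Estimate the same path within each sample source and report agreement."""
        formula, term = "behavioral_intention ~ presumed_influence", "presumed_influence"
        for source, group in df.groupby("source"):  # e.g., "MTurk" vs. "Qualtrics"
            print(source, bootstrap_path(group, formula, term))

    # Usage (hypothetical file with a 'source' column marking the recruitment panel):
    # compare_sources(pd.read_csv("ipi_survey.csv"))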

Discussion

Online data collection services are increasingly employed by mass communication scholars, but questions remain regarding the relative efficacy of different online sampling services. The present study focused on two of the most widely used sources for online data recruitment: Amazon Mechanical Turk (MTurk) and Qualtrics Panel Services. Specifically, the present study tested whether an established theory from mass communication (namely, the influence of presumed influence model)…

Conclusion

In sum, the present study offers important methodological implications for the use of online samples. The findings show that while some differences across MTurk and Qualtrics samples can be found when estimating the IPI model, essential aspects of the model, including parameter significance and directionality, hold regardless of sample. For scholars interested in applying the IPI model, it appears that Qualtrics and MTurk both offer samples that should yield theoretically consistent results.

Author statement

All authors contributed to the project throughout the research, including conceptualization, data collection, analysis, and reporting.

References (47)

  • S.M. Jang et al.

    Third person effects of fake news: Fake news regulation and media literacy interventions

    Computers in Human Behavior

    (2018)
  • S.M. Smith et al.

    A multi-group analysis of online survey respondent data quality: Comparing a regular USA consumer panel to MTurk samples

    Journal of Business Research

    (2016)
  • K.A. Thomas et al.

    Validity and Mechanical Turk: An assessment of exclusion methods and interactive experiments

    Computers in Human Behavior

    (2017)
  • K. Ali et al.

    The effects of emotions, individual attitudes towards vaccination, and social endorsements on perceived fake news credibility and sharing motivations

    Computers in Human Behavior

    (2022)
  • Y.M. Baek et al.

    Fake news should be regulated because it influences both “others” and “me”: How and why the influence of presumed influence model should be extended

    Mass Communication & Society

    (2019)
  • U. Bernhard et al.

    Corrective or confirmative actions? Political online participation as a consequence of presumed media influences in election campaigns

    Journal of Information Technology & Politics

    (2015)
  • T.C. Boas et al.

    Recruiting large online samples in the United States and India: Facebook, Mechanical Turk, and Qualtrics

    Political Science Research and Methods

    (2020)
  • L. Chang et al.

    Comparing oral interviewing with self-administered computerized questionnaires: An experiment

    Public Opinion Quarterly

    (2010)
  • Y. Cheng et al.

    The influence of presumed fake news influence: Examining public support for corporate corrective response, media literacy interventions, and governmental regulation

    Mass Communication & Society

    (2020)
  • M. Chmielewski et al.

    An MTurk crisis? Shifts in data quality and the impact on study results

    Social Psychological and Personality Science

    (2020)
  • S. Clifford et al.

    Are samples drawn from Mechanical Turk valid for research on political ideology?

    Research & Politics

    (2015)
  • J. Erba et al.

    Sampling methods and sample populations in quantitative mass communication research studies: A 15-year census of six journals

    Communication Research Reports

    (2018)
  • L. Feild et al.

    Using probability vs. nonprobability sampling to identify hard-to-access participants for health-related research: Costs and contrasts

    Journal of Aging and Health

    (2006)
  • D.J. Follmer et al.

    The role of MTurk in education research: Advantages, issues, and future directions

    Educational Researcher

    (2017)
  • J.K. Goodman et al.

    Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples

    Journal of Behavioral Decision Making

    (2013)
  • A.C. Gunther et al.

    Presumed influence on peer norms: How mass media indirectly affect adolescent smoking

    Journal of Communication

    (2006)
  • A.C. Gunther et al.

    The influence of presumed influence

    Journal of Communication

    (2003)
  • E. Hargittai et al.

    Comparing internet experiences and prosociality in Amazon Mechanical Turk and population-based survey samples

    Socius

    (2020)
  • M.S. Heen et al.

    A comparison of different online sampling approaches for generating national samples

    Center for Crime and Justice Policy

    (2014)
  • C.A. Hoffner et al.

    Perceived media influence, mental illness, and responses to news coverage of a mass shooting

    Psychology of Popular Media Culture

    (2017)
  • S.S. Ho et al.

    Let’s nab fake science news: Predicting scientists’ support for interventions using the influence of presumed media influence model

    Journalism

    (2020)
  • T.P. Holt et al.

    Using Qualtrics panels to source external auditors: A replication study

    Journal of Information Systems

    (2019)
    T. Franklin Waddell (Ph.D., The Pennsylvania State University) is an associate professor in the College of Journalism and Communication at the University of Florida. His research addresses emerging technological and ethical issues at the intersection of journalism and online storytelling including topics such as automated journalism, online comments, and entertainment portrayals of female reporters.

    Holly Overton (Ph.D., The Pennsylvania State University, 2016) is an associate professor in the Bellisario College of Communications at The Pennsylvania State University and director of research for the Arthur W. Page Center for Integrity in Public Communication. Her research examines corporate social responsibility communication and corporate social advocacy.

    Robert McKeever (Ph.D., UNC-Chapel Hill) is an associate professor in the School of Journalism and Mass Communications at the University of South Carolina. His research interests include health communication and prosocial media effects on beliefs, attitudes, and behaviors.
