Open Access (CC BY 4.0). Published by De Gruyter Mouton, September 23, 2022

The mediating role of comments’ credibility in influencing cancer cure misperceptions and social sharing

Juan Liu, Carrie Reif-Stice and Bruce Getz

Abstract

Purpose

The rise of fake news poses a growing problem for cancer patients. Specifically, the use of cannabis as a cure for cancer is the most shared social media content regarding alternative cancer treatments (Shi, Siyu, Arthur R. Brant, Aaron Sabolch & Erqi Pollom. 2019. False news of a cannabis cancer cure. Cureus 11(1). e3918. DOI:10.7759/cureus.3918). To better understand the relationship between fake news, perceived credibility, social sharing, and belief in health misinformation, we conducted an online experiment in the United States to explore how people react to fake cancer news on Facebook.

Design/methodology/approach

A four-condition between-subjects online experiment was conducted to examine whether the perceived credibility of information and comments serve as mediating factors to influence misperceptions and social sharing of cancer misinformation.

Findings

We find that it is comments’ credibility, rather than information credibility, that mediates the effects of exposure to variations of comments on cancer treatment misperceptions and social sharing intentions.

Practical implications

Our study provides important insights into correcting health misinformation on social media. Findings demonstrate the importance of healthcare professionals and organizations engaging with misleading and potentially harmful misinformation posted on social media. Additionally, practitioners need to provide training to enhance individuals’ media literacy so they can better discern credible health information from misinformation on social media.

Value

The study advances prior misinformation correction and credibility literature. Theoretically, we find that perceived comments’ credibility acts as a mediator in mitigating the spread of fake news. Furthermore, exposure to variations of corrective comments (vs. peers’ supportive comments) increased cancer cure misperceptions via comments’ credibility, a backfire effect indicating that cancer cure misperceptions persisted and were difficult to correct.

1 Introduction

The widespread circulation of fake health news on social networking sites is dangerous to health consumers (Carlson 2018; Pennycook et al. 2020). The use of cannabis as a cancer cure represents the largest category of social media content about alternative cancer treatment (Shi et al. 2019). Viral claims that marijuana can treat serious health conditions, such as cancer, are a growing concern for the Food and Drug Administration (2017) in the United States. To combat the spread of cancer misinformation, Facebook has changed its algorithms to reduce the promotion and sharing of miracle cures (Hernandez and McMillan 2019), and prominent U.S. health organizations have addressed the health effects of marijuana (e.g., National Cancer Institute 2017; U.S. Department of Health and Human Services 2014), but this outreach has had limited influence on social media (Wang et al. 2019).

Although fake news is not a new phenomenon, the rapid sharing of fabricated and fraudulent health information across multiple platforms continues to garner scholarly attention (Sharma et al. 2017; Southwell et al. 2018). Researchers are exploring how health-related misinformation on social media presents a serious risk to public health (Melchior and Oliveira 2021). Extensive literature exists on the prevalence (Chen et al. 2018; Pennycook et al. 2020; Pulido et al. 2020; Waszak et al. 2018) and impact (Albarracin et al. 2018; Chua and Banerjee 2018; Johnson et al. 2022) of health misinformation. Research has found that prevention-related misinformation diffused more broadly and deeply than accurate information on social media (Chen et al. 2018; Pennycook et al. 2020), and that misinformation about cancer is potentially harmful because it promotes unproven treatments as alternatives (Johnson et al. 2022). Many social media users lack health literacy and struggle to distinguish accurate information from misleading claims (Oh and Lee 2019). As a result, they continue to spread health-related misinformation by sharing posts on their personal pages (Broniatowski et al. 2018; Chou et al. 2018; Kata 2012).

Extant studies explore how health organizations can create message interventions to contradict health-related misinformation on social media (Bode and Vraga 2017; Chou et al. 2018; Gesser-Edelsburg et al. 2018; van der Meer and Jin 2020; Vraga and Bode 2017a, 2017b), and expert sources have been found to be more effective in correcting misinformation than other users who engage in corrections on social media (Lewandowsky et al. 2012; van der Meer and Jin 2020; Vraga and Bode 2017b). However, only a few studies examine the mediating role of credibility perceptions in reducing belief in misinformation (Vraga and Bode 2018; Kim et al. 2020).

By focusing on cancer cure misinformation in the United States, we examine the role of various corrective comments in cancer cure misperceptions and social sharing. Specifically, the study investigates whether the perceived credibility of information and of comments serves as a mediator in this process. The study advances the health misinformation correction and credibility literature.

1.1 Defining misinformation and misperceptions

Both misinformation – the inadvertent sharing of false information (Bauer and von Hohenberg 2020) – and disinformation – “a coordinated or deliberate effort to knowingly circulate misinformation in order to gain money, power, or reputation” (Swire-Thompson and Lazer 2020, 435) – are growing and pervasive threats to the healthcare community (Allington et al. 2020; Rodgers and Massac 2020). Chou et al. (2018, 1) defined health-related misinformation as a “health-related claim of fact that is currently false due to the lack of scientific evidence.” Research indicates that health misinformation can have negative effects in the real world, including spreading controversy on vaccines (Broniatowski et al. 2018) and amplifying false cancer treatments (Gage-Bouchard et al. 2018).

Misperceptions refer to “cases in which people’s beliefs about factual matters are not supported by clear evidence and expert opinion” (Nyhan and Reifler 2010, 305). Misinformation, unlike misperceptions, concerns the creation and dissemination of inaccurate information (Su et al. 2022). Conversely, misperceptions are individuals’ beliefs in false or inaccurate information not supported by expert evidence (Vraga et al. 2020b). Research suggests that misinformation facilitates misperceptions. For example, after being exposed to misinformation on social media, people become less likely to engage in disease prevention behaviors (Larson 2018; Massey et al. 2020). The pervasive use of social media has contributed to the spread of health misinformation, and Su et al. (2022) found that social media information seeking was positively related to COVID-19 misperceptions. Misinformation and misperceptions work together to encourage the spread of inaccurate information. In other words, before individuals produce or spread false information, they typically first consume misinformation and come to hold misperceptions (Bode and Vraga 2015).

1.2 Social correction versus expert correction

To mitigate the spread of misinformation on social media, scholars and practitioners have examined various correction strategies, such as media literacy programs (Jones-Jang et al. 2019; Vraga et al. 2020a), fact-checkers (Ecker et al. 2020a; Lewandowsky et al. 2012), artificial intelligence (Fernandez-Luque and Imran 2018), algorithmic correction (Huang and Wang 2020), and responsive correction – a correction from a platform (Bode and Vraga 2015) or from another social media user refuting a claim (Margolin et al. 2018).

Scholars are exploring the importance of social corrections in limiting the circulation of false information (Bode and Vraga 2017; van der Meer and Jin 2020). Social correction, unlike other methods, relies on acquaintances on social networks, who are not necessarily credible but are part of a large and unknown audience (Bode and Vraga 2017; Edwards et al. 2014; Marwick and Boyd 2011). A social correction occurs when social media users refute misinformation either with an external source (e.g., a link to an expert source) or with no evidence (Bode and Vraga 2017). In most cases, social corrections are statements made by peers or other social media users. Individuals use online comments to navigate information and assess credibility (Metzger et al. 2010). According to Vraga and Bode (2017a), social corrections are effective in reducing misinformation, especially when there are multiple corrections and corrective information is cited from credible sources (Vraga and Bode 2020). In a study of the Zika virus, social corrections were effective in clarifying misinformation and reducing misperceptions about the causes of the virus (Bode and Vraga 2017).

Social corrections can function to correct misperceptions but differ across social media platforms (Vraga and Bode 2018). On Facebook, adding an external source to reinforce the social correction greatly enhanced perceptions of the corrective comments, while providing this source did not influence evaluations of corrective replies on Twitter (Vraga and Bode 2018). Given that competing social media channels provide different affordances (Majchrzak et al. 2013) and may be perceived differently by their users, the present study focuses on a single social media platform – Facebook – because cancer treatment misinformation articles obtained higher engagement on Facebook than on other platforms (Johnson et al. 2022).

For health misinformation, expert corrections are more effective than corrections from non-experts (Walter et al. 2020). Because expertise is often associated with credibility, expert corrections are effective in combating false health information. High credibility bolsters the strength of corrections, especially when the experts are perceived as unbiased actors (Bode and Vraga 2015; Petty and Brinol 2008; Slater and Rouner 1996). An expert has been operationalized as either a health professional or an official health agency (Vraga and Bode 2017b). Research shows that expert sources like the CDC are more effective than other users in diminishing misperceptions and anxiety (van der Meer and Jin 2020; Vraga and Bode 2017b) due to their credibility in presenting the messages, in line with a meta-analysis showing that high-credibility sources are more persuasive (Pornpitakpan 2004; Walter et al. 2019).

Previous research indicates that the valence of user comments affects perceived credibility and intentions to spread misinformation (Colliander 2019; Naab et al. 2020). Naab et al. (2020) found that participants who consumed critical user comments perceived the article as less credible than participants who were exposed to supportive comments. Colliander (2019) indicates that exposure to user comments critical of a fake news article leads people to have lower intentions to share the article than exposure to supportive comments. The current study investigates whether users’ supportive comments can lead individuals to believe in and spread inaccurate information.

Scholars have demonstrated how social and expert corrections help mitigate beliefs in misinformation (Bode and Vraga 2017; Vraga and Bode 2017a; Walter et al. 2020). However, little research addresses whether mixed social and expert corrective comments can minimize misperceptions and the sharing of misinformation. Additionally, further research is needed to understand whether corrective messages can cause backfire effects (Ecker et al. 2020b; Nyhan and Reifler 2010). Thus, the study asks the following research question:

RQ1:

What is the impact of exposure to comments (peers’ supportive comments vs. peers’ correction comments vs. expert-only correction comments vs. mixed correction comments) on (a) cancer treatment misperceptions and (b) social sharing?

1.3 Credibility evaluations as mediators

The role of credibility in reducing misperceptions was recently examined in the correction literature (Huang and Wang 2020; Kim and Masullo Chen 2020; Kim et al. 2020; van der Meer and Jin 2020; Vraga et al. 2020b). Credibility refers to the believability of information messages and sources, as perceived by the information receiver (Metzger and Flanagin 2011).

Metzger et al. (2003) defined credibility in terms of (a) the message source, (b) the message itself, such as message structure and content, and (c) the medium as the message dissemination platform. Source credibility has been defined as “judgments made by a perceiver concerning the believability of a communicator” (O’Keefe 1990, 130–131). Two dimensions are central to perceived source credibility: expertise and trustworthiness (Hovland et al. 1953; Pornpitakpan 2004). Expertise is the perceived knowledge, skill, and experience of the source (Fogg and Tseng 1999); it helps individuals assess the extent to which a communicator can make correct statements (Viviani and Pasi 2017). Trustworthiness is how likely people are to perceive the statements made by a communicator as valid (Hovland et al. 1953). This dimension is closely related to message credibility, which indicates information is trustworthy when it appears to be valid, accurate, and fair (Hilligoss and Rieh 2008).

Extant misinformation research has identified the useful role of expert source credibility in correcting misinformation across various expert sources, including news media (Nyhan and Reifler 2010; Thorson 2016; van der Meer and Jin 2020), health organizations (Vraga and Bode 2017b), government agencies (van der Meer and Jin 2020), and fact-checking organizations (Amazeen et al. 2018; Bode and Vraga 2015; Hameleers and Van der Meer 2019). Several studies examining expert corrections (Garrett et al. 2013; Gesser-Edelsburg et al. 2018; Lewandowsky et al. 2012; Nyhan and Reifler 2010; Vraga and Bode 2017b) argue that expertise is an integral part of credibility, and such source credibility can enhance the persuasive impact of the communication (Austin and Dong 1994; Chaiken and Maheswaran 1994; Eastin 2001). For instance, Vraga and Bode (2017b) discovered that because the public trusts the CDC, CDC corrections significantly reduced public misperceptions and did not harm the credibility of the organization.

Message credibility explores “how message characteristics impact perceptions of believability, either of the source or of the source’s message” (Metzger et al. 2003, 302). Viewed in this way, source and message credibility are overlapping and closely connected concepts (Slater and Rouner 1996; Stamm and Dube 1994). However, credibility is context dependent (Vraga et al. 2020b). Appelman and Sundar (2016, 63) defined message credibility as “an individual’s judgment of the veracity of the content of communication.” Message credibility is measured based on the content characteristics of the messages, such as ratings of believability, authenticity, accuracy, or trustworthiness (Appelman and Sundar 2016; Flanagin and Metzger 2000).

Scholars have investigated the impact of perceived source and medium credibility on correcting misinformation (Nyhan and Reifler 2010; Vraga and Bode 2017b, 2018; Mena et al. 2020). However, few studies have examined the mediating role of message credibility in reducing misperceptions (Huang and Wang 2020; Kim et al. 2020; Vraga et al. 2020b).

For instance, to compare two correction strategies (fact-focused vs. logic-focused), Vraga et al. (2020b) conducted a study in which participants viewed Instagram posts containing misinformation on climate change. Logic-focused corrections are rhetorical methods that reduce individuals’ misperceptions by identifying the misleading techniques in the messages, such as pointing out fake experts, oversimplification fallacies, and incomplete evidence (Cook et al. 2018), while fact-focused corrections combat misinformation by providing audiences with accurate information. Vraga et al. (2020b) found that misinformation credibility serves as a mediator between the corrections and individuals’ misperceptions of climate change, an indirect pathway that is significant only for the logic-focused correction, not the fact-focused correction. Another study demonstrated that attention to a correction image can reduce the credibility of a misinformation tweet, resulting in reduced human papillomavirus (HPV) misperceptions (Kim et al. 2020). Conversely, Huang and Wang (2020) showed that a narrative message format in social correction reduced credibility evaluations and decreased intentions to stop using e-cigarettes.

Theoretically, corrections debunking a misinformation message should decrease its credibility (Huang and Wang 2020; Vraga et al. 2020b). For instance, Vraga et al. (2020b) found that logic-based correction reduced the credibility of the misinformation tweet and led to more accurate attitudes toward the HPV vaccine, because logic-based correction explains the fallacious reasoning in misinformation arguments (Cook et al. 2017). Given that prior research (Kim et al. 2020; Vraga et al. 2020b) suggests corrections can reduce misinformation credibility, which in turn leads to lower misperceptions, we anticipate the following:

H1a:

Perceived information credibility mediates the effects of exposure to variations of comments on cancer treatment misperceptions.

While existing research has often studied the credibility of misinformation as a mediator of decreased misperceptions (Kim et al. 2020; Vraga et al. 2020b), it has not yet examined its effect on reducing sharing intentions. Prior fact-checking literature found that fact-checking information decreased social sharing intentions (Chung and Kim 2021). To bridge this gap, we expect that information credibility acts as a mediator of the effects of corrective comments on the sharing of misinformation. Thus, we propose the following hypothesis:

H1b:

Perceived information credibility mediates the effects of exposure to variations of comments on social sharing intentions.

Furthermore, our study examines whether comments’ credibility acts as a mediator affecting misperceptions and social sharing. Given that comments’ valence affects perceived news article credibility (Naab et al. 2020) and social sharing (Colliander 2019), and that Vraga et al. (2020b) revealed that the perceived credibility of correction messages mediated the effect of logic-focused corrections on climate misperceptions, we expect perceptions of comments’ credibility to act as a mediator affecting misperceptions and social sharing intentions regarding the inaccurate claim. Thus, the following hypotheses are proposed:

H2a:

Perceived comments’ credibility mediates the effects of exposure to variations of comments on cancer treatment misperceptions.

H2b:

Perceived comments’ credibility mediates the effects of exposure to variations of comments on social sharing intentions.

2 Method

2.1 Study design

A four-condition between-subjects experiment was conducted in early July 2019. Participants were randomly assigned to view one of four types of comments (peers’ supportive comments vs. peers’ correction comments vs. expert-only correction comments vs. mixed correction comments) on a Facebook post claiming marijuana can cure cancer.[1]

2.2 Sample

A total of 358 participants were recruited via Amazon’s Mechanical Turk (MTurk) and completed the experiment. To participate, individuals needed to reside in the U.S. and be at least 18 years old. The resulting sample was 64.2% male and 69.8% college-educated, with an average age of 31.4 years. The sample comprised 43.3% who identified as Caucasian, 41.3% as Asian, 7.3% as Black or African American, 4.2% as Hispanic, and 3.9% as other. Using 5-point Likert scales (1 = strongly disagree to 5 = strongly agree), participants rated information from the Centers for Disease Control and Prevention (CDC) (M = 4.05, SD = 0.829), the American Red Cross (M = 3.94, SD = 0.938), and the World Health Organization (WHO) (M = 4.09, SD = 0.887) as credible. Additionally, 18.7% of participants self-reported that they had been diagnosed with cancer. Participants spent a moderate amount of time looking for cancer treatment information (M = 3.44, SD = 2.27) and indicated generally high health efficacy (M = 5.50, SD = 0.96).

2.3 Procedure and manipulations

Participants were randomly exposed to a simulated Facebook post featuring a fake health news headline claiming “marijuana kills cancer” with a list of different types of comments (see Appendix) and were asked to imagine that they came across it in their news feeds. The post contained a photo, headline, and the same number of “likes,” “shares,” and “comments” across all four groups. To enhance external validity, the stimulus was based on a real fake news article claiming marijuana kills cancer that was documented by snopes.com (Kasprak 2018).

In the peers’ supportive comments condition, 85 participants were exposed to three user comments supporting the Facebook post’s claim. In the peers’ corrective comments condition, 84 participants were shown three user comments criticizing the post’s point of view. A 5-point Likert scale measured participants’ agreement that the user comments under the Facebook post supported its claim. A manipulation check confirmed that participants who encountered peer comments supporting the post were more likely to recognize that the user comments supported the claim of the post (M = 4.04, SD = 0.79) than participants who read peer comments opposing the post (M = 2.95, SD = 1.46), t(167) = 6.02, p < 0.001.

In the expert-only comments condition, 94 participants were exposed to three comments from expert organizations (CDC, World Health Organization, and American Red Cross) correcting the claim. In the mixed comments condition, 95 participants received combined comments (two user comments and one CDC comment) critical of the claim. Two questions asked participants to report how many comments came from individuals and how many from organizations. A manipulation check showed significant differences across the experimental conditions in the number of individual comments, χ2(12, N = 355) = 118.74, p < 0.001, and the number of comments from organizations, χ2(12, N = 355) = 113.91, p < 0.001.
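
For readers who want to see the shape of these two checks, the sketch below shows how they could be computed with scipy. The DataFrame `df` and its column names (`condition`, `support_rating`, `n_user_comments`) are hypothetical stand-ins, not the authors' actual variables or code.

```python
# Hypothetical sketch of the two manipulation checks reported above.
# `df` is assumed to hold one row per participant with columns:
#   condition        - one of the four experimental groups
#   support_rating   - 5-point agreement that the comments support the post
#   n_user_comments  - reported number of comments from individual users
import pandas as pd
from scipy import stats

# Independent-samples t-test: supportive vs. corrective peer comments.
supportive = df.loc[df["condition"] == "peer_supportive", "support_rating"]
corrective = df.loc[df["condition"] == "peer_corrective", "support_rating"]
t_stat, p_val = stats.ttest_ind(supportive, corrective)
print(f"t({len(supportive) + len(corrective) - 2}) = {t_stat:.2f}, p = {p_val:.4f}")

# Chi-square test of independence: reported comment counts by condition.
table = pd.crosstab(df["condition"], df["n_user_comments"])
chi2, p_val, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_val:.4f}")
```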

After exposure to the stimuli, participants were asked about the perceived credibility of the claim, the perceived credibility of the comments, their beliefs in the claim, and demographic information. There were no significant differences in participants’ background characteristics, cancer diagnosis, seeking of cancer treatment information, or health efficacy across experimental conditions, indicating successful randomization.

2.4 Measures

The predictor variables of interest in this study were the exposure to different types of comments, which was a manipulation in the study design.

Perceived credibility of information. Participants were asked to rate the Facebook post on 7-point scales adapted from Vraga and Bode (2017b) to measure its trustworthiness, credibility, accuracy, and informativeness (M = 4.33, SD = 1.83, Cronbach’s α = 0.97).

Perceived credibility of comments was adapted from Vraga and Bode (2017b), measuring how trustworthy, credible, accurate, relevant, and useful participants perceived the comments they viewed to be, on 7-point Likert scales (M = 4.67, SD = 1.28, Cronbach’s α = 0.92).

Misperceptions about cancer treatment were measured with five items. Using 7-point Likert scales (1 = strongly disagree to 7 = strongly agree), participants were asked their level of agreement that (a) marijuana can cure cancer, (b) more marijuana should be used to cure cancer, (c) use of marijuana will greatly improve cancer treatment, (d) marijuana can save cancer patients’ lives, and (e) marijuana is the new anti-cancer drug (M = 4.26, SD = 1.83, Cronbach’s α = 0.95).

Social sharing was adapted from prior measures (e.g., Lee and Ma 2012; Weeks and Holbert 2013). On a 7-point scale, participants were asked their likelihood to comment on, share, like, and talk to someone else about the Facebook post on cancer treatment (M = 4.19, SD = 1.84, Cronbach’s α = 0.94).

In addition, participants’ race (recoded as White = 1 vs. non-White = 0), cancer diagnosis, frequency of seeking cancer treatment information, and health efficacy were measured as covariates.
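
For reference, the reliability coefficients reported above (Cronbach's α) can be computed from the raw item ratings as in the minimal sketch below; the item column names are hypothetical.

```python
# Minimal sketch: Cronbach's alpha for a multi-item scale.
# `items` is an (N participants x k items) array of ratings for one scale.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# e.g., the five misperception items (hypothetical column names):
# alpha = cronbach_alpha(df[["misp1", "misp2", "misp3", "misp4", "misp5"]])
```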

3 Results

To answer RQ1a and RQ1b, we conducted a one-way multivariate analysis of covariance (MANCOVA) to explore whether exposure to various comments affects participants’ cancer treatment misperceptions and the likelihood of social sharing after controlling for the covariates. No significant effects of experimental conditions on the two dependent variables were found, Wilks’ Lambda = 0.97, F(6, 350) = 1.64, p = 0.13, ηp² = 0.014. Of the covariates, time spent looking for cancer treatment information, F(1, 350) = 101.91, p < 0.001, ηp² = 0.23, and health efficacy, F(1, 350) = 15.97, p < 0.001, ηp² = 0.04, were significantly related to belief in misinformation, whereas being White, F(1, 350) = 8.18, p = 0.004, ηp² = 0.02, time spent looking for cancer treatment information, F(1, 350) = 16.43, p < 0.001, ηp² = 0.05, and having a cancer diagnosis, F(1, 350) = 4.87, p = 0.028, ηp² = 0.01, were significantly related to social sharing.
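
To make the model structure concrete, a MANCOVA of this form could be specified along the lines of the sketch below using statsmodels; the DataFrame `df` and all column names are hypothetical stand-ins for the study's variables, not the authors' actual code.

```python
# Sketch of the omnibus MANCOVA: two dependent variables regressed on the
# experimental condition plus the four covariates. mv_test() reports
# Wilks' lambda, F, and p for each term in the model.
from statsmodels.multivariate.manova import MANOVA

model = MANOVA.from_formula(
    "misperceptions + sharing ~ C(condition) + white + cancer_dx"
    " + info_seeking + health_efficacy",
    data=df,
)
print(model.mv_test())
```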

Table 1 summarizes the means of cancer treatment misperceptions and social sharing by experimental conditions. An examination of the univariate effects was performed to check whether different comments influenced cancer treatment misperceptions and social sharing. For social sharing, results did not identify a significant main effect, F(1, 350) = 1.69, p = 0.17, ηp² = 0.01. As for cancer cure misperceptions, the univariate analyses indicated there were significant differences among comments, F(1, 350) = 2.69, p = 0.046, ηp² = 0.02. Specifically, participants in the mixed comments condition reported relatively lower misperceptions (M = 4.12, SD = 1.99) than participants reading peers’ supportive comments (M = 4.65, SD = 1.66).

Table 1:

Means of cancer treatment misperceptions and social sharing of misinformation by experimental conditions.

                                       Peers'        Peers'        Expert-only   Mixed
                                       supportive    corrective    corrective    corrective
Experimental conditions                comments      comments      comments      comments
                                       (N = 85)      (N = 84)      (N = 94)      (N = 95)
Cancer treatment misperceptions   M    4.65          4.27          4.05          4.12
                                  SD   1.66          1.88          1.71          1.99
Social sharing of misinformation  M    3.85          4.12          4.43          4.30
                                  SD   1.74          1.96          1.72          1.90

3.1 Mediation via information credibility

H1a predicted that the credibility of the information acts as a mediator between the types of comments and participants’ cancer treatment misperceptions. We used Hayes’s (2013) PROCESS macro (Model 4) with 5,000 bias-corrected bootstrap samples and 95% confidence intervals (CIs) to test this mediation hypothesis. An effect is statistically significant (p < 0.05) when the CI, from the lower bound (LL) to the upper bound (UL), does not include zero.
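
The logic of this test can be sketched as follows: regress the mediator on the condition contrast (path a), regress the outcome on both (path b), and bootstrap the product a × b. The sketch below is a simplified percentile-bootstrap stand-in for the bias-corrected intervals PROCESS reports, with covariates omitted; `x`, `m`, and `y` are hypothetical NumPy arrays, not the study's data.

```python
# Simplified percentile-bootstrap mediation (cf. PROCESS Model 4).
# x: condition dummy (e.g., expert-only = 1 vs. peer-supportive = 0)
# m: perceived information credibility; y: cancer treatment misperceptions
import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]   # path a: X -> M
    xm = sm.add_constant(np.column_stack([x, m]))
    b = sm.OLS(y, xm).fit().params[2]                   # path b: M -> Y, controlling X
    return a * b

rng = np.random.default_rng(0)
n = len(x)
boot = np.array([indirect_effect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(5000))])
ll, ul = np.percentile(boot, [2.5, 97.5])
# The indirect effect is deemed significant when [ll, ul] excludes zero.
```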

We found a positive relationship between the credibility of the information and cancer treatment misperceptions (B = 0.83, SE = 0.03, p < 0.001), but the indirect pathways between the types of comments and cancer treatment misperceptions via information credibility are not significant (see Table 2). The direct effects of the comments on cancer treatment misperceptions remain significant for the expert-only corrections (B = −0.38, SE = 0.15, p = 0.011) compared with peers’ supportive comments. Analyses suggest that expert-only comments, compared with peers’ supportive comments, exert a direct impact on cancer treatment misperceptions that is not explained by information credibility. H1a is not supported.

Table 2:

Indirect pathways of different types of comments in predicting cancer treatment misperceptions via information credibility.

Cancer treatment misperceptions
B SE LLCI ULCI
Indirect via information credibility
Peer corrections versus peer supporting comments −0.14 0.28 −0.69 0.42
Expert-only corrections versus peer supporting comments −0.27 0.27 −0.81 0.27
Mixed corrections versus peer supporting comments −0.33 0.27 −0.86 0.21
Direct effects
Peer corrections versus peer supporting comments −0.27 0.15 −0.58 0.03
Expert-only corrections versus peer supporting comments −0.38a 0.15 −0.67 −0.09
Mixed corrections versus peer supporting comments −0.26 0.15 −0.56 0.03
Total effects model
Peer corrections versus peer supporting comments −0.39 0.28 −0.94 0.16
Expert-only corrections versus peer supporting comments −0.60a 0.27 −1.14 −0.07
Mixed corrections versus peer supporting comments −0.53a 0.27 −1.07 −0.00
Note: Unstandardized beta coefficients reported; effects are significant when the 95 percent confidence interval does not include zero (p < 0.05). LLCI = lower-level confidence interval; ULCI = upper-level confidence interval. ap < 0.05.

H1b proposed that information credibility acts as a mediator between the types of comments and social sharing. However, neither the indirect nor the direct pathways were significant, as shown in Table 3. H1b is not supported.

Table 3:

Indirect pathways of different types of comments in predicting social sharing of misinformation via information credibility.

Social sharing of misinformation
B SE LLCI ULCI
Indirect via information credibility
Peer corrections versus peer supporting comments −0.14 0.28 −0.69 0.42
Expert-only corrections versus peer supporting comments −0.27 0.27 −0.81 0.27
Mixed corrections versus peer supporting comments −0.33 0.27 −0.86 0.21
Direct effects
Peer corrections versus peer supporting comments 0.20 0.24 −0.27 0.66
Expert-only corrections versus peer supporting comments 0.44 0.23 −0.01 0.89
Mixed corrections versus peer supporting comments 0.28 0.23 −0.18 0.73
Total effects model
Peer corrections versus peer supporting comments 0.27 0.28 −0.28 0.83
Expert-only corrections versus peer supporting comments 0.59a 0.27 0.05 1.12
Mixed corrections versus peer supporting comments 0.45 0.27 −0.08 0.99
Note: Unstandardized beta coefficients reported; effects are significant when the 95 percent confidence interval does not include zero (p < 0.05). LLCI = lower-level confidence interval; ULCI = upper-level confidence interval. ap < 0.05.

3.2 Mediation via the credibility of comments

H2a proposed that perceived comments’ credibility serves as a mediator between the types of comments and participants’ cancer treatment misperceptions. In this model, the credibility of comments (B = 0.43, SE = 0.07, p < 0.001) is positively associated with cancer cure misperceptions. The indirect effect of exposure to the three types of corrective comments (vs. peers’ supportive comments) on cancer cure misperceptions is significant.

Additionally, there remains a significant direct effect on cancer cure misperceptions, by which exposure to peers’ corrective, expert-only, and mixed corrective comments results in lower misperceptions compared to peers’ supportive comments. Supporting H2a, perceived comments’ credibility partially mediates between the types of comments and cancer treatment misperceptions (see Table 4). Sobel’s test also indicated that the partial mediation is statistically significant.[2]
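
For reference, the Sobel test mentioned here divides the indirect effect a·b by its approximate standard error. A minimal sketch follows; the inputs are the two path coefficients and their standard errors, supplied by the analyst, not values hard-coded from the study.

```python
# Sobel test: z = a*b / sqrt(b^2*sa^2 + a^2*sb^2), where a and b are the
# two mediation path coefficients and sa, sb their standard errors.
from math import sqrt
from scipy.stats import norm

def sobel(a, sa, b, sb):
    z = (a * b) / sqrt(b**2 * sa**2 + a**2 * sb**2)
    p = 2 * (1 - norm.cdf(abs(z)))  # two-tailed p-value
    return z, p
```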

Table 4:

Indirect pathways of different types of comments in predicting cancer treatment misperceptions via comments credibility.

Cancer treatment misperceptions
B SE LLCI ULCI
Indirect via comments credibility
Peer corrections versus peer supporting comments 0.50a 0.19 0.12 0.89
Expert-only corrections versus peer supporting comments 0.66b 0.19 0.29 1.03
Mixed corrections versus peer supporting comments 0.58b 0.19 0.21 0.95
Direct effects
Peer corrections versus peer supporting comments −0.60a 0.27 −1.14 −0.07
Expert-only corrections versus peer supporting comments −0.88c 0.26 −1.40 −0.36
Mixed corrections versus peer supporting comments −0.78b 0.26 −1.30 −0.27
Total effects model
Peer corrections versus peer supporting comments −0.39 0.28 −0.94 0.16
Expert-only corrections versus peer supporting comments −0.60a 0.27 −1.14 −0.07
Mixed corrections versus peer supporting comments −0.53a 0.27 −1.07 −0.00
Note: Unstandardized beta coefficients reported; effects are significant when the 95 percent confidence interval does not include zero (p < 0.05). LLCI = lower-level confidence interval; ULCI = upper-level confidence interval. ap < 0.05, bp < 0.01, cp < 0.001.

Finally, we tested whether the credibility of the comments mediates the relationship between types of comments and social sharing (H2b). The model showed that the credibility of comments (B = −0.28, SE = 0.08, p < 0.001) is negatively associated with intentions to share the misinformation. The indirect pathway via comments’ credibility is significant for all three types of corrective comments: viewing peer corrections, expert-only corrections, and mixed corrections led to higher credibility assessments of the corrective comments than viewing peers’ supportive comments did, which subsequently reduced social sharing intentions. Moreover, the direct effects on social sharing intentions remain significant for expert-only comments and mixed comments (vs. peers’ supportive comments) (see Table 5).[3]

Table 5:

Indirect pathways of different types of comments in predicting social sharing of misinformation via comments credibility.

Social sharing
B SE LLCI ULCI
Indirect via comments credibility
Peer corrections versus peer supporting comments 0.50a 0.19 0.12 0.89
Expert-only corrections versus peer supporting comments 0.66c 0.19 0.29 1.03
Mixed corrections versus peer supporting comments 0.58b 0.19 0.21 0.95
Direct effects
Peer corrections versus peer supporting comments 0.42 0.28 −0.14 0.97
Expert-only corrections versus peer supporting comments 0.77b 0.27 0.23 1.31
Mixed corrections versus peer supporting comments 0.62a 0.27 0.08 1.15
Total effects model
Peer corrections versus peer supporting comments 0.27 0.28 −0.28 0.83
Expert-only corrections versus peer supporting comments 0.59a 0.27 0.05 1.12
Mixed corrections versus peer supporting comments 0.45 0.27 −0.08 0.99
Note: Unstandardized beta coefficients reported; effects are significant when the 95 percent confidence interval does not include zero (p < 0.05). LLCI = lower-level confidence interval; ULCI = upper-level confidence interval. ap < 0.05, bp < 0.01, cp < 0.001.

Therefore, it is comments’ credibility rather than information credibility that mediates the effects of exposure to variations of comments on cancer treatment misperceptions and social sharing intentions. In other words, participants who read peers’ correction comments, expert-only correction comments, or mixed correction comments were less likely to share the misinformation, via comments’ credibility, than participants who read peers’ supportive comments. By contrast, exposure to variations of corrective comments (vs. peers’ supportive comments) increased cancer cure misperceptions via comments’ credibility.

4 Discussion

The present study adds to our understanding of the correction of a popular cancer treatment misconception on Facebook. Results suggest that using various corrective comments (vs. supportive comments) to counter the social sharing of misinformation is only effective when individuals perceive the corrective comments as credible and meaningful. The study advances prior correction and credibility literature in three ways. First, we examine the information credibility and comments’ credibility evaluations that people may engage in when processing misinformation and online comments on social media. Second, we select a socially debated health topic – cannabis curing cancer – given its salience and high engagement on social media (Allem et al. 2020; Shi et al. 2019). Finally, we investigate the credibility of information and of variations of comments as mediators in reducing subsequent sharing intentions.

Our results suggest that while exposure to peers’ corrective, expert-only, or mixed corrective comments (vs. peers’ supportive comments) is effective on Facebook at enhancing perceived comments’ credibility, leading to a lower likelihood of sharing misinformation, it also increased cancer treatment misperceptions, a “backfire effect” (Nyhan 2021; Nyhan and Reifler 2010) in which individuals endorsed the misperceptions more strongly when they were exposed to corrections.

Our mediation analyses provide important insight into how perceived comments’ credibility influences misperceptions and the spread of health misinformation. One notable finding is that exposure to variations of corrective comments, as compared to peers’ supportive comments, increased credibility evaluations of the comments but also prompted participants to dismiss unwelcome factual corrections. Credibility plays an important orienting role in a message’s persuasive power (Sülflow et al. 2019). Findings show that while viewing corrective messages about cancer treatment can strengthen credibility perceptions, it may unintentionally boost the false claim’s familiarity, which may offset the intended effect of the corrections. Such corrective messages may activate counterarguments (Ecker et al. 2020a), leaving participants more certain of their prior preferences or misperceptions. The persistence of cancer cure misperceptions could be attributed to corrective messages failing to persuade people to change their beliefs. Another reason is that individuals may fall for misinformation due to limited cognitive ability and processing effort (Nyhan 2021). Higher levels of analytic thinking or media literacy are needed for people to navigate the complicated information environment on social media (Xiao et al. 2021).

Additionally, the mediation analyses suggested that participants who viewed various corrective comments, as compared to peers’ supportive comments, perceived the comments as highly credible and trustworthy, subsequently reducing intentions to share the inaccurate claim on social media. This finding suggests that exposure to corrective comments from peers, expert organizations, or a mix of the two debunking fake news improves perceived correction credibility, which leads to weaker intentions to spread misinformation. Results highlight that promoting high-quality information on social media enhances the perceived credibility of corrective comments, and this type of information discourages people from sharing fake news on social media (Chung and Kim 2021).

It is worth mentioning that information credibility did not act as a mediator of misperceptions and social sharing. This finding is inconsistent with prior research showing that the effect of correction strategy on misperceptions was mediated by misinformation credibility (Kim et al. 2020; Vraga et al. 2020b). One possible reason is that misperceptions arising from real-world misinformation (e.g., miracle cancer cures) tend to be more difficult to correct than those arising from constructed misinformation (Walter and Murphy 2018). Efforts to debunk real-world misinformation encounter many practical challenges, such as previous exposure, prior attitudes, and defensive processing (Thorson 2016). Future research could explore how individuals’ prior exposure to fake news influences perceptions of information credibility to explain the effectiveness of corrective comments.

Our study carries meaningful implications for health professionals and organizations involved in mitigating the diffusion of health misinformation on social media. This study shows that comments’ credibility plays a mediating role in weakening intentions to share fake news. Because individuals seek out health information online, these findings are especially important. Inaccurate health information, such as a false cure, can have negative and harmful consequences for users’ health (Eysenbach 2008; Freeman and Spyridakis 2004). The ability to assess the reliability, validity, and credibility of corrective messages is an important aspect of media literacy (Swire-Thompson and Lazer 2020). Our findings further stress the importance of effectively evaluating the credibility of messages to reduce the sharing of misinformation. When people demonstrate high levels of media literacy, they can critically process social media content and identify credible health-related information (Guess et al. 2020). Scholars (Koc and Barut 2016; Lin et al. 2013) have illustrated a positive relationship between the need for cognition (NFC) and media literacy. People with a high level of NFC are skeptical of social media information, which promotes media-literate behaviors such as fact-checking (Vraga and Tully 2021). To mitigate the potential harm caused by health misinformation online, one important intervention is media literacy education that strengthens individuals’ digital media literacy (Kahne and Bowyer 2019) and helps individuals analyze information critically (Choi and Stvilia 2015).

Although training individuals with health literacy (Oh and Lee 2019) is important in combating health misinformation, health practitioners also need to focus on developing strong online clinician-client relationships (Trembath et al. 2016). When health organizations have a strong media presence and active engagement, users are less likely to disseminate false claims and feel emotionally connected to the organization (Gesser-Edelsburg et al. 2018; Oh and Lee 2019).

The findings of our study contribute to empirical research on correcting health misinformation by providing both theoretical and practical implications. Theoretically, we found that credibility evaluations of comments act as a mediator in stemming the spread of fake news. The study also showed that exposure to post hoc corrective comments, as compared to peers’ supportive comments, can increase comments’ credibility and lead to a backfire effect: such corrections might familiarize individuals with previously heard false claims, resulting in stronger misconceptions. Thus, we emphasize the importance of enhancing individuals’ media literacy to better decipher misleading content on social media. Practically, to minimize the spread of health misinformation, organizations like the CDC and WHO need a strong social media presence with a high level of user engagement (Gesser-Edelsburg et al. 2018; Vraga and Bode 2017b). We recommend that health organizations actively promote and bolster facts, high-quality information, and health literacy, and correct false claims related to public health.

4.1 Limitations

Several limitations need to be addressed. First, this study relied upon a non-generalizable sample from Amazon’s Mechanical Turk (MTurk). Although the study’s population was not representative of the broader American public, the participants are internet users who are likely to encounter misinformation on social media. Future research could expand the populations studied to examine how education level or need for cognition influences the consumption of misinformation.

Second, this study examines the effects of variations of comments without measuring prior misperceptions or previous attitudes toward the medical use of marijuana. Future research might consider assessing participants’ prior knowledge and attitudes before exploring the interventions’ impact.

Finally, the legalization of marijuana in the United States has increased the availability of cannabis and motivated interest in adopting marijuana as a therapeutic agent. Therefore, it is important to consider the legal status of marijuana in the United States and how this status might affect participants’ perceptions of marijuana as a medical treatment. Many states have legalized recreational and/or medical marijuana use (Berke and Gould 2020) despite its criminal status at the federal level. With several overlapping and sometimes contradictory legal regimes at the state and federal levels, realistic and de-criminalized access to marijuana may be a more complicated issue for participants depending on their geographic location, perhaps independent of their personal beliefs. The study was conducted in the U.S., so it is necessary to investigate whether these findings would generalize to other countries, especially countries where medical use of marijuana is illegal.


Corresponding author: Juan Liu, Department of Mass Communication, Towson University, Towson, Maryland, USA
Article Note: This article underwent double-blind peer review.

Funding source: Columbus State University Grant

About the authors

Juan Liu

Juan Liu (Ph.D., Wayne State University) is an Assistant Professor in the Department of Mass Communication at Towson University. Her research centers on correcting misinformation on social media, media effects, public opinion, and corporate social advocacy.

Carrie Reif-Stice

Carrie Reif-Stice (Ph.D., The University of Southern Mississippi) is an Assistant Professor in the Department of Communication at Augusta University. Her research focuses on risk, crisis, and health communication. She has been published in communication journals and refereed book chapters.

Bruce Getz

Bruce Getz, Jr. (Ph.D., University of Florida) is an educator and creative services professional and currently serves as Assistant Professor of Integrated Media within Columbus State University’s Department of Communication.

Research funding: This study is funded by a grant from Columbus State University.

Appendix A

  1. Peer Comments Supporting the Facebook Post “Marijuana Kills Cancer”

  2. Peer Comments Correcting the Facebook Post “Marijuana Kills Cancer”

  3. Expert-Only Condition Correcting the Facebook Post “Marijuana Kills Cancer”

  4. Mixed Peer and Expert Condition Correcting the Facebook Post “Marijuana Kills Cancer”

References

Albarracin, Dolores, Daniel Romer, Christopher Jones, Kathleen H. Jamieson & Patrick Jamieson. 2018. Misleading claims about tobacco products in YouTube videos: Experimental effects of misinformation on unhealthy attitudes. Journal of Medical Internet Research 20(6). e229. https://doi.org/10.2196/jmir.9959.Search in Google Scholar

Allem, Jon-Patrick, Patricia Escobedo & Likhit Dharmapuri. 2020. Cannabis surveillance with Twitter data: Emerging topics and social bots. American Journal of Public Health 110(3). 357–362. https://doi.org/10.2105/ajph.2019.305461.Search in Google Scholar

Allington, Daniel, Bobby Duffy, Simon Wessely, Nayana Dhavan & James Rubin. 2020. Health-protective behaviour, social media usage and conspiracy belief during the COVID-19 public health emergency. Psychological Medicine 51(10). 1–7. https://doi.org/10.1017/s003329172000224x.Search in Google Scholar

Amazeen, Michelle A., Emily, Thorson, Ashley, Muddiman & Lucas, Graves. 2018. Correcting political and consumer misperceptions: The effectiveness and effects of rating scale versus contextual correction formats. Journalism & Mass Communication Quarterly 95(1). 28–48.10.1177/1077699016678186Search in Google Scholar

Appelman, Alyssa & S. Shyam Sunder. 2016. Measuring message credibility: Construction and validation of an exclusive scale. Journalism & Mass Communication Quarterly 93(1). 59–79. https://doi.org/10.1177/1077699015606057.Search in Google Scholar

Austin, W. Erica & Qingwen Dong. 1994. Source v. content effects on judgments of news credibility. Journalism Quarterly 71. 973–983. https://doi.org/10.1177/107769909407100420.Search in Google Scholar

Bauer, Paul C. & Bernhard Clemm von Hohenberg. 2020. Believing and sharing information by fake sources: An experiment. Political Communication 38(6). 647–671. https://doi.org/10.1080/10584609.2020.1840462.Search in Google Scholar

Berke, Jeremy & Skye Gould. 2020. Legal marijuana just went on sale in Illinois. Here are all the states where cannabis is legal. Business Insider. https://www.businessinsider.nl/legal-marijuana-states-2018-1/ (accessed 01 January 2020).Search in Google Scholar

Bode, Leticia & Emily K. Vraga. 2015. In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication 65. 619–638. https://doi.org/10.1111/jcom.12166.Search in Google Scholar

Bode, Leticia & Emily K. Vraga. 2017. See something, say something: Correction of global health misinformation on social media. Health Communication 33(9). 1131–1140. https://doi.org/10.1080/10410236.2017.1331312.Search in Google Scholar

Broniatowski, A. David, Amelia M. Jamison, SiHua Qi, Lulwah AlKulaib, Tao Chen, Adrian Benton, Sandra C. Quinn & Mark Dredze. 2018. Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health 108(10). 1378–1384. https://doi.org/10.2105/ajph.2018.304567.Search in Google Scholar

Carlson, Matt. 2018. Fake news as an informational moral panic: The symbolic deviancy of social media during the 2016 US presidential election. Information, Communication & Society 23(3). 374–388. https://doi.org/10.1080/1369118x.2018.1505934.Search in Google Scholar

Chaiken, Shelly & Durairaj Maheswaran. 1994. Heuristic processing can bias systematic pro- cessing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment. Journal of Personality and Social Psychology 66. 460–473. https://doi.org/10.1037/0022-3514.66.3.460.Search in Google Scholar

Chen, Liang, Xiaohui Wang & Tai-Quan Peng. 2018. Nature and diffusion of gynecologic cancer–related misinformation on social media: Analysis of tweets. Journal of Medical Internet Research 20(10). e11515. https://doi.org/10.2196/11515.Search in Google Scholar

Choi, Wonchan & Besiki Stvilia. 2015. Web credibility assessment: Conceptualization, operationalization, variability, and models. Journal of the Association for Information Science and Technology 66(12). 2399–2414. https://doi.org/10.1002/asi.23543.Search in Google Scholar

Chou, Wen-Ying Sylvia, April Oh & William M. P. Klein. 2018. Addressing health-related misinformation on social media. JAMA 320(23). 2417–2418. https://doi.org/10.1001/jama.2018.16865.Search in Google Scholar

Chua, Y. K. Alton & Snehasish Banerjee. 2018. Intentions to trust and share online health rumors: An experiment with medical professionals. Computers in Human Behavior 87. 1–9. https://doi.org/10.1016/j.chb.2018.05.021.Search in Google Scholar

Chung, Myojung & Nuri, Kim. 2021. When I learn the news is false: How fact-checking information stems the spread of fake news via third-person perception. Human Communication Research 47(1). 1–24.10.1093/hcr/hqaa010Search in Google Scholar

Colliander, Jonas. 2019. This is fake news”: Investigating the role of conformity to other users’ views when commenting on and spreading disinformation in social media. Computers in Human Behavior 97. 202–215. https://doi.org/10.1016/j.chb.2019.03.032.Search in Google Scholar

Cook, John, Peter Ellerton & David Kinkead. 2018. Deconstructing climate misinformation to identify reasoning errors. Environmental Research Letters 13(2). 024018. https://doi.org/10.1088/1748-9326/aaa49f.Search in Google Scholar

Cook, John, Stephan Lewandowsky & Ullrich K. H. Ecker. 2017. Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One 12(5). e0175799. https://doi.org/10.1371/journal.pone.0175799.Search in Google Scholar

Eastin, Matthew S. 2001. Credibility assessments of online health information: The effects of source expertise and knowledge of content. Journal of Computer-Mediated Communication 6(4). 643. https://doi.org/10.1111/j.1083-6101.2001.tb00126.x.Search in Google Scholar

Ecker, Ullrich K.H., Stephan Lewandowsky & Matthew Chadwick. 2020a. Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect. Cognitive Research: Principles and Implications 5(1). 1–25. https://doi.org/10.1186/s41235-020-00241-6.Search in Google Scholar

Ecker, K.H. Ullrich, Ziggy O’Reilly, Jesse S. Reid & Ee Pin Chang. 2020b. The effectiveness of short-format refutational fact-checks. British Journal of Psychology 111(1). 36–54. https://doi.org/10.1111/bjop.12383.Search in Google Scholar

Edwards, Chad, Autumn Edwards, Patric R. Spence & Ashleigh K. Shelton. 2014. Is that a bot running the social media feed? Testing the differences in perceptions of communication quality for a human agent and a bot agent on Twitter. Computers in Human Behavior 33. 372–376. https://doi.org/10.1016/j.chb.2013.08.013.Search in Google Scholar

Eysenbach, Gunther. 2008. Credibility of health information and digital media: New perspectives and implications for youth. In Metzger, Miriam J. & Flanagin, Andew J. (eds.), Digital media, youth, and credibility, 123–154. Cambridge, MA: The MIT Press.Search in Google Scholar

Fernandez-Luque, Luis & Muhammad Imran. 2018. Humanitarian health computing using artificial intelligence and social media: A narrative literature review. International Journal of Medical Informatics 114. 136–142. https://doi.org/10.1016/j.ijmedinf.2018.01.015.Search in Google Scholar

Flanagin, Andrew J. & Miriam J. Metzger. 2000. Perceptions of Internet information credibility. Journalism & Mass Communication Quarterly 77(3). 515–540. https://doi.org/10.1177/107769900007700304.Search in Google Scholar

Fogg, Brian J. & Hsiang Tseng. 1999. The elements of computer credibility. Proceedings of the SIGCHI conference on human factors in computing systems.10.1145/302979.303001Search in Google Scholar

Food and Drug Administration. 2017. FDA warns companies marketing unproven products, derived from marijuana, that claim to treat or cure cancer [FDA News Release]. Available at: https://www.fda.gov/news-events/press-announcements/fda-warns-companies-marketing-unproven-products-derived-marijuana-claim-treat-or-cure-cancer.Search in Google Scholar

Freeman, Krisandra S. & Jan H. Spyridakis. 2004. An examination of factors that affect the credibility of online health information. Technical Communication 51(2). 239–263.Search in Google Scholar

Gage-Bouchard, A. Elizabeth, Susan LaValley, Molli Warunek, Lynda Kwon Beaupin & Michelle Mollica. 2018. Is cancer information exchanged on social media scientifically accurate? Journal of Cancer Education 33(6). 1328–1332. https://doi.org/10.1007/s13187-017-1254-z.Search in Google Scholar

Garrett, R. Kelly, Erik C. Nisbet & Emily K. Lynch. 2013. Undermining the corrective effects of media-based political fact checking? The role of contextual cues and naïve theory. Journal of Communication 63. 617–637. https://doi.org/10.1111/jcom.12038.Search in Google Scholar

Gesser-Edelsburg, Alon Diamant, Hijazi Rana & Gustavo S. Mesch. 2018. Correcting misinformation by health organizations during measles outbreaks: A controlled experiment. PLoS One 13(12). e0209505. https://doi.org/10.1371/journal.pone.0209505.Search in Google Scholar

Guess, M. Andrew, Michael Lerner, Benjamin Lyons, Jacob M. Montgomery, Brendan Nyhan, Jason Reifler & Neelanjan Sircar. 2020. A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences 117(27). 15536–15545. https://doi.org/10.1073/pnas.1920498117.Search in Google Scholar

Hameleers, Michael. & Toni G. L. A. Van der Meer. 2019. Misinformation and polarization in a high-cchoice media environment: How effective are political fact-checkers? Communication Research 47(2). 227–250. https://doi.org/10.1177/0093650218819671.Search in Google Scholar

Hayes, Andrew F. 2013. Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York, NY: The Guilford Press.

Hernandez, Daniela & Robert McMillan. 2019. Facebook, YouTube overrun with bogus cancer-treatment claims. The Wall Street Journal. https://www.wsj.com/articles/facebook-youtube-overrun-with-bogus-cancer-treatment-claims-11562072401 (accessed 02 July 2019).

Hilligoss, Brian & Soo Young Rieh. 2008. Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing & Management 44(4). 1467–1484. https://doi.org/10.1016/j.ipm.2007.10.001.

Hovland, Carl Iver, Irving Lester Janis & Harold H. Kelley. 1953. Communication and persuasion. New Haven, Connecticut: Yale University Press.

Huang, Yan & Weirui Wang. 2020. When a story contradicts: Correcting health misinformation on social media through different message formats and mechanisms. Information, Communication & Society 25(8). 1–18. https://doi.org/10.1080/1369118x.2020.1851390.

Johnson, Skyler, Matthew Parsons, Tanya Dorff, Meena S. Moran, John H. Ward, Stacey A. Cohen, Wallace Akerley, Jessica Bauman, Joleen Hubbard, Daniel E. Spratt, Carma L. Bylund, Briony Swire-Thompson, Tracy Onega, Laura D. Scherer, Jonathan Tward & Angela Fagerlin. 2022. Cancer misinformation and harmful information on Facebook and other social media: A brief report. JNCI: Journal of the National Cancer Institute 114(7). 1036–1039. https://doi.org/10.1093/jnci/djab141.

Jones-Jang, S. Mo, Tara Mortensen & Jingjing Liu. 2019. Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. American Behavioral Scientist 65(2). 1–18. https://doi.org/10.1177/0002764219869406.

Kahne, Joseph & Benjamin Bowyer. 2019. Can media literacy education increase digital engagement in politics? Learning, Media and Technology 44(2). 211–224. https://doi.org/10.1080/17439884.2019.1601108.

Kasprak, Alex. 2018. Did the National Cancer Institute ‘finally admit’ that marijuana kills cancer? Snopes. https://www.snopes.com/fact-check/did-nci-admit-marijuana-kills-cancer/ (accessed 04 June 2018).

Kata, Anna. 2012. Anti-vaccine activists, Web 2.0, and the postmodern paradigm–An overview of tactics and tropes used online by the anti-vaccination movement. Vaccine 30(25). 3778–3789. https://doi.org/10.1016/j.vaccine.2011.11.112.

Kim, Ji Won & Gina Masullo Chen. 2020. Exploring the influence of comment tone and content in response to misinformation in social media news. Journalism Practice 15(4). 456–470. https://doi.org/10.1080/17512786.2020.1739550.

Kim, Sojung Claire, Emily K. Vraga & John Cook. 2020. An eye tracking approach to understanding misinformation and correction strategies on social media: The mediating role of attention and credibility to reduce HPV vaccine misperceptions. Health Communication 36(13). 1687–1696. https://doi.org/10.1080/10410236.2020.1787933.

Koc, Mustafa & Esra Barut. 2016. Development and validation of new media literacy scale (NMLS) for university students. Computers in Human Behavior 63. 834–843. https://doi.org/10.1016/j.chb.2016.06.035.

Larson, Heidi J. 2018. The biggest pandemic risk? Viral misinformation. Nature 562. 309–310. https://doi.org/10.1038/d41586-018-07034-4.

Lee, Chei Sian & Long Ma. 2012. News sharing in social media: The effect of gratifications and prior experience. Computers in Human Behavior 28(2). 331–339. https://doi.org/10.1016/j.chb.2011.10.002.

Lewandowsky, Stephan, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz & John Cook. 2012. Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 13. 106–131. https://doi.org/10.1177/1529100612451018.

Lin, Tzu-Bin, Jen-Yi Li, Feng Deng & Ling Lee. 2013. Understanding new media literacy: An explorative theoretical framework. Journal of Educational Technology and Society 16(4). 160–170.

Majchrzak, Ann, Samer Faraj, Gerald C. Kane & Bijan Azad. 2013. The contradictory influence of social media affordances on online communal knowledge sharing. Journal of Computer-Mediated Communication 19(1). 38–55. https://doi.org/10.1111/jcc4.12030.

Margolin, Drew B., Aniko Hannak & Ingmar Weber. 2018. Political fact-checking on Twitter: When do corrections have an effect? Political Communication 35. 196–219. https://doi.org/10.1080/10584609.2017.1334018.

Marwick, Alice E. & Danah Boyd. 2011. I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society 13. 114–133. https://doi.org/10.1177/1461444810365313.

Massey, Philip M., Matthew D. Kearney, Michael K. Hauer, Preethi Selvan, Emmanuel Koku & Amy E. Leader. 2020. Dimensions of misinformation about the HPV vaccine on Instagram: Content and network analysis of social media characteristics. Journal of Medical Internet Research 22(12). e21451. https://doi.org/10.2196/21451.

Melchior, Cristiane & Mirian Oliveira. 2021. Health-related fake news on social media platforms: A systematic literature review. New Media & Society 24(6). 1–23. https://doi.org/10.1177/14614448211038762.

Mena, Paul, Danielle Barbe & Sylvia Chan-Olmsted. 2020. Misinformation on Instagram: The impact of trusted endorsements on message credibility. Social Media + Society 6(2). 2056305120935102. https://doi.org/10.1177/2056305120935102.

Metzger, Miriam J. & Andrew J. Flanagin. 2011. Online health information credibility. In Encyclopedia of health communication, 976–978. Thousand Oaks, California: Sage.

Metzger, Miriam J., Andrew J. Flanagin & Ryan B. Medders. 2010. Social and heuristic approaches to credibility evaluation online. Journal of Communication 60(3). 413–439. https://doi.org/10.1111/j.1460-2466.2010.01488.x.

Metzger, Miriam J., Andrew J. Flanagin, Keren Eyal, Daisy R. Lemus & Robert M. McCann. 2003. Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. Communication Yearbook 27(1). 293–335. https://doi.org/10.1207/s15567419cy2701_10.

Naab, Teresa K., Dominique Heinbach, Marc Ziegele & Marie-Theres Grasberger. 2020. Comments and credibility: How critical user comments decrease perceived news article credibility. Journalism Studies 21(6). 783–801. https://doi.org/10.1080/1461670x.2020.1724181.

National Cancer Institute. 2017. Cannabis and Cannabinoids (PDQ®)–patient version. Available at: https://www.cancer.gov/about-cancer/treatment/cam/patient/cannabis-pdq#link/_15.

Nyhan, Brendan. 2021. Why the backfire effect does not explain the durability of political misperceptions. Proceedings of the National Academy of Sciences 118(15). e1912440117. https://doi.org/10.1073/pnas.1912440117.

Nyhan, Brendan & Jason Reifler. 2010. When corrections fail: The persistence of political misperceptions. Political Behavior 32. 303–330. https://doi.org/10.1007/s11109-010-9112-2.

Oh, Hyun Jung & Hyegyu Lee. 2019. When do people verify and share health rumors on social media? The effects of message importance, health anxiety, and health literacy. Journal of Health Communication 24(11). 837–847. https://doi.org/10.1080/10810730.2019.1677824.

O’Keefe, Daniel J. 1990. Persuasion: Theory and research. Newbury Park, California: Sage.

Pennycook, Gordon, Jonathon McPhetres, Yunhao Zhang, Jackson G. Lu & David G. Rand. 2020. Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science 31(7). 770–780. https://doi.org/10.1177/0956797620939054.

Petty, Richard E. & Pablo Briñol. 2008. Persuasion: From single to multiple to metacognitive processes. Perspectives on Psychological Science 3. 137–147. https://doi.org/10.1111/j.1745-6916.2008.00071.x.

Pornpitakpan, Chanthika. 2004. The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social Psychology 34(2). 243–281. https://doi.org/10.1111/j.1559-1816.2004.tb02547.x.

Pulido, Cristina M., Laura Ruiz-Eugenio, Gisela Redondo-Sama & Beatriz Villarejo-Carballido. 2020. A new application of social impact in social media for overcoming fake news in health. International Journal of Environmental Research and Public Health 17(7). 2430. https://doi.org/10.3390/ijerph17072430.

Rodgers, Kimberly & Nandi Massac. 2020. Misinformation: A threat to the public’s health and the public health system. Journal of Public Health Management and Practice 26(3). 294–296. https://doi.org/10.1097/phh.0000000000001163.

Sharma, Megha, Kapil Yadav, Nitika Yadav & Keith C. Ferdinand. 2017. Zika virus pandemic-analysis of Facebook as a social media health information platform. American Journal of Infection Control 45(3). 301–302. https://doi.org/10.1016/j.ajic.2016.08.022.

Shi, Siyu, Arthur R. Brant, Aaron Sabolch & Erqi Pollom. 2019. False news of a cannabis cancer cure. Cureus 11(1). e3918. https://doi.org/10.7759/cureus.3918.

Slater, Michael D. & Donna Rouner. 1996. How message evaluation and source attributes may influence credibility assessment and belief change. Journalism & Mass Communication Quarterly 73. 974–991. https://doi.org/10.1177/107769909607300415.

Southwell, Brian G., Emily A. Thorson & Laura Sheble. 2018. Introduction: Misinformation among mass audiences as a focus for inquiry. In Misinformation and mass audiences, 1–11. Austin, Texas: University of Texas Press. https://doi.org/10.7560/314555-002.

Stamm, Keith & Ric Dube. 1994. The relationship of attitudinal components to trust in media. Communication Research 21. 105–123. https://doi.org/10.1177/009365094021001006.

Su, Yan, Danielle Ka Lai Lee & Xizhu Xiao. 2022. “I enjoy thinking critically, and I’m in control”: Examining the influences of media literacy factors on misperceptions amidst the COVID-19 infodemic. Computers in Human Behavior 128. 107111. https://doi.org/10.1016/j.chb.2021.107111.

Sülflow, Michael, Svenja Schäfer & Stephan Winter. 2019. Selective attention in the news feed: An eye-tracking study on the perception and selection of political news posts on Facebook. New Media & Society 21(1). 168–190. https://doi.org/10.1177/1461444818791520.

Swire-Thompson, Briony & David Lazer. 2020. Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health 41. 433–451. https://doi.org/10.1146/annurev-publhealth-040119-094127.

Thorson, Emily. 2016. Belief echoes: The persistent effects of corrected misinformation. Political Communication 33. 460–480. https://doi.org/10.1080/10584609.2015.1102187.

Trembath, David, Jessica Paynter, Deb Keen & Ullrich K. H. Ecker. 2016. “Attention: Myth Follows!” Facilitated Communication, parent and professional attitudes towards evidence-based practice, and the power of misinformation. Evidence-Based Communication Assessment and Intervention 9(3). 113–126. https://doi.org/10.1080/17489539.2015.1103433.

U.S. Department of Health and Human Services. 2014. The health consequences of smoking: 50 Years of progress. A Report of the Surgeon General. Available at: https://www.hhs.gov/surgeongeneral/reports-and-publications/tobacco/index.html.

Van der Meer, Toni G. L. A. & Yan Jin. 2020. Seeking formula for misinformation treatment in public health crises: The effects of corrective information type and source. Health Communication 35(5). 560–575. https://doi.org/10.1080/10410236.2019.1573295.

Viviani, Marco & Gabriella Pasi. 2017. Credibility in social media: Opinions, news, and health information—a survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 7(5). e1209. https://doi.org/10.1002/widm.1209.

Vraga, Emily K. & Leticia Bode. 2017a. I do not believe you: How providing a source corrects health misperceptions across social media platforms. Information, Communication & Society 21(10). 1337–1353. https://doi.org/10.1080/1369118x.2017.1313883.

Vraga, Emily K. & Leticia Bode. 2017b. Using expert sources to correct health misinformation in social media. Science Communication 39(5). 621–645. https://doi.org/10.1177/1075547017731776.

Vraga, Emily K. & Leticia Bode. 2020. Correction as a solution for health misinformation on social media. American Journal of Public Health 110(S3). 278–280. https://doi.org/10.2105/ajph.2020.305916.

Vraga, Emily K., Leticia Bode & Melissa Tully. 2020a. Creating news literacy messages to enhance expert corrections of misinformation on Twitter. Communication Research 49(2). 1–23. https://doi.org/10.1177/0093650219898094.

Vraga, Emily K., Sojung Claire Kim & John Cook. 2019. Testing logic-based and humor-based corrections for science, health, and political misinformation on social media. Journal of Broadcasting & Electronic Media 63(3). 393–414. https://doi.org/10.1080/08838151.2019.1653102.

Vraga, Emily K., Sojung Claire Kim, John Cook & Leticia Bode. 2020b. Testing the effectiveness of correction placement and type on Instagram. The International Journal of Press/Politics 25(4). 632–652. https://doi.org/10.1177/1940161220919082.

Vraga, Emily K. & Melissa Tully. 2021. News literacy, social media behaviors, and skepticism toward information on social media. Information, Communication & Society 24(2). 150–166. https://doi.org/10.1080/1369118x.2019.1637445.

Walter, Nathan, John J. Brooks, Camille J. Saucier & Sapna Suresh. 2020. Evaluating the impact of attempts to correct health misinformation on social media: A meta-analysis. Health Communication. 1–9. https://doi.org/10.1080/10410236.2020.1794553.

Walter, Nathan, Jonathan Cohen, R. Lance Holbert & Yasmin Morag. 2019. Fact-checking: A meta-analysis of what works and for whom. Political Communication 37(3). 350–375. https://doi.org/10.1080/10584609.2019.1668894.

Walter, Nathan & Sheila T. Murphy. 2018. How to unring the bell: A meta-analytic approach to correction of misinformation. Communication Monographs 85(3). 423–441. https://doi.org/10.1080/03637751.2018.1467564.

Wang, Yuxi, Martin McKee, Aleksandra Torbica & David Stuckler. 2019. Systematic literature review on the spread of health-related misinformation on social media. Social Science & Medicine 240. 112552. https://doi.org/10.1016/j.socscimed.2019.112552.

Waszak, Przemyslaw M., Wioleta Kasprzycka-Waszak & Alicja Kubanek. 2018. The spread of medical fake news in social media–the pilot quantitative study. Health Policy and Technology 7(2). 115–118. https://doi.org/10.1016/j.hlpt.2018.03.002.

Weeks, Brian E. & R. Lance Holbert. 2013. Predicting dissemination of news content in social media: A focus on reception, friending, and partisanship. Journalism & Mass Communication Quarterly 90(2). 212–232. https://doi.org/10.1177/1077699013482906.

Xiao, Xizhu, Porismita Borah & Yan Su. 2021. The dangers of blind trust: Examining the interplay among social media news use, misinformation identification, and news trust on conspiracy beliefs. Public Understanding of Science 30(8). 977–992. https://doi.org/10.1177/0963662521998025.

Received: 2022-04-29
Accepted: 2022-08-29
Published Online: 2022-09-23

© 2022 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
