Distortions

Tracking Viral Misinformation

Times reporters will chronicle and debunk false and misleading information that is going viral online.

Tiffany Hsu
Feb. 24, 2022, 12:01 a.m. ET

41 million Americans are QAnon believers, survey finds.

A hat with a QAnon symbol at a 2020 Trump rally in Wisconsin. Credit: Al Drago for The New York Times

More than a year after Donald J. Trump left office, the QAnon conspiracy theory that thrived during his administration continues to attract more Americans, including many Republicans and far-right news consumers, according to results from a survey released on Thursday from the Public Religion Research Institute.

The nonprofit and nonpartisan group found that 16 percent of Americans, or roughly 41 million people, believed last year in the three key tenets of the conspiracy theory. Those are that Satanist pedophiles who run a global child sex-trafficking operation control the government and other major institutions, that a coming storm will sweep elites from power and that violence might be necessary to save the country.

In October 2021, 17 percent of Americans believed in the conspiracy theory, up from 14 percent in March, the survey said. At the same time, the percentage of people who rejected QAnon falsehoods shrank to 34 percent in October from 40 percent in March. The survey covered more than 19,000 respondents and was conducted across the country throughout 2021.

The QAnon movement, which the F.B.I. considers to be a potential terrorist threat, centers on an anonymous author whose online messages, signed Q, fueled the spread of the reality-warping ideology. Mr. Trump also figured in the conspiracy theory as someone who was recruited by top military officials to use his presidency against the shadowy liberal cabal. The conspiracy theory was amplified and spread on social media.

After Mr. Trump lost the 2020 presidential election, QAnon was expected to be hobbled without him. But it has persisted despite that and despite efforts by tech platforms to stanch its spread. Forensic linguists have also tried to unmask and defang the anonymous author who signed online messages as Q.

QAnon adherents, including one known as the QAnon Shaman, have been connected to violent crimes and to last year’s Capitol riot. The convoluted mythology could influence the midterm elections this year, with dozens of candidates for national office expressing at least some support of QAnon.

Robert P. Jones, the founder and chief executive of the research group and a social science researcher with decades of experience, said he never expected to be dealing with serious survey questions about whether powerful American institutions were controlled by devil-worshiping, sex-trafficking pedophiles. To have so many Americans agree with such a question, he said, was “stunning.”

“There’s a real temptation to dismiss this as farcical and kind of outlandish, but we were convinced pretty early on that this was actually a serious movement that’s making inroads into not only mainstream political parties but also mainstream religious groups and putting down roots in more mainstream institutions,” Mr. Jones said. “We saw this move from the message boards on Reddit to the U.S. Capitol on Jan. 6 — this is a reality that we really have to contend with.”

Believers are “racially, religiously and politically diverse,” Natalie Jackson, the institute’s director of research, said in a statement. But some demographics are more likely to fit the bill.

Among Republicans, 25 percent were QAnon believers, compared with 14 percent of independents and 9 percent of Democrats. Media preferences were a major predictor of QAnon susceptibility, with people who trust far-right news sources such as One America News Network and Newsmax nearly five times as likely to be believers as those who trust mainstream news. Fox News viewers were twice as likely to back QAnon ideas, the survey found.

Most QAnon believers associated Christianity with being American and said that the United States risked losing its culture and identity and must be protected from foreign influence. Nearly seven in 10 believers agreed with the lie that the 2020 election was stolen from Mr. Trump.

More than half of QAnon supporters were white, while 20 percent were Hispanic and 13 percent were Black. They were most likely to have household incomes of less than $50,000 a year, hold at most a high school diploma, hail from the South and reside in a suburb.

The results are from 19,399 respondents to four surveys conducted by the institute throughout 2021. The margin of error is plus or minus 0.9 percentage points, the institute said.
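
As a rough illustration of how a margin of error relates to a sample of that size (this is not the institute's own methodology, which also involves survey weighting), the textbook formula for a simple random sample of 19,399 respondents gives roughly plus or minus 0.7 percentage points at 95 percent confidence; published margins such as the institute's plus or minus 0.9 points are typically somewhat larger because they account for that weighting.

    # Illustrative only: a textbook margin-of-error calculation that assumes
    # simple random sampling. The institute's published figure (plus or minus
    # 0.9 points) also reflects survey weighting, which this formula ignores.
    from math import sqrt

    n = 19_399  # respondents across the four 2021 surveys
    z = 1.96    # z-score for a 95 percent confidence level
    p = 0.5     # worst-case proportion, which maximizes the margin

    margin = z * sqrt(p * (1 - p) / n)
    print(f"Margin of error under simple random sampling: +/-{margin * 100:.1f} points")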

Stuart A. Thompson
Jan. 28, 2022, 5:22 p.m. ET

No, athletes are not dying from Covid-19 vaccines.

Senator Ron Johnson, center, at a panel discussion Monday on Covid-19. He has been promoting a conspiracy theory about deaths supposedly tied to Covid vaccines. Credit: Drew Angerer/Getty Images

The conspiracy theory that athletes are collapsing or dying after receiving a Covid-19 vaccine resurfaced this week after two prominent voices advanced the idea.

Senator Ron Johnson, a Republican from Wisconsin, spread the falsehood in an appearance on the conservative podcast “The Charlie Kirk Show.”

“We’ve heard story after story. I mean, all these athletes dropping dead on the field, but we’re supposed to ignore that,” Mr. Johnson said.

A similar claim was also made by John Stockton, the Hall of Fame basketball player, who said on Sunday that “over 100 professional athletes” had dropped dead after receiving the vaccine. He provided no evidence for the claim.

Health officials say that the links between vaccines and athlete deaths are baseless and that there is no evidence to suggest the vaccine is causing more deaths or injuries among athletes. Professional leagues have not reported any rise in such cases. A representative from the National Football League said there were no vaccine-related deaths or hospitalizations among roughly 3,000 players in the N.F.L. Ninety-five percent of the league’s players have been vaccinated.

A spokeswoman for Mr. Johnson said the senator was referring to deaths worldwide and had talked about them because he believed federal health agencies should investigate them. Mr. Stockton could not be reached for comment.

Stories about professional athletes dying during soccer matches and basketball games after being vaccinated have been a recurring theme of the conspiracy theory since Covid-19 vaccines were introduced.

On social media, users share links to local news reports about amateur athletes who died during games or while jogging. The articles rarely state whether the person was vaccinated and are usually published before the cause of death is determined. But these deaths, among otherwise healthy people, have gripped anti-vaccine communities and raised concerns about vaccine risks.

It’s rare for athletes to suffer cardiac arrest during games, but it does happen. While athletes tend to be healthier than the general population, people with underlying heart conditions are more likely to experience complications when exercising.

In a 2015 study of players in the National Collegiate Athletic Association, researchers showed that risks varied by sport and gender. Male Division I basketball players faced up to 10 times the risk of sudden cardiac death compared with all N.C.A.A. athletes. Male athletes faced a higher risk than women, and Black men faced a higher risk than men overall, the study found.

“Folks who maintain good amounts of exercise throughout their life span, they end up at lower risk of having these sudden events,” said Dr. Meagan Wasfy, a sports cardiologist at Massachusetts General Hospital, who published a review of the study. “But for that small period of time where you’re exercising, that risk goes up.”

One list circulating in support of the claim included 543 unconfirmed reports of athletes around the world who had died or faced “serious issues” since 2021. It was published by the anti-vaccine website Good Sciencing.

The list was based on a mix of news reports and entries on the Vaccine Adverse Event Reporting System, or VAERS, which relies on self-reported cases from patients and doctors. Most news reports did not mention whether the deceased had been vaccinated. Health officials warn against using VAERS to make determinations about vaccine risks.

There is a known but uncommon vaccine side effect, called myocarditis, that involves an inflammation of the heart muscle. Men and boys who receive the Covid-19 vaccine are at higher risk of developing the condition, which can lead to chest pain and shortness of breath. In very rare cases, it can lead to more severe complications, including death.

Doctors say that the risk of developing myocarditis after getting vaccinated appears low and that most people afflicted with the condition quickly recover. One study found that boys and young men infected with Covid-19 are up to six times more likely to develop myocarditis than people who received the vaccine.

As of Jan. 20, VAERS had received 2,132 preliminary reports of myocarditis or pericarditis (an inflammation of the outer lining of the heart) among vaccinated people 30 or younger, according to the Centers for Disease Control and Prevention. More than 48 million people ages 5 to 24 have received at least one dose of the vaccine.

Health care providers are required to report any death after vaccination, even if there is no sign that the vaccine caused it. VAERS has received 11,657 reports of someone dying at some point after receiving the vaccine, representing 0.002 percent of all vaccinated people.

Davey Alba
Dec. 22, 2021, 10:00 a.m. ET

Pro-China misinformation group continues spreading messages, researchers say.

Two years ago, researchers uncovered details about a disinformation network that made a coordinated effort to push Chinese government messaging outside the country. Now, a separate research group says the network is still at it, despite efforts by social media companies to stop it.

More than 2,000 accounts continued to spread Chinese propaganda in the last year, according to a new report from the disinformation research group Miburo. They have promoted falsehoods such as denials of human rights abuses in China’s Xinjiang region, where the Communist Party has carried out repressive policies against the Uyghurs, a Muslim ethnic minority, as well as Covid-19 misinformation, like the conspiracy theory that the U.S. military developed the coronavirus as a bioweapon.

The accounts point to a “well-resourced, high-skill actor that keeps reappearing,” said Nick Monaco, the director of China research at Miburo. He added that the timing and messaging of the posts in the network aligned perfectly with public messaging put out by the Chinese government in the last year.

Miburo said it was difficult to determine whether the influence campaign was organized by the ruling Communist Party or whether some accounts were run by nationalist citizens. But “knowing who pressed the enter key is less important” than the implication of a well-known actor spreading Chinese propaganda “at a high volume on international social media networks,” Mr. Monaco said in a blog post about the campaign.

China is known to use social media to broadcast its political messages with the aim of shaping global opinion. In June, The New York Times and ProPublica revealed the existence of thousands of videos orchestrated by the Chinese government in which citizens denied abuses in Xinjiang. This week, The Times reported on a set of documents that showed how Chinese officials tap private businesses to generate propaganda on demand.

Miburo said the network, nicknamed “Spamouflage” by researchers, was first identified by the research group Graphika in a 2019 report. Though some posts have since been removed, Miburo tracked about 2,000 additional accounts, active from January 2021 to this month, that Facebook, YouTube and Twitter failed to remove.

Miburo found nearly 8,000 YouTube videos in the network in the past year that collected over 3.6 million views, and links to the videos were posted on both Facebook and Twitter. The researchers also found 1,632 accounts in the network on Facebook, including some that used fake profile photos generated with artificial intelligence and Bangladeshi Facebook pages that later changed their names and began posting about China.

In early December, 287 YouTube channels spreading the Chinese propaganda were still up, Mr. Monaco said. All were removed after the researchers sent their data set to YouTube.

Farshad Shadloo, a YouTube spokesman, said the channels were terminated in the last month as part of YouTube’s continuing investigation into coordinated influence operations linked to China. He said that most of the channels had uploaded “spammy content” that had generated most of the views and that “a very small subset uploaded content in Chinese and English about China’s Covid-19 vaccine efforts and social issues in the U.S.”

Twitter said it had permanently suspended a number of accounts based on Miburo’s report under its platform manipulation and spam policy. Margarita Franklin, a Facebook spokeswoman, said the company would continue to work with researchers to detect and block the attempts of networks “to come back, like some of the accounts mentioned in this report.”

Facebook said that while some of the accounts flagged by Miburo resembled the behavior of Spamouflage, it could not yet confirm their connection to the network without more research. The network got little engagement on the platform, and a handful of accounts spotted by Miburo were false positives, the company said.

In January, according to Miburo’s report, a Facebook user linked to a YouTube video that spread propaganda about coronavirus vaccines. “Many countries [prefer to] buy Chinese vaccines first, U.S. vaccines have side effects,” the post said.

A screenshot of an example that researchers found on YouTube. Credit: Miburo

By August and September, several Facebook accounts began pushing the false conspiracy theory that Covid-19 was developed at Fort Detrick, an American military base in Maryland, and alleged that the U.S. military was behind the coronavirus.

A screenshot of an example that researchers found on a Facebook account. Identifying information has been redacted. Credit: Miburo

But Mr. Monaco argued that the most troubling new aspect of this version of the Spamouflage campaign was “the malice of spreading propaganda that denies human rights atrocities on a mass scale” by posting about Xinjiang.

On June 27, two different Facebook pages in the network posted identical messages within 10 minutes of each other, falsely denying forced labor and genocide in Xinjiang and characterizing the accusations as “the lie of the century,” an unattributed quotation from Zhao Lijian, a Chinese Ministry of Foreign Affairs spokesman.

A screenshot of an example that researchers found showing identical Facebook messages. Identifying information has been redacted. Credit: Miburo

Davey Alba
Nov. 3, 2021, 4:00 a.m. ET

Researchers say a coordinated misinformation campaign on Twitter backed Kenya’s president.

Uhuru Kenyatta, Kenya’s president, in Glasgow on Monday. Credit: Pool photo by Yves Herman

Last month, reporting on newly disclosed financial documents showed that Kenya’s president, Uhuru Kenyatta, and members of his family were linked to 13 offshore companies with hidden assets of more than $30 million. The findings, part of the leaked documents known as the Pandora Papers, initially generated outrage online among Kenyans.

But within days, that sentiment was hijacked on Twitter by a coordinated misinformation campaign, according to a new report published by the nonprofit Mozilla Foundation. The effort generated thousands of messages supporting the president, whose term is ending, and criticizing the release of the documents.

“Like clockwork, an alternative sentiment quickly emerged, supporting the president and his offshore accounts,” said Odanga Madung, a fellow at Mozilla and an author of the report.

“Kenyan Twitter was awash in Pandora Paper astroturfing,” he said.

The research underscores how online platforms based in the United States still struggle to police inauthentic behavior abroad. Internal documents obtained by the former Facebook product manager turned whistle-blower, Frances Haugen, repeatedly showed how the social network failed to adequately police hate speech and misinformation in countries outside North America, where 90 percent of its users reside.

Ann-Marie Lowry, a Twitter spokeswoman, said in a statement that the company’s uniquely open nature empowered research such as Mozilla’s. “Our top priority is keeping people safe, and we remain vigilant about coordinated activity on our service,” Ms. Lowry said. “We are constantly improving Twitter’s auto-detection technology to catch accounts engaging in rule-violating behavior as soon as they pop up on the service.”

Mr. Madung and another researcher, Brian Obilo, examined more than 10,000 tweets mentioning Mr. Kenyatta and the Pandora Papers over a four-week period. They found a campaign of nearly 5,000 tweets, with thousands of likes and shares, that was “clearly inauthentic” and “coordinated to feign public support,” according to the research.

The 1,935 accounts that participated in the campaign tweeted for days only about the Pandora Papers and Mr. Kenyatta, and by posting hashtags like #phonyleaks and #offshoreaccountfacts repeatedly, they pushed those terms onto Twitter’s dedicated sidebar for trending topics. The researchers noted that many of the accounts had been part of a previous disinformation campaign in May that tweeted pro-government propaganda and that they had flagged to Twitter. The company took down some of the accounts but allowed others to remain up.

“Before chest thumping and yawning mercilessly that H.E had stolen your money, know first when the offshore accounts were acquired #OffshoreAccountFacts,” read one tweet posted as part of the campaign, using shorthand for “His Excellency” to refer to Mr. Kenyatta. The post collected 341 likes and shares on Twitter before the account was suspended.

A similar campaign was attempted on Facebook, but the researchers found only 12 posts there, with fewer than 100 interactions, Mr. Madung said.

After the researchers shared the report with Twitter’s policy team, the company suspended more than 230 accounts for violating its platform manipulation and spam policies. It added that it would continue to work with third-party organizations that helped to identify tweets or accounts that violated the social network’s policies.

Davey Alba
Oct. 14, 2021, 3:05 p.m. ET

YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.

Share of Election-Related Posts on Social Platforms Linking to Videos Making Claims of Fraud. Source: Center for Social Media and Politics at New York University. By The New York Times

YouTube’s stricter policies against election misinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released on Thursday, underscoring the video service’s power across social media.

Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.

But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election.

The proportion fell further after Jan. 7, when YouTube announced that any channels that violated its election misinformation policy would receive a “strike,” and that channels that received three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.

The trend was replicated on Facebook. A postelection surge in sharing videos containing fraud theories peaked at about 18 percent of all videos on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly before the Jan. 6 riot at the Capitol. The proportion dropped again, to 4 percent by Inauguration Day, after the new policies were put in place on Jan. 7.

To reach their findings, the researchers collected a random sample of 10 percent of all tweets each day. They then isolated tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using CrowdTangle, a Facebook-owned social media analytics tool.

From this large data set, the researchers filtered for YouTube videos about the election broadly, as well as about election fraud using a set of keywords like “Stop the Steal” and “Sharpiegate.” This allowed the researchers to get a sense of the volume of YouTube videos about election fraud over time, and how that volume shifted in late 2020 and early 2021.
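
To make that filtering step concrete, here is a minimal sketch of the kind of logic involved, for illustration only: it is not the researchers’ actual pipeline, the sample posts and helper functions are hypothetical, and only the phrases “Stop the Steal” and “Sharpiegate” come from the article.

    # Minimal, hypothetical sketch of the filtering described above: keep posts
    # that link to YouTube, then match their text against fraud-related keywords.
    from urllib.parse import urlparse

    FRAUD_KEYWORDS = {"stop the steal", "sharpiegate"}  # terms named in the article

    def links_to_youtube(url: str) -> bool:
        """Return True if a shared URL points to YouTube."""
        host = urlparse(url).netloc.lower()
        return host.endswith("youtube.com") or host.endswith("youtu.be")

    def mentions_election_fraud(text: str) -> bool:
        """Return True if the post text contains any fraud-related keyword."""
        lowered = text.lower()
        return any(keyword in lowered for keyword in FRAUD_KEYWORDS)

    # Hypothetical sample of collected posts: (post text, shared link).
    sample_posts = [
        ("Watch this: Stop the Steal evidence!", "https://www.youtube.com/watch?v=abc"),
        ("Election night recap", "https://youtu.be/xyz"),
        ("Sharpiegate explained", "https://example.com/article"),
    ]

    youtube_posts = [p for p in sample_posts if links_to_youtube(p[1])]
    fraud_posts = [p for p in youtube_posts if mentions_election_fraud(p[0])]

    # Share of YouTube-linking posts that match election-fraud keywords.
    print(f"{len(fraud_posts)} of {len(youtube_posts)} YouTube shares matched fraud keywords")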

Misinformation on major social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has toughened its policies, such as banning all antivaccine misinformation and suspending the accounts of prominent antivaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.

Ivy Choi, a YouTube spokeswoman, said that YouTube was the only major online platform with a presidential election integrity policy. “We also raised up authoritative content for election-related search queries and reduced the spread of harmful election-related misinformation,” she said.

Megan Brown, a research scientist at the N.Y.U. Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos that promoted election fraud. It is also possible that interest in the election fraud theories dropped considerably after states certified their election results.

But the bottom line, Ms. Brown said, is that “we know these platforms are deeply interconnected.” YouTube, she pointed out, has been identified as one of the most-shared domains across other platforms, including in both of Facebook’s recently released content reports and N.Y.U.’s own research.

“It’s a huge part of the information ecosystem,” Ms. Brown said, “so when YouTube’s platform becomes healthier, others do as well.”

Daisuke Wakabayashi and Tiffany Hsu
Oct. 7, 2021, 4:53 p.m. ET

Google bans ads on content, including YouTube videos, with false claims about climate change.

An aircraft drops fire retardant on a ridge during the Walbridge fire in Healdsburg, Calif., last year. Google said it will no longer allow ads that promote climate change denial. Credit: Josh Edelson/Agence France-Presse — Getty Images

Google said it will no longer display advertisements on YouTube videos and other content that promote inaccurate claims about climate change.

The decision, by the company’s ads team, means that it will no longer permit websites or YouTube creators to earn advertising money via Google for content that “contradicts well-established scientific consensus around the existence and causes of climate change.” Nor will it allow ads that promote such views.

“In recent years, we’ve heard directly from a growing number of our advertising and publisher partners who have expressed concerns about ads that run alongside or promote inaccurate claims about climate change,” the company said.

The policy applies to content that refers to climate change as a hoax or a scam, denies the long-term trend that the climate is warming, or denies that greenhouse gas emissions or human activity is contributing to climate change.

Google already limits or restricts advertising alongside certain sensitive topics or events, such as firearms-related videos or content about a tragic event. The new policy adds climate change denial to that list for the first time.

Facebook, Google’s main rival for digital advertising dollars, does not have an explicit policy outlawing advertisements denying climate change.

In addition to not wanting to be associated with climate change misinformation, ad agencies, in an echo of their shift away from the tobacco business decades earlier, have begun to re-evaluate their association with fossil-fuel clients. Agencies such as Forsman & Bodenfors have signed pledges to no longer work for oil and gas producers. Calls have increased to ban the industry from advertising on city streets and sponsoring sports teams.

Greenpeace USA and other environmental groups filed a complaint with the Federal Trade Commission earlier this year accusing Chevron of “consistently misrepresenting its image to appear climate-friendly and racial justice-oriented, while its business operations overwhelmingly rely on climate-polluting fossil fuels.” Exxon faces lawsuits from Democratic officials in several states accusing it of using ads, among other methods, to deceive consumers about climate change.

Publications such as the British Medical Journal and The Guardian, as well as the Swedish outlets Dagens Nyheter and Dagens ETC, have limited or stopped accepting fossil fuel ads. The New York Times prevents oil and gas companies from sponsoring its climate newsletter, its climate summit or its podcast “The Daily,” but it allows the industry to advertise elsewhere.

Davey Alba
Sept. 29, 2021, 10:00 a.m. ET

YouTube bans all anti-vaccine misinformation.

YouTube said it was banning the accounts of several prominent anti-vaccine activists from its platform, including Robert F. Kennedy Jr.’s. Credit: Clemens Bilan/EPA, via Shutterstock

YouTube said on Wednesday that it was banning the accounts of several prominent anti-vaccine activists from its platform, including those of Joseph Mercola and Robert F. Kennedy Jr., as part of an effort to remove all content that falsely claims that approved vaccines are dangerous.

In a blog post, YouTube said it would remove videos claiming that vaccines do not reduce rates of transmission or contraction of disease, and content that includes misinformation on the makeup of the vaccines. Claims that approved vaccines cause autism, cancer or infertility, or that the vaccines contain trackers, will also be removed.

The platform, which is owned by Google, has had a similar ban on misinformation about the Covid-19 vaccines. But the new policy expands the rules to misleading claims about long-approved vaccines, such as those against measles and hepatitis B, as well as to falsehoods about vaccines in general, YouTube said. Personal testimonies relating to vaccines, content about vaccine policies and new vaccine trials, and historical videos about vaccine successes or failures will be allowed to remain on the site.

“Today’s policy update is an important step to address vaccine and health misinformation on our platform, and we’ll continue to invest across the board” in policies that bring its users high-quality information, the company said in its announcement.

In addition to barring Dr. Mercola and Mr. Kennedy, YouTube removed the accounts of other prominent anti-vaccination activists such as Erin Elizabeth and Sherri Tenpenny, a company spokeswoman said.

The new policy puts YouTube more in line with Facebook and Twitter. In February, Facebook said it would remove posts with erroneous claims about vaccines, including assertions that vaccines cause autism or that it is safer for people to contract the coronavirus than to receive vaccinations against it. But the platform remains a popular destination for people discussing misinformation, such as the unfounded claim that the pharmaceutical drug ivermectin is an effective treatment for Covid-19.

In March, Twitter introduced its own policy explaining the penalties for sharing lies about the virus and vaccines. But the company allows five “strikes” before it permanently bars people for violating its coronavirus misinformation policy.

The accounts of high-profile anti-vaccination activists like Dr. Mercola and Mr. Kennedy remain active on Facebook and Twitter — although Instagram, which Facebook owns, has suspended Mr. Kennedy’s account.

YouTube started looking into broadening its policy on anti-vaccine content shortly after creating a set of rules around Covid-19 vaccine misinformation in October, according to a person close to the company’s policymaking process, who spoke on the condition of anonymity because he was not permitted to discuss the matter publicly. YouTube found that many videos about the coronavirus vaccine spilled over into general vaccine misinformation, making it difficult to tackle Covid-19 misinformation without addressing the broader issue.

But creating a new set of rules and enforcement policies took months, because it is difficult to rein in content across many languages and because of the complicated debate over where to draw the line on what users can post, the person said. For example, YouTube will not remove a video of a parent talking about a child’s negative reaction to a vaccine, but it will remove a channel dedicated to parents providing such testimonials.

Misinformation researchers have for years pointed to the proliferation of anti-vaccine content on social networks as a factor in vaccine hesitancy — including slowing rates of Covid-19 vaccine adoption in more conservative states. Reporting has shown that YouTube videos often act as the source of content that subsequently goes viral on platforms like Facebook and Twitter, sometimes racking up tens of millions of views.

“One platform’s policies affect enforcement across all the others because of the way networks work across services,” said Evelyn Douek, a lecturer at Harvard Law School who focuses on online speech and misinformation. “YouTube is one of the most highly linked domains on Facebook, for example.”

She added: “It’s not possible to think of these issues platform by platform. That’s not how anti-vaccination groups think of them. We have to think of the internet ecosystem as a whole.”

Prominent anti-vaccine activists have long been able to build huge audiences online, helped along by the algorithmic powers of social networks that prioritize videos and posts that are particularly successful at capturing people’s attention. A nonprofit, the Center for Countering Digital Hate, published research this year showing that a group of 12 people were responsible for sharing 65 percent of all anti-vaccine messaging on social media, calling the group the “Disinformation Dozen.” In July, the White House cited the research as it criticized tech companies for allowing misinformation about the coronavirus and vaccines to spread widely, sparking a tense back-and-forth between the administration and Facebook.

Several people listed in the Disinformation Dozen no longer have channels on YouTube, including Dr. Mercola, an osteopathic physician who took the top spot on the list. His following on Facebook and Instagram totals more than three million, while his YouTube account, before it was taken down, had nearly half a million followers. Dr. Mercola’s Twitter account, which is still live, has over 320,000 followers.

YouTube said that in the past year it had removed over 130,000 videos for violating its Covid-19 vaccine policies. But this did not include what the video platform called “borderline videos” that discussed vaccine skepticism on the site. In the past, the company simply removed such videos from search results and recommendations, while promoting videos from experts and public health institutions.

Daisuke Wakabayashi contributed reporting. Ben Decker contributed research.

Davey Alba
Sept. 28, 2021, 5:00 a.m. ET

Facebook groups promoting ivermectin as a Covid-19 treatment continue to flourish.

Credit: Houston Cofield for The New York Times

Facebook has become more aggressive at enforcing its coronavirus misinformation policies in the past year. But the platform remains a popular destination for people discussing how to acquire and use ivermectin, a drug typically used to treat parasitic worms, even though the Food and Drug Administration has warned people against taking it to treat Covid-19.

Facebook has taken down a handful of the groups dedicated to these discussions. But dozens more remain up, according to recent research. In some of those groups, members discuss strategies to evade the social network’s rules.

Media Matters for America, a liberal watchdog group, found 60 public and private Facebook groups dedicated to ivermectin discussion, with tens of thousands of members in total. After the organization flagged the groups to Facebook, 25 of them closed down. The remaining groups, which were reviewed by The New York Times, had nearly 70,000 members. Data from CrowdTangle, a Facebook-owned social network analytics tool, showed that the groups generate thousands of interactions daily.

Facebook said it prohibited the sale of prescription products, including drugs and pharmaceuticals, across its platforms, including in ads. “We remove content that attempts to buy, sell or donate for ivermectin,” Aaron Simpson, a Facebook spokesman, said in an emailed statement. “We also enforce against any account or group that violates our Covid-19 and vaccine policies, including claims that ivermectin is a guaranteed cure or guaranteed prevention, and we don’t allow ads promoting ivermectin as a treatment for Covid-19.”

In some of the ivermectin groups, the administrators — the people in charge of moderating posts and determining settings like whether the group is private or public — gave instructions on how to evade Facebook’s automated content moderation.

In a group called Healthcare Heroes for Personal Choice, an administrator instructed people to remove or misspell buzzwords and to avoid using the syringe emoji.

An administrator added, referring to video services like YouTube, BitChute and Rumble: “If you want to post a video from you boob or bit ch ut e or ru m b l e, hide it in the comments.” Facebook rarely polices the comments section of posts for misinformation.

Identifying information has been redacted.

Facebook said that it broadly looks at the actions of administrators when determining whether a group breaks the platform’s rules, and that when administrators do break the rules, it counts as a strike against the overall group.

The groups also funnel members into alternative platforms where content moderation policies are more lax. In a Facebook group with more than 5,000 members called Ivermectin vs. Covid, a member shared a link to join a channel on Telegram, a messaging service, for further discussion of “the latest good news surrounding this miraculous pill.”

“Ivermectin is clearly the answer to solve covid and the world is waking up to this truth,” the user posted.

After The Times contacted Facebook about the Ivermectin vs. Covid group, the social network removed it from the platform.

Identifying information has been redacted.

Davey Alba
Sept. 23, 2021, 5:00 a.m. ET

Wikipedia’s next leader on preventing misinformation: ‘Neutrality requires understanding.’

Maryana Iskander, a social entrepreneur in South Africa, will become the chief executive of the foundation that oversees Wikipedia in January. Credit: Gabriel Diamond

Two decades ago, Wikipedia arrived on the scene as a quirky online project that aimed to crowdsource and document all of human knowledge and history in real time. Skeptics worried that much of the site would include unreliable information, and frequently pointed out mistakes.

But now, the online encyclopedia is often cited as a place that, on balance, helps combat false and misleading information spreading elsewhere.

Last week, the Wikimedia Foundation, the group that oversees Wikipedia, announced that Maryana Iskander, a social entrepreneur in South Africa who has worked for years in nonprofits tackling youth unemployment and women’s rights, will become its chief executive in January.

We spoke with her about her vision for the group and how the organization works to prevent false and misleading information on its sites and around the web.

Give us a sense of your direction and vision for Wikimedia, especially in such a fraught information landscape and in this polarized world.

There are a few core principles of Wikimedia projects, including Wikipedia, that I think are important starting points. It’s an online encyclopedia. It’s not trying to be anything else. It’s certainly not trying to be a traditional social media platform in any way. It has a structure that is led by volunteer editors. And as you may know, the foundation has no editorial control. This is very much a user-led community, which we support and enable.

The lessons to learn from, not just with what we’re doing but how we continue to iterate and improve, start with this idea of radical transparency. Everything on Wikipedia is cited. It’s debated on our talk pages. So even when people may have different points of view, those debates are public and transparent, and in some cases really allow for the right kind of back and forth. I think that’s the need in such a polarized society — you have to make space for the back and forth. But how do you do that in a way that’s transparent and ultimately leads to a better product and better information?

And the last thing that I’ll say is, you know, this is a community of extremely humble and honest people. As we look to the future, how do we build on those attributes in terms of what this platform can continue to offer society and provide free access to knowledge? How do we make sure that we are reaching the full diversity of humanity in terms of who is invited to participate, who is written about? How are we really making sure that our collective efforts reflect more of the global south, reflect more women and reflect the diversity of human knowledge, to be more reflective of reality?

What is your take on how Wikipedia fits into the widespread problem of disinformation online?

Many of the core attributes of this platform are very different than some of the traditional social media platforms. If you take misinformation around Covid, the Wikimedia Foundation entered into a partnership with the World Health Organization. A group of volunteers came together around what was called WikiProject Medicine, which is focused on medical content and creating articles that then are very carefully monitored because these are the kinds of topics that you want to be mindful around misinformation.

Another example is that the foundation put together a task force ahead of the U.S. elections, again, trying to be very proactive. [The task force supported 56,000 volunteer editors watching and monitoring key election pages.] And the fact that there were only 33 reversions on the main U.S. election page was an example of how to be very focused on key topics where misinformation poses real risks.

Then another example that I just think is really cool is there’s a podcast called “The World According to Wikipedia.” And on one of the episodes, there’s a volunteer who is interviewed, and she really has made it her job to be one of the main watchers of the climate change pages.

We have tech that alerts these editors when changes are made to any of the pages so they can go see what the changes are. If there’s a risk that, actually, misinformation may be creeping in, there’s an opportunity to temporarily lock a page. Nobody wants to do that unless it’s absolutely necessary. The climate change example is useful because the talk pages behind that have massive debate. Our editor is saying: “Let’s have the debate. But this is a page I’m watching and monitoring carefully.”

One big debate that is currently happening on these social media platforms is this issue of the censorship of information. There are people who claim that biased views take precedence on these platforms and that more conservative views are taken down. As you think about how to handle these debates once you’re at the head of Wikipedia, how do you make judgment calls with this happening in the background?

For me, what’s been inspiring about this organization and these communities is that there are core pillars that were established on Day 1 in setting up Wikipedia. One of them is this idea of presenting information with a neutral point of view, and that neutrality requires understanding all sides and all perspectives.

It’s what I was saying earlier: Have the debates on talk pages on the side, but then come to an informed, documented, verifiable, citable kind of conclusion on the articles. I think this is a core principle that, again, could potentially offer something to others to learn from.

Having come from a progressive organization fighting for women’s rights, have you thought much about misinformers weaponizing your background to say it may influence the calls you make about what is allowed on Wikipedia?

I would say two things. I would say that the really relevant aspects of the work that I’ve done in the past is volunteer-led movements, which is probably a lot harder than others might think, and that I played a really operational role in understanding how to build systems, build culture and build processes that I think are going to be relevant for an organization and a set of communities that are trying to increase their scale and reach.

The second thing that I would say is, again, I’ve been on my own learning journey and invite you to be on a learning journey with me. How I choose to be in the world is that we interact with others with an assumption of good faith and that we engage in respectful and civilized ways. That doesn’t mean other people are going to do that. But I think that we have to hold on to that as an aspiration and as a way to, you know, be the change that we want to see in the world as well.

When I was in college, I would do a lot of my research on Wikipedia, and some of my professors would say, ‘You know, that’s not a legitimate source.’ But I still used it all the time. I wondered if you had any thoughts about that!

I think now most professors admit that they sneak onto Wikipedia as well to look for things!

You know, we’re celebrating the 20th year of Wikipedia this year. On the one hand, here was this thing that I think people mocked and said wouldn’t go anywhere. And it’s now become legitimately the most referenced source in all of human history. I can tell you just from my own conversations with academics that the narrative around the sources on Wikipedia and using Wikipedia has changed.

Davey Alba
Sept. 10, 2021, 3:58 p.m. ET

Facebook sent flawed data to misinformation researchers.

Mark Zuckerberg, chief executive of Facebook, testifying in Washington in 2018. Credit: Tom Brenner/The New York Times

More than three years ago, Mark Zuckerberg of Facebook trumpeted a plan to share data with researchers about how people interacted with posts and links on the social network, so that the academics could study misinformation on the site. Researchers have used the data for the past two years for numerous studies examining the spread of false and misleading information.

But the information shared by Facebook had a major flaw, according to internal emails and interviews with the researchers. The data included the interactions of only about half of Facebook’s U.S. users — the ones who engaged with political pages enough to make their political leanings clear — not all of them, as the company had said. Facebook told the researchers that data about users outside the United States, which had also been shared, did not appear to be inaccurate.

“This undermines trust researchers may have in Facebook,” said Cody Buntain, an assistant professor and social media researcher at the New Jersey Institute of Technology who was part of the group of researchers, known as Social Science One, who have been given the user activity information.

“A lot of concern was initially voiced about whether we should trust that Facebook was giving Social Science One researchers good data,” Mr. Buntain said. “Now we know that we shouldn’t have trusted Facebook so much and should have demanded more effort to show validity in the data.”

The company apologized to the researchers in an email this week. “We sincerely apologize for the inconvenience this may cause and would like to offer as much support as possible,” it wrote. Facebook added that it was updating the data set to fix the issue but that, given the large volume of data, the work would take weeks to complete.

Representatives of the company, including two members of Facebook’s Open Research and Transparency Team, held a call with researchers on Friday, apologizing for the mistake, according to two people who attended the meeting.

The Facebook representatives said only about 30 percent of research papers relied on U.S. data, said the people on the call, who agreed to speak only anonymously. But the representatives said that they still did not know whether other aspects of the data set were affected.

Several researchers on the call complained that they had lost months of work because of the error, the people on the call said. One researcher said doctoral degrees were at risk because of the mistake, while another expressed concern that Facebook was either negligent or, worse, actively undermining the research.

“From a human point of view, there were 47 people on that call today and every single one of those projects is at risk, and some are completely destroyed,” Megan Squire, one of the researchers, said in an interview after the call.

Mavis Jones, a Facebook spokeswoman, said the issue was caused by a technical error, “which we proactively told impacted partners about and are working swiftly to resolve.”

The error in the data set was first spotted by Fabio Giglietto, an associate professor and social media researcher from the University of Urbino, in Italy. Mr. Giglietto said he discovered the inaccuracy after he compared data that Facebook released publicly last month about top posts on the service with the data the company had provided exclusively to the researchers. He found that the results of the two were different.

“It’s a great demonstration that even a little transparency can provide amazing results,” Mr. Giglietto said of the chain of events leading to his discovery.

This is the second time in recent weeks that researchers and journalists have found discrepancies in the data sets Facebook has provided for more transparency on the platform. In late August, Politico reported that tens of thousands of Facebook posts from the days before and after the Jan. 6 riots on Capitol Hill had gone missing from CrowdTangle, an analytics tool owned by the social network that is used by journalists and researchers.

Ryan Mac contributed reporting.

Davey Alba
Sept. 3, 2021, 4:03 p.m. ET

These two rumors are going viral ahead of California’s recall election.

Gov. Gavin Newsom of California faces a recall election on Sept. 14. Credit: Jim Wilson/The New York Times

As California’s Sept. 14 election over whether to recall Gov. Gavin Newsom draws closer, unfounded rumors about the event are growing.

Here are two rumors that are circulating widely online, how they spread, and why state and local officials say they are wrong.

Rumor No. 1: Holes in the ballot envelopes were being used to screen out votes that said “yes” to the recall.

On Aug. 19, a woman posted a video on Instagram of herself placing her California special election ballot in an envelope.

“You have to pay attention to these two holes that are in front of the envelope,” she said, bringing the holes close to the camera so viewers could see them. “You can see if someone has voted ‘yes’ to recall Newsom. This is very sketchy and irresponsible in my opinion, but this is asking for fraud.”

The idea that the ballot envelope’s holes were being used to weed out the votes of those who wanted Gov. Newsom, a Democrat, to be recalled rapidly spread online, according to a review by The New York Times.

The Instagram video collected nearly half a million views. On the messaging app Telegram, posts that said California was rigging the special election amassed nearly 200,000 views. And an article about the ballot holes on the far-right site The Gateway Pundit reached up to 626,000 people on Facebook, according to data from CrowdTangle, a Facebook-owned social media analytics tool.

State and local officials said the ballot holes were not new and were not being used nefariously. The holes were placed in the envelope, on either end of a signature line, to help low-vision voters know where to sign it, said Jenna Dresner, a spokeswoman for the California Secretary of State’s Office of Election Cybersecurity.

The ballot envelope’s design has been used for several election cycles, and civic design consultants recommended the holes for accessibility, added Mike Sanchez, a spokesman for the Los Angeles County registrar. He said voters could choose to put the ballot in the envelope in such a way that didn’t reveal any ballot marking at all through a hole.

Instagram has since appended a fact-check label to the original video to note that it could mislead people. The fact check has reached up to 20,700 people, according to CrowdTangle data.

Rumor No. 2: A felon stole ballots to help Governor Newsom win the recall election.

On Aug. 17, the police in Torrance, Calif., published a post on Facebook that said officers had responded to a call about a man who was passed out in his car in a 7-Eleven parking lot. The man had items such as a loaded firearm, drugs and thousands of pieces of mail, including more than 300 unopened mail-in ballots for the special election, the police said.

Far-right sites such as Red Voice Media and Conservative Firing Line claimed the incident was an example of Democrats’ trying to steal an election through mail-in ballots. Their articles were then shared on Facebook, where they collectively reached up to 1.57 million people, according to CrowdTangle data.

Mark Ponegalek, a public information officer for the Torrance Police Department, said the investigation into the incident was continuing. The U.S. Postal Inspection Service was also involved, he said, and no conclusions had been reached.

As a result, he said, online articles and posts concluding that the man was attempting voter fraud were “baseless.”

“I have no indication to tell you one way or the other right now” whether the man intended to commit election fraud with the ballots he collected, Mr. Ponegalek said. He added that the man may have intended to commit identity fraud.

Davey Alba
Aug. 10, 2021, 3:22 p.m. ET

Facebook removes Russian-based network that spread vaccine misinformation.

A drive-in Covid-19 vaccination center in New Delhi. Accounts based in Russia spread vaccine misinformation in India, Latin America and the United States, Facebook said. Credit: Atul Loke for The New York Times

Facebook said on Tuesday that it had removed a network of accounts based in Russia that spread misinformation about coronavirus vaccines. The network targeted audiences in India, Latin America and the United States with posts falsely asserting that the AstraZeneca vaccine would turn people into chimpanzees and that the Pfizer vaccine had a much higher casualty rate than other vaccines, the company said.

The network violated Facebook’s foreign interference policies, the company said. It traced the posts to a marketing firm operating from Russia, Fazze, which is a subsidiary of AdNow, a company registered in Britain.

Facebook said it had taken down 65 Facebook accounts and 243 Instagram accounts associated with the firm and barred Fazze from its platform. The social network announced the takedown as part of its monthly report on influence campaigns run by people or groups that purposely misrepresent who is behind the posts.

“This campaign functioned as a disinformation laundromat,” said Ben Nimmo, who leads Facebook’s global threat intelligence team.

The influence campaign took place as regulators in the targeted countries were discussing emergency authorizations for vaccines, Facebook said. The company said it had notified people it believed had been contacted by the network and shared its findings with law enforcement and researchers.

Russia and China have promoted their own vaccines by distributing false and misleading messages about American and European vaccination programs, according to the State Department’s Global Engagement Center. Most recently, the disinformation research firm Graphika found numerous antivaccination cartoons that it traced back to people in Russia.

Security analysts and American officials say a “disinformation for hire” industry is growing quickly. Back-alley firms like Fazze spread falsehoods on social media and meddle in elections or other geopolitical events on behalf of clients who can claim deniability.

The Fazze campaign was carried out in two waves, Facebook said. In late 2020, Fazze created two batches of fake Facebook accounts that initially posted about Indian food or Hollywood actors. Then in November and December, as the Indian government was discussing emergency authorization for the AstraZeneca vaccine, the accounts started pushing the false claim that the vaccine was dangerous because it was derived from a chimpanzee adenovirus. The campaign extended to websites like Medium and Change.org, and memes about the vaccine’s turning its subjects into chimpanzees proliferated on Facebook.

The Fazze campaign went silent for a few months, then resumed in May when the inauthentic accounts falsely claimed that Pfizer’s vaccine had caused a much higher “casualty rate” than other vaccines. There were only a few dozen Facebook posts targeting the United States and one post by an influencer in Brazil, and there was almost no reaction to the posts, according to the company. Fazze also reached out to influencers in France and Germany, who ultimately exposed the disinformation campaign, Facebook said.

“Influence operations increasingly target authentic influential voices to carry their messages,” Facebook said in its report. “Through them, deceptive campaigns gain access to the influencer’s ready-made audience, but it comes with a significant risk of exposure.”

AdNow, the parent company of Fazze, did not immediately respond to a request for comment.

Facebook said it had also removed 79 Facebook accounts, 13 pages, eight groups and 19 accounts in Myanmar that targeted domestic citizens and were linked to the Myanmar military. In March, the company barred Myanmar’s military from its platforms, after a military coup overthrew the country’s fragile democratic government.

Davey Alba
Aug. 10, 2021, 10:04 a.m. ET

Twitter suspends Marjorie Taylor Greene for 7 days over vaccine misinformation.

Twitter said this was Representative Marjorie Taylor Greene’s fourth “strike.” Credit: Stefani Reynolds for The New York Times

Twitter on Tuesday suspended Representative Marjorie Taylor Greene, Republican of Georgia, from its service for seven days after she posted that the Food and Drug Administration should not give the coronavirus vaccines full approval and that the vaccines were “failing.”

The company said this was Ms. Greene’s fourth “strike,” which means that under its rules she can be permanently barred if she violates Twitter’s coronavirus misinformation policy again. The company issued her third strike less than a month ago.

On Monday evening, Ms. Greene said on Twitter, “The FDA should not approve the covid vaccines.” She said there were too many reports of infection and spread of the coronavirus among vaccinated people, and that the vaccines were “failing” and “do not reduce the spread of the virus & neither do masks.”

The Centers for Disease Control and Prevention’s current guidance states, “Covid-19 vaccines are effective at protecting you from getting sick.”

In late July, the agency also revised its indoor mask policy, advising that people wear a mask in public indoor spaces in parts of the country where the virus is surging to maximize protection from the Delta variant and prevent possibly spreading the coronavirus. A recent report by two Duke University researchers who reviewed data from March to June in 100 school districts and 14 charter schools in North Carolina concluded that wearing masks was an effective measure for preventing the transmission of the virus, even without six feet of physical distancing.

Ms. Greene’s tweet was “labeled in line with our Covid-19 misleading information policy,” Trenton Kennedy, a Twitter spokesman, said in an emailed statement. “The account will be in read-only mode for a week due to repeated violations of the Twitter Rules.”

In a statement circulated online, Ms. Greene said: “I have vaccinated family who are sick with Covid. Studies and news reports show vaccinated people are still getting Covid and spreading Covid.”

Data from the C.D.C. shows that of the so-called breakthrough infections among the fully vaccinated, serious cases are extremely rare. A New York Times analysis of data from 40 states and Washington, D.C., found that fully vaccinated people made up fewer than 5 percent of those hospitalized with the virus and fewer than 6 percent of those who had died.

Twitter has stepped up enforcement against accounts posting coronavirus misinformation as cases have risen across the United States because of the highly contagious Delta variant. In Ms. Greene’s home state, new cases have increased 171 percent in the past two weeks, while 39 percent of Georgia’s population has been fully vaccinated against the virus.

Ms. Greene’s Facebook account, which has more than 366,000 followers, remains active. Her posts on the social network are different from her posts on Twitter. She also has more than 412,000 followers on Instagram, which Facebook owns.

On Telegram, the encrypted chat app that millions flocked to after Facebook and Twitter removed thousands of far-right accounts, Ms. Greene has 160,600 subscribers.

Linda Qiu
Aug. 6, 2021, 5:44 p.m. ET

No, there is no evidence that migrants are driving the surge in coronavirus cases.

Image
Migrants crossed the Rio Grande River at Ciudad Juárez, Mexico, in March, trying to request asylum from U.S. Border Patrol agents.Credit...Daniel Berehulak for The New York Times

As coronavirus cases and hospitalizations surge across the country, driven by the spread of the Delta variant, some conservatives have pinned the blame on migrants crossing the southern border — without providing any evidence.

Faced with rapidly rising cases in their states and criticized by President Biden for their opposition to mask mandates, the governors of Florida and Texas have pointed to the administration’s border policies as a primary cause of the new cases. That sentiment has also echoed on social media, among members of Congress and among the unvaccinated.

“He’s imported more virus from around the world by having a wide open southern border,” Gov. Ron DeSantis of Florida said of Mr. Biden on Wednesday. “Whatever variants are across the world, they’re coming through that southern border.”

Gov. Greg Abbott of Texas made a similar claim on Fox News on Monday: “The Biden administration is allowing people to come across the southern border, many of whom have Covid, most of whom are not really being checked for Covid.”

Officials have said that positive test results among migrants have increased in recent weeks. A spokesman for Hidalgo County in Texas, which is in the Rio Grande Valley, where many migrants cross the border, said that the positivity rate for migrants was about 16 percent this week, as of Thursday.

But public health experts said there was no evidence that migrants were driving the surge in coronavirus cases. The positivity rate for residents of Hidalgo County — excluding migrants — was 17.59 percent this week.

While Texas is experiencing many more cases than a couple of months ago, many of the major outbreaks are occurring in states — such as Missouri and Arkansas — that do not border Mexico, said Dr. Jaquelin P. Dudley, associate director of the LaMontagne Center for Infectious Disease and a professor of molecular biosciences at the University of Texas at Austin.

Max Hadler, the Covid-19 senior policy expert at Physicians for Human Rights, a nonprofit advocacy group, said positive rates were increasing in every state in the country.

“It’s not a border issue or a migrant issue, it’s a national issue. And it’s a particularly major issue in states with lower vaccination rates,” Mr. Hadler said. “That’s the clearest and most important correlation, and it has nothing to do with migrants but rather with rates of vaccination among people living in those states.”

A recent report from the Kaiser Family Foundation found that those not fully vaccinated accounted for between 94 percent and 99.8 percent of reported coronavirus cases in the 23 states and Washington, D.C., that collect breakthrough case data.

There is no evidence that any of the four variants of concern tracked by the Centers for Disease Control and Prevention initially entered through the southern border. The four variants of concern, which are those that are more transmissible or cause more severe cases, are called Alpha, Beta, Gamma and Delta.

Dr. Benjamin Pinsky, the director of the Clinical Virology Laboratory for Stanford Health Care, which tracks new variants, said the lab’s findings did not support Mr. DeSantis’s assertion that variants were “coming through” the southern border.

The first identified cases of the Alpha and Beta variants in the United States were patients in Colorado and South Carolina with no travel history, according to the C.D.C. The first identified case of the Gamma variant was a patient in Minnesota, who had traveled to Brazil.

Dr. Katherine Peeler, an instructor at Harvard Medical School, noted that the Delta variant — first identified in India last year — is more widespread in the United States than in most of Latin America. The first case of the Delta variant detected in the United States occurred in March, according to the C.D.C.

Mexico did not report its first case of Delta until July, and El Salvador confirmed its first case last week.

“As such, this is not an issue of increasing Delta variant from the southern border and those seeking asylum,” Dr. Peeler said.

Mr. Abbott’s office did not respond when asked for evidence that migrants were not being tested.

Christina Pushaw, Mr. DeSantis’s press secretary, said the governor never implied that migration was the only reason for the spread of the virus, but rather he was simply highlighting “the paradoxical nature of the Biden administration’s support for additional restrictions on Americans and lawful immigrants,” like vaccine passports, “while allowing illegal migrants to cross the border and travel through the country freely.”

Most migrants trying to cross the southern border are turned away by officials. Out of 1.1 million encounters on the southern border so far this fiscal year, more than 768,000 have led to expulsions.

Of the remaining apprehended migrants, some are detained and some are released as they await decisions on their asylum applications. Several local governments and charities across the Texas border where migrants have been released told The New York Times that many, if not most, migrants in their care are tested and then quarantined if they test positive.

A spokesman for Customs and Border Protection said that the agency provided migrants with personal protective equipment as soon as they were taken into custody, and the migrants were required to keep their masks on at all times. Anyone who exhibits signs of illness is taken to a local health center and is tested and treated there. Once migrants are transferred out of C.B.P. custody, they are released to a nongovernmental organization, a local government, Immigration and Customs Enforcement or, in the case of unaccompanied minors, the Department of Health and Human Services.

The Department of Homeland Security has “taken significant steps to develop systems to facilitate testing, isolation, and quarantine of those individuals who are not immediately returned to their home countries after encounter,” David Shahoulian, the assistant secretary for border and immigration policy at the department, said in a government court document filed this week. Mr. Shahoulian said that the department and I.C.E. had set up processing and testing centers along the border to aid with the surge in migrants.

Davey Alba
Aug. 4, 2021, 1:36 p.m. ET

A top spreader of coronavirus misinformation says he will delete his posts after 48 hours.

Joseph Mercola, who researchers say is a chief spreader of coronavirus misinformation online, said on Wednesday that he would delete posts on his site 48 hours after publishing them.

In a post on his website, Dr. Mercola, an osteopathic physician in Cape Coral, Fla., said he was deleting his writings because President Biden had “targeted me as his primary obstacle that must be removed” and because “blatant censorship” was being tolerated.

Last month, the White House, while criticizing tech companies for allowing misinformation about the coronavirus and vaccines to spread widely, pointed to research showing that a group of 12 people were responsible for sharing 65 percent of all anti-vaccine messaging on social media. The nonprofit behind the research, the Center for Countering Digital Hate, called the group the “Disinformation Dozen” and listed Dr. Mercola in the top spot.

Dr. Mercola has built a vast operation to disseminate anti-vaccination and natural health content and to profit from it, according to researchers. He employs teams of people in places like Florida and the Philippines, who swing into action when news moments touch on health issues, rapidly publishing blog posts and translating them into nearly a dozen languages, then pushing them to a network of websites and to social media.

An analysis by The New York Times found that he had published more than 600 articles on Facebook that cast doubt on Covid-19 vaccines since the pandemic began, reaching a far larger audience than other vaccine skeptics. Dr. Mercola criticized The Times’s reporting in his post on Wednesday, saying it was “loaded with false statements” (not “false facts,” as was previously reported here).

Dr. Mercola said in his blog post that he would remove 15,000 past posts from his website. He will continue to write daily articles, he said, but they will only be available for 48 hours before being removed. He said it was up to his followers to help spread his work.

Rachel E. Moran, a researcher at the University of Washington who studies online conspiracy theories, said the announcement by Dr. Mercola was “him trying to come up with his own strategies of avoiding his content being taken down, while also playing up this martyrdom of being an influential figure in the movement who keeps being targeted.”

Aaron Simpson, a Facebook spokesman, said, “This is exactly what happens when you are enforcing policies against Covid misinformation — people try extreme ways to work around your restrictions.”

Facebook, he said, “will continue to enforce against any account or group that violates our rules.”

YouTube said that it had clear community guidelines for Covid-19 medical misinformation, that it had removed a number of Dr. Mercola’s videos from the platform and that it had issued “strikes” on his channel. The company also said it would terminate Dr. Mercola’s channel if it violated its three-strikes policy.

Twitter said that it had taken enforcement action on Dr. Mercola’s account in early July for violations of its Covid-19 misinformation policy, putting his account in read-only mode for seven days.

“Since the introduction of our Covid-19 misinformation policy, we’ve taken enforcement action on the account you referenced for violating these rules,” said Trenton Kennedy, a Twitter spokesman. “We’ve required the removal of tweets and applied Covid-19 misleading information labels to numerous others.”

A correction was made on 
Aug. 5, 2021

Because of an editing error, an earlier version of this article misstated a portion of the post from Joseph Mercola. He said a previous article in The New York Times was full of false “statements,” not false “facts.”

