Digital Directions: January 26, 2022

 

Insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy.

 
  • In our most recent publications, Forum staff and leading outside experts explore the crucial role of networked civil society in countering disinformation.
  • With the rise of audio-based chat apps, encrypted messaging services, and new authoritarian efforts to silence critics, platforms are confronting new types of moderation challenges.
  • Government bodies in the People’s Republic of China are writing new rules for AI systems in online platforms and beyond.

 

In the 2021 Seymour Martin Lipset Lecture on Democracy in the World, Ronald Deibert discussed how spyware and other forms of digital subversion threaten democracy.

How Pegasus Shows the Gaps in Global Tech Governance

Citizen Lab’s Ron Deibert describes the offerings of the lucrative private spyware industry as “a new kind of ‘despotism-as-a-service,’ enabling buyers to reach across borders and undermine their adversaries.” Recent revelations around the continuing abuse of the Israeli firm NSO Group’s Pegasus spyware to surveil and pressure government critics underscore the transnational nature of the digital authoritarian challenge. While some authoritarian states are pioneering and exporting their own comprehensive paradigms of digital repression, illiberal actors worldwide are also expertly leveraging the power and vulnerabilities of Western tech platforms to gather information, shape perceptions, and target critics. Critically, they are bolstering their repressive capacities using hardware and surveillance tools produced in democracies. Thus far, export control regimes have failed to keep pace with the challenge of “contain[ing] technologies that can enable human rights abuses.”

Our digitally connected lives enable new forms of control that blur not only national borders, but also the lines between public and private power. As Deibert has emphasized, malicious actors looking to surveil, target, or manipulate others take advantage of a digital ecosystem that is “invasive by design, fundamentally insecure, exploitative of human cognitive biases, poorly regulated, and prone to abuses.” Spying on political targets is easier when tech companies have already amassed vast quantities of data on users. Control over digital tools allows private companies to set the boundaries of acceptable speech, or to augment government capacities for surveillance and propaganda. All this in turn raises profound questions about how and where rules are to be made regarding the exercise of digital power.

The lack of clarity around these questions has left the door open to abuses that are eroding democratic norms, and closing it again will be no simple matter. The case of Pegasus illustrates the magnitude of this challenge. Last summer, a consortium of networked journalists disclosed that state clients sought to use this powerful phone hacking tool to target journalists, politicians, and members of civil society across dozens of countries. These revelations prompted multiple political scandals and helped to land NSO Group on the U.S. government’s Entity List (restricting its access to U.S. exports). Nonetheless, a joint investigation by digital rights groups has revealed that Pegasus was still being used as late as November 2021 to target journalists in El Salvador. Among the targets were twenty-three people working at one outlet critical of the current president. As Samuel Woodhams outlined for the Center for International Media Assistance, spyware attacks impede the flow of information by facilitating attacks on journalists, fostering self-censorship, and harming relationships with sources. While Pegasus has gained particular notoriety, it is only one among many digital surveillance tools that have been exported from democracies and used against civil society or political targets.

Yet by bringing Pegasus into the public eye, journalists, activists, and researchers have spurred new movement toward curbing the abuses that flourish in this murky corner of the digital domain. Global norms addressing commercialized spyware are sorely lacking. Sustained civil society attention, however, has prompted new interest in the issue among some democracies. A 2021 update to the EU’s dual-use export control regime, traditionally focused on military applications, adds new requirements that include making human rights a consideration in exporting surveillance systems. More recently, the U.S. government has also begun articulating principles to govern the export of hacking tools. At last month’s Summit for Democracy, several leading democracies announced the launch of a new initiative on export controls and human rights. Though the path to effectively checking abuses remains a long one, a more robust response to the booming market for digital despotism may finally be gaining momentum.

– Elizabeth Kerley, International Forum for Democratic Studies

 

 

RUSSIAN DISINFORMATION SPREADING IN AFRICA: The U.S. State Department recently charged that Russia has been carrying out a disinformation campaign as part of broader efforts to devise a “pretext” for invading Ukraine. While this case has drawn new public attention, the use of disinformation to achieve geopolitical goals represents a longstanding strategy also evident in the Kremlin’s growing involvement on the African continent. Operations linked to Russia reportedly fostered pro-Russian and anti-French sentiment amid Mali’s May 2021 coup; helped to generate disproportionately positive media coverage of Russia’s Sputnik V COVID vaccine while downplaying the safety and efficacy of Western vaccines; and promoted a Sudanese paramilitary group. As Russia continues to mask its involvement by leveraging local actors with native-language expertise, and as domestic disinformation operators adopt tactics similar to Moscow’s, experts note that the lack of relevant language expertise among platform moderators remains an obstacle to counter-disinformation efforts.

ENCRYPTED MESSAGING APPLICATIONS AND POLITICAL PROPAGANDA: As encrypted messaging applications (EMAs) like Viber become more popular, experts are concerned that these apps will become breeding grounds for government-backed disinformation. For example, officials from the Philippines’ Presidential Communications Operations Office told researchers that the government has established groups of over two million Viber users to distribute government announcements and spread content promoting positive views of President Rodrigo Duterte. However, EMAs also present new opportunities for citizens to push back against government propaganda through humor, private conversation, and fact-checking initiatives.

DEMOCRACIES CEDING GROUND ON DISINFORMATION AT UN: Jacob Shapiro and Alicia Wanless argue that authoritarian and hybrid regimes may be looking to exploit a newly approved UN resolution on disinformation, introduced by Pakistan and co-sponsored by the Central African Republic, Côte d’Ivoire, Eritrea, and the Russian Federation. The resolution underscores states’ role in addressing disinformation. Several of the resolution’s sponsors are among the many governments worldwide that have alarmed free expression advocates by introducing or implementing “fake news” laws, and members of the group also have a record of cracking down on independent media. Shapiro and Wanless encourage democracies to take back control of the agenda by putting forward their own proposals for international principles on combatting disinformation.

 

 

TWITTER SPACES FLOODED BY HATE SPEECH AND EXTREMISM: Twitter’s audio chat service, Spaces, has been used by Taliban supporters, white nationalists, and anti-vaccine activists to spread hate speech, extremist content, and misinformation. The platform’s struggle with this content highlights the unique moderation challenges posed by audio-based social media. According to a Twitter spokeswoman, real-time moderation for audio is “not something that we have available at this time.” Instead, the company uses text-based software to detect problematic keywords in Spaces titles, saves recordings of Spaces conversations for 30 to 120 days for content review (if necessary), and relies on users to report rules violations. Emerson Brooking of the Atlantic Council’s Digital Forensic Research Lab notes: “Twitter spent six years creating a strong set of procedures to take dangerous content off its platform . . . [but] Spaces is totally ungoverned.”

META REPORT NAMES TWO NEW SECURITY POLICY VIOLATIONS: An end-of-year adversarial threat report from Meta, the company formerly known as Facebook, identified two new types of security policy violations: brigading and mass reporting. Meta defines brigading as networks of people working together to “mass comment, mass post, or engage in other types of repetitive mass behaviors to harass others or silence them.” Mass reporting occurs when people cooperate to flag an account or post for supposed rules violations in order to have it incorrectly removed. These practices occur across many social media services and have been used by authoritarian regimes to stifle independent voices. Meta explained that it had removed networks practicing these behaviors in Italy, France, and Vietnam.

A ROAD MAP FOR ACCOUNTABLE CONTENT MODERATION? Last month, a coalition of civil society and research organizations released an updated version of the 2018 Santa Clara Principles on Transparency and Accountability in Content Moderation. Apple, Google, Facebook, Twitter, and eight other tech giants previously signed on to the 2018 version of these rules, which covered transparency reporting as well as notice and appeal procedures for prohibited content. The updated principles, formulated over 2020–21, address recently prominent concerns such as the use of automated content moderation tools and the need for cultural competence (including linguistic skills) on the part of moderation teams. The Principles also note the hazards of state involvement in content moderation.

 

 

WILL CHINA’S AI GOVERNANCE PUSH SHIFT GLOBAL NORMS? Amid the country’s broader crackdown on large tech companies, regulatory bodies in China are seeking to establish new rules and ethical guidelines for artificial intelligence (AI) systems, including the recommendation algorithms that drive online platforms. Matt Sheehan at the Carnegie Endowment breaks down these initiatives, showing that in addition to rules specifically designed to reinforce PRC ideological norms, regulators in China are also introducing principles, such as those concerning algorithmic explainability, that speak to ongoing global discussions on AI ethics. A recent report from AEI’s Elisabeth Braw argues that in the absence of clear international governance structures or principles for AI, China’s extensive investment in AI research and development, as well as its push to influence technical standards bodies, will make it a contender to shape the global rules in this area and diminish human rights norms.

RUSSIA CONTINUES SOCIAL MEDIA CENSORSHIP: BBC reporting shows that from 2011 through 2020, Russia made 123,606 requests to remove content from Google or YouTube—more than all other countries combined. Despite claims by Russian media regulators that their takedown requests mainly concerned “child pornography, suicide, drugs, extremism and fake news,” the BBC’s research indicates otherwise. Of the approximately 400 pieces of content referenced in court proceedings against Google, Facebook, Instagram, and Twitter, only 21 related to child abuse, drugs, or suicide; the majority called for Russians to attend demonstrations in support of prominent opposition activist Alexei Navalny. If companies deny such requests, they can be fined, as Facebook and Twitter were in September 2021.

 

 

In a new article at War on the Rocks, Forum Associate Director Kevin Sheives calls for civil society to take a leading role in countering disinformation operations. Sheives explains that tech companies have struggled with self-regulation and content moderation, while governments are naturally inclined to prioritize national security over democratic values in their decision-making. By contrast, entrepreneurial and collaborative civil society organizations have pioneered more effective approaches to countering authoritarian influence and hate speech.

Examples of these ground-breaking approaches can be found in the Forum’s latest Global Insight series, Innovation in Counter-Disinformation: Toward Globally Networked Civil Society. These essays by journalists, civil society activists, and scholars focus on a globally networked response to disinformation that goes beyond platform-centered solutions, combats disinformation in the non-digital sphere, and addresses the information integrity challenges of illiberal regimes in sub-Saharan Africa and Southeast Asia. In the overview essay, Sheives outlines how “a loosely connected, constantly learning global network of counter-disinformation responders—with the benefit of greater access to platforms and additional resources from funders—can serve as a bulwark against evolving threats to the integrity of the information space.”


Thanks for reading Digital Directions, a biweekly newsletter from the International Forum. 

Sign up for future editions here!

 
