Introduction

Media policy is designed in large part to support high-fidelity information—news with a signal-to-noise ratio necessary for self-government. Federal broadcast regulations, the Supreme Court precedents upholding them, investments in public media, and journalistic norms all seek to support an informed citizenry and glorify the predicate values of truth and robust debate. “Signal,” in this context, is information that is truthful and supportive of democratic discourse. “Noise” misinforms and undermines discursive potential. When signal overpowers noise, there is high fidelity in the information environment.

Policymakers and the public are outraged at digital information platforms (“platforms” or “digital platforms”) for, among other things, the platforms’ roles in promoting noise via disinformation and hate speech. This outrage is fueling calls to break up the platform companies. Reducing platform size may address some aspects of overweening power, but antitrust law will not correct problematic information dynamics. For one thing, splintered companies are likely to re-consolidate. For another, if more small companies simply replicate the same practices, similar patterns are likely to emerge with different owners. Diverse ownership is a justly enduring value in media policy, but not a panacea.

A distinct and often complementary approach to antitrust is regulation. Digital platforms have operated largely free from media regulation. So too, they have been untethered by the norms around media responsibility, and the associated legal liability, that have constrained publishers. Transparency rules and norms, which have long been useful for fidelity, are among those that have never applied. In analog commercial and political advertising, rules require sponsorship identification on the theory that if people know who is speaking, they will better be able to filter out noise. Because disclosure mandates increase information rather than suppress it, transparency is a light lift from the free speech perspective. It is thus natural that transparency would top policy agendas as the cure-all, or at least a “cure-enough,” for online harms. As governments now begin to close the digital loophole and extend analog-era regulations to digital flows of information, we should understand the limits of these moves.

Transparency alone is no match for platform design choices that degrade fidelity. Algorithmic amplification creates a digital undertow that weakens cognitive autonomy and makes it difficult for people to sift signal from noise. Merely importing analog-era regulations into the digital realm will not adequately reckon with how meaning is made online. If the internet is a stack of functions, with data transmission at the bottom, and content at the top, traditional transparency happens at the surface where content emerges. But it is lower down the stack where cascades of individual actions, paid promotions, and platform priorities determine how messages move. Meaning is made where likes and shares and algorithmic optimizing minutely construct audiences, where waves of disinformation swell and noxious speech gathers energy. Increasing fidelity by empowering individual autonomous choice will require both transparency and other interventions at the level of system architecture. To this end, disclosures should cover the reach and targeting of recommended and sponsored messages.

One way to understand disclosure rules is that they create friction in digital flow—friction that opens pathways for reflection. Disclosure is not the only, and may not be the best, frictive intervention. Media policy should introduce other forms of salubrious friction to disincentivize and disrupt practices that addict, surveil, and dull critical functions. New sources of friction can slow the pull of low-fidelity information and equip people to resist it.

The first section of this essay briefly describes the historic relationship between American media policy and information fidelity, focusing on transparency rules and the reliance on listener cognitive autonomy. The second shows how analog-era transparency rules are being adapted for digital platforms with a view toward restoring and protecting autonomy. The third discusses the ways in which these transparency solutions alone cannot cope with algorithmic noise and suggests that more systemic transparency is necessary. The fourth proposes that new sources of friction in information flows may be needed to foster information fidelity amidst the algorithmic production of salience.

High-Fidelity Information and Media Policy

The development of American 20th century media was, as Paul Starr argues, inextricably tied to liberal constitutionalism and its values of truth, reasoned discourse, and mental freedom. This linkage was reflected in media policies that yoked regulation to safeguarding autonomy and encouraging democratic participation. A principal media policy goal has been to boost information fidelity, or the signal-to-noise ratio, in the service of democratic processes. The signal is information necessary to self-government, characterized by accuracy, relevance, diversity of views, and similar values. As Justice Stephen Breyer put it, “[C]ommunications policy ... seeks to facilitate the public discussion and informed deliberation which ... democratic government presupposes.”

Digital platforms can overwhelm signal with noise. Scale and speed, user propagation, automated promotion, inauthentic and hidden amplification, and the mixture of sponsored and organic speech all make digital discourse different. Alongside these technical differences are sociopolitical ones. Digital platforms emerged from the world of software engineering, not the press. They are not inextricably tied to liberal constitutionalism. They stumbled into media without the norms or bonds of 20th century professionalized press traditions or regulatory pressures. It is therefore not shocking that platform architecture not only tolerates but even favors low-fidelity speech. Accuracy has little structural advantage in the attention economy. Deepfakes, bot-generated narratives masquerading as groundswell truths, and other social media contrivances amplify disinformation and can create epistemic bubbles. Algorithmic systems deliver content to audiences deemed receptive based on data-inferred characteristics. This delivery system has design features like the infinite scroll or social rewards of provocation that bypass listeners’ cognitive checks and autonomous choice. The result is a noisy information environment that is inhospitable to the production of shared truths and the trust necessary for self-government.

American media policy can do very little to eradicate noise. For the most part, the First Amendment is hostile to bureaucratic judgments about information quality. Aside from defamation actions and outside of advertising, the law generally protects falsehoods from government censure. So strong is the aversion to policing truth that investigative journalists who break the law in order to reveal truths enjoy less protection than those who misinform in order to deceive. The constitutional tolerance for lies rests on the assumption that people can and will privilege truth if given the chance. This is the classic “marketplace of ideas” formulation of a free speech contest for mindshare. Truth is expected to outperform lies so long as people are equipped to choose it.

A high-fidelity information environment in liberal democracies thus depends heavily on the exercise of cognitive autonomy: people reasoning for themselves. Respect for autonomy is at the root of the First Amendment’s guarantees of free speech, religion, assembly, and petition. So in a decision interpreting the First Amendment, Justice Louis Brandeis observed, “Those who won our independence ... believed that freedom to think as you will and to speak as you think are means indispensable to the discovery and spread of political truth.” From the heart of the First Amendment, the impulse to safeguard autonomous thought runs straight through the Fourth Amendment’s protection against unreasonable government searches. By impeding entry to the house, the Constitution made it harder for government to enter the mind. Justice Brandeis again: the Founders “sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations.”

Law developed over the 20th century to safeguard the free mind from deceptive messaging conveyed by mass communication, especially via the mechanism of the disclosure requirements discussed below. There were other broadcast law interventions—significant more for their rhetorical weight than their operative force—that sought to prevent manipulation. Federal Communications Commission (FCC) rules prohibit broadcast hoaxes and the intentional slanting of news: “[A]s public trustees, broadcast licensees may not intentionally distort the news ... ‘rigging or slanting the news is a most heinous act against the public interest.’” Long ago, the FCC banned the broadcast of “functional music”—something like muzak—for fear that it would subliminally seduce the public into a buying mood. What these regulatory examples show is a concern for listener autonomy: that listeners not be deceived or lured into false consciousness. So freed, the listener can presumably ensure for herself a high-fidelity information diet.

The operation of autonomous choice to filter signal from noise, as it developed in the analog world, has to be understood against the backdrop of signal-supporting government policies and industry practices. Broadcasters have been subject to public service requirements of various kinds, affirmative programming requirements for news, and the erstwhile “fairness requirements” to ventilate opposing viewpoints on controversial issues. The Public Broadcasting Act established subsidies for noncommercial media that would “be responsive to the interests of people, ... constitute an expression of diversity and excellence, [develop] programming that involves creative risks and that addresses the needs of unserved and underserved audiences ... [and] constitute valuable local community resources for utilizing electronic media to address national concerns and solve local problems.” Broadcasters who are inclined to amplify deceptive messages might also be deterred by the spectrum licensing system, which at least formally subjects broadcasters to the risk that they will lose their licenses through petitions to deny renewals.

More signal-boosting work was done by press norms and business structures. Defamation law incentivizes publishers to take care with the truth. Newspaper mastheads lay responsibility for content at the feet of named publishers and editors. The professionalization of the news business led to norms of fact-checking and fidelity. Analog-era media economics tended to reward high-fidelity news production. Local newspapers enjoyed near-monopoly claims on advertising revenue that supported investigative journalists. Because news was bundled with entertainment and sports, media outlets cross-subsidized one with revenue from the other. These economics have of course been upended in the digital world, where content bundles are disaggregated and digital platforms absorb the advertising revenue needed for journalism—without in fact producing it.

One other thing to note about the analog media environment that birthed media policy is that analog information flows were much slower. The task of filtering signal from noise was made easier simply by virtue of analog system constraints. Attention abundance and content scarcity meant that more cognitive resources could be allocated to evaluating a particular piece of content. The information flow through newspapers and broadcast channels left time enough to absorb disclosures or discriminate among messages. Perhaps most significantly, the flow was not narrowcast. Noise in the form of lies or manipulation would be exposed to a large audience, which was itself a form of discipline and an opportunity for collective filtering.

With this background, we can turn to the transparency rules that developed in the analog environment to safeguard cognitive autonomy and enhance information fidelity. It is the translation of these rules for digital platforms that is the first-order work of platform regulation.

Fidelity of Message—Know Who’s Talking to You

In reaction to the social media disruptions of 2016—including foreign interference in the messaging around the Brexit vote in the United Kingdom and the presidential election in the United States—western democracies are considering or adopting laws to try to limit foreign political advertising and surreptitious messaging of all kinds. These interventions are forward-looking as well, with an eye toward the expected onslaught of disinformation in future campaigns. At the same time, the largest social media platforms, including Facebook, Twitter, and YouTube, have taken voluntary steps to police inauthentic accounts that violate their terms of service and to be more transparent about the sources of political advertising.

For the most part, the notion of transparency reflected in both mandated and self-imposed measures is an old one: Individuals can be manipulated into mistaking noise for signal if they don’t know who is speaking to them. Analog-era transparency requirements took hold at the level of the message. That is, disclosures about a particular advertisement or program were displayed simultaneously with the message in order to allow listeners to exercise autonomous judgment about that message. The following shows how analog-era media transparency rules tried to increase information fidelity and how these rules are being adapted for digital flows.

Analog-Era Transparency Rules

Twentieth-century advertising and media law sought to advance information fidelity by increasing transparency of authorship, essentially to help listeners filter out noise. Without knowing who is behind a message, people might be manipulated into believing what, in the light of disclosure, is unbelievable. Concealed authorship slips messages past cognitive checks that safeguard freedom of mind. Disclosure mandates aim to restore these checks and enable listeners to apply cognitive resistance.

Most of the analog-era source disclosures are tied to the message itself. For example, print, radio, and television political advertising messages are subject to disclosure requirements under the Federal Election Campaign Act. A “clear and conspicuous” disclaimer is required to accompany certain “public communications” that expressly advocate for a candidate. The disclaimer identifies who paid for the message and whether it was authorized by the candidate. The Supreme Court, in Citizens United v. FEC, found these requirements to be justified by the government interest in ensuring that “‘voters are fully informed’ about who is speaking.” In another decision, Justice Antonin Scalia celebrated the virtue of transparent political speech, writing, “Requiring people to stand up in public for their political acts fosters civic courage, without which democracy is doomed.”

Disclosure law is also entrusted to the Federal Communications Commission, whose predecessor agency started requiring sponsorship identification under the Radio Act of 1927. The most notable expansion of these rules followed not a political event but the payola scandals of the 1950s, when record labels bribed DJs to play their music, thus surreptitiously appropriating the editorial role. It was then that Congress authorized the FCC to require broadcasters to disclose paid promotions. Disclosure is required when “any type of valuable consideration is directly or indirectly paid or promised, charged or accepted” for the inclusion of a sponsored message in a broadcast. For controversial or political matters, disclosure is required even when no consideration is paid. Behind this requirement is the idea that faked provenance prevents people from engaging with speech on the level and thereby from exercising cognitive autonomy. As discussed below, these rules apply only to broadcast media, not to the internet.

Another set of source disclosure rules comes from the Federal Trade Commission. Once Madison Avenue had perfected techniques to bypass critical resistance to commercial messages, it became the job of the FTC to protect consumers from being duped. Section 5(a) of the Federal Trade Commission Act empowers the agency to police sponsored messages for unfairness or deception. To reduce the likelihood that advertising would deceive by concealing motive or authorship, the FTC issued guidance about source disclosures for paid product endorsements. These disclosures must be “clear and conspicuous ... to avoid misleading consumers.” Here, in theory, there is no digital loophole. Clear and conspicuous guidelines also apply to digital advertisements and to digital influencer sponsorship.

Some analog-era disclosure rules, while still operating at the message level, are meant for information intermediaries, rather than the listener. For example, the FCC requires various kinds of “public file” submissions so that the public can be made aware of how broadcasters approach their public interest obligations. Broadcasters also have to make disclosures about their ownership structure so as to inform the public who really holds their communicative power. So too, the FEC requires this kind of intermediary-focused disclosure about campaign contributions and spending. Though aimed at intermediaries, the objective of these disclosures is still to help listeners understand who is speaking to them.

Adaptation to Digital

The first rounds of proposals to regulate digital platforms more or less adapt analog-era transparency requirements to the internet. They attack manipulation in the form of source concealment at the level of the message.

Most internet messaging is not covered under the election law term “public communications,” and therefore there has been no FEC-required sponsorship disclosure on digital platforms. Closing this sort of digital loophole is a straightforward, though still unrealized, policy project. One of the first attempts to translate analog transparency regimes to the digital world in the United States was the Honest Ads Act, introduced for a second time in March 2019. Seeking to uphold the principle that “the electorate bears the right to be fully informed,” the Act would close the digital loophole for online campaign ads. Platforms would have to reveal the identities of political ad purchasers. While the Honest Ads Act is stalled in Congress as of this writing, several states have moved forward to adopt similar legislation, including California, Maryland, and New York.

California’s Social Media DISCLOSE Act of 2018 extends political advertising sponsorship disclosure requirements to social media. New York’s Democracy Protection Act of 2018 requires paid internet and digital political ads to display disclaimers stating whether the ad was authorized by a candidate as well as who actually paid for the ad. Washington State has altered its campaign finance laws to require disclosure of the names and addresses of political ad sponsors and the cost of advertising. Canada enacted a law requiring that platforms publish the verified real names of advertising purchasers.

New technologies have created new threats to information fidelity. Bots enable massive messaging campaigns that disguise authorship and thereby increase the perceived value or strength of an opinion. A substantial number of tweeted links originate from fake accounts designed to flood the information space with an opinion expressed so frequently that people believe it. Deepfakes create fraudulent impressions of authorship through ventriloquy, using artificial intelligence to fake audio or video. Proposed and adopted laws to address deepfakes and bot-generated speech are in the same tradition as the political and advertising disclosure requirements advanced to close the digital loopholes. They seek to ensure that people are informed about who is speaking to them (in the case of bots) and whether the speech is real (in the case of deepfakes).

California SB 1001 makes it illegal to use a bot to communicate with someone with “the intention of misleading and without clearly and conspicuously disclosing that the bot is not a natural person” and requires removal of offending accounts. Any “automated online [“bot”] account” engaging a Californian about a purchase or a vote must identify itself as a bot. Notably, the law makes clear that it “does not impose a duty on service providers of online platforms.”

At the federal level, the proposed Bot Disclosure and Accountability Act would clamp down on the use of social media bots by political candidates. Candidates, their campaigns, and other political groups would not be permitted to use bots in political advertising. Moreover, the FTC would be given power to direct the platforms to develop policies requiring the disclosure of bots by their creators/users. Another federal proposal would require platforms to identify inauthentic accounts and determine the origin of posts and/or accounts. Finally, the European Commission’s artificial intelligence ethics guidelines include a provision that users should be notified when they are interacting with algorithms rather than humans.

Deepfakes are another technique to distort democratic discourse by concealing authorship. Facebook is entreating developers to produce better detection systems for deepfakes. Early legislative efforts at the federal and state levels would penalize propagators of deepfakes in various circumstances. The most notable federal proposal—the DEEPFAKES Accountability Act—would address the manipulative possibilities of deepfakes by requiring anyone creating synthetic media featuring an imposter to disclose that the media was altered or artificially generated. Such disclosure would have to be made through “irremovable digital watermarks, as well as textual descriptions.” This sort of “digital provenance” only works if the marks are ubiquitous and unremovable—both of which are unlikely. As Devin Coldewey critically observes, “[T]he law here is akin to asking bootleggers to mark their barrels with their contact information.” Even if the law is neither effective nor enforceable, it at least serves an expressive purpose by stating (or restating) that informational fidelity is worth pursuing.

While most of these proposals deal with direct-to-consumer transparency, there are also new proposed and adopted rules to benefit information intermediaries. There are many versions of an advertising archive requirement. The Honest Ads Act would require platforms to maintain a political ad repository of all political advertisers that have spent more than $500 on ads or sponsored posts. Canada’s political advertising law also mandates an ad repository. On the state level, the California Disclose Act requires political campaign advertisers to list their top three contributors and requires platforms to maintain a database of political ads run in the state. The New York State Democracy Protection Act mandates that political ads be collected in an online archive maintained by the State Board of Elections. Washington State requires disclosure of who paid for a political ad, how much the advertiser spent, the issue or candidate supported by the ad, and the demographics of the targeted audience.

Much about this adaptation of analog-era transparency rules to digital is good and necessary. But it will not be sufficient, either as a matter of transparency policy or as a more general instrument of digital information fidelity.

Fidelity of System—Know Who the System Is Talking To

Digital platforms serve up content and advertising to listeners to capitalize on cognitive vulnerabilities surfaced through pervasive digital surveillance. The noise problem on digital platforms differs from that on analog media in part because the business model pushes content to soft targets, where cognitive resistance is impaired. Merely updating analog-era transparency rules as an approach to information fidelity misses this fundamental point about how digital audiences are selected for content. Analog mass media and advertising transparency regimes, embodied in such practices as sponsorship identification, seek to combat manipulation at the level of the message. But digital manipulation transcends the message. It is systemic. The actual message is only the end product of a persuasive effort that starts with personal data collection, personal inferences, amplification, and tailoring of messages to the “right” people, all of which happens in the dark.

Advertisers have always tried to target segmented audiences with persuasive messages, but analog technologies offered only scattershot messaging to the masses. System architecture made it impossible to hide where the messages went; distribution was evident. All listeners of channel x were exposed to y content at z moment (give or take some time shifting). On social media platforms like Facebook and Twitter, obfuscation and manipulation are emergent properties of algorithmically mediated speech flows that surface communications based on microtargeting and personal data collection. In the current environment, no one can easily solve for x, y, and z. Moreover, people are ill-equipped to filter out noise in light of digital design features that depress cognitive autonomy, as discussed below. Manipulation in this context resides not only in the individual messages but also in the algorithmic production of salience. Transparency mechanisms designed mainly to strengthen cognitive resistance to discrete messages will not be enough to secure freedom of mind. Policy should boost signal throughout the system, through transparency and other means.

Algorithmic Noise

As Julie E. Cohen observes,

Algorithmic mediation of information flows intended to target controversial material to receptive audiences, ... inculcating resistance to facts that contradict preferred narratives, and encouraging demonization and abuse. ... New data harvesting techniques designed to detect users’ moods and emotions ... exacerbate these problems; increasingly, today’s networked information flows are optimized for subconscious, affective appeal.

She is touching on a complex of problems related to polarization, outrage, and filter bubbles. Platforms systematically demote values of information fidelity. There is a collapse of context between paid advertisements and organic content, between real and false news, between peer and paid-for recommendations. Jonathan Albright describes a “micro-propaganda machine” of “behavioral micro-targeting and emotional manipulation—data-driven ‘psyops’” ... that can “tailor people’s opinions, emotional reactions, and create ‘viral’ sharing (LOL/haha/RAGE) episodes around what should be serious or contemplative issues.”

Platform algorithms boost noise through the system as a byproduct of the main aim: engagement (subject to some recent alterations to content moderation practices). In order to maximize and monetize attention capture, the major digital platforms serve up “sticky” content predicted to appeal based on personal data. Dipayan Ghosh writes that “[b]ecause there is no earnest consideration of what consumers wish to or should see in this equation, they are subjected to whatever content the platform believes will maximize profits.” Platforms understand what content will maximize engagement through a process of data harvesting that Mark Andrejevic has called “digital enclosure.” Algorithmic promotion is abetted, often unwittingly, by the users themselves, who are nudged to amplify messages that on reflection they might abjure. In this respect, users are manipulated not (or not only) via a specific message but through technical affordances that drive them into message streams without care for message quality. This production of salience happens below the level of the message. Listeners relate to information unaware of the digital undertow.

The Council of Europe directly confronted the ways in which platform design undermines cognitive autonomy in its 2019 Declaration on the Manipulative Capabilities of Algorithmic Processes. Machine learning tools, the Council said, have the “capacity not only to predict choices but also to influence emotions and thoughts and alter an anticipated course of action, sometimes subliminally.” The Declaration further states that “fine grained, sub-conscious and personalized levels of algorithmic persuasion may have significant effects on the cognitive autonomy of individuals and their right to form opinions and take independent decisions.” The Declaration’s supposition is supported by research showing how digital speech flows are shaped by data harvesting and algorithmically driven and relentlessly monetized platform mediation.

Platform priorities and architecture have reshaped public discourse in ways that individual users cannot see and may not want. Platforms flatten out the information terrain so that all communications in theory have equal weight, with high-fidelity messages served up on a par with misinformation of all kinds. This is sometimes called context collapse. Stories posted on social media or surfaced through voice command are often denuded of credibility tokens or origination detail, like sponsorship and authorship, making it hard to distinguish between fact and fable, high fidelity and low.

Listeners face this material in vulnerable states, by design. Platforms in pursuit of engagement may pair users with content in order to exploit users’ cognitive weaknesses or predispositions. Design tricks like the “infinite scroll” keep people engaged while blunting their defenses to credibility signals. YouTube autoplay queues up video suggestions to carry viewers deeper into content verticals that are often manipulative or otherwise low-fidelity. Social bots exploit feelings of tribalism and a “hive mind” logic to enlist people into amplifying information, again without regard to information fidelity.

Other design features like notifications and the quantification of “likes” or “follows” trigger dopamine hits to hook users to their apps. Gratification from these hits pushes people to share information that will garner a reaction. On top of this, Facebook’s News Feed and YouTube’s Suggested Videos use predictive analytics to promote virality through a user’s network. These tricks are among what are called “dark pattern” design elements. They are hidden or structurally embedded techniques that lower cognitive resistance, encouraging a sort of numb consumption and automatic amplification while at the same time facilitating more data collection, which supports more targeted content delivery, and so on.

That these design features can be responsible for lowering information fidelity is something the platforms themselves recognize. Under pressure from legislators, Facebook in 2017 said that it would block efforts by government and nonstate actors to “distort domestic or foreign political sentiment” and “coordinated activity by inauthentic accounts with the intent of manipulating political discussion.” In other words, the platform would work to depress noise. But this reference to “distortion” assumes a baseline of signal that the platform has not consistently supported. Its strategies with respect to news zig-zag in ways that have undermined the salience of high-quality information. Emily Bell and her team at Columbia University’s Tow Center for Digital Journalism have chronicled how Facebook policies influence news providers, getting them to invest in algorithmically desirable content (including, for a while, video), only to abruptly change direction, scrambling editorial policies and wasting resources. Facebook decided in 2018 to demote news as compared with “friends and family” posts and then the next year created a privileged place for select journalism outlets in the News Tab. Policies that are both erratic and truth-agnostic allow noise generators, through the canny use of amplification techniques, to manipulate sentiment without resorting to inauthenticity. Facebook’s editorial policies and their fluidity have led to criticisms that the process is lacking in transparency and accountability.

Platform design features have to be understood against the platforms’ background entitlements and resulting norms. The most significant entitlement is their immunity under Section 230 of the Communications Decency Act. This provision holds platforms harmless for most of the content they transmit, freeing them from the liability that other media distributors may face for propagating harms. It is not surprising, then, given the legal landscape, that the platforms have not developed a strong culture of editorial conscience. They have grown up without anything like a robust tradition of making editorial choices in the public interest, of clearly separating advertising from other content, of considering information needs, or of worrying that they might lose their license to operate.

All of these features—business models, architecture, traditions, and regulation—combine with the sheer volume of message exposure to limit the effectiveness of message-level disclosure in digital flows.

Systemic Transparency

If disclosures are to enhance digital information fidelity, more than message-level transparency will be required. There are at least two reasons to look further down the stack toward greater system-level transparency.

The first reason is that message labels may not be effective counters to manipulation, given the volume and velocity of digital messaging. In studies of false news, researchers have found that users repeatedly exposed to false headlines on social media perceive them as substantially accurate even when they are clearly implausible. Warning labels identifying the headlines as incorrect either had no effect on perceptions of credibility or even caused people to share the information more often. The frictionless sharing that digital platforms enable may simply overwhelm signifiers of compromised informational integrity delivered at the point of consumption.

In important ways, by the time the message is delivered to the user, meaning has already been made. The messages on the surface are epiphenomena of algorithmic choices made below. This is the second reason to push transparency mandates lower down the stack, where algorithmic amplification decisions reside. How can we render visible the “authorship” of information flows? It’s not enough for the individual to know who is messaging her. What is trending and which messages are reaching which populations are a function of algorithmic ordering and behavioral nudges hidden from view. Salience is a product of these systemic choices.

European governments are trying to address algorithmic manipulation through transparency rules geared to the algorithmic production of salience. Among other regulators, the UK Electoral Commission aspires to fill in the lacunae of campaign ad microtargeting, where “‘[o]nly the company and the campaigner know why a voter was targeted and how much was spent on a particular campaign.’” A report commissioned by the French government has proposed “prescriptive regulation” that obliges platforms to be transparent about “the function of ordering content,” among other features. This includes transparency about “the methods of presentation, prioritisation and targeting of the content published by the users, including when they are promoted by the platform or by a third party in return for remuneration.” Similarly, a UK Parliament Committee report in the aftermath of the Cambridge Analytica scandal has recommended that “[t]here should be full disclosure of the targeting used as part of advertising transparency. ... Political advertising items should be publicly accessible in a searchable repository—who is paying for the ads, which organisations are sponsoring the ad, who is being targeted by the ads.” Maryland’s electioneering transparency law would also have mandated extensive disclosure of election ad reach but was held unconstitutional on First Amendment grounds by the Fourth Circuit Court of Appeals.

Drawing on these and other proposed interventions, we can identify systemic transparency touchstones. Some of these can be addressed by platform disclosure, others only by making data available for third-party auditing. When Facebook was interrogated by the U.S. Congress over Russian interference in the 2016 election, it showed itself capable of disclosing a lot of information about data flows. This is the kind of information that should be routinely disclosed at least with respect to certain categories of paid promotion.

Items that should be made known or knowable by independent auditors include:

  • The reach of election-related political advertisements, paid and organic, and revenue figures;
  • The reach of promoted content over a certain threshold;
  • The platforms’ course of conduct with respect to violations of their own terms of service and community standards, including decisions not to downrank or remove content that has been flagged for violations;
  • The use of human subjects to test messaging techniques by advertisers and platforms (also known as A/B testing);
  • Change logs recording the alterations platforms make to their content and amplification policies;
  • “Know Your Customer” information about who really is behind the purchases of political advertising.
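
To make the shape of such disclosures concrete, the following is a minimal sketch, in Python, of what a machine-readable record in a searchable ad repository might contain. The field names and the reach-reporting threshold are hypothetical illustrations drawn from the items listed above, not the schema of any actual statute, platform, or regulator.

```python
# A hypothetical, machine-readable disclosure record for a searchable political
# ad repository. Field names and the reach threshold are illustrative only;
# they are not drawn from any statute or platform API.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

REACH_DISCLOSURE_THRESHOLD = 10_000  # illustrative reporting threshold for promoted content


@dataclass
class AdDisclosureRecord:
    sponsor_name: str                # who paid for the message
    beneficial_owner: Optional[str]  # "Know Your Customer": who is really behind the purchase
    spend_usd: float                 # amount spent on the ad or sponsored post
    first_run: date
    last_run: date
    targeting_criteria: List[str]    # e.g., ["state: WI", "age 30-49"]
    audience_reached: int            # accounts actually reached
    paid: bool                       # paid promotion vs. organic amplification

    def requires_public_reporting(self) -> bool:
        """Flag records whose reach crosses the illustrative disclosure threshold."""
        return self.audience_reached >= REACH_DISCLOSURE_THRESHOLD


if __name__ == "__main__":
    record = AdDisclosureRecord(
        sponsor_name="Citizens for Example",
        beneficial_owner=None,
        spend_usd=750.00,
        first_run=date(2020, 1, 10),
        last_run=date(2020, 1, 31),
        targeting_criteria=["state: WI", "age 30-49"],
        audience_reached=42_000,
        paid=True,
    )
    print(record.requires_public_reporting())  # True: reach exceeds the threshold
```

Records of this kind could be published by platforms directly or, for more sensitive items, made available only to independent auditors.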

Noise Reduction Via Friction

Alongside new forms of systemic transparency, other changes to system design are needed to promote signal over noise. Of course, investing in and promoting fact-based journalism is important to boosting signal. Changes to platform moderation, amplification, and transparency policies can help to depress noise. But ultimately, it is the individual who must identify signal; communications systems can only be designed to assist the exercise of cognitive autonomy. I suggest that communicative friction is a design feature to support cognitive autonomy. Indeed, one way to see analog-era transparency requirements is as messaging ballast—cognitive speed bumps of sorts. Slow media, like slow food, may deliver sociopolitical benefits that compensate for efficiency losses. What might such speed bumps look like in the digital realm? This section briefly characterizes the shift to frictionless digital communications and concludes with some ideas for strategically increasing friction in information flows to benefit information fidelity.

From Analog-Era Friction to Digital Frictionlessness

The analog world was naturally frictive in the delivery of information and production of salience. Sources of friction were varied, including barriers to entry to production and distribution, as well as inefficient markets. It was costly to sponsor a message and to distribute content on electronic media. And it was a “drag”—as in, full of friction—for an individual to circulate content, requiring as it did access to relatively scarce distribution media. Friction protected markets for legacy media companies. This was undesirable in all kinds of ways. But one of the benefits was that these companies invested in high-cost journalism and policed disinformation.

Friction was built into the analog-era business models and technology, some of which was discussed earlier. Relatively meager (by comparison to digital) content offerings were bundled for mass consumption and therefore were imperfectly tailored to individual preferences. By dint of this bundling in channels, networks, and newspapers, advertisers ended up supporting high-fidelity information along with reporting on popular topics like sports and entertainment. Content scarcity, crude market segmentation, and imperfect targeting of advertising support all served as impediments to the most efficient matching of taste and message; technological friction impeded virality. Analog communications system inefficiencies and limitations did not necessarily promote information fidelity. After all, both information and disinformation campaigns, truth-tellers and liars, would have to overcome obstacles to persuasion. But the friction slowed message transmission to allow for rational consideration. Research on polarization suggests that when people have more time for deliberation, they tend to think more freely and resist misleading messaging.

Some of the friction in analog media was regulatory, including the message-level sponsorship disclosure requirements described above. A message that says “I’m Sally Candidate, and I approved this ad” forces the listener to stop before fully processing the ad to consider its meaning. It is a flag on the field, stalling the flow of information between message and mind. That disclosures have the effect of cluttering speech is a knock against them in the literature on transparency policy. Listeners may be so overloaded with information that they don’t heed the disclosures. Their minds may not be open to hearing whatever it is the disclosure wants them to know. It is nevertheless possible that disclosures can function as salubrious friction, simply by flashing a warning. In their paper on online manipulation, Daniel Susser and co-authors note that disclosures serve just such a function, encouraging “individuals to slow down, reflect on, and make more informed decisions.”

Digital platforms dismantle cognitive checkpoints along with other obstacles to information flows. For the engineer, friction is “any sort of irritating obstacle” to be overcome. This engineering mindset converged with democratic hopes for an open internet to produce a vision of better information fidelity. For example, by tearing down barriers to entry, digital could amplify “We the Media,” to cite Dan Gillmor’s 2004 book of the same name. Decentralized media authority, it was hoped, would reveal truths through distributed networks, leading to a kind of collaborative “self-righting.” Building on his earlier work on networked peer production, Yochai Benkler conceptualized a “networked Fourth Estate” that took on the watchdog function of the legacy press. Reduced communicative friction did open opportunities for the voiceless. But the optimism of the early 2010s did not account for the collapse of legacy media as a source of signal or for how commercial platforms would amplify noise. Citizen journalists might take advantage of frictionless communications, but not nearly to the same degree as malicious actors and market players, whose objectives were very different.

New Frictions

Digital enclosure seals communicators in feedback loops of data that are harvested from attention and then used to deliver content back to data subjects in an endless scroll. Platforms have bulldozed the sources of friction that were able to disrupt the loop. When 20th century highway builders bulldozed neighborhoods to foster frictionless travel, place-making urbanists like Jane Jacobs articulated how the collision of different uses—something many planners considered inefficient—improves communities. The sociologist Richard Sennett used “friction” to describe aspects of this urban phenomenon, which he viewed favorably. In communications as in urbanism, a certain degree of friction can disrupt the most efficient matching of message and mind in ways that promote wellbeing. Specifically, new frictions can promote information fidelity. Indeed, given the First Amendment limitations on any regulatory response to noisy communications, the introduction of content-neutral frictions may be one of the very few regulatory interventions that are consistent with American free speech traditions.

The use of friction is already both a public policy and a private management strategy in the digital realm. Paul Ohm and Jonathan Frankle have explored digital systems that implement inefficient solutions to advance non-efficiency values—what they term desirable inefficiency. The platforms themselves are voluntarily moving to implement frictive solutions. For example, WhatsApp decided in 2019 to limit bulk message forwarding so as to reduce the harms caused by the frictionless sharing of disinformation. The limit imposes higher cognitive and logistical burdens on those who would amplify the noise. At the extreme, friction becomes prohibition, which is one way to think about Twitter’s decision to reject political advertising because it did not want to, or believed it could not, reduce the noise.

Forms of friction that could enhance information fidelity and cognitive autonomy include communication delays, virality disruptors, and taxes.

Communication Delays. The columnist Farhad Manjoo has written, “If I were king of the internet, I would impose an ironclad rule: No one is allowed to share any piece of content without waiting a day to think it over.” He assumes that people will incline toward information fidelity if encouraged to exercise cognitive autonomy. This intuition is supported by research showing that individuals are more likely to resist manipulative communications when they have the mental space and inclination to raise cognitive defenses. Are there ways to systematize this sort of “pause” to cue consideration? Other examples of intentional communications delays adopted as sources of felicitous friction suggest that there are. For reasons of quality control, for example, broadcasters have imposed a short delay (usually five to seven seconds) in the transmission of live broadcasts. Frictionless communication, when it is only selectively available, can reduce faith in markets. For this reason, the IEX stock exchange runs all trades through extra cable so that more proximate traders have no communications advantage, thereby protecting faith in the integrity of its market.

As discussed above, platforms deploy dark patterns to spike engagement. Businesses routinely ask, “Are you sure you want to unsubscribe?” It should be possible for platforms to use these techniques to slow down communications: “Are you sure you want to share this?” Senator Josh Hawley’s proposed Social Media Addiction Reduction Technology Act would require platforms to slow down speech transmission as a matter of law. The Act would make it unlawful for a “social media company” to deploy an “infinite scroll or auto refill,” among other techniques that blow past the “natural stopping points” for content consumption. While the bill has problems of conception and execution, it touches on some of the ways that platforms might be redesigned with friction to enhance cognitive autonomy. Commentators have suggested other ways that Congress could deter platform practices that subvert individual choice.
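
To illustrate, here is a minimal sketch, in Python, of how a share-confirmation prompt and a cooling-off delay might be wired together. The function names, the prompt wording, and the 24-hour window (an echo of Manjoo’s “wait a day” rule) are hypothetical; this is not a description of any platform’s actual design.

```python
# Hypothetical sketch of a "pause before sharing" gate: an explicit confirmation
# prompt plus a cooling-off delay. The window and wording are illustrative; no
# platform's actual interface is being modeled.
from datetime import datetime, timedelta
from typing import Optional

COOLING_OFF = timedelta(hours=24)  # "wait a day to think it over," as an illustrative default


def confirm_share(prompt: str = "Are you sure you want to share this?") -> bool:
    """Interpose an explicit confirmation step before amplification."""
    return input(f"{prompt} (y/n) ").strip().lower() == "y"


def may_share(first_viewed_at: datetime, now: Optional[datetime] = None) -> bool:
    """Allow sharing only after the cooling-off window has elapsed."""
    now = now or datetime.now()
    return now - first_viewed_at >= COOLING_OFF


if __name__ == "__main__":
    seen = datetime.now() - timedelta(hours=2)  # content first viewed two hours ago
    if may_share(seen):
        if confirm_share():
            print("Shared.")
    else:
        print("Held for reflection: the cooling-off period has not yet elapsed.")
```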

Virality Disruptors. Many forms of noise overwhelm signal only at scale, when the communications go viral. One way to deal with virality is to impose a duty on platforms to disrupt traffic at a certain threshold of circulation. At that point, human review would be required to assess the communication for compliance with applicable laws and platform standards. Pausing waves of virality could stem disinformation, deepfakes, bot-generated speech, and other categories of information especially likely to manipulate listeners. The disruption itself, combined with the opportunity to moderate the content or remove it, could reduce the salience of low-fidelity communication. Another approach is something like the sharing limit that WhatsApp imposed to increase friction around amplification. Substitute volatility for virality, and it’s easy to see how the U.S. Securities and Exchange Commission reserves to itself friction-creating powers. At a certain threshold of volatility in financial markets, it will curb trading to prevent market panic, in effect imposing a trip wire to stop information flows likely to overwhelm cognitive checks. The New York Stock Exchange adopted these circuit-breakers in reaction to the 1987 market crash caused by high-volatility trading. Other countries quickly followed suit to impose friction on algorithmic trading when it moves so fast as to threaten precipitous market drops. The purpose of these circuit-breakers, in the view of the New York Stock Exchange, is to give investors “time to assimilate incoming information and the ability to make informed choices during periods of high market volatility.” That is, the purpose is expressly to create space for the exercise of cognitive autonomy.
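
A virality disruptor of this sort can be pictured as a trip wire: once an item’s circulation velocity crosses a threshold, further algorithmic promotion pauses and the item is queued for human review. The sketch below, in Python, is a hypothetical illustration with invented threshold values, not a model of any platform’s or regulator’s actual system.

```python
# Hypothetical sketch of a virality "circuit breaker": once an item's shares per
# hour cross a trip-wire threshold, algorithmic promotion is paused and the item
# is queued for human review. Threshold values are invented for illustration.
from dataclasses import dataclass, field
from typing import List

SHARES_PER_HOUR_TRIP_WIRE = 5_000  # illustrative circulation-velocity threshold


@dataclass
class ContentItem:
    item_id: str
    shares_last_hour: int
    amplification_paused: bool = False


@dataclass
class CircuitBreaker:
    review_queue: List[str] = field(default_factory=list)

    def check(self, item: ContentItem) -> None:
        """Pause promotion and queue the item for human review when its
        circulation velocity crosses the trip wire."""
        if not item.amplification_paused and item.shares_last_hour >= SHARES_PER_HOUR_TRIP_WIRE:
            item.amplification_paused = True
            self.review_queue.append(item.item_id)


if __name__ == "__main__":
    breaker = CircuitBreaker()
    post = ContentItem(item_id="post-123", shares_last_hour=7_200)
    breaker.check(post)
    print(post.amplification_paused, breaker.review_queue)  # True ['post-123']
```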

Taxes. Taxes are also sources of friction that can be deployed to disincentivize business practices that boost noise over signal. Tal Zarsky has called data the “fuel” powering online manipulation. If so, a tax on data could aid in resistance to manipulation. There are a number of nascent proposals to put a price on exploitative data practices. One possibility, for example, would be to impose a “pollution tax” on platform data sharing. Another is to impose a transaction tax on platform advertising. These kinds of taxes would begin to make companies internalize the costs of exploitative data practices. If set at the right level, they could steer platforms and online information providers away from advertising models that monetize attention and finance the noisy digital undertow. Taxes would have the additional benefit of raising revenue that could be used to support signal-producing journalism, resulting in higher-fidelity speech.

Conclusion

It is long overdue that media transparency requirements from the analog world be adapted for digital platforms. Informing listeners about who is speaking to them—whether candidate, company, or bot—helps them to make sense of messages and discern signal from noise. But this kind of message-level transparency will not suffice either to protect cognitive autonomy or to promote information fidelity in the digital world. The sources of manipulation and misinformation often lie deeper in digital flows. By serving up content to optimize time spent on the platform and segment audiences for advertisers, at a volume and velocity that overwhelms cognitive defenses, digital platform design prioritizes content without regard to values of truth, exposure to difference, or democratic discourse. The algorithmic production of meaning hides not only who is speaking but also who is being spoken to. To really increase the transparency of communications in digital flows, interventions should focus on system-level reach and amplification, along with message-level authorship. Research suggests that transparency may have limited impact, especially in light of the volume and velocity of speech. Thus, in addition to transparency, policymakers and platform designers should consider introducing forms of friction to disrupt the production of noise in a way that respects First Amendment traditions. These could include communications delays, virality disruptors, and taxes.

 


 

© 2020, Ellen P. Goodman.

 

Cite as: Ellen P. Goodman, Digital Information Fidelity and Friction, 20-05 Knight First Amend. Inst. (Feb. 26, 2020), https://knightcolumbia.org/content/digital-fidelity-and-friction [https://perma.cc/97BC-HA5G].