The Purpose of this Report
It is time for greater action toward improving social media.
This report, Improving Social Media: The People, Organizations and Ideas for
a Better Tech Future, is directed at policymakers and social media platforms
with the express purpose of creating a more holistic, collective approach
to improving social media. It also foregrounds the diverse ecosystem that
affects and is affected by social media and aims to promote a culture of
knowledge-sharing and collaboration that is necessary for this proactive
approach.
Social media dramatically impacts our future. This report will help us move
toward co-creating a tech future that we want to live in.
Interlocking Roles (p. 7)
Overview of the Issues (pp. 12-14)
In Summary (p. 124)
"The conundrum is not exactly whether platforms moderate too much or too little. It is more about
whether platform policies around moderation are guided by the kinds of considerations that social
media companies operating in democratic, decidedly anti-racist societies should prioritize."
-Oumou Ly, Staff Fellow, Berkman Klein Center for Internet and Society
"While I'd much rather use the carrot, there are times the stick is definitely needed. And, as expressed
through our commitment to safety by design, we absolutely believe that industry has to do better in
making their platforms safer, more secure and that they need to be both more transparent and
accountable for harms that take place on their platforms."
-Julie Inman-Grant, eSafety Commissioner of Australia
"This cannot just be left to technologists to solve and lawyers to debate. Social media has a
profound impact on all of society, and we are all stakeholders in the ultimate solutions."
-Yael Eisenstat, Democracy activist; former Facebook elections integrity head; former diplomat,
intel officer and White House advisor
It is quite clear that there is a problem; what has been more difficult, however, has
been the transition from this "awareness stage" to the far more challenging stage of
working out solutions and improvements. This involves a far greater degree of
participation across a diverse range of groups to ensure that we are truly co-
creating our tech future. This also involves approaching the issues of social media in
a more collective, holistic fashion.
I hope that you find our report on Improving Social Media to be a valuable resource.
Our aim is to provide a more thorough understanding of the complex issues facing
social media, while highlighting a range of ideas for improving social media and
showcasing a variety of passionate people and organizations working toward this
goal. By involving such a wide range of individuals and organizations to make this
report, our intention is also to promote the knowledge-sharing and collaboration
that is necessary in order to tackle such an intractable problem. We need more
voices, more perspectives, more participation.
If you are new to our organization, welcome. The mission of All Tech Is Human is to
build the Responsible Tech pipeline. I believe that we can improve our tech future by
dramatically changing who is involved in it: making sure that the pipeline is diverse,
multidisciplinary, and aligned with the public interest. Our organization has been
actively building a broad community across civil society, government, and industry
since 2018. It's this type of diversity and collaboration that seems utterly necessary
as we work toward improving social media.
KNOWLEDGE BASE
To inform and influence every aspect of the overall social media ecosystem;
develop a culture of knowledge-sharing and collaboration to increase overall
quality and ability to effect change; consisting of researchers, academics,
advocates, and activists.
TECH WORKERS
To build and maintain awareness of ethical considerations in technology;
enrich the definition of product success to include user wellbeing; utilize an
ethics-by-design approach to proactively plan for ethics within the entire
product development lifecycle to shape better design decisions.
POLICYMAKERS
To recognize that social media is a dynamic landscape and will require
ongoing monitoring and regulatory oversight/guidance; consider the differing
needs and experiences within the population; connect multiple stakeholders
in creating legislation and regulation.
USERS
To be educated on the basic structural elements of social media and tech design;
embrace the power they have to effect change online while also learning about
the ways in which their power is mediated through platforms.
ADVERTISERS
To hold platforms accountable as business partners and through monetary
pressure; have brands advocate for and ally with their consumers.
NEWS MEDIA
To offer accountability through journalism that educates the general public and
adequately exposes the mechanisms of social media.
FUNDERS
To vet startups for sound privacy and safety practices and features; alter the
future of social media through their investment decisions.
ORGANIZATIONS
interviewees and collaborators.
IDEAS
instead appreciate how it is intertwined with
messy social, economic, and historical
underpinnings.
At its root, the fundamental business model of these platforms is built on generating advertising
revenue from user data and attention—enabling targeted advertising at a large scale. Labeled
“surveillance capitalism,” this system harvests and leverages personal user data for precision
marketing, maximal engagement, and profit. In a Faustian bargain, users give up information about
themselves and their viewing preferences in exchange for “free” access to the content and
connections these platforms provide. In turn, the platforms use our engagement patterns to
determine what we will see.
On social media, our natural tendency to click and share content that triggers intense emotions,
reaffirms our preconceptions and cognitive biases, and signals our group affiliations is coupled with a
business model of opaque algorithms designed to learn from these choices and continuously feed us
what they think we want to see. The outcome is a feedback loop that fragments online spaces into
silos of like-minded content shaped by user profiles and choices.
Designed to optimize for attention and sharing, our digital public spaces have become dominated by
emotion-triggering content, affiliation and “in group” signaling, knee-jerk reactions, and outrage. The
fact that this content is not limited to personal or entertainment posts, but also includes posts
intended to inform or mislead, further exacerbates the problem. By optimizing for clicks and
sharing, platforms are doing more than simply “giving us what we want”—they are, by their very
design, fostering an environment that incentivizes the spread of "junk food" informational content
over potentially less engaging but more “nutritious” or healthy information.
Ultimately, social media platforms are not the neutral and value-free channels for content that they
have often been positioned as. The algorithmic processes that determine what we see are not
transparent, and users have little insight into how or why they are served the content they receive,
how it differs from what appears on someone else’s screen, or how their digital data is being used.
In the end, these two key elements—platform business models and human nature—work together to
create the flawed and painfully polarized social and informational digital ecosystem we see today.
Grid Approach
...leveraging human rights principles can help both governments and companies balance freedom of expression with user protection.
(Grid: scenario considerations mapped across Platforms, Users, Knowledge Base, Tech Workers, and Policymakers)

Another potentially effective solution is to outsource content policy, moderation and oversight to expert third parties or "social media councils." These multi-stakeholder councils should include input from government, industry, and civil society members to help shape the difficult content moderation decisions necessary for fostering healthy online spaces.
CONNECTED ISSUES
Accessibility
Affinity Groups
Age Gates
Anti-Racist Technology
Cultural Intelligence
Data Access for academic researchers
Data Literacy
Data Minimization
Data Transparency
Digital Citizenship (aka Cyber Citizenship)
Digital Colonialism
Digital Divide
Digital Human Rights
Diversity, Equity, Inclusion, Belonging
Ethics by Design
Ethically Aligned Design
Freedom of Expression
Friction
Human-Centered Design
Inclusive Design
Intersectionality
Privacy by Design
Power Asymmetries
Safety by Design
Surveillance Economy
Stakeholder Engagement
Workers' Rights in the Digital Economy
IMPROVING SOCIAL MEDIA | 14
Grid Approach
PLATFORMS: Partner with experts to optimize algorithmic detection & removal; employ or contract with media literacy experts.

USERS: Demand universal media literacy education; recognize their role in influencing and being influenced by the information ecosystem; demand greater political movement.

KNOWLEDGE BASE: Engage in research on effective media literacy education; increase research around effective ways to reduce early misinfo and superspreaders.
Community Interviews
Hear from a broad range of leaders about
their role in improving social media
AllTechIsHuman.org | ImprovingSocialMedia.com
Dona Bellow
Responsible Innovation Manager, Facebook

Tell us about your role:

I closely partner with product teams across the Facebook family of apps to help surface and address potential negative impacts to society in all that we build, early in the development process. My team develops frameworks and methodologies to help teams build responsibly.

Tell us about your career path and how it led you to your work's focus:

I have a legal background and completed my education in International Human Rights Law in France. I started working in tech about 8 years ago, at Google, where I joined the Legal Online Operations team, which managed legal requests for content removal and operationalized Google's internal legal policies. During my time there, I co-developed a program supporting product teams in surfacing and mitigating abuse-related risks through integrated anti-abuse systems. Following that experience, I worked at Airbnb and Twitter in various policy-related roles; however, what led me to join was seeing that there is little opportunity to influence how things are built—I wanted to support teams in considering risks at the beginning of their process and building mitigations directly into the product.

In your opinion, what are the biggest issues facing social media?

I am excited about the growing focus on product equity and building social media experiences on the basis of our most vulnerable populations. I heard someone mention before, "If you manage to build a social experience that supports the user needs, safety and privacy of a Black trans disabled woman, you've created an experience that will benefit the entire population."

Other interesting ideas: rethinking the role and scope of social media platforms and whether they should be more centered around small communities; exploring decentralized autonomous orgs and moderation models; experimenting with non-ad-based business models; and implementing usage limits to prompt people to disconnect.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
I wonder if it is even possible to single out a "main" area of improvement across these issues, because of their
interdependency. For example, increased digital literacy for internet citizens does not exist in isolation of platforms'
responsibility to provide accessible tools, transparent use and controls. Meanwhile, citizens also have a role in defining
socially acceptable outcomes and demanding that their governments implement structural social changes that could
support better digital norms. None of these groups, on their own, hold the keys to "fixing" social media and the internet,
so we need to establish what a universal system of accountability would look like.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
I think we can all agree that having as multidisciplinary an approach as possible is the way to go. Since I have been in this
industry, I have worked with such a wide array of professionals (from veterans to former teachers to economics experts,
etc.). For my part, I would be encouraged to see more people involved with a deep and practical understanding of human
psychology and the ways to support our most vulnerable populations — maybe more therapists, social workers.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
I may be a cynic in thinking that this may remain a conundrum forever, or at least for as long as there isn't a universal
agreement on what "too much or too little" means. There is an expectation, there, for platforms to be the arbiter of right
vs wrong, when those very notions are not agreed upon across our communities and global societies, which creates a
"losing side" for every one of those complex policy enforcement decisions. Where I think platforms have an opportunity
is in establishing further transparency on what the rules are, how enforcement happens, and being prepared to evolve
those rules as social norms are shifting: collaborating with governments to establish standards on these parameters may
be a good approach to this problem.
What makes you optimistic that we, as a society, will be able to improve social media?
What I am observing is that we're starting to shift social norms in terms of how much we expect from our communities
and governments, as well as the corporations whose products impact us on a day-to-day basis. I do think that people are
slowly building a better understanding of how their information is collected and used on social media, and what that
means is that they are also building new expectations, demanding more transparency and establishing informal channels
for accountability. This mounting social pressure is coming from all of us, including the professionals who research, build
and create rules for this technology, and I am very encouraged by all the conversations I see happening in industry. The
progressive expansion of these conversations to multidisciplinary groups, impacted stakeholders and culturally diverse
voices is what makes me optimistic.
Rana Sarkar
Tell us about your career path and how it led you to your work's focus:

...way. Antitrust, competition, privacy and security authorities worldwide have started to act. Tech companies are also beginning to lose the faith of their employee and user bases, which is where it really hurts.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
There is wide agreement that there is no silver bullet, here – but perhaps silver buckshot. I am encouraged by the efforts
of social media platforms to finally restrict or slow the spread of misinformation and disinformation. Building in features
to the platforms takes some of the pressure off users, while also encouraging more thoughtful discourse online. Product
innovation away from “enrage-and-engage” to pro-social forms is also good news. Consumers, employees and
governments are all pushing in this direction. Governments are getting better at tracking, naming and eventually pricing
harms; for instance, there is terrific work being done by Global Affairs Canada’s Digital Inclusion Lab. The lab’s most
recent work consists of social media data analysis that reveals how hate and discrimination are being deployed against
targeted groups online. Governments, regulators and activists are learning to act together and across their own silos to
granulate and create the scale effects necessary to incent industry behavior. There is huge pent-up consumer demand for
better, which we can never forget, and if incumbents do not fill it, others will.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
There are a number of Canadian organizations working to improve social media and online platform governance.
The Citizen Lab at the University of Toronto operates at the intersection of tech, human rights and global security and is
a global leader in conducting research on digital espionage, internet filtering and the impact on freedom of expression
online and privacy, security and information controls on applications.
The Centre for International Governance Innovation, also in the research field, is a think tank that has put a renewed
focus on pressing digital and technology issues such as platform governance, internet governance and big data.
Another organization to watch is the Centre for Media Technology and Democracy at McGill University, which has
remarkable researchers bridging the Canadian and global conversations.
In the non-profit space, OpenMedia aims to maintain a safe, free and open internet, while MediaSmarts helps children
and youth develop critical thinking skills to engage with the media through digital literacy programs. These organizations
operate at the community level to inform Canadians of all ages about the role of digital technologies in society and
actions that the government can take to improve internet and social media standards.
How does social media look different five years from now?
Ad-supported data extractive businesses won’t fade easily and will remain the focus of scrutiny from governments,
consumers and employees. But I suspect we’ll see the growth of additional small, niche and decentralized platforms.
Blockchain and ledger systems have done a great job of embedding accountability in some burgeoning social platforms
and marketplaces, but have yet to see scalable social use cases.
In the next five years, we can expect to see a further consumer migration to video and audio rather than text-based
platforms. These will provide even closer and more intimate communication between users and growing “intentional
communities.” Clubhouse is already part of this trend, and we’ll likely see other innovative products in this space,
including the long-promised AR and properly digitally native applications. Given pervasive social media exhaustion,
particularly with younger users, I expect to see a spate of new ventures focused on more algorithmically “pro human”
platforms that amplify strengths rather than weaknesses of cognition. Gaming might lead the way here, with business
models based on subscription and tokening versus ads. This also reflects a step change in digital norms. Additionally, I
expect to see more variety globally, given the splintering of digital norms.
Oumou Ly
Staff Fellow, Berkman Klein Center for Internet and Society

...mirror the ways we've seen it play out in offline fora.

The Center on Media, Politics and Public Policy at the Harvard Kennedy School does insightful and incisive writing and analysis on these issues.
What do you see as the risk of doing nothing to address the shortcomings of social media?
It is hard to overstate how harmful doing nothing would be. First, disinformation is a major harmful side effect of social
media at its current scale. As the January 6 attacks on the Capitol showed, disinformation has a massive,
incontrovertible, corrosive impact on democracy. Moreover, when disinformation takes hold in a democratic society, it
works to both expose and accelerate the decay of democratic institutions. In this way, to do nothing is to accelerate
major democratic collapse.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
Several: computer scientists and data design engineers, statisticians, information scientists, foreign policy and
international relations experts, psychologists, educators, political philosophers, policy experts and sociologists all belong
at the table, to name a few.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
The conundrum is not exactly whether platforms moderate too much or too little. It is more about whether platform
policies around moderation are guided by the kinds of considerations that social media companies operating in
democratic, decidedly anti-racist societies should prioritize. It is about how often or not often platforms make important
moderation decisions informed by these considerations. At this time, in the United States at least, moderation decisions
are not guided by these considerations to a sufficient extent. One difficulty with changing the current state of play is that
while we (all) tacitly agree that the responsibility of taking moderation action should remain the domain of platforms
(and not the government), platforms often moderate in a way that minimizes their exposure to certain risk, and this does
not always yield moderation outcomes that protect democratic interests. It is critical that our thinking on this question
takes into account the structural factors that create these kinds of trade-offs.
What makes you optimistic that we, as a society, will be able to improve social media?
I have been heartened by the significant academic and civil society efforts toward workable solutions, as well as their
readiness to so vocally hold the powerful accountable when they've failed to defend democratic interests. I know that,
when mobilized effectively, efforts of this kind have the potential to effect sweeping change in both government and in
private industry. What makes me optimistic is the groundswell of time and expertise that's been dedicated to creating a
more equitable internet. Those efforts, given time, will translate to social media in particular.
Yael Eisenstat
Democracy activist; former Facebook elections integrity head; former diplomat, intel officer and White House advisor

...here at home as the biggest threat to democracy. I became increasingly concerned with how the Internet was contributing to political polarization, hate and division. I set out to both publicly sound alarm bells and to see what role I could play in helping reverse this course.

This led me to Facebook, where I was hired to head the company's new Global Elections Integrity Operations team for political advertising. Realizing I was not going to change the company from within, I am now a public advocate for transparency and accountability in tech, particularly where the real-world consequences affect democracy and societies around the world.
In your opinion, what area do you think needs the most improvement?
Every one of these is part of the larger puzzle. There is no one magical solution – we need a whole-of-society approach. I
focus on the government's role in defining responsibility, accountability for the externalities and threats to society
caused by current social media business models. This goes hand-in-hand with civic education, media literacy, public
awareness and healthier media in general.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
Civil Rights leaders, academics, journalists, advertisers, legislators, employees and activists all play a critical role in this
movement. Many organizations help educate the public, raise awareness and push the government to step up and
address these issues. Every one of these voices is important. This cannot just be left to technologists to solve and lawyers
to debate. Social media has a profound impact on all of society, and we are all stakeholders in the ultimate solutions.
What do you see as the risk of doing nothing to address the shortcomings of social media?
We have already seen the risks that I (and so many others) have been trying to highlight for years play out: People are
using social media tools, exactly as they were designed, to sow division, hatred and distrust. We saw where that can lead
when followers of conspiracy theories tried to launch an insurrection at the U.S. Capitol.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
This cannot be left to just technologists to fix. In addition to the need for racial, socio-economic, religious and geographic
diversity, this will require true diversity of thought, experience and background to fix. If we desire to create a healthier,
more equitable information ecosystem, the people who are most affected by the negative side of social media must be
incorporated into the decision-making processes moving forward.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
I do not believe the government should be regulating what speech is ok and what speech should be taken down, except
where it breaks the law. But I do think government should figure out how to regulate the tools the platforms use (and sell
to advertisers) for curating, recommending, amplifying and targeting. And that comes down to the fact that there is no
transparency into how those tools work. By insisting on real transparency around what these recommendation engines
are doing, how the curation, amplification, and targeting are happening, we could separate the idea that Facebook
shouldn’t be responsible for what a user posts from their responsibility for how their own tools treat that content. I want
us to hold the companies accountable not for the fact that someone posts misinformation or extreme rhetoric, but for
how their recommendation engines spread it, how their algorithms steer people towards it, and how their tools are used
to target people with it.
Anita Williams
Graduate Researcher / Centre for Data Ethics & Innovation

Tell us about your role:

I am currently an MPhil in Technology Policy student, which is an interdisciplinary postgraduate program within the Cambridge Judge Business School. The program imparts cost-benefit, econometric and ethical analysis tools to design effective policies for unregulated and/or emerging technologies. I have recently joined the AI Assurance team at the Centre for Data Ethics & Innovation, alongside colleagues from my program, to conduct research on AI standards development. As national governments, international organizations and private firms move beyond ethical AI principles and toward the implementation of ethical AI, standards are being discussed as an important way to achieve consensus across borders, and the role of standards in AI governance will unquestionably help create a common language and way of operating for governments around the world who wish to maximize the opportunities of AI while minimizing the risks it poses to society.

Tell us about your career path and how it led you to your work's focus:

...in the Global Affairs department as a Legal Specialist. There I worked on platform abuse protection initiatives such as child sexual abuse investigations, elections, advertising transparency and counterfeit operations. I began my career with a keen interest in human trafficking prevention and obtained my BA in Justice and Peace Studies from Georgetown University with an emphasis on labor and sexual exploitation. My professional entry into the space of online counter-abuse policy enforcement has since elevated the scope of my career focus to policy regulation and design.

How do we ensure safety, privacy and freedom of expression all at the same time?

Research is increasingly uncovering the fact that there is no absolute way to deal with online harms, and that only well-reasoned trade-offs between privacy and security will define how online platforms treat their users' data and voice. I believe a fundamental digital bill of rights is necessary to ensure the safety, privacy and freedom of expression of all users. The work by the Liberal Democrats, titled 'Creating a Digital Bill of Rights: Why do we need it, and what should we include?', addresses these central trade-offs in a measurable and governable framework.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
From internal tech company hiring processes and retention programs to the very products and services they design for
use in global markets, there is a dire need for intersectionality and gender studies professionals in product design, policy
and improvement initiatives. I have worked directly with product and process users who have provided feedback on the
lack of consideration of their unique communication styles and needs – needs that academics with these areas of
expertise would be best equipped to design a new system to address. The Leverhulme Center for the Future of
Intelligence is doing phenomenal work with their AI Narratives and Justice program, working to highlight this 'pain point'
within companies and within greater cultural imaginations of AI.
Note: The views expressed above represent the personal views of Anita Williams and do not represent the official
views of the Centre for Data Ethics & Innovation.
Ethan Zuckerman

...be best worked on not by tweaking the existing system but by building new systems that are much smaller and governed by the communities that use them.

When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage with platforms. In your opinion, what area do you think needs the most improvement?

Most of the talk about media literacy is a cop-out: it transfers responsibility from platforms that have tons of resources to throw at making spaces less dangerous and toxic to already overburdened educational systems.
Azmina Dhrodia
Senior Policy Manager, Gender and Data Rights at the Web Foundation

...Head of Operations and Research at Block Party, an early-stage tech startup that solves online harassment by filtering out users more likely to send unwanted or harassing content, and was responsible for building and executing user research and operational strategies for Block Party's alpha and beta products.
voices and opinions we hear online. Experiences of online gender-based violence can also force women offline
completely. Online gender-based violence not only has harmful psychological and economic impacts on women, but the
silencing and censoring of women’s voices online is a threat to democracy and fails to respect their right to freely express
themselves online without fear.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
At the Web Foundation, we’re working to co-design policy and product solutions with tech companies and civil society as
part of the Contract for the Web. Our pilot program focuses on online gender-based violence and The Web Foundation
hosted a series of consultations throughout 2020 and now in 2021, bringing together tech companies, civil society
organizations, researchers, academics and women impacted by online abuse directly to collaborate on the technical,
policy and design challenges that need to be addressed to tackle gender-based violence on social media platforms.
Insights from four multi-stakeholder consultations will inform a series of policy design workshops that will bring
participants from the consultations and tech companies to co-create policy and product solutions to online gender-based
violence using an innovative human-centered approach.
How do we ensure safety, privacy and freedom of expression all at the same time?
Ensuring the safety of users online helps ensure that those who are disproportionately targets of online abuse are able to
freely and safely express themselves on social media without fear of violence and abuse. Ensuring safety and the right to
freedom of expression are not at odds with one another, and we must also shift the narrative to consider the right to free
expression for those voices being silenced and censored because of online gender-based violence and identity-based
attacks. When online abuse is allowed to flourish, especially when we know that some groups are targeted more than
others, it creates a culture where some users' right to free expression is seen as more important than others'. Privacy
settings are one way social media users can keep themselves safer online, but these settings can often be confusing, and
there is a lack of standardised terminology across platforms which makes it difficult to understand how a privacy feature
or safety tool can be used. Accessible and user-friendly privacy and security settings online can offer users an
empowering way to ensure that they are able to curate online experiences that make them feel safe and able to freely
express themselves without fear.
What do you see as the risk of doing nothing to address the shortcomings of social media?
When social media platforms fail women and silence or censor their voices by not adequately dealing with online gender-based violence and abuse on their platforms, we also fail a whole generation of young women whose voices can end up being silenced as a result.
Samantha North
issues facing social media?
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage with platforms. In your opinion, what area do you think needs the most improvement?
A key area to focus on here is getting citizens used to engaging with social media in a healthier way. This could
include teaching people about the tactics trolls use to draw them into hostility online, helping people break out of endless
dopamine loops, and teaching them how to develop more distance from their mobile devices. We also need to educate
social media users about the many cognitive biases involved in platform use.
What do you see as the risk of doing nothing to address the shortcomings of social media?
The side effects of social media have already caused at least five years of damage to the information space and social
cohesion. Trust in institutions is at an all-time low, and many people across different countries are firm believers in
conspiracy theories. As we've seen recently in the US and other places, this can have disastrous results in real life. I do
not want to claim social media is wholly responsible for this, but it certainly has had significant influence.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
Psychology and sociology are critical to include. Any efforts to tackle online influence campaigns would also benefit
from experts in geopolitics, as well as cultural expertise in countries like Russia and China.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
This is an especially tricky challenge for governments in the current environment. Several Western governments have
been instrumental in polarizing their societies. We can look to the Brexit situation for a prime example of this. During the
debate, government officials deliberately used tribal terminology, such as “Brexiteer” and “Remainer,” which then
filtered through to social media, to be wielded by the two tribes against one another. Similar dynamics have been at play
in the United States, particularly during Donald Trump's presidency. To help solve this conundrum, governments need to
be more transparent and work harder at social cohesion, rather than leveraging division for political ends. Polarized
societies are unhealthy societies, prime targets for hostile influence campaigns.
What makes you optimistic that we, as a society, will be able to improve social media?
The cycles of history give me some cause for optimism. Whenever a new medium emerged in the past, it took societies
some time to get used to it. Social media is still in its infancy, and perhaps the human brain needs time to catch up with it.
The people who design platform architecture have a lot of control. I am confident they can figure out ways to make social
media less harmful. After all, it offers a lot of positive factors as well.
Ioanna Noula
internet were already on the horizon (GDPR).
tools and procedures but also the appropriate corporate cultures that do more than just care about not being evil. We are trapped in a whirlpool of unchecked, inescapable services and products we cannot do without, their speedy development and their toxic impact.
Content moderation needs to be resourced and refined in ways that respond to the scope, audience and functionalities of the platform. Language proficiency, cultural awareness, understanding users' intent and offering pathways to redress are some of the areas where AI could potentially help, but it cannot yet be trained to do so. This task is immense; it should be led by public deliberation and cross-sectoral consultation, and it requires ethical corporate cultures characterised by openness and commitment to learning and accountability.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
I have recently been part of a conversation on AI-powered age verification. I was fascinated by the insightful inclusive
approach that was driven by experts from a range of sectors and disciplines and the leadership team's dedication to
children's welfare (YOTI).
How do we ensure safety, privacy and freedom of expression all at the same time?
I do not think I can offer an answer to this. It is a complex philosophical issue that is tied to the nature of democracy. I
think we should make peace with the fact that true democracy is a work in progress and it is based on struggle, dissent
and readiness to protect itself but also change when needed. In this sense, we should be driven by the value of human
dignity and try to draw lines according to the historical circumstances. The War on Terror came together with the
imperative to safeguard citizens' lives. The prioritization of safety came at the expense of our right to privacy. Snowden
exposed a system that had gone rogue and democratic institutions moved to address this.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
I think we need empowered and responsive (democratically elected) governments that can deploy pertinent regulation
and protect citizens. We are also in need of meaningful and critical education that will produce critical and responsible
citizens. Digital literacy and other education initiatives cannot be outsourced. I could not hold a profit-driven social
media company responsible for the lack of education of the general public.
I would, however, expect governments to hold media corporations accountable for not collaborating and sharing
knowledge that will assist governments to identify risks, assess societal impact and educate citizens in meaningful ways. I
cannot expect citizens to understand algorithms and the impact they have on their lives, if governments have no
understanding of how Facebook or Twitter's algorithms work. Democracies and their governments are tasked with
safeguarding, educating and empowering citizens and imagining the future of our society. Knowledge sharing from the
side of social media corporations is needed so that our governments can ask them the right questions, and offer the right
answers to their citizens.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
Our organization, the Internet Commission, is advancing digital responsibility through the evaluation of social media and
other digital organizations. We have recently addressed the contentious topic of organizational decision-making about
online content, conduct and contact. To deliver this, we went through the laborious process of convincing established
organizations to participate and secure a much wider endorsement.
We applied diverse experience of business, public service and academic research to the creation of detailed case studies.
Guided by a detailed evaluation framework, we identified and analyzed the key organizational practices that enable and
shape decisions about online content, contact and conduct.
For the first time, we looked "under the hood" of these organizations, speaking to people involved in making these decisions who are generally unseen and unknown. We also interviewed those on the front line, who can face very challenging conditions.
We documented sophisticated technologies that help to safeguard human moderators and the public, but which can also
amplify harmful stories, reinforce gender and racial biases and shape or limit the spread of ideas.
This accountability exercise was an independently scrutinized proof of concept that proved that auditing and holding
social media companies to account can be delivered.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
I think the dialogue on diversity in tech is moving in the right direction and we are consistently shining new light on the under-representation of ethnic groups, the toxic masculinity cultures, the suppression of voices of dissent that challenge those cultures, the biased training of algorithms and the unaccounted-for audiences. I think we need to include children
and also consider intersectionality of identities that amplify disadvantage. I would want to see children (including
disabled children, children of colour, children in poverty) being part of the discussion at the stage of designing tech and
developing tech regulation. The recent adoption of General Comment 25 by the UN Committee on the Rights of the Child is a pivotal moment for children's rights that recognizes that these rights apply in the digital world. Reading this new General Comment alongside the breakthrough Article 12 of the UNCRC (which stresses the importance of listening to children) spotlights the imperative to include children's voices in the process of designing their digital future.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
I do not think so. I think institutional vigilance is important, and governance should ensure that the right people can
support pertinent legislation and enforcement.
Douglas Rushkoff
itself collapse as a viable society, thanks chiefly to the impact of social media on our capacity to understand or engage in civics.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
I think Mozilla may be looking at this. I do not really see much out there.
What do you see as the risk of doing nothing to address the shortcomings of social media?
The greatest risk? We all die. An insane population cannot govern itself or address collective problems.
What models do you see coming online for providing a digital community (beyond today’s ad-based, extraction
model) – platform cooperatives? Decentralized Autonomous Organizations (DAOs)? For example, are there promising
applications for the blockchain?
Sure, DAOs could work. Most are just for building platforms, or alternative social networks, or versions of Facebook that
are less this or that. I think the open web and TOR type things are interesting. Platform cooperatives are certainly better
than platform monopolies. But how many businesses really need to be operating as decentralized, non-local networks?
Most of us should be working in local cottage industries. Maybe use the networks for B2B connectivity between the
cottage industries. That’s called anarcho-syndicalism.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
Liberal arts, civics, economics, ethics…. everything that’s not included in STEM.
There is a concern that the focus on improving social media is overly US-centric. Can you point to stellar work
happening outside the US?
I cannot point to any stellar work that’s happening in the US. Who is focused on improving social media in the US? I like
what they’re doing at Enspiral in New Zealand.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
All you have to do is break up the monopolies, so that smaller communities can make their own decisions.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
Algorithms optimize for extraction of user data or for engagement/addiction. If we tuned the algorithms for something
other than mental illness, they might not make people as sick.
What makes you optimistic that we, as a society, will be able to improve social media?
Dude—you have the cart and horse reversed. I do not want society to focus on improving social media. I want social
media to focus on improving society. I am optimistic about those who have given up on social media. All media is social.
The platforms you’re calling “social media” are the least social media ever developed. As people realize this, and turn to
real life and other forms of media to accomplish social ends, we can improve society.
Soraya Chemaly
Trusting Women Can Change The World.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage with platforms. In your opinion, what area do you think needs the most improvement?
The areas that need the most improvement are those that reside at the intersections between these sectors. Each has its
own areas of focus and addresses different dimensions of complex problems. Where concerns overlap, we find critical
expressions of cultural values that social media companies minimize in a cultural preference for tech fixes. Social media is
a sociotechnical system and requires sociotechnical understandings, at every level.
What do you see as the risk of doing nothing to address the shortcomings of social media?
Deeper social distrust, conservative and deeply authoritarian retrenchment, violent political disruptions.
How does social media look different five years from now?
It will be more deeply embedded in the Internet of Things and in our bodies.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
I do not believe governments can solve this problem, particularly since social media companies benefit and profit from operating transnationally and in proto-governmental ways.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
While the role that algorithms play in individuals' experience is significant, I believe that the impacts of algorithms,
systemically and at-scale are far more worrying. Racism, sexism and caste-enforcement operate invisibly and powerfully
to reproduce and empower long-standing discrimination. Individuals may never see or personally experience or realize
that they are subject to these effects.
What makes you optimistic that we, as a society, will be able to improve social media?
Steven Renderos
community radio station in Minneapolis, Minnesota.
the 1990s when “If it bleeds it leads” dictated editorial decisions, social media platforms reward the most sensational and
hateful content. This has created the conditions for disinformation and misinformation to spread, which has led to the
mainstreaming of fringe conspiracies like QAnon. Prior to social media, white supremacists were smaller in ranks and disconnected. In the social media age, white supremacy has grown as fast as a Facebook algorithm will allow. Social media has been a tool used to mainstream their beliefs and incite violence that has caused offline harms. We saw that during the summer of 2020 with the violence perpetrated by white supremacist groups as a way to co-opt protests in response to the killing of George Floyd.
Lastly, these companies have been left to regulate themselves, as policy making has not caught up to the threats these
platforms pose.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
There is no singular solution that will do the greatest good, but there are a multitude of solutions that together can start
to tackle the many complicated issues these platforms pose to our society.
First, solutions should tackle the business model of social media companies. Compel transparency around the algorithms
these companies use and adopt civil and human rights protections to ensure the most vulnerable users in the US and
beyond are safe. Create limits to the data that can be collected and prevent data from being shared with third-party vendors. And transition ownership of that data to the individual it belongs to.
Second, break up the monopoly power these companies hold. It is not just that Facebook also owns Instagram (which is a direct competitor) but that, alongside Google, these two companies account for the vast majority of growth in the digital market space. These companies should be broken up.
And the last solution I have heard about is taking the social media approach to a smaller scale to tackle the infodemic. I have heard of hyper-local social media networks at a city level with strong curation rules that attempt to engage people in healthy dialogue over the issues facing their communities. A key part of the solution to addressing misinformation is strengthening where people get their news. Hyper-local networks where people rely on their neighbors are a start, and so is revitalizing journalism.
How do we ensure safety, privacy and freedom of expression all at the same time?
Balancing safety, privacy, and freedom of expression begins with acknowledging who historically and currently does not
enjoy those protections on social media. According to Pew Research, 1 in 4 Black people in the U.S. has faced online
harassment because of their race. Women on Twitter have been doxxed by misogynists so routinely that it even
prompted a research project by Amnesty International (“Toxic Twitter”) to study this phenomenon. Arabs and Muslims
have seen social media used to mobilize violent actions directed at mosques.
Meanwhile, for years, the right has tried to co-opt the debate over free speech by claiming that social media platforms
are biased against conservatives. This runs counter to studies that routinely show that right-leaning content performs better on the platforms.
Balancing safety, privacy and freedom of expression should start where the greatest harm is happening. To quote a
recent op-ed by MediaJustice founder Malkia Devich-Cyril, “When an oppressed minority seeks equality and justice, and
freedom from the harm and violence brought on by the systematically privileged speech of others, that’s not censorship,
that’s accountability.”
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
Governments have largely left these large platforms to regulate themselves. Platforms have proven ill-equipped to address many of the problems that have cropped up, from the proliferation of disinformation to the spread of white supremacy. There is a critical need for policy to define which behaviors and practices are lawful and which are not. The
platforms will never make choices that are counter to their own business practices, and only the adoption and
enforcement of laws can ensure that people of color are protected.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
MediaJustice is a part of a couple coalitions that are aiming to improve social media, including the Change the Terms
coalition, which was founded by people of color-led organizations to tackle the spread of online hate. We’re part of the
leadership council of the Disinformation Defense League, which is aimed at fighting against racialized disinformation.
Organizations within them, including Color of Change, Free Press, United We Dream, First Draft, the Lawyers' Committee for Civil Rights Under Law, the National Hispanic Media Coalition and others are trying to improve social media for good.
What do you see as the risk of doing nothing to address the shortcomings of social media?
There is a high concentration of people in society today who live in an alternate reality shaped by misinformation. The
Capitol riot was a violent manifestation of what happens when a lie, like that of a stolen election, is directed at the
government. Democracy itself hangs in the balance, as an electorate divorced from reality can vote in a government with
the ability to cause harm on a massive scale. We spent the last four years seeing a president run a country via social
media, and as always it was communities of color who faced the brunt of the harm.
How does social media look different five years from now?
We forget that Facebook is less than 18 years old (not even old enough to vote in a metaphorical sense). Other major
platforms that people have gravitated toward lately are less than five years old. A helpful point of comparison for me is
streaming platforms. For a while, there has been an undisputed giant in Netflix, but in recent years more niche platforms
are emerging. I think the same is likely to happen to social media, where we’ll network ourselves along place, identity or
topic. This happens now on platforms like Facebook, but I do see an appetite for stepping outside of that infrastructure.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
It is hard to feel confident that the government can solve this conundrum when it wasn’t long ago that a sitting US
Senator described the Internet as a “series of tubes.” That said, I would argue against false equivalencies. Deplatforming
Trump was a necessary action, albeit too late, to hold a public official accountable for inciting violence. Doing so was not
counter to protecting vulnerable voices on platforms. I believe the standards should be higher for public officials who are
given a platform to directly reach their audience, not lowered because their speech is inherently newsworthy, despite
the harm they cause. I also do not see the issue with platforms as one with an endpoint, for as long as platforms for
speech have existed from newspapers to radio to TV and now the Internet, the question of whose voices are centered
has always been an issue. And so long as white supremacy, embedded in U.S. institutions, continues to define who has
power and wealth, then any new innovation in communications will face the same challenge of amplifying the speech of
the privileged while suppressing dissenting voices.
What makes you optimistic that we, as a society, will be able to improve social media?
There is a greater understanding that local work matters. The multiracial alliance that drove the highest voter turnout in
U.S. history was rooted in local organizing. The way to combat information deserts is to rebuild local journalism, as the
work of Free Press’s News Voices is attempting to do.
In my first job out of college, as an organizer for a tenants union, I had a chance to organize neighborhoods composed of immigrants and poor white folks. They fought together because they shared an interest in living in a safe
and clean neighborhood. They had more in common with each other than differences, and organizing together helped
make that real. The silver lining in this moment where social media has driven divisions far and wide is that healing those
divisions has to start where we’re at. For most of us, that’s in the communities we live and work in every day.
Pia Zaragoza
UX Mentor at Springboard, Presidential Innovation Fellow
without existing efforts in inclusive design and accessibility research, more needs to be done to understand the ability, identity, habits and preferences of people of all abilities. This framing builds off of the research of Dr. Amy Hurst. It is all about ensuring that social media enables people with disabilities to be seen, heard and understood.
Amanda Lenhart
Program Director, Health + Data, Data & Society Research Institute
management, especially content moderation, but also cross-cultural complexities, push platforms to look for automated, AI-driven solutions that lack nuance and often punish more users who do not look like a platform's imagined average user.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage with platforms. In your opinion, what area do you think needs the most improvement?
We have spent over 15 years working to educate the public and enhance the media literacy of platform users. It is time to
hold platforms more accountable for the harms to users, especially those that affect structurally marginalized
communities. These are difficult problems to manage. Platforms have not prioritized their moral responsibilities over
their pursuit of profits, and many need nudges from regulators to rebalance those priorities.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
Technology companies need to enhance the diversity of their workforces – not just in who they hire, but who they retain
and promote. Employees of companies at all levels need to reflect the breadth of the users of their products and services.
Not just diversity by race and ethnicity, but by age, parent status, gender identity, disability status and other categories
of users who are not always well-served by platforms and their moderation. And where this diversity cannot be readily
acquired or maintained, companies must seek out and do direct research about users to better understand their
experiences on the platform. Further, emphasizing exposure to a variety of academic disciplines like the social sciences
and humanities, including but not limited to thinking about ethics among all types of hires – including engineers and
computer scientists, not just among legal and trust and safety teams – will also help seed ideas about responsibility to
users at all levels and across roles in technology companies.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
Algorithms indelibly shape user experiences on many social media platforms. They also help companies manage scale
problems by eliminating human moderators or assisting them with content moderation. However, the lack of
transparency in the algorithms that govern these systems, and bias enabled by assumptions built into automated
systems, harm certain sub-groups of users (especially structurally marginalized groups) and make it difficult for anyone
using the system to understand and critically respond to outputs. Requiring algorithmic impact assessments prior to the
launch of algorithms used (or modified) by platforms can start the process of providing greater transparency, especially
with regard to an algorithm's outcomes.
Laura Manley
advocacy organizations and citizens must hold social media platforms accountable.
John Rousseau
Partner at Artefact
the system. Doing so would require significant efforts across industry, government and society; tradeoffs amongst various stakeholders; and a willingness to completely redesign the policies, products and platforms themselves. Today, the system is functioning as designed and in the interests of those who designed it.
I am particularly interested in decentralized identity, an idea most prominently advocated by Balaji Srinivasan. His concept – Pseudonymity – refers to a middle ground between anonymity (where identity cannot be verified) and transparency/permanent identity (which can be verified but is high-risk/fragile). A pseudonymous identity might include a range of authenticated and secure profiles: for example, a social identity, a transacting identity and a permanent identity – all encrypted on a blockchain ledger and owned by the individual.
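The pseudonymity idea can be pictured as a small data structure. The sketch below is purely illustrative — the class and method names (`PseudonymousIdentity`, `Profile`, `prove`) are invented for this example, and a real decentralized-identity system would rest on public-key cryptography and ledger anchoring rather than this toy hashing scheme — but it shows the core property: one owner, several separately verifiable facets.

```python
import hashlib
import secrets
from dataclasses import dataclass, field


@dataclass
class Profile:
    """One authenticated facet of a pseudonymous identity."""
    label: str  # e.g. "social", "transacting", "permanent"
    secret_key: bytes = field(default_factory=lambda: secrets.token_bytes(32))

    @property
    def public_id(self) -> str:
        # A public identifier derived from the secret: others can check it,
        # but cannot recover the secret or link it to the holder's other facets.
        return hashlib.sha256(self.secret_key).hexdigest()[:16]


@dataclass
class PseudonymousIdentity:
    """Middle ground between anonymity and a single permanent identity:
    separate, individually verifiable profiles, all owned by one person."""
    profiles: dict = field(default_factory=dict)

    def add_profile(self, label: str) -> str:
        profile = Profile(label)
        self.profiles[label] = profile
        return profile.public_id

    def prove(self, label: str, claimed_id: str) -> bool:
        # The holder can prove control of one facet without exposing the others.
        profile = self.profiles.get(label)
        return profile is not None and profile.public_id == claimed_id


me = PseudonymousIdentity()
social_id = me.add_profile("social")
txn_id = me.add_profile("transacting")

assert me.prove("social", social_id)   # holder controls the social facet
assert social_id != txn_id             # facets are unlinkable by public id
```

The design choice the sketch makes concrete is that verification happens per facet: a platform could require a valid `social` identifier without ever learning the `permanent` one, which is the low-risk middle ground the quote describes.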
Responsibility starts with platforms – as I said previously, they are functioning as designed. And yet, as Jaron Lanier has
argued, the technology companies are unlikely to fix themselves absent real external pressure, due to conflicting
incentives and a distorted world view built on the primacy of their products.
Following Lanier, the only form of agency most of us have is to simply stop using the products and collectively imagine a
better future. I use the word “imagine” suggesting that responsibility for envisioning the future falls to each of us, and to
society at large. It is not the exclusive domain of pundits, technology companies or the government (though the latter can
and should structure incentives that are aligned with a clear vision).
We need to question our assumptions about how the system works, and what outcomes we seek, if we are to have any voice in shaping a better world. So – how might the future be different? Will there be perfectly targeted advertising, or no advertising at all? Will you own your personal data, or will others? Is engagement the primary metric, or well-being?
What models do you see coming online for providing a digital community (beyond today’s ad-based, extraction model) – platform cooperatives? Decentralized Autonomous Organizations (DAOs)?
The proposals I have seen for new platforms – particularly those with novel incentive structures – seem like they would
create a range of unexpected negative externalities at scale. Similarly, fully decentralized networks could address issues
of data sovereignty and disrupt the extraction model but could struggle to maintain order and moderate the flow of
disinformation without robust and reliable governance. This is particularly true as real power is involved, as the
discourse becomes more and more polarized and as the system influences real-world outcomes.
How does social media look different five years from now?
In five years, the most likely outcome is that social media is a lot like today, but worse. It will be dominated by the same
monopolistic platforms, which have made superficial changes to policies and products in response to public pressure and
regulation. These fixes will fail to have a significant impact because they did not address root causes (e.g., business
model) and systemic dysfunction (e.g., incentivizing platform growth and user engagement). The 2024 US presidential
election will be a high-water mark for disinformation.
This is not a hopeful vision, I know. My sense is that, in terms of social media and several related issues, things will get
much worse before they get better. Or rather, before enough people demand change, are willing to make personal
sacrifices for the greater good and are willing to act on that belief. Until then, keep scrolling.
What makes you optimistic that we, as a society, will be able to improve social media?
The reason to be optimistic about the future is that, unlike social media, society is (hopefully) enduring and has the capacity to improve itself over time. Social media is not a mirror that merely reflects society; it is more like a powerful amplifier turned up to 11 and stuck in a runaway feedback loop. To make meaningful progress, we need to turn the volume way down, begin from first principles, question our assumptions, and truly address the complex and systemic issues that got us here with an eye toward a preferable future.
"We need to question our assumptions about how the system works,
and what outcomes we seek, if we are to have any voice in shaping a
better world."
Camille Stewart
Cyber Fellow, Harvard Belfer Center

decisions based on their risk tolerance and to understand the consequences of each choice, similar to the physical safety decisions they make each day. A more informed citizenry will make demands of both the industry and government to drive the market and direct government regulation and oversight. All these areas are important, but we are underinvesting in comprehensive citizen education on digital literacy and civics – especially as educating low-income, rural and minority populations continues to be largely left out or underfunded.
David Ball
Headstream Director at SecondMuse

LGBTQIA+ and multi-hyphenated youth to thrive and find support through the technology that is intertwined in their lives. We do this through research, like our crowdsourced mapping project Digital Delta, through the Headstream Accelerator, which supports entrepreneurs working on social technology innovations, and through our Youth 2 Innovator program for our future leaders.
agency to create and connect around art and writing in a digital place where they can address the most important social
issues in their lives. Online games have also become incredibly social experiences for young people. Innovations like
Liminal Esports, Social Cipher, EquipT and Gamersafer are using gaming platforms to provide safe digital experiences for
young people to connect and develop new skills critical to their wellbeing, all while having fun.
These are the types of social technologies we believe will meet the wellbeing, relationship and developmental needs of
young people moving forward.
What do you see as the risk of doing nothing to address the shortcomings of social media?
We have a long way to go to create a digital economy that meets the needs of all of its participants. Structural injustices
exist for both the creators of these new technologies as well as communities of users who are often completely ignored.
At Headstream, we strive to deconstruct the unjust system that exists for women as well as Black and Indigenous People
of Color in the technology sector. We take a stand, through the innovations that we source and accelerate, to bring
solutions that serve communities of users who are traditionally unjustly served by social technologies. Our fear is that
we will lose a generation because we didn’t take care of their wellbeing in time. Social technology needs to be a space
where young people can nurture their talents as they grow up, connect meaningfully and find support when times are
tough.
How does social media look different five years from now?
Young people across the country have cracked the code and are flourishing because of social technologies. They are
connecting and collaborating with communities they may never have imagined existed, bridging socioeconomic and
racial divides. Their voices, ideas and creativity now have the potential to reach every corner of the globe. They can find
and pursue their diverse and unique passions. And of course, they can play and find joy. What would growing up be
without that? All of these incredibly rich and positive experiences point to the real social value that technology can
create. As Ose Arheghan, a national youth LGBT advocate and member of the Headstream community puts it, “Having
access to technology has been really empowering because I have been able to educate myself on my identity and what
that means for me, and I am not confined to what other people think being a part of the LGBT community means.” That
experience exists for some today; in five years, hopefully it exists for almost all young people.
Liz Lee
Founder, OnlineSOS; Sr. Product Trust Partner, Twitter

spent investing in VCs and startups. But in 2015, while at Morgan Stanley, I was staring at two monitors, one with news headlines of people committing suicide due to online harassment, and the other showing the financial performance of social media apps. I had never spoken about my own experience being stalked and extorted online. I realized I wasn’t alone; this experience impacts 85 million Americans. I asked myself: How many more people need to be harmed before I do something? I started sharing my story; less than a year later, I left my job and launched OnlineSOS to create the tools I wish I had.
People who have been trained in the social and behavioral sciences and understand human interaction and behavior, as well as historical developments and societal patterns, should consider entering or contributing to the field. Sociologists, psychologists, historians and community organizers know, whether firsthand, through one-on-one primary research or through academic study, the very challenges that we grapple with on a day-to-day basis, both in person and online. This is especially crucial in order to prevent systemic inequalities from being mirrored or exacerbated in online spaces.
There is also a need for people who can speak to the experiences of historically underrepresented and marginalized communities, including but not limited to communities of color, LGBTQ+ people and women, as well as people who understand systemic racism and power dynamics. They are a necessary part of the overall solution to our puzzle. Anyone who can use data to advocate for change while keeping in mind any structural or systemic biases we may have, not only in our algorithms but also in our design and decision making, is critical.
Tech companies can build formal processes and initiatives for planning, prioritization/stack ranking, escalations and risk mitigation, but companies are made of individuals and, in the middle of a crisis, it is an individual who will choose what to prioritize and whether to use their political capital to raise an issue. However, it is also the responsibility of tech companies to ensure that diverse candidates are welcomed and set up to succeed; we cannot and should not assume that everyone should fit into the existing tech culture.
What makes you optimistic that we, as a society, will be able to improve social media?
There has been a major shift in the public consciousness about the role of social media and the importance of online experiences. Online harassment, toxicity, abuse, misleading information – these issues are now increasingly well understood. Five years ago, people were still saying, “Turn off your computer, it is not real life.” Awareness and a demand for change are the first steps.
Now, as more talent and dollars go into addressing the many prongs of these challenges, I am certainly optimistic. Improvements to social media are no longer a “nice to have” but are generally understood and accepted as a “must.” Getting funding for initiatives tackling online abuse is easier in 2021 than in 2015.
It is easy to be resigned about the progress that we haven’t made, but we also cannot discount the shift in tide of public
opinion. Real, lasting, systemic and structural change will require radically new levels of cross-stakeholder collaboration,
with tech companies, grassroots and advocacy groups, community organizers, lawmakers, researchers, educators,
technologists, policy makers, etc.
The announcement of the Biden/Harris Online Harassment Task Force indicates that they are taking violence against women and online harassment seriously. Vice President Harris demonstrated an understanding of cybercrime issues, including nonconsensual pornography, when she was Attorney General and then a Senator from California. She has a track record of engaging with members of the community who understand the problem well, including advocates for victims/targets of abuse and grassroots groups.
Ashley Boyd
VP of Advocacy and Engagement, Mozilla

example, we launched the RegretsReporter browser extension that lets people send us data about recommendation rabbit holes that they get sent down on YouTube. Similarly, during the 2020 U.S. elections, Mozilla researched and published platforms’ policies on misinformation and disinformation; access for researchers; ad transparency and consumer control.
gained deep understanding and firsthand experience with many of the harms that social media can exacerbate in real life.
What models do you see coming online for providing a digital community (beyond today’s ad-based, extraction model) – platform cooperatives? Decentralized Autonomous Organizations (DAOs)? For example, are there promising applications for the blockchain?
Tech today collects our data and then uses it to make important decisions about us, from what news we read to who we
date. And there is a serious power imbalance at work here: Why should Amazon know so much about our shopping
impulses? Why should Facebook get to create secret profiles on us?
To re-balance power, we need new data governance models like data trusts, which operate as trusted intermediaries
between consumers and big tech platforms.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
We need to shed the harmful misconception that technologists alone should develop technology. Today’s consumer tech products, from search engines to voice assistants, impact billions of people every day. As a result, we need people with a
range of expertise, from sociologists to psychologists, to design, build and govern these systems. That’s how we fix blind
spots and prevent technologies from inadvertently excluding or manipulating entire communities. My colleague Kathy
Pham wrote a great essay about exactly this for Fast Company.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
One concern we have with how algorithms affect the social media experience is that they often drive people toward
more extreme content. Case-in-point: In the weeks leading up to the 2020 U.S. elections, we called on Facebook to
discontinue group recommendations as evidence mounted that people were increasingly being exposed to
disinformation and misinformation after joining groups recommended by Facebook’s algorithmic recommendation
engine. People who might never have been radicalized otherwise end up going down a rabbit hole on YouTube, engaged
in a feedback loop on Facebook or otherwise exposed to harmful content largely because of algorithmic
recommendations.
Giving consumers more control over content recommendations, eliminating algorithmic recommendations and
increasing transparency about how platforms’ algorithms are trained are all ways we can help mitigate this problem.
What makes you optimistic that we, as a society, will be able to improve social media?
Recently, more and more people have engaged in the challenge. There is increased political will to work toward solutions,
and platforms themselves recognize that, between consumer power, advocacy from civil society and government
interventions, they will have no choice but to make crucial improvements.
Julie Inman-Grant
eSafety Commissioner, Australian Office of the eSafety Commissioner

There is pending legislation that would give us additional powers to compel take-down of serious adult cyber abuse and require companies to live up to Basic Online Safety Expectations (“the BOSE”), including the ability to compel transparency reports to reduce opacity in policies but also to understand how certain issues are being tackled and whether companies are enforcing their policies consistently and fairly.
role and finished my 17-year career at Microsoft as the global head of privacy and safety policy and outreach at Redmond HQ. I had two exciting and eye-opening years at Twitter setting up and running their public policy and philanthropy functions in ANZ and South East Asia before joining Adobe as their head of Government Relations across Asia Pacific. Nine months later this poacher became a gamekeeper and I was appointed to serve as eSafety Commissioner of Australia.
In your opinion, what are the biggest issues facing social media?
The failure of corporate leadership to recognize and embrace their tremendous societal impact and the ill effects technology can have on humanity, and to actively take responsibility for these hazards. Had more of these companies prioritized the safety, privacy, security and overall well-being of their users and balanced the imperative of “profits at all costs” with their responsibility to prevent and protect against a range of online harms, they would be in a much better position. If you add to that tremendous market power, the perception of evasion of international taxes, and occasional recalcitrance towards governments, the biggest issue facing them will be the force of global governments regulating them in ways that might be unworkable, inconsistent and detrimental to their future growth. So, to me, this is the biggest threat to the industry.
[T]he looming threats to users of social media and the industry’s ability to address these threats involve the various ways threat actors are weaponizing their platforms to spread child sexual abuse material, pro-terrorist/extremist content and other forms of illegal content—these cause the most harm to society, but there is also a range of technology tools available to tackle these issues, if there were greater will to do so. It is the more “contextual issues” and the forms of harmful content that aren’t patently illegal and are likely to require more “ecosystem change,” investigation and human moderation that are likely to be more challenging to tackle. This includes issues related to incitement or facilitation of violence and sexual harassment.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about? How
do we ensure safety, privacy and freedom of expression all at the same time?
I served as an “online safety antagonist” within industry for more than two decades and I could never convince company
leadership that addressing “personal harms” should be elevated to the same status as privacy or security. I brought
“safety by design” to Microsoft leadership more than a decade ago and while there was a tacit understanding of the
importance of online safety, it was never given the priority, investment or attention that the other disciplines were.
Whilst at Twitter, I saw the devastation that targeted online harassment wrought on humanity every single day – and it demoralized me too. The company that I was so excited to join, that stood for the levelling and promotion of voices online that previously weren’t heard, simply wasn’t doing enough to protect those voices, particularly marginalized voices. I could not defend this anymore, so I sadly left a company that I saw having so much potential to do good in the world.
As eSafety Commissioner, I built an incredible team to work WITH industry to create a set of “safety by design principles”
that were achievable, actionable and meaningful. I understood that this is something we needed to do with industry
rather than to industry to be effective, as it will involve changing the ethos of how technology design, development and
deployment typically happen. We went “deep” over about 8 months to uncover innovation and best practice in this space
to elevate as examples and ended up with three sets of principles: “Service Provider Responsibility,” “User Empowerment and Autonomy,” and “Transparency and Accountability.” Because we want industry to be successful at assessing risk at
the front end and building in safety protections to prevent misuse rather than retrofitting after the damage has been
done, we also WANT companies to be successful at achieving higher levels of safety. So, we decided that we’d take the
principles and turn them into a free, interactive assessment tool so that companies could use this as an audit tool of sorts,
learn how to address safety weaknesses and have a robust “safety impact assessment" to help them build their roadmap.
This tool will be released in a few months—one tool is for start-ups, the other is for more mature enterprises.
Safety by design doesn’t end there – we believe the VC and investment community has an important role to play in promoting user safety as a path to more ethical investing, managing risk and preventing “tech wreck moments” – these are preventable. In January 2021, we released an investor toolkit. We’re also piloting safety by design curricula in four universities in Australia – we believe the next generation of coders, designers and engineers should be building technology with ethics, human rights and safety in mind.
By the way, I reject the supposition that privacy, safety and freedom of expression are diametrically opposed or mutually exclusive. They need to be balanced—and occasionally recalibrated—like the four legs of a stool.
JULIE INMAN-GRANT IMPROVING SOCIAL MEDIA | 58
ALL TECH IS HUMAN
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
They all need improvement and need to work in harmony if we are going to make the online world more hospitable, civil and positive. This balance has informed the way in which I structured eSafety. Everything we do is evidence-based, so I have an internal research team that delves into qualitative and quantitative measures, and this informs our public messaging, education materials and resources. These are designed to reach specific audiences – whether parents, educators, senior citizens or children themselves – with the aim of helping citizens to harness the benefits of technology and to understand and mitigate risks with pragmatic and actionable solutions. We are aiming to encourage behavioral change (which takes a long time) and measure that impact through evaluation. We reject purely fear-based messages and also leverage the education sector to help reinforce messages and incident response throughout a child’s educational journey.
Clearly, we believe that government oversight is required to serve as a “safety net” for our citizens when online abuse falls through the cracks of platforms’ content moderation systems, to remove harmful content and, when necessary, to use civil penalties to punish perpetrators and fine content hosts. While I'd much rather use the carrot, there are times the stick is
definitely needed. And, as expressed through our commitment to safety by design, we absolutely believe that industry
has to do better in making their platforms safer, more secure and that they need to be both more transparent and
accountable for harms that take place on their platforms. They build the online roads, they also need to erect the
guardrails, occasionally police those roads for dangerous drivers and enforce the rules so that other users don’t end up
online roadkill.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
There are so many people doing such great work all around the world, committing to make the world a safer place. We
notice that examples of such work often focus on North America and Europe and that very few people outside of the
safety community know what we’re doing in Australia. We may be small and far away but we think what we’re doing for
our citizens is pretty unique and has impact.
There are some incredible technologists out there devoting their brain power and careers to making the online world a better place—this includes Dr. Hany Farid from Berkeley, the inventor of PhotoDNA, and Christian Berg from Sweden, who has developed tools for law enforcement with several companies like NetClean and Paliscope. There are some really
great safety tech companies popping up too, including Spectrum Labs, Sentropy, Hive, Tiny Beans, Family Zone and
numerous others.
There are incredible researchers and advocates, particularly all of those affiliated with Global Kids Online, who bring a lot of rigor mixed with a genuine concern for children and human rights, and with common sense. Dr. Sonia Livingstone, Amanda Third and Anne Collier come to mind, and I love the work of Sameer Hinduja and Justin Patchin of cyberbullying.org. They are the real deal!! Some of the female lawyers and academics in the US working on intimate privacy and ethical AI and advocating for women and minorities online are doing groundbreaking work; Danielle Citron, Mary Anne Franks and Safiya Umoja Noble are my she-roes! I am honored to work with some amazing human
beings through the WeProtect Global Alliance including Baroness Joanna Shields, Ernie Allen, Julie Cordua of Thorn and
passionate advocates like John Carr. It is amazing what a bit of compassion, strategy, brains and strong communications
skills can do to enable meaningful change!
What do you see as the risk of doing nothing to address the shortcomings of social media?
Yes, the risk of doing nothing is too great. That is why we are doing stuff here in Australia. One of those things is seeking to build capacity and capability in the online safety regulator space. We expect that there will be a network of online safety regulators in the next 5 years, which is great. It’s been lonely at times, and challenging not having a playbook to refer to. For us to have real impact, we need some “pincer moves.” Ireland, the UK, Canada and Fiji will be the next batch of online safety regulators— we hope this catches on further with the European Digital Services Act. We are also heartened
Of course, one of our key goals is for Safety by Design to really take off and to become the de facto way that companies design, develop and deploy (or refresh) their technology. This is not incompatible with innovation—in many ways it requires innovation—and I would argue it’s a much better investment to embed safety upfront than to have to re-engineer after a major regulatory, revenue or reputational threat.
I don’t think any one sector can “solve” this conundrum on its own—but certainly, more governments affirmatively
committing not just to curbing tech industry power but also to ensuring that they are regulating for a range of safety and
privacy shortcomings would be a good start. eSafety was established to serve as a “safety net” and I think this is a good
model. I don’t think governments should be in the role of serving as the “censors” or “arbiters of speech or the culture wars” but they have a role in pointing out when banter turns to serious abuse and when hate speech poisons public discourse.
Each social media company can be seen as a house. They inhabit a global neighborhood where there are zoning rules and
laws that prevent encroachment on others. They set their own house rules so as long as those rules are transparent and
clear, they should be able to decide how to discipline those who violate these rules. It might mean sending little Johnny to
his room or it might mean kicking the mean, drunk uncle out. But you are still not allowed to commit crimes in your house
—and you need to let the police in (with a warrant) when wrongdoing occurs. There will always be the need for a housing commission or police to enforce such rules – not all homeowners are going to be law-abiding or respect their neighbors’ rights.
Speech is much more difficult, of course. And, given the power, amplification potential and standing of the user, that level
of influence (or not) should also have bearing on how the rules are designed and enforced.
You can connect with the Australian Office of the eSafety Commissioner at esafety.gov.au
"While I'd much rather use the carrot, there are times the stick is
definitely needed. And, as expressed through our commitment to safety
by design, we absolutely believe that industry has to do better in
making their platforms safer, more secure and that they need to be
both more transparent and accountable for harms that take place on
their platforms."
Merve Lapus

Tell us about your role:
I oversee Education Outreach and Engagement for Common Sense Education, including school adoption, district implementation, parent/community engagement, strategic marketing and community development. Centered on supporting communities to create positive learning cultures around media and technology, I develop leaders focused on empowering kids and families to be thoughtful and innovative consumers of media and technology.

Tell us about your career path and how it led you to your work’s focus:
I have worked in EdTech for over 18 years, focused on developing programs and resources to support academic and social emotional building blocks for learning and life for kids. At Common Sense, my work began with supporting districts throughout CA to address digital citizenship. Now, 11 years later, I focus on developing a team to provide high-quality professional development and community engagement for educators/schools.

In your opinion, what are the biggest issues facing social media?
Redefine norms of online engagement. Create legislation and policies that authenticate and legally hold users and platforms accountable. Otherwise, it goes without saying that education at both the federal and state levels on the merits of digital citizenship goes a long way. Integrating digital citizenship into K-12 instruction has been largely successful in educating kids and their communities to better prepare and support kids as they engage with social media platforms. Shifting from ad-based monetization would also greatly benefit how content is
How do we ensure safety, privacy and freedom of expression all at the same time?
This is a really tough one, because – legally speaking (at the federal level) – a lot of hate speech is protected as free
speech. From a different perspective, I think we can advocate that hate speech is the enemy of free speech, since hate
speech is intended to silence marginalized voices and therefore does not fit into the scope of civil discussion. But what
exactly hate speech is and isn’t is up for debate. And there are disagreements – even within
the different camps – over where the line gets drawn. One solution I think about a lot, and I know has been discussed
elsewhere out there, is a form of ID for people to use social media. If social media is to become a digital public square,
then building in a form of accountability that exists in the real world might be a solution. However, this is a fraught solution, as there are huge privacy and identity implications for various individuals and groups. And it could have the opposite effect (e.g., in countries where dissent isn’t tolerated, or for victims of stalking).
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
I think there is probably lots more room for improvement elsewhere (government, platforms, etc.) but in terms of where
some improvement is most likely to happen, and most likely to make a difference? I think education. More digital
citizenship education is probably something that would have the biggest ROI in terms of impact. But it is not a quick fix;
you'd have to do it right, then wait a generation to see the results. People probably want quick change, but in order to do
that, policy may need to hold platforms accountable, and platforms themselves will need to step up. Primarily: drastically adjusting revenue models and setting firm language to address free speech vs. hate speech.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
Common Sense has been doing a lot of work focused on improving social media and the platforms that support them. Education-wise, Common Sense provides a free, robust curriculum addressing digital citizenship across K-12 schools, focused on empowering kids and families to think critically, navigate safely, and participate responsibly with media and technology.
Policy-wise, Common Sense Kids Action has been working with lawmakers to pass a number of state and national
policies focused on addressing privacy, dark patterns and hate speech, to name a few. An additional resource supporting
these efforts is the Common Sense Social Media Scorecard focused on looking at how different platforms handled the
flood of videos, memes and hashtags based on last year's elections. This provides a framework for how these platforms
work.
More cynically, everyone who has consciously quit using social media in recent years and everyone who continues to do
so is an improvement. =)
What do you see as the risk of doing nothing to address the shortcomings of social media?
Hyperchambers, filter bubbles and misinformation exacerbated. I think an end to democratic government and relative
peace in the world – not just in the US, but worldwide – is something that has to be seriously discussed at this point. It
sounds hyperbolic, but it is not: violence, conflict, all out war.
How does social media look different five years from now?
I fear that shows like Black Mirror have already shown us what social media might look like five years from now. With services and products becoming so heavily rated, what is to stop social interactions from being rated the same way? Between that and social platforms exacerbating validation-type experiences, I wouldn’t be surprised. My hope is that platforms take a stand on hate speech and misinformation and facilitate spaces that are truly about fostering positive and accountable communities.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
More ethicists. And not just those who have come from within the industry. We cannot just "tech our way out of this" so
to speak. Documentaries like “The Social Dilemma” present the solution to the problem as coming from within tech, and
as much as those voices are helpful, lifting up diverse perspectives from outside the industry is important. Mental health professionals should be more heavily consulted, and their scrutiny should be weighed as heavily as their hopeful perspectives. Educators also should be consulted, as their understanding of whole-child development is essential. These
platforms have terms meant for adults, but we know children are heavy consumers and contributors.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
As long as platforms (and their algorithms) are developed for profit, the problem will persist. They optimize and develop
your persona, focused on capturing your attention and investment – information and experiences that can then shape
thinking and behavior. Maybe a truly nonprofit social platform would be the way to go? I do not know how that would
work, but you gotta have the will to make it happen and have investors invested in people, not potential profit.
What makes you optimistic that we, as a society, will be able to improve social media?
Quite honestly, I am not optimistic about this. There are a lot of great experiences and relationships that social media
provides us, but largely, the disinhibition and disconnected nature of platforms foster many issues, perpetuated by the
platforms themselves. I do see glimmers of collective support and movements designed to bring people together, as
opposed to lengthening our divide, but until we thrive on unity and not one-sided satisfaction, social media will continue
to act as a fractured community.
"I think one of the biggest issues is whether or not social media will
continue to be a private enterprise, or if it (or at least some/one of the
platforms) will become more of a public domain-like space."
Jeff Collins
Sr. Director, Global Trust & Safety, TikTok

…Department as a Foreign Service Officer and helped navigate some difficult bilateral relationships and advance U.S. human rights initiatives in Cuba, Iraq, Turkey, Bolivia and Venezuela. I also had the honor of serving in the Obama National Security Council as the first Director for Turkish Affairs.
prevent the amplification of misinformation and disinformation, hateful behavior, and other harmful content. At the
heart of these challenges are the sheer volume and virality of content that is created (around the world, tens of
thousands of videos are uploaded on TikTok every minute, some reaching millions of views the same day). We are
focusing on how to best moderate at scale where we can empower the originality of our creators while keeping our
communities safe.
At TikTok, we are tackling these challenges by developing clear policies grounded in evidence and the experience of the
past decade in social media, sound processes to train human moderators on how to understand and apply these policies,
more robust feedback loops between the two, machine learning algorithms to better detect potential policy violations, and transparency. I am particularly proud of and excited by our work to generate more frequent and detailed
Transparency Reports to help external stakeholders better understand our work, as well as our Transparency and
Accountability Centers – physical locations where experts can take a look under the hood and learn about how we
approach content moderation, how we keep our platform secure and how our For You feed algorithm works.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
I am incredibly inspired and influenced by the groundbreaking work of Eli Pariser and Talia Stroud at Civic Signals (now
New_ Public). Eli is well known for his leadership on "filter bubbles." He coined the term in 2011 – a year that saw both
the Arab Spring and a "can do no wrong" high point for big tech. From his work at MoveOn.org and Upworthy, Eli
recognized and called attention to the potentially harmful impact of algorithmic amplification, siloization of
communication and online rabbit holes. Eli and Talia have put tremendous effort into researching and developing a
framework to help us use learnings and signals from how we design and use real world physical spaces (parks, town halls,
city streets, etc.) to design online spaces with respected rules and norms.
If we are honest, it is clear that we need to move beyond what I call Trust & Safety 2.0 – reliance on large-scale machine
learning detection and human moderation to police platforms. A new design-influenced paradigm that draws on lessons
from the offline world is an important instructive element that can help us invent (because this truly does need to be
invented) Trust & Safety 3.0 and truly create safe, positive and trustworthy spaces.
How do we ensure safety, privacy and freedom of expression all at the same time?
These are the kinds of trade-offs that my team at TikTok navigates daily. Safety, privacy and freedom of expression are
all principles that we endeavor to uphold; however, at times those priorities conflict with one another. For example,
when we consider when to send safety resources to users who may have had a video reported for self-harm, we also have
to consider their privacy. Or, when we design educational campaigns to raise awareness about eating disorders, we also
have to consider protections for others who may find that content triggering. There is no perfect approach here, but we
do a few things to help us strike the right balance:
We bring diverse perspectives into our decision-making, both from our teams and through partnering with experts.
We take a broad view of the potential harm we must mitigate to include several dimensions – physical, psychological, societal, etc.
We iterate on our policies and tactics frequently to ensure we keep pace with – and ultimately anticipate – new trends and threats.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
When it comes to creating a safer online environment, the responsibility is shared. The platforms, governments, media
and public all need to be involved – especially because many of the challenges we face online have a nexus with broader
societal challenges, such as the uptick in polarization and decrease in trust in institutions we've seen in many areas of the
globe. With that in mind, I believe strongly that we as individuals need to step up in industry, in the government, in civil
society, and as community members. We also need to be proactive in building much greater connectivity and
collaboration across these dimensions. Doing this requires creativity and fresh thinking. This is an ever-evolving
landscape that requires moonshot goal-setting, commitment, investment and fierce determination.
What people and organizations do you feel are doing a good job toward improving social media?
JEFF COLLINS IMPROVING SOCIAL MEDIA | 65
Continued on next page
ALL TECH IS HUMAN
This list is long, which is important to recognize because the general public is unaware of the amazing work being done by
a global army of people and organizations to help platforms promote creativity and a diversity of viewpoints, while
maintaining guardrails to keep different online communities safe and generate greater societal trust in our companies. In
addition to All Tech Is Human, which is doing amazing work to infuse a thoughtful, responsible approach into high tech,
I'll mention just a few:
Aspen Digital's mission is to help policymakers, civic organizations, companies and the public to be responsible
stewards of technology and media in the service of an informed, just and equitable world. I am excited to be a part of
this organization's Virtually Human Working Group, whose purpose is to identify critical issues at the nexus of
human connection and technology and develop a repository of best practices, shared definitions and state-of-the-art
methodologies and measurement tools.
The DQ Institute (DQI) is an international think tank that is dedicated to setting global standards for digital
intelligence education, outreach and policies. We partnered with DQI to develop a safety guide for Safer Internet
Day and are working to develop educational videos based on their recognized standards for digital literacy, digital
skills and digital readiness.
Among the many amazing organizations working on mental wellbeing and technology, Crisis Text Line continues to
be a leader in providing real-time support that saves lives. Our partnership with Crisis Text Line, launched last year,
is helping address the needs of our community of users in the U.S. We're looking forward to doing much more in the
coming year to make a positive impact on our community.
What do you see as the risk of doing nothing to address the shortcomings of social media?
Given the events that led to the so-called "techlash," invasions of privacy, amplification of dangerous conspiracy theories,
election disinformation campaigns and much more, inaction would be disastrous. But I do not actually think we're at risk
of doing nothing; I work with too many dedicated Trust & Safety colleagues to believe that. But I do feel that there are
some challenges that will be really difficult to solve. After all, the problems we deal with are large, complex societal
problems that involve education systems, legislative systems and diverse social and cultural mores.
We are at a point where "Alone Together" has gone from a dire warning (in Sherry Turkle's groundbreaking book) to an empowering and empathetic TikTok campaign to support one another during the pandemic. It is a fool's errand to try to stop your kids from using screens. These are realities, not trends, and they will continue to morph and evolve along with the evolution of our social systems. So for me, the key question is: how can we harness these technologies to move humanity in a positive direction? How can we use technology, including the social dimensions of platforms, to help educate, recognize and treat mental health issues, invent ways to save our climate, advance equity and inclusion in our communities and build bridges across cultures, while minimizing the negative externalities (which we will never eliminate, given that we are human beings)?
What models do you see coming online for providing a digital community (beyond today’s ad-based, extraction
model) – platform cooperatives? Decentralized Autonomous Organizations (DAOs)? For example, are there promising
applications for the blockchain?
Blockchain companies definitely are pioneering some interesting community governance models that seek to advance
the Reddit- and Wikipedia-type models of involving users in the creation and application of rules. There will certainly be
interesting lessons to learn from these efforts.
Industry has made progress on key issues, via the Global Internet Forum to Counter Terrorism, the Technology Coalition
(for CSAM-related issues) and multi-stakeholder initiatives like the Global Network Initiative. In just one recent example
of industry cooperation, TikTok is working to bring companies together to minimize the proliferation of suicide-related
content across platforms and think more deeply about how we approach mental wellbeing. Of course, there have been
pointed (and not invalid) criticisms raised about the manner in which companies come together to make content
decisions (see, e.g., Evelyn Douek's writings on "content cartels").
While it is unpredictable how things will move forward, we do know that actual progress in dealing with content in a
manner that is just and balances the variety of stakeholder concerns will require more, rather than less, collaboration
across government, civil society and business.
How does social media look different five years from now?
The landscape already has changed so significantly that what we initially called "social media" barely exists. Rather, we
are in a world where entertainment and social [media] have fused, with companies falling somewhere along a spectrum
of being more pure entertainment (think Netflix or Hulu here) vs. pure social networking (think Facebook here).
As this evolution continues toward social-entertainment, my hope is that in five years platforms will have made strides to
understand, respond to and help address important societal issues from mental health to education to the creation of
career opportunities. In the near term, this means more proactive work to address issues around diversity, equity and
inclusion as well as potential bias in algorithms, maturing our efforts to balance safety and privacy considerations and
enhancing our approach to supporting the mental health and wellbeing of our users. At TikTok, we're investing in
growing our team and expanding our commitments across these areas. I am optimistic that our progress will be evident
with concrete, real-world impacts in the years to come.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
More than any particular background, I think, having expertise from a cross-section of disciplines is important. I am a
huge believer in the "triple strength leadership" paradigm, which holds that today's increasingly complex global
challenges – from climate to economic growth to health – require multi-faceted leaders who can bring to bear experience
in the public, private and social sectors to create solutions (which of course require the involvement/application of
various types of specialized expertise) – see https://hbr.org/2013/09/triple-strength-leadership.
Tech in general, and Trust & Safety in particular, sits at the nexus of countless major issues, trends and challenges. As we
leverage our technical capabilities to address major societal issues such as health, equality and democracy, we need people who
can connect the dots, so to speak. At the moment, one of my top priorities is to attract talent with expertise in the social
sciences, human rights and AI ethics, to help us strengthen the way in which we connect, support and give voice to our
user/creator community.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
I believe cross-sector work is incredibly important because neither governments nor platforms can answer this question
alone. Laws and regulations will continue to play an important role in setting consistent expectations for platforms and
representing the views of citizens. But we are global entities that span multiple countries and legislative systems. This
means that we must work actively and cooperatively with governments and civil society across the globe to help develop
approaches that are fit-for-purpose, rather than blunt instruments with negative unintended consequences. Speaking for
my team at TikTok, we are constantly evolving our policy frameworks so that we can continue to empower people to
create and share authentic content that lifts up rather than denigrates or creates harm. And we are always seeking out
feedback to do a better job of this.
What makes you optimistic that we, as a society, will be able to improve social media?
Two things: my colleagues and the creativity people bring to social media every day.
I have the privilege of leading an incredibly dedicated Trust & Safety team, and I wish I could convey just how true it is
that we put user safety first. I have colleagues around the world who hop on calls in the middle of the night or work
through the weekend to make sure we are responsible stewards of the creator experience and what all users encounter
online. They have thoughtful and difficult conversations about our work and sometimes take on an emotional load when
they review harmful content and make tough decisions. At the same time they bring great passion to their work and take
pride in the meaningful strides we make to counter bad actors and empower our community. This makes me feel
extremely optimistic about the future of our app and for our online lives going forward.
I also am consistently impressed with the ways people leverage platforms to educate one another, organize for the
common good, educate and simply create cool and interesting material. In safety-focused roles we tend to think first of
harmful content or tough policy calls, but the majority of what creators are putting out into the world is positive,
purposeful, encouraging and fun. I think that's what people want to see online, so I am hopeful that, as we tackle the
many challenges ahead of us, we can make sure to amplify the good.
LEARNING FROM THE COMMUNITY

Pinal Shah
Behavioral Engineer, Robinhood

…does not presuppose a certain level of experience or education. What I love most about my role is that I get to cater to and take into consideration people from all demographics, all socioeconomic levels and educational backgrounds. This is truly what we mean when we talk about product inclusivity. Further, I think about how we are helping to nudge people towards making better security and safety decisions to help protect themselves, as well as educating them on the why.
CCPA and GDPR regulations and concepts like Privacy by Design, I knew I was ready for something that was closer to
the human interface with technology, which led me to Robinhood's Trust & Safety team.
In your opinion, what are the biggest issues facing social media?
I remember joining Facebook in college. At that time, you were just connected to people at your particular school (in my
case, University of California, Berkeley). It was a fun way to interact with more people and "poke" them so you could start
a conversation. It did not really serve as a replacement for real social engagement, but rather an extension of it. Over
time, social media platforms began to serve as catalysts for entire movements and as pulpits for the unheard.
We are at such a defining moment for social media as it relates to mis/disinformation. When you take our human limitations
in processing large amounts of information (aka the attention economy) and you compound it with our cognitive biases,
our echo chambers and the various agendas that people want to push, it is a recipe for disaster.
We are seeing that play out daily as it relates to Covid-19 misinformation, as well as in the role it plays in radicalizing
people politically, under the guise of exercising free speech and democratizing voices.
There is so much work to do in terms of creating a whole-of-society effort to combat misinformation and the impacts it
has on our society. Misinformation exacerbates existing societal fissures, and the rapid spread makes it that much harder
to combat. It is so easy to spread, and yet it is like Pandora's box. Once the information is amplified, reeling it back in is
nearly impossible. So the key is preventing the misinformation from getting out there to begin with. And that is the
challenge of this decade.
How do we ensure safety, privacy and freedom of expression all at the same time?
I remember one of my law school professors emphasizing that the First Amendment is the First Amendment for a reason.
Freedom of expression is what we as Americans hold as the most sacred right. It is critical to our American culture and
our democracy that we should maintain a healthy culture of the free exchange of ideas. It is what allows our society to
innovate and progress. But rights also come with responsibilities.
The intellectual discourse around the freedom of speech debate tends to settle around allowing all forms of speech, even
hate speech, to come forward, so that it may be brought to light, refuted, and progress can then be made. In theory, this is
a great way to discredit harmful and hateful ideologies, or just plain wrong information.
But this is premised on the assumption that such ideologies will in fact inherently be discredited before they cause harm. This is not the case. In a pluralistic society such as ours, which allows for such open discourse, we have to constantly remain vigilant to the narratives that are being shared, who is sharing them and the motivation(s) behind them. This is not to suggest that we live in a constantly-teetering-on-the-brink-of-censorship state. But because we know that the ideologies people share online can lead to physical and emotional harm, we all have a responsibility to remain vigilant to the spread of harmful ideologies and narratives.
We cannot be blind to the harms that online discourse can cause in the name of free expression, and we must work at
examining speech and curbing it when necessary, when it begins to infringe on others’ rights to privacy and safety.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
When it comes to improving social media, it is pretty clear to me that this is a whole-of-society responsibility. Although
platforms broker the engagement, there are downstream impacts on all of us. Most people get their news online now, so
media agencies have tailored their stories to fit into the mobile experience. Most companies now have social media
managers that cultivate an online presence for marketing purposes. We have now seen an entire presidency conducted
via Twitter (is it just a coincidence that Twitter's character limit was bumped up to 280 characters during the Trump era?).
When it comes to making social media better, we need to ask ourselves: Who are we making it better for? Everyone has
an angle. Social media platforms have an interest in monetization, so they will constantly anticipate user needs and add
new features. But who is looking out for the user (beyond giving them cool new features)?
PINAL SHAH IMPROVING SOCIAL MEDIA | 69
Using data privacy as an example, most data collection practices done by companies just five years ago were fairly
opaque, and most consumers did not pay much attention to their data. But as European legislation emerged and GDPR
took center stage, companies were forced to be more transparent about what data they collected about users, and to
give consumers more agency over their data.
So the government and lawmakers clearly have a role here, and we cannot just rely on companies to self-regulate. Media
companies have an added responsibility to verify the accuracy of what they share or procure via social media. And
individuals have a responsibility to stop and examine what they are consuming and maintain a healthy dose of skepticism
if it is not coming from a reputable source, especially if it appears to be emotionally charged.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
I think ethics organizations (like the Center for Humane Technology) and think tanks have also really stepped up lately
and have been providing great research, public education and commentary on the pitfalls of lack of oversight when it
comes to social media regulations. I would love to see academia and non-profit orgs partner with public health
organizations to share more knowledge and awareness with the public on how social media impacts our mental and
psychological health. There is currently some awareness out there, but I think we need to see more campaigning done by
public health groups to really create more education around the mental and physical impacts of too much social media
and too much screen time.
And in terms of organizations, one of my favorites is TheBridge. They send out weekly emails that are so informative and
they also host regular talks with experts on really pressing tech policy issues. They’re such an informative and down-to-
earth organization. I would recommend them if you are just getting started in the space, and even if you are tenured. And
of course, All Tech Is Human! I love how they are trying to bring together different facets of society to solve hard
problems, and especially that they are building the responsible tech pipeline.
What do you see as the risk of doing nothing to address the shortcomings of social media?
You cannot manage what you do not measure. We have seen all too well the pitfalls of letting misinformation and
disinformation campaigns spread on social media. If we do not monitor it, we cannot make the connections to its effects.
And if we do not understand the effects, we cannot propose relevant solutions. Doing nothing is not an option.
How does social media look different five years from now?
You know that phrase, "You are not sitting in traffic, you are traffic"? Well, we are not just consuming media, we are
media. Individuals are now micro-influencers with thousands of followers. Social media has gone from the quaint (us sharing pictures with friends and family members) to users creating personal brands and viable businesses solely via social media.
Social media will constantly evolve to keep up with society. We have seen a democratization of platforms in the last
decade, with more and more people having the access and visibility to share their truths and experiences. There is no
turning back the clock. Social media will now always serve as witness to the zeitgeist. More and more, though, we will
start to see a symbiotic relationship as social media will continue to shape how we live our lives. The first time I
understood the power and reach of social media was in 2008, when a student was jailed in Egypt and tweeted that he had
been arrested, prompting efforts to get him released. It was at that moment I knew social media would be a true force. As
a Californian, at the first signs of a shake, I immediately go to Twitter to validate whether or not what I experienced was
an earthquake. Behaviorally, we are wired now to consider social media in our daily lives.
Because of this, I do see a trend towards more privacy, security and trust-building by companies. Platforms understand
that users are becoming more educated about privacy matters and will choose platforms that can protect their privacy.
This is especially true as we move to more video interfaces.
We will also continue to see a wider array of voices, as more and more individuals gain access and create individual
platforms.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
The beauty of the tech industry is that it actually needs people from every single demographic, every cultural and
linguistic background, every nationality, every religion, and every age and gender. This is because global access has
increased dramatically, and as we close the digital divide, every single member of humanity will need to be represented.
And a case for diversity is a business case. Of course, we want companies to hire a wide range of individuals from diverse
backgrounds because it is the right thing to do, but there is a solid business case to be made for representation amongst
the ranks. We have seen the impacts that exclusion has had on product innovation and marketing. You cannot innovate
for and market for a group whose DNA you do not understand. Thus, we need global representation in the local meeting
rooms.
The other great thing about tech is that because it engages so many different parts of our brains, it needs thinkers from
all academic backgrounds.
I am a lawyer by training with a social sciences background, but I think my analytical skills and national security background, and lived experiences of being bicultural, multilingual, and having lived and travelled so much globally, give me a unique perspective. I am also naturally curious and empathetic, so I think about how others might experience technology as well. So although I do not have a "traditional" tech background, I am a value-add for teams when it comes to product design and product inclusion, user experience and proactively identifying trust and safety issues.
But how does someone who has never worked in tech even know that their skills would be valuable? I do think companies need to think more broadly about how to attract people from varying backgrounds, including those who are neurodiverse. Tech problems do not require tech solutions. They require a variety of humans who use these technologies
to develop those solutions.
What makes you optimistic that we, as a society, will be able to improve social media?
Humans always fight for the betterment of society, and social media is no exception. There are so many brilliant people working on these challenges, so I have faith that we will figure out how to make social media less addictive, how to engage with it meaningfully rather than letting it consume our lives, how to balance free expression against hateful ideologies, and how to use it to enrich our lives so that we can learn from each other and keep in touch.
Sydney Weigert
Policy Administration Manager, Business and Legal Affairs, SoundCloud

…high school and early college, then moved in-house to the legal team of a utility company in the city of Philadelphia, then switched back again to criminal law. After receiving my B.A., I moved to Berlin to work on the Trust and Safety team at SoundCloud. I love the world of Trust and Safety; it is exciting and extremely important. The work I did on the team definitely gave me the skills needed for my current position. After a couple of years I decided I wanted to take the next step and have more influence over policymaking, especially given the current climate. I felt it is where my experience best suits me and where I could have the most impact…. So here we are!
hear each other’s voices, struggles and accomplishments. Each area mentioned above is able to bring different insight
and influence to the table, and each complements the others. Governmental oversight works best when platforms,
specifically smaller platforms, are heard. I would therefore hesitate in concluding that one area needs the most
improvement and instead place emphasis on teamwork and alliance.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
In my opinion, projects and initiatives such as Tech Against Terrorism and the Aqaba Process hosted by Jordan do a
fantastic job toward improving social media. We’ve had executives from SoundCloud attend the Aqaba Process, which
has helped foster dialogue around terrorism and extremism and led to fruitful cooperation with companies such as
ActiveFence. These initiatives and organizations are transparent and honest – and therefore impactful. They are there
for guidance, not for gain, and they simply want to help make the internet a better place. Having help, resources and feedback in areas where you may not have detailed knowledge is also extremely useful in educating your team to make informed and responsible decisions, especially for smaller platforms.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
Another million-dollar question! Platforms will most likely always have to police, in some sense. That is to say, we all must
follow the law and be guided by those requirements. At the same time, laws differ by country and region, so that is tough
to navigate since one-size-fits-all is probably not realistic. Beyond that, platforms should do their best to be clear about
what is acceptable for their platform and be thoughtful and consistent in enforcing their policies. I do not believe this is
something governments can solve alone. However, I do believe they can have a positive impact.
Nicole Chi
Co-Founder, Mobius Project

how technology today disproportionately harms vulnerable communities instead of making empathy easier. This led me to my current work at the Mobius Project, where I am so excited to be working with amazing collaborators to seriously think about (and then create!) what tools and resources we need to build a thriving community of practice around mitigating platform abuse.
and we need more spaces that encourage interdisciplinary work rather than creating more rooms full of technologists
trying to solve social problems.
In general, we need more solutions for social media that are built by and for the very people that it has harmed. Block
Party, by Tracy Chou, is a great example; it is an app created to make harassment on social media easier to mitigate. It
solves a real problem because it was built out of her experiences being harassed as a woman of color. I also love projects
like Archive of Our Own (AO3) that give us great case studies for what platforms can look like when they are designed
and developed entirely by the people they serve. AO3 is a fan-built social network with a women-led development team,
built with feminist HCI values in mind.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
I would love to see more people who do not have traditional tech backgrounds, especially people who have personally
experienced harms on social media or have experience navigating complex social issues like conflict resolution or
restorative justice. I know folks who have backgrounds in human rights, policy and community building who are
interested in improving social media, but there are not a lot of avenues for them to do so.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
Zebras Unite, a cooperatively owned movement to build businesses that are better for the world, quotes Rebecca Solnit
in discussing how the “tyranny of the quantifiable” affects venture financing. Measuring impact is important, but I think a
similar bias exists in social media products. What is measurable – clicks and likes – takes precedence over value added to consumers that is harder to quantify, such as the deepening of relationships or the broadening of worldviews. This is reflected
in product metrics and also in algorithms that shape the social media experience. I think giving consumers more power
over their experience is one way we can improve it – platforms are already doing this today by allowing people to
organize their timelines by top or most recent content, but I would love to see more of that. We also need more research
on the effects of algorithms on human behavior and society, both from companies and independent efforts. J. Nathan
Matias has a great article on this entitled “The Obligation to Experiment.”
"I would love to see more people who do not have traditional tech
backgrounds, especially people who have personally experienced harms
on social media or have experience navigating complex social issues
like conflict resolution or restorative justice."
Azza El Masri
NAWA Program Associate at Meedan

I worked in the digital rights space as a campaigner, covering issues related to free speech, misinformation and privacy in the region, which then allowed me to dig deeper into content moderation and platform accountability.

Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what academic or experience backgrounds should be more involved in improving social media?
Just as platforms should take steps to decentralize content moderation practices, conversations about bettering or
improving social media should be global, multidisciplinary and equitable. Civil society groups, human rights defenders,
technologists and independent journalists in countries such as Nigeria, Myanmar, India, Lebanon, Palestine and others
have a lot to bring to this conversation – they just need to be allowed the space to do so.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
We can (hopefully) solve this conundrum by implementing context-driven content moderation policies.
Michelle Cortese
Design Lead Manager, Facebook Reality Labs

me toward art direction in the fashion advertising industry, I rediscovered my passion for using technology to enrich lives when virtual reality began making its way into advertising. After earning a master's degree in creative technology and executing a ton of VR work in the experiential advertising industry, I moved in-house at Facebook with the intention of making the future of communication technology a better place. Here, I have had the opportunity to set inclusive, empowering and safety-driven design paradigms at the outset of early social VR networks, such as Horizon.
features (pleasurable and personal). From here, we inject these principles, at these respective levels, into the fabric of an
application. The short answer: We must embed safety, first and foremost, into the core architecture of anything we build.
That foundation sets a tone to support privacy, and a palpable layer of privacy allows for freedom of expression.
How does social media look different five years from now?
In the coming years, I anticipate we will see growth in what are currently considered unconventional social networks. I
am particularly interested in the rapid growth of videogame social networks. Fortnite and Animal Crossing are platforms
that, while centered around a game, are functionally social networks. As of May 2020, Fortnite reported having 350
million players; Animal Crossing: New Horizons reportedly sold 31 million copies; and VRChat claims tens of thousands
of concurrent users. These are not trivial numbers. And they're a sign of a new generation of social networking: one that
is immersive, creative and not tied to real identity.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
Everyone. This might feel like a cheap answer, but I stand by it. I want to live in a world where everyone has the tech
literacy to understand, criticize and demand more from their devices and software services. I want all elected officials to
understand the complexity of the digital social landscape – and enact policies to protect individuals. I want everyone
involved, because everyone is affected.
What makes you optimistic that we, as a society, will be able to improve social media?
I am actually incredibly optimistic that we, as a society, will materially improve social media in the coming years.
Every day at FRL, I get to see the potential future architecture of communication technology come together. Implicit in that work is an understanding that communication devices and services are an invaluable part of our world. When
designing a new experience, we acknowledge and investigate the impact it may have on the lives of the people who use it,
before they ever use it. This culture of responsible product development is baking itself into the ethos of product design
on an industry level – and that's my source of hope.
and literacy on social media platforms. Common Sense Media, Bark Technologies, and CyberWise are also doing great
work in this area, while the Electronic Frontier Foundation is leading the charge of online privacy protection.
What do you see as the risk of doing nothing to address the shortcomings of social media?
We've already seen the risk of doing nothing to address the shortcomings of social media. Starting with the 2016
election, through a global pandemic, and all the way up to the insurrection on January 6, 2021, social media has been
grossly manipulated in ways we should have been able to see coming and the results have been devastating to our
country and the world.
How does social media look different five years from now?
I think some users will be open to the idea of paid platforms that provide the type of privacy protection and content
moderation that would make your time on the platform a more valuable experience.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
Algorithms are only as effective as the people who develop them. There is still a lot of work to be done to diversify this
area to improve bias.
What makes you optimistic that we, as a society, will be able to improve social media?
I believe society will be able to eventually improve social media because I believe human ingenuity and innovation have
improved society as a whole more often than not. However, I am not as optimistic about what it will actually take to make
those necessary changes. Every time it feels like we may have hit rock bottom with respect to social media, something
else happens that makes it seem like there is no floor. I worry about what has to happen before we all agree that
something more disruptive has to be done and we take actionable steps to do it.
"In my opinion, the biggest issues facing social media are content
moderation and data privacy. Platforms seem to be a free-for-all of
abuse and misinformation, while companies only appear to care about
how much of our data they can sell."
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
I am really excited that Trust and Safety teams and advisory boards have become much more prominent in the last few years. I think that bringing the discussion of platform moderation and safety into common talk of social media is a really good sign.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
To be honest, I think the role of media to educate the general public needs a lot of work. Here's why. First, media in general is, well, general: it is made for the widest audience possible. Second, media is designed to be consumed, and consumed strategically – as in advertising time, network clocks, scheduling and booking. In this way, media uses reductionism and
talking points to deliver nuanced concepts to a public. This requires either the host to be well versed in nearly everything
or a guest to be media trained to distill depth into shorter segments.
The additional issue is the in-built bias – not left or right, but rather the assumption that the audience isn't that smart (I
know this for a fact). So the reductionism employed by media to educate the public is simplified to a nearly unusable
system so that people continue to watch but do not feel the need to activate or participate – just consume. What we need
is on-air talent and guests that have internet literacies or meme literacies or internet fluency. The audience actually
wants this, but the corporate media model does not believe it to be true.
What do you see as the risk of doing nothing to address the shortcomings of social media?
If we do not address the shortcomings of social media, the systems will become recursive. They will inevitably create
their own ecosystems (which, to an extent, they have) and eventually their own universes, cordoned off from the other
social media. We need to address the profit model first and reestablish the user's rights, but then of course create a
functioning way forward for social media that straddles the efforts of communication, connection and speech with
common sense approaches that prevent feedback loops, rabbit holes and dangerous activity.
If you want to go a step further, if we do not address the shortcomings, the market will possibly create a new model, a
more fractured web with distinct media flows that do not enable any cross-flow or connective discourse. The editorial
web, one with unique media flows, will literally result in competing realities.
How does social media look different five years from now?
I would have said something completely different previous to the insurrection. Now, I believe that the conversation has
moved out of the esoteric circles and into public conversation. Having the conversation about social media helps make
using social media more intentional. I think if many of the concerns are addressed, we'll see a more useful approach to
social media. What I actually foresee are new platforms that are not immediately crushed or purchased by the MAAAF
giants and we can use multiple platforms together. Sort of like how Zuckerberg wanted to merge the messaging on the
Facebook products, but without the possibility of complicity in more war crimes.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
This question will be biased, as I come from a technical background and was further educated in the Humanities and Cultural Studies. I think, first, diversity is the top priority. Founders must prioritize bringing BIPOC, people with disabilities and immigrants into their organizations and tech. Second, I think some sort of humanities and critical thinking need to be on the resume
of those involved. Engineers are extremely important, and I think there are engineers who also have knowledge, interest
or experience in critical courses from English to media studies to history to art, etc.
Annie Brown
Founder of Lips

claimed a space where female* voices were allowed, acknowledged and valued. After graduating, I worked for gender equality organizations such as Grameen Bank, Humsafar Trust and Planned Parenthood, and participated in Y Combinator as the Communications Director for SafetyWing (W18).
Tell us about your role:

I am the Founder of Lips. Lips is a novel, alternative social media platform built by and for women, non-binary folks and the LGBTQIA+ community to safely express themselves and sell their work without biased censorship, harassment or plagiarism.

Tell us about your career path and how it led you to your work's focus:

I have been working to create spaces of free expression for womxn for over 10 years. The concept of Lips originated as a project for my Introduction to Women's Studies course at William and Mary University. I realized that there were no spaces on campus where women could express themselves, and especially their sexuality, safely, openly and honestly. Lips asked women to mail in – or anonymously drop into a P.O. box – stories, poetry and artwork expressing their sexuality for the publication. Quickly, the idea became a hit on campus and Lips Zine grew to five other local

These issues cause significant harm to mental and physical health and reproduce existing inequities rather than correcting and solving them. For example, it is common knowledge that many women* deal with self-image issues and that social media has chiefly made it worse. However, a large portion of body-positive content on social media is flagged as inappropriate and removed.

Also, hate groups and trolls have unfortunately become inescapable on social media — trans people being one of
the most vulnerable populations to their abuse — and sadly, most platforms have done little to control or prevent
harmful antics. Their features often reinforce the behavior by removing the creator’s account when reported without
detecting that the actions are motivated purely by hate.
Finally, these issues perpetuate economic inequality. As just one example, female wellness brands are barred from selling and advertising, and sexuality educators and coaches are shadow-banned for inappropriate content. Without better options, these entrepreneurs will continue to be burdened with additional labor that is both wasted time and dollars lost; hundreds of business owners and creators have vented to us about the hours they've spent personally contacting Facebook and Instagram reps about unfair rejections and deleted content.
When creators are faced with violence, shame and censorship by digital platforms, they are not able to reach their fullest
potential as artists, as entrepreneurs and as humans, and we want to change that.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
What does safe space online for women* & the LGBTQIA+ community actually look like? Well, it takes bringing these
marginalized voices in and centering them in the design process, a practice known as Design Justice, led by Professor Sasha Costanza-Chock. These communities are well equipped with the knowledge and tools necessary for their
flourishing, but representatives are not often brought to the decision-making table. As members of the communities
ourselves, using research from user testing and co-design workshops, all of our decisions – from features to the language
of our “Community Guidelines” to how our community is built, maintained and moderated – are made with this
awareness. We host workshops for creators, brands and LGBTQIA+ youth to ensure that the communities for whom we
are building the app are centered, have been involved and will remain involved in the app’s design process at every stage.
We aim to give brands, artists, influencers a place to share content that educates and empowers marginalized
communities. Our “Community Guidelines” were collectively written and are rooted in freedom and fairness for all
groups. We do not tolerate hate speech, harassment, abuse or discrimination of any kind. Lips operates on the philosophy
sometimes referred to as the paradox of tolerance: “If a society is tolerant without limit, its ability to be tolerant is
eventually seized or destroyed by the intolerant" (Karl Popper, the paradox of tolerance). Hate speech is not free speech,
as it forces others into silence in order to survive.
We keep our people safe through a vetting process and only allow those who express understanding and shared
appreciation for our values to contribute their work to the community. Anyone can browse Lips, but only approved
members can post. Our patent-pending “inclusive” AI moderation system and blockchain technology are what will enable
us to maintain the norms and values of the community.
As we grow, we will continue to engage the community in design decisions along the way. Many other platforms deal with
issues of abuse by making it possible to turn features such as messaging, commenting and tagging on and off. Adopting
solely an approach like this is basically a form of virtual victim-blaming. Creators have the choice of continuing to receive
hate or disabling messaging, which usually comes at the expense of their businesses. Lips is much more proactive about
preventing this type of behavior in the first place.
Gates are the data we need here (by vertical, platform, use case), and the path toward the destination – your
community’s guidelines – is the AI model, Contextual AI. The problem at hand, Trust & Safety of the internet, is hard but
exciting.
How do we ensure safety, privacy and freedom of expression all at the same time?
Speed to trust will make the next generation of decacorns. Acknowledging this and acting now is the first order of all
things. Then it comes down to the three "first principles" for ethics by design: safety by design, privacy by design and DEI
by design. If we can bake the principles in the policy, product and platform design phase through ideas, people and
technologies, we will stand a chance to build a better internet. One where we can ensure safety, privacy and freedom of
diversity at the same time.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
It is a shared responsibility across regulators, platforms, media and users. The fundamental challenge behind today's chaos is that corporations are assuming a government-like role to enact, promote and enforce community policies. This is an
extremely hard and costly task for platforms. Acknowledging this, a few recommendations to increase ethical
transparency:
1. Collaborations: Technologists share the responsibility to educate and collaborate with policymakers. This will enhance
regulators' understanding of the issues du jour and drive effective policies.
2. Proactive approach: A platform needs to do more than just rely on users to report guideline infractions. This can take
shape in many ways (ideally using contextual AI) to proactively detect behaviors across all content.
3. Established process and consistent enforcement: A platform needs to know how its detection and reporting
capabilities are performing against what’s happening on the platform. There are going to be some cases that can be
automated away. Other cases are going to always require human moderation. And some of those are going to be unclear
gray areas that will be tough to deliberate on. It is important to have an internal process for how to handle these.
By improving policy transparency and content moderation on platforms, we can as an industry stay ahead of the curve of
government regulations.
"If we can bake the principles in the policy, product and platform
design phase through ideas, people and technologies, we will stand a
chance to build a better internet. One where we can ensure safety,
privacy and freedom of diversity at the same time."
Suw Charman-Anderson

campaigning for digital rights in the UK alongside my consulting work, and then founded Ada Lovelace Day in 2009 to campaign for equality for women in STEM. In 2014, I moved to the USA and began working on Ada Lovelace Day full time.
How do we ensure safety, privacy and freedom of expression all at the same time?
The tech industry tends to focus on freedom of expression without considering the responsibilities that come with every
freedom, and it is only by considering our responsibilities that we can fairly balance safety, privacy and freedom of
expression. We have to think about what we owe to each other, what our responsibilities are to one another. Only when
we consider these questions can we begin to formulate appropriate responses to the question of how free our freedoms
can really be, and where they end.
The radical individualism and libertarianism of the American tech industry leads businesses to consider freedom of
expression as an absolute right, rather than considering that it has limits. Where is the boundary between your right to
free speech and my right to freedom from abuse, harassment and violence? Not only do these boundaries need to be
considered and defined, platforms need to decide what to do about transgressors, to apply rules consistently and fairly,
and have an accessible and transparent appeals process. There is no doubt that this is a difficult challenge for companies
with millions or even billions of users that are active across multiple jurisdictions with radically different and sometimes
opposing definitions of acceptable speech, but there is also no doubt that this nettle has not been grasped and that social
media platforms are failing to balance rights and responsibilities.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
There are multiple axes of diversity, and any attempt to diversify the tech industry must address all of these axes. Whilst
I campaign for women's rights, I recognize not just that our work must be intersectional, but that there must also be
broader diversification projects that support communities I cannot reach. Obviously women are badly underrepresented
in tech, and social media companies are no different to any other in the sector. We need more women at all pay scales,
but we also need to see more women from a variety of backgrounds, particularly women of colour, women with
disabilities, and women from working class, immigrant and other under-served communities. Only when we see these
women who bring invaluable perspectives and expertise to the table progressing into senior roles will we start to see
social media companies developing the skill, empathy and insight required to deal with problems such as harassment and
abuse.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
The biggest problem with algorithms is that they take the biases of a small group of individuals – the developers – and
bake them into the system, whilst also removing choice from the user. We need to recognize that content sorting
algorithms exist for the benefit of the platforms, not the user. The idea that they surface valuable content to the user is
nothing but a myth, given how poorly these algorithms perform. While content sorting algorithms serve the business's
profit motive rather than the user's interests, it will be almost impossible to engender the required change. This isn't just
a matter of user experience, but a problem with respect to issues that have serious repercussions for society, such as far-
right radicalisation or the suppression of speech from marginalised groups. It is difficult to see how platforms will solve this problem when it is not in their interest to do so, and I fear that regulatory intervention is needed.
"The biggest problem with algorithms is that they take the biases
of a small group of individuals – the developers – and bake them
into the system, whilst also removing choice from the user. We
need to recognize that content sorting algorithms exist for the
benefit of the platforms, not the user."
Sonja Solomun
Research Director, Centre for Media, Technology and Democracy at McGill University

There seem to be two ways to answer this question of “what is wrong” more generally. One is that it seems resoundingly clear that the economic models underpinning social media are incompatible with the public interest. I think to address “what is wrong,” we also need to look at the norms, the practices and the experience of platforms as something other than a tool. At the end of the day, we experience social media and technology more broadly as many different kinds of things.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
There are so many civil society, academic and community groups and organizations working for fair and accountable
platform governance that are doing such critical work in this area.
I am really excited by the first annual conference of a platform governance research network (Platgov.net). Research on
platform governance often remains fragmented by discipline, methods and regions; or focuses on the most popular
platforms, usually based in the U.S. Too often, work by the most affected communities remains excluded, including so
much critical and justice-oriented work coming out of the Global South. The conference aims to bring those voices together and to build a new research network that coalesces a global conversation about platform governance, while highlighting underrepresented groups and disciplines.
Organizations such as IT for Change, CIS-India, Mnemonic, Digital Africa Research Lab, KICTANet and countless others in the Global South are leading the charge and coming up with innovative ways to govern platforms in more responsible and justice-oriented ways.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
I work in platform governance, where work tends to focus on the dominant U.S. platforms and on the same kinds of
problems that affect mostly North American and European users. That means that critical work on other types of
platforms such as payment platforms, or on problems facing gig workers gets left out of the platform governance
conversations. Research from other disciplines and approaches, like game studies or sex worker rights, which may not be typically associated with platform governance, does not get brought into these conversations. Most importantly, the work
of Global South groups long at the forefront of working for fair and accountable tech governance needs to be brought
into global approaches to the problems we are seeing today (including organizations mentioned above).
More broadly, I think we need to hear more from what different groups using and building social media can teach us
about it, including sex workers (including the work of Zahra Stardust, Gabriella Garcia and Chibundo Egwuatu) and what
other models of relating to technology and ethics of technology look like more broadly (such as Sareeta Amrute [2019] “Of Techno-Ethics and Techno-Affects,” Feminist Review 123(1): 56–73, and Jason Edward Lewis, Noelani Arista, Archer Pechawis and Suzanne Kite [2018] “Making Kin with the Machines,” Journal of Design and Science). Bringing those most
marginalized by technology into the design process is also key (Sasha Costanza-Chock [2020] “Design Justice:
Community-Led Practices to Build the Worlds We Need,” Cambridge, MA: MIT Press).
Anne Collier
Founder of NetFamilyNews.org
…encounter more skepticism than receptivity. I find there are two ways to solve this problem. Pushing out the research, which we do via our sites, speaking and social media channels, isn't enough. The other piece is cross-sector conversation and collaboration, such as what All Tech Is Human is about – because working together is even more persuasive than facts, and a siloed, single-sector approach cannot work when problem-solving for a media environment that is, by definition, social.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
With the pandemic and related mental health effects, ever greater demand for social justice and growing distrust of Big
Tech, social media and the dominant ad-based business model, the conditions have never been more ripe for
experimenting with business and governance models for a social media environment that serves people better.
Diversification is needed – and we are seeing innovation in both the business and investing communities. I am excited to
see the experiments with platform-based and -enabled cooperatives, or “platform cooperativism” (platform.coop);
private vertical-interest digital communities such as 2Swim.plus; user-governed decentralized communities such as Mastodon and decentralized autonomous organizations (DAOs) on Aragon (aragon.org); and blockchain-enabled transparency and
accountability. I am thankful that people are challenging what scholars in Australia have called the “control paradigm,”
whether manifest in digital media or governments.
How do we ensure safety, privacy and freedom of expression all at the same time?
Societies do not have the answer to this question yet. Especially in the US, we have not figured out how to regulate social
media, even as some social media platforms are calling for regulation! We need to figure this out in this country (I think of
ideas being put forth by University of Toronto law Prof. Gillian Hadfield and what Australia’s eSafety Commissioner,
Julie Inman-Grant, and her office are modeling).
Just in the past few months, European regulators hit a logjam in their efforts to protect privacy and safety at the same
time, to the point where Microsoft, Google, LinkedIn, Yubo and Roblox felt the need to sign a joint statement saying they
would continue to detect, remove and report online child sexual abuse content despite Europe's new law (requiring
confidentiality of communications data on devices other than phones). But promising experiments are being proposed
and tried, convening experts in all three and other fields, such as Social Media Councils and TSPA-like professional
associations for workers who deal with all sorts of content. And Facebook spawned, funded and spun off an Oversight
Board as a new, experimental form of regulation. We need to stay tuned.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
Regulation on this side of the Atlantic is what I see as needing the most improvement – with an eye to innovation in
regulation, for example, law professor Gillian Hadfield's concept of "super-regulation". It is useful to consider all
potential solutions, such as revising Sect. 230 and antitrust action, yes, but not as the only possible solutions. There
needs to be a conscious effort to be less reactive to public fears and less focused on revising old models. And as for old
models, regulation and legislation need to be newly flexible – have built-in expiration dates or mandate being revisited so
as to keep up with changing technology.
What people and organizations do you feel are doing a good job toward improving social media? Why/how would you
say their work is helping?
If you mean "improving social media" in the broader sense of improving people's experiences in our new, very social,
media environment, I feel Internet helplines – such as the Europe-wide network of helplines instituted by the European
Commission over a decade ago, New Zealand's Netsafe and Australia's Office of the eSafety Commissioner – are doing a
great deal to improve young people's experiences with social media. More need to be established so that psychosocial
Internet help – independent of the Internet industry but in cooperation with it – could form a network that covers all the
planet's time zones – and counterbalances law enforcement responses to online harm.
Helplines take different forms in different countries – from long-standing services such as Child Focus in Belgium, which
added Internet help to existing child help services, to new ones that support the well-established offline services of the
likes of Canada’s Kids Help Phone – but societies need to be sure that help for children, and other vulnerable groups,
includes expertise in social media and technologies (Internet helplines can also provide psychosocial and Internet
expertise to law enforcement and social services). A way-after-the-fact “appeals court” like the Oversight Board
Facebook created is great and serves an important purpose but is also way too late for the harms social media users can
experience. Users experiencing online harm need and deserve realtime – or near-realtime – help independent of
industry, and we have great models for this in a number of countries now, including eSafety in Australia, Netsafe in New
Zealand and Internet helplines in a number of EU countries.
ANNE COLLIER IMPROVING SOCIAL MEDIA | 93
How does social media look different five years from now?
Big Tech will remain, but alongside the giant platforms, there will be many more media options in five years, including
private, vertical-interest, member-governed membership communities both for-profit and non-profit, on the Web, on
the blockchain, etc. We're already seeing examples, from Mastodon to DAOs (decentralized autonomous organizations)
on Aragon.org to platform cooperatives with physical presences. All of them need to be embracing Safety by Design, but
it’s great that they’re all about giving power to their users, involving them in governance and, at least to a degree, giving
users ownership of their own data. This is actually a pretty exciting moment in media history.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
Psychology, social science, media studies, neuroscience, anthropology, constitutional law, criminology, human rights
(including children’s rights), pedagogy/andragogy, child/adolescent development, pediatrics – to name just a handful. I
would say any field that tracks human development and sociality in the digital age, mirroring the methodology of a multidisciplinary report on bullying and cyberbullying by the National Academies in 2016. Just as the name All Tech Is Human
implies, social media is just as much about our humanity as our technology, if not more. Social media is global, too; we
can’t ever think together in terms of only one country or society.
What makes you optimistic that we, as a society, will be able to improve social media?
MLK's “the arc of history... bends toward justice.” When bad stuff happens, such as Cambridge Analytica, election
manipulation and disinformation and the massive cyber attack on the US that came to light late last year, people –
advocates, activists, innovators, investors, researchers, students, educators, parents, policymakers and pundits – call for
and effect change. It takes time, but change happens, at least in many societies around the world. It just may not happen
where and as we most want it to right now. An example of meaningful change for a huge sector of humanity – children
and young people, who represent fully one-third of Internet users worldwide – is General Comment 25, bringing their
digital rights to the more than 30-year-old UN Convention on the Rights of the Child; the UN Committee on the Rights of the Child announced the General Comment's adoption earlier this month. The US is the only country on the planet that hasn't
ratified the UNCRC, unfortunately, so I’m hoping the “arc of history” will include that development and US-based
Internet corporations will honor minors’ digital rights in any case.
"[T]he conditions have never been more ripe for experimenting with
business and governance models for a social media environment that
serves people better. Diversification is needed – and we are seeing
innovation in both the business and investing communities."
Tell us about your career path, and how it led you to your work’s focus:
I was a movie executive and then got a PhD in child development. I am combining my expertise in the entertainment industry and academics to support positive youth development.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage with platforms. In your opinion, what area do you think needs the most improvement?
LEARNING FROM THE COMMUNITY
Charlotte Willner
…community of people who can listen and know what it is like.
Lawrence Ampofo
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
What do you see as the risk of doing nothing to address the shortcomings of social media?
Widespread social degradation in the form of groups with nefarious intentions using these platforms for their own ends,
whether asymmetric warfare, disinformation or market manipulation. The potential consequences are grave indeed for the
world and humanity. Addressing them is strategically important.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
Psychologists and governance experts should have more influence in product decisions if social media is to thrive.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
There will always be a balancing act around policing platforms. It is unclear whether nation states will be able to solve
these problems, but perhaps this will be solved by supranational organizations.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
I see that algorithms can have a strong impact on people's everyday experiences, both online and offline. They can play
an influential role in shaping people's beliefs, decisions and life experiences. The solutions to ensuring that algorithms work harmoniously with society are manifold, but the best first step is to put high-level multi-stakeholder engagement in place in companies so that the best possible product is produced.
What makes you optimistic that we, as a society, will be able to improve social media?
I believe that, as the community of people involved in building technologies and discussing the real improvement of social media grows, it will bring social media to its stated goal of being a real force for global good.
Lisa Thee
Data for Good Practice Lead
…the crime of human trafficking on their platforms. The most visible outcome of this legislation was Backpage.com losing the CDA 230 protections that, until 2020, had prevented victims from holding it accountable for profiting from their abuse.
Gillian K. Hadfield
…actionable and both locally and globally relevant to the challenge of building safe, responsible and inclusive AI and other advanced technologies.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
Most solutions call for governments to conduct fairly traditional oversight and rulemaking. But I am most excited about
new ideas that leverage technology and markets. For example, Ron Bodkin, our Engineering Lead at the Schwartz
Reisman Institute, is spearheading an initiative to focus on what machine learning systems are optimizing for – especially
recommendation systems. Ron’s group will explore what goes into the design of optimization objectives (looking at both
engineering decisions and business incentives) and how better objective functions could mitigate undesirable outcomes.
This is a pragmatic and transparent approach that can be implemented today. We want to show what’s possible right
now and to spur innovation. I think this is exciting. I am also talking to researchers about ideas I have developed for
“regulatory markets.” We know that governments do not have the capacity or speed to keep up with technological
developments, so we need to create market incentives for new regulation ideas.
We can do this by creating a market for licensed regulatory services and requiring social media companies to purchase
independent regulatory oversight services. Regulators would be required to show that their systems achieve goals set by
governments. So rather than passing a law that specifies what social media platforms can and cannot do (as a bill
currently before the U.S. Senate proposes), governments would license private regulatory companies whose methods
demonstrably reduce excessive and harmful engagement on social media. Governments then regulate the regulators—
who are incentivized to do the research and tech development we need.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
A big part of my research is on the innovation of governance methods, so I would say we need the involvement of both
governments and platforms most urgently. But, what’s key here is that they must do things very differently. We need to
rethink the entire relationship between governments and corporations, and we need to involve diverse parties like
standards organizations, social science and humanities scholars, and public policy experts in this endeavor. We also know
that self-regulation simply does not work.
The bulk of responsibility should not rest on citizens to fully understand and think critically about powerful technologies.
Certainly, media literacy and public education are important, but primary reliance on these is a sort of victim-blaming.
One of my colleagues at the Schwartz Reisman Institute, Lisa Austin, has written about how the “individual consent”
model – one in which people are expected to, for example, read, understand and consent to the use or disclosure of their
personal information – is simply untenable and unrealistic in light of the complexity and opacity of current data flows and
data ecosystems. Who reads – let alone understands – terms and conditions on apps and other digital products?
Instead, the builders of technologies and the governments, whose job it is to protect people, need to imagine a new kind
of collaborative governance framework. That’s why my vision of innovative governance includes both tech- and market-
based regimes. I suppose that’s a more advanced version of “governmental oversight,” but that’s where I think our best
option lies.
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
If we find more effective means of regulating with some of the methods I have mentioned, I really believe we’ll get better
at finding the right balance. But it is important to remember that we do a lot of this kind of political toggling around all
important public issues, not just this one. So there is no reason to think this will be any different.
The tech world, of course, would like to wish away politics. But politics is not going anywhere, and we need to get better
at making this complex question about politics and, by extension, about people. We often see contested public decision-
making in democratic institutions (appellate courts, election observation, etc.) and that’s a fundamental part of a healthy
democracy. There is rarely a decision that affects a large part of the public that does not get scrutinized – and rightfully
so.
The current problem is that the power of Big Tech swamps our regulatory and democratic methods. Consequently, these
companies have outsized power in these kinds of decision-making processes. That’s what we need to change.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
The problem is not algorithms themselves – or how large their role is. There are “good” algorithms and “bad” ones.
The crux of this issue is how we build those algorithms. What exactly are we optimizing for? Are we optimizing too much?
There is an axiom in economics that says, when you cannot measure everything you care about, you have to be careful
about over-optimizing for the things you can measure. In other words, we need to look at results with some skepticism
when we know that not all inputs were available for measurement. Take teacher incentives, for example—we can
measure student performance on standardized tests, but not as easily measure creativity or moral growth. If we tie
teacher incentives to standardized tests, we pay too much attention to optimizing student test performance, and not
enough to the things we cannot measure. The overall outcome is worse for everyone.
Excessive optimization is a problem we're facing in multiple domains – scheduling of retail workers, for example. Just
because we can optimize with data does not mean we should.
The fact that we are building very powerful algorithms that play a large role in dictating users’ experiences on social
media is not, in itself, a problem. What we need is a complex incentive structure – and corollary research and policy
infrastructures – that truly understand the power of these algorithms in order to nurture and shape them into the kinds
of algorithms we want to be influential—the kinds that reflect human values.
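The point that outcomes hinge on the objective function rather than on the mere existence of the algorithm can be illustrated with a toy sketch. Everything here is hypothetical – the item names, the scores and the weights are invented for illustration and do not describe any platform's actual system – but it shows how changing only the objective, not the ranking machinery, changes what rises to the top of a feed.

```python
# Toy feed ranker (hypothetical): the score blends predicted engagement
# with a penalty on predicted harm. The ranking code never changes;
# only the objective (via harm_weight) does.

def score(item, harm_weight):
    """Rank score = predicted engagement minus a weighted harm penalty."""
    return item["p_engage"] - harm_weight * item["p_harm"]

def rank_feed(items, harm_weight=0.0):
    """Order candidate items by the chosen objective, best first."""
    return sorted(items, key=lambda it: score(it, harm_weight), reverse=True)

candidates = [
    {"id": "outrage_bait", "p_engage": 0.9, "p_harm": 0.8},
    {"id": "news_report",  "p_engage": 0.6, "p_harm": 0.1},
    {"id": "cat_video",    "p_engage": 0.7, "p_harm": 0.0},
]

# Optimizing engagement alone surfaces the most provocative item first...
engagement_only = rank_feed(candidates, harm_weight=0.0)
# ...while a harm-aware objective demotes it without removing it.
harm_aware = rank_feed(candidates, harm_weight=0.5)
```

With `harm_weight=0.0` the provocative item wins on raw engagement; with `harm_weight=0.5` it drops to the bottom even though it was never censored – which is the sense in which designing better objective functions, rather than banning algorithms, can mitigate undesirable outcomes.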
Paloma Viejo
Research Assistant/Post Doc
…race-critical theory and critical social studies. By 2014, Professor Eugenia Sapiera of Dublin City University School of Communications had opened a PhD research position in racism and hate speech in online environments. I was selected as the PhD candidate to research the conditions of possibility for the creation and circulation of racist material in social media. Inquiring into the notion of hate speech led me to look at the evolution of the mechanisms put in place over time to “control hate,” particularly in the period between 1940 and the 2010s (from the drafting process of the Declaration of Human Rights to the time of social media), by looking into the principles and values that underpin each actor who has regulated hate.
In this particular case, Facebook has invited us to post anything we want, whatever is on our mind, and that potentially
includes hateful content. Yes, we have the Community Standards forbidding specific expressions and automatic
detection to stop them. However, operationally speaking, those are activated once the content is flowing in the platform
– once the word is out. That is only a small example of how Facebook’s Principles and Values affect how we interact. We
could also talk about how Facebook’s value of Equality determines the policy definition of hate speech and embraces a
post-racial understanding of hate speech.
What "solutions" to improving social media have you seen suggested or implemented that you are excited about?
What do we mean by improving? Do we mean adding more product solutions designed upon the same principles? Or do
we mean altering the conditions of possibility for hateful content to be on the platform? If it is the first case, I can say I am
excited to see how Facebook will expand its product solutions to “advance racial justice” (see [Mark] Zuckerberg’s post
on June 5th, 2020). It is a new project currently led by Fidji Simo, head of the Facebook app, and Ime Archibong, who is in
charge of Product Experimentation on Facebook.
If by improving, we mean altering the conditions of possibility for hateful content on the platform, platforms like
Facebook would have to change enormously, to the extent, I argue, that they would no longer be the platforms we know.
Therefore, it would no longer be an improvement but a change. I am inquisitive to know how building platforms with
different values would affect the way we connect and communicate.
How do we ensure safety, privacy and freedom of expression all at the same time?
When it comes to ensuring safety and freedom of expression, the fact is that Facebook already does. It rests on a technicality, but one I find fascinating.
Tacitly, Facebook makes a distinction between freedom of expression and freedom of information. If we look closely, the mechanisms and techniques that Facebook has implemented to provide safety do not dictate what users can say. Voices are left intact; the mechanisms mostly interfere with how users receive and disseminate information. Take a look:
Zuckerberg summarized this well in 2017: “Freedom means you do not have to ask permission first, and that by default
you can say what you want. If you break our community standards or the law, then you're going to face consequences
afterwards. We won't catch everyone immediately, but we can make it harder to try to interfere.” (Zuckerberg, Mark, 21
September 2017).
As such, freedom of expression and safety are ensured. Perhaps we should start talking specifically about freedom of information. I actually think that, to talk about privacy, we will need to open a different question, but to an extent it is also linked to circulation: the lower your visibility, the lower the circulation of your content, although that is not guaranteed. You would have to rely on your close contacts not to circulate a post whose privacy matters to you.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
Governmental oversight, no doubt. I like Suzor's (2019) suggestion that terms of service should be answerable to general law. It would affect community standards, I guess. Furthermore, I would say Facebook would be grateful for it. They clarify that they do not want to be the arbiters of discrimination, nor the arbiters of truth. That, at least, is what they say publicly, and I don't have arguments to prove that what they – Facebook – say is not what they believe.
What makes you optimistic that we, as a society, will be able to improve social media?
It makes me feel optimistic that we will keep testing different forms of connecting digitally. Not sure if it has to be on a
platform. I do not see why we cannot own our data and share it with whoever we want. I would love to have a small data
center in my kitchen, right beside my toaster.
"It makes me feel optimistic that we will keep testing different forms of
connecting digitally. Not sure if it has to be on a platform. I do not see
why we cannot own our data and share it with whoever we want. I
would love to have a small data center in my kitchen, right beside my
toaster."
Justin Hendrix
Editor at Tech Policy Press
…satisfy incentives that promote an unhealthy information ecosystem. Some of the companies in that ecosystem are more honest with themselves and with the public than others are about that reality. But while I am a pessimist in the short term, I am an optimist in the long term, for a few different reasons. The problems we face are now much better understood than they were a few years ago. Investments in research, the efforts of journalists and activists and the bravery of individuals across the world in raising awareness of the real violence and damages they have experienced have created the conditions for a renewal I believe is just getting underway.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
The one that needs the most attention at present is governmental oversight, followed closely by the responsibility of the
platforms. That is because the responsibility of the platforms needs to be defined by governments in liberal democracies.
What people and organizations do you feel are doing a good job toward improving social media?
I would point to the excellent ideas that just came out of the New Public festival, and the new partnership described by
Nantina Vgontzas and Meredith Whittaker between "militant workers, engaged social movements, progressive
politicians, radical lawyers and critical researchers" who want to develop a new future. We need to put our attention on
what comes next.
What do you see as the risk of doing nothing to address the shortcomings of social media?
It is wrong to blame social media for every problem in the public sphere; but it is equally wrong not to ascribe some blame
to these massive platforms for the fact that democracy is losing ground around the world. We risk losing this form of
governance, which is one of humanity's greatest achievements.
What models do you see coming on line for providing a digital community (beyond today’s ad-based, extraction
model) – platform cooperatives? Decentralized Autonomous Organizations (DAOs)? For example, are there promising
applications for the blockchain?
I am certainly interested in some of these new models. The subscription economy in media has its advantages, for
instance, and the platform co-op vision seems worth continuing to build. But the dominant model, the capitalist,
attention economy model, will be hard to displace. That is why efforts to hack it such as the ideas proposed at New Public
are important.
How does social media look different five years from now?
I reckon we will see continued innovation. At least two or three new platforms will be prominent. We may also see more
fragmentation into peculiar, narrower communities such as Parler. I also reckon that ubiquitous 5G networks will
drive new modes of interaction – Clubhouse may be an early example, Spatial another. Expect more media-rich
experiences.
Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what
academic or experience backgrounds should be more involved in improving social media?
I'll point to what Courtney Cogburn and Desmond Patton at the Columbia School of Social Work often say: We need
more social workers in tech! They will help us identify the issues and ideas that are important to society that mainstream
technology companies and their engineers may regard as “fringe.”
Will we (all) ever be able to solve the conundrum around whether the platforms are policing too much or too little (de-
platforming heads of state vs. not protecting the vulnerable enough)? Can governments solve this conundrum?
Yes, they can – if they establish bodies that can build precedent, iterate on changes and incorporate new data as they
move forward. We do not need one framework; we need a system that can grow and evolve and change as we learn more
and observe the effects of regulation.
Algorithms play a significant role in the social media experience. What issues do you see with how algorithms play
such a large role in an individual's experience and how can we improve this?
I would point to writers like Shoshana Zuboff or Cathy O'Neil or Kate Crawford or danah boyd who have written
extensively on these issues. Algorithms increasingly shape reality – they are defined by people with biases. We need to
be careful about their application and honest about how and why they are applied.
What makes you optimistic that we, as a society, will be able to improve social media?
I am optimistic because I teach. There are great ideas coming out of the young people I work with. They are always
looking for something to fix. I try and connect them with these various movements that are pushing toward a more just,
equitable, democratic information ecosystem.
"It is wrong to blame social media for every problem in the public
sphere; but it is equally wrong not to ascribe some blame to these
massive platforms for the fact that democracy is losing ground around
the world. We risk losing this form of governance, which is one of
humanity's greatest achievements."
Organizations and Resources
Read about and connect with the many organizations that are involved in improving social media, and utilize the vast amount of resources available.
AllTechIsHuman.org | ImprovingSocialMedia.com
RESOURCE: AccessNow Digital Security Helpline: Services include support for securing users and
NGOs’ technical infrastructure, websites, and social media against attacks (government or otherwise)
Accountable Tech (@accountabletech) "We are facing a crisis of truth. Accountable Tech advocates for
the social media companies at the center of today’s information ecosystem to strengthen the integrity of
their platforms and our democracy." Accountabletech.org
RESOURCE: The Tech Transparency Project (TTP) is a research initiative of Accountable Tech
that seeks to hold large technology companies accountable.
Ada Lovelace Institute (@AdaLovelaceInst) "An independent research institute and deliberative body
with a mission to ensure data and AI work for people and society." Adalovelaceinstitute.org
AI Now Institute (@AINowInstitute) “The AI Now Institute at New York University is an interdisciplinary
research center dedicated to understanding the social implications of artificial intelligence.”
ainowinstitute.org
Algorithmic Justice League (@AJLUnited) "The Algorithmic Justice League’s mission is to raise awareness
about the impacts of AI, equip advocates with empirical research, build the voice and choice of the most
impacted communities, and galvanize researchers, policy makers, and industry practitioners to mitigate AI
harms and biases. We’re building a movement to shift the AI ecosystem towards equitable and accountable
AI." AJL.org
All Tech Is Human (@AllTechIsHuman) "Building the Responsible Tech pipeline by informing & inspiring
the next generation of responsible technologists & changemakers. Building a better tech future by changing
those involved in it, making the pipeline more diverse, multidisciplinary and aligned with the public
interest." AllTechIsHuman.org
RESOURCE: Guide to Responsible Tech: How to Get Involved & Build a Better Tech Future, aka the
"Responsible Tech Guide"
The Asia Foundation (@Asia_Foundation) “The Asia Foundation is a nonprofit international development
organization committed to improving lives across a dynamic and developing Asia. Through their emerging
issues lab they examine shifting labor markets, how to help workers adapt, and setting a policy agenda for a
future of work that promotes prosperity, jobs, and inclusive growth.” AsiaFoundation.org
RESOURCE: Violent Conflict, Tech Companies, and Social Media in Southeast Asia
Aspen Tech Policy Hub (@AspenPolicyHub) "The Aspen Tech Policy Hub is a West Coast policy
incubator, training a new generation of tech policy entrepreneurs. Modeled after tech incubators like Y
Combinator, we take tech experts, teach them the policy process through an in-residence fellowship
program in the Bay Area, and encourage them to develop outside-the-box solutions to society’s problems."
AspenTechPolicyHub.org
Atlantic Council (@AtlanticCouncil) "The Atlantic Council promotes constructive leadership and
engagement in international affairs based on the Atlantic Community’s central role in meeting global
challenges. The Council provides an essential forum for navigating the dramatic economic and political
changes defining the twenty-first century by informing and galvanizing its uniquely influential network of
global leaders." Atlanticcouncil.org
Avaaz (@avaaz) "Avaaz is a global web movement to bring people-powered politics to decision-making
everywhere. Avaaz empowers millions of people from all walks of life to take action on pressing global,
regional and national issues, from corruption and poverty to conflict and climate change. Our model of
internet organising allows thousands of individual efforts, however small, to be rapidly combined into a
powerful collective force." Avaaz.org
Berggruen Institute (@berggruenInst) "Exploring new ideas across tech, governance & philosophy in an
era of great transformations." Berggruen.org
Berkman Klein Center (Harvard) (@BKCHarvard) "The Berkman Klein Center for Internet & Society at
Harvard University is dedicated to exploring, understanding, and shaping the way we use technology."
Cyber.harvard.edu
Betalab (@betaworksVC) “An early-stage investment program for startups aiming to Fix The Internet.”
betaworksventures.com/betalab
Build Tech We Trust (@buildtechtrust) "We are a collective of tech CEOs, activists, changemakers, and
workers who believe the time to act to counter the hate and terrorism is now. We believe technology should
improve the human experience and quality of life for everyone, and that tech companies and
leaders should take responsibility for the harm caused by their platforms and tools. We believe technology
has the power to transform our lives for the better, but only if we prioritize people over the gains for the
few. Today, we invite you to join us in changing the way we build and use tech." BuildTechWeTrust.com
RESOURCE: Contact form to share more information with Build Tech We Trust
Center for Democracy & Technology (@CenDemTech) "The Center for Democracy & Technology.
Shaping tech policy & architecture, with a focus on the rights of the individual...Our team of experts
includes lawyers, technologists, academics, and analysts, bringing diverse perspectives to all of our efforts."
Cdt.org
Center for Humane Technology (@HumaneTech_) "We are a team of deeply concerned technologists,
policy experts, and social impact leaders who intimately understand how the tech industry’s culture,
techniques, and business models control 21st century digital infrastructure. Together with our partners, we
are dedicated to radically reimagining technology for the common good of humanity." Humanetech.com
Center for Information Technology and Policy (CITP) at Princeton University (@PrincetonCITP)
“CITP is an interdisciplinary center at Princeton University. The center is a nexus of expertise in technology,
engineering, public policy, and the social sciences on campus. In keeping with the strong University tradition
of service, the center’s research, teaching, and events address digital technologies as they interact with
society.” citp.princeton.edu
Center for Media, Technology and Democracy at McGill University (@MediaTechDem) "The Centre
produces critical research, policy activism, and inclusive events that inform public debates about the
changing relationship between media and democracy, and that ground policy aimed at maximising the
benefits and minimizing the systemic harms embedded in the design and use of emerging technologies.”
Mediatechdemocracy.com
Center for Technology Innovation at Brookings (@BrookingsInst) "[F]ocuses on delivering research that
affects public debate and policymaking in the arena of U.S. and global technology innovation. Our research
centers on identifying and analyzing key developments to increase innovation; developing and publicizing
best practices to relevant stakeholders; briefing policymakers about actions needed to improve innovation;
and enhancing the public and media’s understanding of technology innovation."
Brookings.edu/center/center-for-technology-innovation/
Center for Technology & Society at the ADL (@ADL) "How do we ensure justice and fair treatment for
all in a digital environment? How do we counter online hate, protect free speech, and use social media to
reduce bias in society? The Center for Technology and Society takes ADL’s civil rights mission and applies it
to the 21st century." ADL.org/who-we-are/our-organization/advocacy-centers/center-for-
technology-and-society
Change the Terms (@changeterms) “To ensure that companies are doing their part to help combat
hateful conduct on their platforms, organizations in this campaign will track the progress of major tech
companies – especially social media platforms – to adopt and implement these model corporate policies
and give report cards to these same companies on both their policies and their execution of those policies
the following year.” changetheterms.org
Clean Up Twitter: An online forum for interdisciplinary discussions of combating online hate on all
platforms.
Common Sense Media (@CommonSense) "Common Sense has been the leading source of entertainment
and technology recommendations for families and schools...Together with policymakers, industry leaders,
and global media partners, we're building a digital world that works better for all kids, their families, and
their communities." Commonsensemedia.org
RESOURCE: Tweens, Teens, Tech, and Mental Health: Coming of Age in an Increasingly Digital,
Uncertain, and Unequal World 2020
Consentful Tech Project "The Consentful Tech Project raises awareness, develops strategies, and shares
skills to help people build and use technology consentfully."
Contract for the Web (@webfoundation) "A global plan of action to make our online world safe and
empowering for everyone." Launched in 2018 by founder Sir Tim Berners-Lee. Webfoundation.org &
Contractfortheweb.org
RESOURCE: The ESG Report: Organized Crime and Terror on Facebook, WhatsApp, Instagram and
Messenger
Coworker.org (@teamcoworker) "At Coworker.org, we deploy digital tools, data, and strategies in service
of helping people improve their work lives. Coworker.org is a laboratory for workers to experiment with
power-building strategies and win meaningful changes in the 21st century economy." Coworker.org
Cyber Civil Rights Initiative (@CCRInitiative) "Empowering victims of nonconsensual porn (NCP) to
become stewards of their own life and doing everything in our power to eradicate NCP altogether.
CCRI’s Mission is to combat online abuses that threaten civil rights and civil liberties. CCRI’s Vision is of a
world in which law, policy and technology align to ensure the protection of civil rights and civil liberties for
all." Cybercivilrights.org
RESOURCE: 2017 Nationwide Online Study of Nonconsensual Porn Victimization and Perpetration:
A Summary Report
CyberPeace Institute (@CyberpeaceInst) "A Cyberspace at peace, for everyone, everywhere." Based in
Geneva, Switzerland. Cyberpeaceinstitute.org
CyberWise (@BeCyberwise) "CyberWise is a resource site for BUSY grownups who want to help youth
use digital media safely and wisely. It is the companion site to Cyber Civics, our comprehensive digital
literacy program for middle school." Cyberwise.org
Dangerous Speech Project (@dangerousspeech) “The Dangerous Speech Project was founded in 2010
to study speech (any form of human expression) that inspires violence between groups of people – and to
find ways to mitigate this while protecting freedom of expression.” Dangerousspeech.org
Data & Society (@datasociety) "Data & Society studies the social implications of data-centric
technologies & automation. We produce original research on topics including AI and automation, the
impact of technology on labor and health, and online disinformation." Datasociety.net/
DemocracyLab (@DemocracyLab) "Nonprofit, open source platform empowering people who use
#technology to advance the public good by connecting skilled #volunteers to #techforgood projects."
Democracylab.org
Design Justice Network (@design__justice) "The Design Justice Network challenges the ways that design
and designers can harm those who are marginalized by systems of power. We use design to imagine and
build the worlds we need to live in — worlds that are safer, more just, and more sustainable. We advance
practices that center those who are normally excluded from and adversely impacted by design decisions in
design processes." Designjustice.org
Digital Wellness Collective (@dwforall) "We enhance human relationships through the intentional use and
development of technology." Digitalwellnesscollective.com
Digital Forensic Research Lab, Atlantic Council (@DFRLab) "Atlantic Council's Digital Forensic
Research Lab. Cultivating a global network of digital forensic analysts (#DigitalSherlocks) to combat
disinformation." Based in Washington, DC. Digitalsherlocks.org
RESOURCE: Reports
RESOURCE: #DQEveryChild
Electronic Frontier Foundation (@EFF) "We're the Electronic Frontier Foundation. We defend your civil
liberties in a digital world." EFF.org
EU Disinfo Lab (@DisinfoEU) "A vibrant home for disinformation activists and experts. EU DisinfoLab is
an independent non-profit organisation focused on tackling sophisticated disinformation campaigns
targeting the EU, its member states, core institutions, and core values." Disinfo.eu
Facing Facts (@FacingFactsEU) “Facing Facts is an innovative programme aiming to tackle the issue of
hate crime and hate speech in Europe. Due to increasing demand for capacity building programmes in this
field by EU Member States, the Facing Facts training offer is now available online
(www.facingfactsonline.eu) and is used by law enforcement and civil society representatives. Multiple
courses in multiple languages address specific aspects of identifying, monitoring and countering hate crime
and hate speech.”
Family Online Safety Institute [FOSI] (@FOSI) "FOSI convenes leaders in industry, government and
the non-profit sectors to collaborate and innovate new solutions and policies in the field of online
safety." Fosi.org/
Fight for the Future (@fightfortheftr) "Fight for the Future is a nonprofit advocacy group in the area of
digital rights founded in 2011. The group aims to promote causes related to copyright legislation, as well as
online privacy and censorship through the use of the Internet." Fightforthefuture.org
RESOURCE: Projects
First Draft News (@FirstDraftNews) "We work to protect communities from harmful information by
sharing tips and resources to build resilience and improve access to accurate information."
Firstdraftnews.org
Future Says (@futuresays_) "Powered by the Minderoo Foundation, Future Says is a new global initiative,
committed to accountability in the tech ecosystem, to rebalancing power, and to reimagining technology in
a pro-public way – built and designed by people for people." FutureSays.org
Global Disinformation Index (@DisinfoIndex) “The Global Disinformation Index (GDI) aims to disrupt,
defund and down-rank disinformation sites. We collectively work with governments, business and
civil society. We operate on three core principles of neutrality, independence and transparency.”
Global Internet Forum to Counter Terrorism (@GIFCT_official) "The mission of the Global Internet
Forum to Counter Terrorism (GIFCT) is to prevent terrorists and violent extremists from exploiting digital
platforms. Founded by Facebook, Microsoft, Twitter, and YouTube in 2017, the Forum was designed to
foster technical collaboration among member companies, advance relevant research, and share knowledge
with smaller platforms. Since 2017, GIFCT’s membership has expanded beyond the founding companies to
include over a dozen diverse platforms committed to cross-industry efforts to counter the spread of
terrorist and violent extremist content online."
RESOURCE: Antecedents of support for social media content moderation and platform regulation:
the role of presumed effects on self and others
Institute for Strategic Dialogue (@ISDglobal) “ISD’s work surveys the wide range of disinformation
tactics used to promote polarisation, to undermine elections and to threaten democratic discourse. This
includes smear campaigns, distortive and deceptive media, and the range of inorganic methods used to
amplify this content to wider audiences." ISDglobal.org
Lincoln Network (@JoinLincoln) “Lincoln Network believes that when technology meets and supports the
cause of liberty, our society wins and our future becomes brighter.” LincolnPolicy.org
Lumen Database "The Lumen database collects and analyzes legal complaints and requests for removal of
online materials, helping Internet users to know their rights and understand the law. These data enable us
to study the prevalence of legal threats and let Internet users see the source of content removals."
MediaJustice (@mediajustice) “MediaJustice (formerly CMJ) fights for racial, economic, and gender justice
in a digital age.” MediaJustice.org
Meedan (@meedan) “Meedan builds digital tools for global journalism and translation. We are a team of
designers, technologists and journalists who focus on open source investigation of digital media and
crowdsourced translation of social media. With commercial, media and university partners, we support
research, curriculum development, and new forms of digital storytelling.” Meedan.com
MIT - The Media Lab (@medialab) "An antidisciplinary research community and graduate program at MIT
focused on the study, invention, and creative use of emerging technologies." Media.mit.edu
The Mozilla Foundation (@mozilla) "The Mozilla Foundation works to ensure the internet remains a
public resource that is open and accessible to us all." Foundation.mozilla.org
NAMLE (@MediaLiteracyEd) "The National Association for Media Literacy Education (NAMLE) is a non-
profit organization dedicated to advancing media literacy education. We define both education and media
broadly." Namle.net
The News Literacy Project (@NewsLitProject) "The News Literacy Project, a nonpartisan national
education nonprofit, provides programs and resources for educators and the public to teach, learn and
share the abilities needed to be smart, active consumers of news and information and equal and engaged
participants in a democracy." Newslit.org
OASIS Consortium (@ConsortiumOasis) "[A]n association of key stakeholders across digital platforms,
media, government, and academia focused on brand and user safety. This includes advocating for brand
and user safety, focusing on actionable terms and standards of behavior, and steering the industry toward
clarity and responsibility." Oasisconsortium.com
One in Tech (@WeAreOneInTech) "One In Tech is focused on the prevalent issues of inequality, inequity,
and bias in technology and digital inclusion affecting under-resourced, under-represented, and under-
engaged populations throughout the world. Our organization works to bridge the global Digital Divide,
which is the gap between people with and those without effective access, resources, and skills to enable
healthy digital engagement with the internet and other digital technology." Oneintech.org
Online Hate Prevention Institute (@OnlineHate) “The Online Hate Prevention Institute (OHPI) is an
Australian Harm Prevention Charity that conducts research, runs campaigns, provides public education,
recommends policy changes and law reform, and seeks ways of changing online systems to make them
more effective in reducing the risks posed by online hate. We work to change online culture so hate in all its
forms becomes as socially unacceptable online as it is in real life.” ohpi.org.au
OnlineSOS (@onlinesos) “Online SOS is a safe place where people can find tools, information and, above
all, empowerment, in the face of online harassment.” Onlinesos.org
RESOURCE: Assess and Take Action: Identify what type of online harassment you’re experiencing,
and take action.
Open Source Researchers of Color (@osroccollective) “We are a radical and ethical collective of
investigators who research and preserve crowd-sourced information. We create resources on
security, privacy, investigating, and archiving social movements.” Osroc.org
Oxford Internet Institute (@oiioxford) "The Oxford Internet Institute is a multi-disciplinary department
of social and computer science dedicated to the study of information, communication, and technology, and
is part of the Social Sciences Division of the University of Oxford, England."
Pew Research Center: Internet & Technology (@pewinternet) "Analyzing the social impact of digital
technologies." Pewresearch.org/internet/
Privacy International “Investigating brands using Facebook for advertising, exposing how difficult it is to
understand how our data's used and demanding Facebook make it easier to exercise our rights."
RESOURCE: Advertisers on Facebook: who the heck are you and how did you get my data?
Prosocial Design Network (@DesignProsocial) “We believe that digital products can be designed to help
us better understand one another. That’s why we are building an international network of behavioral
science and design experts to articulate a better, more prosocial future online; and to disentangle the Web’s
most glaring drawbacks: from misunderstandings to incitements to hatred.” prosocialdesign.org
Pivot For Humanity (@Pivot4Humanity) "We’re working to professionalize the social tech industry and
create a more responsible and accountable Silicon Valley." Pivotforhumanity.com
Public Data Lab (@PublicDataLab) Creator of A Field Guide to “Fake News” and Other Information
Disorders, which explores the use of digital methods to study false viral news, political memes, trolling
practices and their social life online. PublicDataLab.org
Ranking Digital Rights (@rankingrights) "Evaluating the world's most powerful digital platforms and
telecommunications companies on their commitments to #digitalrights." RankingDigitalRights.org
Safer Internet Day (@SaferInternetDay) "Starting as an initiative of the EU SafeBorders project in 2004
and taken up by the Insafe network as one of its earliest actions in 2005, Safer Internet Day has grown
beyond its traditional geographic zone and is now celebrated in approximately 170 countries worldwide.
From cyberbullying to social networking to digital identity, each year Safer Internet Day aims to raise
awareness of emerging online issues and current concerns." SaferInternetDay.org
Santa Clara University, The Internet Ethics program at the Markkula Center for Applied Ethics
(@IEthics) "The Markkula Center for Applied Ethics explores privacy, big data, social media, the "right to be
forgotten," cybersecurity, and other issues in Internet Ethics." Scu.edu/ethics
Stanford Psychiatry’s Center for Youth Mental Health & Wellbeing "Their Media and Mental Health
Initiative co-designs with youth and implements interventions to support the mental health and wellbeing
of young people ages 12-25, including the youth-led #goodformedia project."
Tech Against Terrorism (@techvsterrorism) “[A]n initiative launched and supported by the United Nations
Counter Terrorism Executive Directorate (UN CTED) working with the global tech industry to tackle
terrorist use of the internet whilst respecting human rights.” Techagainstterrorism.org
TechCongress "Tech experts and professionals spend one year with relevant Members or Committees in
the House and Senate. The fellowship's goal is to help Congress make more informed decisions on
technology policy by giving it direct access to technical expertise. At present, only 6 out of 15,000
staffers have a technical background." Techcongress.io
Tech Policy Press (@TechPolicyPress) “The goal of Tech Policy Press is to provoke new ideas, debate and
discussion at the intersection of technology, democracy and policy. We invite you to submit essays, opinion,
reporting and other forms of content for consideration.” techpolicy.press
Tech2025 (@JoinTech2025) "Tech 2025 is a platform and innovation community for learning about, and
discussing, the most consequential emerging technologies that will impact our world in the next 5 years."
Tech2025.com
Tech Transparency Project (@TTP_updates) “TTP is an information and research hub for journalists,
academics, policymakers and members of the public interested in exploring the influence of the major
technology platforms on politics, policy, and our lives.” Techtransparencyproject.org
RESOURCE: Reports
Thorn (@thorn) "[A]n international anti-human trafficking organization that works to address the
sexual exploitation of children. The primary programming efforts of the organization focus on Internet
technology and the role it plays in facilitating child pornography and sexual slavery of children on a global
scale."
RESOURCE: Sound Practices Guide Download
Trust & Safety Professional Association (@tspainfo) "TSPA is a forum for professionals to connect with
a network of peers, find resources for career development, and exchange best practices for navigating
challenges unique to the profession." Tspa.info
WashingTech (@WashingTECH) "As America's "inclusive voice of tech policy", WashingTECH's mission is
to convene diverse technology public policy professionals to defend America's rich diversity with programs
that promote an inclusive narrative about technology's impact on society." WashingTech.org
The Web Foundation (@WebFoundation) "The World Wide Web Foundation was established in 2009 by
web inventor Sir Tim Berners-Lee and Rosemary Leith to advance the open web as a public good and a
basic right. We are an independent, international organisation fighting for digital equality — a world where
everyone can access the web and use it to improve their lives." WebFoundation.org
5Rights Foundation (@5RightsFound) "5Rights Foundation exists to make systemic changes to the digital
world that will ensure it caters for children and young people, by design and default, so that they can thrive.
5Rights works with, and on behalf of, children and young people to reshape the norms of the digital world in
four priority areas: design of service, child online protection, children and young people's rights and data
literacy." 5rightsfoundation.com
[Diagram: Interlocking roles for improving social media]
BASE: Collaboration to inform all nodes.
TECH WORKERS: Greater awareness of their power and role.
POLICYMAKERS: Moving from reactive to proactive.
ADVERTISERS: Recognition of responsibility as the lifeblood of ad-based models.
NEWS MEDIA: Greater ethics, tech literacy and accountability to serve the public.
FUNDERS: Withholding funding for exploitative business practices.
In Summary
This report is a catalyst for change and a resource to begin
transitioning from theorizing to greater action.