Meta in Myanmar (full series)
Between July and October of this year, I did a lot of reading and writing about the role of Meta and Facebook—and the internet more broadly—in the genocide of the Rohingya people in Myanmar. The posts below are what emerged from that work.
The format is a bit idiosyncratic, but what I’ve tried to produce here is ultimately a longform cultural-technical incident report. It’s written for people working on and thinking about (and using and wrestling with) new social networks and systems. I’m a big believer in each person contributing in ways that accord with their own skills. I’m a writer and researcher and community nerd, rather than a developer, so this is my contribution.
More than anything, I hope it helps.
Meta in Myanmar, Part I: The Setup (September 28, 2023, 10,900 words)
Myanmar got the internet late and all at once, and mostly via Meta. A brisk pass through Myanmar’s early experience coming online and all the benefits—and, increasingly, troubles—connectivity brought, especially to the Rohingya ethnic minority, which was targeted by massive, highly organized hate campaigns.
Something I didn’t know going in is how many people warned Meta, and in how much detail, and for how many years. This post captures as many of those warnings as I could fit in.
Meta in Myanmar, Part II: The Crisis (September 30, 2023, 10,200 words)
Instead of heeding the warnings that continued to pour in from Myanmar, Meta doubled down on connectivity—and rolled out a program that razed Myanmar’s online news ecosystem and replaced it with inflammatory clickbait. What happened after that was the worst thing that people can do to one another.
Also: more of the details of the total collapse of content moderation and the systematic gaming of algorithmic acceleration to boost violence-inciting and genocidal messages.
Meta in Myanmar, Part III: The Inside View (October 6, 2023, 12,500 words)
Using whistleblower disclosures and interviews, this post looks at what Meta knew (so much) and when (for a long time) and how they handled inbound information suggesting that Facebook was being used to do harm (they shoved it to the margins).
This post introduces an element of the Myanmar tragedy that turns out to have echoes all over the planet, which is the coordinated covert influence campaigns that have both secretly and openly parasitized Facebook to wreak havoc.
I also get into a specific and I think illustrative way that Meta continues to deceive politicians and media organizations about their terrible content moderation performance, and look at their record in Myanmar in the years after the Rohingya genocide.
Meta in Myanmar, Part IV: Only Connect (October 13, 2023, 8,600 words)
Starting with the recommendations of Burmese civil-society organizations and individuals plus the concerns of trust and safety practitioners who’ve studied large-scale hate campaigns and influence operations, I look at a handful of the threats that I think cross over from centralized platforms to rapidly growing new-school decentralized and federated networks like Mastodon/the fediverse and Bluesky—in potentially very dangerous ways.
It may be tempting to take this last substantial piece as the one to read if you don’t have much time, but I would recommend picking literally any of the others instead—my concluding remarks here are not intended to stand alone.
Meta Meta (September 28, 2023, 2,000 words)
I also wrote a short post about my approach, language, citations, and corrections. That brings the total word count to about 44,000.
Acknowledgements
Above all, all my thanks go to the people of the Myanmar Internet Project and its constituent organizations.
Thanks additionally to the various individuals on the backchannel whom I won’t name but hugely appreciate, to Adrianna Tan and Dr. Fancypants, Esq., to all the folks on Mastodon who helped me find answers to questions, and to the many people who wrote in with thoughts, corrections, and dozens of typos. All mistakes are extremely mine.
Many thanks also to the friends and strangers who helped me find information, asked about the work, read it, and helped it find readers in the world. Writing and publishing something like this as an independent writer and researcher is weird and challenging, especially in a moment when our networks are in disarray and lots of us are just trying to figure out where our next job will come from.
Without your help, this would have just disappeared, and I’m grateful to every person who reads it and/or passes it along.
“Thanks” is a deeply inadequate thing to say to my partner, Peter Richardson, who read multiple drafts of everything and supported me through some challenging days in my 40,000-words-in-two-weeks publishing schedule, and especially the months of fairly ghastly work that preceded it. But as ever, thank you, Peter.
Meta in Myanmar, Part IV: Only Connect
The Atlantic Council’s report on the looming challenges of scaling trust and safety on the web opens with this statement:
That which occurs offline will occur online.
I think the reverse is also true: That which occurs online will occur offline.
Our networks don’t create harms, but they reveal, scale, and refine them, making it easier to destabilize societies and destroy human beings. The more densely the internet is woven into our lives and societies, the more powerful the feedback loop becomes.
In this way, our networks—and specifically, the most vulnerable and least-heard people inhabiting them—have served as a very big lab for gain-of-function research by malicious actors.
And as the first three posts in this series make clear, you don’t have to be online at all to experience the internet’s knock-on harms—there’s no opt-out when internet-fueled violence sweeps through and leaves villages razed and humans traumatically displaced or dead. (And the further you are from the centers of tech-industry power—geographically, demographically, culturally—the less likely it is that the social internet’s principal powers will do anything to plan for, prevent, or attempt to repair the ways their products hurt you.)
I think that’s the thing to keep in the center while trying to sort out everything else.
In the previous 30,000 words of this series, I’ve tried to offer a careful accounting of the knowable facts of Myanmar’s experience with Meta. Here’s the argument I outlined toward the end of Part II:
- Meta bought and maneuvered its way into the center of Myanmar’s online life and then inhabited that position with a recklessness that was impervious to warnings by western technologists, journalists, and people at every level of Burmese society. (This is most of Part I.)
- After the 2012 violence, Meta mounted a content moderation response so inadequate that it would be laughable if it hadn’t been deadly. (Discussed in Part I and also [in Part II].)
- With its recommendation algorithms and financial incentive programs, Meta devastated Myanmar’s new and fragile online information sphere and turned thousands of carefully laid sparks into flamethrowers. (Discussed [in Part II] and in Part III.)
- Despite its awareness of similar covert influence campaigns based on “inauthentic behavior”—aka fake likes, comments, and Pages—Meta allowed an enormous and highly influential covert influence operation to thrive on Burmese-language Facebook throughout the run-up to the peak of the 2016 and 2017 “ethnic cleansing,” and beyond. (Part III.)
I still think that’s right. But this story’s many devils are in the details, and getting at least some of the details down in public was the whole point of this very long exercise.
Here at the end of it, it’s tempting to package up a tidy set of anti-Meta action items and call it a day, but there’s nothing tidy about this story, or about what I think I’ve learned working on it. What I’m going to do instead is try to illuminate some facets of the problem, suggest some directions for mitigations, rotate the problem, and repeat.
The allure of the do-over
After my first month of part-time research on Meta in Myanmar, I was absorbed in the work and roughed up by the awfulness of what I was learning and, frankly, incandescently furious with Meta’s leadership. But sometime after I read Faine Greenwood’s posts—and reread Craig Mod’s essay for the first time since 2016—I started to get scared, for reasons I couldn’t even pin down right away. Like, wake-up-at-3am scared.
At first, I thought I was just worried that the new platforms and networks coming into being would also be vulnerable to the kind of coordinated abuse that Myanmar experienced. And I am worried about that and will explain at great length later in this post. But it wasn’t just that.
Craig’s essay about his 2015 fieldwork with farmers in Myanmar captures something real about the exhilarating possibilities of a reboot:
…there is a wild and distinct freedom to the feeling of working in places like this. It is what intoxicates these consultants. You have seen and lived within a future, and believe—must believe—you can help bring some better version of it to light here. A place like Myanmar is a wireless mulligan. A chance to get things right in a way that we couldn’t or can’t now in our incumbent-laden latticeworks back home.
This rings bells not only because I remember my own early-internet spells of optimism—which were pretty far in the rearview by 2016—but because I recognize a much more recent feeling, which is the way it felt to come back online last fall, as the network nodes on Mastodon were starting to really light up.
I’d been mostly off social media since 2018, with a special exception for covid-data work in 2020 and 2021. But in the fall and winter of 2022, the potential of the fediverse was crackling in ways it hadn’t been in my previous Mastodon experiences in 2017 and 2018. If you’ve been in a room where things are happening, you’ll recognize that feeling forever, and last fall, it really felt like some big chunks of the status quo had changed state and gone suddenly malleable.
I also believe that the window for significant change in our networks doesn’t open all that often and doesn’t usually stay open for long.
So like any self-respecting moth, when I saw it happening on Mastodon, I dropped everything I’d been doing and flew straight into the porch light and I’ve been thinking and writing toward these ideas since.
Then I did the Myanmar project. By the time I got to the end of the research, I recognized myself in the accounts of the tech folks at the beginning of Myanmar’s internet story, so hopeful about the chance to escape the difficulties and disappointments of the recent past.
And I want to be clear: There’s nothing wrong with feeling hopeful or optimistic about something new, as long as you don’t let yourself defend that feeling by rejecting the possibility that the exact things that fill you with hope can also be turned into weapons. (I’ll be more realistic—will be turned into weapons, if you succeed at drawing a mass user base and don’t skill up and load in peacekeeping expertise at the same time.)
A lot of people have written and spoken about the unusual naivety of Burmese Facebook users, and how that made them vulnerable, but I think Meta itself was also dangerously naive—and worked very hard to stay that way as long as possible. And still largely adopts the posture of the naive tech kid who just wants to make good.
It’s an act now, though, to be clear. They know. There are some good people working themselves to shreds at Meta, but the company’s still out doing PR tapdancing while people in Ethiopia and India and (still) Myanmar suffer.
When I first realized how bad Meta’s actions in Myanmar actually were, it felt important to try to pull all the threads together in a way that might be useful to my colleagues and peers who are trying in various ways to make the world better by making the internet better. I thought I would end by saying, “Look, here’s what Meta did in Myanmar, so let’s get everyone the fuck off of Meta’s services into better and safer places.”
I’ve landed somewhere more complicated, because although I think Meta’s been a disaster, I’m not confident that there are sustainable better places for the vast majority of people to go. Not yet. Not without a lot more work.
We’re already all living through a series of rolling apocalypses, local and otherwise. Many of us in the west haven’t experienced the full force of them yet—we experience the wildfire smoke, the heat, the rising tide of authoritarianism, and rollbacks of legal rights. Some of us have had to flee. Most of us haven’t lost our homes, or our lives. Nevertheless, these realities chip away at our possible futures. I was born at about 330 PPM; my daughter was born at nearly 400.
The internet in Myanmar was born at a few seconds to midnight. Our new platforms and tools for global connection have been born into a moment in which the worst and most powerful bad actors, both political and commercial, are already prepared to exploit every vulnerability.
We don’t get a do-over planet. We won’t get a do-over network.
Instead, we have to work with the internet we made and find a way to rebuild and fortify it to support the much larger projects of repair—political, cultural, environmental—that are required for our survival.
I think those are the stakes, or I’d be doing something else with my time.
What “better” requires
I wrestled a lot with the right way to talk about this and how much to lean on my own opinions vs. the voices of Myanmar’s own civil society organizations and the opinions of whistleblowers and trust and safety experts.
I’ve ended up taking the same approach to this post as I did with the previous three, synthesizing and connecting information from people with highly specific expertise and only sometimes drawing from my own experience and work.
If you’re not super interested in decentralized and federated networks, you probably want to skip down a few sections.
If you’d prefer to get straight to the primary references, here they are:
Notes and recommendations from people who were on the ground in Myanmar, and are still working on the problems the country faces:
- “The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020
- “Facebook and the Rohingya Crisis,” Myanmar Internet Project, September 29, 2022. This document is online again, at least partially, at the Myanmar Internet Project site, and I’ve used Document Cloud to archive a copy of a PDF version someone affiliated with the project provided directly (thank you to that person).
- “Episode 5: Victoire Rio,” Brown Bag Podcast, ICT4Peace, October 9, 2022.
Two docs related to large-scale threats. The first is the federation-focused “Annex Five” of the big Atlantic Council report, Scaling Trust on the Web. The whole report is worth careful reading, and this annex feels crucial to me, even though I don’t agree with every word.
I’m also including Camille François’ foundational 2019 paper on disinformation threats, because it opens up important ideas.
- “Annex Five: Collective Security in a Federated World,” Scaling Trust on the Web, Samantha Lai and Yoel Roth, the Atlantic Council’s Task Force for a Trustworthy Web, June 2023.
- “Actors, Behaviors, Content: A Disinformation ABC: Highlighting Three Vectors of Viral Deception to Guide Industry & Regulatory Responses,” Camille François, Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression, September 20, 2019.
Three deep dives with Facebook whistleblowers.
- “She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.
- “How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.
- “Facebook is Worse than You Think: Whistleblower Reveals All | Frances Haugen x Rich Roll,” The Rich Roll Podcast, September 7, 2023. (The title is dramatic and the video podcast is a weird cultural artifact, but this is a really solid conversation about what Haugen thinks are the biggest problems and solutions. Most conversations don’t let Haugen dig in this deeply.)
…otherwise, here’s what’s worrying me.
1. Adversaries follow the herd
Realistically, a ton of people are going to stay on centralized platforms, which are going to continue to fight very large-scale adversaries. (And realistically, those networks are going to keep ignoring as much as they can for as long as they can—which especially means that outside the US and Western Europe, they’re going to ignore a lot of damage until they’re regulated or threatened with regulation. Especially companies like Google/YouTube, whose complicity in situations like the one in Myanmar has been partially overlooked because Meta’s is so striking.)
But a lot of people are also trying new networks, and as they do, spammers and scammers and griefers will follow, in increasingly large numbers. So will the much more sophisticated people—and pro-level organizations—dedicated to manipulating opinion; targeting, doxxing, and discrediting individuals and organizations; distributing ultra-harmful material; and sowing division among their own adversaries. And these aren’t people who will be deterred by inconvenience.
In her super-informative interview on the Brown Bag podcast from the ICT4Peace Foundation, Myanmar researcher Victoire Rio mentions two things that I think are vital to this facet of the problem: One is that as Myanmar’s resistance moved off of Facebook and onto Telegram for security reasons after the coup, the junta followed suit and weaponized Telegram as a crowdsourced doxxing tool that has resulted in hundreds of arrests—Rio calls it “the Gestapo on steroids.”
This brings us to the next thing, which is commonly understood in industrial-grade trust and safety circles but, I think, less so on newer networks, which have mostly experienced old-school adversaries—basic scammers and spammers, distributors of illegal and horrible content, and garden-variety amateur Nazis and trolls. Those blunter, less sophisticated harms are still quite bad, but the more sophisticated threats that are common on the big centralized platforms are considerably more difficult to identify and root out. And if the people running new networks don’t realize that what we’re seeing right now are the starter levels, they’re going to be way behind the ball when better organized adversaries arrive.
2. Modern adversaries are heavy on resources and time
Myanmar has a population of about 51 million people, and in the years before the coup, it already had an internal adversary in the military that ran a professionalized, Russia-trained online propaganda and deception operation that maxed out at about 700 people, working in shifts to manipulate the online landscape and shout down opposing points of view. It’s hard to imagine that this force has lessened now that the genocidaires are running the country.
Russia’s adversarial operations roll much deeper, and aren’t limited to the well-known, now allegedly disbanded Internet Research Agency.
And although Russia is the best-known adversary in most US and Western European conversations I’ve been in, it’s very far from being the only one. Here’s disinfo and digital rights researcher Camille François, warning about the association of online disinformation with “the Russian playbook”:
Russia is neither the most prominent nor the only actor using manipulative behaviors on social media. This framing ignores that other actors have abundantly used these techniques, and often before Russia. Iran’s broadcaster (IRIB), for instance, maintains vast networks of fake accounts impersonating journalists and activists to amplify its views on American social media platforms, and it has been doing so since at least 2012.
What’s more, this kind of work isn’t the exclusive domain of governments. A vast market of for-hire manipulation proliferates around the globe, from Indian public relations firms running fake newspaper pages to defend Qatar’s interests ahead of the World Cup and Israeli lobbying groups running influence campaigns with fake pages targeting audiences in Africa.
This chimes with what Sophie Zhang reported about fake-Page networks on Facebook in 2019—they’re a genuinely global phenomenon, and they’re bigger, more powerful, and more diverse in both intent and tactics than most people suspect.
I think it’s easy to imagine that these heavy-duty threats focus only on the big, centralized services, but an in-depth analysis of just one operation, Secondary Infektion, shows that it operated across at least 300 websites and platforms ranging from Facebook, Reddit, and YouTube (and WordPress, Medium, and Quora) to literally hundreds of other sites and forums.
These adversaries will take advantage of decentralized social networks. Believing otherwise requires a naivety I hope we’ll come to recognize as dangerous.
3. No algorithms ≠ no trouble
Federated networks like Mastodon, which eschews algorithmic acceleration, offer fewer incentives for some kinds of adversarial actors—and that’s very good. But fewer isn’t none.
Here’s what Lai and Roth have to say about networks without built-in algorithmic recommendation surfaces:
The lack of algorithmic recommendations means there’s less of an attack surface for inauthentic engagement and behavioral manipulation. While Mastodon has introduced a version of a “trending topics” list—the true battlefield of Twitter manipulation campaigns, where individual posts and behaviors are aggregated into a prominent, platform-wide driver of attention—such features tend to rely on aggregation of local (rather than global or federated) activity, which removes much of the incentive for engaging in large-scale spam. There’s not really a point to trying to juice the metrics on a Mastodon post or spam a hashtag, because there’s no algorithmic reward of attention for doing so…
These disincentives for manipulation have their limits, though. Some of the most successful disinformation campaigns on social media, like the IRA’s use of fake accounts, relied less on spam and more on the careful curation of individual “high-value” accounts—with uptake of their content being driven by organic sharing, rather than algorithmic amplification. Disinformation is just as much a community problem as it is a technological one (i.e., people share content they’re interested in or get emotionally activated by, which sometimes originates from troll farms)—which can’t be mitigated just by eliminating the algorithmic drivers of virality.
Learning in bloody detail about how thoroughly Meta’s acceleration machine overran all of its attempts to suppress undesirable results has made me want to treat algorithmic virality like a nuclear power source: Maybe it’s good in some circumstances, but if we aren’t prepared to do industrial-grade harm-prevention work and not just halfhearted cleanup, we should not be fucking with it, at all.
But, of course, we already are. Lemmy uses algorithmic recommendations. Bluesky has subscribable, user-built feeds that aren’t opaque and monolithic in the way that, say, Facebook’s are—but they’re still juicing the network’s dynamics, and the platform hasn’t even federated yet.
I think it’s an open question how much running fully transparent, subscribable algorithmic feeds that are controlled by users mitigates the harm recommendation systems can do. I think I have a more positive view of AT Protocol than maybe 90% of fediverse advocates—which is to say, I feel neutral and like it’s probably too early to know much—but I’d be lying if I said I’m not nervous about what will happen when the people behind large-scale covert influence networks get to build and promote their own algo feeds using any identity they choose.
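To make “juicing the network’s dynamics” concrete, here’s a deliberately tiny sketch (not any real platform’s ranking code) of the difference between a reverse-chronological timeline and an engagement-weighted one. The posts, weights, and scoring function are all invented for illustration; the point is only that once reactions and reshares feed back into ranking, the most emotionally activating material tends to float to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    age_hours: float   # hours since posting
    likes: int
    reshares: int
    comments: int

# Hypothetical example posts: a calm update and an inflammatory rumor.
posts = [
    Post("calm-update", age_hours=1.0, likes=12, reshares=1, comments=3),
    Post("angry-rumor", age_hours=6.0, likes=240, reshares=85, comments=190),
]

def chronological(feed):
    # Newest first: ordering ignores engagement entirely.
    return sorted(feed, key=lambda p: p.age_hours)

def engagement_ranked(feed):
    # Toy scoring: reshares and comments weighted heavily, decayed by age.
    # The weights are invented for illustration, not taken from any platform.
    def score(p):
        return (p.likes + 5 * p.reshares + 3 * p.comments) / (1 + p.age_hours)
    return sorted(feed, key=score, reverse=True)

print([p.id for p in chronological(posts)])      # ['calm-update', 'angry-rumor']
print([p.id for p in engagement_ranked(posts)])  # ['angry-rumor', 'calm-update']
```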
4. The benefits and limits of defederation
Another characteristic of fediverse (by which I mean “ActivityPub-based servers, mostly interoperable”) networks is the ability for both individual users and whole instances to defederate from each other. The ability to “wall off” instances hosting obvious bad actors and clearly harmful content offers ways for good-faith instance administrators to sharply reduce certain kinds of damage.
It also means, of course, that instances can get false-flagged by adversaries who make accounts on target groups’ instances and post abuse in order to get those instances mass-defederated, as was reportedly happening in early 2023 with Ukrainian servers. I’m inclined to think that this may be a relatively niche threat, but I’m not the right person to evaluate that.
A related threat that was expressed to me by someone who’s been working on the ground in Myanmar for years is that authoritarian governments will corral their citizens on instances/servers that they control, permitting both surveillance and government-friendly moderation of propaganda.
Given the tremendous success of many government-affiliated groups in creating (and, when disrupted, rebuilding) huge fake-Page networks on Facebook, I’d also expect to see harmless-looking instances pop up that are actually controlled by covert influence campaigns and/or organizations that intend to use them to surveil and target activists, journalists, and others who oppose them.
And again, these aren’t wild speculations: Myanmar’s genocidal military turned out to be running many popular, innocuous-looking Facebook Pages (“Let’s Laugh Casually Together,” etc.) and has demonstrated the ability to switch tactics to keep up with both platforms and the Burmese resistance after the coup. It seems bizarre to me to assume that equivalent bad actors won’t work out related ways to take advantage of federated networks.
5. Content removal at mass scale is failing
The simple version of this idea is that content moderation at mass scale can’t be done well, full stop. I tend to think that we haven’t tried a lot of things that would help—not at scale, at least. But I would agree that doing content moderation in old-internet ways on the modern internet at mass scale doesn’t cut it.
Specifically, I think it’s increasingly clear that doing content moderation as a sideline or an afterthought, instead of building safety and integrity work into the heart of product design, is a recipe for failure. In Myanmar, Facebook’s engagement-focused algorithms easily outpaced—and often still defeat—Meta’s attempts to squash the hateful and violence-inciting messages they circulated.
Organizations and activists out of Myanmar are calling on social networks and platforms to build human-rights assessments not merely into their trust and safety work, but into changes to their core product design. Including, specifically, a recommendation to get product teams into direct contact with the people in the most vulnerable places:
Social media companies should increase exposure of their product teams to different user realities, and where possible, facilitate direct engagement with civil society in countries facing high risk of human rights abuse.
Building societal threat assessments into product design decisions is something that I think could move the needle much more efficiently than trying to just stuff more humans into the gaps.
Content moderation that focuses only on messages or accounts, rather than the actors behind them, also comes up short. The Myanmar Internet Project’s report highlights Meta’s failure—as late as 2022—to keep known bad actors involved in the Rohingya genocide off Facebook, despite its big takedowns and rules nominally preventing the military and the extremists of Ma Ba Tha from using Facebook to distribute their propaganda:
…most, if not all, of the key stakeholders in the anti-Rohingya campaign continue to maintain a presence on Facebook and to leverage Facebook and other platforms for influence. As we repeatedly warned the platforms, the bulk of the harmful content we face comes from a handful of actors, who have been consistently violating Terms of Services and Community Standards.
The Myanmar Internet Project recommends that social media companies “rethink their moderation approach to more effectively deter and—where warranted—restrict actors with a track record of violating their rules and terms of services, including by enforcing sanctions and restrictions at an actor and not account level, and by developing better strategies to detect and remove accounts of actors under bans.”
This is…going to be complicated on federated networks, even if I set aside the massive question of how federated networks will moderate messages originating outside their instances that require language and culture expertise they lack.
I’ll focus here on Mastodon because it’s big and it’s been federated for years. Getting rid of obvious, known bad actors at the instance level is something Mastodon excels at—viz the full-scale quarantine of Gab. If you’re on a well-moderated, mainstream instance, a ton of truly horrific stuff is going to be excised from your experience on Mastodon because the bad instances get shitcanned. And because there’s no central “public square” to contest on Mastodon, with all the corporations-censoring-political-speech-at-scale issues those huge ~public squares raise, many instance admins feel free to use a pretty heavy hand in throwing openly awful individuals and instances out of the pool.
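Mechanically, that kind of excision is blunt and simple. Here’s a rough sketch of what it looks like for an admin, assuming a Mastodon server recent enough to expose the admin domain-blocks endpoint and a token with the admin:write:domain_blocks scope (check your own version’s API documentation before relying on this); the instance names and comment text below are placeholders.

```python
import requests

INSTANCE = "https://example-instance.social"     # your server (placeholder)
TOKEN = "YOUR_ADMIN_API_TOKEN"                   # needs admin:write:domain_blocks

# Suspend federation with a hypothetical known-bad server.
resp = requests.post(
    f"{INSTANCE}/api/v1/admin/domain_blocks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={
        "domain": "bad-actor.example",           # placeholder domain
        "severity": "suspend",                   # or "silence" for a softer limit
        "public_comment": "Coordinated hate campaign; see moderation notes.",
    },
)
resp.raise_for_status()
print(resp.json())
```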
But imagine a sophisticated adversary with a sustained interest in running both a network of covert and overt accounts on Mastodon and things rapidly get more complicated.
Lai and Roth weigh in on this issue, noting that the fediverse currently lacks capability and capacity for tracking bad actors through time in a structured way, and also doesn’t presently have much in the way of infrastructure for collaborative actor-level threat analysis:
First, actor-level analysis requires time-consuming and labor-intensive tracking and documentation. Differentiating between a commercially motivated spammer and a state-backed troll farm often requires extensive research, extending far beyond activity on one platform or website. The already unsustainable economics of fediverse moderation seem unlikely to be able to accommodate this kind of specialized investigation.
Second, even if you assume moderators can, and do, find accounts engaged in this type of manipulation— and understand their actions and motivations with sufficient granularity to target their activity—the burden of continually monitoring them is overwhelming. Perhaps more than anything else, disinformation campaigns demonstrate the “persistent” in “advanced persistent threat”: a single disinformation campaign, like China-based Spamouflage Dragon, can be responsible for tens or even hundreds of thousands of fake accounts per month, flooding the zone with low-quality content. The moderation tools built into platforms like Mastodon do not offer appropriate targeting mechanisms or remediations to moderators that could help them keep pace with this volume of activity.… Without these capabilities to automate enforcement based on long-term adversarial understanding, the unit economics of manipulation are skewed firmly in favor of bad actors, not defenders.
There’s also the perhaps even greater challenge of working across instances—and ideally, across platforms—to identify and root out persistent threats. Lai and Roth again:
From an analytic perspective, it can be challenging, if not impossible, to recognize individual accounts or posts as connected to a disinformation campaign in the absence of cross-platform awareness of related conduct. The largest platforms—chiefly, Meta, Google, and Twitter (pre-acquisition)—regularly shared information, including specific indicators of compromise tied to particular campaigns, with other companies in the ecosystem in furtherance of collective security. Information sharing among platform teams represents a critical way to build this awareness—and to take advantage of gaps in adversaries’ operational security to detect additional deceptive accounts and campaigns.… Federated moderation makes this kind of cross-platform collaboration difficult.
I predict that many advocates of federated and decentralized networks will believe that Lai and Roth are overstating these gaps in safety capabilities, but I hope more developers, instance administrators, and especially funders, will take this as an opportunity to prioritize scaled-up tooling and institution-building.
Edited to add, October 16, 2023: Independent Federated Trust and Safety (IFTAS), an organization working on supporting and improving trust and safety on federated networks, just released their Moderator Needs Assessment results, highlighting needs for financial, legal, technical, and cultural support.
Meta’s fatal flaw
I think if you ask people why Meta failed to keep itself from being weaponized in Myanmar, they’ll tell you about optimizing for engagement and ravenously, heedlessly pursuing expansion and profits and continuously fucking up every part of content moderation.
I think those things are all correct, but there’s something else, too, though “heedless” nods toward it: As a company determined to connect the world at all costs, Meta failed, spectacularly, over and over, to make the connections that mattered, between their own machinery and the people it hurt.
So I think there are two linked things Meta could have done to prevent so much damage, which are to listen out for people in trouble and meaningfully correct course.
“Listening out” is from Ursula Le Guin, who said it in a 2015 interview with Choire Sicha that has never left my mind. She was speaking about the challenge of working while raising children, during the years when her partner taught:
…it worked out great, but it took full collaboration between him and me. See, I cannot write when I’m responsible for a child. They are full-time occupations for me. Either you’re listening out for the kids or you’re writing. So I wrote when the kids went to bed. I wrote between nine and midnight those years.
This passage is always with me because the only time I’m not listening out, at least a little bit, is when my kid is completely away from the house at school. Even when she’s sleeping, I’m half-concentrating on whatever I’m doing and…listening out. I can’t wear earplugs at night or I hallucinate her calling for me in my sleep. This is not rational! But it’s hardwired. Presumably this will lessen once she leaves for college or goes to sea or whatever, but I’m not sure it will.
So listening out is meaningful to me for embarrassingly, viscerally human reasons. Which makes it not something a serious person puts into an essay about the worst things the internet can do. I’m putting it here anyway because it cuts to the thing I think everyone who works on large-scale social networks and tools needs to wire into our brainstems.
In Myanmar and in Sophie Zhang’s disclosures about the company’s refusal to prioritize the elimination of covert influence networks, Meta demonstrated not just an unwillingness to listen to warnings, but a powerful commitment to not permitting itself to understand or act on information about the dangers it was worsening around the world.
It’s impossible for me to read the Haugen and Zhang disclosures and not think of the same patterns of dismissing and hiding dangerous knowledge that we’ve seen from tobacco companies (convicted of racketeering and decades-long patterns of deception over tobacco’s dangers), oil companies (being sued by the state of California over decades-long patterns of deception over their contributions to climate change), or the Sacklers (who pled guilty to charges based on a decade-long pattern of deception over their contribution to the opioid epidemic).
But you don’t have to be a villain to succumb to the temptation to push away inconvenient knowledge. It often takes nothing more than being idealistic or working hard for little (or no) pay to believe that the good your work does necessarily outweighs its potential harms—and that especially if you’re offering it for free, any trouble people get into is their own fault. They should have done their own research, after all.
And if some people are warning about safety problems on an open source network where the developers and admins are trying their best, maybe they should just go somewhere else, right? Or maybe they’re just exaggerating, which is the claim I saw the most on Mastodon when the Stanford Internet Observatory published its report on CSAM on the fediverse.
We can’t have it both ways. Either people making and freely distributing tools and systems have some responsibility for their potential harms, or they don’t. If Meta is on the hook, so are people working in open technology. Even nice people with good intentions.
So: Listening out. Listening out for signals that we’re steering into the shoals. Listening out like it’s our own children at the sharp end of the worst things our platforms can do.
The warnings about Myanmar came from academics and digital rights people. They came, above all, from Myanmar, nearly 8,000 miles from Palo Alto. Twenty hours on a plane. Too far to matter, for too many years.
The civil society people who issued many of the warnings to Meta have clear thoughts about the way to avoid recapitulating Meta’s disastrous structural callousness during the years leading up to the genocide of the Rohingya. Several of those recommendations involve diligent, involved, hyper-specific listening to people on the ground about not only content moderation problems, but also dangers in the core functionality of social products themselves.
Nadah Feteih and Elodie Vialle’s recent piece in Tech Policy Press, “Centering Community Voices: How Tech Companies Can Better Engage with Civil Society Organizations” offers a really strong introduction to what that kind of consultative process might be like for big platforms. I think it also offers about a dozen immediately useful clues about how smaller, more distributed, and newer networks might proceed as well.
But let’s get a little more operational.
“Do better” requires material support
It’s impossible to talk about any of this without talking about the resource problem in open source and federated networks—most of the sector is critically underfunded and built on gift labor, which has shaping effects on who can contribute, who gets listened to, and what gets done.
It would be unrealistic bordering on goofy to expect everyone who contributes to projects like Mastodon and Lemmy or runs a small instance on a federated network to independently develop in-depth human-rights expertise. It’s just about as unrealistic to expect even lead developers who are actively concerned about safety to have the resources and expertise to arrange close consultation with relevant experts in digital rights, disinformation, and complex, culturally specific issues globally.
There are many possible remedies to the problems and gaps I’ve tried to sketch above, but the one I’ve been daydreaming about a lot is the development of dedicated, cross-cutting, collaborative institutions that work not only within the realm of trust and safety as it’s constituted on centralized platforms, but also on hands-on research that brings the needs and voices of vulnerable people and groups into the heart of design work on protocols, apps, and tooling.
Maintainers and admins all over the networks are at various kinds of breaking points. Relatively few have the time and energy to push through year after year of precariousness and keep the wheels on out of sheer cussedness. And load-bearing personalities are not, I think, a great way to run a stable and secure network.
Put another way, rapidly growing, dramatically underfunded networks characterized by overtaxed small moderation crews and underpowered safety tooling present a massive attack surface. Believing that the same kinds of forces that undermined the internet in Myanmar won’t be able to weaponize federated networks because the nodes are smaller is a category error—most of the advantages of decentralized networks can be turned to adversaries’ advantage almost immediately.
Flinging money indiscriminately isn’t a cure, but without financial support that extends beyond near-subsistence for a few people, it’s very hard to imagine free and open networks being able to skill up in time to handle the kinds of threats and harms I looked at in the first three posts of this series.
The problem may look different for venture-funded projects like Bluesky, but I don’t know. I think in a just world, the new CTO of Mastodon wouldn’t be working full-time for free.
I also think that in that just world, philanthropic organizations with interests in the safety of new networks would press for and then amply fund collective, collaborative work across protocols and projects, because regardless of my own concerns and preferences, everyone who uses any of the new generation of networks and platforms deserves to be safe.
We all deserve places to be together online that are, at minimum, not inimical to offline life.
So what if you’re not a technologist, but you nevertheless care about this stuff? Unsurprisingly, I have thoughts.
Everything, everywhere, all at once
The inescapable downside of not relying on centralized networks to fix things is that there’s no single entity to try to pressure. The upside is that we can all work toward the same goals—better, safer, freer networks—from wherever we are. And we can work toward holding both centralized and new-school networks accountable, too.
If you live someplace with at least semi-democratic representation in government, you may be able to accomplish a lot by sending things like Amnesty International’s advocacy report and maybe even this series to your representatives, where there’s a chance their staffers will read them and be able to mount a more effective response to technological and corporate failings.
If you have an account on a federated network, you can learn about the policies and plans of your own instance administrators—and you can press them (I would recommend politely) about their plans for handling big future threats like covert influence networks, organized but distributed hate campaigns, and actor-level adversaries, all of which we’ve seen on centralized networks and can expect to see on decentralized ones.
And if you have time or energy or money to spare, you can throw your support (material or otherwise) behind collaborative institutions that seek to reduce societal harms.
On Meta itself
It’s my hope that the 30,000-odd words of context, evidence, and explanation in parts 1–3 of this series speak for themselves.
I’m sure some people, presumably including some who’ve worked for Meta or still do, will read all of those words and decide that Meta had no responsibility for its actions and failures to act in Myanmar. I don’t think I have enough common ground with those readers to try to discuss anything.
There are quite clearly people at Meta who have tried to fix things. A common thread across internal accounts is that Facebook’s culture of pushing dangerous knowledge away from its center crushes many employees who try to protect users and societies. In cases like Sophie Zhang’s, Meta’s refusal to understand and act on what its own employees had uncovered is clearly a factor in employee health breakdowns.
And the whistleblower disclosures from the past few years make it clear that many people over many years were trying to flag, prevent, and diagnose harm. And to be fair, I’m sure lots of horrible things were prevented. But it’s impossible to read Frances Haugen’s disclosures or Sophie Zhang’s story and believe that the company is doing everything it can, except in the sense that it seems unable to conceive of meaningfully redesigning its products—and rearranging its budgets—to stop hurting people.
It’s also impossible for me to read anything Meta says on the record without thinking about the deceptive, blatant, borderline contemptuous runarounds it’s been doing for years over its content moderation performance. (That’s in Part III, if you missed it.)
Back in 2018, Adam Mosseri, who during the genocide was in charge of News Feed—a major “recommendation surface” on which Facebook’s algorithms boosted genocidal anti-Rohingya messages in Myanmar—wrote that he’d lost some sleep over what had happened.
The lost sleep apparently didn’t amount to much in the way of product-design changes, considering that Global Witness found Facebook doing pretty much the exact same things with the same kinds of messages three years later.
But let’s look at what Mosseri actually said:
There is false news, not only on Facebook but in general in Myanmar. But there are no, as far as we can tell, third-party fact-checking organizations with which we can partner, which means that we need to rely instead on other methods of addressing some of these issues. We would look heavily, actually, for bad actors and things like whether or not they’re violating our terms of service or community standards to try and use those levers to try and address the proliferation of some problematic content. We also try to rely on the community and be as effective as we can at changing incentives around things like click-bait or sensational headlines, which correlate, but aren’t the same as false news.
Those are all examples of how we’re trying to take the issue seriously, but we lose some sleep over this. I mean, real-world harm and what’s happening on the ground in that part of the world is actually one of the most concerning things for us and something that we talk about on a regular basis. Specifically, about how we might be able to do more and be more effective, and more quickly.
This is in 2018, six years after Myanmar’s digital-rights and civil-society organizations started contacting Meta to tell them about the organized hate campaigns on Facebook in Myanmar. Meta appears to have ignored those warnings: all of those organized campaigns were still running through the peak of the Rohingya genocide in 2016 and 2017.
This interview also happens several years after Meta started relying on members of those same Burmese organizations to report content—because, if you remember from the earlier posts in this series, they hadn’t actually translated the Community Standards or the reporting interface itself. Or hired enough Burmese-speaking moderators to handle a country bigger than a cruise ship. It’s also interesting that Mosseri reported that Meta couldn’t find any “third-party fact-checking organizations” given that MIDO, which was one of the organizations reporting content problems to them, actually ran its own fact-checking operation.
And the incentives on the click-bait Mosseri mentions? That would be the market for fake and sensationalist news that Meta created by rolling out Instant Articles, which directly funded the development of Burmese-language clickfarms, and which pretty much destroyed Myanmar’s online media landscape back in 2016.
Mosseri and his colleagues talked about it on a regular basis, though.
I was going to let myself be snarky here and note that being in charge of News Feed during a genocide the UN Human Rights Council linked to Facebook doesn’t seem to have slowed Mosseri down personally, either. He’s the guy in charge of Meta’s latest social platform, Threads, after all.
But maybe it goes toward explaining why Threads refuses to allow users to search for potentially controversial topics, including the effects of an ongoing pandemic. This choice is being widely criticized as a failure to let people discuss important things. It feels to me like more of an admission that Meta doesn’t think it can do the work of content moderation, so it’s designing the product to avoid the biggest dangers.
It’s a clumsy choice, certainly. And it’s weird, after a decade of social media platforms charging in with no recognition that they’re making things worse. But if the alternative is returning to the same old unwinnable fight, maybe just not going there is the right call. (I expect that it won’t last.)
The Rohingya are still waiting
The Rohingya are people, not lessons. Nearly a million of them have spent at least six years in Bangladeshi camps that make up the densest refugee settlement on earth. Underfunded, underfed, and prevented from working, Rohingya people in the camps are vulnerable to climate-change-worsened weather, monsoon flooding, disease, fire, and gang violence. The pandemic has compounded the already intense restrictions and difficulties of life in these camps.
If you have money, Global Giving’s Rohingya Refugee Relief Fund will get it into the hands of people who can use it.
The Canadian documentary Wandering: A Rohingya Story provides an intimate look at life in Kutupalong, the largest of the refugee camps. It’s beautifully and lovingly made.
From Wandering: A Rohingya Story. This mother and her daughter destroyed me.
Back in post-coup Myanmar, hundreds of thousands of people are risking their lives resisting the junta’s brutal oppression. Mutual Aid Myanmar is supporting their work. James C. Scott (yes) is on their board.
In the wake of the coup, the National Unity Government—the shadow-government wing of the Burmese resistance against the junta—has officially recognized the wrongs done to the Rohingya, and committed itself to dramatic change, should the resistance prevail:
The National Unity Government recognises the Rohingya people as an integral part of Myanmar and as nationals. We acknowledge with great shame the exclusionary and discriminatory policies, practices, and rhetoric that were long directed against the Rohingya and other religious and ethnic minorities. These words and actions laid the ground for military atrocities, and the impunity that followed them emboldened the military’s leaders to commit countrywide crimes at the helm of an illegal junta.
Acting on our ‘Policy Position on the Rohingya in Rakhine State’, the National Unity Government is committed to creating the conditions needed to bring the Rohingya and other displaced communities home in voluntary, safe, dignified, and sustainable ways.
We are also committed to social change and to the complete overhaul of discriminatory laws in consultation with minority communities and their representatives. A Rohingya leader now serves as Deputy Minister of Human Rights to ensure that Rohingya perspectives support the development of government policies and programs and legislative reform.
From the refugee camps, Rohingya youth activists are working to build solidarity between the Rohingya people and the mostly ethnically Bamar people in Myanmar who, until the coup, allowed themselves to believe the Tatmadaw’s messages casting the Rohingya as existential threats. Others, maybe more understandably, remain wary of the NUG’s claims that the Rohingya will be welcomed back home.
In Part II of this series, I tried to explain—clearly but at speed—how the 2016 and 2017 attacks on the Rohingya, which accelerated into full-scale ethnic cleansing and genocide, began when Rohingya insurgents carried out attacks on Burmese security forces and committed atrocities against civilians, after decades of worsening repression and deprivation in Myanmar’s Rakhine State by the Burmese government.
This week, the social media feed of one of the young Rohingya activists featured in a story I linked above is filled with photographs from Gaza, where two million people live inside fences and walls, and whose hospitals and schools and places of worship are being bombed by the Israeli military because of Hamas’s horrific attacks on Israeli civilians, after decades of worsening repression and deprivation in Gaza and the West Bank by the Israeli government.
We don’t need the internet to make our world a hell. But I don’t think we should forgive ourselves for letting our technology make the world worse.
I want to make our technologies into better tools for the many, many people devoted to building the kinds of human-level solidarity and connection that can get more of us through our present disasters to life on the other side.
Meta in Myanmar, Part III: The Inside View
“Well, Congressman, I view our responsibility as not just building services that people like to use, but making sure that those services are also good for people and good for society overall.” — Mark Zuckerberg, 2018
In the previous two posts in this series, I did a long but briskly paced early history of Meta and the internet in Myanmar—and the hateful and dehumanizing speech that came with it—and then looked at what an outside-the-company view could reveal about Meta’s role in the genocide of the Rohingya in 2016 and 2017.
In this post, I’ll look at what two whistleblowers and a crucial newspaper investigation reveal about what was happening inside Meta at the time. Specifically, the disclosed information:
- gives us a quantitative view of Meta’s content moderation performance—which, in turn, highlights a deceptive PR move Meta routinely uses when questioned about moderation;
- clarifies what Meta knew about the effects of its algorithmic recommendations systems; and
- reveals a parasitic takeover of the Facebook platform by covert influence campaigns around the world—including in Myanmar.
Before we get into that, a brief personal note. There are few ways to be in the world that I enjoy less than “breathless conspiratorial.” That rhetorical mode muddies the water when people most need clarity and generates an emotional charge that works against effective decision-making. I really don’t like it. So it’s been unnerving to synthesize a lot of mostly public information and come up with results that wouldn’t look completely out of place in one of those overwrought threads.
I don’t know what to do with that except to be forthright but not dramatic, and to treat my readers’ endocrine systems with respect by avoiding needless flourishes. But the story is just rough and many people in it do bad things. (You can read my meta-post about terminology and sourcing if you want to see me agonize over the minutiae.)
Content warnings for the post: The whole series is about genocide and hate speech. There are no graphic descriptions or images, and this post includes no slurs or specific examples of hateful and inciting messages, but still. (And there’s a fairly unpleasant photograph of a spider at about the 40% mark.)
The disclosures
When Frances Haugen, a former product manager on Meta’s Civic Integrity team, disclosed a ton of internal Meta docs to the SEC—and several media outlets—in 2021, I didn’t really pay attention. I was pandemic-tired and I didn’t think there’d be much in there that I didn’t know. I was wrong!
Frances Haugen’s disclosures are of generational importance, especially if you’re willing to dig down past the US-centric headlines. Haugen has stated that she came forward because of things outside the US—Myanmar and its horrific echo years later in Ethiopia, specifically, and the likelihood that it would all just keep happening. So it makes sense that the docs she disclosed would be highly relevant, which they are.
There are eight disclosures in the bundle of information Haugen delivered via lawyers to the SEC, and each is about one specific way Meta “misled investors and the public.” Each disclosure takes the form of a letter (which probably has a special legal name I don’t know) and a huge stack of primary documents. The majority of those documents—internal posts, memos, emails, comments—haven’t yet been made public, but the letters themselves include excerpts, and subsequent media coverage and straightforward doc dumps have revealed a little bit more. When I cite the disclosures, I’ll point to the place where you can read the longest chunk of primary text—often that’s just the little excerpts in the letters, but sometimes we have a whole—albeit redacted—document to look at.
Before continuing, I think it’s only fair to note that the disclosures we see in public are necessarily those that run counter to Meta’s public statements, because otherwise there would be no need to disclose them. And because we’re only getting excerpts, there’s obviously a ton of context missing—including, presumably, dissenting internal views. I’m not interested in making a handwavey case based on one or two people inside a company making wild statements. So I’m only emphasizing points that are supported in multiple, specific excerpts.
Let’s start with content moderation and what the disclosures have to say about it.
How much dangerous stuff gets taken down?
We don’t know how much “objectionable content” is actually on Facebook—or on Instagram, or Twitter, or any other big platform. The companies running those platforms don’t know the exact numbers either, but what they do have are reasonably accurate estimates. We know they have estimates because sampling and human-powered data classification is how you train the AI classifiers required to do content-based moderation—removing posts and comments—at mass scale. And that process necessarily lets you estimate from your samples roughly how much of a given kind of problem you’re seeing. (This is pretty common knowledge, but it’s also confirmed in an internal doc I quote below.)
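For readers who haven’t done this kind of work: the estimate falls out of ordinary sampling statistics. Here’s a minimal sketch, with made-up numbers, of how a platform could estimate the prevalence of a category like hate speech from a random sample of human-labeled posts, plus a simple normal-approximation confidence interval.

```python
import math

# Hypothetical labeling run: a random sample of posts is sent to trained
# human reviewers, who mark which ones violate the hate-speech policy.
sample_size = 10_000          # posts sampled at random from the platform
labeled_violating = 140       # posts reviewers marked as hate speech

# Point estimate of prevalence (share of all posts that violate).
p_hat = labeled_violating / sample_size

# 95% confidence interval, normal approximation to the binomial.
z = 1.96
margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)

print(f"Estimated prevalence: {p_hat:.2%} ± {margin:.2%}")
# With these made-up numbers: about 1.40% ± 0.23%.
```

The same labeling pipeline that produces training data for classifiers yields these prevalence estimates as a side effect, which is why it’s reasonable to say the platforms have them.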
The platforms aren’t sharing those estimates with us because no one’s forcing them to. And probably also because, based on what we’ve seen from the disclosures, the numbers are quite bad. So I want to look at how bad they are, or recently were, on Facebook. Alongside that, I want to point out the most common way Meta distracts reporters and governing bodies from its terrible stats, because I think it’s a very useful thing to be able to spot.
One of Frances Haugen’s SEC disclosure letters is about Meta’s failures to moderate hate speech. It’s helpfully titled, “Facebook misled investors and the public about ‘transparency’ reports boasting proactive removal of over 90% of identified hate speech when internal records show that ‘as little as 3-5% of hate’ speech is actually removed.”1
Here’s the excerpt from the internal Meta document from which that “3–5%” figure is drawn:
…we’re deleting less than 5% of all of the hate speech posted to Facebook. This is actually an optimistic estimate—previous (and more rigorous) iterations of this estimation exercise have put it closer to 3%, and on V&I [violence and incitement] we’re deleting somewhere around 0.6%…we miss 95% of violating hate speech.2
Here’s another quote from a different memo excerpted in the same disclosure letter:
[W]e do not … have a model that captures even a majority of integrity harms, particularly in sensitive areas … We only take action against approximately 2% of the hate speech on the platform. Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term.3
Another estimate from a third internal document:
We seem to be having a small impact in many language-country pairs on Hate Speech and Borderline Hate, probably ~3% … We are likely having little (if any) impact on violence.4
Here’s a fourth one, specific to a study about Facebook in Afghanistan, which I include to help contextualize the global numbers:
While Hate Speech is consistently ranked as one of the top abuse categories in the Afghanistan market, the action rate for Hate Speech is worryingly low at 0.23 per cent.5
I don’t think these figures need a ton of commentary, honestly. I would agree that removing less than a quarter of a percent of hate speech is indeed “worryingly low,” as is removing 0.6% of violence and incitement messages. I think removing even 5% of hate speech—the highest number cited in the disclosures—is objectively terrible performance, and I think most people outside of the tech industry would agree with that. Which is presumably why Meta has put a ton of work into muddying the waters around content moderation.
So back to that SEC letter with the long name. It points out that Meta has long claimed that Facebook “proactively” detects between 95% (in 2020, globally) and 98% (in Myanmar, in 2021) of all the posts it removes as hate speech—before users even report them.
At a glance, this looks good. Ninety-five percent is a lot! But we know from the disclosed material that internal estimates put the takedown rate for hate speech at or below 5%, so what’s going on here?
Here’s what Meta is actually saying: Sure, they might identify and remove only a tiny fraction of dangerous and hateful speech on Facebook, but of that tiny fraction, their AI classifiers catch about 95–98% before users report it. That’s literally the whole game, here.
So…the most generous number from the disclosed memos has Meta removing 5% of hate speech on Facebook. That would mean that for every 2,000 hateful posts or comments, Meta removes about 100: roughly 95 automatically and 5 via user reports. In this example, 1,900 of the original 2,000 messages remain up and circulating. So based on the generous 5% removal rate, their AI systems nailed…4.75% of hate speech. That’s the level of performance they’re bragging about.
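Here’s that arithmetic as a tiny script, in case you want to check it yourself. The 5% removal rate and the 95% proactive rate are the figures discussed above; the 2,000-post batch is just an illustration:

```python
# The two numbers Meta talks about, and the one it doesn't.
total_hate_posts = 2_000     # illustrative batch of hateful posts
removal_rate = 0.05          # internal estimate: ~5% of hate speech removed
proactive_rate = 0.95        # public claim: 95% of *removals* are automated

removed = total_hate_posts * removal_rate              # 100 posts
removed_by_ai = removed * proactive_rate               # 95 posts
removed_via_user_reports = removed - removed_by_ai     # 5 posts
still_up = total_hate_posts - removed                  # 1,900 posts

share_caught_by_ai = removed_by_ai / total_hate_posts
print(f"AI caught {share_caught_by_ai:.2%} of all hate speech")  # 4.75%
```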
You don’t need to take my word for any of this—Wired ran a critique breaking it down in 2021 and Ranking Digital Rights has a strongly worded post about what Meta claims in public vs. what the leaked documents reveal to be true, including this content moderation math runaround.
Meta does this particular routine all the time.
The shell game
Here’s Mark Zuckerberg on April 10th, 2018, answering a question in front of the Senate’s Commerce and Judiciary committees. He says that hate speech is really hard to find automatically and then pivots to something that he says is a real success, which is “terrorist propaganda,” which he simplifies immediately to “ISIS and Al Qaida content.” But that stuff? No problem:
Contrast [hate speech], for example, with an area like finding terrorist propaganda, which we’ve actually been very successful at deploying A.I. tools on already. Today, as we sit here, 99 percent of the ISIS and Al Qaida content that we take down on Facebook, our A.I. systems flag before any human sees it. So that’s a success in terms of rolling out A.I. tools that can proactively police and enforce safety across the community.6
So that’s 99% of…the unknown percentage of this kind of content that’s actually removed.
Zuckerberg actually tries to do the same thing the next day, April 11th, before the House Energy and Commerce Committee, but he whiffs the maneuver:
…we’re getting good in certain areas. One of the areas that I mentioned earlier was terrorist content, for example, where we now have A.I. systems that can identify and—and take down 99 percent of the al-Qaeda and ISIS-related content in our system before someone—a human even flags it to us. I think we need to do more of that.7
The version Zuckerberg says right there, on April 11th, is what I’m pretty sure most people think Meta means when they go into this stuff—but as stated, it’s a lie.
No one in those hearings presses Zuckerberg on those numbers—and when Meta repeats the move in 2020, plenty of reporters fall into the trap and make untrue claims favorable to Meta:
…between its AI systems and its human content moderators, Facebook says it’s detecting and removing 95% of hate content before anyone sees it. —Fast Company
About 95 percent of hate speech on Facebook gets caught by algorithms before anyone can report it… —Ars Technica
Facebook said it took action on 22.1 million pieces of hate speech content to its platform globally last quarter and about 6.5 million pieces of hate speech content on Instagram. On both platforms, it says about 95% of that hate speech was proactively identified and stopped by artificial intelligence. —Axios
The company said it now finds and eliminates about 95% of the hate speech violations using automated software systems before a user ever reports them… —Bloomberg
This is all not just wrong but wildly wrong if you have the internal numbers in front of you.
I’m hitting this point so hard not because I want to point out ~corporate hypocrisy~ or whatever, but because this deceptive runaround is consequential for two reasons: The first is that it provides instructive context about how to interpret Meta’s public statements. The second is that it actually says extremely dire things about Meta’s only hope for content-based moderation at scale, which is their AI-based classifiers.
Here’s Zuckerberg saying as much to a congressional committee:
…one thing that I think is important to understand overall is just the sheer volume of content on Facebook makes it so that we can’t—no amount of people that we can hire will be enough to review all of the content.… We need to rely on and build sophisticated A.I. tools that can help us flag certain content.8
This statement is kinda disingenuous in a couple of ways, but the central point is true: the scale of these platforms makes human review incredibly difficult. And Meta’s reasonable-sounding explanation is that this means they have to focus on AI. But by their own internal estimates, Meta’s AI classifiers are only identifying something in the range of 4.75% of hate speech on Facebook, and often considerably less. That seems like a dire stat for the thing you’re putting forward to Congress as your best hope!
The same disclosed internal memo that told us Meta was deleting between 3% and 5% of hate speech had this to say about the potential of AI classifiers to handle mass-scale content removals:
[O]ur current approach of grabbing a hundred thousand pieces of content, paying people to label them as Hate or Not Hate, training a classifier, and using it to automatically delete content at 95% precision is just never going to make much of a dent.9
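It’s worth being precise about what “95% precision” does and doesn’t mean: it measures how often the classifier is right when it does delete something, not how much of the total problem it reaches. A quick sketch with invented counts shows how a 95%-precision classifier and a roughly 5% removal rate can coexist:

```python
# Invented confusion-matrix counts for one slice of content, to show why
# "95% precision" and "we remove ~5% of hate speech" can both be true.
true_positives = 950      # hateful posts the classifier deleted
false_positives = 50      # non-hateful posts it deleted by mistake
false_negatives = 18_050  # hateful posts it never touched

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"precision: {precision:.0%}")  # 95%, the number in the memo
print(f"recall:    {recall:.0%}")     # 5%, the dent it actually makes
```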
Getting content moderation to work for even extreme and widely reviled categories of speech is obviously genuinely difficult, so I want to be extra clear about a foundational piece of my argument.
Responsibility for the machine
I think that if you make a machine and hand it out for free to everyone in the world, you’re at least partially responsible for the harm that the machine does.
Also, even if you say, “but it’s very difficult to make the machine safer!” I don’t think that reduces your responsibility so much as it makes you look shortsighted and bad at machines.
Beyond the bare fact of difficulty, though, I think the more the harm a machine does deviates from what people might reasonably expect of a machine that looks like this, the more responsibility you bear: If you offer everyone in the world a grenade, I think that’s bad, but it also won’t be surprising when people who take the grenade get hurt or hurt someone else. But when you offer everyone a cute little robot assistant that turns out to be easily repurposed as a rocket launcher, I think that falls into another category.
Especially if you see that people are using your cute little robot assistant to murder thousands of people and elect not to disarm it because that would make it a little less cute.
This brings us to the algorithms.
“Core product mechanics”
From a screencapped version of “Facebook and responsibility,” one of the disclosed internal documents.
In the second post in this series, I quoted people in Myanmar who were trying to cope with an overwhelming flood of hateful and violence-inciting messages. It felt obvious on the ground that the worst, most dangerous posts were getting the most juice.
Thanks to the Haugen disclosures, we can confirm that this was also understood inside Meta.
In 2019, a Meta employee wrote a memo called “What is Collateral damage.” It included these statements (my emphasis):
“We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.
If integrity takes a hands-off stance for these problems, whether for technical (precision) or philosophical reasons, then the net result is that Facebook, taken as a whole, will be actively (if not necessarily consciously) promoting these types of activities. The mechanics of our platform are not neutral.”10
If you work in tech or if you’ve been following mainstream press accounts about Meta over the years, you presumably already know this, but I think it’s useful to establish this piece of the internal conversation.
Here’s a long breakdown from 2020 about the specific parts of the platform that actively put “unconnected content”—messages that aren’t from friends or Groups people subscribe to—in front of Facebook users. It comes from an internal post called “Facebook and responsibility” (my emphasis):
Facebook is most active in delivering content to users on recommendation surfaces like “Pages you may like,” “Groups you should join,” and suggested videos on Watch. These are surfaces where Facebook delivers unconnected content. Users don’t opt-in to these experiences by following other users or Pages. Instead, Facebook is actively presenting these experiences…
News Feed ranking is another way Facebook becomes actively involved in these harmful experiences. Of course users also play an active role in determining the content they are connected to through feed, by choosing who to friend and follow. Still, when and whether a user sees a piece of content is also partly determined by the ranking scores our algorithms assign, which are ultimately under our control. This means, according to ethicists, Facebook is always at least partially responsible for any harmful experiences on News Feed.
This doesn’t owe to any flaw with our News Feed ranking system, it’s just inherent to the process of ranking. To rank items in Feed, we assign scores to all the content available to a user and then present the highest-scoring content first. Most feed ranking scores are determined by relevance models. If the content is determined to be an integrity harm, the score is also determined by some additional ranking machinery to demote it lower than it would have appeared given its score. Crucially, all of these algorithms produce a single score; a score Facebook assigns. Thus, there is no such thing as inaction on Feed. We can only choose to take different kinds of actions.11
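To restate the memo’s point in code: a ranked feed assigns every post a score, so “not acting” on a harmful post is itself a ranking decision. This is a minimal sketch of the idea, not Meta’s system; the field names and the demotion factor are invented:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    relevance: float       # score from engagement/relevance models
    integrity_harm: bool   # flagged by an integrity classifier

DEMOTION_FACTOR = 0.3      # invented; 1.0 would mean "take no action"

def rank_feed(posts):
    """Every post gets a score either way; ranking purely on predicted
    engagement is a choice that tends to favor inflammatory content."""
    def score(post):
        s = post.relevance
        if post.integrity_harm:
            s *= DEMOTION_FACTOR  # demote below where relevance alone puts it
        return s
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("family photos", relevance=0.4, integrity_harm=False),
    Post("inflammatory rumor", relevance=0.9, integrity_harm=True),
])
print([p.text for p in feed])  # ['family photos', 'inflammatory rumor']
```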
The next few quotes will apply directly to US concerns, but they’re clearly broadly applicable to the 90% of Facebook users who are outside the US and Canada, and whose disinfo concerns receive vastly fewer resources.
This one is from an internal Meta doc from November 5, 2020:
Not only do we not do something about combustible election misinformation in comments, we amplify them and give them broader distribution.12
When Meta staff tried to take the measure of their own recommendation systems’ behavior, they found that the systems led a fresh, newly made account into disinfo-infested waters very quickly:
After a small number of high quality/verified conservative interest follows… within just one day Page recommendations had already devolved towards polarizing content.
Although the account set out to follow conservative political news and humor content generally, and began by following verified/high quality conservative pages, Page recommendations began to include conspiracy recommendations after only 2 days (it took <1 week to get a QAnon recommendation!)
Group recommendations were slightly slower to follow suit - it took 1 week for in-feed GYSJ recommendations to become fully political/right-leaning, and just over 1 week to begin receiving conspiracy recommendations.13
The same document reveals that several of the Pages and Groups Facebook’s systems recommend to its test user show multiple signs of association with “coordinated inauthentic behavior,” aka foreign and domestic covert influence campaigns, which we’ll get to very soon.
Before that, I want to offer just one example of algorithmic malpractice from Myanmar.
Flower speech
Back in 2014, Burmese organizations including MIDO and Yangon-based tech accelerator Phandeeyar collaborated on a carefully calibrated counter-speech project called Panzagar (flower speech). The campaign—which was designed to be delivered in person, in printed materials, and online—encouraged ordinary Burmese citizens to push back on hate speech in Myanmar.
Later that year, Meta, which had just been implicated in the deadly communal violence in Mandalay, joined with the Burmese orgs to turn their imagery into digital Facebook stickers that users could apply to posts calling for things like the annihilation of the Rohingya people. The stickers depict cute cartoon characters, several of which offer admonishments like, “Don’t be the source of a fire,” “Think before you share,” “Don’t you be spawning hate,” and “Let it go buddy!”
The campaign was widely and approvingly covered by western organizations and media outlets, and Meta got a lot of praise for its involvement.
But according to members of the Burmese civil society coalition behind the campaign, it turned out that the Panzagar Facebook stickers—which were explicitly designed as counterspeech—“carried significant weight in their distribution algorithm,” so anyone who used them to counter hateful and violent messages inadvertently helped those messages gain wider distribution.14
I mention the Panzagar incident not only because it’s such a head-smacking example of Meta favoring cosmetic, PR-friendly tweaks over meaningful redress, or because it reveals plain incompetence in the face of already-serious violence, but also because it gets to what I see as a genuinely foundational problem with Meta in Myanmar.
Even when the company was finally (repeatedly) forced to take notice of the dangers it was contributing to, actions that could actually have made a difference—like rolling out new programs only after local consultation and adaptation, scaling up culturally and linguistically competent human moderation teams in tandem with increasing uptake, and above all, altering the design of the product to stop amplifying the most charged messages—remained not just undone, but unthinkable because they were outside the company’s understanding of what the product’s design should take into consideration.
This refusal to connect core product design with accelerating global safety problems means that attempts at prevention and repair are relegated to window-dressing—or become actively counterproductive, as in the case of the Panzagar stickers, which absorbed the energy and efforts of local Burmese civil society groups and turned them into something that made the situation worse.
In a 2018 interview with Frontline about problems with Facebook, Meta’s former Chief Security Officer, Alex Stamos, returns again and again to the idea that security work properly happens at the product design level. Toward the end of the interview, he gets very clear:
Stamos: I think there was a structural problem here in that the people who were dealing with the downsides were all working together over kind of in the corner, right, so you had the safety and security teams, tight-knit teams that deal with all the bad outcomes, and we didn’t really have a relationship with the people who are actually designing the product.
Interviewer: You did not have a relationship?
Stamos: Not like we should have, right? It became clear—one of the things that became very clear after the election was that the problems that we knew about and were dealing with before were not making it back into how these products are designed and implemented.15
Meta’s content moderation was a disaster in Myanmar—and around the world—not only because it was treated and staffed like an afterthought, but because it was competing against Facebook’s core machinery.
And just as the house always wins, the core machinery of a mass-scale product built to boost engagement always defeats retroactive and peripheral attempts at cleanup.
This is especially true once organized commercial and nation-state actors figured out how to take over that machinery with large-scale fake Page networks boosted by fake engagement, which brings us to a less-discussed revelation: By the mid-2010s, Facebook had effectively become the equivalent of a botnet in the hands of any group, governmental or commercial, that could summon the will and resources to exploit it.
A lot of people did, including, predictably, some of the worst people in the world.
Meta’s zombie networks

Ophiocordyceps formicarum observed at the Mushroom Research Centre, Chiang Mai, Thailand; Steve Axford (CC BY-SA 3.0)
Content warning: The NYT article I link to below is important, but it includes photographs of mishandled bodies, including those of children. If you prefer not to see those, a “reader view” or equivalent may remove the images. (Sarah Sentilles’ 2018 article on which kinds of bodies US newspapers put on display may be of interest.)
In 2018, the New York Times published a front-page account of what really happened on Facebook in Myanmar, which is that beginning around 2013, Myanmar’s military, the Tatmadaw, set up a dedicated, ultra-secret anti-Rohingya hatefarm spread across military bases in which up to 700 staffers worked in shifts to manufacture the appearance of overwhelming support for the genocide the same military then carried out.16
When the NYT did their investigation in 2018, all those fake Pages were still up.
Here’s how it worked: First, the military set up a sprawling network of fake accounts and Pages on Facebook. The fake accounts and Pages focused on innocuous subjects like beauty, entertainment, and humor. These Pages were called things like, “Beauty and Classic,” “Down for Anything,” “You Female Teachers,” “We Love Myanmar,” and “Let’s Laugh Casually.” Then military staffers, some trained by Russian propaganda specialists, spent years tending the Pages and gradually building up followers.17
Then, using this array of long-nurtured fake Pages—and Groups, and accounts—the Tatmadaw’s propagandists used everything they’d learned about Facebook’s algorithms to post and boost viral messages that cast Rohingya people as part of a global Islamic threat, and as the perpetrators of a never-ending stream of atrocities. The Times reports:
Troll accounts run by the military helped spread the content, shout down critics and fuel arguments between commenters to rile people up. Often, they posted sham photos of corpses that they said were evidence of Rohingya-perpetrated massacres…18
That the Tatmadaw was capable of such a sophisticated operation shouldn’t have come as a surprise. Longtime Myanmar digital rights and technology researcher Victoire Rio notes that the Tatmadaw had been openly sending its officers to study in Russia since 2001, was “among the first adopters of the Facebook platform in Myanmar” and launched “a dedicated curriculum as part of its Defense Service Academy Information Warfare training.”19
What these messages did
I don’t have the access required to sort out which specific messages originated from extremist religious networks vs. which were produced by military operations, but I’ve seen a lot of the posts and comments central to these overlapping campaigns in the UN documents and human rights reports.
They do some very specific things:
- They dehumanize the Rohingya: The Facebook messages speak of the Rohingya as invasive species that outbreed Buddhists and Myanmar’s real ethnic groups. There are a lot of bestiality images.
- They present the Rohingya as inhumane, as sexual predators, and as an immediate threat: There are a lot of graphic photos of mangled bodies from around the world, most of them presented as Buddhist victims of Muslim killers—usually Rohingya. There are a lot of posts about Rohingya men raping, forcibly marrying, beating, and murdering Buddhist women. One post that got passed around a lot includes a graphic photo of a woman tortured and murdered by a Mexican cartel, presented as a Buddhist woman in Myanmar murdered by the Rohingya.
- They connect the Rohingya to the “global Islamic threat”: There’s a lot of equating Rohingya people with ISIS terrorists and assigning them group responsibility for real attacks and atrocities by distant Islamic terror organizations.
Ultimately, all of these moves flow into demands for violence. The messages call incessantly and graphically for mass killings, beatings, and forced deportations. They call not for punishment, but annihilation.
This is, literally, textbook preparation for genocide, and I want to take a moment to look at how it works.
Helen Fein is the author of several definitive books on genocide, a co-founder and first president of the International Association of Genocide Scholars, and the founder of the Institute for the Study of Genocide. I think her description of the ways genocidaires legitimize their attacks holds up extremely well despite having been published 30 years ago. Here, she classifies a specific kind of rhetoric as one of the defining characteristics of genocide:
Is there evidence of an ideology, myth, or an articulated social goal which enjoins or justifies the destruction of the victim? Besides the above, observe religious traditions of contempt and collective defamation, stereotypes, and derogatory metaphor indicating the victim is inferior, subhuman (animals, insects, germs, viruses) or superhuman (Satanic, omnipotent), or other signs that the victims were pre-defined as alien, outside the universe of obligation of the perpetrator, subhuman or dehumanized, or the enemy—i.e., the victim needs to be eliminated in order that we may live (Them or Us).20
It’s also necessary for genocidaires to make claims—often supported by manufactured evidence—that the targeted group itself is the true danger, often by projecting genocidal intent onto the group that will be attacked.
Adam Jones, the guy who wrote a widely used textbook on genocide, puts it this way:
One justifies genocidal designs by imputing such designs to perceived opponents. The Tutsis/Croatians/Jews/Bolsheviks must be killed because they harbor intentions to kill us, and will do so if they are not stopped/prevented/annihilated. Before they are killed, they are brutalized, debased, and dehumanized—turning them into something approaching “subhumans” or “animals” and, by a circular logic, justifying their extermination.21
So before their annihilation, the target group is presented as outcast, subhuman, vermin, but also themselves genocidal—a mortal threat. And afterward, the extraordinary cruelties characteristic of genocide reassure those committing the atrocities that their victims aren’t actually people.
The Tatmadaw committed atrocities in Myanmar. I touched on them in Part II and I’m not going to detail them here. But the figuratively dehumanizing rhetoric I described in parts one and two can’t be separated from the literally dehumanizing things the Tatmadaw did to the humans they maimed and traumatized and killed. Especially now that it’s clear that the military was behind much of the rhetoric as well as the violent actions that rhetoric worked to justify.
In some cases, even the methods match up: The military’s campaign of intense and systematic sexual violence toward and mutilation of women and girls, combined with the concurrent mass murder of children and babies, feels inextricably connected to the rhetoric that cast the Rohingya as both a sexual and reproductive threat who endanger the safety of Buddhist women and outbreed the ethnicities that belong in Myanmar.
Genocidal communications are an inextricable part of a system that turns “ethnic tensions” into mass death. When we see that the Tatmadaw was literally the operator of covert hate and dehumanization propaganda networks on Facebook, I think the most rational way to understand those networks is as an integral part of the genocidal campaign.
After the New York Times article went live, Meta did two big takedowns. Nearly four million people were following the fake Pages identified either by the NYT or by Meta in follow-up investigations. (Meta had previously removed the Tatmadaw’s own official Pages and accounts and 46 “news and opinion” Pages that turned out to be covertly operated by the military—those Pages were followed by nearly 12 million people.)
So given these revelations and disclosures, here’s my question: Does the deliberate, adversarial use of Facebook by Myanmar’s military as a platform for disinformation and propaganda take any of the heat off of Meta? After all, a sovereign country’s military is a significant adversary.
But here’s the thing—Alex Stamos, Facebook’s Chief Security Officer, had been trying since 2016 to get Meta’s management and executives to acknowledge and meaningfully address the fact that Facebook was being used as host for both commercial and state-sponsored covert influence ops around the world. Including in the only place where it was likely to get the company into really hot water: the United States.
“Oh fuck”
On December 16, 2016, Facebook’s newish Chief Security Officer, Alex Stamos—who now runs Stanford’s Internet Observatory—rang Meta’s biggest alarm bells by calling an emergency meeting with Mark Zuckerberg and other top-level Meta executives.
In that meeting, documented in Sheera Frenkel and Cecilia Kang’s book, The Ugly Truth, Stamos handed out a summary outlining the Russian capabilities. It read:
We assess with moderate to high confidence that Russian state-sponsored actors are using Facebook in an attempt to influence the broader political discourse via the deliberate spread of questionable news articles, the spread of information from data breaches intended to discredit, and actively engaging with journalists to spread said stolen information.22
“Oh fuck, how did we miss this?” Zuckerberg responded.
Stamos’ team had also uncovered “a huge network of false news sites on Facebook” posting and cross-promoting sensationalist bullshit, much of it political disinformation, along with examples of governmental propaganda operations from Indonesia, Turkey, and other nation-state actors. And the team had recommendations on what to do about it.
Frenkel and Kang paraphrase Stamos’ message to Zuckerberg (my emphasis):
Facebook needed to go on the offensive. It should no longer merely monitor and analyze cyber operations; the company had to gear up for battle. But to do so required a radical change in culture and structure. Russia’s incursions were missed because departments across Facebook hadn’t communicated and because no one had taken the time to think like Vladimir Putin.23
Those changes in culture and structure didn’t happen. Stamos began to realize that to Meta’s executives, his work uncovering the foreign influence networks, and his choice to bring them to the executives’ attention, were both unwelcome and deeply inconvenient.
All through the spring and summer of 2017, instead of retooling to fight the massive international category of abuse Stamos and his colleagues had uncovered, Facebook played hot potato with the information about the ops Russia had already run.
On September 21, 2017, while the Tatmadaw’s genocidal “clearance operations” were approaching their completion, Mark Zuckerberg finally spoke publicly about the Russian influence campaign for the first time.24
In the intervening months, the massive covert influence networks operating in Myanmar ground along, unnoticed.
Thanks to Sophie Zhang, a data scientist who spent two years at Facebook fighting to get networks like the Tatmadaw’s removed, we know quite a lot about why.
What Sophie Zhang found
In 2018, Facebook hired a data scientist named Sophie Zhang and assigned her to a new team working on fake engagement—and specifically on “scripted inauthentic activity,” or bot-driven fake likes and shares.
Within her first year on the team, Zhang began finding examples of bot-driven engagement being used for political messages in both Brazil and India ahead of their national elections. Then she found something that concerned her a lot more. Karen Hao of the MIT Technology Review writes:
The administrator for the Facebook page of the Honduran president, Juan Orlando Hernández, had created hundreds of pages with fake names and profile pictures to look just like users—and was using them to flood the president’s posts with likes, comments, and shares. (Facebook bars users from making multiple profiles but doesn’t apply the same restriction to pages, which are usually meant for businesses and public figures.)
The activity didn’t count as scripted, but the effect was the same. Not only could it mislead the casual observer into believing Hernández was more well-liked and popular than he was, but it was also boosting his posts higher up in people’s newsfeeds. For a politician whose 2017 reelection victory was widely believed to be fraudulent, the brazenness—and implications—were alarming.25
When Zhang brought her discovery back to the teams working on Pages Integrity and News Feed Integrity, both refused to act, either to stop fake Pages from being created, or to keep the fake engagement signals the fake Pages generate from making posts go viral.
But Zhang kept at it, and after a year, Meta finally removed the Honduran network. The very next day, Zhang reported a network of fake Pages in Albania. The Guardian’s Julia Carrie Wong explains what came next:
In August, she discovered and filed escalations for suspicious networks in Azerbaijan, Mexico, Argentina and Italy. Throughout the autumn and winter she added networks in the Philippines, Afghanistan, South Korea, Bolivia, Ecuador, Iraq, Tunisia, Turkey, Taiwan, Paraguay, El Salvador, India, the Dominican Republic, Indonesia, Ukraine, Poland and Mongolia.26
According to Zhang, Meta eventually established a policy against “inauthentic behavior,” but didn’t enforce it, and rejected Zhang’s proposal to punish repeat fake-Page creators by banning their personal accounts because of policy staff’s “discomfort with taking action against people connected to high-profile accounts.”27
Zhang discovered that even when she took initiative to track down covert influence campaigns, the teams who could take action to remove them didn’t—not without persistent “lobbying.” So Zhang tried harder. Here’s Karen Hao again:
She was called upon repeatedly to help handle emergencies and praised for her work, which she was told was valued and important.
But despite her repeated attempts to push for more resources, leadership cited different priorities. They also dismissed Zhang’s suggestions for a more sustainable solution, such as suspending or otherwise penalizing politicians who were repeat offenders. It left her to face a never-ending firehose: The manipulation networks she took down quickly came back, often only hours or days later. “It increasingly felt like I was trying to empty the ocean with a colander,” she says.28
Julia Carrie Wong’s Guardian piece reveals something interesting about Zhang’s reporting chain, which is that Meta’s Vice President of Integrity, Guy Rosen, was one of the people giving her the hardest pushback.
Remember Internet.org, also known as Free Basics, aka Meta’s push to dominate global internet use in all those countries it would go on to “deprioritize” and generally ignore?
Guy Rosen, Meta’s then-newish VP of Integrity, is the guy who previously ran Internet.org. He came to lead Integrity directly from being VP of Growth. Before getting acquihired by Meta, Rosen co-founded a company The Information describes as “a startup that analyzed what people did on their smartphones.”29
Meta bought that startup in 2013, nominally because it would help Internet.org. In a very on-the-nose development, Rosen’s company’s supposedly privacy-protecting VPN software allowed Meta to collect huge amounts of data—so much that Apple booted the app from its store.
So that’s Facebook’s VP of Integrity.
“We simply didn’t care enough to stop them”
In the Guardian, Julia Carrie Wong reports that in the fall of 2019, Zhang discovered that the Honduras network was back up, and she couldn’t get Meta’s Threat Intelligence team to deal with it. That December, she posted an internal memo about it. Rosen responded:
Facebook had “moved slower than we’d like because of prioritization” on the Honduras case, Rosen wrote. “It’s a bummer that it’s back and I’m excited to learn from this and better understand what we need to do systematically,” he added. But he also chastised her for making a public [public as in within Facebook —EK] complaint, saying: “My concern is that threads like this can undermine the people that get up in the morning and do their absolute best to try to figure out how to spend the finite time and energy we all have and put their heart and soul into it.”31
In a private follow-up conversation (still in December, 2019), Zhang alerted Rosen that she’d been told that the Facebook Threat Intelligence team would only prioritize fake networks affecting “the US/western Europe and foreign adversaries such as Russia/Iran/etc.”
Rosen told her that he agreed with those priorities. Zhang pushed back (my emphasis):
I get that the US/western Europe/etc is important, but for a company with effectively unlimited resources, I don’t understand why this cannot get on the roadmap for anyone … A strategic response manager told me that the world outside the US/Europe was basically like the wild west with me as the part-time dictator in my spare time. He considered that to be a positive development because to his knowledge it wasn’t covered by anyone before he learned of the work I was doing.
Rosen replied, “I wish resources were unlimited.”30
I’ll quote Wong’s next passage in full: “At the time, the company was about to report annual operating profits of $23.9bn on $70.7bn in revenue. It had $54.86bn in cash on hand.”
In early 2020, Zhang’s managers told her she was all done tracking down influence networks—it was time she got back to hunting and erasing “vanity likes” from bots instead.
But Zhang believed that if she stopped, no one else would hunt down big, potentially consequential covert influence networks. So she kept doing at least some of it, including advocating for action on an inauthentic Azerbaijan network that appeared to be connected to the country’s ruling party. In an internal group, she wrote that, “Unfortunately, Facebook has become complicit by inaction in this authoritarian crackdown.”
Although we conclusively tied this network to elements of the government in early February, and have compiled extensive evidence of its violating nature, the effective decision was made not to prioritize it, effectively turning a blind eye.31
After those messages, Threat Intelligence decided to act on the network after all.
Then Meta fired Zhang for poor performance.
On her way out the door, Zhang posted a long exit memo—7,800 words—describing what she’d seen. Meta deleted it, so Zhang put up a password-protected version on her own website so her colleagues could see it. Meta then got Zhang’s entire website taken down and her domain deactivated. Eventually, under enough employee pressure, Meta put an edited version back up on its internal site.32
Shortly thereafter, someone leaked the memo to Buzzfeed News.
In the memo, Zhang wrote:
I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions. I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count.33
And: “[T]he truth was, we simply didn’t care enough to stop them.”
On her final day at Meta, Zhang left notes for her colleagues, tallying suspicious accounts involved in political influence campaigns that needed to be investigated:
There were 200 suspicious accounts still boosting a politician in Bolivia, she recorded; 100 in Ecuador, 500 in Brazil, 700 in Ukraine, 1,700 in Iraq, 4,000 in India and more than 10,000 in Mexico.34
“With all due respect”
Zhang’s work at Facebook happened after the wrangling over Russian influence ops that Alex Stamos’ team found. And after the genocide in Myanmar. And after Mark Zuckerberg did his press-and-government tour about how hard Meta tried and how much better they’d do after Myanmar.35
It was an entire calendar year after the New York Times found the Tatmadaw’s genocide-fueling fake-Page hatefarm that Guy Rosen, Facebook’s VP of Integrity, told Sophie Zhang that the only coordinated fake networks Facebook would take down were the ones that affected the US, Western Europe, and “foreign adversaries.”36
In response to Zhang’s disclosures, Rosen later hopped onto Twitter to deliver his personal assessment of the networks Zhang found and couldn’t get removed:
With all due respect, what she’s described is fake likes—which we routinely remove using automated detection. Like any team in the industry or government, we prioritize stopping the most urgent and harmful threats globally. Fake likes is not one of them.
One of Frances Haugen’s disclosures includes an internal memo that summarizes Meta’s actual, non-Twitter-snark awareness of the way Facebook has been hollowed out for routine use by covert influence campaigns:
We frequently observe highly-coordinated, intentional activity on the FOAS [Family of Apps and Services] by problematic actors, including states, foreign actors, and actors with a record of criminal, violent or hateful behaviour, aimed at promoting social violence, promoting hate, exacerbating ethnic and other societal cleavages, and/or delegitimizing social institutions through misinformation. This is particularly prevalent—and problematic—in At Risk Countries and Contexts.37
So, they knew.
Because of Haugen’s disclosures, we also know that in 2020, for the category, “Remove, reduce, inform/measure misinformation on FB Apps, Includes Community Review and Matching”—so, that’s moderation targeting misinformation specifically—only 13% of the total budget went to the non-US countries that provide more than 90% of Facebook’s user base, and which include all of those At Risk Countries. The other 87% of the budget was reserved for the 10% of Facebook users who live in the United States.38
In case any of this seems disconnected from the main thread of what happened in Myanmar, here’s what (formerly Myanmar-based) researcher Victoire Rio had to say about covert coordinated influence networks in her extremely good 2020 case study about the role of social media in Myanmar’s violence:
Bad actors spend months—if not years—building networks of online assets, including accounts, pages and groups, that allow them to manipulate the conversation. These inauthentic presences continue to present a major risk in places like Myanmar and are responsible for the overwhelming majority of problematic content.39
Note that Rio says that these inauthentic networks—the exact things Sophie Zhang chased down until she got fired for it—continued to present a major risk in 2020.
It’s time to skip ahead.
Let’s go to Myanmar in 2021, four years after the peak of the genocide. After everything I’ve dealt with in this whole painfully long series so far, it would be fair to assume that Meta would be prioritizing getting everything right in Myanmar. Especially after the coup.
Meta in Myanmar, again (2021)
In 2021, the Tatmadaw deposed Myanmar’s democratically elected government and transferred the leadership of the country to the military’s Commander-in-Chief. Since then, the military has turned the machines of surveillance, administrative repression, torture, and murder that it refined on the Rohingya and other ethnic minorities onto Myanmar’s Buddhist ethnic Bamar majority.
Also in 2021, Facebook’s director of policy for APAC Emerging Countries, Rafael Frankel, told the Associated Press that Facebook had now “built a dedicated team of over 100 Burmese speakers.”
This “dedicated team” is, presumably, the group of contract workers employed by the Accenture-run “Project Honey Badger” team in Malaysia.40 (Which, Jesus.)
In October of 2021, the Associated Press took a look at how that’s working out on Facebook in Myanmar. Right away, they found threatening and violent posts:
One 2 1/2 minute video posted on Oct. 24 of a supporter of the military calling for violence against opposition groups has garnered over 56,000 views.
“So starting from now, we are the god of death for all (of them),” the man says in Burmese while looking into the camera. “Come tomorrow and let’s see if you are real men or gays.”
One account posts the home address of a military defector and a photo of his wife. Another post from Oct. 29 includes a photo of soldiers leading bound and blindfolded men down a dirt path. The Burmese caption reads, “Don’t catch them alive.”41
That’s where content moderation stood in 2021. What about the algorithmic side of things? Is Facebook still boosting dangerous messages in Myanmar?
In the spring of 2021, Global Witness analysts made a clean Facebook account with no history and searched for တပ်မတော်—“Tatmadaw.” They opened the top page in the results, a military fan page, and found no posts that broke Facebook’s new, stricter rules. Then they hit the “like” button, which caused a pop-up with “related pages” to appear. Then the team popped open the first five recommended pages.
Here’s what they found:
Three of the five top page recommendations that Facebook’s algorithm suggested contained content posted after the coup that violated Facebook’s policies. One of the other pages had content that violated Facebook’s community standards but that was posted before the coup and therefore isn’t included in this article.
Specifically, they found messages that included:
- Incitement to violence
- Content that glorifies the suffering or humiliation of others
- Misinformation that can lead to physical harm42
As well as several kinds of posts that violated Facebook’s new and more specific policies on Myanmar.
So not only were the violent, violence-promoting posts still showing up in Myanmar four years after the atrocities in Rakhine State—and after the Tatmadaw turned the full machinery of its violence onto opposition members of Myanmar’s Buddhist ethnic majority—but Facebook was still funneling users directly into them after even the lightest engagement with anodyne pro-military content.
This is in 2021, with Meta throwing vastly more resources at the problem than it ever did during the period leading up to and including the genocide of the Rohingya people. These are active recommendations made by Facebook’s own algorithms, precisely as outlined in the Meta memos in Haugen’s disclosures.
By any reasonable measure, I think this is a failure.
Meta didn’t respond to requests for comment from Global Witness, but when the Guardian and AP picked up the story, Meta got back to them with…this:
Our teams continue to closely monitor the situation in Myanmar in real-time and take action on any posts, Pages or Groups that break our rules. We proactively detect 99 percent of the hate speech removed from Facebook in Myanmar, and our ban of the Tatmadaw and repeated disruption of coordinated inauthentic behavior has made it harder for people to misuse our services to spread harm.43
One more time: This statement says nothing about how much hate speech is removed. It’s pure misdirection.
Internal Meta memos highlight ways to use Facebook’s algorithmic machinery to sharply reduce the spread of what they called “high-harm misinfo.” For those potentially harmful topics, you “hard demote” (aka “push down” or “don’t show”) reshared posts that were originally made by someone who isn’t friended or followed by the viewer. (Frances Haugen talks about this in interviews as “cutting the reshare chain.”)
And this method works. In Myanmar, “reshare depth demotion” reduced “viral inflammatory prevalence” by 25% and cut “photo misinformation” almost in half.
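As described in the disclosures, the rule itself is simple: if a post reaches you only as a reshare of something written by an account you don’t follow, push it down hard. Here’s a hedged sketch of that logic; the function name, fields, and demotion value are mine, not Meta’s:

```python
def reshare_depth_demotion(candidate, viewer_follows, demotion=0.1):
    """Hard-demote reshares whose original author the viewer doesn't
    follow ('cutting the reshare chain')."""
    is_reshare = candidate["original_author"] != candidate["resharer"]
    unconnected = candidate["original_author"] not in viewer_follows
    if is_reshare and unconnected:
        return candidate["score"] * demotion
    return candidate["score"]

post = {"original_author": "stranger", "resharer": "my_friend", "score": 0.8}
demoted = reshare_depth_demotion(post, viewer_follows={"my_friend"})
print(round(demoted, 2))  # 0.08: pushed well below its original score
```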
In a reasonable world, I think Meta would have decided to broaden use of this method and work on refining it to make it even more effective. What they did, though, was decide to roll it back within Myanmar as soon as the upcoming elections were over.44
The same SEC disclosure I just cited also notes that Facebook’s AI “classifier” for Burmese hate speech didn’t seem to be maintained or in use—and that algorithmic recommendations were still shuttling people toward violent, hateful messages that violated Facebook’s Community Standards.
So that’s how the algorithms were going. How about the military’s covert influence campaign?
Reuters reported in late 2021 that:
As Myanmar’s military seeks to put down protest on the streets, a parallel battle is playing out on social media, with the junta using fake accounts to denounce opponents and press its message that it seized power to save the nation from election fraud…
The Reuters reporters explain that the military has assigned thousands of soldiers to wage “information combat” in what appears to be an expanded, distributed version of its earlier secret propaganda ops:
“Soldiers are asked to create several fake accounts and are given content segments and talking points that they have to post,” said Captain Nyi Thuta, who defected from the army to join rebel forces at the end of February. “They also monitor activity online and join (anti-coup) online groups to track them.” 45
(We know this because Reuters journalists got hold of a high-placed defector from the Tatmadaw’s propaganda wing.)
When asked for comment, Facebook’s regional Director of Public Policy told Reuters that Meta “‘proactively’ detected almost 98 percent of the hate speech removed from its platform in Myanmar.”
“Wasting our lives under tarpaulin”
The Rohingya people forced to flee Myanmar have scattered across the region, but the overwhelming majority of those who fled in 2017 ended up in the Cox’s Bazar region of Bangladesh.
The camps are beyond overcrowded, and they make everyone who lives in them vulnerable to the region’s seasonal flooding, to worsening climate impacts, and to waves of disease. This year, the refugees’ food aid was just cut from the equivalent of $12 a month to $8 a month, because the international community is focused elsewhere.46
The complex geopolitical situation surrounding post-coup Myanmar—in which many western and Asian countries condemn the situation in Myanmar, but don’t act lest they push the Myanmar junta further toward China—seems likely to ensure a long, bloody conflict, with no relief in sight for the Rohingya.47
The UN estimates that more than 960,000 Rohingya refugees now live in refugee camps in Bangladesh. More than half are children, few of whom have had much education at all since coming to the camps six years ago. The UN also estimates that the refugees needed about $70.5 million for education in 2022, of which 1.6% was actually funded.48
Amnesty International spoke with Mohamed Junaid, a 23-year-old Rohingya volunteer math and chemistry teacher, who is also a refugee. He told Amnesty:
Though there were many restrictions in Myanmar, we could still do school until matriculation at least. But in the camps our children cannot do anything. We are wasting our lives under tarpaulin.49
In their report, “The Social Atrocity,” Amnesty wrote that in 2020, seven Rohingya youth organizations based in the refugee camps made a formal application to Meta’s Director of Human Rights. They requested that, given its role in the crises that led to their expulsion from Myanmar, Meta provide just one million dollars in funding to support a teacher-training initiative within the camps—a way to give the refugee children a chance at an education that might someday serve them in the outside world.
Meta got back to the Rohingya youth organizations in 2021, a year in which the company cleared $39.3B in profits:
Unfortunately, after discussing with our teams, this specific proposal is not something that we’re able to support. As I think we noted in our call, Facebook doesn’t directly engage in philanthropic activities.
In 2022, Global Witness came back for one more look at Meta’s operations in Myanmar, this time with eight examples of real hate speech aimed at the Rohingya—actual posts from the period of the genocide, all taken from the UN Human Rights Council findings I’ve been linking to so frequently in this series. They submitted these real-life examples of hate speech to Meta as Burmese-language Facebook advertisements.
Meta accepted all eight ads.50
The final post in this series, Part IV, will be up in about a week. Thank you for reading.
“Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎
“Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎
“Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called “Demoting on Integrity Signals.”↩︎
“Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called “A first look at the minimum integrity holdout.”↩︎
“Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called “Afghanistan Hate Speech analysis.”↩︎
“Transcript of Mark Zuckerberg’s Senate hearing,” The Washington Post (which got the transcript via Bloomberg Government), April 10, 2018.↩︎
“Transcript of Zuckerberg’s Appearance Before House Committee,” The Washington Post (which got the transcript via Bloomberg Government), April 11, 2018.↩︎
“Transcript of Zuckerberg’s Appearance Before House Committee,” The Washington Post (which got the transcript via Bloomberg Government), April 11, 2018.↩︎
“Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎
“Facebook Misled Investors and the Public About Its Role Perpetuating Misinformation and Violent Extremism Relating to the 2020 Election and January 6th Insurrection,” Whistleblower Aid, undated; “Facebook Wrestles With the Features It Used to Define Social Networking,” The New York Times, Oct. 25, 2021. This memo hasn’t been made public even in a redacted form, which is frustrating, but the SEC disclosure and NYT article cited here both contain overlapping but not redundant excerpts from which I was able to reconstruct this slightly longer quote.↩︎
“Facebook and responsibility,” internal Facebook memo, authorship redacted, March 9, 2020, archived at Document Cloud as a series of images.↩︎
“Facebook misled investors and the public about its role perpetuating misinformation and violent extremism relating to the 2020 election and January 6th insurrection,” Whistleblower Aid, undated. (Date of the quoted internal memo comes from The Atlantic.)↩︎
“Facebook Misled Investors and the Public About Its Role Perpetuating Misinformation and Violent Extremism Relating to the 2020 Election and January 6th Insurrection,” Whistleblower Aid, undated.↩︎
“Facebook and the Rohingya Crisis,” Myanmar Internet Project, September 29, 2022. This document is offline right now at the Myanmar Internet Project site, so I’ve used Document Cloud to archive a copy of a PDF version a project affiliate provided.↩︎
Full interview with Alex Stamos filmed for The Facebook Dilemma, Frontline, October, 2018.↩︎
“A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018.↩︎
“A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018; “Removing Myanmar Military Officials From Facebook,” Meta takedown notice, August 28, 2018.↩︎
“A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018.↩︎
“The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎
“Genocide: A Sociological Perspective,” Helen Fein, Current Sociology, Vol. 38, No. 1 (Spring 1990), p. 1-126; republished in Genocide: An Anthropological Reader, ed. Alexander Laban Hinton, Blackwell Publishers, 2002, and this quotation appears on p. 84 of that edition.↩︎
Genocide: A Comprehensive Introduction, Adam Jones, Routledge, 2006, p. 267.↩︎
An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021.↩︎
An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021.↩︎
“Read Mark Zuckerberg’s full remarks on Russian ads that impacted the 2016 elections,” CNBC News, September 21, 2017.↩︎
“She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎
“How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎
“How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎
“She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎
“The Guy at the Center of Facebook’s Misinformation Mess,” Sylvia Varnham O’Regan, The Information, June 18, 2021.↩︎
“How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎
“How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎
“She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎
“‘I Have Blood on My Hands’: A Whistleblower Says Facebook Ignored Global Political Manipulation,” Craig Silverman, Ryan Mac, Pranav Dixit, BuzzFeed News, September 14, 2020.↩︎
“How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎
“The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎
“How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎
“Facebook Misled Investors and the Public About Bringing ‘the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated. This is a single-source statement, but it’s a budget figure, not an opinion, so I’ve used it.↩︎
“Facebook Misled Investors and the Public About Bringing ‘the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated. This is a single-source statement, but it’s a budget figure, not an opinion, so I’ve used it.↩︎
“The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎
“Zuckerberg Was Called Out Over Myanmar Violence. Here’s His Apology.” Kevin Roose and Paul Mozur, The New York Times, April 9, 2018.↩︎
“Hate Speech in Myanmar Continues to Thrive on Facebook,” Sam McNeil, Victoria Milko, The Associated Press, November 17, 2021.↩︎
“Algorithm of Harm: Facebook Amplified Myanmar Military Propaganda Following Coup,” Global Witness, June 23, 2021.↩︎
“Algorithm of Harm: Facebook Amplified Myanmar Military Propaganda Following Coup,” Global Witness, June 23, 2021.↩︎
“Facebook Misled Investors and the Public About Bringing ‘the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated.↩︎
“‘Information Combat’: Inside the Fight for Myanmar’s Soul,” Fanny Potkin, Wa Lone, Reuters, November 1, 2021.↩︎
“Rohingya Refugees Face Hunger and Loss of Hope After Latest Ration Cuts,” Christine Pirovolakis, UNHCR, the UN Refugee Agency, July 19, 2023.↩︎
“Is Myanmar the Frontline of a New Cold War?,” Ye Myo Hein and Lucas Myers, Foreign Affairs, June 19, 2023.↩︎
“The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022; the education funding estimates come from “Bangladesh: Rohingya Refugee Crisis Joint Response Plan 2022,” OCHA Financial Tracking Service, 2022, cited by Amnesty.↩︎
“Facebook Approves Adverts Containing Hate Speech Inciting Violence and Genocide Against the Rohingya,” Global Witness, March 20, 2022.↩︎
“Facebook Misled Investors and the Public About Bringing ‘the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated.↩︎