Meta in Myanmar, Part IV: Only Connect

The Atlantic Council’s report on the looming challenges of scaling trust and safety on the web opens with this statement:

That which occurs offline will occur online.

I think the reverse is also true: That which occurs online will occur offline.

Our networks don’t create harms, but they reveal, scale, and refine them, making it easier to destabilize societies and destroy human beings. The more densely the internet is woven into our lives and societies, the more powerful the feedback loop becomes.

In this way, our networks—and specifically, the most vulnerable and least-heard people inhabiting them—have served as a very big lab for gain-of-function research by malicious actors.

And as the first three posts in this series make clear, you don’t have to be online at all to experience the internet’s knock-on harms—there’s no opt-out when internet-fueled violence sweeps through and leaves villages razed and humans traumatically displaced or dead. (And the further you are from the centers of tech-industry power—geographically, demographically, culturally—the less likely it is that the social internet’s principal powers will do anything to plan for, prevent, or attempt to repair the ways their products hurt you.)

I think that’s the thing to keep in the center while trying to sort out everything else.

In the previous 30,000 words of this series, I’ve tried to offer a careful accounting of the knowable facts of Myanmar’s experience with Meta. Here’s the argument I outlined toward the end of Part II:

  1. Meta bought and maneuvered its way into the center of Myanmar’s online life and then inhabited that position with a recklessness that was impervious to warnings by western technologists, journalists, and people at every level of Burmese society. (This is most of Part I.)
  2. After the 2012 violence, Meta mounted a content moderation response so inadequate that it would be laughable if it hadn’t been deadly. (Discussed in Part I and also [in Part II].)
  3. With its recommendation algorithms and financial incentive programs, Meta devastated Myanmar’s new and fragile online information sphere and turned thousands of carefully laid sparks into flamethrowers. (Discussed [in Part II] and in Part III.)
  4. Despite its awareness of similar covert influence campaigns based on “inauthentic behavior”—aka fake likes, comments, and Pages—Meta allowed an enormous and highly influential covert influence operation to thrive on Burmese-language Facebook throughout the run-up to the peak of the 2016 and 2017 “ethnic cleansing,” and beyond. (Part III.)

I still think that’s right. But this story’s many devils are in the details, and getting at least some of the details down in public was the whole point of this very long exercise.

Here at the end of it, it’s tempting to package up a tidy set of anti-Meta action items and call it a day, but there’s nothing tidy about this story, or about what I think I’ve learned working on it. What I’m going to do instead is try to illuminate some facets of the problem, suggest some directions for mitigations, rotate the problem, and repeat.

The allure of the do-over

After my first month of part-time research on Meta in Myanmar, I was absorbed in the work and roughed up by the awfulness of what I was learning and, frankly, incandescently furious with Meta’s leadership. But sometime after I read Faine Greenwood’s posts—and reread Craig Mod’s essay for the first time since 2016—I started to get scared, for reasons I couldn’t even pin down right away. Like, wake-up-at-3am scared.

At first, I thought I was just worried that the new platforms and networks coming into being would also be vulnerable to the kind of coordinated abuse that Myanmar experienced. And I am worried about that and will explain at great length later in this post. But it wasn’t just that.

Craig’s essay about his 2015 fieldwork with farmers in Myanmar captures something real about the exhilarating possibilities of a reboot:

…there is a wild and distinct freedom to the feeling of working in places like this. It is what intoxicates these consultants. You have seen and lived within a future, and believe—must believe—you can help bring some better version of it to light here. A place like Myanmar is a wireless mulligan. A chance to get things right in a way that we couldn’t or can’t now in our incumbent-laden latticeworks back home.

This rings bells not only because I remember my own early-internet spells of optimism—which were pretty far in the rearview by 2016—but because I recognize a much more recent feeling, which is the way it felt to come back online last fall, as the network nodes on Mastodon were starting to really light up.

I’d been mostly off social media since 2018, with a special exception for covid-data work in 2020 and 2021. But in the fall and winter of 2022, the potential of the fediverse was crackling in ways it hadn’t been in my previous Mastodon experiences in 2017 and 2018. If you’ve been in a room where things are happening, you’ll recognize that feeling forever, and last fall, it really felt like some big chunks of the status quo had changed state and gone suddenly malleable.

I also believe that the window for significant change in our networks doesn’t open all that often and doesn’t usually stay open for long.

So like any self-respecting moth, when I saw it happening on Mastodon, I dropped everything I’d been doing and flew straight into the porch light, and I’ve been thinking and writing toward these ideas ever since.

Then I did the Myanmar project. By the time I got to the end of the research, I recognized myself in the accounts of the tech folks at the beginning of Myanmar’s internet story, so hopeful about the chance to escape the difficulties and disappointments of the recent past.

And I want to be clear: There’s nothing wrong with feeling hopeful or optimistic about something new, as long as you don’t let yourself defend that feeling by rejecting the possibility that the exact things that fill you with hope can also be turned into weapons. (I’ll be more realistic—will be turned into weapons, if you succeed at drawing a mass user base and don’t skill up and load in peacekeeping expertise at the same time.)

A lot of people have written and spoken about the unusual naivety of Burmese Facebook users, and how that made them vulnerable, but I think Meta itself was also dangerously naive—and worked very hard to stay that way as long as possible. And still largely adopts the posture of the naive tech kid who just wants to make good.

It’s an act now, though, to be clear. They know. There are some good people working themselves to shreds at Meta, but the company’s still out doing PR tapdancing while people in Ethiopia and India and (still) Myanmar suffer.

When I first realized how bad Meta’s actions in Myanmar actually were, it felt important to try to pull all the threads together in a way that might be useful to my colleagues and peers who are trying in various ways to make the world better by making the internet better. I thought I would end by saying, “Look, here’s what Meta did in Myanmar, so let’s get everyone the fuck off of Meta’s services into better and safer places.”

I’ve landed somewhere more complicated, because although I think Meta’s been a disaster, I’m not confident that there are sustainable better places for the vast majority of people to go. Not yet. Not without a lot more work.

We’re already all living through a series of rolling apocalypses, local and otherwise. Many of us in the west haven’t experienced the full force of them yet—we experience the wildfire smoke, the heat, the rising tide of authoritarianism, and rollbacks of legal rights. Some of us have had to flee. Most of us haven’t lost our homes, or our lives. Nevertheless, these realities chip away at our possible futures. I was born at about 330 PPM; my daughter was born at nearly 400.

The internet in Myanmar was born at a few seconds to midnight. Our new platforms and tools for global connection have been born into a moment in which the worst and most powerful bad actors, both political and commercial, are already prepared to exploit every vulnerability.

We don’t get a do-over planet. We won’t get a do-over network.

Instead, we have to work with the internet we made and find a way to rebuild and fortify it to support the much larger projects of repair—political, cultural, environmental—that are required for our survival.

I think those are the stakes, or I’d be doing something else with my time.

What “better” requires

I wrestled a lot with the right way to talk about this and how much to lean on my own opinions vs. the voices of Myanmar’s own civil society organizations and the opinions of whistleblowers and trust and safety experts.

I’ve ended up taking the same approach to this post as I did with the previous three, synthesizing and connecting information from people with highly specific expertise and only sometimes drawing from my own experience and work.

If you’re not super interested in decentralized and federated networks, you probably want to skip down a few sections.

If you’d prefer to get straight to the primary references, here they are:

Notes and recommendations from people who were on the ground in Myanmar, and are still working on the problems the country faces:

Two docs related to large-scale threats. The federation-focused “Annex Five” of the big Atlantic Council report, Scaling Trust on the Web. The whole report is worth careful reading, and this annex feels crucial to me, even though I don’t agree with every word.

I’m also including Camille François’ foundational 2019 paper on disinformation threats, because it opens up important ideas.

Three deep dives with Facebook whistleblowers.

…otherwise, here’s what’s worrying me.

1. Adversaries follow the herd

Realistically, a ton of people are going to stay on centralized platforms, which are going to continue to fight very large-scale adversaries. (And realistically, those networks are going to keep ignoring as much as they can for as long as they can—which especially means that outside the US and Western Europe, they’re going to ignore a lot of damage until they’re regulated or threatened with regulation. Especially companies like Google/YouTube, whose complicity in situations like the one in Myanmar has been partially overlooked because Meta’s is so striking.)

But a lot of people are also trying new networks, and as they do, spammers and scammers and griefers will follow, in increasingly large numbers. So will the much more sophisticated people—and pro-level organizations—dedicated to manipulating opinion; targeting, doxxing, and discrediting individuals and organizations; distributing ultra-harmful material; and sowing division among their own adversaries. And these aren’t people who will be deterred by inconvenience.

In her super-informative interview on the Brown Bag podcast from the ICT4Peace Foundation, Myanmar researcher Victoire Rio mentions two things that I think are vital to this facet of the problem: One is that as Myanmar’s resistance moved off of Facebook and onto Telegram for security reasons after the coup, the junta followed suit and weaponized Telegram as a crowdsourced doxxing tool that has resulted in hundreds of arrests—Rio calls it the “Gestapo on steroids.”

This brings us to the next thing, which is commonly understood in industrial-grade trust and safety circles but, I think, less so on newer networks, which have mostly experienced old-school adversaries—basic scammers and spammers, distributors of illegal and horrible content, and garden-variety amateur Nazis and trolls. Those blunter and less sophisticated harms are still quite bad, but the more sophisticated threats that are common on the big centralized platforms are considerably more difficult to identify and root out. And if the people running new networks don’t realize that what we’re seeing right now are the starter levels, they’re going to be way behind the ball when better organized adversaries arrive.

2. Modern adversaries are heavy on resources and time

Myanmar has a population of about 51 million people, and in the years before the coup, it already had an internal adversary in the military that ran a professionalized, Russia-trained online propaganda and deception operation that maxed out at about 700 people, working in shifts to manipulate the online landscape and shout down opposing points of view. It’s hard to imagine that this force has lessened now that the genocidaires are running the country.

Russia’s adversarial operations roll much deeper, and aren’t limited to the well-known, now allegedly disbanded Internet Research Agency.

And although Russia is the best-known adversary in most US and Western European conversations I’ve been in, it’s very far from being the only one. Here’s disinfo and digital rights researcher Camille François, warning about the association of online disinformation with the “Russian playbook”:

Russia is neither the most prominent nor the only actor using manipulative behaviors on social media. This framing ignores that other actors have abundantly used these techniques, and often before Russia. Iran’s broadcaster (IRIB), for instance, maintains vast networks of fake accounts impersonating journalists and activists to amplify its views on American social media platforms, and it has been doing so since at least 2012.

What’s more, this kind of work isn’t the exclusive domain of governments. A vast market of for-hire manipulation proliferates around the globe, from Indian public relations firms running fake newspaper pages to defend Qatar’s interests ahead of the World Cup and Israeli lobbying groups running influence campaigns with fake pages targeting audiences in Africa.

This chimes with what Sophie Zhang reported about fake-Page networks on Facebook in 2019—they’re a genuinely global phenomenon, and they’re bigger, more powerful, and more diverse in both intent and tactics than most people suspect.

I think it’s easy to imagine that these heavy-duty threats focus only on the big, centralized services, but an in-depth analysis of just one operation, Secondary Infektion, shows that it operated across at least 300 websites and platforms ranging from Facebook, Reddit, and YouTube (and WordPress, Medium, and Quora) to literally hundreds of other sites and forums.

These adversaries will take advantage of decentralized social networks. Believing otherwise requires a naivety I hope we’ll come to recognize as dangerous.

3. No algorithms ≠ no trouble

Federated networks like Mastodon, which eschews algorithmic acceleration, offer fewer incentives for some kinds of adversarial actors—and that’s very good. But fewer isn’t none.

Here’s what Lai and Roth have to say about networks without built-in algorithmic recommendation surfaces:

The lack of algorithmic recommendations means there’s less of an attack surface for inauthentic engagement and behavioral manipulation. While Mastodon has introduced a version of a “trending topics” list—the true battlefield of Twitter manipulation campaigns, where individual posts and behaviors are aggregated into a prominent, platform-wide driver of attention—such features tend to rely on aggregation of local (rather than global or federated) activity, which removes much of the incentive for engaging in large-scale spam. There’s not really a point to trying to juice the metrics on a Mastodon post or spam a hashtag, because there’s no algorithmic reward of attention for doing so…

These disincentives for manipulation have their limits, though. Some of the most successful disinformation campaigns on social media, like the IRA’s use of fake accounts, relied less on spam and more on the careful curation of individual “high-value” accounts—with uptake of their content being driven by organic sharing, rather than algorithmic amplification. Disinformation is just as much a community problem as it is a technological one (i.e., people share content they’re interested in or get emotionally activated by, which sometimes originates from troll farms)—which can’t be mitigated just by eliminating the algorithmic drivers of virality.

Learning in bloody detail about how thoroughly Meta’s acceleration machine overran all of its attempts to suppress undesirable results has made me want to treat algorithmic virality like a nuclear power source: Maybe it’s good in some circumstances, but if we aren’t prepared to do industrial-grade harm-prevention work and not just halfhearted cleanup, we should not be fucking with it, at all.

But, of course, we already are. Lemmy uses algorithmic recommendations. Bluesky has subscribable, user-built feeds that aren’t opaque and monolithic in the way that, say, Facebook’s are—but they’re still juicing the network’s dynamics, and the platform hasn’t even federated yet.

I think it’s an open question how much running fully transparent, subscribable algorithmic feeds that are controlled by users mitigates the harm recommendation systems can do. I think I have a more positive view of AT Protocol than maybe 90% of fediverse advocates—which is to say, I feel neutral and like it’s probably too early to know much—but I’d be lying if I said I’m not nervous about what will happen when the people behind large-scale covert influence networks get to build and promote their own algo feeds using any identity they choose.

4. The benefits and limits of defederation

Another characteristic of fediverse networks (by which I mean ActivityPub-based servers, mostly “interoperable”) is the ability for both individual users and whole instances to defederate from each other. The ability to “wall off” instances hosting obvious bad actors and clearly harmful content offers ways for good-faith instance administrators to sharply reduce certain kinds of damage.

It also means, of course, that instances can get false-flagged by adversaries who make accounts on target groups’ instances and post abuse in order to get those instances mass-defederated, as was reportedly happening in early 2023 with Ukrainian servers. I’m inclined to think that this may be a relatively niche threat, but I’m not the right person to evaluate that.

A related threat that was expressed to me by someone who’s been working on the ground in Myanmar for years is that authoritarian governments will corral their citizens on instances/servers that they control, permitting both surveillance and government-friendly moderation of propaganda.

Given the tremendous success of many government-affiliated groups in creating (and, when disrupted, rebuilding) huge fake-Page networks on Facebook, I’d also expect to see harmless-looking instances pop up that are actually controlled by covert influence campaigns and/or organizations that intend to use them to surveil and target activists, journalists, and others who oppose them.

And again, these aren’t wild speculations: Myanmar’s genocidal military turned out to be running many popular, innocuous-looking Facebook Pages (“Let’s Laugh Casually Together,” etc.) and has demonstrated the ability to switch tactics to keep up with both platforms and the Burmese resistance after the coup. It seems bizarre to me to assume that equivalent bad actors won’t work out related ways to take advantage of federated networks.

5. Content removal at mass scale is failing

The simple version of this idea is that content moderation at mass scale can’t be done well, full stop. I tend to think that we haven’t tried a lot of things that would help—not at scale, at least. But I would agree that doing content moderation in old-internet ways on the modern internet at mass scale doesn’t cut it.

Specifically, I think it’s increasingly clear that doing content moderation as a sideline or an afterthought, instead of building safety and integrity work into the heart of product design, is a recipe for failure. In Myanmar, Facebook’s engagement-focused algorithms easily outpaced—and often still defeat—Meta’s attempts to squash the hateful and violence-inciting messages they circulated.

Organizations and activists out of Myanmar are calling on social networks and platforms to build human-rights assessments not merely into their trust and safety work, but into changes to their core product design. Including, specifically, a recommendation to get product teams into direct contact with the people in the most vulnerable places:

Social media companies should increase exposure of their product teams to different user realities, and where possible, facilitate direct engagement with civil society in countries facing high risk of human rights abuse.

Building societal threat assessments into product design decisions is something that I think could move the needle much more efficiently than trying to just stuff more humans into the gaps.

Content moderation that focuses only on messages or accounts, rather than the actors behind them, also comes up short. The Myanmar Internet Project’s report highlights Meta’s failure—as late as 2022—to keep known bad actors involved in the Rohingya genocide off Facebook, despite its big takedowns and rules nominally preventing the military and the extremists of Ma Ba Tha from using Facebook to distribute their propaganda:

…most, if not all, of the key stakeholders in the anti-Rohingya campaign continue to maintain a presence on Facebook and to leverage Facebook and other platforms for influence. As we repeatedly warned the platforms, the bulk of the harmful content we face comes from a handful of actors, who have been consistently violating Terms of Services and Community Standards.

The Myanmar Internet Project recommends that social media companies rethink their moderation approach to “more effectively deter and—where warranted—restrict actors with a track record of violating their rules and terms of services, including by enforcing sanctions and restrictions at an actor and not account level, and by developing better strategies to detect and remove accounts of actors under bans.”

This is…going to be complicated on federated networks, even if I set aside the massive question of how federated networks will moderate messages originating outside their instances that require language and culture expertise they lack.

I’ll focus here on Mastodon because it’s big and it’s been federated for years. Getting rid of obvious, known bad actors at the instance level is something Mastodon excels at—viz the full-scale quarantine of Gab. If you’re on a well-moderated, mainstream instance, a ton of truly horrific stuff is going to be excised from your experience on Mastodon because the bad instances get shitcanned. And because there’s no central “public square” to contest on Mastodon, with all the corporations-censoring-political-speech-at-scale issues those huge ~public squares raise, many instance admins feel free to use a pretty heavy hand in throwing openly awful individuals and instances out of the pool.

But imagine a sophisticated adversary with a sustained interest in running networks of both covert and overt accounts on Mastodon, and things rapidly get more complicated.

Lai and Roth weigh in on this issue, noting that the fediverse currently lacks capability and capacity for tracking bad actors through time in a structured way, and also doesn’t presently have much in the way of infrastructure for collaborative actor-level threat analysis:

First, actor-level analysis requires time-consuming and labor-intensive tracking and documentation. Differentiating between a commercially motivated spammer and a state-backed troll farm often requires extensive research, extending far beyond activity on one platform or website. The already unsustainable economics of fediverse moderation seem unlikely to be able to accommodate this kind of specialized investigation.

Second, even if you assume moderators can, and do, find accounts engaged in this type of manipulation— and understand their actions and motivations with sufficient granularity to target their activity—the burden of continually monitoring them is overwhelming. Perhaps more than anything else, disinformation campaigns demonstrate the “persistent” in “advanced persistent threat”: a single disinformation campaign, like China-based Spamouflage Dragon, can be responsible for tens or even hundreds of thousands of fake accounts per month, flooding the zone with low-quality content. The moderation tools built into platforms like Mastodon do not offer appropriate targeting mechanisms or remediations to moderators that could help them keep pace with this volume of activity.… Without these capabilities to automate enforcement based on long-term adversarial understanding, the unit economics of manipulation are skewed firmly in favor of bad actors, not defenders.

There’s also the perhaps even greater challenge of working across instances—and ideally, across platforms—to identify and root out persistent threats. Lai and Roth again:

From an analytic perspective, it can be challenging, if not impossible, to recognize individual accounts or posts as connected to a disinformation campaign in the absence of cross-platform awareness of related conduct. The largest platforms—chiefly, Meta, Google, and Twitter (pre-acquisition)—regularly shared information, including specific indicators of compromise tied to particular campaigns, with other companies in the ecosystem in furtherance of collective security. Information sharing among platform teams represents a critical way to build this awareness—and to take advantage of gaps in adversaries’ operational security to detect additional deceptive accounts and campaigns.… Federated moderation makes this kind of cross-platform collaboration difficult.

I predict that many advocates of federated and decentralized networks will believe that Lai and Roth are overstating these gaps in safety capabilities, but I hope more developers, instance administrators, and especially funders, will take this as an opportunity to prioritize scaled-up tooling and institution-building.

Edited to add, October 16, 2023: Independent Federated Trust and Safety (IFTAS), an organization working on supporting and improving trust and safety on federated networks, just released the results of their Moderator Needs Assessment, highlighting needs for financial, legal, technical, and cultural support.

Meta’s fatal flaw

I think if you ask people why Meta failed to keep itself from being weaponized in Myanmar, they’ll tell you about optimizing for engagement and ravenously, heedlessly pursuing expansion and profits and continuously fucking up every part of content moderation.

I think those things are all correct, but there’s something else, too, though “heedless” nods toward it: As a company determined to connect the world at all costs, Meta failed, spectacularly, over and over, to make the connections that mattered, between their own machinery and the people it hurt.

So I think there are two linked things Meta could have done to prevent so much damage, which are to listen out for people in trouble and meaningfully correct course.

“Listening out” is from Ursula Le Guin, who said it in a 2015 interview with Choire Sicha that has never left my mind. She was speaking about the challenge of working while raising children while her partner taught:

…it worked out great, but it took full collaboration between him and me. See, I cannot write when I’m responsible for a child. They are full-time occupations for me. Either you’re listening out for the kids or you’re writing. So I wrote when the kids went to bed. I wrote between nine and midnight those years.

This passage is always with me because the only time I’m not listening out, at least a little bit, is when my kid is completely away from the house at school. Even when she’s sleeping, I’m half-concentrating on whatever I’m doing and…listening out. I can’t wear earplugs at night or I hallucinate her calling for me in my sleep. This is not rational! But it’s hardwired. Presumably this will lessen once she leaves for college or goes to sea or whatever, but I’m not sure it will.

So listening out is meaningful to me for embarrassingly, viscerally human reasons. Which makes it not something a serious person puts into an essay about the worst things the internet can do. I’m putting it here anyway because it cuts to the thing I think everyone who works on large-scale social networks and tools needs to wire into our brainstems.

In Myanmar and in Sophie Zhang’s disclosures about the company’s refusal to prioritize the elimination of covert influence networks, Meta demonstrated not just an unwillingness to listen to warnings, but a powerful commitment to not permitting itself to understand or act on information about the dangers it was worsening around the world.

It’s impossible for me to read the Haugen and Zhang disclosures and not think of the same patterns of dismissing and hiding dangerous knowledge that we’ve seen from tobacco companies (convicted of racketeering and decades-long patterns of deception over tobacco’s dangers), oil companies (being sued by the state of California over decades-long patterns of deception over their contributions to climate change), or the Sacklers (who pled guilty to charges based on a decade-long pattern of deception over their contribution to the opioid epidemic).

But you don’t have to be a villain to succumb to the temptation to push away inconvenient knowledge. It often takes nothing more than being idealistic or working hard for little (or no) pay to believe that the good your work does necessarily outweighs its potential harms—and that especially if you’re offering it for free, any trouble people get into is their own fault. They should have done their own research, after all.

And if some people are warning about safety problems on an open source network where the developers and admins are trying their best, maybe they should just go somewhere else, right? Or maybe they’re just exaggerating, which is the claim I saw the most on Mastodon when the Stanford Internet Observatory published its report on CSAM on the fediverse.

We can’t have it both ways. Either people making and freely distributing tools and systems have some responsibility for their potential harms, or they don’t. If Meta is on the hook, so are people working in open technology. Even nice people with good intentions.

So: Listening out. Listening out for signals that we’re steering into the shoals. Listening out like it’s our own children at the sharp end of the worst things our platforms can do.

The warnings about Myanmar came from academics and digital rights people. They came, above all, from Myanmar, nearly 8,000 miles from Palo Alto. Twenty hours on a plane. Too far to matter, for too many years.


The civil society people who issued many of the warnings to Meta have clear thoughts about the way to avoid recapitulating Meta’s disastrous structural callousness during the years leading up to the genocide of the Rohingya. Several of those recommendations involve diligent, involved, hyper-specific listening to people on the ground about not only content moderation problems, but also dangers in the core functionality of social products themselves.

Nadah Feteih and Elodie Vialle’s recent piece in Tech Policy Press, Centering Community Voices: How Tech Companies Can Better Engage with Civil Society Organizations offers a really strong introduction to what that kind of consultative process might be like for big platforms. I think it also offers about a dozen immediately useful clues about how smaller, more distributed, and newer networks might proceed as well.

But let’s get a little more operational.

“Do better” requires material support

It’s impossible to talk about any of this without talking about the resource problem in open source and federated networks—most of the sector is critically underfunded and built on gift labor, which has shaping effects on who can contribute, who gets listened to, and what gets done.

It would be unrealistic bordering on goofy to expect everyone who contributes to projects like Mastodon and Lemmy or runs a small instance on a federated network to independently develop in-depth human-rights expertise. It’s just about as unrealistic to expect even lead developers who are actively concerned about safety to have the resources and expertise to arrange close consultation with relevant experts in digital rights, disinformation, and complex, culturally specific issues globally.

There are many possible remedies to the problems and gaps I’ve tried to sketch above, but the one I’ve been daydreaming about a lot is the development of dedicated, cross-cutting, collaborative institutions that work not only within the realm of trust and safety as it’s constituted on centralized platforms, but also on hands-on research that brings the needs and voices of vulnerable people and groups into the heart of design work on protocols, apps, and tooling.

Maintainers and admins all over the networks are at various kinds of breaking points. Relatively few have the time and energy to push through year after year of precariousness and keep the wheels on out of sheer cussedness. And load-bearing personalities are not, I think, a great way to run a stable and secure network.

Put another way, rapidly growing, dramatically underfunded networks characterized by overtaxed small moderation crews and underpowered safety tooling present a massive attack surface. Believing that the same kinds of forces that undermined the internet in Myanmar won’t be able to weaponize federated networks because the nodes are smaller is a category error—most of the advantages of decentralized networks can be turned to adversaries’ advantage almost immediately.

Flinging money indiscriminately isn’t a cure, but without financial support that extends beyond near-subsistence for a few people, it’s very hard to imagine free and open networks being able to skill up in time to handle the kinds of threats and harms I looked at in the first three posts of this series.

The problem may look different for venture-funded projects like Bluesky, but I don’t know. I think in a just world, the new CTO of Mastodon wouldn’t be working full-time for free.

I also think that in that just world, philanthropic organizations with interests in the safety of new networks would press for and then amply fund collective, collaborative work across protocols and projects, because regardless of my own concerns and preferences, everyone who uses any of the new generation of networks and platforms deserves to be safe.

We all deserve places to be together online that are, at minimum, not inimical to offline life.

So what if you’re not a technologist, but you nevertheless care about this stuff? Unsurprisingly, I have thoughts.

Everything, everywhere, all at once

The inescapable downside of not relying on centralized networks to fix things is that there’s no single entity to try to pressure. The upside is that we can all work toward the same goals—better, safer, freer networks—from wherever we are. And we can work toward holding both centralized and new-school networks accountable, too.

If you live someplace with at least semi-democratic representation in government, you may be able to accomplish a lot by sending things like Amnesty International’s advocacy report and maybe even this series to your representatives, where there’s a chance their staffers will read them and be able to mount a more effective response to technological and corporate failings.

If you have an account on a federated network, you can learn about the policies and plans of your own instance administrators—and you can press them (I would recommend politely) about their plans for handling big future threats like covert influence networks, organized but distributed hate campaigns, actor-level threats, and other threats we’ve seen on centralized networks and can expect to see on decentralized ones.

And if you have time or energy or money to spare, you can throw your support (material or otherwise) behind collaborative institutions that seek to reduce societal harms.

On Meta itself

It’s my hope that the 30,000-odd words of context, evidence, and explanation in parts 1–3 of this series speak for themselves.

I’m sure some people, presumably including some who’ve worked for Meta or still do, will read all of those words and decide that Meta had no responsibility for its actions and failures to act in Myanmar. I don’t think I have enough common ground with those readers to try to discuss anything.

There are quite clearly people at Meta who have tried to fix things. A common thread across internal accounts is that Facebook’s culture of pushing dangerous knowledge away from its center crushes many employees who try to protect users and societies. In cases like Sophie Zhang’s, Meta’s refusal to understand and act on what its own employees had uncovered is clearly a factor in employee health breakdowns.

And the whistleblower disclosures from the past few years make it clear that many people over many years were trying to flag, prevent, and diagnose harm. And to be fair, I’m sure lots of horrible things were prevented. But it’s impossible to read Frances Haugen’s disclosures or Sophie Zhang’s story and believe that the company is doing everything it can, except in the sense that it seems unable to conceive of meaningfully redesigning its products—and rearranging its budgets—to stop hurting people.

It’s also impossible for me to read anything Meta says on the record without thinking about the deceptive, blatant, borderline contemptuous runarounds it’s been doing for years over its content moderation performance. (That’s in Part III, if you missed it.)

Back in 2018, Adam Mosseri, who was in charge of News Feed during the genocide—a major “recommendation surface” on which Facebook’s algorithms boosted genocidal anti-Rohingya messages in Myanmar—said that he’d lost some sleep over what had happened.

The lost sleep apparently didn’t amount to much in the way of product-design changes, considering that Global Witness found Facebook doing pretty much the exact same things with the same kinds of messages three years later.

But let’s look at what Mosseri actually said:

There is false news, not only on Facebook but in general in Myanmar. But there are no, as far as we can tell, third-party fact-checking organizations with which we can partner, which means that we need to rely instead on other methods of addressing some of these issues. We would look heavily, actually, for bad actors and things like whether or not they’re violating our terms of service or community standards to try and use those levers to try and address the proliferation of some problematic content. We also try to rely on the community and be as effective as we can at changing incentives around things like click-bait or sensational headlines, which correlate, but aren’t the same as false news.

Those are all examples of how we’re trying to take the issue seriously, but we lose some sleep over this. I mean, real-world harm and what’s happening on the ground in that part of the world is actually one of the most concerning things for us and something that we talk about on a regular basis. Specifically, about how we might be able to do more and be more effective, and more quickly.

This is in 2018, so six years after Myanmar’s digital-rights and civil-society organizations started contacting Meta to tell them about the organized hate campaigns on Facebook in Myanmar, which Meta appears to have ignored, because all those organized campaigns were still running through the peak of the Rohingya genocide in 2016 and 2017.

This interview also happens several years after Meta started relying on members of those same Burmese organizations to report content—because, if you remember from the earlier posts in this series, they hadn’t actually translated the Community Standards or the reporting interface itself. Or hired enough Burmese-speaking moderators to handle a country bigger than a cruise ship. It’s also interesting that Mosseri reported that Meta couldn’t find any “third-party fact-checking organizations” given that MIDO, which was one of the organizations reporting content problems to them, actually ran its own fact-checking operation.

And the incentives on the click-bait Mosseri mentions? That would be the market for fake and sensationalist news that Meta created by rolling out Instant Articles, which directly funded the development of Burmese-language clickfarms, and which pretty much destroyed Myanmar’s online media landscape back in 2016.

Mosseri and his colleagues talked about it on a regular basis, though.

I was going to let myself be snarky here and note that being in charge of News Feed during a genocide the UN Human Rights Council linked to Facebook doesn’t seem to have slowed Mosseri down personally, either. He’s the guy in charge of Meta’s latest social platform, Threads, after all.

But maybe it goes toward explaining why Threads refuses to allow users to search for potentially controversial topics, including the effects of an ongoing pandemic. This choice is being widely criticized as a failure to let people discuss important things. It feels to me like more of an admission that Meta doesn’t think it can do the work of content moderation, so it’s designing the product to avoid the biggest dangers.

It’s a clumsy choice, certainly. And it’s weird, after a decade of social media platforms charging in with no recognition that they’re making things worse. But if the alternative is returning to the same old unwinnable fight, maybe just not going there is the right call. (I expect that it won’t last.)

The Rohingya are still waiting

The Rohingya are people, not lessons. Nearly a million of them have spent at least six years in Bangladeshi camps that make up the densest refugee settlement on earth. Underfunded, underfed, and prevented from working, Rohingya people in the camps are vulnerable to climate-change-worsened weather, monsoon flooding, disease, fire, and gang violence. The pandemic has concentrated the already intense restrictions and difficulties of life in these camps.

If you have money, Global Giving’s Rohingya Refugee Relief Fund will get it into the hands of people who can use it.

The Canadian documentary Wandering: A Rohingya Story provides an intimate look at life in Kutupalong, the largest of the refugee camps. It’s beautifully and lovingly made.

Screenshot from the film, Wandering: A Rohingya Story, showing a Rohingya mother smoothing her hands over her daughter's laughing face.

From Wandering: A Rohingya Story. This mother and her daughter destroyed me.

Back in post-coup Myanmar, hundreds of thousands of people are risking their lives resisting the junta’s brutal oppression. Mutual Aid Myanmar is supporting their work. James C. Scott (yes) is on their board.

In the wake of the coup, the National Unity Government—the shadow-government wing of the Burmese resistance against the junta—has officially recognized the wrongs done to the Rohingya, and committed itself to dramatic change, should the resistance prevail:

The National Unity Government recognises the Rohingya people as an integral part of Myanmar and as nationals. We acknowledge with great shame the exclusionary and discriminatory policies, practices, and rhetoric that were long directed against the Rohingya and other religious and ethnic minorities. These words and actions laid the ground for military atrocities, and the impunity that followed them emboldened the military’s leaders to commit countrywide crimes at the helm of an illegal junta.

Acting on our ‘Policy Position on the Rohingya in Rakhine State’, the National Unity Government is committed to creating the conditions needed to bring the Rohingya and other displaced communities home in voluntary, safe, dignified, and sustainable ways.

We are also committed to social change and to the complete overhaul of discriminatory laws in consultation with minority communities and their representatives. A Rohingya leader now serves as Deputy Minister of Human Rights to ensure that Rohingya perspectives support the development of government policies and programs and legislative reform.

From the refugee camps, Rohingya youth activists are working to build solidarity between the Rohingya people and the mostly ethnically Bamar people in Myanmar who, until the coup, allowed themselves to believe the Tatmadaw’s messages casting the Rohingya as existential threats. Others, maybe more understandably, remain wary of the NUG’s claims that the Rohingya will be welcomed back home.


In Part II of this series, I tried to explain—clearly but at speed—how the 2016 and 2017 attacks on the Rohingya, which accelerated into full-scale ethnic cleansing and genocide, began when Rohingya insurgents carried out attacks on Burmese security forces and committed atrocities against civilians, after decades of worsening repression and deprivation in Myanmar’s Rakhine State by the Burmese government.

This week, the social media feed of one of the young Rohingya activists featured in a story I linked above is filled with photographs from Gaza, where two million people live inside fences and walls, and whose hospitals and schools and places of worship are being bombed by the Israeli military because of Hamas’s horrific attacks on Israeli civilians, after decades of worsening repression and deprivation in Gaza and the West Bank by the Israeli government.

We don’t need the internet to make our world a hell. But I don’t think we should forgive ourselves for letting our technology make the world worse.

I want to make our technologies into better tools for the many, many people devoted to building the kinds of human-level solidarity and connection that can get more of us through our present disasters to life on the other side.

13 October 2023

Meta in Myanmar, Part III: The Inside View

“Well, Congressman, I view our responsibility as not just building services that people like to use, but making sure that those services are also good for people and good for society overall.” — Mark Zuckerberg, 2018

In the previous two posts in this series, I did a long but briskly paced early history of Meta and the internet in Myanmar—and the hateful and dehumanizing speech that came with it—and then looked at what an outside-the-company view could reveal about Meta’s role in the genocide of the Rohingya in 2016 and 2017.

In this post, I’ll look at what two whistleblowers and a crucial newspaper investigation reveal about what was happening inside Meta at the time. Specifically, the disclosed information:

  • gives us a quantitative view of Meta’s content moderation performance—which, in turn, highlights a deceptive PR routine Meta uses when questioned about moderation;
  • clarifies what Meta knew about the effects of its algorithmic recommendations systems; and
  • reveals a parasitic takeover of the Facebook platform by covert influence campaigns around the world—including in Myanmar.

Before we get into that, a brief personal note. There are few ways to be in the world that I enjoy less than “breathless conspiratorial.” That rhetorical mode muddies the water when people most need clarity and generates an emotional charge that works against effective decision-making. I really don’t like it. So it’s been unnerving to synthesize a lot of mostly public information and come up with results that wouldn’t look completely out of place in one of those overwrought threads.

I don’t know what to do with that except to be forthright but not dramatic, and to treat my readers’ endocrine systems with respect by avoiding needless flourishes. But the story is just rough and many people in it do bad things. (You can read my meta-post about terminology and sourcing if you want to see me agonize over the minutiae.)

Content warnings for the post: The whole series is about genocide and hate speech. There are no graphic descriptions or images, and this post includes no slurs or specific examples of hateful and inciting messages, but still. (And there’s a fairly unpleasant photograph of a spider at about the 40% mark.)

The disclosures

When Frances Haugen, a former product manager on Meta’s Civic Integrity team, disclosed a ton of internal Meta docs to the SEC—and several media outlets—in 2021, I didn’t really pay attention. I was pandemic-tired and I didn’t think there’d be much in there that I didn’t know. I was wrong!

Frances Haugen’s disclosures are of generational importance, especially if you’re willing to dig down past the US-centric headlines. Haugen has stated that she came forward because of things outside the US—Myanmar and its horrific echo years later in Ethiopia, specifically, and the likelihood that it would all just keep happening. So it makes sense that the docs she disclosed would be highly relevant, which they are.

There are eight disclosures in the bundle of information Haugen delivered via lawyers to the SEC, and each is about one specific way Meta “misled investors and the public.” Each disclosure takes the form of a letter (which probably has a special legal name I don’t know) and a huge stack of primary documents. The majority of those documents—internal posts, memos, emails, comments—haven’t yet been made public, but the letters themselves include excerpts, and subsequent media coverage and straightforward doc dumps have revealed a little bit more. When I cite the disclosures, I’ll point to the place where you can read the longest chunk of primary text—often that’s just the little excerpts in the letters, but sometimes we have a whole—albeit redacted—document to look at.

Before continuing, I think it’s only fair to note that the disclosures we see in public are necessarily those that run counter to Meta’s public statements, because otherwise there would be no need to disclose them. And because we’re only getting excerpts, there’s obviously a ton of context missing—including, presumably, dissenting internal views. I’m not interested in making a handwavey case based on one or two people inside a company making wild statements. So I’m only emphasizing points that are supported in multiple, specific excerpts.

Let’s start with content moderation and what the disclosures have to say about it.

How much dangerous stuff gets taken down?

We don’t know how much “objectionable content” is actually on Facebook—or on Instagram, or Twitter, or any other big platform. The companies running those platforms don’t know the exact numbers either, but what they do have are reasonably accurate estimates. We know they have estimates because sampling and human-powered data classification is how you train the AI classifiers required to do content-based moderation—removing posts and comments—at mass scale. And that process necessarily lets you estimate from your samples roughly how much of a given kind of problem you’re seeing. (This is pretty common knowledge, but it’s also confirmed in an internal doc I quote below.)
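
To make the sampling point concrete, here’s a minimal sketch (mine, not Meta’s; every function and variable name is hypothetical) of how labeling a random sample of posts yields both classifier training data and a prevalence estimate:

```python
import math
import random

def estimate_prevalence(posts, label_fn, sample_size=10_000, seed=42):
    """Estimate the share of `posts` that violates a policy by labeling a random sample.

    `label_fn` stands in for human reviewers: it returns True when a post violates
    the policy. The labeled sample is the same raw material you would use to train
    a classifier, which is why doing this work at all yields a prevalence estimate.
    """
    random.seed(seed)
    sample = random.sample(posts, min(sample_size, len(posts)))
    violations = sum(1 for post in sample if label_fn(post))
    p = violations / len(sample)
    # Rough 95% confidence interval via the normal approximation.
    margin = 1.96 * math.sqrt(p * (1 - p) / len(sample))
    return p, margin
```

The point isn’t the statistics; it’s that any platform doing sampling-and-labeling at this scale can’t help but know roughly how much violating content it hosts.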

The platforms aren’t sharing those estimates with us because no one’s forcing them to. And probably also because, based on what we’ve seen from the disclosures, the numbers are quite bad. So I want to look at how bad they are, or recently were, on Facebook. Alongside that, I want to point out the most common way Meta distracts reporters and governing bodies from its terrible stats, because I think it’s a very useful thing to be able to spot.

One of Frances Haugen’s SEC disclosure letters is about Meta’s failures to moderate hate speech. It’s helpfully titled, “Facebook misled investors and the public about ‘transparency’ reports boasting proactive removal of over 90% of identified hate speech when internal records show that as little as 3-5% of ‘hate’ speech is actually removed.”1

Here’s the excerpt from the internal Meta document from which that “3–5%” figure is drawn:

…we’re deleting less than 5% of all of the hate speech posted to Facebook. This is actually an optimistic estimate—previous (and more rigorous) iterations of this estimation exercise have put it closer to 3%, and on V&I [violence and incitement] we’re deleting somewhere around 0.6%…we miss 95% of violating hate speech.2

Here’s another quote from a different memo excerpted in the same disclosure letter:

[W]e do not … have a model that captures even a majority of integrity harms, particularly in sensitive areas … We only take action against approximately 2% of the hate speech on the platform. Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term.3

Another estimate from a third internal document:

We seem to be having a small impact in many language-country pairs on Hate Speech and Borderline Hate, probably ~3% … We are likely having little (if any) impact on violence.4

Here’s a fourth one, specific to a study about Facebook in Afghanistan, which I include to help contextualize the global numbers:

While Hate Speech is consistently ranked as one of the top abuse categories in the Afghanistan market, the action rate for Hate Speech is worryingly low at 0.23 per cent.5

I don’t think these figures need a ton of commentary, honestly. I would agree that removing less than a quarter of a percent of hate speech is indeed “worryingly low,” as is removing 0.6% of violence and incitement messages. I think removing even 5% of hate speech—the highest number cited in the disclosures—is objectively terrible performance, and I think most people outside of the tech industry would agree with that. Which is presumably why Meta has put a ton of work into muddying the waters around content moderation.

So back to that SEC letter with the long name. It points something out, which is that Meta has long claimed that Facebook “proactively” detects between 95% (in 2020, globally) and 98% (in Myanmar, in 2021) of all the posts it removes because they’re hate speech—before users even see them.

At a glance, this looks good. Ninety-five percent is a lot! But since we know from the disclosed material that based on internal estimates the takedown rates for hate speech are at or below 5%, what’s going on here?

Here’s what Meta is actually saying: Sure, they might identify and remove only a tiny fraction of dangerous and hateful speech on Facebook, but of that tiny fraction, their AI classifiers catch about 95–98% before users report it. That’s literally the whole game, here.

So…the most generous number from the disclosed memos has Meta removing 5% of hate speech on Facebook. That would mean that for every 2,000 hateful posts or comments, Meta removes about 100: 95 automatically and 5 via user reports. In this example, 1,900 of the original 2,000 messages remain up and circulating. So based on the generous 5% removal rate, their AI systems nailed…4.75% of hate speech. That’s the level of performance they’re bragging about.
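
If it helps, here’s that arithmetic spelled out as a tiny sketch; the numbers are the hypothetical 2,000-post example from the paragraph above, not anything from Meta’s own reporting:

```python
# Back-of-the-envelope restatement of the example above (illustrative numbers only).
total_hate_posts = 2_000        # hateful posts that actually exist
removal_rate = 0.05             # the most generous internal estimate: 5% removed
proactive_share = 0.95          # of what's removed, ~95% is flagged by AI before a report

removed = total_hate_posts * removal_rate         # 100 posts come down
removed_by_ai = removed * proactive_share         # 95 of those are caught by classifiers
removed_via_reports = removed - removed_by_ai     # 5 come from user reports
still_up = total_hate_posts - removed             # 1,900 remain up and circulating

ai_catch_rate = removed_by_ai / total_hate_posts  # 95 / 2,000 = 4.75%
print(f"AI caught {ai_catch_rate:.2%} of all hate speech; {still_up:.0f} posts stay up")
```

The “95% proactive” figure describes a share of removals, not a share of all hate speech, and that substitution is the entire trick.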

You don’t need to take my word for any of this—Wired ran a critique breaking it down in 2021 and Ranking Digital Rights has a strongly worded post about what Meta claims in public vs. what the leaked documents reveal to be true, including this content moderation math runaround.

Meta does this particular routine all the time.

The shell game

Here’s Mark Zuckerberg on April 10th, 2018, answering a question in front of the Senate’s Commerce and Judiciary committees. He says that hate speech is really hard to find automatically and then pivots to something that he says is a real success, which is “terrorist propaganda,” which he simplifies immediately to “ISIS and Al Qaida content.” But that stuff? No problem:

Contrast [hate speech], for example, with an area like finding terrorist propaganda, which we’ve actually been very successful at deploying A.I. tools on already. Today, as we sit here, 99 percent of the ISIS and Al Qaida content that we take down on Facebook, our A.I. systems flag before any human sees it. So that’s a success in terms of rolling out A.I. tools that can proactively police and enforce safety across the community.6

So that’s 99% of…the unknown percentage of this kind of content that’s actually removed.

Zuckerberg actually tries to do the same thing the next day, April 11th, before the House Energy and Commerce Committee, but he whiffs the maneuver:

…we’re getting good in certain areas. One of the areas that I mentioned earlier was terrorist content, for example, where we now have A.I. systems that can identify and—and take down 99 percent of the al-Qaeda and ISIS-related content in our system before someone—a human even flags it to us. I think we need to do more of that.7

The version Zuckerberg says right there, on April 11th, is what I’m pretty sure most people think Meta means when they go into this stuff—but as stated, it’s a lie.

No one in those hearings presses Zuckerberg on those numbers—and when Meta repeats the move in 2020, plenty of reporters fall into the trap and make untrue claims favorable to Meta:

…between its AI systems and its human content moderators, Facebook says it’s detecting and removing 95% of hate content before anyone sees it. —Fast Company

About 95 percent of hate speech on Facebook gets caught by algorithms before anyone can report it… —Ars Technica

Facebook said it took action on 22.1 million pieces of hate speech content to its platform globally last quarter and about 6.5 million pieces of hate speech content on Instagram. On both platforms, it says about 95% of that hate speech was proactively identified and stopped by artificial intelligence. —Axios

The company said it now finds and eliminates about 95% of the hate speech violations using automated software systems before a user ever reports them… —Bloomberg

This is all not just wrong but wildly wrong if you have the internal numbers in front of you.

I’m hitting this point so hard not because I want to point out ~corporate hypocrisy~ or whatever, but because this deceptive runaround is consequential for two reasons: The first is that it provides instructive context about how to interpret Meta’s public statements. The second is that it actually says extremely dire things about Meta’s only hope for content-based moderation at scale, which is their AI-based classifiers.

Here’s Zuckerberg saying as much to a congressional committee:

…one thing that I think is important to understand overall is just the sheer volume of content on Facebook makes it so that we can’t—no amount of people that we can hire will be enough to review all of the content.… We need to rely on and build sophisticated A.I. tools that can help us flag certain content.8

This statement is kinda disingenuous in a couple of ways, but the central point is true: the scale of these platforms makes human review incredibly difficult. And Meta’s reasonable-sounding explanation is that this means they have to focus on AI. But by their own internal estimates, Meta’s AI classifiers are only identifying something in the range of 4.75% of hate speech on Facebook, and often considerably less. That seems like a dire stat for the thing you’re putting forward to Congress as your best hope!

The same disclosed internal memo that told us Meta was deleting between 3% and 5% of hate speech had this to say about the potential of AI classifiers to handle mass-scale content removals:

[O]ur current approach of grabbing a hundred thousand pieces of content, paying people to label them as Hate or Not Hate, training a classifier, and using it to automatically delete content at 95% precision is just never going to make much of a dent.9

Getting content moderation to work for even extreme and widely reviled categories of speech is obviously genuinely difficult, so I want to be extra clear about a foundational piece of my argument.

Responsibility for the machine

I think that if you make a machine and hand it out for free to everyone in the world, you’re at least partially responsible for the harm that the machine does.

Also, even if you say, but it’s very difficult to make the machine safer!” I don’t think that reduces your responsibility so much as it makes you look shortsighted and bad at machines.

Beyond the bare fact of difficulty, though, I think the more what harm the machine does deviates from what people might expect a machine that looks like this to do, the more responsibility you bear: If you offer everyone in the world a grenade, I think that’s bad, but also it won’t be surprising when people who take the grenade get hurt or hurt someone else. But when you offer everyone a cute little robot assistant that turns out to be easily repurposed as a rocket launcher, I think that falls into another category.

Especially if you see that people are using your cute little robot assistant to murder thousands of people and elect not to disarm it because that would make it a little less cute.

This brings us to the algorithms.

“Core product mechanics”

Screencap of an internal Meta document titled “Facebook and responsibility,” with a header image of the Bart Simpson writing-on-the-chalkboard meme in which the writing on the board reads, “Facebook is responsible for ranking and recommendations!”

From a screencapped version of “Facebook and responsibility,” one of the disclosed internal documents.

In the second post in this series, I quoted people in Myanmar who were trying to cope with an overwhelming flood of hateful and violence-inciting messages. It felt obvious on the ground that the worst, most dangerous posts were getting the most juice.

Thanks to the Haugen disclosures, we can confirm that this was also understood inside Meta.

In 2019, a Meta employee wrote a memo called “What is Collateral damage.” It included these statements (my emphasis):

We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.

If integrity takes a hands-off stance for these problems, whether for technical (precision) or philosophical reasons, then the net result is that Facebook, taken as a whole, will be actively (if not necessarily consciously) promoting these types of activities. The mechanics of our platform are not neutral. 10

If you work in tech or if you’ve been following mainstream press accounts about Meta over the years, you presumably already know this, but I think it’s useful to establish this piece of the internal conversation.

Here’s a long breakdown from 2020 about the specific parts of the platform that actively put “unconnected content”—messages that aren’t from friends or Groups people subscribe to—in front of Facebook users. It comes from an internal post called “Facebook and responsibility” (my emphasis):

Facebook is most active in delivering content to users on recommendation surfaces like “Pages you may like,” “Groups you should join,” and suggested videos on Watch. These are surfaces where Facebook delivers unconnected content. Users don’t opt-in to these experiences by following other users or Pages. Instead, Facebook is actively presenting these experiences…

News Feed ranking is another way Facebook becomes actively involved in these harmful experiences. Of course users also play an active role in determining the content they are connected to through feed, by choosing who to friend and follow. Still, when and whether a user sees a piece of content is also partly determined by the ranking scores our algorithms assign, which are ultimately under our control. This means, according to ethicists, Facebook is always at least partially responsible for any harmful experiences on News Feed.

This doesn’t owe to any flaw with our News Feed ranking system, it’s just inherent to the process of ranking. To rank items in Feed, we assign scores to all the content available to a user and then present the highest-scoring content first. Most feed ranking scores are determined by relevance models. If the content is determined to be an integrity harm, the score is also determined by some additional ranking machinery to demote it lower than it would have appeared given its score. Crucially, all of these algorithms produce a single score; a score Facebook assigns. Thus, there is no such thing as inaction on Feed. We can only choose to take different kinds of actions.11
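To make the memo’s point concrete, here’s a toy sketch of what ranking plus integrity demotion looks like. To be clear, this is not Meta’s code: the field names and the demotion factor are invented for illustration. What it captures is the memo’s claim that every item ends up with exactly one score the platform assigns, so there is no neutral “inaction” on Feed.

```python
# Toy illustration of ranking plus integrity demotion. Not Meta's code:
# the field names and the demotion factor are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    relevance: float      # score from a relevance model
    integrity_harm: bool  # flagged by an integrity classifier

DEMOTION_FACTOR = 0.1     # hypothetical multiplier that pushes flagged content down

def final_score(post: Post) -> float:
    # Every post gets exactly one score; "demotion" just means a lower score.
    score = post.relevance
    if post.integrity_harm:
        score *= DEMOTION_FACTOR
    return score

def rank_feed(posts: List[Post]) -> List[Post]:
    # There is no "inaction": whatever ordering comes out, the platform chose it.
    return sorted(posts, key=final_score, reverse=True)

feed = rank_feed([
    Post("friendly-update", relevance=0.4, integrity_harm=False),
    Post("engagement-bait", relevance=0.9, integrity_harm=True),
])
print([p.post_id for p in feed])  # ['friendly-update', 'engagement-bait']
```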

The next few quotes will apply directly to US concerns, but they’re clearly broadly applicable to the 90% of Facebook users who are outside the US and Canada, and whose disinfo concerns receive vastly fewer resources.

This one is from an internal Meta doc from November 5, 2020:

Not only do we not do something about combustible election misinformation in comments, we amplify them and give them broader distribution.12

When Meta staff tried to take the measure of their own recommendation systems’ behavior, they found that the systems led a fresh, newly made account into disinfo-infested waters very quickly:

After a small number of high quality/verified conservative interest follows… within just one day Page recommendations had already devolved towards polarizing content.

Although the account set out to follow conservative political news and humor content generally, and began by following verified/high quality conservative pages, Page recommendations began to include conspiracy recommendations after only 2 days (it took <1 week to get a QAnon recommendation!)

Group recommendations were slightly slower to follow suit - it took 1 week for in-feed GYSJ recommendations to become fully political/right-leaning, and just over 1 week to begin receiving conspiracy recommendations.13

The same document reveals that several of the Pages and Groups Facebook’s systems recommend to its test user show multiple signs of association with “coordinated inauthentic behavior,” aka foreign and domestic covert influence campaigns, which we’ll get to very soon.

Before that, I want to offer just one example of algorithmic malpractice from Myanmar.

Flower speech

Panzagar campaign illustration depicting a Burmese girl with thanaka on her cheek and illustrated flowers coming from her mouth, flowing toward speech bubbles labeled with the names of Burmese towns and cities.

Back in 2014, Burmese organizations including MIDO and Yangon-based tech accelerator Phandeeyar collaborated on a carefully calibrated counter-speech project called Panzagar (flower speech). The campaign—which was designed to be delivered in person, in printed materials, and online—encouraged ordinary Burmese citizens to push back on hate speech in Myanmar.

Later that year, Meta, which had just been implicated in the deadly communal violence in Mandalay, joined with the Burmese orgs to turn their imagery into digital Facebook stickers that users could apply to posts calling for things like the annihilation of the Rohingya people. The stickers depict cute cartoon characters, several of which offer admonishments like, “Don’t be the source of a fire,” “Think before you share,” “Don’t you be spawning hate,” and “Let it go buddy!”

The campaign was widely and approvingly covered by western organizations and media outlets, and Meta got a lot of praise for its involvement.

But according to members of the Burmese civil society coalition behind the campaign, it turned out that the Panzagar Facebook stickers—which were explicitly designed as counterspeech—“carried significant weight in their distribution algorithm,” so anyone who used them to counter hateful and violent messages inadvertently helped those messages gain wider distribution.14

I mention the Panzagar incident not only because it’s such a head-smacking example of Meta favoring cosmetic, PR-friendly tweaks over meaningful redress, or because it reveals plain incompetence in the face of already-serious violence, but also because it gets to what I see as a genuinely foundational problem with Meta in Myanmar.

Even when the company was finally (repeatedly) forced to take notice of the dangers it was contributing to, actions that could actually have made a difference—like rolling out new programs only after local consultation and adaptation, scaling up culturally and linguistically competent human moderation teams in tandem with increasing uptake, and above all, altering the design of the product to stop amplifying the most charged messages—remained not just undone, but unthinkable because they were outside the company’s understanding of what the product’s design should take into consideration.

This refusal to connect core product design with accelerating global safety problems means that attempts at prevention and repair are relegated to window-dressing—or become actively counterproductive, as in the case of the Panzagar stickers, which absorbed the energy and efforts of local Burmese civil society groups and turned them into something that made the situation worse.

In a 2018 interview with Frontline about problems with Facebook, Meta’s former Chief Security Officer, Alex Stamos, returns again and again to the idea that security work properly happens at the product design level. Toward the end of the interview, he gets very clear:

Stamos: I think there was a structural problem here in that the people who were dealing with the downsides were all working together over kind of in the corner, right, so you had the safety and security teams, tight-knit teams that deal with all the bad outcomes, and we didn’t really have a relationship with the people who are actually designing the product.

Interviewer: You did not have a relationship?

Stamos: Not like we should have, right? It became clear—one of the things that became very clear after the election was that the problems that we knew about and were dealing with before were not making it back into how these products are designed and implemented.15

Meta’s content moderation was a disaster in Myanmar—and around the world—not only because it was treated and staffed like an afterthought, but because it was competing against Facebook’s core machinery.

And just as the house always wins, the core machinery of a mass-scale product built to boost engagement always defeats retroactive and peripheral attempts at cleanup.

This became especially true once organized commercial and nation-state actors figured out how to take over that machinery with large-scale fake Page networks boosted by fake engagement, which brings us to a less-discussed revelation: By the mid-2010s, Facebook had effectively become the equivalent of a botnet in the hands of any group, governmental or commercial, that could summon the will and resources to exploit it.

A lot of people did, including, predictably, some of the worst people in the world.

Meta’s zombie networks

Ophiocordyceps formicarum observed at the Mushroom Research Centre, Chiang Mai, Thailand; Steve Axford (CC BY-SA 3.0)

Content warning: The NYT article I link to below is important, but it includes photographs of mishandled bodies, including those of children. If you prefer not to see those, a “reader view” or equivalent may remove the images. (Sarah Sentilles’ 2018 article on which kinds of bodies US newspapers put on display may be of interest.)

In 2018, the New York Times published a front-page account of what really happened on Facebook in Myanmar, which is that beginning around 2013, Myanmar’s military, the Tatmadaw, set up a dedicated, ultra-secret anti-Rohingya hatefarm spread across military bases in which up to 700 staffers worked in shifts to manufacture the appearance of overwhelming support for the genocide the same military then carried out.16

When the NYT did their investigation in 2018, all those fake Pages were still up.

Here’s how it worked: First, the military set up a sprawling network of fake accounts and Pages on Facebook. The fake accounts and Pages focused on innocuous subjects like beauty, entertainment, and humor. These Pages were called things like, “Beauty and Classic,” “Down for Anything,” “You Female Teachers,” “We Love Myanmar,” and “Let’s Laugh Casually.” Then military staffers, some trained by Russian propaganda specialists, spent years tending the Pages and gradually building up followers.17

Then, using this array of long-nurtured fake Pages—and Groups, and accounts—the Tatmadaw’s propagandists used everything they’d learned about Facebook’s algorithms to post and boost viral messages that cast Rohingya people as part of a global Islamic threat, and as the perpetrators of a never-ending stream of atrocities. The Times reports:

Troll accounts run by the military helped spread the content, shout down critics and fuel arguments between commenters to rile people up. Often, they posted sham photos of corpses that they said were evidence of Rohingya-perpetrated massacres…18

That the Tatmadaw was capable of such a sophisticated operation shouldn’t have come as a surprise. Longtime Myanmar digital rights and technology researcher Victoire Rio notes that the Tatmadaw had been openly sending its officers to study in Russia since 2001, was “among the first adopters of the Facebook platform in Myanmar” and “launched a dedicated curriculum as part of its Defense Service Academy Information Warfare training.”19

What these messages did

I don’t have the access required to sort out which specific messages originated from extremist religious networks vs. which were produced by military operations, but I’ve seen a lot of the posts and comments central to these overlapping campaigns in the UN documents and human rights reports.

They do some very specific things:

  • They dehumanize the Rohingya: The Facebook messages speak of the Rohingya as invasive species that outbreed Buddhists and Myanmar’s real ethnic groups. There are a lot of bestiality images.
  • They present the Rohingya as inhumane, as sexual predators, and as an immediate threat: There are a lot of graphic photos of mangled bodies from around the world, most of them presented as Buddhist victims of Muslim killers—usually Rohingya. There are a lot of posts about Rohingya men raping, forcibly marrying, beating, and murdering Buddhist women. One post that got passed around a lot includes a graphic photo of a woman tortured and murdered by a Mexican cartel, presented as a Buddhist woman in Myanmar murdered by the Rohingya.
  • They connect the Rohingya to the global Islamic threat”: There’s a lot of equating Rohingya people with ISIS terrorists and assigning them group responsibility for real attacks and atrocities by distant Islamic terror organizations.

Ultimately, all of these moves flow into demands for violence. The messages call incessantly and graphically for mass killings, beatings, and forced deportations. They call not for punishment, but annihilation.

This is, literally, textbook preparation for genocide, and I want to take a moment to look at how it works.

Helen Fein is the author of several definitive books on genocide, a co-founder and first president of the International Association of Genocide Scholars, and the founder of the Institute for the Study of Genocide. I think her description of the ways genocidaires legitimize their attacks holds up extremely well despite having been published 30 years ago. Here, she classifies a specific kind of rhetoric as one of the defining characteristics of genocide:

Is there evidence of an ideology, myth, or an articulated social goal which enjoins or justifies the destruction of the victim? Besides the above, observe religious traditions of contempt and collective defamation, stereotypes, and derogatory metaphor indicating the victim is inferior, subhuman (animals, insects, germs, viruses) or superhuman (Satanic, omnipotent), or other signs that the victims were pre-defined as alien, outside the universe of obligation of the perpetrator, subhuman or dehumanized, or the enemy—i.e., the victim needs to be eliminated in order that we may live (Them or Us).20

It’s also necessary for genocidaires to make claims—often supported by manufactured evidence—that the targeted group itself is the true danger, often by projecting genocidal intent onto the group that will be attacked.

Adam Jones, the guy who wrote a widely used textbook on genocide, puts it this way:

One justifies genocidal designs by imputing such designs to perceived opponents. The Tutsis/ Croatians/Jews/Bolsheviks must be killed because they harbor intentions to kill us, and will do so if they are not stopped/prevented/annihilated. Before they are killed, they are brutalized, debased, and dehumanized—turning them into something approaching “subhumans” or “animals” and, by a circular logic, justifying their extermination.21

So before their annihilation, the target group is presented as outcast, subhuman, vermin, but also themselves genocidal—a mortal threat. And afterward, the extraordinary cruelties characteristic of genocide reassure those committing the atrocities that their victims aren’t actually people.

The Tatmadaw committed atrocities in Myanmar. I touched on them in Part II and I’m not going to detail them here. But the figuratively dehumanizing rhetoric I described in parts one and two can’t be separated from the literally dehumanizing things the Tatmadaw did to the humans they maimed and traumatized and killed. Especially now that it’s clear that the military was behind much of the rhetoric as well as the violent actions that rhetoric worked to justify.

In some cases, even the methods match up: The military’s campaign of intense and systematic sexual violence toward and mutilation of women and girls, combined with the concurrent mass murder of children and babies, feels inextricably connected to the rhetoric that cast the Rohingya as both a sexual and reproductive threat who endanger the safety of Buddhist women and outbreed the ethnicities that belong in Myanmar.

Genocidal communications are an inextricable part of a system that turns “ethnic tensions” into mass death. When we see that the Tatmadaw was literally the operator of covert hate and dehumanization propaganda networks on Facebook, I think the most rational way to understand those networks is as an integral part of the genocidal campaign.


After the New York Times article went live, Meta did two big takedowns. Nearly four million people were following the fake Pages identified by either the NYT or by Meta in follow-up investigations. (Meta had previously removed the Tatmadaw’s own official Pages and accounts and 46 news and opinion” Pages that turned out to be covertly operated by the military—those Pages were followed by nearly 12 million people.)

So given these revelations and disclosures, here’s my question: Does the deliberate, adversarial use of Facebook by Myanmar’s military as a platform for disinformation and propaganda take any of the heat off of Meta? After all, a sovereign country’s military is a significant adversary.

But here’s the thing—Alex Stamos, Facebook’s Chief Security Officer, had been trying since 2016 to get Meta’s management and executives to acknowledge and meaningfully address the fact that Facebook was being used as host for both commercial and state-sponsored covert influence ops around the world. Including in the only place where it was likely to get the company into really hot water: the United States.

“Oh fuck”

On December 16, 2016, Facebook’s newish Chief Security Officer, Alex Stamos—who now runs Stanford’s Internet Observatory—rang Meta’s biggest alarm bells by calling an emergency meeting with Mark Zuckerberg and other top-level Meta executives.

In that meeting, documented in Sheera Frenkel and Cecilia Kang’s book, The Ugly Truth, Stamos handed out a summary outlining the Russian capabilities. It read:

We assess with moderate to high confidence that Russian state-sponsored actors are using Facebook in an attempt to influence the broader political discourse via the deliberate spread of questionable news articles, the spread of information from data breaches intended to discredit, and actively engaging with journalists to spread said stolen information.22

“Oh fuck, how did we miss this?” Zuckerberg responded.

Stamos’ team had also uncovered a huge network of “false news sites on Facebook” posting and cross-promoting sensationalist bullshit, much of it political disinformation, along with examples of governmental propaganda operations from Indonesia, Turkey, and other nation-state actors. And the team had recommendations on what to do about it.

Frenkel and Kang paraphrase Stamos’ message to Zuckerberg (my emphasis):

Facebook needed to go on the offensive. It should no longer merely monitor and analyze cyber operations; the company had to gear up for battle. But to do so required a radical change in culture and structure. Russia’s incursions were missed because departments across Facebook hadn’t communicated and because no one had taken the time to think like Vladimir Putin.23

Those changes in culture and structure didn’t happen. Stamos began to realize that to Meta’s executives, his work uncovering the foreign influence networks, and his choice to bring them to the executives’ attention, were both unwelcome and deeply inconvenient.

All through the spring and summer of 2017, instead of retooling to fight the massive international category of abuse Stamos and his colleagues had uncovered, Facebook played hot potato with the information about the ops Russia had already run.

On September 21, 2017, while the Tatmadaw’s genocidal “clearance operations” were approaching their completion, Mark Zuckerberg finally spoke publicly about the Russian influence campaign for the first time.24

In the intervening months, the massive covert influence networks operating in Myanmar ground along, unnoticed.

Thanks to Sophie Zhang, a data scientist who spent two years at Facebook fighting to get networks like the Tatmadaw’s removed, we know quite a lot about why.

What Sophie Zhang found

In 2018, Facebook hired a data scientist named Sophie Zhang and assigned her to a new team working on fake engagement—and specifically on “scripted inauthentic activity,” or bot-driven fake likes and shares.

Within her first year on the team, Zhang began finding examples of bot-driven engagement being used for political messages in both Brazil and India ahead of their national elections. Then she found something that concerned her a lot more. Karen Hao of the MIT Technology Review writes:

The administrator for the Facebook page of the Honduran president, Juan Orlando Hernández, had created hundreds of pages with fake names and profile pictures to look just like users—and was using them to flood the president’s posts with likes, comments, and shares. (Facebook bars users from making multiple profiles but doesn’t apply the same restriction to pages, which are usually meant for businesses and public figures.)

The activity didn’t count as scripted, but the effect was the same. Not only could it mislead the casual observer into believing Hernández was more well-liked and popular than he was, but it was also boosting his posts higher up in people’s newsfeeds. For a politician whose 2017 reelection victory was widely believed to be fraudulent, the brazenness—and implications—were alarming.25

When Zhang brought her discovery back to the teams working on Pages Integrity and News Feed Integrity, both refused to act, either to stop fake Pages from being created, or to keep the fake engagement signals the fake Pages generate from making posts go viral.

But Zhang kept at it, and after a year, Meta finally removed the Honduran network. The very next day, Zhang reported a network of fake Pages in Albania. The Guardian’s Julia Carrie Wong explains what came next:

In August, she discovered and filed escalations for suspicious networks in Azerbaijan, Mexico, Argentina and Italy. Throughout the autumn and winter she added networks in the Philippines, Afghanistan, South Korea, Bolivia, Ecuador, Iraq, Tunisia, Turkey, Taiwan, Paraguay, El Salvador, India, the Dominican Republic, Indonesia, Ukraine, Poland and Mongolia.26

According to Zhang, Meta eventually established a policy against “inauthentic behavior,” but didn’t enforce it, and rejected Zhang’s proposal to punish repeat fake-Page creators by banning their personal accounts because of policy staff’s “discomfort with taking action against people connected to high-profile accounts.”27

Zhang discovered that even when she took initiative to track down covert influence campaigns, the teams who could take action to remove them didn’t—not without “persistent lobbying.” So Zhang tried harder. Here’s Karen Hao again:

She was called upon repeatedly to help handle emergencies and praised for her work, which she was told was valued and important.

But despite her repeated attempts to push for more resources, leadership cited different priorities. They also dismissed Zhang’s suggestions for a more sustainable solution, such as suspending or otherwise penalizing politicians who were repeat offenders. It left her to face a never-ending firehose: The manipulation networks she took down quickly came back, often only hours or days later. “It increasingly felt like I was trying to empty the ocean with a colander,” she says.28

Julia Carrie Wong’s Guardian piece reveals something interesting about Zhang’s reporting chain, which is that Meta’s Vice President of Integrity, Guy Rosen, was one of the people giving her the hardest pushback.

Remember Internet.org, also known as Free Basics, aka Meta’s push to dominate global internet use in all those countries it would go on to “deprioritize” and generally ignore?

Guy Rosen, Meta’s then-newish VP of Integrity, is the guy who previously ran Internet.org. He came to lead Integrity directly from being VP of Growth. Before getting acquihired by Meta, Rosen co-founded a company The Information describes as “a startup that analyzed what people did on their smartphones.”29

Meta bought that startup in 2013, nominally because it would help Internet.org. In a very on-the-nose development, Rosen’s company’s supposedly privacy-protecting VPN software allowed Meta to collect huge amounts of data—so much that Apple booted the app from its store.

So that’s Facebook’s VP of Integrity.

“We simply didn’t care enough to stop them”

In the Guardian, Julia Carrie Wong reports that in the fall of 2019, Zhang discovered that the Honduras network was back up, and she couldn’t get Meta’s Threat Intelligence team to deal with it. That December, she posted an internal memo about it. Rosen responded:

Facebook had “moved slower than we’d like because of prioritization” on the Honduras case, Rosen wrote. “It’s a bummer that it’s back and I’m excited to learn from this and better understand what we need to do systematically,” he added. But he also chastised her for making a public [public as in within Facebook —EK] complaint, saying: “My concern is that threads like this can undermine the people that get up in the morning and do their absolute best to try to figure out how to spend the finite time and energy we all have and put their heart and soul into it.”31

In a private follow-up conversation (still in December, 2019), Zhang alerted Rosen that she’d been told that the Facebook Threat Intelligence team would only prioritize fake networks affecting “the US/western Europe and foreign adversaries such as Russia/Iran/etc.”

Rosen told her that he agreed with those priorities. Zhang pushed back (my emphasis):

I get that the US/western Europe/etc is important, but for a company with effectively unlimited resources, I don’t understand why this cannot get on the roadmap for anyone … A strategic response manager told me that the world outside the US/Europe was basically like the wild west with me as the part-time dictator in my spare time. He considered that to be a positive development because to his knowledge it wasn’t covered by anyone before he learned of the work I was doing.

Rosen replied, “I wish resources were unlimited.”30

I’ll quote Wong’s next passage in full: “At the time, the company was about to report annual operating profits of $23.9bn on $70.7bn in revenue. It had $54.86bn in cash on hand.”

In early 2020, Zhang’s managers told her she was all done tracking down influence networks—it was time she got back to hunting and erasing “vanity likes” from bots instead.

But Zhang believed that if she stopped, no one else would hunt down big, potentially consequential covert influence networks. So she kept doing at least some of it, including advocating for action on an inauthentic Azerbaijan network that appeared to be connected to the country’s ruling party. In an internal group, she wrote, “Unfortunately, Facebook has become complicit by inaction in this authoritarian crackdown.”

“Although we conclusively tied this network to elements of the government in early February, and have compiled extensive evidence of its violating nature, the effective decision was made not to prioritize it, effectively turning a blind eye.”31

After those messages, Threat Intelligence decided to act on the network after all.

Then Meta fired Zhang for poor performance.

On her way out the door, Zhang posted a long exit memo—7,800 words—describing what she’d seen. Meta deleted it, so Zhang put up a password-protected version on her own website so her colleagues could see it. Meta then got Zhang’s entire website taken down and her domain deactivated. Eventually, under enough employee pressure, Meta put an edited version back up on its internal site.32

Shortly thereafter, someone leaked the memo to Buzzfeed News.

In the memo, Zhang wrote:

I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions. I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count.33

And: “[T]he truth was, we simply didn’t care enough to stop them.”

On her final day at Meta, Zhang left notes for her colleagues, tallying suspicious accounts involved in political influence campaigns that needed to be investigated:

There were 200 suspicious accounts still boosting a politician in Bolivia, she recorded; 100 in Ecuador, 500 in Brazil, 700 in Ukraine, 1,700 in Iraq, 4,000 in India and more than 10,000 in Mexico.34

“With all due respect”

Zhang’s work at Facebook happened after the wrangling over Russian influence ops that Alex Stamos’ team found. And after the genocide in Myanmar. And after Mark Zuckerberg did his press-and-government tour about how hard Meta tried and how much better they’d do after Myanmar.35

It was an entire calendar year after the New York Times found the Tatmadaw’s genocide-fueling fake-Page hatefarm that Guy Rosen, Facebook’s VP of Integrity, told Sophie Zhang that the only coordinated fake networks Facebook would take down were the ones that affected the US, Western Europe, and “foreign adversaries.”36

In response to Zhang’s disclosures, Rosen later hopped onto Twitter to deliver his personal assessment of the networks Zhang found and couldn’t get removed:

With all due respect, what she’s described is fake likes—which we routinely remove using automated detection. Like any team in the industry or government, we prioritize stopping the most urgent and harmful threats globally. Fake likes is not one of them.

One of Frances Haugen’s disclosures includes an internal memo that summarizes Meta’s actual, non-Twitter-snark awareness of the way Facebook has been hollowed out for routine use by covert influence campaigns:

We frequently observe highly-coordinated, intentional activity on the FOAS [Family of Apps and Services] by problematic actors, including states, foreign actors, and actors with a record of criminal, violent or hateful behaviour, aimed at promoting social violence, promoting hate, exacerbating ethnic and other societal cleavages, and/or delegitimizing social institutions through misinformation. This is particularly prevalent—and problematic—in At Risk Countries and Contexts.37

So, they knew.

Because of Haugen’s disclosures, we also know that in 2020, for the category, “Remove, reduce, inform/measure misinformation on FB Apps, Includes Community Review and Matching”—so, that’s moderation targeting misinformation specifically—only 13% of the total budget went to the non-US countries that provide more than 90% of Facebook’s user base, and which include all of those At Risk Countries. The other 87% of the budget was reserved for the 10% of Facebook users who live in the United States.38
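To put that split in rough per-user terms (a sketch that treats the user split as exactly 90/10, which the disclosure only gives approximately):

```python
# Rough per-user comparison of the 2020 misinformation-moderation budget split
# described above. Assumes a 90/10 user split; the disclosure says "more than 90%".

us_budget_share, non_us_budget_share = 0.87, 0.13
us_user_share, non_us_user_share = 0.10, 0.90

us_budget_per_user_unit = us_budget_share / us_user_share              # 8.7
non_us_budget_per_user_unit = non_us_budget_share / non_us_user_share  # ~0.14

# Roughly 60x more budget per US user than per user everywhere else.
print(round(us_budget_per_user_unit / non_us_budget_per_user_unit))
```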

In case any of this seems disconnected from the main thread of what happened in Myanmar, here’s what (formerly Myanmar-based) researcher Victoire Rio had to say about covert coordinated influence networks in her extremely good 2020 case study about the role of social media in Myanmar’s violence:

Bad actors spend months—if not years—building networks of online assets, including accounts, pages and groups, that allow them to manipulate the conversation. These inauthentic presences continue to present a major risk in places like Myanmar and are responsible for the overwhelming majority of problematic content.39

Note that Rio says that these inauthentic networks—the exact things Sophie Zhang chased down until she got fired for it—continued to present a major risk in 2020.

It’s time to skip ahead.

Let’s go to Myanmar in 2021, four years after the peak of the genocide. After everything I’ve dealt with in this whole painfully long series so far, it would be fair to assume that Meta would be prioritizing getting everything right in Myanmar. Especially after the coup.

Meta in Myanmar, again (2021)

In 2021, the Tatmadaw deposed Myanmar’s democratically elected government and transferred the leadership of the country to the military’s Commander-in-Chief. Since then, the military has turned the machines of surveillance, administrative repression, torture, and murder that it refined on the Rohingya and other ethnic minorities onto Myanmar’s Buddhist ethnic Bamar majority.

Also in 2021, Facebook’s director of policy for APAC Emerging Countries, Rafael Frankel, told the Associated Press that Facebook had now built “a dedicated team of over 100 Burmese speakers.”

This “dedicated team” is, presumably, the group of contract workers employed by the Accenture-run “Project Honey Badger” team in Malaysia.40 (Which, Jesus.)

In October of 2021, the Associated Press took a look at how that’s working out on Facebook in Myanmar. Right away, they found threatening and violent posts:

One 2 1/2 minute video posted on Oct. 24 of a supporter of the military calling for violence against opposition groups has garnered over 56,000 views.

“So starting from now, we are the god of death for all (of them),” the man says in Burmese while looking into the camera. “Come tomorrow and let’s see if you are real men or gays.”

One account posts the home address of a military defector and a photo of his wife. Another post from Oct. 29 includes a photo of soldiers leading bound and blindfolded men down a dirt path. The Burmese caption reads, “Don’t catch them alive.”41

That’s where content moderation stood in 2021. What about the algorithmic side of things? Is Facebook still boosting dangerous messages in Myanmar?

In the spring of 2021, Global Witness analysts made a clean Facebook account with no history and searched for တပ်မ​တော်—“Tatmadaw.” They opened the top page in the results, a military fan page, and found no posts that broke Facebook’s new, stricter rules. Then they hit the “like” button, which caused a pop-up with “related pages” to appear. Then the team popped open the first five recommended pages.

Here’s what they found:

Three of the five top page recommendations that Facebook’s algorithm suggested contained content posted after the coup that violated Facebook’s policies.  One of the other pages had content that violated Facebook’s community standards but that was posted before the coup and therefore isn’t included in this article.

Specifically, they found messages that included:

  • Incitement to violence
  • Content that glorifies the suffering or humiliation of others
  • Misinformation that can lead to physical harm42

As well as several kinds of posts that violated Facebook’s new and more specific policies on Myanmar.

So not only were the violent, violence-promoting posts still showing up in Myanmar four years after the atrocities in Rakhine State—and after the Tatmadaw turned the full machinery of its violence onto opposition members of Myanmar’s Buddhist ethnic majority—but Facebook was still funneling users directly into them after even the lightest engagement with anodyne pro-military content.

This is in 2021, with Meta throwing vastly more resources at the problem than it ever did during the period leading up to and including the genocide of the Rohingya people. Its algorithms are still actively making recommendations, precisely as outlined in the Meta memos in Haugen’s disclosures.

By any reasonable measure, I think this is a failure.

Meta didn’t respond to requests for comment from Global Witness, but when the Guardian and AP picked up the story, Meta got back to them with…this:

Our teams continue to closely monitor the situation in Myanmar in real-time and take action on any posts, Pages or Groups that break our rules. We proactively detect 99 percent of the hate speech removed from Facebook in Myanmar, and our ban of the Tatmadaw and repeated disruption of coordinated inauthentic behavior has made it harder for people to misuse our services to spread harm.43

One more time: This statement says nothing about how much hate speech is removed. It’s pure misdirection.


Internal Meta memos highlight ways to use Facebook’s algorithmic machinery to sharply reduce the spread of what they called “high-harm misinfo.” For those potentially harmful topics, you “hard demote” (aka “push down” or “don’t show”) reshared posts that were originally made by someone who isn’t friended or followed by the viewer. (Frances Haugen talks about this in interviews as “cutting the reshare chain.”)
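Mechanically, the rule is simple. Here’s a minimal sketch of the logic as I understand it from the disclosures; the field names and data shapes are my assumptions, not Meta’s:

```python
# Illustrative sketch only: the "reshare depth demotion" idea described above,
# not Meta's implementation. Field names and data shapes are assumptions.

def should_hard_demote(post, viewer_follows):
    """Demote a reshare of high-harm-topic content when the viewer has no direct
    connection to the original poster (i.e., cut the reshare chain)."""
    is_reshare = post["original_author"] != post["resharer"]
    knows_original_author = post["original_author"] in viewer_follows
    return post["high_harm_topic"] and is_reshare and not knows_original_author

# A friend reshares a high-harm post from a page the viewer never followed:
viewer_follows = {"aunt_mya", "local_news_page"}
post = {
    "original_author": "unknown_page",  # hypothetical page the viewer doesn't follow
    "resharer": "aunt_mya",             # reshared into the viewer's feed by a friend
    "high_harm_topic": True,            # classified as high-harm misinfo
}
print(should_hard_demote(post, viewer_follows))  # True: push it down instead of boosting it
```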

And this method works. In Myanmar, “reshare depth demotion” reduced “viral inflammatory prevalence” by 25% and cut “photo misinformation” almost in half.

In a reasonable world, I think Meta would have decided to broaden use of this method and work on refining it to make it even more effective. What they did, though, was decide to roll it back within Myanmar as soon as the upcoming elections were over.44

The same SEC disclosure I just cited also notes that Facebook’s “AI classifier” for Burmese hate speech didn’t seem to be maintained or in use—and that algorithmic recommendations were still shuttling people toward violent, hateful messages that violated Facebook’s Community Standards.

So that’s how the algorithms were going. How about the military’s covert influence campaign?

Reuters reported in late 2021 that:

As Myanmar’s military seeks to put down protest on the streets, a parallel battle is playing out on social media, with the junta using fake accounts to denounce opponents and press its message that it seized power to save the nation from election fraud…

The Reuters reporters explain that the military has assigned thousands of soldiers to wage “information combat” in what appears to be an expanded, distributed version of their earlier secret propaganda ops:

“Soldiers are asked to create several fake accounts and are given content segments and talking points that they have to post,” said Captain Nyi Thuta, who defected from the army to join rebel forces at the end of February. “They also monitor activity online and join (anti-coup) online groups to track them.”45

(We know this because Reuters journalists got hold of a high-placed defector from the Tatmadaw’s propaganda wing.)

When asked for comment, Facebook’s regional Director of Public Policy told Reuters that Meta “‘proactively’ detected almost 98 percent of the hate speech removed from its platform in Myanmar.”

“Wasting our lives under tarpaulin”

The Rohingya people forced to flee Myanmar have scattered across the region, but the overwhelming majority of those who fled in 2017 ended up in the Cox’s Bazar region of Bangladesh.

The camps are beyond overcrowded, and they make everyone who lives in them vulnerable to the region’s seasonal flooding, to worsening climate impacts, and to waves of disease. This year, the refugees’ food aid was just cut from the equivalent of $12 a month to $8 a month, because the international community is focused elsewhere.46

The complex geopolitical situation surrounding post-coup Myanmar—in which many western and Asian countries condemn the situation in Myanmar, but don’t act lest they push the Myanmar junta further toward China—seems likely to ensure a long, bloody conflict, with no relief in sight for the Rohingya.47

The UN estimates that more than 960,000 Rohingya refugees now live in refugee camps in Bangladesh. More than half are children, few of whom have had much education at all since coming to the camps six years ago. The UN estimated that the refugees needed about $70.5 million for education in 2022, of which 1.6% was actually funded.48

Amnesty International spoke with Mohamed Junaid, a 23-year-old Rohingya volunteer math and chemistry teacher, who is also a refugee. He told Amnesty:

Though there were many restrictions in Myanmar, we could still do school until matriculation at least. But in the camps our children cannot do anything. We are wasting our lives under tarpaulin.49

In their report, “The Social Atrocity,” Amnesty wrote that in 2020, seven Rohingya youth organizations based in the refugee camps made a formal application to Meta’s Director of Human Rights. They requested that, given its role in the crises that led to their expulsion from Myanmar, Meta provide just one million dollars in funding to support a teacher-training initiative within the camps—a way to give the refugee children a chance at an education that might someday serve them in the outside world.

Meta got back to the Rohingya youth organizations in 2021, a year in which the company cleared $39.3B in profits:

Unfortunately, after discussing with our teams, this specific proposal is not something that we’re able to support. As I think we noted in our call, Facebook doesn’t directly engage in philanthropic activities.


In 2022, Global Witness came back for one more look at Meta’s operations in Myanmar, this time with eight examples of real hate speech aimed at the Rohingya—actual posts from the period of the genocide, all taken from the UN Human Rights Council findings I’ve been linking to so frequently in this series. They submitted these real-life examples of hate speech to Meta as Burmese-language Facebook advertisements.

Meta accepted all eight ads.50

The final post in this series, Part IV, will be up in about a week. Thank you for reading.


  1. Facebook Misled Investors and the Public About Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎

  2. Facebook Misled Investors and the Public About Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎

  3. Facebook Misled Investors and the Public About Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called Demoting on Integrity Signals.”↩︎

  4. Facebook Misled Investors and the Public About Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called A first look at the minimum integrity holdout.”↩︎

  5. Facebook Misled Investors and the Public About Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called “Afghanistan Hate Speech analysis.”↩︎

  6. Transcript of Mark Zuckerberg’s Senate hearing,” The Washington Post (which got the transcript via Bloomberg Government), April 10, 2018.↩︎

  7. Transcript of Zuckerberg’s Appearance Before House Committee,” The Washington Post (which got the transcript via Bloomberg Government), April 11, 2018.↩︎

  8. Transcript of Zuckerberg’s Appearance Before House Committee,” The Washington Post (which got the transcript via Bloomberg Government), April 11, 2018.↩︎

  9. Facebook Misled Investors and the Public About Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎

  10. Facebook Misled Investors and the Public About Its Role Perpetuating Misinformation and Violent Extremism Relating to the 2020 Election and January 6th Insurrection,” Whistleblower Aid, undated; Facebook Wrestles With the Features It Used to Define Social Networking,” The New York Times, Oct. 25, 2021. This memo hasn’t been made public even in a redacted form, which is frustrating, but the SEC disclosure and NYT article cited here both contain overlapping but not redundant excerpts from which I was able to reconstruct this slightly longer quote.↩︎

  11. Facebook and responsibility,” internal Facebook memo, authorship redacted, March 9, 2020, archived at Document Cloud as a series of images.↩︎

  12. Facebook misled investors and the public about its role perpetuating misinformation and violent extremism relating to the 2020 election and January 6th insurrection,” Whistleblower Aid, undated. (Date of the quoted internal memo comes from The Atlantic.)↩︎

  13. Facebook Misled Investors and the Public About Its Role Perpetuating Misinformation and Violent Extremism Relating to the 2020 Election and January 6th Insurrection,” Whistleblower Aid, undated.↩︎

  14. Facebook and the Rohingya Crisis,” Myanmar Internet Project, September 29, 2022. This document is offline right now at the Myanmar Internet Project site, so I’ve used Document Cloud to archive a copy of a PDF version a project affiliate provided.↩︎

  15. Full interview with Alex Stamos filmed for The Facebook Dilemma, Frontline, October, 2018.↩︎

  16. A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018.↩︎

  17. A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018; Removing Myanmar Military Officials From Facebook,” Meta takedown notice, August 28, 2018.↩︎

  18. A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018.↩︎

  19. The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎

  20. Genocide: A Sociological Perspective,” Helen Fein, Current Sociology, Vol.38, No.1 (Spring 1990), p. 1-126; republished in Genocide: An Anthropological Reader, ed. Alexander Laban Hinton, Blackwell Publishers, 2002, and this quotation appears on p. 84 of that edition.↩︎

  21. Genocide: A Comprehensive Introduction, Adam Jones, Routledge, 2006, p. 267.↩︎

  22. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021.↩︎

  23. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021.↩︎

  24. Read Mark Zuckerberg’s full remarks on Russian ads that impacted the 2016 elections,” CNBC News, September 21, 2017.↩︎

  25. She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎

  26. How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  27. How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  28. She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎

  29. The Guy at the Center of Facebook’s Misinformation Mess,” Sylvia Varnham O’Regan, The Information, June 18, 2021.↩︎

  30. How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  31. How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  32. She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎

  33. I Have Blood on My Hands”: A Whistleblower Says Facebook Ignored Global Political Manipulation, Craig Silverman, Ryan Mac, Pranav Dixit, BuzzFeed News, September 14, 2020.↩︎

  34. How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  35. The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎

  36. How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  37. Facebook Misled Investors and the Public About Bringing the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated. This is a single-source statement, but it’s a budget figure, not an opinion, so I’ve used it.↩︎

  38. Facebook Misled Investors and the Public About Bringing the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated. This is a single-source statement, but it’s a budget figure, not an opinion, so I’ve used it.↩︎

  39. The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎

  40. Zuckerberg Was Called Out Over Myanmar Violence. Here’s His Apology. Kevin Roose and Paul Mozur, The New York Times, April 9, 2018.↩︎

  41. Hate Speech in Myanmar Continues to Thrive on Facebook, Sam McNeil, Victoria Milko, The Associated Press, November 17, 2021.↩︎

  42. Algorithm of Harm: Facebook Amplified Myanmar Military Propaganda Following Coup,” Global Witness, June 23, 2021.↩︎

  43. Algorithm of Harm: Facebook Amplified Myanmar Military Propaganda Following Coup,” Global Witness, June 23, 2021.↩︎

  44. Facebook Misled Investors and the Public About Bringing the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated.↩︎

  45. Information Combat’: Inside the Fight for Myanmar’s soul,” Fanny Potkin, Wa Lone, Reuters, November 1, 2021.↩︎

  46. Rohingya Refugees Face Hunger and Loss of Hope After Latest Ration Cuts,” Christine Pirovolakis, UNHCR, the UN Refugee Agency, July 19, 2023.↩︎

  47. Is Myanmar the Frontline of a New Cold War?,” Ye Myo Hein and Lucas Myers, Foreign Affairs, June 19, 2023.↩︎

  48. The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022; the education funding estimates come from Bangladesh: Rohingya Refugee Crisis Joint Response Plan 2022,” OCHA Financial Tracking Service, 2022, cited by Amnesty.↩︎

  49. Facebook Approves Adverts Containing Hate Speech Inciting Violence and Genocide Against the Rohingya,” Global Witness, March 20, 2022.↩︎

  50. Facebook Misled Investors and the Public About Bringing the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated.↩︎

6 October 2023

Meta in Myanmar, Part II: The Crisis

This is the second post in a series on what Meta did in Myanmar and what the broader technology community can learn from it. It will make a lot more sense if you read the first post—these first two are especially tightly linked and best understood as a single story. There’s also a meta-post with things like terminology notes, sourcing information, and a corrections changelog.

But in case you haven’t read Part I, or in case you don’t remember all billion words of it…

Let’s recap

In the years leading up to the worst violence against the Rohingya people, a surge of explicit calls for the violent annihilation of the Rohingya ethnic minority flares up across Myanmar—in speeches by military officers and political party members, in Buddhist temples, in YouTube videos, through anonymous Bluetooth-transmitted messages in cafes, and, of course, on Facebook.

What makes Facebook special, though, is that it’s everywhere. It’s on every phone, which is in just about every home. Under ultra-rigid military control, the Burmese have long relied on unofficial information—rumors—to get by. And now the country’s come online extremely quickly, even in farming villages that aren’t yet wired for electricity.

And into all the phones held in all the hands of all these people who are absolutely delighted to connect and learn and better understand the world around them, Facebook is distributing and accelerating professional-grade hatred and disinformation whipped up in part by the extremist wing of Myanmar’s widely beloved Buddhist religious establishment.

It’s a very bad setup.

The dangers rising in Myanmar in the mid-2010s aren’t only clear in hindsight: For years, Burmese and western civil society experts, digital rights advocates, tech folks—even Myanmar’s own government—have been warning Meta that Facebook is fueling a slide toward genocide. In 2012 and 2014, waves of—sometimes state-supported—communal violence occur; the Burmese government even directly connects unchecked incitement on Facebook to one of the riots and blocks the site to stop the violence.

Meta has responded by getting local Burmese groups to help it translate its rules and reporting flow, but there’s no one to deal with the reports. For years, Meta employs a total of one Burmese-speaking moderator for this country of 50M+ people—a number it increases to four by the end of 2015.

This brings us to 2016, when Meta doubles down on connection.

The next billion

In 2013, Mark Zuckerberg announces the launch of Facebook’s new global internet-expansion initiative, Internet.org. Facebook will lead the program with six other for-profit technology companies: two semiconductor companies, two handset makers, a telecom, and Opera. There’s a launch video, too, with lots of very global humans doing celebratory human things set to pensive piano notes with a JFK speech about world peace playing over it.1

Alongside the big announcement, Zuckerberg posts a memo about his plans, titled “Is Connectivity a Human Right?” Facebook’s whole deal, he writes, is to make the world more open and connected:

But as we started thinking about connecting the next 5 billion people, we realized something important: the vast majority of people in the world don’t have any access to the internet.

The problem, according to Zuckerberg, is that data plans are too costly—a consequence of missing infrastructure. His memo then makes a brief detour through economics, explaining that internet access == no more zero-sum resources == global prosperity and happiness:

Before the internet and the knowledge economy, our economy was primarily industrial and resource-based. Many dynamics of resource-based economies are zero sum. For example, if you own an oil field, then I can’t also own that same oil field. This incentivizes those with resources to hoard rather than share them. But a knowledge economy is different and encourages worldwide prosperity. It’s not zero sum. If you know something, that doesn’t stop me from knowing it too. In fact, the more things we all know, the better ideas, products and services we can all offer and the better all of our lives will be.

And in Zuckerberg’s account, Facebook is really doing the work, putting in the resources required to open all of these benefits to everyone:

Since the internet is so fundamental, we believe everyone should have access and we’re investing a significant amount of our energy and resources into making this happen. Facebook has already invested more than $1 billion to connect people in the developing world over the past few years, and we plan to do more.2

As various boondoggles have recently demonstrated, social media executives are not necessarily brilliant people, but neither is Mark Zuckerberg a hayseed. What his new “Next Billion” initiative to “connect the world” will do is build and reinforce monopolistic structures that give underdeveloped countries not “real internet access” but…mostly just Facebook, stripped down and zero-rated so that using it doesn’t rack up data charges.

The Internet.org initiative debuts to enthusiastic coverage in the US tech press, and many mainstream outlets.3 The New York Times contributes a more skeptical perspective:

[Social media] companies have little choice but to look overseas for growth. More than half of Americans already use Facebook at least once a month, for instance, and usage in the rest of the developed world is similarly heavy. There is nearly one active cellphone for every person on earth, making expansion a challenge for carriers and phone makers.

Poorer countries in Asia, Africa and Latin America present the biggest opportunity to reach new customers—if companies can figure out how to get people there online at low cost.4

In June of 2013, Facebook had 1.1 billion monthly active users, only 198 million of which were in the US. As I write this post in 2023, the number of monthly active users is up to 3 billion, only 270 million of which are in the US. So usage numbers in the US have only risen 36% in ten years, while monthly active users everywhere else went up 188%.5 By 2022, 55% of all social media use was in Asia.6

Whenever you read about Meta’s work “connecting the world,” I think it’s good to keep those figures in mind.

But just because the growth was happening globally didn’t mean that Meta was attending to what its subsidized access was doing outside the US and Western Europe.

In An Ugly Truth, their 2021 book about Meta’s inner workings, New York Times reporters Sheera Frenkel and Cecilia Kang write that no one at Meta was responsible for assessing cultural and political dynamics as new communities came online, or even tracking whether they had linguistically and culturally competent moderators to support each new country.

A Meta employee who worked on the Next One Billion initiative couldn’t remember anyone directly questioning Mark or Sheryl about whether there were safeguards in place or raising “a concern or warning for how Facebook would integrate into non-American cultures.”7

In 2015, Internet.org rebrands as Free Basics after the initiative attracts broad criticism for working against net neutrality—it’s a PR move that foreshadows the big rebrand from Facebook to Meta shortly after Frances Haugen delivers her trove of internal documents to the SEC in 2021.8

In 2016, it’s time to roll out Free Basics in Myanmar, alongside a stripped-down version of Facebook called Facebook Flex that lets people view text for free and then pay for image and video data.9 Facebook is already super-popular in Myanmar for reasons covered in the previous post, but when Myanmar’s largest telecom, MPT, launches Free Basics and Facebook Flex, Facebook’s Myanmar monthly active user count more than doubles from a little over 7 million users in 2015 to at least 15 million in 2017. (Several US media sources say 30 million, though I don’t think I believe them.)10

But I want to be clear—for a ton of people across Myanmar, getting even a barebones internet was life-changingly great.

“Before, I just had to watch the clouds”

In early 2017, journalist Doug Bock Clark interviewed people in Myanmar—including MIDO cofounder Nay Phone Latt—about the internet for Wired.

Clark quotes a farmer who cultivates the tea plantation his family has worked for generations in Shan State:

I have always lived in the same town with about 900 people, which is in a very beautiful forest but also very isolated. When I was a child, we lived in wooden houses and used candles at night, and the mountain footpaths were too small even for oxcarts. For a long time, life didn’t change.

In 2014, the tea farmer’s town got a cell tower, and in 2016 a local NGO demonstrated an app that offered weather forecasts, market prices, and more. That really changed things:

Being able to know the weather in advance is amazing—before, I just had to watch the clouds! And the market information is very important. Before, we would sell our products to the brokers for very low prices, because we had no idea they sold them for higher prices in the city. But in the app I can see what the prices are in the big towns, so I don’t get cheated…

This brings me back to Craig Mod’s essay about his ethnographic work in rural Myanmar that I quoted from a lot in Part I of this series. Here, Mod is talking about internet use with a group of farmers: “The lead farmer mentions Facebook and the others fall in. Facebook! Yes yes! They use Facebook every day. They feel that spending data on Facebook is a worthwhile investment.”

One of the farmers wants to show Mod a post, and Mod and his colleagues speculate while the post loads:

Earlier, he said to us, lelthamar asit—Like any real farmer, I know the land. And so we wonder: What will he show us? A new farming technique? News about the upcoming election? Analysis on its impact on farmers? He shows us: A cow with five legs. He laughs. Amazing, no? Have you ever seen such a thing?11

It’s a charming story. But it’s hard not to feel a little ill, reading back from the perspective of 2023.

In the middle of a video podcast interview, Frances Haugen relates a story in the context of Meta trying to make tooling for reporting misinformation:

And one of our researchers said, you know, that sounds really obvious. Like that sounds like it would be a thing that would work. Except for when we went in and did interviews in India, people are coming online so fast that when we talk to people with master’s degrees… They say things like, why would someone put something fake on the Internet? That sounds like a lot of work.12

This anecdote is meant to point to the relative naiveté of Indian Facebook users, but honestly I recognize the near-universal humanity of the idea—that all of that manufacturing would just be too much work for regular people to do! It’s the argument against conspiracies in general. For those of us whose brains haven’t been ruined by the internet, it’s reasonable to think that regular people just wouldn’t go to all that trouble.

As it happens, in Myanmar and lots of other places, it’s not only regular people doing the work of disinformation and incitement, and we’ll get to that later. But regular people across Myanmar are reading all these anti-Rohingya messages and looking at the images and watching the videos, and…a lot of them are buying it.

“Everyone knows they’re terrorists”

This brings me back to Faine Greenwood’s essay that I also quoted from a lot in the previous post, and specifically to Greenwood’s honest-to-god “Thomas Friedman moment” in a Burmese cab back in 2013:

The driver was a charming young Burmese man who spoke good English, and we chatted about the usual things for a bit: the weather (sticky), how I liked Yangon (quite a bit, hungry dogs aside), and my opinion on Burmese food (I’m a fan).

Then he asked me what I was in town for, and I told him that I’d come to write about the Internet. “Oh, yes, I’ve got a Facebook account now,” he said, with great enthusiasm. “It is very interesting. Learning a lot. I didn’t know about all the bad things the Bengalis had been doing.”

“Bad things?” I asked, though I knew what he was going to say next.

“Killing Buddhists, stealing their land. There’s pictures on Facebook. Everyone knows they’re terrorists,” he replied.

“Oh, fuck,” I thought.13

Greenwood’s story closely parallels one Matt Schissler tells reporters Sheera Frenkel and Cecilia Kang for An Ugly Truth. (Schissler is one of the people delivering dire warnings to Meta in Part I of this series.)

In Schissler’s story, it’s also 2013, and he’s starting to see some really hair-raising stuff. His Buddhist friends start relating their conspiracy theories about the Rohingya and showing him “grainy cell phone photos of bodies they said were of Buddhist monks killed by Muslims.” They’re telling him ISIS fighters are on their way to Myanmar.

This narrative is even coming from a journalist friend, who calls to warn Schissler of a Muslim plot to attack the country. The journalist shows him a video as proof:

Schissler could tell that the video was obviously edited, dubbed over in Burmese with threatening language. He was a person who should have known better, and he was just “falling for, believing, all this stuff.”14

It’s miserably hot in Myanmar when Craig Mod is there in 2016—steam-broiling even in the shade, and the heat shows up a lot in Mod’s notes. His piece ends with a grace note about a weather forecast:

Farmer Number Fifteen loves the famous Myanmar weatherman U Tun Lwin, now follows him on Facebook. I hunt U Tun Lwin down, follow him too, in solidarity, although I’m pretty sure I know what tomorrow’s weather will be.15

When I reread Mod’s essay about halfway through my research for this series, my eye caught on that name: U Tun Lwin. I’d just seen it somewhere.

It was in the findings report of the United Nations Human Rights Council’s Independent International Fact-Finding Mission on Myanmar (just “the UN Mission” in the rest of this post).

It was there because in the fall of 2016, about a year after Craig was in Myanmar and as a wave of extreme state violence against the Rohingya is kicking off, there’s this Facebook post. The UN Mission reports that Dr. Tun Lwin, a well-known meteorologist with over 1.5 million followers on Facebook, “called on the Myanmar people to be united to secure ‘the west gate.’” (The “west gate” is the border with Bangladesh, and this is a reference to the idea that the Rohingya are actually all illegal “Bengali” immigrants.)

Myanmar, Tun Lwin continued in his post, does not “tolerate invaders,” and its people must be alert now that there is “a common enemy.” As of August 2018, when the UN Mission published their report, Tun Lwin’s post was still up on Facebook. It had 47,000 reactions, over 830 comments, and nearly 10,000 shares. In the comments, people called the existence of the Rohingya in Rakhine State a “Muslim invasion” and demanded that the Rohingya be uprooted and eradicated.16

The longest civil war

I need to say a little bit about the Tatmadaw, for reasons that will almost immediately become clear.

Tatmadaw (literally “grand army”) is the umbrella term for Myanmar’s armed forces—it includes the army, navy, and air force, but a Tatmadaw officer also oversees the national police force. There’s a ton of history I have to elide, but the two crucial things to know are that Tatmadaw generals have been running Myanmar (or heavily influencing its government) since the country gained independence, and the military’s been at war with multiple ethnic armed groups throughout Myanmar since just after the end of WWII.17

These conflicts—by some accountings, the longest-running civil war in the world—have been marked by the Tatmadaw’s intense violence against civilians. The UN Mission findings report that I cite throughout this series includes detailed accounts of Tatmadaw atrocities targeting civilian members of ethnic minorities in Kachin and Shan States. Human Rights Watch and many other organizations have detailed Tatmadaw brutalities focusing on ethnic minorities in Karen State and elsewhere in Myanmar.18

Information about these conflicts and atrocities was readily available in English throughout Meta’s expansion into the region. I include this brief and inadequate history to explain that it was not difficult, in this period, to learn what the Tatmadaw really was, and what they were capable of doing to civilians.

Which brings us, finally, to what happened to the Rohingya in 2016 and 2017.

Clearance operations

Content warning for these next two sections: I’m going to be brief and avoid graphic descriptions, but these are atrocities, including the torture, rape, and murder of adults and children.

2016 was supposed to be the first year in Myanmar’s new story. In the landmark 2015 general elections in Myanmar, Aung San Suu Kyi’s party wins a supermajority, and takes office in the spring of 2016. This is a huge deal—obviously most of all within Myanmar, but also internationally, because it looks like Myanmar’s moving closer to operating as a true democracy. But the Rohingya are excluded from the vote, and from a national peace conference held that summer to try to establish a ceasefire between the Tatmadaw and armed ethnic minority groups.19

The approximately 140,000 Rohingya people displaced in the 2012 violence are at this point largely still living in IDP (internally displaced person) camps and deprived of the necessities of life, and the Myanmar government has continued tightening—or eliminating—the already nearly impossible paths to citizenship and a more normal life for the Rohingya as a whole.20

The violence has continued, as well. According to a 2016 US State Department “Atrocities Prevention Report,” the Rohingya also continued to experience extremist mob attacks, alongside governmental abuses including “torture, unlawful arrest and detention, restricted movement, restrictions on religious practice, and discrimination in employment and access to social services.”21

This is all background for what happens next.

I give this accounting not to be shocking or emotionally manipulative, but because I don’t think we can assess and rationally discuss Meta’s responsibilities—in Myanmar and elsewhere—unless we allow ourselves to understand what happened to the human beings who took the damage.

In October of 2016, a Rohingya insurgent group, the Arakan Rohingya Salvation Army (ARSA), attacks Burmese posts on the Myanmar-Bangladesh border, killing nine border officers and four Burmese soldiers. The Tatmadaw respond with what they called “clearance operations,” nominally aimed at the insurgents but in fact broadly targeting all Rohingya people.22

A 2016 report from Amnesty International—and, later, the UN Human Rights Council’s Independent Fact-Finding Mission on Myanmar—document the Tatmadaw’s actions, including the indiscriminate rape and murder of Rohingya civilians, the arbitrary arrests of hundreds of Rohingya men including the elderly, forced starvation, and the destruction of Rohingya villages.23 Tens of thousands of Rohingya flee over the border to Bangladesh.24

Through the winter of 2016 and into 2017, bursts of violence continue—Tatmadaw officers beating Rohingya civilians, Buddhist mobs in Rakhine State attacking Rohingya people, Rohingya militants killing people they saw as betrayers. Uneasy times.

Then, on the morning of August 25th, 2017, ARSA fighters mount crude, largely unsuccessful attacks on about 30 Burmese security posts.25 Simultaneously, according to an Amnesty investigation, ARSA fighters murder at least 99 Hindu civilians, including women and children, in two villages in Northern Rakhine State.26 (Despite the mass-scale horrors that would follow, this act was, by any measure, an atrocity.)

And after that, everything really goes to hell.

In response to the ARSA attacks, the Tatmadaw begins its second wave of clearance operations and, in Amnesty International’s words, starts “systematically attacking the entire Rohingya population in villages across northern Rakhine State.”27

Accelerating genocide

I’ve worked with atrocity documentation before. I still don’t know a right way to approach what comes next. I do know that the people who document incidents of communal and state violence for organizations like Médecins Sans Frontières and the UN Human Rights Council use precise, economical language. Spend enough time with their meticulous tables and figures and the precision itself begins to feel like rage.

Based on their extensive and intimate survey work with refugees who escaped to Bangladesh, Médecins Sans Frontières estimates that in a single month between August 25th and September 24th of 2017, about 11,000 Rohingya die in Myanmar, including 1,700 children. Of these, about 8,000 people are violently killed, including about 1,200 children under the age of five.28

The UN Mission’s report notes that in attacks on Rohingya villages, women and children, including infants, are “specifically targeted.”29 According to MSF, most of the murdered children under five are shot or burned, but they note that about 7% are beaten to death.30

In what Amnesty International calls “a relentless and systematic campaign,” the Tatmadaw publicly rape hundreds—almost certainly thousands—of Rohingya women and girls, many of whom they also mutilate. They indiscriminately arrest and torture Rohingya men and boys as “terrorists.” They push whole communities into starvation by burning their markets and blocking access to their farms. They burn hundreds of Rohingya villages to the ground.31

Over the ensuing weeks, more than 700,000 people (“more than 702,000 people,” Amnesty writes, “including children”) flee to squalid, overcrowded, climate-vulnerable refugee camps in Bangladesh.32 That’s more than 80% of the Rohingya previously living in Rakhine State.

The UN Mission’s findings report comes out about a year later.

I’ve cited it a lot already in this post and the previous one. The document runs to 444 pages, opens with a detailed background for the 2017 crisis and then becomes a catalog of thousands of collective and individual incidents of the Tatmadaw’s systematic torture, rape, and murder of members of the Rohingya—and, to a lesser but still horrific extent, of other ethnic minorities across Myanmar. The scale and level of detail are beyond anything else I’ve encountered; accounts of mutilations, violations, and the murder of children in front of their parents go on page after page after page. My honest advice is that you don’t read it.33

Classifying incidents of violence as genocide is a lengthy, fraught, and uneven process. The UN Human Rights Council’s High Commissioner calls the events in Myanmar a “textbook example of ethnic cleansing.”34 The International Court of Justice is currently hearing a case against Myanmar brought under the international Genocide Convention.35 The US State Department officially classifies the events in Myanmar as a genocide, as do many genocide scholars and institutions. In this series, I follow the usage of the United States Holocaust Memorial Museum in Washington, DC, in whose work I have complete confidence.36

But…Facebook?

If you’ve read this far, then first, thank you. Maybe get a drink of water or something.

Second, I think you may be—probably should be—wondering how many of the things I’ve just related can be connected to something as relatively inconsequential as Facebook posts.

I want to do a tiny summary and then preview some arguments that I won’t really be able to dig into until the end of this post and especially in the next one, when I finally get into the documents and investigations that show what was happening under the hood of Meta’s content recommendation engines.

The escalation from relatively isolated incidents of anti-Rohingya violence pre-2012 into the two big waves of attacks that year, the semi-communal semi-state violence in 2016, and the full-on Tatmadaw-led genocide in 2017 was accompanied by an overwhelming rise in Facebook-mediated disinformation and violence-inciting messages.

And as I’ve tried to show and will keep illustrating with examples, these messages built intense anti-Rohingya beliefs and fears throughout Myanmar’s mainstream Buddhist culture. Those beliefs and fears quite clearly led to direct incidents of communal (non-state) violence.

Determining whether those beliefs also constituted even a partial, manufactured mainstream consent to the Tatmadaw’s actions in 2016 and 2017 is both out of my lane and honestly maybe unknowable, given the impossibility of untangling what was known by whom, and when. What I think I can say is that they ran in exact parallel to the Tatmadaw’s genocidal operations.

The overwhelming volume and velocity of this hate campaign would not have been possible without Meta, which did four main things to enable it:

  1. Meta bought and maneuvered its way into the center of Myanmar’s online life and then inhabited that position with a recklessness that was impervious to warnings by western technologists, journalists, and people at every level of Burmese society. (This is most of Part I.)
  2. After the 2012 violence, Meta mounted a content moderation response so inadequate that it would be laughable if it hadn’t been deadly. (Discussed in Part I and also below.)
  3. With its recommendation algorithms and financial incentive programs, Meta devastated Myanmar’s new and fragile online information sphere and turned thousands of carefully laid sparks into flamethrowers. (Discussed below and in Part III.)
  4. Despite its awareness of similar covert influence campaigns based on “inauthentic behavior”—aka fake likes, comments, and Pages—Meta allowed an enormous and highly influential covert influence operation to thrive on Burmese-language Facebook throughout the run-up to the peak of the 2016 and 2017 “ethnic cleansing,” and beyond. (Part III.)

The lines of this argument have all been drawn by better informed people than me. Amnesty International’s 2022 report, “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” directly implicates Meta in the genocidal propaganda campaigns and furor that led up to the Tatmadaw’s atrocities in Rakhine State. The viral acceleration of dehumanizing and violent posts in 2017, Amnesty writes, made those messages appear ubiquitous on Burmese-language Facebook, “creating a sense that everyone in Myanmar shared these views, helping to build a shared sense of urgency in finding a ‘solution’ to the ‘Bengali problem’ and ultimately building support for the military’s 2017 ‘clearance operations’.”37

And as I noted in the intro to the first post in this series, the UN Mission’s own lead investigator stated that Facebook played a “determining role” in the violence.38

But again, I think it’s reasonable and important to ask whether that can really be possible, and to look carefully at the evidence.

On one hand it seems obvious that Meta was indeed negligent about expanding content moderation, and deeply misguided in continuing to expand into Myanmar without stemming the tide of genocidal messages that experts had been warning them about since at least 2012. Meta’s behavior, after all those years of warnings, is hard to describe as anything but callous.

But does any of that make them responsible for what the Tatmadaw did?

Let’s start with the content moderation problem. Which means that we have to look at some of the actual content Meta allowed to circulate on Burmese-language Facebook during the waves of violence in 2016 and 2017.

Rumors and lies

Content warning: Hate speech, ethnic slurs.

On September 12, 2017, during the peak of the Tatmadaw’s genocidal attacks on the Rohingya, the Institute for War and Peace Reporting released an update on their two-year project in Myanmar with a dozen-odd local journalists and monitors who tracked and reported on hate speech and incitement to violence.

The post is called “How Social Media Spurred Myanmar’s Latest Violence,” and it’s written by IWPR’s regional director, Alan Davis. It’s both cringey—Davis starts with a dig at how backward and superstitious the Buddhist establishment is—and obviously rooted in real moral anguish at having failed to prevent the disaster. Much of the meat of the post is focused on Facebook, and Davis’s observations are sharp (emphasis mine):

The vast majority of hate speech was on social media, particularly Facebook.… while not all hate speech was anti-Muslim or anti-Rohingya, the overwhelming majority certainly was. Much was juvenile and just plain nasty, while a good deal was insidious and seemed to be increasingly organised. A lot of it was also smart and it was clear a great deal of time and energy had gone into some of the postings. 

Over time, we saw the hate speech becoming more targeted and militaristic. Wild allegations spread, including claims of Islamic State (IS) flags flying over mosques in Yangon where munitions were being stored, of thwarted plots to blow up the 2,500 year-old Shwedagon Pagoda in Yangon and supposed cases of Islamic agents smuggling themselves across the border.  

…we felt a clear sense that in the absence of any kind of political leadership that a darkening and deepening vacuum that would ultimately result in a violent reckoning.… Most importantly, we warned that rumours and lies peddled and left unchecked might end up creating their own reality. 39

On October 30, 2017, just after the full-scale ethnic cleansing began, Sitagu Sayadaw, a Buddhist monk and one of the most respected religious leaders in Myanmar, gave a sermon to an audience of soldiers—and to the rest of the country, via a Facebook livestream. His sermon featured a passage from the Mahavamsa in which monks comfort a Buddhist king consumed by guilt after leading a war in which millions died:

Don’t worry, your Highness. Not a single one of those you killed was Buddhist. They didn’t follow the Buddhist teachings and therefore they did not know what was good or bad. Not knowing good or bad is the nature of animals. Out of over five hundred thousand you killed, only one and a half were worth to be humans. Therefore it is a small sin and does not deserve your worry.40

The UN Mission’s report includes many other examples of religious, governmental, and military figures comparing Rohingya people to fleas, weeds, and animals—and in some cases, making explicit reference to the necessity of emulating both the Holocaust and the United States’ bombing of Hiroshima and Nagasaki.41

The report also includes specific examples of the kinds of dehumanizing and inciting posts and comments going around Facebook in 2017. I’m only going to include a few, but I think it’s important to be clear about what Meta let circulate, months into a full-on ethnic cleansing operation:

  • In early 2017, a “Burmese patriot” posts a graphic video of police beating citizens in another country, with the comment: “Watch this video. The kicks and the beatings are very brutal. I watch the video and feel that it is not enough. In the future […] Bengali disgusting race of Kalar terrorists who sneaked into our country by boat, need to be beaten like that. We need to beat them until we are satisfied.” (The post was still up on Facebook in July 2018.)
  • A widely shared August 2017 post: “…the international community all condemned the actions of the Khoe Win Bengali [“Bengali that sneaked in”] terrorists. So, in this just war, to avenge the deaths of the ethnic people who got beheaded, and the policemen who got hacked into pieces, we are asking the Tatmadaw to turn these terrorists into powder and not leave any piece of them behind.”
  • Another post: “Accusations of genocide are unfounded, because those that the Myanmar army is killing are not people, but animals. We won’t go to hell for killing these creatures that are not worth to be humans.”
  • Another post: “If the (Myanmar) army is killing them, we Myanmar people can accept that… current killing of the Kalar is not enough, we need to kill more!”42

Let’s look at some quantifiable data on the volume of extremist posts during the period—we don’t have much, because only Meta really knows, but we do have a couple of windows into the way things escalated.

By 2016, data analyst Raymond Serrato, who eventually goes to work for the Office of the United Nations High Commissioner for Human Rights, has been studying social media in Myanmar for a couple of years. So when the Tatmadaw’s clearance operations swing into action in 2016, he’s already watching what’s happening in a big (55k member) Facebook group run by Ma Ba Tha supporters—a hangout for “Buddhist patriots,” as Serrato describes it.43

What Serrato sees in this group is posting volume rising through the late summer of 2017, before the Arakan Rohingya Salvation Army attacks, and then spiking hard immediately after the attacks, as the Tatmadaw begins its concentrated genocidal operation against the Rohingya.

Ray Serrato’s graph of Facebook posts in the extremist Group, showing enormous spikes followed by long-term elevation of posting rates beginning in August 2016.

Visualization by Raymond Serrato.

Serrato’s research is limited in scope—he’s only using the Groups API—but his snapshot of how hardline nationalist post volume went through the roof in 2017 clearly runs alongside the qualitative reports from Burmese and western observers—and victims.
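
If it helps to picture what goes into a chart like that, here’s a minimal sketch, in Python, of the basic tallying step: counting posts per week from a list of post timestamps. Everything specific in it—the CSV file, the created_time column—is hypothetical; Serrato’s actual work ran through Facebook’s Groups API (since locked down), and his methods were his own.

```python
# Illustrative only: weekly post counts from a hypothetical CSV export of
# group-post timestamps ("group_posts.csv" with a "created_time" column).
# This is not Serrato's pipeline; it just shows the shape of the tally.
import csv
from collections import Counter
from datetime import datetime

def weekly_post_counts(path):
    """Return (ISO week, post count) pairs, sorted by week."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Timestamps assumed to look like "2017-08-26T14:03:00".
            posted = datetime.fromisoformat(row["created_time"])
            year, week, _ = posted.isocalendar()
            counts[f"{year}-W{week:02d}"] += 1
    return sorted(counts.items())

if __name__ == "__main__":
    for week, n in weekly_post_counts("group_posts.csv"):
        print(week, "#" * max(1, n // 10))  # crude text histogram
```

The hard part was never the counting; it was that outside researchers could only ever see whatever slice of data Meta chose to expose.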

What Meta did about it

Across the first-person narratives from Burmese and western tech and civil society people, there’s a thread of increasingly intense frustration—bleeding into desperation—among the people who tried, over and over, to get individual pieces of dehumanizing propaganda, graphic disinformation, and calls to violence removed from Facebook by reporting them to Facebook.

They report posts and never hear anything. They report posts that clearly call for violence and eventually hear back that they’re not against Facebook’s Community Standards. This is also true of the Rohingya refugees Amnesty International interviews in Bangladesh—they were also reporting posts demonizing and threatening their communities, and it didn’t help.44

Writing on behalf of the Burmese and western people in the private Facebook group with Facebook employees, Htaike Htaike Aung and Victoire Rio summarize the situation in 2016, during the first wave of “clearance operations”:

…Facebook was unequipped to proactively address risk concerns. They relied nearly exclusively on us, as local partners, to point them to problematic content. Upon receiving our escalations…they would typically address the copy we escalated but take no further steps to remove duplicate copies or address the systemic policy or enforcement gaps that these escalations brought to light.… We kept asking for more points of contact, better escalation protocols, and interlocutors with knowledge of the language and context who could make decisions on the violations without requiring the need for translators and further delays. We got none of that.45

And as we now know, Meta’s fleet of Burmese-speaking contractors had grown to a total of four at the end of 2015. According to Reuters, in 2018, Meta had about 60 people reviewing reported content from Myanmar via the Accenture-run “Project Honey Badger” contract operation in Kuala Lumpur, plus three more in Dublin, to monitor Myanmar’s approximately 18 million Facebook users.46 So in 2016 and 2017, Meta has somewhere between 4 and 63-ish Burmese speakers monitoring hate speech and violence-inciting messages in Myanmar. And zero of them, incidentally, in Myanmar itself.

I don’t know how many content reviewers Meta employed globally in 2016 and 2017, so we have to skip ahead to get an estimate. In his 2018 appearance before the US House Energy and Commerce Committee, Mark Zuckerberg is asked by Texas House Representative Pete Olson whether Meta employs about 27,000 people. Zuckerberg says yes.

OLSON: I’ve also been told that about 20,000 of those people, including contractors, do work on data security. Is that correct?

ZUCKERBERG: Yes. The 27,000 number is full time employees. And the security and content review includes contractors, of which there are tens of thousands. Or will be. Will be by the time that we hire those.47

There are several remarkable things about this exchange, including that when Rep. Olson afterward sums up, incorrectly, that this means that more than half of Meta’s employees “deal with security practices,” Zuckerberg doesn’t correct him, but I’ll just emphasize that Meta is claiming to have (or be hiring!) tens of thousands of contractors to work on security and content review, in 2018. And for Myanmar, where by 2018 the genocide of the Rohingya has already peaked, they’ve managed to assemble about 63.

As it turns out, even the United Nations’ own Mission, acting in an official capacity, can’t get Facebook to remove posts explicitly calling for the murder of a human rights defender.

“Don’t leave him alive”

Both the UN Mission’s findings and Amnesty International’s big report tell the story of this person—an international aid worker targeted for his alleged cooperation with the Mission. He’s unnamed in the UN report; Amnesty calls him “Michael.”

Here’s how it happens: “Michael” does an interview with a local journalist in Myanmar about the situation he’d observed in Rakhine State, and the interview goes viral on Facebook.

The response by anti-Rohingya extremists is immediate and intense: The most dangerous Facebook post made about Michael features a picture of his opened passport and describes him as a “Muslim” and “national traitor.” The comments on the Facebook post call for Michael’s murder: “If this animal is still around, find him and kill him. There needs to be government officials in NGOs.” “He is a Muslim. Muslims are dogs and need to be shot.” “Don’t leave him alive. Remove his whole race. Time is ticking.”48

Strangers start recognizing Michael from the viral posts, and warning him that he’s in danger. The threats expand to include his family.

The UN Mission team investigating the attacks on the Rohingya knows Michael. They get involved, reporting the post with the photo of Michael’s passport in it to Facebook four times. Each time, they get the same response: the post had been reviewed and “doesn’t go against one of [Facebook’s] specific Community Standards.”49

By this point, the post has been shared more than 1,000 times, and many others have appeared. Michael’s friends and colleagues in Myanmar and in the US are reporting everything they can find—some posts get deleted, but hundreds more appear, like “a game of whack-a-mole.”50

The UN team escalates and emails an official Facebook email account; no one responds. At this point, the team tells Michael that it’s time to get out of Myanmar—it’s too dangerous to stay.

Several weeks later, the UN Mission is finally able to get Facebook to take down the original post, but only with the help of a contact at Facebook. And copies of the post keep circulating on Facebook.

The Mission team write that they encountered “many similar cases where individuals, usually human rights defenders or journalists, become the target of an online hate campaign that incites or threatens violence.”

In their briefing document about the many attempts to get Facebook to stop fueling the violence in Myanmar, Htaike Htaike Aung and Victoire Rio write:

Despite the escalating risks, we did not see much progress over that period, and Facebook was just as unequipped to deal with the escalation of anti-Rohingya rhetoric and violence in August 2017 as they had been in 2016.… Ultimately, it was still down to us, as local partners, to warn them. We simply couldn’t cope with the scale.51

Meta’s active harms: the incentives

In a 2016 interview, Burmese civil-society and digital-literacy activists Htaike Htaike Aung and Phyu Phyu Thi speak about the work their organization, MIDO, was doing to counter hate speech and misinformation. Which was a lot: They’re doing digital literacy and media literacy training, they’ve built more than 60 digital literacy centers throughout Myanmar, they monitor online hate speech, and they run a “Real or Not” fact-checking page for Burmese users.52

Even so, Myanmar’s civil society organizations and under-resourced activists simply can’t keep pace with what’s happening online—not without action on Meta’s part to sharply reduce and deviralize genocidal content at the product-design level.

There were—and are—ways for Meta to change its inner machinery to reduce or eliminate the harms it does. But in 2016, the company actually does something that makes the situation much worse.

In addition to continuing to algorithmically accelerate extremist messages, Meta introduces a new program that takes a wrecking ball to Myanmar’s online media landscape: Instant Articles.

If you’re from North America or Europe, you probably know Instant Articles as one of the ways Meta persuaded media organizations to publish their work directly on Facebook, ostensibly in exchange for fast loading and shared ad revenue.

Instant Articles was kind of a bust for actual media organizations, but in many places, including in Myanmar, it became a way for clickfarms to make a lot of money—ten times the average Burmese salary—by producing and propagating super-sensationalist fake news.

“In a country where Facebook is synonymous with the internet,” the MIT Technology Review’s Karen Hao writes, “the low-grade content overwhelmed other information sources.”53

The result for Myanmar’s millions of Facebook users is an explosive decompression of its online information sphere. In 2015, before Instant Articles expands to Myanmar, 6 out of 10 websites getting the most engagement on Facebook in Myanmar are “legitimate” media organizations. A year after Instant Articles hits the country, legitimate publishers make up only 2 of the top 10 publishers on Facebook. By 2018, the number of legit publishers on the list is zero—all 10 are fake news.54

This is the online landscape in place in 2016 and 2017.

Then there are the algorithms.

“People saw the vilest content the most”

When he speaks to Amnesty International about his experience being targeted on Facebook, Michael (who was in Myanmar 2013–2018) also talks about what Facebook’s News Feed looked like in Myanmar in more general terms:

“The vitriol against the Rohingya was unbelievable online—the amount of it, the violence of it. It was overwhelming. There was just so much. That spilled over into everyday life…

“The news feed in general [was significant]—seeing a mountain of hatred and disinformation being levelled [against the Rohingya], as a Burmese person seeing that, I mean, that’s all that was on people’s news feeds in Myanmar at the time. It reinforced the idea that these people were all terrorists not deserving of rights. This mountain of misinformation definitely contributed [to the outbreak of violence].”

And elsewhere in the same interview:

“The fact that the comments with the most reactions got priority in terms of what you saw first was big—if someone posted something hate-filled or inflammatory it would be promoted the most—people saw the vilest content the most. I remember the angry reactions seemed to get the highest engagement. Nobody who was promoting peace or calm was getting seen in the news feed at all.”55

So let’s remember: by 2016, active observers of social media—and Facebook in particular—have a pretty good sense of what makes things go viral. And clearly there are organized groups in Myanmar—Ma Ba Tha’s hardline monks, for one—who are super skilled at getting a lot of eyes on their anti-Rohingya Facebook content.
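
To make the mechanism Michael is describing concrete, here’s a toy sketch of engagement-weighted ranking in Python. The weights and the scoring formula are invented for illustration—Meta’s actual ranking systems are vastly more complex and mostly not public—but even a crude version like this shows why a feed sorted on raw engagement keeps surfacing whatever provokes the strongest reactions.

```python
# Toy model of an engagement-ranked feed. The reaction weights and the
# scoring formula are invented for illustration; they are not Meta's.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    reactions: dict = field(default_factory=dict)  # e.g. {"angry": 900}
    comments: int = 0
    shares: int = 0

# Hypothetical weights: every reaction counts, and comments and shares
# count even more, regardless of what the post is doing to people.
REACTION_WEIGHTS = {"like": 1.0, "love": 1.5, "haha": 1.5, "angry": 1.5}

def engagement_score(post):
    reactions = sum(REACTION_WEIGHTS.get(kind, 1.0) * n
                    for kind, n in post.reactions.items())
    return reactions + 2.0 * post.comments + 3.0 * post.shares

def rank_feed(posts):
    """Sort posts so the highest-engagement items appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, factual correction", {"like": 40}, comments=5, shares=2),
    Post("Inflammatory rumor", {"angry": 900, "haha": 120}, comments=650, shares=400),
])
print([p.text for p in feed])  # the inflammatory post wins, every time
```

Swap the toy weights for real ones tuned to maximize engagement, and you have the dynamic Burmese observers kept describing from the outside.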

But the big, super-frustrating problem with trying to understand Facebook’s effects through accounts like these is that they only describe what can be deduced from the network’s exterior surfaces—what people see, what they report, what happens afterward. I believe these accounts—I especially trust the statements from Burmese people working on the ground—but they’re all coming from outside Facebook’s machinery.

Which is why we’re incredibly lucky to get, just a few years later, an inside view of what was really happening—and what Meta knew about it as it happened.

Next up: Part III: The Inside View.


  1. “Technology Leaders Launch Partnership to Make Internet Access Available to All,” Facebook.com, August 20, 2013, archived at Archive.org. The promotional video is “Every one of us,” Internet.org, August 20, 2013.↩︎

  2. “Is Connectivity A Human Right?” Mark Zuckerberg, Facebook.com, August 20, 2013 (the memo is undated, so I’m taking the date from contemporary reports and other launch documents).↩︎

  3. “Facebook And 6 Phone Companies Launch Internet.org To Bring Affordable Access To Everyone,” Josh Constine, TechCrunch, August 20, 2013; “Facebook’s internet.org initiative aims to ‘connect the next 5 billion people’,” Stuart Dredge, The Guardian, August 21, 2013; “Facebook project aims to connect global poor,” Al Jazeera America, August 21, 2013.↩︎

  4. “Facebook Leads an Effort to Lower Barriers to Internet Access,” Vindu Goel, The New York Times, August 20, 2013.↩︎

  5. Meta Earnings Presentation, Q2 2023, July 26, 2023 (date from associated press release); “Facebook’s Q2: Monthly Users Up 21% YOY To 1.15B, Dailies Up 27% To 699M, Mobile Monthlies Up 51% To 819M,” TechCrunch, July 24, 2013. (The actual earnings presentation deck from the 2013 call doesn’t seem to be online except as a few screencaps here and there, which is irritating.)↩︎

  6. “Distribution of Worldwide Social Media Users in 2022, by Region,” Statista, 2022.↩︎

  7. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021 (Chapter Nine: “Think Before You Share”).↩︎

  8. “What Happened to Facebook’s Grand Plan to Wire the World?” Jessi Hempel, Wired, May 17, 2018; “Facebook is changing its name to Meta as it focuses on the virtual world,” Elizabeth Dwoskin, The Washington Post, October 28, 2021. (That should be a paywall-free WaPo link, but it doesn’t always work.)↩︎

  9. “Myanmar’s MPT launches Facebook’s Free Basics,” Joseph Waring, Mobile World Live, June 7, 2016.↩︎

  10. “Hatebook: Why Facebook is losing the war on hate speech in Myanmar,” Reuters, August 15, 2018. (You may see bigger numbers elsewhere—in a 2017 New York Times article, Kevin Roose claims that Facebook had 30 million users in Myanmar in 2017. Roose doesn’t cite his sources, but the same range he uses, from two million to more than 30 million, shows up in The Atlantic and CBS News. I don’t think there’s any way this number can be right, but Meta doesn’t disclose this information.)↩︎

  11. “The Facebook-Loving Farmers of Myanmar,” Craig Mod, The Atlantic, January 21, 2016.↩︎

  12. “Facebook is Worse than You Think: Whistleblower Reveals All | Frances Haugen x Rich Roll,” The Rich Roll Podcast, September 7, 2023. This is a little outside my usual sourcing zone—Roll is a vegan athlete…influencer, I gather?—but Haugen does a lot of interviews, and sometimes the least formal ones turn up the most interesting statements. The context for the bit I quote comes in around 8:30 and the quote is at 9:19.↩︎

  13. “Facebook Destroys Everything: Part 1,” Faine Greenwood, August 8, 2023.↩︎

  14. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021.↩︎

  15. “The Facebook-Loving Farmers of Myanmar,” Craig Mod, The Atlantic, January 21, 2016.↩︎

  16. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018—the report landing page includes summaries, metadata, and infographics. Content warnings apply throughout: this is atrocity material.↩︎

  17. “Ethnic Insurgencies and Peacemaking in Myanmar,” Tin Maung Maung Than, The Newsletter of the International Institute for Asian Studies, No. 66, Winter 2013.↩︎

  18. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018; “They Came and Destroyed Our Village Again: The Plight of Internally Displaced Persons in Karen State,” Human Rights Watch, June 9, 2005.↩︎

  19. “Civil War in Myanmar,” the Center for Preventive Action at the Council on Foreign Relations, publish date not provided; updated April 25, 2023.↩︎

  20. The Burmese Labyrinth: A History of the Rohingya Tragedy, Carlos Sardiña Galache, Verso, 2020. The “140,000” figure is drawn from “One year on: Displacement in Rakhine state, Myanmar,” a briefing note from the UN Human Rights Council published June 7, 2013.↩︎

  21. “Atrocities Prevention Report: Targeting of and Attacks on Members of Religious Groups in the Middle East and Burma,” US Department of State, March 17, 2016.↩︎

  22. “Myanmar: Security Forces Target Rohingya During Vicious Rakhine Scorched-Earth Campaign,” Amnesty International, December 19, 2016.↩︎

  23. “Myanmar: Security Forces Target Rohingya During Vicious Rakhine Scorched-Earth Campaign,” Amnesty International, December 19, 2016; “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  24. “21,000 Rohingya Muslims Flee to Bangladesh to Escape Persecution in Myanmar,” Ludovica Iaccino, The International Business Times, December 6, 2016.↩︎

  25. “Rohingya Crisis: Finding out the Truth about Arsa Militants,” Jonathan Head, BBC, October 11, 2017.↩︎

  26. “Myanmar: New evidence reveals Rohingya armed group massacred scores in Rakhine State,” Amnesty International, May 22, 2018.↩︎

  27. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  28. “Rohingya Crisis—A Summary of Findings from Six Pooled Surveys,” Médecins Sans Frontières, December 9, 2017.↩︎

  29. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  30. “Rohingya Crisis—A Summary of Findings from Six Pooled Surveys,” Médecins Sans Frontières, December 9, 2017.↩︎

  31. “Crimes Against Humanity in Myanmar,” Amnesty International, May 15, 2019 (dated PDF version).↩︎

  32. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  33. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  34. “UN Human Rights Chief Points to ‘Textbook Example of Ethnic Cleansing’ in Myanmar,” UN News, September 11, 2017.↩︎

  35. “World Court Rejects Myanmar Objections to Genocide Case,” Human Rights Watch, July 22, 2022.↩︎

  36. “Genocide, Crimes Against Humanity and Ethnic Cleansing of Rohingya in Burma,” Antony Blinken, US Department of State, March 21, 2022; “Country Case Studies: Burma,” United States Holocaust Memorial Museum, undated resource.↩︎

  37. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  38. “U.N. investigators cite Facebook role in Myanmar crisis,” Reuters, March 12, 2018.↩︎

  39. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  40. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  41. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  42. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  43. “Revealed: Facebook hate speech exploded in Myanmar during Rohingya crisis,” Michael Safi, The Guardian, April 2, 2018.↩︎

  44. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021; “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  45. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  46. “Hatebook: Why Facebook is losing the war on hate speech in Myanmar,” Reuters, August 15, 2018.↩︎

  47. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  48. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  49. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  50. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  51. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  52. “‘If It’s on the Internet It Must Be Right’: an Interview With Myanmar ICT for Development Organisation on the Use of the Internet and Social Media in Myanmar,” Rainer Einzenberger, Advances in Southeast Asian Studies (ASEAS), formerly the Austrian Journal of South-East Asian Studies, December 30, 2016.↩︎

  53. “How Facebook and Google Fund Global Misinformation,” Karen Hao, The MIT Technology Review, November 20, 2021. Karen Hao is so good on all of this, btw. One of the best.↩︎

  54. “Revealed: Facebook hate speech exploded in Myanmar during Rohingya crisis,” Michael Safi, The Guardian, April 2, 2018.↩︎

  55. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

30 September 2023