Meta in Myanmar, Part IV: Only Connect

The Atlantic Council’s report on the looming challenges of scaling trust and safety on the web opens with this statement:

That which occurs offline will occur online.

I think the reverse is also true: That which occurs online will occur offline.

Our networks don’t create harms, but they reveal, scale, and refine them, making it easier to destabilize societies and destroy human beings. The more densely the internet is woven into our lives and societies, the more powerful the feedback loop becomes.

In this way, our networks—and specifically, the most vulnerable and least-heard people inhabiting them—have served as a very big lab for gain-of-function research by malicious actors.

And as the first three posts in this series make clear, you don’t have to be online at all to experience the internet’s knock-on harms—there’s no opt-out when internet-fueled violence sweeps through and leaves villages razed and humans traumatically displaced or dead. (And the further you are from the centers of tech-industry power—geographically, demographically, culturally—the less likely it is that the social internet’s principal powers will do anything to plan for, prevent, or attempt to repair the ways their products hurt you.)

I think that’s the thing to keep in the center while trying to sort out everything else.

In the previous 30,000 words of this series, I’ve tried to offer a careful accounting of the knowable facts of Myanmar’s experience with Meta. Here’s the argument I outlined toward the end of Part II:

  1. Meta bought and maneuvered its way into the center of Myanmar’s online life and then inhabited that position with a recklessness that was impervious to warnings by western technologists, journalists, and people at every level of Burmese society. (This is most of Part I.)
  2. After the 2012 violence, Meta mounted a content moderation response so inadequate that it would be laughable if it hadn’t been deadly. (Discussed in Part I and also in Part II.)
  3. With its recommendation algorithms and financial incentive programs, Meta devastated Myanmar’s new and fragile online information sphere and turned thousands of carefully laid sparks into flamethrowers. (Discussed in Part II and in Part III.)
  4. Despite its awareness of similar covert influence campaigns based on “inauthentic behavior”—aka fake likes, comments, and Pages—Meta allowed an enormous and highly influential covert influence operation to thrive on Burmese-language Facebook throughout the run-up to the peak of the 2016 and 2017 “ethnic cleansing,” and beyond. (Part III.)

I still think that’s right. But this story’s many devils are in the details, and getting at least some of the details down in public was the whole point of this very long exercise.

Here at the end of it, it’s tempting to package up a tidy set of anti-Meta action items and call it a day, but there’s nothing tidy about this story, or about what I think I’ve learned working on it. What I’m going to do instead is try to illuminate some facets of the problem, suggest some directions for mitigations, rotate the problem, and repeat.

The allure of the do-over

After my first month of part-time research on Meta in Myanmar, I was absorbed in the work and roughed up by the awfulness of what I was learning and, frankly, incandescently furious with Meta’s leadership. But sometime after I read Faine Greenwood’s posts—and reread Craig Mod’s essay for the first time since 2016—I started to get scared, for reasons I couldn’t even pin down right away. Like, wake-up-at-3am scared.

At first, I thought I was just worried that the new platforms and networks coming into being would also be vulnerable to the kind of coordinated abuse that Myanmar experienced. And I am worried about that and will explain at great length later in this post. But it wasn’t just that.

Craig’s essay about his 2015 fieldwork with farmers in Myanmar captures something real about the exhilarating possibilities of a reboot:

…there is a wild and distinct freedom to the feeling of working in places like this. It is what intoxicates these consultants. You have seen and lived within a future, and believe—must believe—you can help bring some better version of it to light here. A place like Myanmar is a wireless mulligan. A chance to get things right in a way that we couldn’t or can’t now in our incumbent-laden latticeworks back home.

This rings bells not only because I remember my own early-internet spells of optimism—which were pretty far in the rearview by 2016—but because I recognize a much more recent feeling, which is the way it felt to come back online last fall, as the network nodes on Mastodon were starting to really light up.

I’d been mostly off social media since 2018, with a special exception for covid-data work in 2020 and 2021. But in the fall and winter of 2022, the potential of the fediverse was crackling in ways it hadn’t been in my previous Mastodon experiences in 2017 and 2018. If you’ve been in a room where things are happening, you’ll recognize that feeling forever, and last fall, it really felt like some big chunks of the status quo had changed state and gone suddenly malleable.

I also believe that the window for significant change in our networks doesn’t open all that often and doesn’t usually stay open for long.

So like any self-respecting moth, when I saw it happening on Mastodon, I dropped everything I’d been doing and flew straight into the porch light and I’ve been thinking and writing toward these ideas since.

Then I did the Myanmar project. By the time I got to the end of the research, I recognized myself in the accounts of the tech folks at the beginning of Myanmar’s internet story, so hopeful about the chance to escape the difficulties and disappointments of the recent past.

And I want to be clear: There’s nothing wrong with feeling hopeful or optimistic about something new, as long as you don’t let yourself defend that feeling by rejecting the possibility that the exact things that fill you with hope can also be turned into weapons. (I’ll be more realistic—will be turned into weapons, if you succeed at drawing a mass user base and don’t skill up and load in peacekeeping expertise at the same time.)

A lot of people have written and spoken about the unusual naivety of Burmese Facebook users, and how that made them vulnerable, but I think Meta itself was also dangerously naive—and worked very hard to stay that way as long as possible. And still largely adopts the posture of the naive tech kid who just wants to make good.

It’s an act now, though, to be clear. They know. There are some good people working themselves to shreds at Meta, but the company’s still out doing PR tapdancing while people in Ethiopia and India and (still) Myanmar suffer.

When I first realized how bad Meta’s actions in Myanmar actually were, it felt important to try to pull all the threads together in a way that might be useful to my colleagues and peers who are trying in various ways to make the world better by making the internet better. I thought I would end by saying, “Look, here’s what Meta did in Myanmar, so let’s get everyone the fuck off of Meta’s services into better and safer places.”

I’ve landed somewhere more complicated, because although I think Meta’s been a disaster, I’m not confident that there are sustainable better places for the vast majority of people to go. Not yet. Not without a lot more work.

We’re already all living through a series of rolling apocalypses, local and otherwise. Many of us in the west haven’t experienced the full force of them yet—we experience the wildfire smoke, the heat, the rising tide of authoritarianism, and rollbacks of legal rights. Some of us have had to flee. Most of us haven’t lost our homes, or our lives. Nevertheless, these realities chip away at our possible futures. I was born at about 330 PPM; my daughter was born at nearly 400.

The internet in Myanmar was born at a few seconds to midnight. Our new platforms and tools for global connection have been born into a moment in which the worst and most powerful bad actors, both political and commercial, are already prepared to exploit every vulnerability.

We don’t get a do-over planet. We won’t get a do-over network.

Instead, we have to work with the internet we made and find a way to rebuild and fortify it to support the much larger projects of repair—political, cultural, environmental—that are required for our survival.

I think those are the stakes, or I’d be doing something else with my time.

What “better” requires

I wrestled a lot with the right way to talk about this and how much to lean on my own opinions vs. the voices of Myanmar’s own civil society organizations and the opinions of whistleblowers and trust and safety experts.

I’ve ended up taking the same approach to this post as I did with the previous three, synthesizing and connecting information from people with highly specific expertise and only sometimes drawing from my own experience and work.

If you’re not super interested in decentralized and federated networks, you probably want to skip down a few sections.

If you’d prefer to get straight to the primary references, here they are:

Notes and recommendations from people who were on the ground in Myanmar, and are still working on the problems the country faces:

Two docs related to large-scale threats. The federation-focused “Annex Five” of the big Atlantic Council report, Scaling Trust on the Web. The whole report is worth careful reading, and this annex feels crucial to me, even though I don’t agree with every word.

I’m also including Camille François’ foundational 2019 paper on disinformation threats, because it opens up important ideas.

Three deep dives with Facebook whistleblowers.

…otherwise, here’s what’s worrying me.

1. Adversaries follow the herd

Realistically, a ton of people are going to stay on centralized platforms, which are going to continue to fight very large-scale adversaries. (And realistically, those networks are going to keep ignoring as much as they can for as long as they can—which especially means that outside the US and Western Europe, they’re going to ignore a lot of damage until they’re regulated or threatened with regulation. Especially companies like Google/YouTube, whose complicity in situations like the one in Myanmar has been partially overlooked because Meta’s is so striking.)

But a lot of people are also trying new networks, and as they do, spammers and scammers and griefers will follow, in increasingly large numbers. So will the much more sophisticated people—and pro-level organizations—dedicated to manipulating opinion; targeting, doxxing, and discrediting individuals and organizations; distributing ultra-harmful material; and sowing division among their own adversaries. And these aren’t people who will be deterred by inconvenience.

In her super-informative interview on the Brown Bag podcast from the ICT4Peace Foundation, Myanmar researcher Victoire Rio mentions two things that I think are vital to this facet of the problem: One is that as Myanmar’s resistance moved off of Facebook and onto Telegram for security reasons after the coup, the junta followed suit and weaponized Telegram as a crowdsourced doxxing tool that has resulted in hundreds of arrests—Rio calls it “the Gestapo on steroids.”

This brings us to the next thing, which is commonly understood in industrial-grade trust and safety circles but, I think, less so on newer networks. Those networks have mostly experienced old-school adversaries—basic scammers and spammers, distributors of illegal and horrible content, and garden-variety amateur Nazis and trolls—and although those blunter, less sophisticated harms are still quite bad, the more sophisticated threats that are common on the big centralized platforms are considerably more difficult to identify and root out. And if the people running new networks don’t realize that what we’re seeing right now are the starter levels, they’re going to be way behind the ball when better-organized adversaries arrive.

2. Modern adversaries are heavy on resources and time

Myanmar has a population of about 51 million people, and in the years before the coup, it already had an internal adversary in the military that ran a professionalized, Russia-trained online propaganda and deception operation that maxed out at about 700 people, working in shifts to manipulate the online landscape and shout down opposing points of view. It’s hard to imagine that this force has lessened now that the genocidaires are running the country.

Russia’s adversarial operations roll much deeper, and aren’t limited to the well-known, now allegedly disbanded Internet Research Agency.

And although Russia is the best-known adversary in most US and Western European conversations I’ve been in, it’s very far from being the only one. Here’s disinfo and digital rights researcher Camille François, warning about the association of online disinformation with “the Russian playbook”:

Russia is neither the most prominent nor the only actor using manipulative behaviors on social media. This framing ignores that other actors have abundantly used these techniques, and often before Russia. Iran’s broadcaster (IRIB), for instance, maintains vast networks of fake accounts impersonating journalists and activists to amplify its views on American social media platforms, and it has been doing so since at least 2012.

What’s more, this kind of work isn’t the exclusive domain of governments. A vast market of for-hire manipulation proliferates around the globe, from Indian public relations firms running fake newspaper pages to defend Qatar’s interests ahead of the World Cup, to Israeli lobbying groups running influence campaigns with fake pages targeting audiences in Africa.

This chimes with what Sophie Zhang reported about fake-Page networks on Facebook in 2019—they’re a genuinely global phenomenon, and they’re bigger, more powerful, and more diverse in both intent and tactics than most people suspect.

I think it’s easy to imagine that these heavy-duty threats focus only on the big, centralized services, but an in-depth analysis of just one operation, Secondary Infektion, shows that it operated across at least 300 websites and platforms ranging from Facebook, Reddit, and YouTube (and WordPress, Medium, and Quora) to literally hundreds of other sites and forums.

These adversaries will take advantage of decentralized social networks. Believing otherwise requires a naivety I hope we’ll come to recognize as dangerous.

3. No algorithms ≠ no trouble

Federated networks like Mastodon, which eschews algorithmic acceleration, offer fewer incentives for some kinds of adversarial actors—and that’s very good. But fewer isn’t none.

Here’s what Lai and Roth have to say about networks without built-in algorithmic recommendation surfaces:

The lack of algorithmic recommendations means there’s less of an attack surface for inauthentic engagement and behavioral manipulation. While Mastodon has introduced a version of a “trending topics” list—the true battlefield of Twitter manipulation campaigns, where individual posts and behaviors are aggregated into a prominent, platform-wide driver of attention—such features tend to rely on aggregation of local (rather than global or federated) activity, which removes much of the incentive for engaging in large-scale spam. There’s not really a point to trying to juice the metrics on a Mastodon post or spam a hashtag, because there’s no algorithmic reward of attention for doing so…

These disincentives for manipulation have their limits, though. Some of the most successful disinformation campaigns on social media, like the IRA’s use of fake accounts, relied less on spam and more on the careful curation of individual “high-value” accounts—with uptake of their content being driven by organic sharing, rather than algorithmic amplification. Disinformation is just as much a community problem as it is a technological one (i.e., people share content they’re interested in or get emotionally activated by, which sometimes originates from troll farms)—which can’t be mitigated just by eliminating the algorithmic drivers of virality.

Learning in bloody detail about how thoroughly Meta’s acceleration machine overran all of its attempts to suppress undesirable results has made me want to treat algorithmic virality like a nuclear power source: Maybe it’s good in some circumstances, but if we aren’t prepared to do industrial-grade harm-prevention work and not just halfhearted cleanup, we should not be fucking with it, at all.

But, of course, we already are. Lemmy uses algorithmic recommendations. Bluesky has subscribable, user-built feeds that aren’t opaque and monolithic in the way that, say, Facebook’s are—but they’re still juicing the network’s dynamics, and the platform hasn’t even federated yet.

I think it’s an open question how much running fully transparent, subscribable algorithmic feeds that are controlled by users mitigates the harm recommendation systems can do. I think I have a more positive view of AT Protocol than maybe 90% of fediverse advocates—which is to say, I feel neutral, and like it’s probably too early to know much—but I’d be lying if I said I’m not nervous about what will happen when the people behind large-scale covert influence networks get to build and promote their own algo feeds using any identity they choose.
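If it helps to see what I mean by algorithmic acceleration, here’s a deliberately tiny Python sketch—with invented weights, not any platform’s actual ranking code—of the difference between a plain chronological timeline and an engagement-weighted one. The second function is the kind of machinery that hands extra distribution to whatever gets the strongest reaction.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Post:
    author: str
    created_at: datetime  # assumed timezone-aware
    likes: int
    reshares: int
    replies: int


def chronological(posts):
    # A plain reverse-chronological timeline: nothing gets boosted,
    # so outrage earns no extra distribution.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)


def engagement_ranked(posts, now=None):
    # A toy engagement-weighted ranking of the kind centralized feeds use:
    # interactions push a post up, age pushes it down.
    # The weights here are invented purely for illustration.
    now = now or datetime.now(timezone.utc)

    def score(p):
        age_hours = (now - p.created_at).total_seconds() / 3600
        engagement = p.likes + 3 * p.reshares + 5 * p.replies
        return engagement / (1 + age_hours)

    return sorted(posts, key=score, reverse=True)
```

Every real recommendation system is vastly more complicated than this, but the basic trade is the same: the more a feed rewards measured engagement, the more there is for an adversary to game.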

4. The benefits and limits of defederation

Another characteristic of fediverse (by which I mean ActivityPub-based servers, mostly “interoperable”) networks is the ability for both individual users and whole instances to defederate from each other. The ability to “wall off” instances hosting obvious bad actors and clearly harmful content offers ways for good-faith instance administrators to sharply reduce certain kinds of damage.
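Mechanically, defederation comes down to a denylist the server consults before it accepts or displays another instance’s content. Here’s a toy sketch—hypothetical domain names and simplified dispatch, not Mastodon’s actual implementation—of the idea:

```python
from enum import Enum
from urllib.parse import urlparse


class Severity(Enum):
    SILENCE = "silence"   # keep posts out of shared timelines; existing follows still work
    SUSPEND = "suspend"   # refuse the instance's content entirely


# Hypothetical admin-maintained block list for a single instance.
DOMAIN_BLOCKS = {
    "obvious-nazis.example": Severity.SUSPEND,
    "spam-farm.example": Severity.SILENCE,
}


def handle_inbound(activity: dict) -> str:
    """Decide what to do with an incoming federated post,
    based only on which server it came from."""
    domain = urlparse(activity["actor"]).hostname
    block = DOMAIN_BLOCKS.get(domain)
    if block is Severity.SUSPEND:
        return "reject"
    if block is Severity.SILENCE:
        return "deliver_to_followers_only"
    return "deliver"


# Example: a post from a suspended instance never reaches local users.
print(handle_inbound({"actor": "https://obvious-nazis.example/users/admin"}))  # -> "reject"
```

The catch is that the whole mechanism is only as good as the information—and the good faith—behind the blocking decisions.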

It also means, of course, that instances can get false-flagged by adversaries who make accounts on target groups’ instances and post abuse in order to get those instances mass-defederated, as was reportedly happening in early 2023 with Ukrainian servers. I’m inclined to think that this may be a relatively niche threat, but I’m not the right person to evaluate that.

A related threat that was expressed to me by someone who’s been working on the ground in Myanmar for years is that authoritarian governments will corral their citizens on instances/servers that they control, permitting both surveillance and government-friendly moderation of propaganda.

Given the tremendous success of many government-affiliated groups in creating (and, when disrupted, rebuilding) huge fake-Page networks on Facebook, I’d also expect to see harmless-looking instances pop up that are actually controlled by covert influence campaigns and/or organizations that intend to use them to surveil and target activists, journalists, and others who oppose them.

And again, these aren’t wild speculations: Myanmar’s genocidal military turned out to be running many popular, innocuous-looking Facebook Pages (“Let’s Laugh Casually Together,” etc.) and has demonstrated the ability to switch tactics to keep up with both platforms and the Burmese resistance after the coup. It seems bizarre to me to assume that equivalent bad actors won’t work out related ways to take advantage of federated networks.

5. Content removal at mass scale is failing

The simple version of this idea is that content moderation at mass scale can’t be done well, full stop. I tend to think that we haven’t tried a lot of things that would help—not at scale, at least. But I would agree that doing content moderation in old-internet ways on the modern internet at mass scale doesn’t cut it.

Specifically, I think it’s increasingly clear that doing content moderation as a sideline or an afterthought, instead of building safety and integrity work into the heart of product design, is a recipe for failure. In Myanmar, Facebook’s engagement-focused algorithms easily outpaced—and often still defeat—Meta’s attempts to squash the hateful and violence-inciting messages they circulated.

Organizations and activists out of Myanmar are calling on social networks and platforms to build human-rights assessments not merely into their trust and safety work, but into changes to their core product design. Including, specifically, a recommendation to get product teams into direct contact with the people in the most vulnerable places:

Social media companies should increase exposure of their product teams to different user realities, and where possible, facilitate direct engagement with civil society in countries facing high risk of human rights abuse.

Building societal threat assessments into product design decisions is something that I think could move the needle much more efficiently than trying to just stuff more humans into the gaps.

Content moderation that focuses only on messages or accounts, rather than the actors behind them, also comes up short. The Myanmar Internet Project’s report highlights Meta’s failure—as late as 2022—to keep known bad actors involved in the Rohingya genocide off Facebook, despite its big takedowns and rules nominally preventing the military and the extremists of Ma Ba Tha from using Facebook to distribute their propaganda:

…most, if not all, of the key stakeholders in the anti-Rohingya campaign continue to maintain a presence on Facebook and to leverage Facebook and other platforms for influence. As we repeatedly warned the platforms, the bulk of the harmful content we face comes from a handful of actors, who have been consistently violating Terms of Services and Community Standards.

The Myanmar Internet Project recommends that social media companies “rethink their moderation approach to more effectively deter and—where warranted—restrict actors with a track record of violating their rules and terms of services, including by enforcing sanctions and restrictions at an actor and not account level, and by developing better strategies to detect and remove accounts of actors under bans.”

This is…going to be complicated on federated networks, even if I set aside the massive question of how federated networks will moderate messages originating outside their instances that require language and culture expertise they lack.

I’ll focus here on Mastodon because it’s big and it’s been federated for years. Getting rid of obvious, known bad actors at the instance level is something Mastodon excels at—viz. the full-scale quarantine of Gab. If you’re on a well-moderated, mainstream instance, a ton of truly horrific stuff is going to be excised from your experience on Mastodon because the bad instances get shitcanned. And because there’s no central “public square” to contest on Mastodon, with all the corporations-censoring-political-speech-at-scale issues those huge ~public squares raise, many instance admins feel free to use a pretty heavy hand in throwing openly awful individuals and instances out of the pool.

But imagine a sophisticated adversary with a sustained interest in running networks of both covert and overt accounts on Mastodon, and things rapidly get more complicated.

Lai and Roth weigh in on this issue, noting that the fediverse currently lacks capability and capacity for tracking bad actors through time in a structured way, and also doesn’t presently have much in the way of infrastructure for collaborative actor-level threat analysis:

First, actor-level analysis requires time-consuming and labor-intensive tracking and documentation. Differentiating between a commercially motivated spammer and a state-backed troll farm often requires extensive research, extending far beyond activity on one platform or website. The already unsustainable economics of fediverse moderation seem unlikely to be able to accommodate this kind of specialized investigation.

Second, even if you assume moderators can, and do, find accounts engaged in this type of manipulation—and understand their actions and motivations with sufficient granularity to target their activity—the burden of continually monitoring them is overwhelming. Perhaps more than anything else, disinformation campaigns demonstrate the “persistent” in “advanced persistent threat”: a single disinformation campaign, like China-based Spamouflage Dragon, can be responsible for tens or even hundreds of thousands of fake accounts per month, flooding the zone with low-quality content. The moderation tools built into platforms like Mastodon do not offer appropriate targeting mechanisms or remediations to moderators that could help them keep pace with this volume of activity.… Without these capabilities to automate enforcement based on long-term adversarial understanding, the unit economics of manipulation are skewed firmly in favor of bad actors, not defenders.

There’s also the perhaps even greater challenge of working across instances—and ideally, across platforms—to identify and root out persistent threats. Lai and Roth again:

From an analytic perspective, it can be challenging, if not impossible, to recognize individual accounts or posts as connected to a disinformation campaign in the absence of cross-platform awareness of related conduct. The largest platforms—chiefly, Meta, Google, and Twitter (pre-acquisition)—regularly shared information, including specific indicators of compromise tied to particular campaigns, with other companies in the ecosystem in furtherance of collective security. Information sharing among platform teams represents a critical way to build this awareness—and to take advantage of gaps in adversaries’ operational security to detect additional deceptive accounts and campaigns.… Federated moderation makes this kind of cross-platform collaboration difficult.

I predict that many advocates of federated and decentralized networks will believe that Lai and Roth are overstating these gaps in safety capabilities, but I hope more developers, instance administrators, and especially funders, will take this as an opportunity to prioritize scaled-up tooling and institution-building.
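To make the gap Lai and Roth are pointing at a little more concrete, here’s a rough sketch of the kind of structured, actor-level record that moderation teams could share across instances and platforms. The fields and names are my own invention for illustration—no such shared standard exists today, which is part of the problem:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ActorThreatRecord:
    """A hypothetical shared record describing a persistent bad actor,
    rather than a single account or post. Field names are illustrative,
    not an existing fediverse or industry standard."""
    campaign_id: str                   # an agreed label for the operation
    reported_by: str                   # the instance or platform filing the record
    first_seen: datetime
    last_seen: datetime
    known_accounts: list[str] = field(default_factory=list)   # actor URIs tied to the campaign
    known_domains: list[str] = field(default_factory=list)    # instances the actor runs or controls
    behavioral_indicators: list[str] = field(default_factory=list)  # e.g. posting cadence, shared infrastructure
    confidence: str = "medium"         # how sure the reporting team is
    recommended_action: str = "review" # suspend / silence / monitor
```

Nothing about a record like this is technically hard; the hard parts are the trust, funding, and institutions needed to compile, verify, and share it responsibly across thousands of independently run servers.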

Edited to add, October 16, 2023: Independent Federated Trust and Safety (IFTAS), an organization working on supporting and improving trust and safety on federated networks, just released the results of their Moderator Needs Assessment, highlighting needs for financial, legal, technical, and cultural support.

Meta’s fatal flaw

I think if you ask people why Meta failed to keep itself from being weaponized in Myanmar, they’ll tell you about optimizing for engagement and ravenously, heedlessly pursuing expansion and profits and continuously fucking up every part of content moderation.

I think those things are all correct, but there’s something else, too, though “heedless” nods toward it: As a company determined to connect the world at all costs, Meta failed, spectacularly, over and over, to make the connections that mattered, between its own machinery and the people it hurt.

So I think there are two linked things Meta could have done to prevent so much damage, which are to listen out for people in trouble and meaningfully correct course.

“Listening out” is from Ursula Le Guin, who said it in a 2015 interview with Choire Sicha that has never left my mind. She was speaking about the challenge of writing while raising children as her partner taught:

…it worked out great, but it took full collaboration between him and me. See, I cannot write when I’m responsible for a child. They are full-time occupations for me. Either you’re listening out for the kids or you’re writing. So I wrote when the kids went to bed. I wrote between nine and midnight those years.

This passage is always with me because the only time I’m not listening out, at least a little bit, is when my kid is completely away from the house at school. Even when she’s sleeping, I’m half-concentrating on whatever I’m doing and…listening out. I can’t wear earplugs at night or I hallucinate her calling for me in my sleep. This is not rational! But it’s hardwired. Presumably this will lessen once she leaves for college or goes to sea or whatever, but I’m not sure it will.

So listening out is meaningful to me for embarrassingly, viscerally human reasons. Which makes it not something a serious person puts into an essay about the worst things the internet can do. I’m putting it here anyway because it cuts to the thing I think everyone who works on large-scale social networks and tools needs to wire into our brainstems.

In Myanmar and in Sophie Zhang’s disclosures about the company’s refusal to prioritize the elimination of covert influence networks, Meta demonstrated not just an unwillingness to listen to warnings, but a powerful commitment to not permitting itself to understand or act on information about the dangers it was worsening around the world.

It’s impossible for me to read the Haugen and Zhang disclosures and not think of the same patterns of dismissing and hiding dangerous knowledge that we’ve seen from tobacco companies (convicted of racketeering and decades-long patterns of deception over tobacco’s dangers), oil companies (being sued by the state of California over decades-long patterns of deception over their contributions to climate change), or the Sacklers (whose company pled guilty to charges based on a decade-long pattern of deception over its contribution to the opioid epidemic).

But you don’t have to be a villain to succumb to the temptation to push away inconvenient knowledge. It often takes nothing more than being idealistic or working hard for little (or no) pay to believe that the good your work does necessarily outweighs its potential harms—and that especially if you’re offering it for free, any trouble people get into is their own fault. They should have done their own research, after all.

And if some people are warning about safety problems on an open source network where the developers and admins are trying their best, maybe they should just go somewhere else, right? Or maybe they’re just exaggerating, which is the claim I saw the most on Mastodon when the Stanford Internet Observatory published its report on CSAM on the fediverse.

We can’t have it both ways. Either people making and freely distributing tools and systems have some responsibility for their potential harms, or they don’t. If Meta is on the hook, so are people working in open technology. Even nice people with good intentions.

So: Listening out. Listening out for signals that we’re steering into the shoals. Listening out like it’s our own children at the sharp end of the worst things our platforms can do.

The warnings about Myanmar came from academics and digital rights people. They came, above all, from Myanmar, nearly 8,000 miles from Palo Alto. Twenty hours on a plane. Too far to matter, for too many years.


The civil society people who issued many of the warnings to Meta have clear thoughts about the way to avoid recapitulating Meta’s disastrous structural callousness during the years leading up to the genocide of the Rohingya. Several of those recommendations involve diligent, involved, hyper-specific listening to people on the ground about not only content moderation problems, but also dangers in the core functionality of social products themselves.

Nadah Feteih and Elodie Vialle’s recent piece in Tech Policy Press, “Centering Community Voices: How Tech Companies Can Better Engage with Civil Society Organizations,” offers a really strong introduction to what that kind of consultative process might be like for big platforms. I think it also offers about a dozen immediately useful clues about how smaller, more distributed, and newer networks might proceed as well.

But let’s get a little more operational.

“Do better” requires material support

It’s impossible to talk about any of this without talking about the resource problem in open source and federated networks—most of the sector is critically underfunded and built on gift labor, which has shaping effects on who can contribute, who gets listened to, and what gets done.

It would be unrealistic bordering on goofy to expect everyone who contributes to projects like Mastodon and Lemmy or runs a small instance on a federated network to independently develop in-depth human-rights expertise. It’s just about as unrealistic to expect even lead developers who are actively concerned about safety to have the resources and expertise to arrange close consultation with relevant experts in digital rights, disinformation, and complex, culturally specific issues globally.

There are many possible remedies to the problems and gaps I’ve tried to sketch above, but the one I’ve been daydreaming about a lot is the development of dedicated, cross-cutting, collaborative institutions that work not only within the realm of trust and safety as it’s constituted on centralized platforms, but also on hands-on research that brings the needs and voices of vulnerable people and groups into the heart of design work on protocols, apps, and tooling.

Maintainers and admins all over the networks are at various kinds of breaking points. Relatively few have the time and energy to push through year after year of precariousness and keep the wheels on out of sheer cussedness. And load-bearing personalities are not, I think, a great way to run a stable and secure network.

Put another way, rapidly growing, dramatically underfunded networks characterized by overtaxed small moderation crews and underpowered safety tooling present a massive attack surface. Believing that the same kinds of forces that undermined the internet in Myanmar won’t be able to weaponize federated networks because the nodes are smaller is a category error—most of the advantages of decentralized networks can be turned to adversaries’ advantage almost immediately.

Flinging money indiscriminately isn’t a cure, but without financial support that extends beyond near-subsistence for a few people, it’s very hard to imagine free and open networks being able to skill up in time to handle the kinds of threats and harms I looked at in the first three posts of this series.

The problem may look different for venture-funded projects like Bluesky, but I don’t know. I think in a just world, the new CTO of Mastodon wouldn’t be working full-time for free.

I also think that in that just world, philanthropic organizations with interests in the safety of new networks would press for and then amply fund collective, collaborative work across protocols and projects, because regardless of my own concerns and preferences, everyone who uses any of the new generation of networks and platforms deserves to be safe.

We all deserve places to be together online that are, at minimum, not inimical to offline life.

So what if you’re not a technologist, but you nevertheless care about this stuff? Unsurprisingly, I have thoughts.

Everything, everywhere, all at once

The inescapable downside of not relying on centralized networks to fix things is that there’s no single entity to try to pressure. The upside is that we can all work toward the same goals—better, safer, freer networks—from wherever we are. And we can work toward holding both centralized and new-school networks accountable, too.

If you live someplace with at least semi-democratic representation in government, you may be able to accomplish a lot by sending things like Amnesty International’s advocacy report and maybe even this series to your representatives, where there’s a chance their staffers will read them and be able to mount a more effective response to technological and corporate failings.

If you have an account on a federated or decentralized network, you can learn about the policies and plans of your own instance administrators—and you can press them (I would recommend politely) about their plans for handling big future threats like covert influence networks, organized but distributed hate campaigns, actor-level threats, and other dangers we’ve seen on centralized networks and can expect to see on decentralized ones.

And if you have time or energy or money to spare, you can throw your support (material or otherwise) behind collaborative institutions that seek to reduce societal harms.

On Meta itself

It’s my hope that the 30,000-odd words of context, evidence, and explanation in parts 1–3 of this series speak for themselves.

I’m sure some people, presumably including some who’ve worked for Meta or still do, will read all of those words and decide that Meta had no responsibility for its actions and failures to act in Myanmar. I don’t think I have enough common ground with those readers to try to discuss anything.

There are quite clearly people at Meta who have tried to fix things. A common thread across internal accounts is that Facebook’s culture of pushing dangerous knowledge away from its center crushes many employees who try to protect users and societies. In cases like Sophie Zhang’s, Meta’s refusal to understand and act on what its own employees had uncovered is clearly a factor in employee health breakdowns.

And the whistleblower disclosures from the past few years make it clear that many people over many years were trying to flag, prevent, and diagnose harm. And to be fair, I’m sure lots of horrible things were prevented. But it’s impossible to read Frances Haugen’s disclosures or Sophie Zhang’s story and believe that the company is doing everything it can, except in the sense that it seems unable to conceive of meaningfully redesigning its products—and rearranging its budgets—to stop hurting people.

It’s also impossible for me to read anything Meta says on the record without thinking about the deceptive, blatant, borderline contemptuous runarounds it’s been doing for years over its content moderation performance. (That’s in Part III, if you missed it.)

Back in 2018, Adam Mosseri—who during the genocide was in charge of News Feed, a major “recommendation surface” on which Facebook’s algorithms boosted genocidal anti-Rohingya messages in Myanmar—said that he’d lost some sleep over what had happened.

The lost sleep apparently didn’t amount to much in the way of product-design changes, considering that Global Witness found Facebook doing pretty much the exact same things with the same kinds of messages three years later.

But let’s look at what Mosseri actually said:

There is false news, not only on Facebook but in general in Myanmar. But there are no, as far as we can tell, third-party fact-checking organizations with which we can partner, which means that we need to rely instead on other methods of addressing some of these issues. We would look heavily, actually, for bad actors and things like whether or not they’re violating our terms of service or community standards to try and use those levers to try and address the proliferation of some problematic content. We also try to rely on the community and be as effective as we can at changing incentives around things like click-bait or sensational headlines, which correlate, but aren’t the same as false news.

Those are all examples of how we’re trying to take the issue seriously, but we lose some sleep over this. I mean, real-world harm and what’s happening on the ground in that part of the world is actually one of the most concerning things for us and something that we talk about on a regular basis. Specifically, about how we might be able to do more and be more effective, and more quickly.

This is in 2018, so six years after Myanmar’s digital-rights and civil-society organizations started contacting Meta to tell them about the organized hate campaigns on Facebook in Myanmar, which Meta appears to have ignored, because all those organized campaigns were still running through the peak of the Rohingya genocide in 2016 and 2017.

This interview also happens several years after Meta started relying on members of those same Burmese organizations to report content—because, if you remember from the earlier posts in this series, they hadn’t actually translated the Community Standards or the reporting interface itself. Or hired enough Burmese-speaking moderators to handle a country bigger than a cruise ship. It’s also interesting that Mosseri reported that Meta couldn’t find any “third-party fact-checking organizations” given that MIDO, which was one of the organizations reporting content problems to them, actually ran its own fact-checking operation.

And the incentives on the click-bait Mosseri mentions? That would be the market for fake and sensationalist news that Meta created by rolling out Instant Articles, which directly funded the development of Burmese-language clickfarms, and which pretty much destroyed Myanmar’s online media landscape back in 2016.

Mosseri and his colleagues talked about it on a regular basis, though.

I was going to let myself be snarky here and note that being in charge of News Feed during a genocide the UN Human Rights Council linked to Facebook doesn’t seem to have slowed Mosseri down personally, either. He’s the guy in charge of Meta’s latest social platform, Threads, after all.

But maybe it goes toward explaining why Threads refuses to allow users to search for potentially controversial topics, including the effects of an ongoing pandemic. This choice is being widely criticized as a failure to let people discuss important things. It feels to me like more of an admission that Meta doesn’t think it can do the work of content moderation, so it’s designing the product to avoid the biggest dangers.

It’s a clumsy choice, certainly. And it’s weird, after a decade of social media platforms charging in with no recognition that they’re making things worse. But if the alternative is returning to the same old unwinnable fight, maybe just not going there is the right call. (I expect that it won’t last.)

The Rohingya are still waiting

The Rohingya are people, not lessons. Nearly a million of them have spent at least six years in Bangladeshi camps that make up the densest refugee settlement on earth. Underfunded, underfed, and prevented from working, Rohingya people in the camps are vulnerable to climate-change-worsened weather, monsoon flooding, disease, fire, and gang violence. The pandemic has concentrated the already intense restrictions and difficulties of life in these camps.

If you have money, Global Giving’s Rohingya Refugee Relief Fund will get it into the hands of people who can use it.

The Canadian documentary Wandering: A Rohingya Story provides an intimate look at life in Kutupalong, the largest of the refugee camps. It’s beautifully and lovingly made.

Screenshot from the film, Wandering: A Rohingya Story, showing a Rohingya mother smoothing her hands over her daughter's laughing face.

From Wandering: A Rohingya Story. This mother and her daughter destroyed me.

Back in post-coup Myanmar, hundreds of thousands of people are risking their lives resisting the junta’s brutal oppression. Mutual Aid Myanmar is supporting their work. James C. Scott (yes) is on their board.

In the wake of the coup, the National Unity Government—the shadow-government wing of the Burmese resistance against the junta—has officially recognized the wrongs done to the Rohingya, and committed itself to dramatic change, should the resistance prevail:

The National Unity Government recognises the Rohingya people as an integral part of Myanmar and as nationals. We acknowledge with great shame the exclusionary and discriminatory policies, practices, and rhetoric that were long directed against the Rohingya and other religious and ethnic minorities. These words and actions laid the ground for military atrocities, and the impunity that followed them emboldened the military’s leaders to commit countrywide crimes at the helm of an illegal junta.

Acting on our ‘Policy Position on the Rohingya in Rakhine State’, the National Unity Government is committed to creating the conditions needed to bring the Rohingya and other displaced communities home in voluntary, safe, dignified, and sustainable ways.

We are also committed to social change and to the complete overhaul of discriminatory laws in consultation with minority communities and their representatives. A Rohingya leader now serves as Deputy Minister of Human Rights to ensure that Rohingya perspectives support the development of government policies and programs and legislative reform.

From the refugee camps, Rohingya youth activists are working to build solidarity between the Rohingya people and the mostly ethnically Bamar people in Myanmar who, until the coup, allowed themselves to believe the Tatmadaw’s messages casting the Rohingya as existential threats. Others, maybe more understandably, remain wary of the NUG’s claims that the Rohingya will be welcomed back home.


In Part II of this series, I tried to explain—clearly but at speed—how the 2016 and 2017 attacks on the Rohingya, which accelerated into full-scale ethnic cleansing and genocide, began when Rohingya insurgents carried out attacks on Burmese security forces and committed atrocities against civilians, after decades of worsening repression and deprivation in Myanmar’s Rakhine State by the Burmese government.

This week, the social media feed of one of the young Rohingya activists featured in a story I linked above is filled with photographs from Gaza, where two million people live inside fences and walls, and whose hospitals and schools and places of worship are being bombed by the Israeli military because of Hamas’s horrific attacks on Israeli civilians, after decades of worsening repression and deprivation in Gaza and the West Bank by the Israeli government.

We don’t need the internet to make our world a hell. But I don’t think we should forgive ourselves for letting our technology make the world worse.

I want to make our technologies into better tools for the many, many people devoted to building the kinds of human-level solidarity and connection that can get more of us through our present disasters to life on the other side.


Date
13 October 2023