Untangling Threads

Back in the fall, I wrote a series of posts on a particularly horrific episode in Meta’s past. I hadn’t planned to revisit the topic immediately, but here we are, with federation between Threads and the ActivityPub-based fediverse ecosystem an increasingly vivid reality.

My own emotional response to Meta and Threads is so intense that it hasn’t been especially easy to think clearly about the risks and benefits Threads’ federation brings, and to whom. So I’m writing through it in search of understanding, and in hopes of planting some helpful trail markers as I go.

What federation with Threads offers

Doing a good-faith walkthrough of what seem to me to be the strongest arguments for federating with Threads has been challenging but useful. I’m focusing on benefits to people who use networks—rather than benefits to, for instance, protocol reputation—because first- and second-order effects on humans are my thing. (Note: If you’re concerned that I’m insufficiently skeptical about Meta, skip down a bit.)

Finding our people

Back in July, I did some informal research with people who’d left Mastodon and landed on Bluesky, and one of the biggest problems those people voiced was their difficulty in finding people they wanted to follow on Mastodon. Sometimes this was because finding people who were there was complicated for both architectural and UI design reasons; sometimes it was because the people they wanted to hang out with just weren’t on Mastodon, and weren’t going to be.

For people with those concerns, Threads federation is a pretty big step toward being able to maintain an account on Mastodon (or another fediverse service) and still find the people they want to interact with—assuming some of those people are on Threads and not only on Bluesky, Twitter/X, Instagram, and all the other non-ActivityPub-powered systems.

On the flipside, Threads federation gives people on Threads the chance to reconnect with people who left commercial social media for the fediverse—and, if they get disgusted with Meta, to migrate much more easily to a noncommercial, non-surveillance-based network. I’ve written a whole post about the ways in which Mastodon migration specifically is deeply imperfect, but I’m happy to stipulate that it’s meaningfully better than nothing.

I’ll say a little more about common responses to these arguments about the primary benefits of federation with Threads later on, but first, I want to run through some risks and ethical quandaries.

The Threads federation conversations that I’ve seen so far mostly focus on:

  • Meta’s likelihood of destroying the fediverse via “embrace-extend-extinguish,”
  • Meta’s ability to get hold of pre-Threads fediverse (I’ll call it Small Fedi for convenience) users’ data,
  • Threads’ likelihood of fumbling content moderation, and
  • the correct weighting of Meta being terrible vs. connecting with people who use Threads.

These are all useful things to think about, and they’re already being widely discussed, so I’m going to move quickly over that terrain except where I think I can offer detail not discussed as much elsewhere. (The EEE argument I’m going to pass over entirely because it’s functioning in a different arena from my work and it’s already being exhaustively debated elsewhere.)

Unfolding the risk surface

(Image: a few panels from Jo Nakashima’s excellent origami unicorn instructions. The resulting unicorn isn’t Blade Runner-accurate, but it’s all made from a single sheet of paper, which I find satisfying.)

The risks I’ll cover in the rest of this post fall into three categories:

  1. My understanding of who and what Meta is
  2. The open and covert attack vectors that Meta services routinely host
  3. The ethics of contribution to and complicity with Meta’s wider projects

I want to deal with these in order, because the specifics of the first point will, I hope, clarify why I resist generalizing Threads federation conversations to “federating with any commercial or large-scale service.”

Who Meta is

The list of “controversies” Meta’s caused since its founding is long and gruesome, and there are plenty of summaries floating around. I spent several months this year researching and writing about just one episode in the company’s recent history because I find that deep, specific knowledge combined with broader summary helps me make much better decisions than summary alone.

Here’s the tl;dr of what I learned about Meta’s adventures in Myanmar.

Beginning around 2013, Facebook spent years ignoring desperate warnings from experts in Myanmar and around the world, kept its foot on the algorithmic accelerator, and played what the UN called a “determining role” in the genocide of the Rohingya people. A genocide which included mass rape and sexual mutilation, the maiming and murder of thousands of civilians including children and babies, large-scale forced displacement, and torture.

I wrote so much about Meta in Myanmar because I think it’s a common misconception that Meta just kinda didn’t handle content moderation well. What Meta’s leadership actually did was so multifaceted, callous, and avaricious that it was honestly difficult for even me to believe:

Combine all those factors with Meta leadership’s allergy to learning anything suggesting that they should do less-profitable, more considered things to save lives, and you get a machine that monopolized internet connectivity for millions and then flooded Myanmar’s nascent internet with algorithmically accelerated, dehumanizing, violence-inciting messages and rumors (both authentic and farmed) that successfully demonized an ethnicity and left them without meaningful support when Myanmar’s military finally enacted their campaign of genocide.

As of last year—ten years after warnings began to appear in Myanmar and about six years since the peak of the genocide—Meta was still accepting genocidal anti-Rohingya ads, the content of which was actually taken directly from widely studied documents from the United Nations Independent International Fact-Finding Mission on Myanmar, examining the communications that led to the genocide. Meta continues to accept extreme disinformation as advertising all over the world, in contradiction to its own published policies and statements, as Global Witness keeps demonstrating.

I’d be remiss if I failed to mention that according to whistleblower Sophie Zhang’s detailed disclosures, Meta—which is, I want to emphasize, the largest social media company in the world—repeatedly diverted resources away from rooting out fake-page and fake-account networks run by oppressive governments and political parties around the world, including those targeting activists and journalists for imprisonment and murder, while claiming otherwise in public.

Meta’s lead for Threads, Adam Mosseri, was head of Facebook’s News Feed and Interfaces departments during that long, warning-heavy lead-up to the genocide of the Rohingya in Myanmar. After the worst of the violence was over, Mosseri noted on a podcast that he’d lost some sleep over it.

In 2018, shortly after that podcast, Mosseri was made the new head of Instagram, from whence he comes to Threads—which, under his leadership, hosts accounts like bomb-threat generating money-making scheme Libs of TikTok and Steve Bannon’s dictatorship fancast, War Room.

Knowing the details of these events—most of which I couldn’t even fit into the very long posts I published—makes it impossible for me to cheerfully accept Meta’s latest attempt to permeate the last few contested spaces on the social internet, because touching their products makes me feel physically ill. 

It’s the difference, maybe, between understanding “plastic pollution” in the abstract vs. having spent pointless hours sifting bucketfuls of microplastics out of the sand of my home coast’s heartbreakingly beautiful and irreparably damaged beaches.

My personal revulsion isn’t an argument, and the vast majority of people who see a link to the Myanmar research won’t ever read it—or Amnesty’s reporting on Meta’s contribution to targeted ethnic violence against Tigrayan people in Ethiopia, or Sophie Zhang’s hair-raising disclosures about Meta’s lack of interest in stopping global covert influence operations, or Human Rights Watch on Meta’s current censorship practices in the Israel-Palestine war.

Nevertheless, I hope it becomes increasingly clear why the line, for some of us, isn’t about “non-commercial” or “non-algorithmic,” but about Meta’s specific record of bloody horrors, and their absolute unwillingness to enact genuinely effective measures to prevent future political manipulation and individual suffering and loss on a global scale.

Less emotionally, I think it’s unwise to assume that an organization that has…

  • demonstrably and continuously made antisocial and sometimes deadly choices on behalf of billions of human beings and
  • allowed its products to be weaponized by covert state-level operations behind multiple genocides and hundreds (thousands? tens of thousands?) of smaller persecutions, all while
  • ducking meaningful oversight,
  • lying about what they do and know, and
  • treating their core extraction machines as fait-accompli inevitabilities that mustn’t be governed except in patently ineffective ways…

…will be a good citizen after adopting a new, interoperable technical structure.

Attack vectors (open)

Some of the attack vectors Threads hosts are open and obvious, but let’s talk about them anyway.

Modern commercial social networks have provided affordances that both enable and reward the kind of targeted public harassment campaigns associated with multi-platform culture-war harassment nodes like Libs of TikTok, which have refined earlier internet mob justice episodes into a sustainable business model.

These harassment nodes work pretty simply:

  1. Use crowdsourced surveillance juiced by fast social media search to find a target, like a children’s hospital, a schoolteacher, a librarian, or a healthcare worker. (To give you a sense of scale, Libs of TikTok named and targeted two hundred and twenty-two individual employees of schools or education organizations in just the first four months of 2022.)
  2. Use social media to publicize decontextualized statements from targeted individuals, doctored video, lies about policies and actions, and dehumanizing statements calling targeted individuals and groups evil cult members who groom children for sexual abuse, etc.
  3. Sit back while violence-inciting posts, right-wing media appearances, lots and lots of bomb threats, and Substack paychecks roll in, good teachers’ and librarians’ lives get absolutely wrecked, and anti-trans, anti-queer legislation explodes nationally.
  4. Repeat.

As I noted above, Threads currently hosts Libs of TikTok, along with plenty of other culture-war grifters devoted to hunting down private individuals talking about their lives and work and using them to do what Meta calls “real-world harm.”

Maybe none of those vicious assholes will notice that they’re now federating with a network known as a haven for thousands of LGBTQ+ people, anarchists, dissidents, furries, and other people the harassment machines love to target as ragebait.

And maybe none of the harassment nodes will notice that Mastodon is also used by very real predators and CSAM-distributors—the ones mainstream fedi servers defederate from en masse—and use that fact to further mislead their froth-mouthed volunteer harassment corps about the dangers posed by trans and queer people on the fediverse.

Maybe none of that will happen! But if I were in the sights of operators like those, I’d want to get as far from Threads as possible. And I’d take assertions that people who don’t want to federate with Threads are all irrational losers as useful revelations about the character of the people making them.

Attack vectors (covert)

I’ve written a lot about the ways in which I think the fediverse is currently unprepared to deal with the kinds of sophisticated harms Meta currently allows to thrive, and sometimes directly funds. Please forgive me for quoting myself for my own convenience:

I think it’s easy to imagine that these heavy-duty threats focus only on the big, centralized services, but an in-depth analysis of just one operation, Secondary Infektion, shows that it operated across at least 300 websites and platforms ranging from Facebook, Reddit, and YouTube (and WordPress, Medium, and Quora) to literally hundreds of other sites and forums.

The idea that no one would make the effort to actually conduct high-effort, resource-intensive information operations across smaller social platforms remains common, but is absolutely false. We’ve seen it happen already, and we’ll see it again, and I’d be shocked if next-generation large-language models weren’t already supercharging those campaigns by reducing required effort.

To believe it can’t or won’t happen on fedi—and that Threads won’t accelerate it by providing easy on-ramps and raising the profile of the fediverse more generally—seems naive at best.

Unfortunately, this isn’t something that simply suspending or blocking Threads will fix. I don’t think any action server admins take is going to prevent it from happening, but I do think the next twelve to eighteen months are a critical moment for building cross-server—and cross-platform—alliances for identifying and rooting out whatever influence networks fedi administrators and existing tooling can detect. (Especially but not only given the explosive potential of the upcoming US Presidential election and, thanks to US hegemony, its disproportionate effect on the rest of the world.)

On the pragmatic side, small-scale fedi would benefit hugely from the kind of training and knowledge about these operations that big commercial platforms possess, so if I were a fedi admin who felt fine about working with Meta, those are the kinds of requests I would be making of my probably quite nice new friends in terrible places.

How much do you want to help Meta?

Meta’s business model centers on owning the dominant forums for online human connection in most of the world and using that dominant position to construct dense webs of data that clients at every level of society will pay a lot of money for so that they can efficiently target their ad/influence campaigns.

Amnesty International has an exceptionally trenchant breakdown of the human-rights damage done by both Meta and Google’s globe-circling surveillance operations in English, French, and Spanish, and I think everyone should read it. In the meantime, I think it’s useful to remember that no matter how harmful the unintended effects of these corporations’ operations—and they’ve been immensely harmful—their corporate intent is to dominate markets and make a lot of money via ad/influence campaigns. Everything else is collateral damage.

(I’m going to blow past Meta’s ability to surveil its users—and non-users—outside of its own products for now, because I don’t have the time to get into it, but it’s still pretty gruesome.)

As moral actors, I think we should reckon with that damage—and fight to force Meta and Google to reckon with it as well—but when we look ahead to things like Threads’ next moves, I think we should keep market domination and behavior-targeted ad and influence campaigns foremost in our minds.

With those assumptions on the table, I want to think for a moment about what happens when posts from, say, Mastodon hit Threads.

Right off the bat, when Threads users follow someone who posts from Mastodon, those Masto-originating posts are going to show up in Threads users’ feeds, which are currently populated with the assistance of Meta’s usual opaque algorithmic machinery.

I find it difficult to imagine a world in which Mastodon posts federated into Threads don’t provide content against which Meta can run ads—and, less simplistically, a world in which Threads’ users’ interactions with Mastodon posts don’t provide behavioral signals that allow Meta to offer their clients more fine-grained targeting data.

Threads isn’t yet running ads-qua-ads, but it launched with a preloaded fleet of “brands” and the promise of being a nice, un-heated space for conversation—which is to say, an explicitly brand-friendly environment. (So far, this has meant no to butts and no to searching for long covid info and yes to accounts devoted to stochastic anti-LGBT terrorism for profit, so perhaps that’s a useful measure of what brands consider safe and neutral.) Perhaps there’s a world in which Threads doesn’t accept ads, but I have difficulty seeing it.

So I think that leaves us with a few things to consider, and I think it’s worth teasing out several entangled but distinct framings at work in our ways of thinking about modern social networks and our complicity in their actions.

Tacit endorsement

A lot of arguments about consumer choice position choice as a form of endorsement. In this framing, having an account on a bad network implies agreement with and maybe even complicity in that network’s leadership and their actions. It boils down to endorsement by association. This comes up a lot in arguments about why people should leave Twitter.

In some formulations of this perspective, having an account on Mastodon and federating with Threads implies no endorsement of Meta’s services; in other formulations, any interconnection with Threads does imply a kind of agreement. I’ll walk through two more ways of looking at these questions that might help reveal the assumptions underlying those opposing conclusions.

Indirect (ad-based) financial support

There’s also a framing of consumer choice as opting into—or out of—being part of the attention merchants’ inventory. (This is the logic of boycotts.)

In this framing, maintaining an active account on a bad network benefits the network’s leaders directly by letting the network profit by selling our attention on to increasingly sophisticated advertising machines meant to influence purchases, political positions, and many kinds of sentiment and belief.

In the conversations I’ve seen, this framing is mostly used to argue that it’s bad to use Meta services directly, but ethically sound to federate with Threads, because doing so doesn’t benefit Meta financially. I think that’s somewhere between a shaky guess and a misapprehension, and there’s a third way to frame our participation in social networks that helps illuminate why.

Social networking as labor

There’s a third perspective that frames what we do within social networks—posting, reading, interacting—as labor. I think it’s reasonably well understood on, say, Mastodon, that networks like X’s and Meta’s rely on people doing that work without really noticing it, as a side effect of trying to [connect with friends or network our way out of permanent precarity or keep up with world events or enjoy celebrity drama or whatever].

What I don’t think we’ve grappled with is the implications of sending our labor out beyond the current small, largely ferociously noncommercial version of the fediverse and into machinery like Meta’s—where that labor becomes, in the most boneheaded formulation, something to smash between ads for flimsy bras and stupid trucks and $30 eco-friendly ear swabs, and in more sophisticated formulations, a way to squeeze yet more behavioral data out of Threads users to sell onward to advertisers.

(Just knowing which Threads users are savvy and interested enough to seek out Mastodon accounts to follow feels like a useful signal for both internal Meta work and for various kinds of advertisers.)

Okay so what

When I started trying to talk about some of the likely technical behaviors we’ll see when Mastodon posts show up inside Threads—by which I mean “they will be part of the ad machine and they will be distributed in Meta’s usual algorithmic ways”—I got a lot of responses that focused on the second framing (financial support via ad impressions). Essentially, most of these responses went, “If we’re not seeing ads ourselves, what’s the problem?”

An honest but ungenerous response is that I don’t want to contribute my labor to the cause of helping Meta wring more ~value from its users—many of whom have no meaningful alternatives—because those users are also human beings just like me and they deserve better networks just as much as I do.

A better response is that when we sit down to figure out what we want from our server administrators and what we’re going to do as individuals, it’s useful to acknowledge externalities as well as direct effects on “our” networks, because ignoring externalities (aka “other people somewhere else”) is precisely how we got the worst parts of our current moment.

On pragmatism

Compared to the social internet as a whole, the existing fediverse is disproportionately populated by people who are demonstrably willing to put their various principles above online connection to friends, family members, and others who aren’t on fedi. That’s not intrinsically good or bad—I think it’s both, in different situations—but it shapes the conversation about trade-offs.

If you think anyone who uses Threads is unredeemable, you’re probably not going to have much sympathy for people who miss their friends who use Threads. More broadly, people who feel lonely on fedi because they can’t find people they care about get characterized as lazy, vapid dopamine addicts in a lot of Mastodon conversations.

I’m not particularly interested in judging anyone’s feelings about this stuff—I am myself all over the record stating that Meta is a human-rights disaster run by callous, venal people who shouldn’t hold any kind of power. But I do believe that survivable futures require that we all have access to better ways to be together online, so I always hope for broad empathy in fedi product design.

As for me—I’m much more of a pragmatist than I was twenty years ago, or even five years ago. And I have first-hand experience with having my labor and the labor of many others toward a meaningful public good—in my case, a volunteer-assembled set of official public data points tracking the covid pandemic—used by terrible people.

When the Trump White House—which had suggested suppressing case counts by stopping testing—used our work in a piece of propaganda about how well they were handling the pandemic, I spent days feeling too physically ill to eat.

Nevertheless, I judged that the importance of keeping the data open and flowing and well contextualized was much greater than the downside of having it used poorly in service of an awful government, because we kept hearing first-hand reports that our work was saving human lives.

The experience was clarifying.

Until now, I haven’t involved myself much in discussions about how Mastodon or other fedi servers should react to Threads’ arrival, mostly because I think the right answer should be really different for different communities, my own revulsion notwithstanding.

For people whose core offline communities are stuck with Meta services, for example—and globally, that is a shit-ton of people—I think there are absolutely reasonable arguments for opening lines of communication with Threads despite Meta’s radioactivity.

For other people, the ethical trade-offs won’t be worth it.

For others still, a set of specific risks that federation with Threads opens up will not only make blocking the domain an obvious choice, but potentially also curtail previously enjoyed liberties across the non-Threads fediverse.

Here’s my point: Everyone makes trade-offs. For some people, the benefits of Threads federation are worth dealing with—or overlooking—Meta’s stomach-churning awfulness. But I do think there are human costs to conflating considered pragmatism with a lack of careful, step-by-step thought.

Practicalities

That was the whole of my sermon.

Here are some things to think about if you’re a fedi user trying to work out what to do, what questions to ask your server admins, and how to manage your own risk.

Ask your admins about policy enforcement

I think this is probably a good time for people who are concerned about federation with Threads to look through their server’s documentation and then ask their administrators about the server’s Threads-federation plan if that isn’t clear in the docs, along with things like…

  • …if the plan is to wait and see, what are the kinds of triggers that would lead to suspension?
  • …how will they handle Threads’ failure to moderate, say, anti-trans posts differently from the way they would handle a Mastodon server’s similar failure?
  • …how will they manage and adjudicate their users’ competing needs, including desires to connect with a specific cultural or geographical community that’s currently stuck on Meta (either by choice or by fiat) vs. concerns about Threads’ choice to host cross-platform harassment operators?

I don’t think the answers to these questions are going to be—or should be—the same for every server on the fediverse. I personally think Meta’s machinery is so implicated in genocide and a million lesser harms that it should be trapped inside a circle of salt forever, but even I recognize that there are billions of people around the world who have no other social internet available. These are the trade-offs.

I also think the first two questions in particular will seem easy to answer honestly, but in reality, they won’t be, because Threads is so big that the perceived costs of defederation will, for many or even most fedi admins, outweigh the benefits of booting a server that protects predators and bad actors.

I would note that most mainstream fedi servers maintain policies that at least claim to ban (open) harassment or hateful content based on gender, gender identity, race or ethnicity, or sexual orientation. On this count, I’d argue that Threads already fails the first principle of the Mastodon Server Covenant:

Active moderation against racism, sexism, homophobia and transphobia
Users must have the confidence that they are joining a safe space, free from white supremacy, anti-semitism and transphobia of other platforms.

Don’t take my word for this failure. Twenty-four civil rights, digital justice and pro-democracy organizations delivered an open letter last summer on Threads’ immediate content moderation…challenges:

…we are observing neo-Nazi rhetoric, election lies, COVID and climate change denialism, and more toxicity. They posted bigoted slurs, election denial, COVID-19 conspiracies, targeted harassment of and denial of trans individuals’ existence, misogyny, and more. Much of the content remains on Threads indicating both gaps in Meta’s Terms of Service and in its enforcement, unsurprising given your long history of inadequate rules and inconsistent enforcement across other Meta properties.

Rather than strengthen your policies, Threads has taken actions doing the opposite, by purposefully not extending Instagram’s fact-checking program to the platform and capitulating to bad actors, and by removing a policy to warn users when they are attempting to follow a serial misinformer. Without clear guardrails against future incitement of violence, it is unclear if Meta is prepared to protect users from high-profile purveyors of election disinformation who violate the platform’s written policies.

For me, there is no ideal world that includes Meta, as the company currently exists. But in the most ideal available world, I think other fedi services would adopt—and publicly announce—a range of policies for dealing with Threads, including their answers to questions like the ones above.

Domain blocking and its limits

If you don’t want to federate with Threads, the obvious solution is to block the whole domain or find yourself a home server that plans to suspend it. Unfortunately, as I’ve learned through personal experience, suspensions and blocks aren’t a completely watertight solution.

In my own case, a few months ago someone from one of the most widely-suspended servers in the fediverse picked one of my more innocuous posts that had been boosted into his view and clowned in my replies in the usual way despite the fact that my home server had long since suspended the troll’s server.

So not only was my post being passed around on a server that should never have seen it, I couldn’t see the resulting trolling—but others whose servers federated with the troll server could. Now imagine that instead of some random edgelord, it had been part of the Libs of TikTok harassment sphere, invisibly-to-me using my post to gin up a brigading swarm. Good times!

Discussions about this loophole have been happening for much longer than I’ve been active on fedi, but every technical conversation I’ve seen about this on Mastodon rapidly reaches such an extreme level of “No, that’s all incorrect, it depends on how each server is configured and which implementations are in play in the following fifty-two ways,” that I’m not going to attempt a technical summary here.

Brook Miles has written about this in greater detail in the context of the “AUTHORIZED_FETCH” Mastodon configuration option—see the “Example—Boosting” section for more.

If your personal threat model is centered on not being annoyed by visible taunting, this loophole doesn’t really matter. But if you or your community have had to contend with large-scale online attacks or distributed offline threats, boosting your posts to servers your server has already suspended—and then making any ensuing threats invisible to you while leaving them visible to other attackers—is more dangerous than just showing them.

Will this be a significant attack vector from Threads, specifically? I don’t know! I know that people who work on both Mastodon and ActivityPub are aware of the problem, but I don’t have any sense of how long it would take for the loophole to be closed in a way that would prevent posts from being boosted around server suspensions and reaching Threads.

In the meantime, I think the nearest thing to reasonably sturdy protection for people on fedi who have good reason to worry about the risk surface Threads federation opens up is probably to either…

  • block Threads and post followers-only or local-only, for fedi services that support it (see the sketch after this list), or
  • operate from a server that federates only with servers that also refuse to federate with Threads—which is a system already controversial within the fediverse because allowlists are less technically open than denylists.
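For the first of those options, here’s a minimal sketch of what the per-account steps could look like through Mastodon’s standard client API, written in Python with the requests library. The instance URL and access token are placeholders, and I’m assuming (as is currently the case) that Threads federates from the threads.net domain; everything here can also be done through the regular web UI.

    import requests

    # Placeholder values: substitute your own server and an access token
    # created under Settings > Development (or issued to your client app).
    INSTANCE = "https://example.social"
    TOKEN = "YOUR_ACCESS_TOKEN"
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    # 1. Personally block the domain Threads federates from.
    resp = requests.post(
        f"{INSTANCE}/api/v1/domain_blocks",
        headers=HEADERS,
        data={"domain": "threads.net"},
    )
    resp.raise_for_status()

    # 2. Make followers-only ("private," in the API's vocabulary) the
    #    default visibility for new posts from this account.
    resp = requests.patch(
        f"{INSTANCE}/api/v1/accounts/update_credentials",
        headers=HEADERS,
        data={"source[privacy]": "private"},
    )
    resp.raise_for_status()

(Local-only posting isn’t something vanilla Mastodon exposes, as far as I know; it’s a feature of forks like Hometown, which is why it isn’t in the sketch.)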

A note on individual domain blocking

Earlier this week, I got curious about how individual blocks and server suspensions interact, and my fedi research collaborator Darius Kazemi generously ran some tests using servers he runs to confirm our understanding of the way Mastodon users can block domains.

These informal tests show that:

  1. if you individually block a domain, your block will persist even if your server admin suspends and then un-suspends the domain you blocked, and
  2. this is the case whether you block using the “block domain” UI in Mastodon (or Hometown) or upload a domain blocklist using the import tools in Settings.

The upload method also allows you to block a domain even if your home server’s admin has already suspended that domain. And this method—belt plus suspenders, essentially—should provide you with a persistent domain block even if your home server’s admins later change their policy and un-suspend Threads (or any other server that concerns you).

(If any of the above is wrong, it’s my fault, not Darius’s.)
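If you want to reproduce a rough version of that check yourself, the same client API will list your personal domain blocks, so you can confirm a block is still in place after your admin changes a server-level suspension. As above, this is just a sketch; the instance URL and token are placeholders rather than anything specific to our test setup.

    import requests

    INSTANCE = "https://example.social"   # placeholder server
    TOKEN = "YOUR_ACCESS_TOKEN"           # placeholder token

    resp = requests.get(
        f"{INSTANCE}/api/v1/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()

    # The response is a plain JSON list of blocked domain names; a personal
    # block should keep appearing here regardless of what happens to the
    # server-level suspension.
    for domain in resp.json():
        print(domain)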

Human/feeling

This last part is difficult, but I’m keeping it in because it’s true and it’s something I’m wrestling with.

It’s been a wild week or so watching people who I thought hated centralized social networks because of the harm they do giddily celebrating the entry into the fediverse of a vast, surveillance-centric social media conglomerate credibly accused of enabling targeted persecution and mass murder.

Rationally, I understand that the adoption of an indieweb protocol by the biggest social media company in the world feels super exciting to many fedi developers and advocates. And given the tiny scraps of resources a lot of those people have worked with for years on end, it probably feels like a much-needed push toward some kind of financial stability.

What I cannot make sense of is the belief that any particular implementation of open networking is such an obvious, uncomplicated, and overwhelming good that it’s sensible and good to completely set aside the horrors in Meta’s past and present to celebrate exciting internet milestones.

I don’t think the people who are genuinely psyched about Threads on fedi are monsters or fascists, and I don’t think those kinds of characterizations—which show up a lot in my replies—are helping. And I understand that our theories of change just don’t overlap as much as I’d initially hoped.

But for me, knowing what I do about the hundreds of opportunities to reduce actual-dead-kids harm that Meta has repeatedly and explicitly turned down, the most triumphant announcements feel like a celebration on a mass grave.

Other contexts and voices

21 December 2023

Root & Branch

I took a break after the Meta in Myanmar posts, partly because I was crispy and partly because of heavy family weather. Then, wonderfully, we got a puppy, and so I’ve been “sleeping” like a new parent again, dozy and dazed and bumping into the furniture. The puppy is very good, and the family members have all survived.

But in the peculiar headspace of parents in the hospital and sleep deprivation at home, and the deep heaviness of *gesturing around* events, the things I’ve been writing about all year have been distilling down in my brain.

To recap: Things are weird on the networks. Weird like wyrd, though our fates remain unsettled; weird like wert-, the turning, the winding, the twist.

I think one of the deep weirdnesses is that lots of us know what we don’t want, which is more of whatever thing we’ve been soaking in. But I think many—maybe all?—of the new-school networks with wind in their sails are defined more by what they aren’t than what they are: Not corporate-owned. Not centralized. Not entangled in inescapable surveillance, treadmill algorithms, ad models, billionaire brain injury. In many cases, not governable.

It’s not an untenable beginning, and maybe it’s a necessary phase, like adolescence. But I don’t think it’s sufficient—not if we want to build structures for collaboration and communion instead of blasted landscapes ruled by warlords.

On “growth”

Whenever I write about my desire for safer and better networks that are also widely accessible and inviting, I get comments about how I’m advocating for Mastodon specifically to embrace corporate-style growth at all costs. I think this is a good-faith misunderstanding, so I’ll do something I rarely do now and talk about my life as a person.

In my adolescence and for a while afterward, my ways of relating to people and the world were shaped by the experiences I’d had as a vulnerable, culturally alienated kid—dismissive or cruel authorities, mean kids, all the quotidian ways norms get ground in. My method for self-preservation was shoving back hard enough that most people wouldn’t come around for a second try.

Somewhere in my early 20s, for lots of reasons, something shifted and my eyes rewired. When I looked up again, I could see a lot more of the precariousness and forced competition and material dangers my less-alienated peers had also been dragged through. I started caring a lot more about protection and care, not just for people like me but for the whole messy gaggle.

Personally and professionally, that’s where I’ve stayed. (Prioritizing care and collective survival over allegiance or purity or career impact or whatever simplifies a lot of things, and though I can’t recommend it as a great way to make piles of money, I’ve mostly found ways to make it work.)

So back to growth. To advocate for “growth at all costs” or “growth for its own sake,” I’d need to have an idea of what would be best for Mastodon or for the fediverse or for any other network or protocol. I don’t, because although I like Mastodon a lot for my own social media, I don’t care professionally about any network, platform, or protocol for itself.

I care about any given platform exactly as much as it provides good online living conditions for all the people. Which means accessibility, genuine ease of use, and protective norms and rules that cut against predatory behaviors and refrain from treating human lives as an attack surface. I think the fediverse is a really interesting attempt at some of that, and I’d like to see it get better.

Taproots

The complexities of managing real-world social platforms are real and wickedly difficult and almost impossible to think about at a systems level with rigor and clarity. The reason I burned months of savings and emotional equilibrium working on the Myanmar incident report was that the specifics matter so much, and while none of us can know all of them, most of us don’t know most of them, and I think that’s dangerous.

But there are a couple of root-level network decisions that shape everything that sprouts from and branches into that exhilarating/queasy complexity, and although conversations about new networks circle them obsessively, those conversations tend to get stuck in thought-terminating clichés, locked-in frames, and edicts about how people should or would behave, if only human nature would bend to a more rational shape.

I’m a simple person, but I recognize doctrinal disputes when I see them, and I prefer not to.

Two big root-level things that I think we haven’t properly sorted out:

  • Resources: All networks require a whole lot of time and money to run well—the more meticulously run, the more money and time are required, and this is true whether we’re talking about central conglomerates or distributed networks. If we want to avoid just the most obvious bad incentives, where do that money and time come from?
  • Governance: Who—what people, what kind of people, what structures made of people—should we trust to do the heavy, tricky, fraught work of making and keeping our networks good? How should they work? To whom should they be accountable, and how can “accountability” be redeemed from its dissipated state and turned into something with both teeth and discretion?

Answers in the negative, unfortunately, don’t make good final answers: the set of all things that aren’t “billionaire” or “corporation” is…unwieldy, and not everything it contains is of equal use.

It’s my own view that answers that rely either on parsing out good intent (“good people”) or on what I’ve called elsewhere “load-bearing personalities” are fundamentally inadequate. I’ll grant that intent may matter in the abstract, but I’m 40 or 50 years old and have yet to see persuasive evidence that intent is a fixed thing that’s possible to divine with any accuracy for most people most of the time. And I’ve been a load-bearing personality. It’s not a sustainable path. People break or get warped by power or pressure or trauma. Individual commitment or charisma or good judgment are great contributions to a well-structured group effort, but they shouldn’t be used as rafters and beams.

So that leaves me with the more material side of things: What kinds of systems work best? What kinds of processes? Which incentives? I want to go back to the big twiggy mess that is one specific branch of governance—content moderation—to try to make this more concrete.

As below, so above

With the exception of people out on the thin end of libertarian absolutism, most of us entangled in the communal internet prefer to live within societies governed by norms and laws.

For most of us, I think, there are things we don’t want to see, but aren’t interested in forcibly eliminating from all public forums. For me, that includes most advertising, quotidian forms of crap speech and behavior (terrible uncle stuff, rather than genocidal demagogue stuff—but note that even this is a category that shifts based on the speaker’s level of power and authority), porn, and incurious, status-seeking punditry.

Your list is probably different! But I’m talking about the stuff of mute lists and platform norms, however they’re enforced. (I would, myself, be content with an effective way for this stuff to be labeled and thrown off my timeline.)

Other things, most of us want not to exist at all.

There’s a broad consensus that, for example, CSAM shouldn’t merely be hidden from our feeds, but shouldn’t exist, because its existence is an ongoing crime that enacts continuous harm on its innocent victims and because it perpetuates new child exploitation: it’s immensely damaging in the present and it endangers the future lives of very real children.

What else? Terrorist and Nazi-esque white supremacist recruitment and propaganda material, probably—it meets the criteria of being both damaging in the present and dangerous to the future.

Even these simplest extremes quickly branch into dizzying human complexity in practice—viz the right-wing demonization of queer and trans existence as grooming, pedophilia, dangerous to children, or the application of anti-terrorism laws to all protest and dissent. But for just a moment, I’ll focus on good-faith applications.

Whom do we trust to define, identify, and remove this stuff?

Which technologies and processes are appropriate for identifying it? What are the risks and trade-offs of those technologies and processes?

What relationships with law enforcement are appropriate?

To whom should the people doing this work be accountable, and how?

To be clear, the fact that plenty of people on the fediverse are happy to trade the industrial-grade trust & safety teams of the big platforms for “literally one random pseudonymous person who vibes okay” says a lot about the platforms we’ve experienced so far. I’m not here to oppose informal and anarchic governance systems and choices! I lean that way myself. But I want to better understand how they—and other extant governance models across the fediverse—work and when they succeed and where they fail, especially at scale, amidst substantial cultural differences, and against professionalized adversaries bent on obliterating trust, distributing harmful material, surveilling dissenters, or disseminating propaganda.

Digging in

I spent a lot of this year trying to understand and write about the current landscape on this site. Now it’s time to work out more sustainable ways to contribute, and I’m pleased to finally be able to say that thanks to support from the Digital Infrastructure Insights Fund, Darius Kazemi (of Hometown and Run your own social) and I will be spending the first half of 2024 working on exactly that research, with the goal of turning everything we learn into public, accessible knowledge for people who build, run, and care about new networks and platforms.

Here’s our project page at DIIF, and Darius’s post, which has a lot more details than this one.

I’ll be writing a lot as we get moving on the work, and I’m looking forward to that with slightly distressing ferocity, probably because I don’t know any better way to hear what I’ve been listening to than to write.

6 December 2023

Meta in Myanmar (full series)

Between July and October of this year, I did a lot of reading and writing about the role of Meta and Facebook—and the internet more broadly—in the genocide of the Rohingya people in Myanmar. The posts below are what emerged from that work.

The format is a bit idiosyncratic, but what I’ve tried to produce here is ultimately a longform cultural-technical incident report. It’s written for people working on and thinking about (and using and wrestling with) new social networks and systems. I’m a big believer in each person contributing in ways that accord with their own skills. I’m a writer and researcher and community nerd, rather than a developer, so this is my contribution.

More than anything, I hope it helps.

Meta in Myanmar, Part I: The Setup (September 28, 2023, 10,900 words)

Myanmar got the internet late and all at once, and mostly via Meta. A brisk pass through Myanmar’s early experience coming online and all the benefits—and, increasingly, troubles—connectivity brought, especially to the Rohingya ethnic minority, which was targeted by massive, highly organized hate campaigns.

Something I didn’t know going in is how many people warned Meta, and in how much detail, and for how many years. This post captures as many of those warnings as I could fit in.

Meta in Myanmar, Part II: The Crisis (September 30, 2023, 10,200 words)

Instead of heeding the warnings that continued to pour in from Myanmar, Meta doubled down on connectivity—and rolled out a program that razed Myanmar’s online news ecosystem and replaced it with inflammatory clickbait. What happened after that was the worst thing that people can do to one another.

Also: more of the details of the total collapse of content moderation and the systematic gaming of algorithmic acceleration to boost violence-inciting and genocidal messages.

Meta in Myanmar, Part III: The Inside View (October 6, 2023, 12,500 words)

Using whistleblower disclosures and interviews, this post looks at what Meta knew (so much) and when (for a long time) and how they handled inbound information suggesting that Facebook was being used to do harm (they shoved it to the margins).

This post introduces an element of the Myanmar tragedy that turns out to have echoes all over the planet, which is the coordinated covert influence campaigns that have both secretly and openly parasitized Facebook to wreak havoc.

I also get into a specific and I think illustrative way that Meta continues to deceive politicians and media organizations about their terrible content moderation performance, and look at their record in Myanmar in the years after the Rohingya genocide.

Meta in Myanmar, Part IV: Only Connect (October 13, 2023, 8,600 words)

Starting with the recommendations of Burmese civil-society organizations and individuals plus the concerns of trust and safety practitioners who’ve studied large-scale hate campaigns and influence operations, I look at a handful of the threats that I think cross over from centralized platforms to rapidly growing new-school decentralized and federated networks like Mastodon/the fediverse and Bluesky—in potentially very dangerous ways.

It may be tempting to take this last substantial piece as the one to read if you don’t have much time, but I would recommend picking literally any of the others instead—my concluding remarks here are not intended to stand alone.

Meta Meta (September 28, 2023, 2,000 words)

I also wrote a short post about my approach, language, citations, and corrections. That brings the total word count to about 44,000.

Acknowledgements

Above all, all my thanks go to the people of the Myanmar Internet Project and its constituent organizations.

Thanks additionally to the various individuals on the backchannel whom I won’t name but hugely appreciate, to Adrianna Tan and Dr. Fancypants, Esq., to all the folks on Mastodon who helped me find answers to questions, and to the many people who wrote in with thoughts, corrections, and dozens of typos. All mistakes are extremely mine.

Many thanks also to the friends and strangers who helped me find information, asked about the work, read it, and helped it find readers in the world. Writing and publishing something like this as an independent writer and researcher is weird and challenging, especially in a moment when our networks are in disarray and lots of us are just trying to figure out where our next job will come from.

Without your help, this would have just disappeared, and I’m grateful to every person who reads it and/or passes it along.

“Thanks” is a deeply inadequate thing to say to my partner, Peter Richardson, who read multiple drafts of everything and supported me through some challenging days in my 40,000-words-in-two-weeks publishing schedule, and especially the months of fairly ghastly work that preceded it. But as ever, thank you, Peter.

16 October 2023