Untangling Threads

Back in the fall, I wrote a series of posts on a particularly horrific episode in Meta’s past. I hadn’t planned to revisit the topic immediately, but here we are, with federation between Threads and the ActivityPub-based fediverse ecosystem an increasingly vivid reality.

My own emotional response to Meta and Threads is so intense that it hasn’t been especially easy to think clearly about the risks and benefits Threads’ federation brings, and to whom. So I’m writing through it in search of understanding, and in hopes of planting some helpful trail markers as I go.

What federation with Threads offers

Doing a good-faith walkthrough of what seem to me to be the strongest arguments for federating with Threads has been challenging but useful. I’m focusing on benefits to people who use networks—rather than benefits to, for instance, protocol reputation—because first- and second-order effects on humans are my thing. (Note: If you’re concerned that I’m insufficiently skeptical about Meta, skip down a bit.)

Finding our people

Back in July, I did some informal research with people who’d left Mastodon and landed on Bluesky, and one of the biggest problems those people voiced was their difficulty in finding people they wanted to follow on Mastodon. Sometimes this was because finding people who were there was complicated for both architectural and UI design reasons; sometimes it was because the people they wanted to hang out with just weren’t on Mastodon, and weren’t going to be.

For people with those concerns, Threads federation is a pretty big step toward being able to maintain an account on Mastodon (or another fediverse service) and still find the people they want to interact with—assuming some of those people are on Threads and not only on Bluesky, Twitter/X, Instagram, and all the other non-ActivityPub-powered systems.

On the flipside, Threads federation gives people on Threads the chance to reconnect with people who left commercial social media for the fediverse—and, if they get disgusted with Meta, to migrate much more easily to a noncommercial, non-surveillance-based network. I’ve written a whole post about the ways in which Mastodon migration specifically is deeply imperfect, but I’m happy to stipulate that it’s meaningfully better than nothing.

I’ll say a little more about common responses to these arguments about the primary benefits of federation with Threads later on, but first, I want to run through some risks and ethical quandaries.

The Threads federation conversations that I’ve seen so far mostly focus on:

  • Meta’s likelihood of destroying the fediverse via “embrace-extend-extinguish,”
  • Meta’s ability to get hold of the data of users of the pre-Threads fediverse (I’ll call it Small Fedi for convenience),
  • Threads’ likelihood of fumbling content moderation, and
  • the correct weighting of Meta being terrible vs. connecting with people who use Threads.

These are all useful things to think about, and they’re already being widely discussed, so I’m going to move quickly over that terrain except where I think I can offer detail not discussed as much elsewhere. (The EEE argument I’m going to pass over entirely because it’s functioning in a different arena from my work and it’s already being exhaustively debated elsewhere.)

Unfolding the risk surface

(Image: a few panels from Jo Nakashima’s excellent origami unicorn instructions. The resulting unicorn isn’t Blade Runner-accurate, but it’s all made from a single sheet of paper, which I find satisfying.)

The risks I’ll cover in the rest of this post fall into three categories:

  1. My understanding of who and what Meta is
  2. The open and covert attack vectors that Meta services routinely host
  3. The ethics of contribution to and complicity with Meta’s wider projects

I want to deal with these in order, because the specifics of the first point will, I hope, clarify why I resist generalizing Threads federation conversations to federating with any commercial or large-scale service.

Who Meta is

The list of “controversies” Meta’s caused since its founding is long and gruesome, and there are plenty of summaries floating around. I spent several months this year researching and writing about just one episode in the company’s recent history because I find that deep, specific knowledge combined with broader summary helps me make much better decisions than summary alone.

Here’s the tl;dr of what I learned about Meta’s adventures in Myanmar.

Beginning around 2013, Facebook spent years ignoring desperate warnings from experts in Myanmar and around the world, kept its foot on the algorithmic accelerator, and played what the UN called a “determining role” in the genocide of the Rohingya people. A genocide which included mass rape and sexual mutilation, the maiming and murder of thousands of civilians including children and babies, large-scale forced displacement, and torture.

I wrote so much about Meta in Myanmar because I think it’s a common misconception that Meta just kinda didn’t handle content moderation well. What Meta’s leadership actually did was so multifaceted, callous, and avaricious that it was honestly difficult for even me to believe:

Combine all those factors with Meta leadership’s allergy to learning anything suggesting that they should do less-profitable, more considered things to save lives, and you get a machine that monopolized internet connectivity for millions and then flooded Myanmar’s nascent internet with algorithmically accelerated, dehumanizing, violence-inciting messages and rumors (both authentic and farmed) that successfully demonized an ethnicity and left them without meaningful support when Myanmar’s military finally enacted their campaign of genocide.

As of last year—ten years after warnings began to appear in Myanmar and about six years since the peak of the genocide—Meta was still accepting genocidal anti-Rohingya ads whose content was taken directly from widely studied documents produced by the United Nations Independent International Fact-Finding Mission on Myanmar, which examined the communications that led to the genocide. Meta continues to accept extreme disinformation as advertising all over the world, in contradiction to its own published policies and statements, as Global Witness keeps demonstrating.

I’d be remiss if I failed to mention that according to whistleblower Sophie Zhang’s detailed disclosures, Meta—which is, I want to emphasize, the largest social media company in the world—repeatedly diverted resources away from rooting out fake-page and fake-account networks run by oppressive governments and political parties around the world, including those targeting activists and journalists for imprisonment and murder, while claiming otherwise in public.

Meta’s lead for Threads, Adam Mosseri, was head of Facebook’s News Feed and Interfaces departments during that long, warning-heavy lead-up to the genocide of the Rohingya in Myanmar. After the worst of the violence was over, Mosseri noted on a podcast that he’d lost some sleep over it.

In 2018, shortly after that podcast, Mosseri was made the new head of Instagram, from whence he comes to Threads—which, under his leadership, hosts accounts like the bomb-threat-generating money-making scheme Libs of TikTok and Steve Bannon’s dictatorship fancast, War Room.

Knowing the details of these events—most of which I couldn’t even fit into the very long posts I published—makes it impossible for me to cheerfully accept Meta’s latest attempt to permeate the last few contested spaces on the social internet, because touching their products makes me feel physically ill. 

It’s the difference, maybe, between understanding “plastic pollution” in the abstract vs. having spent pointless hours sifting bucketfuls of microplastics out of the sand of my home coast’s heartbreakingly beautiful and irreparably damaged beaches.

My personal revulsion isn’t an argument, and the vast majority of people who see a link to the Myanmar research won’t ever read it—or Amnesty’s reporting on Meta’s contribution to targeted ethnic violence against Tigrayan people in Ethiopia, or Sophie Zhang’s hair-raising disclosures about Meta’s lack of interest in stopping global covert influence operations, or Human Rights Watch on Meta’s current censorship practices in the Israel-Palestine war.

Nevertheless, I hope it becomes increasingly clear why the line, for some of us, isn’t about “non-commercial” or “non-algorithmic,” but about Meta’s specific record of bloody horrors, and their absolute unwillingness to enact genuinely effective measures to prevent future political manipulation and individual suffering and loss on a global scale.

Less emotionally, I think it’s unwise to assume that an organization that has…

  • demonstrably and continuously made antisocial and sometimes deadly choices on behalf of billions of human beings and
  • allowed its products to be weaponized by covert state-level operations behind multiple genocides and hundreds (thousands? tens of thousands?) of smaller persecutions, all while
  • ducking meaningful oversight,
  • lying about what they do and know, and
  • treating their core extraction machines as fait-accompli inevitabilities that mustn’t be governed except in patently ineffective ways…

…will be a good citizen after adopting a new, interoperable technical structure.

Attack vectors (open)

Some of the attack vectors Threads hosts are open and obvious, but let’s talk about them anyway.

Modern commercial social networks have provided affordances that both enable and reward the kind of targeted public harassment campaigns associated with multi-platform culture-war harassment nodes like Libs of TikTok, which have refined earlier internet mob-justice episodes into a sustainable business model.

These harassment nodes work pretty simply:

  1. Use crowdsourced surveillance juiced by fast social media search to find a target, like a children’s hospital, a schoolteacher, a librarian, or a healthcare worker. (To give you a sense of scale, Libs of TikTok named and targeted two hundred and twenty-two individual employees of schools or education organizations in just the first four months of 2022.)
  2. Use social media to publicize decontextualized statements from targeted individuals, doctored video, lies about policies and actions, and dehumanizing statements calling targeted individuals and groups evil cult members who groom children for sexual abuse, etc.
  3. Sit back while violence-inciting posts, right-wing media appearances, lots and lots of bomb threats, and Substack paychecks roll in, good teachers’ and librarians’ lives get absolutely wrecked, and anti-trans, anti-queer legislation explodes nationally.
  4. Repeat.

As I noted above, Threads currently hosts Libs of TikTok, along with plenty of other culture-war grifters devoted to hunting down private individuals talking about their lives and work and using them to do what Meta calls “real-world harm.”

Maybe none of those vicious assholes will notice that they’re now federating with a network known as a haven for thousands of LGBTQ+ people, anarchists, dissidents, furries, and other people the harassment machines love to target as ragebait.

And maybe none of the harassment nodes will notice that Mastodon is also used by very real predators and CSAM-distributors—the ones mainstream fedi servers defederate from en masse—and use that fact to further mislead their froth-mouthed volunteer harassment corps about the dangers posed by trans and queer people on the fediverse.

Maybe none of that will happen! But if I were in the sights of operators like those, I’d want to get as far from Threads as possible. And I’d take assertions that people who don’t want to federate with Threads are all irrational losers as useful revelations about the character of the people making them.

Attack vectors (covert)

I’ve written a lot about the ways in which I think the fediverse is currently unprepared to deal with the kinds of sophisticated harms Meta currently allows to thrive, and sometimes directly funds. Please forgive me for quoting myself for my own convenience:

I think it’s easy to imagine that these heavy-duty threats focus only on the big, centralized services, but an in-depth analysis of just one operation, Secondary Infektion, shows that it operated across at least 300 websites and platforms ranging from Facebook, Reddit, and YouTube (and WordPress, Medium, and Quora) to literally hundreds of other sites and forums.

The idea that no one would make the effort to actually conduct high-effort, resource-intensive information operations across smaller social platforms remains common, but is absolutely false. We’ve seen it happen already, and we’ll see it again, and I’d be shocked if next-generation large-language models weren’t already supercharging those campaigns by reducing required effort.

To believe it can’t or won’t happen on fedi—and that Threads won’t accelerate it by providing easy on-ramps and raising the profile of the fediverse more generally—seems naive at best.

Unfortunately, this isn’t something that simply suspending or blocking Threads will fix. I don’t think any action server admins take is going to prevent it from happening, but I do think the next twelve to eighteen months are a critical moment for building cross-server—and cross-platform—alliances for identifying and rooting out whatever influence networks fedi administrators and existing tooling can detect. (Especially but not only given the explosive potential of the upcoming US Presidential election and, thanks to US hegemony, its disproportionate effect on the rest of the world.)

On the pragmatic side, small-scale fedi would benefit hugely from the kind of training and knowledge about these operations that big commercial platforms possess, so if I were a fedi admin who felt fine about working with Meta, those are the kinds of requests I would be making of my probably quite nice new friends in terrible places.

How much do you want to help Meta?

Meta’s business model centers on owning the dominant forums for online human connection in most of the world and using that dominant position to construct dense webs of data that clients at every level of society will pay a lot of money for so that they can efficiently target their ad/influence campaigns.

Amnesty International has an exceptionally trenchant breakdown of the human-rights damage done by both Meta’s and Google’s globe-circling surveillance operations in English, French, and Spanish, and I think everyone should read it. In the meantime, I think it’s useful to remember that no matter how harmful the unintended effects of these corporations’ operations—and they’ve been immensely harmful—their corporate intent is to dominate markets and make a lot of money via ad/influence campaigns. Everything else is collateral damage.

(I’m going to blow past Meta’s ability to surveil its users—and non-users—outside of its own products for now, because I don’t have the time to get into it, but it’s still pretty gruesome.)

As moral actors, I think we should reckon with that damage—and fight to force Meta and Google to reckon with it as well—but when we look ahead to things like Threads’ next moves, I think we should keep market domination and behavior-targeted ad and influence campaigns foremost in our minds.

With those assumptions on the table, I want to think for a moment about what happens when posts from, say, Mastodon hit Threads.

Right off the bat, when Threads users follow someone who posts from Mastodon, those Masto-originating posts are going to show up in Threads users’ feeds, which are currently populated with the assistance of Meta’s usual opaque algorithmic machinery.

I find it difficult to imagine a world in which Mastodon posts federated into Threads don’t provide content against which Meta can run ads—and, less simplistically, a world in which Threads’ users’ interactions with Mastodon posts don’t provide behavioral signals that allow Meta to offer their clients more fine-grained targeting data.

Threads isn’t yet running ads-qua-ads, but it launched with a preloaded fleet of “brands” and the promise of being a nice, un-heated space for conversation—which is to say, an explicitly brand-friendly environment. (So far, this has meant no to butts and no to searching for long covid info and yes to accounts devoted to stochastic anti-LGBT terrorism for profit, so perhaps that’s a useful measure of what brands consider safe and neutral.) Perhaps there’s a world in which Threads doesn’t accept ads, but I have difficulty seeing it.

So I think that leaves us with a few things to consider, and I think it’s worth teasing out several entangled but distinct framings at work in our ways of thinking about modern social networks and our complicity in their actions.

Tacit endorsement

A lot of arguments about consumer choice position choice as a form of endorsement. In this framing, having an account on a bad network implies agreement with and maybe even complicity in that network’s leadership and their actions. It boils down to endorsement by association. This comes up a lot in arguments about why people should leave Twitter.

In some formulations of this perspective, having an account on Mastodon and federating with Threads implies no endorsement of Meta’s services; in other formulations, any interconnection with Threads does imply a kind of agreement. I’ll walk through two more ways of looking at these questions that might help reveal the assumptions underlying those opposing conclusions.

Indirect (ad-based) financial support

There’s also a framing of consumer choice as opting into—or out of—being part of the attention merchants’ inventory. (This is the logic of boycotts.)

In this framing, maintaining an active account on a bad network benefits the network’s leaders directly by letting the network profit by selling our attention on to increasingly sophisticated advertising machines meant to influence purchases, political positions, and many kinds of sentiment and belief.

In the conversations I’ve seen, this framing is mostly used to argue that it’s bad to use Meta services directly, but ethically sound to federate with Threads, because doing so doesn’t benefit Meta financially. I think that’s somewhere between a shaky guess and a misapprehension, and there’s a third way to frame our participation in social networks that helps illuminate why.

Social networking as labor

There’s a third perspective that frames what we do within social networks—posting, reading, interacting—as labor. I think it’s reasonably well understood on, say, Mastodon, that networks like X’s and Meta’s rely on people doing that work without really noticing it, as a side effect of trying to [connect with friends or network our way out of permanent precarity or keep up with world events or enjoy celebrity drama or whatever].

What I don’t think we’ve grappled with is the implications of sending our labor out beyond the current small, largely ferociously noncommercial version of the fediverse and into machinery like Meta’s—where that labor becomes, in the most boneheaded formulation, something to smash between ads for flimsy bras and stupid trucks and $30 eco-friendly ear swabs, and in more sophisticated formulations, a way to squeeze yet more behavioral data out of Threads users to sell onward to advertisers.

(Just knowing which Threads users are savvy and interested enough to seek out Mastodon accounts to follow feels like a useful signal for both internal Meta work and for various kinds of advertisers.)

Okay so what

When I started trying to talk about some of the likely technical behaviors we’ll see when Mastodon posts show up inside Threads—by which I mean “they will be part of the ad machine and they will be distributed in Meta’s usual algorithmic ways”—I got a lot of responses that focused on the second framing (financial support via ad impressions). Essentially, most of these responses went, “If we’re not seeing ads ourselves, what’s the problem?”

An honest but ungenerous response is that I don’t want to contribute my labor to the cause of helping Meta wring more ~value from its users—many of whom have no meaningful alternatives—because those users are also human beings just like me and they deserve better networks just as much as I do.

A better response is that when we sit down to figure out what we want from our server administrators and what we’re going to do as individuals, it’s useful to acknowledge externalities as well as direct effects on “our” networks, because ignoring externalities (aka “other people somewhere else”) is precisely how we got the worst parts of our current moment.

On pragmatism

Compared to the social internet as a whole, the existing fediverse is disproportionately populated by people who are demonstrably willing to put their various principles above online connection to friends, family members, and others who aren’t on fedi. That’s not intrinsically good or bad—I think it’s both, in different situations—but it shapes the conversation about trade-offs.

If you think anyone who uses Threads is unredeemable, you’re probably not going to have much sympathy for people who miss their friends who use Threads. More broadly, people who feel lonely on fedi because they can’t find people they care about get characterized as lazy, vapid dopamine addicts in a lot of Mastodon conversations.

I’m not particularly interested in judging anyone’s feelings about this stuff—I am myself all over the record stating that Meta is a human-rights disaster run by callous, venal people who shouldn’t hold any kind of power. But I do believe that survivable futures require that we all have access to better ways to be together online, so I always hope for broad empathy in fedi product design.

As for me—I’m much more of a pragmatist than I was twenty years ago, or even five years ago. And I have first-hand experience with having my labor and the labor of many others toward a meaningful public good—in my case, a volunteer-assembled set of official public data points tracking the covid pandemic—used by terrible people.

When the Trump White House—which had suggested suppressing case counts by stopping testing—used our work in a piece of propaganda about how well they were handling the pandemic, I spent days feeling too physically ill to eat.

Nevertheless, I judged that the importance of keeping the data open and flowing and well contextualized was much greater than the downside of having it used poorly in service of an awful government, because we kept hearing first-hand reports that our work was saving human lives.

The experience was clarifying.

Until now, I haven’t involved myself much in discussions about how Mastodon or other fedi servers should react to Threads’ arrival, mostly because I think the right answer should be really different for different communities, my own revulsion notwithstanding.

For people whose core offline communities are stuck with Meta services, for example—and globally, that is a shit-ton of people—I think there are absolutely reasonable arguments for opening lines of communication with Threads despite Meta’s radioactivity.

For other people, the ethical trade-offs won’t be worth it.

For others still, a set of specific risks that federation with Threads opens up will not only make blocking the domain an obvious choice, but potentially also curtail previously enjoyed liberties across the non-Threads fediverse.

Here’s my point: Everyone makes trade-offs. For some people, the benefits of Threads federation are worth dealing with—or overlooking—Meta’s stomach-churning awfulness. But I do think there are human costs to conflating considered pragmatism with a lack of careful, step-by-step thought.

Practicalities

That was the whole of my sermon.

Here are some things to think about if you’re a fedi user trying to work out what to do, what questions to ask your server admins, and how to manage your own risk.

Ask your admins about policy enforcement

I think this is probably a good time for people who are concerned about federation with Threads to look through their server’s documentation and then ask their administrators about the server’s Threads-federation plan if that isn’t clear in the docs, along with things like…

  • …if the plan is to wait and see, what are the kinds of triggers that would lead to suspension?
  • …how will they handle Threads’ failure to moderate, say, anti-trans posts differently from the way they would handle a Mastodon server’s similar failure?
  • …how will they manage and adjudicate their users’ competing needs, including desires to connect with a specific cultural or geographical community that’s currently stuck on Meta (either by choice or by fiat) vs. concerns about Threads’ choice to host cross-platform harassment operators?

I don’t think the answers to these questions are going to be—or should be—the same for every server on the fediverse. I personally think Meta’s machinery is so implicated in genocide and a million lesser harms that it should be trapped inside a circle of salt forever, but even I recognize that there are billions of people around the world who have no other social internet available. These are the trade-offs.

I also think the first two questions in particular will seem easy to answer honestly, but in reality, they won’t be, because Threads is so big that the perceived costs of defederation will, for many or even most fedi admins, outweigh the benefits of booting a server that protects predators and bad actors.

I would note that most mainstream fedi servers maintain policies that at least claim to ban (open) harassment or hateful content based on gender, gender identity, race or ethnicity, or sexual orientation. On this count, I’d argue that Threads already fails the first principle of the Mastodon Server Covenant:

Active moderation against racism, sexism, homophobia and transphobia

Users must have the confidence that they are joining a safe space, free from white supremacy, anti-semitism and transphobia of other platforms.

Don’t take my word for this failure. Twenty-four civil rights, digital justice and pro-democracy organizations delivered an open letter last summer on Threads’ immediate content moderation…challenges:

…we are observing neo-Nazi rhetoric, election lies, COVID and climate change denialism, and more toxicity. They posted bigoted slurs, election denial, COVID-19 conspiracies, targeted harassment of and denial of trans individuals’ existence, misogyny, and more. Much of the content remains on Threads indicating both gaps in Meta’s Terms of Service and in its enforcement, unsurprising given your long history of inadequate rules and inconsistent enforcement across other Meta properties.

Rather than strengthen your policies, Threads has taken actions doing the opposite, by purposefully not extending Instagram’s fact-checking program to the platform and capitulating to bad actors, and by removing a policy to warn users when they are attempting to follow a serial misinformer. Without clear guardrails against future incitement of violence, it is unclear if Meta is prepared to protect users from high-profile purveyors of election disinformation who violate the platform’s written policies.

For me, there is no ideal world that includes Meta, as the company currently exists. But in the most ideal available world, I think other fedi services would adopt—and publicly announce—a range of policies for dealing with Threads, including their answers to questions like the ones above.

Domain blocking and its limits

If you don’t want to federate with Threads, the obvious solution is to block the whole domain or find yourself a home server that plans to suspend it. Unfortunately, as I’ve learned through personal experience, suspensions and blocks aren’t a completely watertight solution.

In my own case, a few months ago someone from one of the most widely-suspended servers in the fediverse picked one of my more innocuous posts that had been boosted into his view and clowned in my replies in the usual way despite the fact that my home server had long since suspended the troll’s server.

So not only was my post being passed around on a server that should never have seen it, I couldn’t see the resulting trolling—but others whose servers federated with the troll server could. Now imagine that instead of some random edgelord, it had been part of the Libs of TikTok harassment sphere, invisibly-to-me using my post to gin up a brigading swarm. Good times!

Discussions about this loophole have been happening for much longer than I’ve been active on fedi, but every technical conversation I’ve seen about this on Mastodon rapidly reaches such an extreme level of “No, that’s all incorrect, it depends on how each server is configured and which implementations are in play in the following fifty-two ways,” that I’m not going to attempt a technical summary here.

Brook Miles has written about this in greater detail in the context of the “AUTHORIZED_FETCH” Mastodon configuration option—see the “Example—Boosting” section for more.
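(For the admins in the audience: my understanding, from Brook’s post and the Mastodon docs, is that turning this on is a single setting in a standard Mastodon deployment, at the cost of signature checks on every fetch. A minimal sketch, assuming a stock .env.production:)

    # .env.production: enables Mastodon's "secure mode."
    # Unauthenticated requests can no longer fetch your posts, and
    # fetch requests signed by suspended domains can be refused.
    # This narrows the boost-around-suspensions loophole described
    # above, but doesn't close it entirely.
    AUTHORIZED_FETCH=true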

If your personal threat model is centered on not being annoyed by visible taunting, this loophole doesn’t really matter. But if you or your community have had to contend with large-scale online attacks or distributed offline threats, boosting your posts to servers your server has already suspended—and then making any ensuing threats invisible to you while leaving them visible to other attackers—is more dangerous than just showing them.

Will this be a significant attack vector from Threads, specifically? I don’t know! I know that people who work on both Mastodon and ActivityPub are aware of the problem, but I don’t have any sense of how long it would take for the loophole to be closed in a way that would prevent posts from being boosted around server suspensions and reaching Threads.

In the meantime, I think the nearest thing to reasonably sturdy protection for people on fedi who have good reason to worry about the risk surface Threads federation opens up is probably to either…

  • block Threads and post followers-only or local-only, for fedi services that support it, or
  • operate from a server that federates only with servers that also refuse to federate with Threads—which is a system already controversial within the fediverse because allowlists are less technically open than denylists. (There’s a quick sketch of the relevant Mastodon setting just below.)
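For what it’s worth, Mastodon already ships the machinery for that second option: as I understand it, an admin can switch a server to allowlist-only federation with a single setting (what the docs call limited federation mode, formerly whitelist mode), after which only explicitly approved domains can federate with it. A minimal sketch:

    # .env.production: limited federation mode (formerly "whitelist mode").
    # The server federates ONLY with domains an admin has explicitly
    # allowed; everything else, Threads included, is cut off by default.
    LIMITED_FEDERATION_MODE=true

Whether the resulting enclave is worth the isolation is exactly the controversy I mentioned above.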

A note on individual domain blocking

Earlier this week, I got curious about how individual blocks and server suspensions interact, and my fedi research collaborator Darius Kazemi generously ran some tests using servers he runs to confirm our understanding of the way Mastodon users can block domains.

These informal tests show that:

  1. if you individually block a domain, your block will persist even if your server admin suspends and then un-suspends the domain you blocked, and
  2. this is the case whether you block using the “block domain” UI in Mastodon (or Hometown) or upload a domain blocklist using the import tools in Settings.

The upload method also allows you to block a domain even if your home server’s admin has already suspended that domain. And this method—belt plus suspenders, essentially—should provide you with a persistent domain block even if your home server’s admins later change their policy and un-suspend Threads (or any other server that concerns you).

(If any of the above is wrong, it’s my fault, not Darius’s.)
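If you want to try the upload route yourself, the file format is, as far as I can tell, as simple as it gets: a plain-text list with one domain per line, uploaded in your settings under Import with the domain-blocking list type selected. Something like:

    threads.net
    some-other-server-you-distrust.example

(That second line is a made-up placeholder; threads.net is the domain Threads actually federates from.)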

Human/feeling

This last part is difficult, but I’m keeping it in because it’s true and it’s something I’m wrestling with.

It’s been a wild week or so watching people who I thought hated centralized social networks because of the harm they do giddily celebrating the entry into the fediverse of a vast, surveillance-centric social media conglomerate credibly accused of enabling targeted persecution and mass murder.

Rationally, I understand that the adoption of an indieweb protocol by the biggest social media company in the world feels super exciting to many fedi developers and advocates. And given the tiny scraps of resources a lot of those people have worked with for years on end, it probably feels like a much-needed push toward some kind of financial stability.

What I cannot make sense of is the belief that any particular implementation of open networking is such an obvious, uncomplicated, and overwhelming good that it’s sensible and good to completely set aside the horrors in Meta’s past and present to celebrate exciting internet milestones.

I don’t think the people who are genuinely psyched about Threads on fedi are monsters or fascists, and I don’t think those kinds of characterizations—which show up a lot in my replies—are helping. And I understand that our theories of change just don’t overlap as much as I’d initially hoped.

But for me, knowing what I do about the hundreds of opportunities to reduce actual-dead-kids harm that Meta has repeatedly and explicitly turned down, the most triumphant announcements feel like a celebration on a mass grave.

Other contexts and voices


21 December 2023