Root & Branch

I took a break after the Meta in Myanmar posts, partly because I was crispy and partly because of heavy family weather. Then, wonderfully, we got a puppy, and so I’ve been “sleeping” like a new parent again, dozy and dazed and bumping into the furniture. The puppy is very good, and the family members have all survived.

But in the peculiar headspace of parents in the hospital and sleep deprivation at home, and the deep heaviness of *gesturing around* events, the things I’ve been writing about all year have been distilling down in my brain.

To recap: Things are weird on the networks. Weird like wyrd, though our fates remain unsettled; weird like wert-, the turning, the winding, the twist.

I think one of the deep weirdnesses is that lots of us know what we don’t want, which is more of whatever thing we’ve been soaking in. But I think many—maybe all?—of the new-school networks with wind in their sails are defined more by what they aren’t than what they are: Not corporate-owned. Not centralized. Not entangled in inescapable surveillance, treadmill algorithms, ad models, billionaire brain injury. In many cases, not governable.

It’s not an untenable beginning, and maybe it’s a necessary phase, like adolescence. But I don’t think it’s sufficient—not if we want to build structures for collaboration and communion instead of blasted landscapes ruled by warlords.

On “growth”

Whenever I write about my desire for safer and better networks that are also widely accessible and inviting, I get comments about how I’m advocating for Mastodon specifically to embrace corporate-style growth at all costs. I think this is a good-faith misunderstanding, so I’ll do something I rarely do now and talk about my life as a person.

In my adolescence and for a while afterward, my ways of relating to people and the world were shaped by the experiences I’d had as a vulnerable, culturally alienated kid—dismissive or cruel authorities, mean kids, all the quotidian ways norms get ground in. My method for self-preservation was shoving back hard enough that most people wouldn’t come around for a second try.

Somewhere in my early 20s, for lots of reasons, something shifted and my eyes rewired. When I looked up again, I could see a lot more of the precariousness and forced competition and material dangers my less-alienated peers had also been dragged through. I started caring a lot more about protection and care, not just for people like me but for the whole messy gaggle.

Personally and professionally, that’s where I’ve stayed. (Prioritizing care and collective survival over allegiance or purity or career impact or whatever simplifies a lot of things, and though I can’t recommend it as a great way to make piles of money, I’ve mostly found ways to make it work.)

So back to growth. To advocate for “growth at all costs” or “growth for its own sake,” I’d need to have an idea of what would be best for Mastodon or for the fediverse or for any other network or protocol. I don’t, because although I like Mastodon a lot for my own social media, I don’t care professionally about any network, platform, or protocol for itself.

I care about any given platform exactly as much as it provides good online living conditions for all the people. Which means accessibility, genuine ease of use, and protective norms and rules that cut against predatory behaviors and refrain from treating human lives as an attack surface. I think the fediverse is a really interesting attempt at some of that, and I’d like to see it get better.

Taproots

The complexities of managing real-world social platforms are real and wickedly difficult and almost impossible to think about at a systems level with rigor and clarity. I burned months of savings and emotional equilibrium working on the Myanmar incident report because the specifics matter so much, and while none of us can know all of them, most of us don’t know most of them, and I think that’s dangerous.

But there are a couple of root-level network decisions that shape everything that sprouts from and branches into that exhilarating/queasy complexity, and although conversations about new networks circle them obsessively, they tend to get stuck in thought-terminating clichés, locked-in frames, and edicts about how people should or would behave, if only human nature would bend to a more rational shape.

I’m a simple person, but I recognize doctrinal disputes when I see them, and I prefer not to.

Two big root-level things that I think we haven’t properly sorted out:

  • Resources: All networks require a whole lot of time and money to run well—the more meticulously run, the more money and time are required, and this is true whether we’re talking about central conglomerates or distributed networks. If we want to avoid even the most obvious bad incentives, where does that money and time come from?
  • Governance: Who—what people, what kind of people, what structures made of people—should we trust to do the heavy, tricky, fraught work of making and keeping our networks good? How should they work? To whom should they be accountable, and how can “accountability” be redeemed from its dissipated state and turned into something with both teeth and discretion?

Answers in the negative, unfortunately, don’t make good final answers: the set of all things that aren’t “billionaire” or “corporation” is…unwieldy, and not everything it contains is of equal use.

It’s my own view that answers that rely either on parsing out good intent (“good people”) or on what I’ve called elsewhere “load-bearing personalities” are fundamentally inadequate. I’ll grant that intent may matter in the abstract, but I’m 40 or 50 years old and have yet to see persuasive evidence that intent is a fixed thing that’s possible to divine with any accuracy for most people most of the time. And I’ve been a load-bearing personality. It’s not a sustainable path. People break or get warped by power or pressure or trauma. Individual commitment, charisma, and good judgment are great contributions to a well-structured group effort, but they shouldn’t be used as rafters and beams.

So that leaves me with the more material side of things: What kinds of systems work best? What kinds of processes? Which incentives? I want to go back to the big twiggy mess that is one specific branch of governance—content moderation—to try to make this more concrete.

As below, so above

With the exception of people out on the thin end of libertarian absolutism, most of us entangled in the communal internet prefer to live within societies governed by norms and laws.

For most of us, I think, there are things we don’t want to see, but aren’t interested in forcibly eliminating from all public forums. For me, that includes most advertising, quotidian forms of crap speech and behavior (terrible uncle stuff, rather than genocidal demagogue stuff—but note that even this is a category that shifts based on the speaker’s level of power and authority), porn, and incurious, status-seeking punditry.

Your list is probably different! But I’m talking about the stuff of mute lists and platform norms, however they’re enforced. (I would, myself, be content with an effective way for this stuff to be labeled and thrown off my timeline.)

Other things, most of us want not to exist at all.

There’s a broad consensus that, for example, CSAM shouldn’t merely be hidden from our feeds, but shouldn’t exist, because its existence is an ongoing crime that enacts continuous harm on its innocent victims and because it perpetuates new child exploitation: it’s immensely damaging in the present and it endangers the future lives of very real children.

What else? Terrorist and Nazi-esque white supremacist recruitment and propaganda material, probably—it meets the criteria of being both damaging in the present and dangerous to the future.

Even these simplest extremes quickly branch into dizzying human complexity in practice—viz. the right-wing demonization of queer and trans existence as grooming, pedophilia, dangerous to children, or the application of anti-terrorism laws to all protest and dissent. But for just a moment, I’ll focus on good-faith applications.

Whom do we trust to define, identify, and remove this stuff?

Which technologies and processes are appropriate for identifying it? What are the risks and trade-offs of those technologies and processes?

What relationships with law enforcement are appropriate?

To whom should the people doing this work be accountable, and how?

To be clear, the fact that plenty of people on the fediverse are happy to trade the industrial-grade trust & safety teams of the big platforms for “literally one random pseudonymous person who vibes okay” says a lot about the platforms we’ve experienced so far. I’m not here to oppose informal and anarchic governance systems and choices! I lean that way myself. But I want to better understand how they—and other extant governance models across the fediverse—work and when they succeed and where they fail, especially at scale, amidst substantial cultural differences, and against professionalized adversaries bent on obliterating trust, distributing harmful material, surveilling dissenters, or disseminating propaganda.

Digging in

I spent a lot of this year trying to understand and write about the current landscape on this site. Now it’s time to work out more sustainable ways to contribute, and I’m pleased to finally be able to say that thanks to support from the Digital Infrastructure Insights Fund, Darius Kazemi (of Hometown and Run your own social) and I will be spending the first half of 2024 working on exactly that research, with the goal of turning everything we learn into public, accessible knowledge for people who build, run, and care about new networks and platforms.

Here’s our project page at DIIF, and Darius’s post, which has a lot more details than this one.

I’ll be writing a lot as we get moving on the work, and I’m looking forward to that with slightly distressing ferocity, probably because I don’t know any better way to hear what I’ve been listening to than to write.


6 December 2023