All This Unmobilized Love

Straight out of undergrad, I applied to a bookselling job and didn’t get it, so I started working in tech. By the end of my first year there, I was convinced that the internet was having and would continue to have a norm-smashing, discombobulating effect on the shape of the world and that, left to the paths it was on, it was going to concentrate social inequalities and extractive corporate behavior. I started volunteering for an early web magazine that focused on web standards and accessibility, because it felt like the best-while-nearest place to put my time to help shape the internet in service of a better world. That turned into a couple of decades of internet stuff, but I steered around the big platforms because I didn’t think their intrinsic incentives would ever let them make healthy, non-extractive things, even if individual employees wanted to.

I still think getting our networks right—or at least making them better and putting them in service to the life and health of the world—is critical to the work required to get more of us safely to the other side of the next ten, twenty, thirty years.

A step back: About 25 years into the social internet era, we’ve seen weirdly little experimentation with social forms at scale. Most of our emerging networks have been driven by the workability of technical forms: chat, forums, feeds, galleries, streaming video, two or three variations on comments, and increasingly powerful (and decreasingly transparent) recommendation engines.

Even most of the emergent gestures in our interfaces are tweaks on tech-first features—@ symbols push Twitter to implement threading, hyperlinks eventually get automated into retweets, quote-tweets go on TikTok and become duets. “Swipe left to discard a person” is one of a handful of new gestures, and it’s ten years old.

If this were only boring, we could ignore it. But by treating a handful of technical capabilities and conventions as the low-level building blocks of social tools, we’re leaving almost everything good about the human experience on the table. Where are the networks that deeply in their bones understand hospitality vs. performance, safe-to vs. safe-from, double-edged visibility, thresholds vs. hearths, gifts vs. barter, bystanders vs. safety-builders, even something as foundational as power differentials? I don’t think we have them, except piecemeal and by chance, or through the grace of socially gifted moderators and community leads who patch bad product design with their own EQ.

I guess it’s not difficult to understand why these big gaps exist: Commercial software is almost entirely funded, hyped, and judged by a system devoted to jackpots that dismisses most of the natural and built world as an externality, while open source software is mostly built and maintained by people who choose to (and can afford to) spend their non-waged time writing code to serve their own kinda nerdy needs.

But it means there’s a ton of room for fruitful and humane exploration. And I don’t think it’s going to come from the usual hyper-financialized tech places—or from the way things have generally been done in open source, either.

The big promise of federated social tools is neither Mastodon (or Calckey or any of the other things I’ve seen yet) nor the single-server Bluesky beta—it’s new things built in new ways that use protocols like AT and ActivityPub to interact with the big world. A couple weeks back on Bluesky, I reposted a thread that nailed my own reasons for being interested in protocols and platforms:

I’m not at all a tech solutionist - I dislike most technology. I think AI is generally anti-human & I think crypto is generally anti-social. The idea of a massive AI labeler that you offload everything to is completely antithetical to my beliefs

My interest & goals w atproto stem from the fact that our current online systems are preventing us from coordinating in human ways. I want to unbundle & recompose these systems so that people can build online spaces for people

Spaces that are a joy to participate in, that feel safe & full of meaning. Spaces that inspire you, that can challenge you in the right ways

Tech doesn’t solve people problems. But it does shape the tools that help us coordinate solving people problems. As a society, we have shitty tools

Our current social networks are broken, but we can remake them with space for humanity

It’s very good! What I didn’t realize at the time is that it’s from Daniel Holmgren, a protocol engineer on the Bluesky team. I think this is the exact right orientation to building the next generation of social tools—not as “Twitter without a billionaire” or “Mastodon but easy to use” but as infrastructure on which existing and nascent communities can build places that are safe and connected, joyful and challenging, wide-ranging and deeply human.

I’m online enough to pause here for the folks who are assembling their responses about why I should never mention Bluesky: I wrote this for you, it’s okay if you hate it; my hopes for Bluesky are that it provides productive competition for the fediverse and eventually becomes a really interesting distributed option. I will also acknowledge the people about to say that AT or ActivityPub or indeed anything federated cannot ever be truly good or safe, etc. Thing is, the centralized systems never belong to good people, and even when we wrestle the amoral jerks into doing a right thing, it’s fragile and temporary. For entirely pragmatic reasons, I don’t believe central corporate authorities can keep us safe when they’ve demonstrated for 25 years that they can’t, won’t, and are incentivized to do otherwise. There is no corporate-walled internet we can trust to rid us of griefers and racists and surveillance systems.

“Banning Nazi servers is insufficient, there should be no Nazi servers,” is a perspective I deeply sympathize with, but unless you bring the entirety of the internet under either corporate or governmental control—and neither of those scenarios has a great record on de-Nazification—the best we can do is exclude bad actors from our spaces.

On the big centralized platforms, that work of exclusion is performed by largely unaccountable entities who by design subordinate safety to their own priorities (growth, engagement), with much of the worst and most traumatizing work offloaded to “offshore” moderators who experience the internet’s most awful depravity at industrial scale.

And again, which things get excluded shifts with the winds of rich men’s opinions and the same terrifying politics that the big platforms enabled for profit. To me, this is a big dead end. But there are other ways, and the most promising to me are the ones that can be managed by and for specific communities while also remaining in contact with the big internet.

Even when it’s draining and difficult, even when it means working under threat of online attacks that turn into offline consequences, lots of us demonstrably want to put our time into things that ease our own fear and dread, repair what’s broken, and build sound knowledge and useful resources. Seeing this in action during the pandemic’s first year changed my understanding of the shape of the world; given ways to do this work for ourselves and each other, we will. But right now, a lot of the entities providing us with those ways to help are for-profit platforms. Some platforms like Reddit and StackExchange (and Facebook Groups) are almost entirely moderated by dedicated volunteer labor, and every big network relies heavily on volunteer peer moderation in the form of (frequently onerous) flagging and reporting. Despite late capitalism, despite all the things we’re going through, the internet already runs on dedicated volunteer labor.

What most of us rarely have time for is the organizational work that runs alongside and on top of technical protocols to enable participatory design processes, build institutional knowledge, and establish the kinds of multi-faceted support and user-accountable governance that produce sustainability. Which is, I think, why so much of our time and love ends up benefiting platforms that extract profits rather than ploughing them back into the soil.

We’re already on the verge of a new generation of protocols and platforms, and it’s my big hope that their builders will focus on clarity and ease of use as these technologies mature. But we also need new generations of user-accountable institutions to realize the potential of new tech tools—which loops back to what I think Holmgren was writing toward on Bluesky.

I think it’s at the institutional and constitutional levels that healthier and more life-enhancing big-world tools and places for community and sociability will emerge—and are already emerging. Over the coming months, I’ll be speaking with and writing about people working on some of the tools and communities that I think help point ways forward—and with people who’ve built fruitful, immediately useful theories and practices about what the networks have dragged us through already and how we escape the traps we’ve been living in.

In the meantime, some things I’m watching extra closely:

(“Unmobilized love” is a callback to one of Mike Davis’s final interviews.)

8 June 2023