Agreed, and a very good point. "Visible to people on allow-list servers" is very much along the lines of local-only posts ("visible to people only on this server"). I think of it as "scoped" visibility, although "leashed" or "moored" might well be a better term.
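(For the curious, here's roughly what that could look like in code -- a minimal sketch, not any existing implementation; the Visibility names and can_view helper are hypothetical.)

```python
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"      # visible to anyone
    LOCAL_ONLY = "local"   # visible only to people on this server
    SCOPED = "scoped"      # visible only to people on allow-listed servers

def can_view(post_visibility: Visibility, viewer_domain: str,
             home_domain: str, allow_list: set[str]) -> bool:
    """Decide whether a viewer on viewer_domain may see a post."""
    if post_visibility is Visibility.PUBLIC:
        return True
    if post_visibility is Visibility.LOCAL_ONLY:
        return viewer_domain == home_domain
    # SCOPED: local viewers, plus anyone on an allow-listed server
    return viewer_domain == home_domain or viewer_domain in allow_list
```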
Exactly. There's a core disagreement about whether making a public post means consenting to it being used for all purposes (witness the multiple battles about consent-based search), but relatively few people are confused about whether bad actors will use it without consent.
A very interesting idea! Actually it seems to me there are two interesting ideas here:
- endorsements. Something like this (whether it's from feeler servers or other sources) is clearly needed to make consent-based federation scale. IndieWeb's Vouch protocol and the "letters of introduction" Erin Shepherd discusses in "A better moderation system is possible for the social web" are similar approaches. You could also imagine building endorsement logic on top of an instance catalog like Fediseer (or The Bad Space) or infrastructure like FIRES. (There's a rough sketch of this after the list.)
- restricting visibility of a boost to servers the original post is federated with. This is something that's long overdue in the fediverse! Akkoma's bubble is a somewhat-similar concept; Bonfire's boundaries might well support this. (Also sketched below.)
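To make the endorsement idea concrete, here's a minimal sketch of what admit-by-vouching logic could look like. It's not Fediseer's or FIRES's actual API -- the function name and data shapes are hypothetical, assuming the endorsement data has already been fetched from somewhere.

```python
def should_federate(candidate_domain: str,
                    trusted_endorsers: set[str],
                    endorsements: dict[str, set[str]],
                    min_endorsements: int = 2) -> bool:
    """Admit a new server only if enough already-trusted servers vouch for it.

    endorsements maps a domain to the set of domains endorsing it; in
    practice that data might come from feeler servers, an instance catalog,
    or "letters of introduction".
    """
    vouchers = endorsements.get(candidate_domain, set()) & trusted_endorsers
    return len(vouchers) >= min_endorsements

# e.g. should_federate("new.example",
#                      trusted_endorsers={"a.example", "b.example"},
#                      endorsements={"new.example": {"a.example", "b.example"}})
# -> True
```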
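And for the second idea, a sketch of the delivery rule: instead of fanning a boost out to every follower's server, intersect with the servers the original post already reaches. Again, the names and data shapes here are hypothetical.

```python
def boost_delivery_targets(follower_domains: set[str],
                           original_federated_with: set[str]) -> set[str]:
    """Deliver a boost only to servers the original post already reaches.

    Today a boost typically fans out to all of the booster's followers'
    servers; intersecting with the original post's audience is the
    long-overdue restriction described above.
    """
    return follower_domains & original_federated_with
```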
Yep, totally agree!
Or, using Gab provides a sense of what's possible.
And child porn is a great example -- and CSAM more generally. Today's fediverse would have less CSAM if the CSAM instances weren't on it. Why hasn't that happened? The reason many instances give for not blocking the instances that are well-known sources of CSAM is that CSAM isn't the only thing on those instances. And it's true: these instances have lots of people talking about all kinds of things, and only a relatively small number of people spreading CSAM. So not blocking them is completely in alignment with the Big Fedi views Evan articulates: everybody (even CSAM-spreaders) should have an account, and it's more important to have the good (non-CSAM) people on the fediverse than to keep the bad (CSAM-spreading) people off.
A different view is that whoa, even a relatively small number of people spreading CSAM is way too many; today's fediverse would be better if they weren't on it, and if the instances that allow CSAM are providing a haven for them, then those instances shouldn't be on the fediverse either. It seems to me that view would result in less CSAM on the fediverse, which I see as a good thing.
Me: "fedi would be better with fewer Nazis and fascists"
sj_zero: "these pieces are deeply authoritarian"
I agree that small doesn't equal safer; in other articles I've quoted Mekka as saying that for many Black Twitter users there's more racism and more Nazis on the fediverse than on Twitter. And I agree that better tools will be good. The question is whether, with current tools, growth under the principles of Big Fedi leads to more or less safety. Evan assumes that safety can be maintained: "There may be some bad people too, but we'll manage them." Given that the tools aren't sufficient to manage the bad people today, that seems like an unrealistic assumption to me.
And yes, there are ways to keep these people off the fediverse (although they're not perfect). Gab isn't on the fediverse today because everybody defederated it. OANN isn't on the fediverse today because everybody threatened to defederate the instance that (briefly) hosted them, and as a result the instance decided to enforce their terms of service. There's a difference between Evan's position that he wants them to have accounts on the fediverse, and the alternate view that we don't want them to have accounts on the fediverse (although we may not always be able to prevent it).
It's a good comment, thanks for sharing it here! On the bolded part, yes, it's possible to do polls on Mastodon ... it could be very interesting to do a series around these questions. But of course a lot depends on who's doing the poll. Evan for example has blocked a lot of people -- which is fine, there is nothing the matter with blocking people, but it skews the poll results. And a lot depends on how the poll questions are phrased. Still, it's a good idea and I'll think about whether there's a sensible way to do it.
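(For anyone who wants to experiment: here's a minimal sketch of posting a poll through Mastodon's REST API, assuming you have an access token with the write:statuses scope. The instance URL and token are placeholders.)

```python
import requests

INSTANCE = "https://mastodon.example"  # placeholder instance URL
TOKEN = "YOUR_ACCESS_TOKEN"            # needs the write:statuses scope

def post_poll(question: str, options: list[str], days: int = 3) -> dict:
    """Create a status with an attached poll via POST /api/v1/statuses."""
    form = [("status", question),
            ("poll[expires_in]", str(days * 24 * 60 * 60))]  # poll length, in seconds
    form += [("poll[options][]", opt) for opt in options]    # stock Mastodon allows 2-4 options
    resp = requests.post(f"{INSTANCE}/api/v1/statuses",
                         headers={"Authorization": f"Bearer {TOKEN}"},
                         data=form)
    resp.raise_for_status()
    return resp.json()

# e.g. post_poll("Should instances block servers that federate with Threads?",
#                ["Yes", "No", "It depends"], days=7)
```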
I agree that some of what Evan characterized as Small Fedi isn't about small for small's sake; it's more about the view you describe -- what L. Rhodes calls "networked communities". Of course, the consequences of this view lead to slower growth than the Big Fedi approach, and so a smaller network in the short-to-medium term, so from his perspective I can see why he chose this framing.
And from the comment:
Can the Big Fedi people connect with everyone they want to, while the Small Fedi folk keep their comfortable distance and protect their safe spaces?
Yes, I think a schism's likely to happen -- "Meta's fediverse", the instances that federate with Threads, will be more attractive to Big Fedi people, and the "free fediverses" that don't federate with Threads (or other surveillance capitalism companies) will be more attractive to people who don't buy into the bigger-is-better view.
Indeed, there are lots of people like that already on the fediverse, and blocking entire instances is a blunt but powerful tool that well-moderated fediverse instances currently rely on for protection. Today, people on instances whose admins and moderators don't block instances with multiple badly-behaving people have to deal with a lot more harassment and hate speech than people on instances that do. So we'll certainly see a situation where some instances block Threads and others don't. The open question, though, is how many instances will decide to also block instances that federate with Threads -- just as many instances decided to block instances that federated with Gab.
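(As an aside on mechanics: on Mastodon 4.0+, admins can manage these blocks through the admin API as well as the web UI. A minimal sketch, with placeholder instance and token:)

```python
import requests

INSTANCE = "https://mastodon.example"  # placeholder instance URL
ADMIN_TOKEN = "YOUR_ADMIN_TOKEN"       # needs the admin:write:domain_blocks scope

def block_domain(domain: str, severity: str = "suspend") -> dict:
    """Block an entire remote instance via POST /api/v1/admin/domain_blocks.

    severity can be "silence" (limit) or "suspend" (full defederation).
    """
    resp = requests.post(f"{INSTANCE}/api/v1/admin/domain_blocks",
                         headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
                         data={"domain": domain,
                               "severity": severity,
                               "public_comment": "multiple badly-behaving accounts"})
    resp.raise_for_status()
    return resp.json()
```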
It's not that he wants the fediverse to be unsafe. It's more that the Big Fedi beliefs he describes for the fediverse -- everybody having an account there (which by definition includes Nazis, anti-trans hate groups, etc.), relying on the same kind of automated moderation tools that we've seen don't lead to safety on other platforms -- lead to a fediverse that's unsafe for many.
And sure there are some people who say Fedi is fine as it is. But that's not the norm for people who disagree with the "Big Fedi" view he sketches. It's like if somebody said "People who want to federate with Threads are all transphobic." There are indeed some transphobic people who want to federate with Threads -- We Distribute just reported on one -- but claiming that's the typical view of people who want to federate with Threads would be a mischaracterization.
The OP talks about how Meta can get a lot of what they want -- including the regulatory aspects -- just by saying they'll integrate with the fediverse, and it's quite possible that's all they'll ever do. But there's a big potential upside for them if they decided to invest in it ... not so much today's fediverse (I agree about the inflated self-importance of a lot of the commentary -- no, they're not so desperate for content that they're trying to steal it from the fediverse) but the potential of decentralized surveillance capitalism. So, we shall see.
Yep. I very much agree with all of you. Here's how I phrased it in "Embrace, Extend, and Exploit: Meta's plan for ActivityPub, Mastodon and the fediverse"