Anti-abuse tools which rely on centralized, wholesale surveillance, particularly when such a mechanism is the primary revenue model of that centralized system, can only get more and more abusive to users over time.
That's the fundamental problem.
The federated social web, as it exists, points at a *portion* of the solution. But not enough. In many ways, it has borrowed centralized assumptions.
That's why I've been in research mode with @spritelyproject. I think we now have the answers.
@cwebber @erin Surprised to hear that. Webs of Trust (specifically, developing a graph database fast enough at certain queries to use very large webs for curation and filtering) are my whole world right now, so of course I would think this, but I think they're a natural and inevitable solution to distributed moderation. https://makopool.com/better_space_with_wots.html
I'd love to chat with Christine about it if that would be worthwhile.
@cwebber @erin For instance, I get the impression (and it would make sense if this were going on) that Mastodon instances are inheriting blocklists or block recommendations from each other, and that the burden of not doing this is overwhelming.
Block inheritance is exactly the information structure that Trustnet is about.
It's also looking like there needs to be Inclusion List inheritance, which is what Tasteweb is about.
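To make the inheritance idea concrete, here is a toy sketch of Trustnet-style trust propagation (all names and weights are invented for illustration; this is not the actual Trustnet algorithm or API). Each instance assigns direct trust weights to peers; indirect trust is taken as the best product of weights along a path, and a block is inherited only when trust in the blocker clears a threshold:

```python
# Hypothetical web-of-trust edges: instance -> {peer: direct trust weight}.
DIRECT_TRUST = {
    "a.example": {"b.example": 0.9, "c.example": 0.4},
    "b.example": {"d.example": 0.8},
}

def trust(source, target, seen=None):
    """Best-path trust: max over peers of (edge weight * onward trust)."""
    if source == target:
        return 1.0
    seen = (seen or set()) | {source}   # avoid cycles in the graph
    best = 0.0
    for peer, weight in DIRECT_TRUST.get(source, {}).items():
        if peer not in seen:
            best = max(best, weight * trust(peer, target, seen))
    return best

def inherits_block(me, blocker, threshold=0.5):
    """Inherit a block only if we sufficiently trust whoever issued it."""
    return trust(me, blocker) >= threshold
```

An inclusion list (the Tasteweb side) would be the same structure with the opposite sign: trusted peers recommend who to *include* rather than who to block, with the threshold tuned separately.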
@erin What @cwebber is working on involves Object Capabilities (OCAP) and the Capability Transport Protocol (CapTP), which IMO are essential to make federated and distributed social networking scale to replace Facebook (and IMO eliminating Facebook is an important goal).
OCAP and CapTP are like "distributed ACLs" (a gross oversimplification, perhaps, but they're about permissions/access). Trustnet doesn't seem to address that--it is more about determining the veracity of authors, which is a different issue.
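A toy sketch of the object-capability idea may help here (my own illustration, not Spritely's actual OCAP/CapTP API). With an ACL, the resource checks *who* is asking; with a capability, holding an unforgeable reference *is* the permission, and you can hand out narrowed ("attenuated") versions of it:

```python
class Post:
    """A resource with full authority: anyone holding it can read the text."""
    def __init__(self, text):
        self._text = text
    def read(self):
        return self._text

def make_read_only(post):
    """Attenuation: derive a narrower capability from a broader one.

    The returned object closes over `post` but exposes only `read`,
    so a holder cannot reach the underlying Post object itself.
    """
    class ReadOnlyCap:
        def read(self):
            return post.read()
    return ReadOnlyCap()

post = Post("hello fediverse")
cap = make_read_only(post)
# Anyone we hand `cap` to can read the post, with no identity check
# and no central access-control list involved.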
@cwebber I'll rewrite the thing I posted to twitter here.
The sociologist Zeynep Tufekci argued that good moderation requires human judgment and culturally specific knowledge (at the very least, speaking the language).
The big companies, in their drive to scale as large as possible at as little cost as possible, keep trying to use AI or to outsource the work to the users, to avoid paying for large teams of language-specific moderators.
@alienghic and I'll repost what I said there, but better (compressed between two posts):
Fully true, but the solution isn't in convincing central players to pay for more moderators, but to restructure. More soon.
@cwebber Perhaps the biggest thing that helped early Mastodon was that it could be run with moderation policies different from what unlimited-speech Californians want.
Apparently some of the early big Japanese Mastodon servers were popular because Japan has a different opinion than the USA about how bad sexualized drawings of underage people are. (I don't really want to see those drawings, but I can agree they are ethically fuzzier.)
A universal service can't apply culturally specific policies.
Which points to a way forward: https://www.draketo.de/software/decentralized-moderation