an appeal to the fediverse regarding anti-abuse 

Dear fediverse:

Fascism joining the fediverse is extremely bad, and we have to do something about it. But please, please, please: give me two weeks before you roll out any new solutions. Some of the solutions being proposed look like they will make the situation better, but they will actually make it much worse.

I am dropping nearly everything to write a demo and spec explaining how to do things right. Please give me two weeks. I've been preparing for this.


As a hint as to why the current solutions aren't going to work, I'll point you to what happened when Mastodon rolled out direct messages over OStatus: they *weren't really* private messages. An admirable attempt, but it needed a different approach.

I believe this could be like that, but 10x worse. I've been studying what will happen under different approaches and trying hard to figure out how to map a solution onto what we have.



@cwebber what are their solutions that you find bad?


@wilkie That's probably a really hard question for @cwebber to answer, since it would involve talking about a plan that, as he said, isn't ready yet.

So I'll try to answer: one issue is that they would like to cryptographically ensure that each communication is proper, and that verification work scales with incoming traffic, so the more your server is targeted by harassment, the more costly it becomes to keep operating. Another is that they don't allow the same autonomy of moderation that users have now.

@emsenn @wilkie @cwebber Please elaborate on the autonomy of moderation point. What do you mean by that?

@Gargron If I understand it right - which I very well might not:

Current discussions of OCAP provide tools to instance moderators, but don't provide similar tooling for users.

Right now, as I understand it, users can take most of the moderation actions moderators can, relative to their own profile: they can autonomously moderate their profile even if their instance doesn't do moderation.

(Again, I could be wrong in understanding how things are or could be.)
@wilkie @cwebber

@emsenn @wilkie @cwebber I don't think that's quite right. Now, I don't know which "some of the solutions" Chris is talking about, because I'm aware of what I'm proposing (authorized fetch, same mechanism as already used for inbox deliveries) and what kaniini is proposing (OCAP), and I am under the impression that Chris's solution is OCAP too. But neither of those would affect any existing self-moderation mechanisms.
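[Editorial note: a rough Python sketch of the authorized-fetch idea Gargron describes, for readers unfamiliar with the mechanism. Everything here is illustrative, not Mastodon's actual code; key lookup is faked with a local dict, where a real server would dereference the signature's keyId over HTTPS.]

```python
# Rough sketch of "authorized fetch" (illustrative, not Mastodon's code).
# The server refuses to serve even public objects unless the GET request
# carries a valid HTTP signature -- the same machinery that already
# authenticates POSTs to inboxes.

from urllib.parse import urlparse

KNOWN_KEYS = {  # keyId -> actor URI (stand-in for remote key fetching)
    "https://example.social/users/alice#main-key":
        "https://example.social/users/alice",
}

def verify_signature(headers):
    """Stand-in verifier: return the signer's actor URI, or None."""
    key_id = headers.get("Signature-KeyId")  # simplified; a real server
    return KNOWN_KEYS.get(key_id)            # parses the Signature header

def handle_fetch(headers, path, blocked_domains, objects):
    actor = verify_signature(headers)
    if actor is None:
        return 401, "signature required"  # anonymous fetches are refused
    if urlparse(actor).hostname in blocked_domains:
        return 403, "blocked"  # domain blocks can now be enforced on GETs
    return 200, objects.get(path)
```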

@Gargron I didn't think current moderation tools would be changed. As I understand it, kaniini's OCAP is focused on inter-instance moderation, not a user moderating their own profile with it. That's my concern: that I will be reliant on my instance admin to handle any abuse that needs to be addressed with OCAP. As I understand it, Chris' solution is OCAP as well, but more generalized, so it can be an inter-user tool just as easily. (Again, I could be wrong about all this.) @wilkie @cwebber

@emsenn @wilkie @cwebber OCAP as well as authorized fetch are primarily about how to allow/disallow another server to retrieve public data from yours, specifically in the case of domain blocks.

Personal blocks, mutes, filters and the rest operate on a logically higher level than that.
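
[Editorial note: a minimal sketch of that layering, with invented names for illustration. The fetch layer makes a domain-level yes/no decision about handing out data at all, while personal blocks, mutes, and filters are applied later, when a particular user's view is composed.]

```python
# Sketch of the two layers (illustrative names, not any real codebase).
from dataclasses import dataclass, field

def may_serve(fetching_domain, instance_blocked_domains):
    """Fetch layer: domain-level allow/deny on retrieving data at all."""
    return fetching_domain not in instance_blocked_domains

@dataclass
class Viewer:
    blocked: set = field(default_factory=set)
    muted: set = field(default_factory=set)
    filters: list = field(default_factory=list)

@dataclass
class Post:
    author: str
    text: str

def visible_to(viewer, post):
    """Higher layer: per-user moderation, untouched by authorized fetch."""
    return (post.author not in viewer.blocked
            and post.author not in viewer.muted
            and not any(word in post.text for word in viewer.filters))
```

[The point being that tightening the fetch layer leaves the per-user layer untouched, which is why neither proposal would affect existing self-moderation mechanisms.]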

@Gargron Ahh so OCAP is not used to forbid/permit an individual but a domain? I had thought that was dependent on implementation, not an intrinsic quality of the feature.

I think it's clear the one thing I was right on is that I don't know what I'm talking about. :D
@wilkie @cwebber


@emsenn @gargron @wilkie ocap on its own has nothing to do with domain-level authority decisions; that's a specific suggestion of how to do something semi-ocap'y for that purpose
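
[Editorial note: to make that concrete, a minimal object-capability sketch with invented names, drawn from no real implementation. A capability is an unforgeable token granting its holder a specific right on a specific object; nothing about it is inherently domain-scoped, and per-domain use is just one policy that could be built on top.]

```python
# Minimal object-capability sketch (illustrative only).
import secrets

ISSUED = {}  # token -> (action, target); the issuer's record of grants

def grant(action, target):
    """Mint an unguessable token authorizing one action on one object."""
    token = secrets.token_urlsafe(32)
    ISSUED[token] = (action, target)
    return token  # handed to whichever actor should hold the right

def invoke(token, action, target):
    """Honor a request only if it presents a matching capability."""
    return ISSUED.get(token) == (action, target)

def revoke(token):
    """Moderation hook: withdrawing the token ends the granted right."""
    ISSUED.pop(token, None)
```

[An instance might grant such a token when someone follows you and revoke it if they abuse the right it confers; the decision is per-holder, not per-domain.]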
