Wanna brainstorm some what-ifs?
Federated Moderation: What if Moderation was an #activitypub extension and moderation actions would federate to ease the life of mods + admins?
Delegated Moderation: What if moderators weren't bound to instances, and could just jump in on another instance to help do the work?
Moderation-as-a-Service: What if mods provided their services via federated @activitypub models, gained trust and reputation based on your feedback?
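To make the first what-if a bit more concrete, here is a rough sketch of what a federating moderation action could look like on the wire. Everything under the `mod:` prefix is invented for illustration; no such ActivityPub extension vocabulary exists today:

```python
import json

# Hypothetical shape of a federating moderation activity.
# The "mod:" namespace and "mod:reason" property are invented for this
# sketch; they are not part of ActivityPub or any existing extension.
def build_moderation_activity(actor, target, reason):
    """Wrap a moderation action in an ActivityStreams-style envelope."""
    return {
        "@context": [
            "https://www.w3.org/ns/activitystreams",
            {"mod": "https://example.org/ns/moderation#"},  # hypothetical vocab
        ],
        "type": "Block",                # plain ActivityStreams activity type
        "actor": actor,                 # the moderator taking the action
        "object": target,               # the account being acted upon
        "mod:reason": reason,           # hypothetical extension property
        "to": "https://www.w3.org/ns/activitystreams#Public",
    }

activity = build_moderation_activity(
    "https://instance-x.example/users/mod1",
    "https://instance-z.example/users/spammer",
    "repeated unsolicited advertising",
)
print(json.dumps(activity, indent=2))
```

The point of such an envelope would only be transparency: other instances could count and display these activities, while each admin still decides locally what, if anything, to do with them.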
@humanetech No, the fact that moderation is fully decentralized, and I'd say entirely optional, is a feature, not a bug. There's no need to fix it. There is a need for robust spam filters though because that problem is only a matter of time.
@grishka in my idea moderation is still fully decentralized but it becomes part of the fabric of Fediverse itself, instead of being an instance by instance app-specific thing (an app can of course still ignore implementing it, choose their own way).
Also there would be more visibility to this time-consuming and under-appreciated job.
@humanetech you mean a mod team with some mods available all the time to fedizens having an issue? How should they be chosen, by collective vote? What rights should they have and how would their actions be controlled (quis custodiet ipsos custodes)?
@maikek no, this is not exactly what I mean.
In Delegated Moderation a mod of Instance X might be trusted to do work for Instance Y and their help can be invoked when needed.
In Moderation-as-a-Service any fedizen can offer to do moderation work. It is completely decoupled from instances. But in this model you would need some mechanism to know whether you can trust someone offering their help. A simple reputation system based on "I vouch for this mod" statements given by fedizens might do, at the start.. dunno.
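A toy version of that vouch-based reputation idea could be as simple as counting distinct vouchers. All names here are made up, and a real system would obviously need sybil resistance, which this sketch ignores entirely:

```python
# Toy tally of "I vouch for this mod" statements; purely illustrative.
# (voucher, mod) pairs; a real system would need to resist fake accounts.
vouches = [
    ("alice@a.example", "mod1@b.example"),
    ("bob@a.example", "mod1@b.example"),
    ("alice@a.example", "mod1@b.example"),  # duplicate vouch, counted once
    ("carol@c.example", "mod2@d.example"),
]

def reputation(mod):
    """A mod's reputation = number of distinct fedizens vouching for them."""
    return len({voucher for voucher, vouched_mod in vouches if vouched_mod == mod})

print(reputation("mod1@b.example"))  # 2
```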
@humanetech so instance admins would assign somebody out of a pool of volunteers to help with moderation and thus determine what rights these people should have on that specific instance? Sounds feasible to me. Some guidelines or code of conduct should exist besides the reputation system.
@maikek yes, that is the idea.
For an instance it'd mean adhering to the Code of Conduct, but - as you say - probably also some more specific moderation guidelines need to be adhered to.
@holly yes exactly. The third paragraph is exactly the idea of the Delegated Moderation part of the Lemmy post.. easing the burden by providing good metrics.
So a new server comes online, is detected, and immediately all across the fedi people start blocking accounts, including your own users. It is probably worth giving some attention to. You see the block count increase, and can open the list to see reasons. Then you make an informed decision.
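The "see the block count and the reasons, then decide" flow could be a very small piece of admin tooling. The report format below is invented for illustration; nothing like it is standardized today:

```python
from collections import Counter

# Sketch: tally federated block reports against a newly seen server,
# so an admin can review the reasons and make an informed decision.
# The report shape is hypothetical.
reports = [
    {"blocked_server": "new.example", "reason": "spam"},
    {"blocked_server": "new.example", "reason": "harassment"},
    {"blocked_server": "new.example", "reason": "spam"},
    {"blocked_server": "other.example", "reason": "spam"},
]

def block_summary(reports, server):
    """Count block reasons reported against one server."""
    return Counter(r["reason"] for r in reports if r["blocked_server"] == server)

summary = block_summary(reports, "new.example")
print(summary)
```

Note this is only a metrics view: it surfaces what others have decided, it does not automate any decision for the admin.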
> Well, it's still harassment the first time.

True. That's how you detect it, by "seeing" it. But I'd much rather let the user have the power to block the harasser than do it myself as the instance owner. I cannot solve all of these disputes + as said above, it gives me too much power and I don't want to abuse it.
Consider someone who is using your server to send repeated insults, unwanted sexual advances, death threats, etc. to multiple other people. Or to reveal someone's private information. As an admin, refusing to take action against a malicious user of your site puts the burden on *multiple* recipients of the abuse to deal with it themselves.
That's not humane.
> Consider someone who is using your server to send repeated insults, unwanted sexual advances, death threats, etc. to multiple other people.

I would assume such situations would be rare and not worth sacrificing freedom of expression overall. In my view at least. As said, email can also be abused. And on the fediverse, if I block user X for being so "evil", then user X can make an account on another instance and keep on abusing people. The whack-a-mole game begins and I do not think we will win. But if it is easy for people to block others and apply all kinds of filters, then such situations should diminish.
See @SocialCoop's code of conduct for example: https://wiki.social.coop/rules-and-bylaws/Code-of-conduct.html
It wouldn't make sense to argue about whether letting your users be harassed or threatened is good or not.
In addition to what others are saying, there's also the perspective of the instance owner. Say, I have a fedi server with spare capacity. Out of the kindness of my heart I open it to others.
Am I justified in allowing only people that adhere to my CoC? I think I am. That automatically gives me the burden to monitor and moderate, which grows with the number of users.
I admire your open-minded admin approach, but it is not for everyone.
It doesn't become censorship that easily, esp. if ppl have the choice to go elsewhere to raise their voice, still part of the #fediverse
Compare to the real world.. Say I organized an interest group talking about dogs, with many non-members following + reacting to the discussion.
Suddenly some guys join and start preaching a religious suicide cult. Or maybe just 'cats are better'. As organizer I'd say "Get lost, preach somewhere else". Censorship?
@humanetech @tio @ged @activitypub @liaizon @grishka @SocialCoop
I am all for censorship, if it is transparent. Ppl who do not adhere to the CoC on an instance, eg harass others or post illegal, insulting content, should be warned, and if they don't stop, kicked out immediately.
There are enough "freeze peach" places where they can troll each other. 1/2
Remember the barman? ... "you have to nip it in the bud immediately. These guys come in and it's always a nice, polite one. ... And then they become a regular and after a while they bring a friend. And that dude is cool too.
And then THEY bring friends and the friends bring friends and they stop being cool and then you realize, oh shit, this is a Nazi bar now. And it's too late ....
Americans probably live with media and the memory of elders telling how they won the war.
Anyway, protecting your users from abuse, manipulation, coercion, violence, threats of violence/doxxing, or harassment isn't censorship. I repeat myself, but moderation isn't about suspending users who have an opinion you don't like. Just like with this Reddit post, it's about suspending users who degrade everyone's experience or may turn it into something that doesn't justify the maintenance costs.
This is the model that ran on the federated Indymedia network for 10 years. This model's limitations were one of the things that ended the project with ripping and tearing...
We take a fresh approach to this by allowing content to flow to where it is wanted - "news" is not removed, it is simply moved. At the #OMN we try to build on the past Indymedia experience #indymediaback
In the Freenet project (where centralized moderation simply isn't an option) the answer was to propagate blocking between users in a transparent way. That way blocking disrupters scales better than disrupting. For more info see: https://www.draketo.de/english/freenet/friendly-communication-with-anonymity
Yes, this is interesting, and I think it matches the first part of the Lemmy post about Federated Moderation.
This allows people to gain insight into metrics surrounding moderation actions, while each and every fedizen can still make their own individual decision whether or not to take action themselves.
@humanetech @tio @activitypub If you want to try this, there are two steps: First the current state in Freenet: https://github.com/xor-freenet/plugin-WebOfTrust/blob/master/developer-documentation/core-developers-manual/OadSFfF-version1.2-non-print-edition.pdf
Then the optimizations needed so this scales to arbitrary size: https://www.draketo.de/english/freenet/deterministic-load-decentralized-spam-filter
Here’s some data if you want to test algorithms: https://figshare.com/articles/dataset/The_Freenet_social_trust_graph_extracted_from_the_Web_of_Trust/4725664
And some starting code of a more generic prototype for faster testing: https://hg.sr.ht/~arnebab/wispwot/
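To give a feel for the block-propagation idea mentioned above, here is a much-simplified sketch of adopting blocks through a trust graph. The names and the flat threshold model are invented; the real Web of Trust algorithm described in the linked documents is considerably more involved:

```python
# Simplified sketch of propagating blocks through a trust graph,
# loosely inspired by Freenet's Web of Trust. Names and weights invented.
trust = {                      # who trusts whom, with a weight in [0, 1]
    "alice": {"bob": 0.9, "carol": 0.5},
    "bob": {"carol": 0.8},
}
blocks = {                     # blocks each user has published
    "bob": {"spammer1"},
    "carol": {"spammer2"},
}

def effective_blocks(user, threshold=0.6):
    """Adopt blocks published by peers the user trusts above a threshold."""
    adopted = set(blocks.get(user, set()))
    for peer, weight in trust.get(user, {}).items():
        if weight >= threshold:
            adopted |= blocks.get(peer, set())
    return adopted

print(sorted(effective_blocks("alice")))  # bob is trusted (0.9), carol (0.5) is not
```

The transparency is the key property: each adopted block can be traced back to the peer who published it, so nothing is hidden from the user.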
In the #FediverseFutures initiative I am aiming for more creative brainstorming processes to be unleashed and collected for anyone to jump in on, if it has their passion.
I am not able, time-wise, to follow up on all of the topics I start, nor do I have concrete implementation plans for all of them (though sometimes I do).
The Lemmy and SocialHub spaces serve as idea archives in that way. Stuff waiting to be elaborated further.
@humanetech @tio @activitypub I’m not there, no. Feel free to copy over my posts here. I hope they help people tackle these issues, because global-scale moderation without centralized control is one of the huge tasks ahead of us — a task that was mostly ignored in the Clearnet (there were underpaid moderators to burn out after all) but tackled within the Freenet Project more than a decade ago.
If people have questions about the math for scaling, it would be great if you could point them to me.
You should have a look at the SocialHub topic. It contains two links to other topics where researchers - one being @robertwgehl - announce they are investigating Moderation in decentralized (fedi) contexts.
Your input may be invaluable to them. Here's the #SocialHub link:
@humanetech @activitypub My first thought, beyond “this is a really interesting and important topic to discuss!” is... there’s a really important missing prerequisite: federated/decentralized _identities_.
Right now, every individual actor is subservient to an instance. If you delegate moderation to a pool of off-instance actors, you also de-facto grant powers to that pool’s instance admins as well.
Also, individual attributes like reputation are extremely fragile w/o individual sovereignty.
Yes, you are quite right, and it is an important topic. Tackling the use cases of decentralized identity in proper open-standards-based ways is something that the entire decentralization movement is eagerly awaiting.
> You have the right to a permanent internet identity which is not associated with what server you are currently using and cannot be taken away from you by anybody, ever.
@humanetech I am not convinced server-less identity is something that can be solved in a reasonable way, honestly. Or at least, it will bring with it unavoidable problem such as it being impossible to do recovery. Humans and human judgement need to be in the loop at some point.
Yes you are right, similar responses are on the thread, and..
> Humans and human judgement need to be in the loop at some point.
.. is something that need not be taken away in any more federated mechanism. I think it is very important to keep this human aspect.
This, btw, is a strong point of the #fediverse, where there are many more moderators than in traditional social media (which requires algorithms to do the work, to scale tasks to billions of users).
@zap that is very interesting and I didn't know that. Zap/zot are doing many great things. Is there documentation to refer to? I'd like to add to the Lemmy discussion.
@zap I like this model, and it makes total sense (though the admin should have control over which channels are allowed, but that probably is the case).
Note that this also aligns with the "Community has no Boundary" paradigm I'm discussing on #SocialHub where instances are abstracted away, and communities are more like the intricate social structures you see in the real world:
And can be extended with e.g. Governance Policies of various kinds:
My thoughts: bad ideas as I see it.
> Federated Moderation: What if Moderation was an #activitypub extension and moderation actions would federate to ease the life of mods + admins?
It already exists. There is no need to add arbitrary power-structure meta-types to a protocol dedicated to communication. Implementers already have trouble interpreting the semantics of AS types and AP behaviour. Moderation belongs to a different layer.
> Delegated Moderation: What if moderators weren't bound to instances, and could just jump in on another instance to help do the work?
Then I cannot trust a single instance. I want to trust a single, stable, mostly immutable group of moderators from my instance. I don't want some "moderation upper class" overseeing the network and trading positions, most especially with incentive (reward is not intrinsic to motivation, self-esteem or character). "Instances" currently provide clear boundaries within the network, and the freedom rests on users to reside wherever they wish. This current paradigm is optimal, and deviations as I see it are only for the worse.
> Moderation-as-a-Service: What if mods provided their services via federated @activitypub models, gained trust and reputation based on your feedback?
This doesn't make sense. There is no "one true" moderation policy to rate against. A specific user will want a specific moderation policy, and will choose an instance whose moderation reflects it. This is and should continue to be an individual's choice. "Reputation" is reserved for Wikipedia-style structures with top-down hierarchy.
A lot of this is off the cuff, and I know you make more detailed and nuanced points elsewhere, so feel free to agree or disagree, clarify, correct, etc.
Moderation is too much out of sight of fedizens, thankless work. But it is vital for fedi to not turn into a toxic hellhole. It is fedi's USP.
By making it part of fedi (as a vocab extension, not a core standard) it gets the appreciation + visibility it deserves. Makes it easier to find mods / offer incentives to help.
Manual decisions / onboarding remain unchanged. Mods need to follow CoC's always.
No "upper class", implicit -> explicit.