Wanna brainstorm some what-ifs?

Federated Moderation: What if Moderation was an extension and moderation actions would federate to ease the life of mods + admins?

Delegated Moderation: What if moderators weren't bound to instances, and could just jump in on another instance to help do the work?

Moderation-as-a-Service: What if mods provided their services via federated @activitypub models, gained trust and reputation based on your feedback?


@humanetech No, the fact that moderation is fully decentralized, and I'd say entirely optional, is a feature, not a bug. There's no need to fix it. There is a need for robust spam filters though because that problem is only a matter of time.

@grishka in my idea moderation is still fully decentralized but it becomes part of the fabric of Fediverse itself, instead of being an instance by instance app-specific thing (an app can of course still ignore implementing it, choose their own way).

Also there would be more visibility to this time-consuming and under-appreciated job.

@humanetech you mean a mod team with some mods available all the time to fedizens having an issue? How should they be chosen, by collective vote? What rights should they have and how would their actions be controlled (quis custodiet ipsos custodes)?

@maikek no, this is not exactly what I mean.

In Delegated Moderation a mod of Instance X might be trusted to do work for Instance Y and their help can be invoked when needed.

In Moderation-as-a-Service any fedizen can offer to do moderation work. It is completely decoupled from instances. But in this model you would need some mechanism to know whether you can trust someone offering their help. A simple reputation system, where fedizens give an "I vouch for this mod", might do at the start.. dunno.

@humanetech so instance admins would assign somebody out of a pool of volunteers to help with moderation and thus determine what rights these people should have on that specific instance? Sounds feasible to me. Some guidelines or code of conduct should exist besides the reputation system.

@maikek yes, that is the idea.

For an instance it'd mean adhering to the Code of Conduct, but - as you say - probably also some more specific moderation guidelines need to be adhered to.

my opinions 

@humanetech I typed up my thoughts and then I read your blog post, I should've done that the other way around, hahah.

I don't know about the logistics of delegating moderation services - I think it's best if users just choose a server with trustworthy moderation, and migrate if that moderation proves to be bad or otherwise insufficient - but I do largely think that federating moderation actions to some extent would be a good thing.

I feel like there's not enough information easily and readily accessible to an admin about the instances that their server talks to. Being able to see at a glance which instances I trust block a given server (or explicitly allow it, in the case of allowlists), maybe a sampling of a server's posts as well as the server's age and terms of use, plus notifications for when an instance discovers a new server or an instance I trust has just blocked one, would make federation a much more comfortable experience, I feel.

my opinions 

@holly yes exactly. The third paragraph is exactly the idea of the Delegating Moderation part of the Lemmy post.. easing the burden, by providing good metrics.

So a new server comes online, is detected, and immediately all across the fedi people start blocking accounts, including your own users. It is probably worth giving some attention to. You see the block count increase, and can open the list to see reasons. Then you make an informed decision.
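That "informed decision" flow could be sketched as a small aggregation step: collect the block reports your server has seen from instances you trust, then count them and tally the stated reasons. This is only a hypothetical sketch — the `BlockReport` shape, field names, and instance domains are made up for illustration, not part of any real ActivityPub vocabulary.

```python
# Hypothetical sketch: aggregating federated block reports so an admin can
# make an informed decision about a newly discovered server. All data shapes
# and instance names are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class BlockReport:
    reporter: str  # instance that issued the block
    target: str    # server being blocked
    reason: str    # stated reason for the block

def summarize(reports, target, trusted):
    """Return (block count among trusted instances, histogram of reasons)."""
    relevant = [r for r in reports if r.target == target and r.reporter in trusted]
    return len(relevant), Counter(r.reason for r in relevant)

reports = [
    BlockReport("a.example", "new.example", "spam"),
    BlockReport("b.example", "new.example", "harassment"),
    BlockReport("c.example", "new.example", "spam"),
    BlockReport("d.example", "other.example", "spam"),  # different target, ignored
]
count, reasons = summarize(reports, "new.example", {"a.example", "b.example", "c.example"})
```

The point is that the admin still makes the final call manually; the federation layer only surfaces the metrics.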

@humanetech I think moderation is a very bad idea overall. Control in users' hands is a good idea tho. If some posts are "bad", they are so for some people and not for others. Users already have the ability to ignore/block such users/content. The only issue is that instance admins (like me) may be faced with this bad content on their instance. And the solution to that, I'd say, is to perhaps hide content from local and global feed, but do not ban/block users or content generally. If others want to see that bad content via a direct discovery process like search or a link and such, they should.

Moderation quickly transforms into censorship. Instance admins should not decide for the rest of the users on their instance, what is a good or a bad content to see.

@tio @humanetech @activitypub So if someone is using your instance to harass people on another instance, you feel you shouldn't take action because they're only harassing some people and not others? Or are you using some other definition of moderation than I am?

@humanetech @KelsonV How can some users harass others when "others" can block them?
@humanetech @danielhglus @KelsonV

> Well, it's still harassment the first time.

True. That's how you detect it, by "seeing" it. But I'd much rather let the user have the power to block the harasser than me, the instance owner, doing it for them. I cannot solve all of these disputes + as said above, it gives me too much power and I don't want to abuse it.

@tio @humanetech @activitypub @danielhglus You're assuming a 1-on-1 scenario.

Consider someone who is using your server to send repeated insults, unwanted sexual advances, death threats, etc. to multiple other people. Or to reveal someone's private information. As an admin, refusing to take action against a malicious user of your site puts the burden on *multiple* recipients of the abuse to deal with it themselves.

That's not humane.

@humanetech @danielhglus @KelsonV

> Consider someone who is using your server to send repeated insults, unwanted sexual advances, death threats, etc. to multiple other people.

I would assume such situations would be rare and are not worth sacrificing freedom of expression overall. In my view at least. As said, email can also be abused. And on the fediverse, if I block user X for being so "evil", then user X can make an account on another instance and keep on abusing people. The whack-a-mole game begins and I do not think we will win. But if it is easy for people to block others and apply all kinds of filters, then such situations should diminish.
@humanetech @KelsonV Look at it this way: I can get harassed over email too. Anyone can send me an email. But I would not want Gmail or any email provider to try and "defend" me against these harassers, because I would then put a lot of trust into a few individuals to decide what is good or bad for me. I would much rather have control over this and mark any harassing email address as spam, so I block them on my end. Like a spam filter that I opt in to use and whose rules I control.

@tio that's you, but other people like to have spam protection and possibly other filters. Nothing wrong with that, everyone can use what they prefer.

@humanetech @activitypub @KelsonV

@humanetech @felix @KelsonV For sure. Agree. But I'd like for me to opt in for such a filter. Imagine if Firefox or Chrome decide what websites are ok for millions of users and what are not. This is a slippery slope and they may even be doing that to a certain degree.
@humanetech @activitypub @liaizon @grishka @maikek @tio The point with moderation isn't to hide posts that you dislike but to protect your users from actively harmful people.

See @SocialCoop's code of conduct for example:

It wouldn't make sense to argue about whether letting your users be harassed or threatened is good or not.

@tio @ged @activitypub @liaizon @grishka @maikek @SocialCoop

In addition to what others are saying, there's also the perspective of the instance owner. Say, I have a fedi server with spare capacity. Out of the kindness of my heart I open it to others.

Am I justified to allow only people that adhere to my CoC? I think I am. That automatically gives me the burden to monitor and moderate, which grows with no. of users.

I admire your open-minded admin approach, but it is not for everyone.

@tio @ged @activitypub @liaizon @grishka @maikek @SocialCoop

It's not censorship that easily, esp. if ppl have the choice to go elsewhere to raise their voice, still part of the

Compare to the real world.. Say I organized an interest group talking about dogs, with many non-members following + reacting to the discussion.

Suddenly some guys join and start preaching a religious suicide cult. Or maybe just 'cats are better'. As organizer I'd say "Get lost, preach somewhere else". Censorship?

@humanetech @tio @ged @activitypub @liaizon @grishka @SocialCoop
I am all for censorship, if it is transparent. Ppl who do not adhere to the CoC on an instance, eg harass others or post illegal, insulting content, should be warned off, and if they don't stop, kicked out immediately.
There are enough "freeze peach" places where they can troll each other. 1/2

@humanetech @tio @ged @activitypub @liaizon @grishka @SocialCoop
Remember the barman? ... "you have to nip it in the bud immediately. These guys come in and it's always a nice, polite one. ... And then they become a regular and after awhile they bring a friend. And that dude is cool too.
And then THEY bring friends and the friends bring friends and they stop being cool and then you realize, oh shit, this is a Nazi bar now. And it's too late ....

@maikek @SocialCoop @grishka @liaizon @activitypub @tio @humanetech I don't think there's such a thing as transparent censorship, but also I don't think Americans live with media and the memory of elders having lived occupied by Nazis, with roundups and Jewish people parked in an overcrowded stadium, survivors saying they remembered the smell while watching the movie.

Americans probably live with media and the memory of elders telling how they won the war.

Anyway, protecting your users from abuse, manipulation, coercion, violence, threats of violence/doxxing, or harassment isn't censorship. I repeat myself, but moderation isn't about suspending users who have an opinion you don't like. Just like with this Reddit post, it's about suspending users who degrade everyone's experience or may turn it into something that doesn't justify the maintenance costs.

@tio @humanetech @activitypub

This is the model that ran on the federated Indymedia network for 10 years. This model's limitations were one of the things that ended the project with ripping and tearing...

We take a fresh approach to this by allowing content to flow to where it is wanted - "news" is not removed, it is simply moved. At the #OMN we try to build on the past Indymedia experience #indymediaback

@tio @humanetech @activitypub The challenge with moderation is that disrupting communication often scales better than individual blocking.

In the Freenet project (where centralized moderation is simply not an option) the answer was to propagate blocking between users in a transparent way. That way blocking disrupters scales better than disrupting. For more info see:

@ArneBab @tio @activitypub

Yes, this is interesting, and I think it matches the first part of the Lemmy post about Federating Moderation.

This allows people to get insight into metrics surrounding moderation actions, while each and every fedizen can still make their own individual decision whether or not to take action themselves.

@humanetech @tio @activitypub (the prototype implementation at the end is built in a way that would be suitable for federation, because it can work with a shared database that only has different entry points to get your personal view of the trust-graph. It is far from finished, though)
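A minimal sketch of that shared-database idea: one common store of who-blocks-whom and who-trusts-whom, with each user's personal view computed from their own entry point into the trust graph. The names and the simple one-hop propagation rule below are illustrative assumptions, not the actual Freenet/Web-of-Trust algorithm.

```python
# Hypothetical sketch: a shared block/trust database where each user derives
# their own personal blocklist (their "entry point" into the graph).
# One-hop propagation only, for simplicity; real systems weight and iterate.
def effective_blocks(me, trusts, blocks):
    """My own blocks plus the blocks made by everyone I directly trust."""
    seen = set(blocks.get(me, ()))
    for peer in trusts.get(me, ()):
        seen |= set(blocks.get(peer, ()))
    return seen

# Shared database, different per-user views:
trusts = {"alice": ["bob", "carol"]}
blocks = {"alice": ["spammer1"], "bob": ["troll2"], "dave": ["x"]}

alice_view = effective_blocks("alice", trusts, blocks)  # inherits bob's block
```

Because every user computes their own view from whom *they* trust, blocking stays an individual decision while still scaling better than one-by-one blocking.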

@ArneBab @tio @activitypub

Are you on Lemmy, or SocialHub? Would be great to collect the info there, in more forum-like conversation threads.

@ArneBab @tio @activitypub

In the initiative I am aiming for more creative brainstorming processes to be unleashed and collected for anyone to jump in on, if it has their passion.

I am not able, time-wise, to follow up on all of the topics I start, nor do I have concrete implementation plans for them (though sometimes I do).

The Lemmy and SocialHub spaces serve as idea archives in that way. Stuff waiting to be elaborated further.

@humanetech @tio @activitypub I’m not there, no. Feel free to copy over my posts here. I hope they help people tackle these issues, because global-scale moderation without centralized control is one of the huge tasks ahead of us — a task that was mostly ignored in the Clearnet (there were underpaid moderators to burn out after all) but tackled within the Freenet Project more than a decade ago.

If people have questions about the math for scaling, it would be great if you could point them to me.

@ArneBab @tio @activitypub

You should have a look at the SocialHub topic. It contains two links to other topics where researchers - one being @robertwgehl - announce they are investigating Moderation in decentralized (fedi) contexts.

Your input may be invaluable to them. Here's the link:

@humanetech @activitypub I like this proposal, but applied on a user level instead of an instance level.

@humanetech @activitypub My first thought, beyond “this is a really interesting and important topic to discuss!” is... there’s a really important missing prerequisite: federated/decentralized _identities_.

Right now, every individual actor is subservient to an instance. If you delegate moderation to a pool of off-instance actors, you also de-facto grant powers to that pool’s instance admins as well.

Also, individual attributes like reputation are extremely fragile w/o individual sovereignty.

@matro @activitypub

Yes, you are quite right, and it is an important topic. Tackling the use cases of decentralized identity in proper open-standards-based ways is something that the entire decentralization movement is eagerly awaiting.

There's work in this area going on in @zap +


> You have the right to a permanent internet identity which is not associated with what server you are currently using and cannot be taken away from you by anybody, ever.

@humanetech I am not convinced server-less identity is something that can be solved in a reasonable way, honestly. Or at least, it will bring with it unavoidable problems, such as recovery being impossible. Humans and human judgement need to be in the loop at some point.

For federated moderation, I urge you to have a look at the early days of IRC, and what happened there.
@matro @activitypub @zap

@pettter @matro @activitypub @zap

Yes you are right, similar responses are on the thread, and..

> Humans and human judgement need to be in the loop at some point.

.. is something that need not be taken away in any more federated mechanism. I think it is very important to keep this human aspect.

This, btw, is a strong point of the fedi, where there are many more moderators than in traditional social media (which requires algorithms to do the work, to scale tasks to billions of users).

@pettter @humanetech @matro @activitypub @zap Where's a good place to read more about early days IRC and moderation?

You mean like Zap? We've had this for a few years now, including the delegated moderators (which we've had much, much longer)... it doesn't require any extensions to activitypub, but it's nice to provide a notice that posts/comments might not show up immediately.

We used to have a rating service but it had some issues and we'll have to revisit that. For now we just let you figure out who you think you can trust.

@zap that is very interesting and I didn't know that. Zap/zot are doing many great things. Is there documentation to refer to? I'd like to add to the Lemmy discussion.

Moderation is an option selected by the channel owner. The site admin is not involved. You can moderate everybody, anybody, or nobody. You can also grant admin access for your channel and content to anybody you wish. On our own platform this is automatic and connecting from elsewhere to your site as channel admin doesn't require any authentication interaction. There's an app called 'Guest Pass' if you want to give admin rights over your content to somebody on Mastodon (for instance) or just somebody with an email address.

There's also a quick configuration for moderated public groups since this is the most popular use case. In this configuration everybody that joins the group is moderated by default until/unless you decide otherwise.

Somewhere there's also a tool for sending the incoming moderation notifications to any email address or list of addresses you choose. But darned if I can find it right now.

@zap I like this model, and it makes total sense (though the admin should have control over which channels are allowed - but that's probably the case).

Note that this also aligns with the "Community has no Boundary" paradigm I'm discussing on where instances are abstracted away, and communities are more like the intricate social structures you see in the real world:

And can be extended with e.g. Governance Policies of various kinds:



@humanetech @activitypub
I'll reply to your blog post's points later, as they are different from what you summarise here, but for brevity I'll make some takes on these points here.

My thoughts: bad ideas as I see it.

> Federated Moderation: What if Moderation was an #activitypub extension and moderation actions would federate to ease the life of mods + admins?

It already exists. There is no need to add arbitrary power-structure meta-types to a protocol dedicated to communication. Implementers already have trouble interpreting the semantics of AS types and AP behaviour. Moderation belongs to a different layer.
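For context on "it already exists": the ActivityStreams 2.0 vocabulary defines a `Flag` activity, which Mastodon, for example, uses to federate reports to a remote instance's admins. A rough sketch of building such a payload — the actor and object URLs are made up:

```python
# Illustrative sketch: an ActivityStreams 2.0 `Flag` activity, the existing
# vocabulary type used for federating reports. URLs are hypothetical examples.
import json

def make_report(actor, target, notes):
    """Build a minimal Flag activity as a plain dict."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Flag",
        "actor": actor,     # who is reporting
        "object": target,   # account (or content) being reported
        "content": notes,   # free-text reason for the report
    }

report = make_report(
    "https://example.social/users/admin",
    "https://other.example/users/mallory",
    "Repeated spam in replies",
)
payload = json.dumps(report)  # what would be POSTed to the remote inbox
```

So the disagreement in this thread is less about whether moderation *can* federate and more about how much richer the vocabulary should become.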

> Delegated Moderation: What if moderators weren't bound to instances, and could just jump in on another instance to help do the work?

Then I cannot trust a single instance. I want to trust a single, stable, mostly immutable group of moderators from my instance. I don't want some "moderation upper class" overseeing the network and trading positions, most especially with incentive (reward is not intrinsic to motivation, self-esteem or character). "Instances" currently provide clear boundaries within the network, and the freedom rests on users to reside wherever they wish. This current paradigm is optimal, and deviations as I see it are only for the worse.

> Moderation-as-a-Service: What if mods provided their services via federated @activitypub models, gained trust and reputation based on your feedback?

This doesn't make sense. There is no "one true" moderation policy to rate against. A specific user will want a specific moderation policy, and will choose an instance whose moderation reflects it. This is, and should continue to be, an individual's choice. "Reputation" is reserved for Wikipedia-style structures with top-down hierarchy.

A lot of this is off the cuff, and I know you make more detailed and nuanced points elsewhere, so feel free to agree or disagree, clarify, correct, etc.

@humanetech @activitypub

Mess with the fabric of the Fediverse, and it will mess with you

@torresjrjr @activitypub all good points. Thx for reply!

Moderation is too much out of sight for fedizens; thankless work. But it is vital for fedi to not turn into a toxic hellhole. It is fedi's USP.

By making it part of fedi (as vocab extension, not core standard) it gets the appreciation + visibility it deserves. Makes it easier to find mods / offer incentives to help.

Manual decisions / onboarding remain unchanged. Mods need to follow CoC's always.

No "upper class", implicit -> explicit.

@torresjrjr sorry, need to cram things into 500 chars here.. is why I started the Lemmy space :)
