gamingonlinux , to Random stuff
@gamingonlinux@mastodon.social avatar

Just a reminder to send all news tips via email: contact@gamingonlinux.com thanks m'loves

austin ,
@austin@mstdn.party avatar

@gamingonlinux @QuadRadical Consider using Draupnir; there is a hosted version available that you can request. See https://joinmatrix.org/guide/features/#moderation

thenexusofprivacy , to Random stuff
@thenexusofprivacy@infosec.exchange avatar

Some thoughts from @aendra on running a Bluesky moderation service that blocks screenshots -- including some thoughts on the fediverse as well.

https://www.aendra.com/some-thoughts-on-running-a-moderation-service-that-blocks-screenshots/

I'm a big fan of the XBlock screenshot labeler. If you subscribe, the default settings hide the images of the screenshots it detects and put the equivalent of a content warning on the posts (although the post text is still visible by default). [You also have the option to completely hide any post with a detected screenshot, but I sometimes do want to see the screenshots, so I stuck with the default.] The automatic detection is pretty good, although not perfect, and it applies different CWs for screenshots from Twitter, Insta, Tumblr, Bluesky, etc., so you can have finer control if you want.

Why do I like it so much? So many screenshots are "outrage posts" dunking on something ridiculous somebody's said on Twitter (or wherever), and I'd just as soon not see them! Of course, there are often useful screenshots as well; but I don't mind an extra click.

To me this is a great example of Bluesky's "composable moderation": a user-written service that people can take advantage of if they want. Services like this don't replace platform-level moderation, and if the platform moderation is bad that's a problem. But there are a lot of things that are acceptable at the platform level that many people would rather not see. It's similar to filters on Mastodon, which I also find very useful, but a lot more powerful.
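(Illustrative aside: mechanically, a subscribing client just maps each label a labeler emits to a visibility preference. A minimal sketch in Python, with made-up label names standing in for XBlock's real ones:)

```python
# Illustrative only: how a client might apply per-label visibility
# preferences ("ignore", "warn", "hide") from a subscribed labeler.
# The label names below mirror the screenshot categories described
# above but are assumptions, not the labeler's real identifiers.

LABEL_PREFS = {
    "screenshot-twitter": "warn",    # show the post, hide media behind a CW
    "screenshot-bluesky": "warn",
    "screenshot-instagram": "hide",  # drop the post from the feed entirely
}

def apply_label_prefs(post_labels, prefs=LABEL_PREFS):
    """Return 'hide', 'warn', or 'show' for a post given its labels."""
    decisions = [prefs.get(label, "ignore") for label in post_labels]
    if "hide" in decisions:
        return "hide"
    if "warn" in decisions:
        return "warn"
    return "show"

print(apply_label_prefs(["screenshot-twitter"]))    # -> "warn"
print(apply_label_prefs(["screenshot-instagram"]))  # -> "hide"
```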

anders , to Non Political Twitter
@anders@mastodon.cyborch.com avatar

You don’t need moderators if your target audience is people who would be removed by moderators on other platforms.

From: @tchambers
https://indieweb.social/@tchambers/112376772392287825

PaulaToThePeople , to Random stuff
@PaulaToThePeople@climatejustice.social avatar

If you equate good moderation with censorship you're not welcome here.

Offer applies to accounts and instances for an unlimited time.

iftas Bot , to Fediverse Moderation Discussion
@iftas@mastodon.iftas.org avatar

One of IFTAS’ priorities is to support Fediverse moderators by creating template documents, training, and best practices guides.

We know there’s no “one size fits all” solution, but we’ve consistently heard that these will be useful resources. As a first step towards that, IFTAS Advisors @Lillian and @jdp23 are conducting a survey and holding a series of discussion sessions. We welcome your participation!

Here’s the survey:
https://cryptpad.fr/form/#/2/form/view/0sU8JebkNU8lb1yTD0VFIFiX1HVXKB4r3At41z9gaNY/

@moderation

serge , to News from fediverse
@serge@babka.social avatar

Has anyone considered a Fediverse moderator exchange program?

Basically you'd have a moderator come onto your team and see what your moderation challenges are, how you handle them, etc.

This would be especially useful to marginalized communities which have poor representation on the Fediverse, such as Black instances, Jewish instances, and so on.

Would this be interesting to anyone else?

18+ BeAware , to News from fediverse
@BeAware@social.beaware.live avatar

Well, I just learned a bit about Nostr.

It's basically "Bluesky, but FOSS". Your account gives you, the user, more control over moderation.

The difference is that on Nostr nobody can "ban" per se because nobody has control over each other. It's block and mute. That's your moderation and you do it yourself. Nobody does it for you, unlike Fedi or BlueSky.

I gotta say, that's where a lot of you really need to go to have your eyes opened to what the Fediverse is really like for those of us on single-user instances, and to the sheer amount of work unpaid moderators and admins do here to keep these instances as "nasty shit free" as they can.

A while back I kept seeing things like "Other platforms have moderation issues, there's CP and all kinds of other nasty stuff there".

There's all that here too, you're just privileged to have people with massive amounts of passion for this place who do the unpaid work to block and remove all that shit.

If you actually had full control over what you see or don't see, it can end up being the wild west for all things bad. Which is okay for those who don't mind blocking and muting endlessly (what Fediverse unpaid mods and admins do every day), but I'd bet a lot of you wouldn't want to do that, because it sounds HARD and potentially gross and traumatic.

So, whenever you think "that other platform is bad because their moderation sucks", maybe, just maybe, think about whether you would want to sit there all day looking at nasty shit like CP and other sexual crimes/hate speech/scams/etc., and have a little sympathy for the actual people who do. Because it sucks, and it does exist here, you just don't see it.

#Fediverse #Nostr #Fedi #BlueSky #Threads #Moderation #Mastodon

rimu , to Random stuff

I'm pretty happy with how moderation tools for PieFed are coming along!

Moderators can:

  • delete & edit anything in their community
  • ban people from the community, and unban them.
  • review reports about content in that community
  • mark a report as resolved / ignored.

When a report is resolved or ignored, all reports regarding that content are also resolved. So if something receives 150 reports then mods won't need to click 150 times to resolve all reports. Ignored reports stop all future reports from being accepted.
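(A minimal sketch of that resolution rule, not PieFed's actual code; the names here are invented for illustration:)

```python
# Minimal sketch (not PieFed's real implementation) of the behaviour
# described above: resolving or ignoring one report closes every report
# on the same content, and "ignored" content rejects future reports.

class ReportQueue:
    def __init__(self):
        self.reports = []     # open reports as (content_id, reporter)
        self.ignored = set()  # content ids that reject any new reports

    def file(self, content_id, reporter):
        if content_id in self.ignored:
            return False      # ignored content: the report is dropped
        self.reports.append((content_id, reporter))
        return True

    def resolve(self, content_id, ignore=False):
        # One click clears all reports on the same content, even 150 of them.
        self.reports = [r for r in self.reports if r[0] != content_id]
        if ignore:
            self.ignored.add(content_id)

q = ReportQueue()
q.file("post-42", "alice")
q.file("post-42", "bob")
q.resolve("post-42", ignore=True)
print(q.file("post-42", "carol"))  # -> False: future reports not accepted
```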

The person who created the community can appoint other moderators.

Reports federate to and from Lemmy, so if a PieFed user reports some content that came from a Lemmy instance, the moderators on the Lemmy instance will be notified about the content being reported.
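(Under the hood, federated reports travel as ActivityPub `Flag` activities; the exact payload varies between PieFed, Lemmy and Mastodon, so the shape below is only a rough, hand-written illustration with made-up URLs:)

```python
# Hand-written illustration of an ActivityPub "Flag" activity, the
# mechanism used to federate reports between instances. Field values
# are invented and the exact shape differs between implementations.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://piefed.example/u/reporter",   # who filed the report
    "object": "https://lemmy.example/post/12345",   # the reported content
    "content": "Spam / off-topic",                  # reason given by reporter
    "to": ["https://lemmy.example/"],               # origin instance notified
}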

There's still more to be done with federation of bans, a moderation log, etc. But it's shaping up nicely!

https://piefed.social/post/80650


sublinks , to News from fediverse
@sublinks@utter.online avatar

https://discuss.online/post/6776820

The Sublinks team has written up a little survey, which we feel is both thorough and inclusive. It covers a wide range of topics, such as user privacy and community engagement, and tries to gauge which things are difficult when moderating.

Raccoon , to Random stuff
@Raccoon@techhub.social avatar

Weird bug in Mastodon and Akkoma interaction; thought mods, admins, and Mastodon/Akkoma folks might want to see this.

Got a report about a user saying things that were upsetting people, noticed it was from another instance, and realized that I had access to the actual account that had sent it. Not sure what this is or why; I didn't even realize signed reports were supported (I'd honestly like to get more), but obviously that's a risky thing if the reporter didn't know it was happening and didn't know why.

Does anyone know what this is?

Screenshot of reply: they're on Akkoma, don't know why it happened, and don't seem to have experience with how Mastodon handles multi-instance reporting.
Screenshot of my reply, explaining the situation. See post above for the gist.

    tallship , to News from fediverse
    @tallship@venera.social avatar

    @fediversenews

    !!Moderation request!!

    I come home to find my feeds/streams flooded with vitriolic political drivel emanating from a group to which I belong that states emphatically that it doesn't tolerate off-topic traffic.

    The purpose of this Fediverse group, as stated on my home Friendica server, is as follows:

    This is a Friendica group dedicated to Fediverse news. What are the advantages of a group over a hashtag? Groups can do things that hashtags can't. For example, groups:

    • are moderated
    • can re-share content
    • can speak as a group

    Joining and contributing to a Friendica group is easy. To share your posts to @Fediverse News, follow these steps:

    1. Follow @Fediverse News
    2. When sharing Fediverse news, tag @Fediverse News
    3. The @Fediverse News group will then re-share your post

    This is an actively moderated group. Be sure to stay on topic, or your posts will be removed.

    As per the instructions for this Friendica / Fediverse group, I'm notifying the moderation team by CC'ing the following address with this complaint and request to remove the vicious hate that's been spewing into the group here all day long while I've been away working:

    @atomicpoet

    People who sign up for a Fediverse News site should not be subjected to hatred being fomented, propagated, and bantered about with respect to unrelated matters, such as (abominable) off-topic, political vitriol.

    1. ) Posting announcements concerning the onboarding and subsequent federating nature of a public figure's account on threads.net is a relevant matter to the Fediverse, Fediverse Technology, and Fediverse News.
    2. ) Acerbic commentary, name-calling, ad hominem, and libel, as has consumed the group today, is not - those posts cause severe harm and should most certainly, IMO, be removed as per the terms/rules quoted above.
    3. ) The level of cacophony and pejorative hate speech permitted to continue throughout the day is shameful. This is not the place to engage in or encourage such juvenile behavior, let alone permit it to foment and spread across the Fediverse as it has today!

    Moderation Team: Thank you, in advance, for taking your time to address and resolve this matter, returning this group to the decorum it usually enjoys with people conversing and observing the principles of civil discourse.

    TechDesk , to Random stuff
    @TechDesk@flipboard.social avatar

    LGBTQ+ advocacy group GLAAD published a report which claims Meta's moderation system allows anti-trans content on Facebook, Instagram and Threads to "flourish."

    https://flip.it/KYcTKl

    box464 , to Random stuff
    @box464@mastodon.social avatar

    An interesting find with self-labelling of posts (Content Warnings) on BlueSky compared to Mastodon.

    For whatever reason, you can only self-label posts that have images in them. So if your words are suggestive in some way, you're out of luck. There's nothing in their docs about requiring an image, but that's the way their app works (and some third-party apps).

    https://docs.bsky.app/docs/advanced-guides/moderation#self-labels
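
    (Going by the self-labels doc linked above, the label sits on the post record itself; a rough sketch follows, with illustrative values and the image embed elided:)

    ```python
    # Rough sketch of a Bluesky post record carrying a self-label, per the
    # self-labels documentation linked above. Values are illustrative; the
    # image embed details are elided because they aren't the point here.
    post_record = {
        "$type": "app.bsky.feed.post",
        "text": "an image post with a self-applied content label",
        "createdAt": "2024-05-01T00:00:00Z",
        "labels": {
            "$type": "com.atproto.label.defs#selfLabels",
            "values": [{"val": "sexual"}],  # one of the global self-label values
        },
        # "embed": {...}  # image embed omitted; the official app only offers
        #                 # self-labelling when an image embed is present
    }
    ```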

    iftas Bot , to Fediverse Moderation Discussion
    @iftas@mastodon.iftas.org avatar

    So many great conversations at FediForum - and great feedback from plenty of engaged attendees!

    We'd like to give you a taste of what you'll be seeing from IFTAS and all our heroic advisors and volunteers in the coming month.

    @moderation

    1/8

    Nonilex , to Random stuff
    @Nonilex@masto.ai avatar

    1928 poster of Hitler with tape on his mouth, claiming he is being silenced.

    The text under the image translates to:
    “One alone out of 2000 Million humans on earth is not allowed to speak in Germany".

    (damn this is EXACTLY . Also, someone show this to re the arguing they’re being censored by .)

    https://masto.ai/@Nonilex/112118531909720570

    18+ RustyBertrand , to Random stuff
    @RustyBertrand@mastodon.ie avatar

    Somebody on my last instance posted that the is antisemitic. The Hague.

    I replied with this quote. Below.

    Mastodon.social said I was distorting reality because the word Jews has been modified in the meme.

    I am Jewish BTW. Direct female bloodline.

    I do not see the "drastic font change" they see.

    The post was removed.

    Who is reporting this stuff?

    The quote can be googled. It is exactly correct.

    Also, I'm disabled, can't see too well at times.

    ErikvanStraten ,
    @ErikvanStraten@infosec.exchange avatar

    Rusty, I know it's tough.

    I'm 1/4 Jew and demonstrated against Zionism in Amsterdam on March 11.

    Fortunately I have not yet been "beaten up" for https://infosec.exchange/@ErikvanStraten/112088215353556778 (and in Dutch: https://www.security.nl/posting/833512/Invloed+van+eenzijdig+nieuws).

    However, I recently decided to quit my account on a Dutch "tech nerd" site (https://tweakers.net/ ) after ridiculous mismoderation of my reactions (followed by the type of pointless "discussion" you also ran into). IMO this is a pity because I made a lot of in-depth and critical contributions to that site (often upvoted, but also misread, misunderstood, or against common opinion and heavily downvoted). It makes me think of the title of an old song called "We fade to grey" (*) (by "Visage").

    I guess the European DSA law (https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en ) is a double-edged sword. It may achieve the exact opposite of what it intends.

    Like most people, I hate fake news. But who gets to decide the criteria?

    At what point does combating fake news, by silencing opinions (in particular when based on provided or deducible facts), become censorship, and/or increase polarization (*)?

    I guess a simple solution does not exist.

    (*) or gray / polarisation ;-)

    @RustyBertrand

    jdp23 , to moderators group

    FediForum is this week! @Lillian and I are planning on proposing a session on Fediverse moderation. Whether or not you'll be at FediForum, and whether or not you're a moderator, we'd love to get your input on some of the topics we hope to touch on:

    • what are some positive examples of moderation on the fediverse? This could be anything from tactics that work well, processes that let teams work together, cross-instance collaboration networks, useful documents ... the list goes on!

    • what resources exist to help new moderators -- and new instances?

    • how could the software evolve to better support moderation?

    (If you haven't ever been to an unconference before, it's a really interesting experience in self-organizing ... and Kaliya Young does a great job of creating a space for participation.)

    @moderators

    fEmber , to News from fediverse

    I would love to see a feature where two fedi instances could establish a "trust" relationship, which would automatically enable a few things:

    • direct relay: public posts are forwarded between the instances, as if they were on a relay together.
    • common emoji list: trusted instances can use each other's custom emoji.
    • auto-forward reports: any incoming reports are distributed to all trusted instances.
    • shared mod notes: moderation notes such as strikes and warnings are visible by mods of trusted instances.
    • report mod actions: when a moderator suspends or silences an account, a report is automatically sent to all trusted instances.
    • domain reputation: federation management could show a warning when some percentage of trusted instances have blocked one that you haven't reviewed.
    • user reputation: remote user management could show an indicator when a remote user has been suspended from one of your trusted instances.

    All of these would be individual toggles, and they would only be enabled after mutual consent on both sides of the connection. I think this could pair very well with the "local bubble" feature of Pleroma / Akkoma / Sharkey to create safe, trusted pockets of federation. It would also reduce the time necessary to detect and respond to emerging threats.
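
    (Purely as a sketch of how those toggles might be modelled, with hypothetical names, since no server implements this today:)

    ```python
    # Purely hypothetical sketch of a per-peer "trust" record with the
    # individual toggles proposed above; nothing like this exists yet in
    # Mastodon, Pleroma/Akkoma or Sharkey.
    from dataclasses import dataclass

    @dataclass
    class InstanceTrust:
        domain: str
        direct_relay: bool = False        # forward public posts both ways
        shared_emoji: bool = False        # allow use of each other's emoji
        forward_reports: bool = False     # fan incoming reports out to peers
        shared_mod_notes: bool = False    # strikes/warnings visible to peers
        report_mod_actions: bool = False  # notify peers of suspends/silences
        confirmed_by_peer: bool = False   # only active after mutual consent

    trusted = [
        InstanceTrust("friendly.example", direct_relay=True,
                      forward_reports=True, confirmed_by_peer=True),
    ]

    def active_toggles(t: InstanceTrust):
        """A toggle only takes effect once both sides have consented."""
        if not t.confirmed_by_peer:
            return []
        return [f for f in ("direct_relay", "shared_emoji", "forward_reports",
                            "shared_mod_notes", "report_mod_actions")
                if getattr(t, f)]

    print(active_toggles(trusted[0]))  # -> ['direct_relay', 'forward_reports']
    ```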

    #Fediverse #Fedi #FediAdmins #FediMods #Moderation #ModTools #ModTooling

    Lillian , to Random stuff
    @Lillian@mastodon.iftas.org avatar

    Are you passionate about the Fediverse's future? Have ideas to enhance the experience?

    Join our session at FediForum!
    Whether you're a moderator or not, your insights matter.

    @jdp23 and I will facilitate a discussion on:

    • What approaches, tactics, and processes work well?
    • What resources on moderation are useful or what would be useful if they existed?
    • Suggestions for Fediverse software improvements

    Can't attend? Share your insights in our survey!
    https://cryptpad.fr/form/#/2/form/view/0sU8JebkNU8lb1yTD0VFIFiX1HVXKB4r3At41z9gaNY/

    Lillian , to Random stuff
    @Lillian@mastodon.iftas.org avatar

    I’m teaming up with @iftas to create template documents, moderator training, and best practices guides tailored to support moderators within the Fediverse community.

    We'd greatly appreciate your input! Feel free to complete our survey to share your thoughts and suggestions.

    https://cryptpad.fr/form/#/2/form/view/0sU8JebkNU8lb1yTD0VFIFiX1HVXKB4r3At41z9gaNY/

    A million thanks to those who contribute their valuable insights!

    piefedadmin , to Random stuff
    @piefedadmin@join.piefed.social avatar

    Recently @siderea wrote a fantastic thread about social homogeneity, moderation, the design of social platforms and what they could be. They covered a lot of ground and I can’t respond to it all, so I’ll just pick some highlights:

    I cannot tell you how many conversations I have seen about the topic of “moderation” and how necessary it is in which nobody has ever bothered to set down what exactly it is that they think a moderator is supposed to accomplish.

    I mean, it’s all of them. I’ve been on the internet since the 1980s, and I have never seen anyone stop and actually talk about what they thought moderators were trying to do or should try to do.

    That sounds easy. I’ll take a shot at that, below.

    Also they draw a parallel between designing buildings and designing social platforms:

    Why should our societies tolerate the existence of irresponsibly designed and operated social media platforms, that increase violence and other antisocial behavior?

    Primarily buildings are built to be used, and as such they are tools, and we judge them, as we do all tools, by how fit they are for their purpose, whatever that might be.

    And the purposes of buildings are to afford various ways of people interacting or avoiding interacting.

    So architects think a lot about that. It’s a whole thing.

    Those who put together social media platforms need to think about the same sort of thing.

    Preach!

    The upshot is that we can do better than what we have in the past. We can go beyond the bare minimum of “delete the spam, ban the nazis” moderation. When we build social software the features it has will determine what kind of moderation is possible, what kind of interactions people will have. We should be intentional about that.

    I’d like to share some of my ideas for how we can do that but first, let’s get the basics covered:

    What I think a moderator is supposed to accomplish

    Obviously every online space is different and has its own values and priorities. What follows is what I consider to be the minimum necessary to avoid devolving into 4chan as soon as the normies arrive.

    The goal of moderators is to create a positive, inclusive, and constructive online community where users feel comfortable engaging in discussions and sharing their thoughts and ideas. To that end, their responsibilities include:

    1. Enforcing Community Guidelines:
      • Moderators ensure that users adhere to the forum’s rules and guidelines. This may involve removing or editing content that violates these rules.
    2. Fostering a Positive Atmosphere:
      • They work to create a welcoming and friendly atmosphere within the forum. This includes encouraging respectful communication and discouraging any form of harassment or bullying.
    3. Managing Conflict:
      • Moderators intervene when conflicts arise between users, helping to de-escalate situations and resolve disputes. This may involve mediating discussions or issuing warnings to users.
    4. Preventing Spam and Irrelevant Content:
      • They monitor the forum for spam, irrelevant content, or any form of disruptive behaviour. This helps maintain the quality of discussions and keeps the forum focused on its intended topics.
    5. Addressing Technical Issues:
      • Moderators often assist users with technical issues related to the forum platform. This includes addressing bugs, helping users navigate the site, and forwarding technical problems to the appropriate channels.
    6. Encouraging Positive Contributions:
      • Moderators actively encourage users to contribute positively to discussions. This can involve highlighting valuable contributions, providing constructive feedback, and recognizing members for their positive engagement.
    7. Applying Consequences:
      • When necessary, moderators may apply consequences for rule violations, such as issuing warnings, temporary suspensions, or permanent bans. This ensures accountability and helps maintain a healthy community.
    8. Staying Informed:
      • Moderators stay informed about the forum’s community and culture, as well as any changes in policies or guidelines. This helps them address issues effectively and stay responsive to the evolving needs of the community.
    9. Collaborating with Community Members:
      • Moderators listen to concerns and feedback from the community. Taking a collaborative approach helps build trust and ensures that the moderation team understands the community’s needs.

    Ok, cool. But:

    We can and should accomplish more

    When we think about moderation tools for a platform that serves millions of people, we are shaping the nature of social interactions on a grand scale. As we engineer these virtual societies, the question we need to ask ourselves is, “What is the nature of the society we want to create?” and within that, “What do we want moderation to accomplish that supports that nature?” and eventually “What software features do moderators need to do their work?”

    The nature of the society

    We want to create an ideal society where everyone is safe, respected, empowered, entertained and encouraged to grow and find meaning according to their individual free choices. Members of this online society contribute meaningfully and positively to the rest of society, support the actualisation of human rights for all and work to help democracy live up to its promise.

    Remember the 1990s, when the internet hadn’t been corrupted yet? Yeah. I do.

    What we want moderation to accomplish to maintain this ideal society

    Defining the Role of Moderation

    Moderation should not be a passive, reactive role. Instead, it should be proactive, shaping the community’s social dynamics intentionally. The first step towards this is defining what our platforms aim to achieve. Do we want a space for free and open discussions, a supportive community, or a platform for specific interests? This vision will shape the guidelines we develop, the tools we use, and the strategies we implement.

    Developing Clear Guidelines and Empowering Moderators

    Once we have our vision, we need to create a set of rules that align with this vision. These guidelines should be clear, easily accessible, and comprehensive. Moreover, we need to empower our moderators with the right tools and authority to enforce these guidelines. This can include features for deleting posts, banning users, or moving discussions.

    Investing in Technology

    Incorporating technology is crucial in supporting our moderators. Automated moderation tools can detect and remove inappropriate content, while algorithms can promote high-quality posts. Technology can also help in combating challenges like trolls who use new IP addresses to create accounts. Techniques like browser fingerprinting can identify users regardless of their IP, and restrictions on new accounts can deter trolls.
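
    (As a simplified, purely illustrative example of the "restrictions on new accounts" idea, with arbitrary thresholds:)

    ```python
    # Simplified illustration of the "restrict new accounts" idea mentioned
    # above: throttle posting and mentions until an account has some age.
    # The thresholds are arbitrary examples, not recommendations.
    from datetime import datetime, timedelta, timezone

    NEW_ACCOUNT_AGE = timedelta(days=3)
    NEW_ACCOUNT_POSTS_PER_HOUR = 5
    NEW_ACCOUNT_MAX_MENTIONS = 2

    def allowed_to_post(account_created_at, posts_last_hour, mention_count):
        age = datetime.now(timezone.utc) - account_created_at
        if age >= NEW_ACCOUNT_AGE:
            return True  # established accounts: no throttle
        return (posts_last_hour < NEW_ACCOUNT_POSTS_PER_HOUR
                and mention_count <= NEW_ACCOUNT_MAX_MENTIONS)

    # A day-old account spamming mentions gets throttled:
    day_old = datetime.now(timezone.utc) - timedelta(days=1)
    print(allowed_to_post(day_old, posts_last_hour=3, mention_count=8))  # False
    ```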

    Addressing Complex Issues

    Online communities also need to grapple with complex issues such as the formation of high-control groups, disinformation propagation, social isolation, and internet addiction. Tackling these problems requires more advanced tools and strategies:

    • For high-control groups, we need to implement robust reporting systems and use AI tools to detect patterns of manipulation.
    • To combat disinformation, we need to establish strong fact-checking protocols, possibly collaborating with external fact-checking organizations.
    • To mitigate social isolation and internet addiction, platforms can implement features to promote healthier usage, like reminders to take breaks or limits on usage time.
    • To manage trolls, we can use advanced techniques that track users beyond their IP address and limit the activities of new accounts until they show they can be trusted.

    Continuous Evaluation and User Education

    Finally, moderation should be an ongoing process of improvement and adaptation. We need to regularly review and update our strategies based on their effectiveness and changing conditions. Additionally, we need to educate our users about these issues and how to report them. An informed user base can greatly aid in maintaining a healthy community.

    In conclusion, moderation in online communities is not just about maintaining order but about intentionally shaping the dynamics of these spaces. As we navigate the digital age, we must recognize the power and responsibility we hold in engineering these virtual societies, and use it to create healthier, safer, and more inclusive communities.

    https://join.piefed.social/2024/03/07/moderation-the-design-of-social-platforms/

    Natasha_Jay , to News from fediverse
    @Natasha_Jay@tech.lgbt avatar

    I'm so impressed. I just reported a troll account who arrived today on mastodon.social

    Zero posts, blank profile, 36 wind-up replies including to a great person I follow, hence I spotted them

    Account was gone inside ten minutes - it's always nice to witness how fast it happens here. Good moderation matters.

    k_matusewicz , to News from fediverse
    @k_matusewicz@mastodon.social avatar

    Hey, Fediverse admins and moderators!

    For my MA thesis, I am looking to research moderation on Mastodon: specifically moderators' experiences with it, how it is enacted on their instance, and what it entails.

    Are you a moderator of an open registration, general interest instance? If you would be willing to participate in a one-off, 45-60 minute-long (online) interview, please reply to this toot or reach me at k.k.matusewicz@student.rug.nl. Thank you in advance!

    Boosts are appreciated:)

    box464 , to News from fediverse
    @box464@mastodon.social avatar

    Trying to look on the bright side of this past week's episode of Spamalot - it seems to have shaken some cobwebs loose and helped to identify servers that are critically out of date and have little to no active administration.

    Now how do admins and platform devs use that information to make things better next time? Is it a concern that there's now a collected list of unattended servers floating around?

    Raccoon , to News from fediverse
    @Raccoon@techhub.social avatar

    Just wanted to point out, with the ongoing spam attack on the Fediverse, that this kind of thing happens on every social network. The only reason you are hearing about it is that it's normal users who use the normal channels to talk about it, and we actually take it way more seriously than the corporate social networks do.

    Like, this represents most of the traffic on , and they don't seem to care, so it speaks volumes about the fact that you can see us talking about how we don't want that here.
