# On Content Removal

Harmful, bigoted, or generally distasteful content isn’t as frequent on our site as it is elsewhere, but it still appears often enough that we sometimes get questions about why certain comments are left up. This post is meant to help you understand a bit about how we moderate.
Hey all,
Moderation philosophy posts started out as a personal exercise to put down some of the thoughts on running communities that I'd learned over the years. As the series continued, I started to involve the other admins more heavily in the writing and brainstorming. This most recent post involved a lot of moderator voices as well, which is super exciting! This is a community, and we want the voices at all levels to represent the community and how it's run.
This is probably the first of several posts on moderation philosophy and how we make decisions, intended as an exercise in bringing additional transparency to how we operate.
A major problem I encountered on another site was pedantry.
Often, people would make a nuisance of themselves by being deliberately obtuse and fixating on minor details, while not explicitly breaking the site’s rules. Though not overtly hateful or bigoted, pedantic comments could be remarkably exhausting and annoying. It could seem like someone was trolling, or trying to bait you into an argument, while skirting the rules to stay out of trouble themselves.
How do you moderate posts like that? Should they be reported?
Without downvotes, it will slowly bubble up to the top, because the only barrier is finding enough people gullible or ignorant (in the precise sense, not as an insult) enough to believe it. Or, if it's "pop culture misinformation," it rises to the top by virtue of being popular misinformation.
Neither of those is ideal for quality content or fact-based discussion and debate when vote counts exist, because more often than not, to a layman, more votes = more true.
We've seen this on every other platform with "the only direction is up" mechanics, precisely because the only direction is up.
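The mechanic described above can be sketched as a toy expected-score model (the function name and all the numbers here are hypothetical, purely to illustrate the point):

```python
def expected_score(p_up, p_down, viewers, downvotes_enabled=True):
    """Expected net score of a post after `viewers` people see it.

    p_up:   fraction of viewers who believe/like the post and upvote it
    p_down: fraction who would downvote it if they could
    """
    if downvotes_enabled:
        return viewers * (p_up - p_down)
    # "The only direction is up": skeptics can only abstain.
    return viewers * p_up

# A piece of misinformation: 20% believe it, 60% would push back.
with_downvotes = expected_score(0.2, 0.6, 1000)                            # sinks (negative score)
upvote_only = expected_score(0.2, 0.6, 1000, downvotes_enabled=False)      # rises (positive score)
```

Under this sketch, the same post that would be buried on a platform with downvotes instead climbs steadily on an upvote-only platform, as long as some fraction of viewers believes it.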
Another risk is promoting misinformed communities, whose members find comfort in each other because their shared, incorrect opinions about what should be fact-based truths find common ground. I don't think those are the kinds of communities Beehaw wants. Thankfully, community creation is heavily managed, which may mitigate or remove such risks entirely.
What I'm getting at is what will the stance be here? If beehaw starts fostering anti-intellectualism, will that be allowed to grow and fester? It's an insidious kind of toxicity that appears benign, till it's not.
To be clear, I'm not saying these things exist or will exist on Beehaw in a significant capacity. I am stating a hypothetical, based on the truth that there is always a subset of your population that is misinformed and will believe and spread misinformation, and that some of that subset will defend those views vehemently and illogically.
I would hate to see that grow in a place that appears to have all the quality characteristics I have been looking for in a community.
The lowest common denominator of social media will always push to normalize all other forms and communities. It's like a social osmosis. Most communities on places like Reddit failed to combat and avoid such osmosis. Will beehaw avoid such osmosis over time?
Is “It’s ok to punch a Nazi” or “It’s ok to execute a pedo” content acceptable?
And, tangentially related: do posts publishing “mugshots of criminals” as a kind of fetishism fit within the moderation philosophy here?
My personal ethos of moderation is to recognize, in written policy, that we all carry “us/them” biases, which can backdoor exceptions into content moderation standards. The backdoor is this: if someone is sufficiently and clearly “bad” in the eyes of the majority of the community, then it becomes okay to wish harm on or dehumanize them. In my opinion, we shouldn’t entertain these sorts of posts, both because of the harm done if the mob is wrong, and because of the harm to ourselves from indulging in this sort of pornography of moral certainty. As long as the broader culture holds that certain categories of people are okay to dehumanize, there is no real objective check on what is acceptable beyond the desire of that majority, even in a community like beehaw.org.
A tangible legal example of my personal philosophy is how human dignity is enshrined in the first article of the German Basic Law, which is the German Constitution. Article 1 reads:
> Human dignity shall be inviolable. To respect and protect it shall be the duty of all state authority.
>
> The German people therefore acknowledge inviolable and inalienable human rights as the basis of every community, of peace and of justice in the world.
>
> The following basic rights shall bind the legislature, the executive and the judiciary as directly applicable law.
My two cents here: if a social media policy is to succeed, it needs something akin to this in its “constitution.” Without it, too much room is left for moral relativism, with bad-faith actors, unrestrained and unconcerned by cultural norms, testing and pushing the limits of what they can get away with by dehumanizing their enemies off-platform. (I.e., imagine Pizzagate, and its ultimate effect on Beehaw if its premise were accepted by the broader community.)
I saw a very popular post on Beehaw yesterday that clearly fit this pattern, and it seems like content designed to test the relative limits of the moderation policy or philosophy of places like Beehaw.
I only have one very specific situational question. On Reddit I was permanently banned from r/politics because when Rand Paul tested positive for COVID, I commented "lol." Is that also considered unacceptable here? If it is I am fine with that, I just want to know what level of basic decency we're expected to show towards public figures we don't like so I can properly self-edit my tone. I am not going to go actively wishing harm on anyone but I thought this was a relatively innocuous comment when I made it and not deserving of a ban, much less a permanent one.
Words like "safe space" and "sanitized space" always seem loaded to me. They seem to imply that the real problem is that people are too sensitive or too easily offended, rather than the person initiating the harmful content.
From what I've seen, there isn't an instance that fully aligns with my own moderation philosophy, so I plan to stay here; but I don't necessarily agree with your approach, and I hope it's okay to say so.
What's the stance on discussing points of view on charged subjects?
For example, I got banned from Reddit for discussing the possible thought process of someone who might be attracted to minors. Reason for the ban: "sexualization of minors"... even though the content policy refers to the act itself, not to its discussion.
Is it allowed in here to discuss negative or controversial points of view expressed, or actions taken, by third parties? Or does it taint the whole discussion? Are there some particular "taboo" themes that would do that, while others might not? Would such discussions be allowed with a disclaimer of non-support, or get banned anyway?
I sometimes like to reflect on, and discuss, some themes that I understand some might find uncomfortable or even revolting. I also understand that there might be themes not allowed in the server's jurisdiction.
If this was the case, then I think a clear list of "taboo themes" could be useful to everyone, even if most of the moderation was focused on applying a more flexible set of rules.
Great read, thank you so much for sharing these; they help users build confidence about whether this is the right instance for them. Personally, beehaw.org has quickly become one of my favorite online spaces in a long time (as you can tell from my average of 10 comments per day since joining). I love how directly your philosophy of the distributed governance of the Fediverse aligns with my own. Nowhere else I've explored in the Fediverse have I seen this kind of deep shared understanding that the Fediverse is not a pooled cluster of compute resources, but instead a loosely associated grouping of self-governing online gathering places.
Keep being great. I have high confidence in this instance.
I already see Beehaw as a sanitized space, to be honest. It was the first instance I had signed up for, but I switched almost immediately due to the lack of content and constant defense of censorship. I can sympathize with people who may want a safe space of sorts, but a safe space is just an echo chamber, the same way that the right has created communities where no one can challenge their deranged views.
90% of posts I've seen in Beehaw have devolved into arguments of equity where everyone must take in every advantage or disadvantage that every marginalized group has ever experienced and factor that into their position, or they're guilty of posting from a "white" point of view, or else disenfranchising every group of minorities. Not to mention that thread about Affirmative Action, in which the comments seemed to espouse a purely Black point of view, not taking into account how it may have a positive effect on Asian admissions, and completely ignoring the discussion of how admissions should be merit-based no matter what (even if that means all of our ivy-league colleges are filled with Asian students, who historically place a much higher importance on education than the rest of the world).
I don't have high hopes for any sort of meaningful discussion happening here.
Does not check out, anyway. This is most definitely a "sanitized space". Just for liberals, not leftists. Reddit 2.0. https://beehaw.org/comment/606420
I've seen a couple of really ugly comments recently where a mod had replied, and I had to click on the person (wanting to block them) to realize they had been banned. I really hope a future Lemmy update shows very clearly when that happens, because right now it just looks like we're leaving the comment up. Leaving the comment up but showing the user as banned would be a relatively okay middle ground, I think.
A question I have about this is what happens when we have communities with diametrically opposite points of view on a topic. E.g., I'm a carnivore, and while I respect vegans/vegetarians, I completely disagree with them on fundamental levels. Both sides have logical arguments, but the foundations and life experiences are different. Does Beehaw have space for such opposing points of view, or does it lean to one side, opposing the other?
From a logistical standpoint: we simply cannot privilege your personal discomfort over anyone else’s, and we cannot always cater specifically to you and what you want. Your personal positions on right or wrong are not inherently more valid than someone else’s when weighing most questions of how we should moderate this space. There are often plenty of people who do not feel like you that we must also consider in moderation decisions.
This doesn't take into consideration forces of oppression, and is thus incorrect and very badly constructed. Was this jointly authored, or is it one admin's take alone?
It's great to hear from the mod team. I understand Beehaw as being a place that values respect, trust and discussion in good faith. I'd sum it up as "good vibes". I made note of a comment somewhere on here that I gauged as primarily intending to rile up OP (effectively "what is the point of this post"). Not a horrendous comment by any means, but I'd classify it as being "not nice".
Using Beehaw instead of other instances comes at the cost of missing out on places like lemmy.world, although they can certainly be used in parallel. In my view, the gain of being here is respectful conversation. I accept that some emotional volatility is to be expected when politics or the like are being discussed. Are users ever given a gentle nudge to "be(e) a little bit nicer next time"?
I just joined, so I can't really speak about all of this from a point of experience on Beehaw itself. It does seem like a lot of thought has been put into this document, which I very much appreciate. In fact, it is one of the things that drove me to sign up for Beehaw out of many other instances.
I do have plenty of experience moderating on "that other platform people are plenty mad at these days," and I would like to share a few things for your consideration, if that is alright. To be clear, nothing in my comment below is intended as judgment on your current approach and philosophy. These are mostly (tangentially) related things I wrote down or bookmarked over the years that might be useful or relevant.
As far as hate speech goes, there are indeed roughly the two approaches you outlined, although I do think reality often falls in between. I'd like to offer a caution about the most egregious types of hate speech. I very much don't think you'd leave those up, but I would like to share this story from a bartender [NSFW warning due to a Spanish Civil War poster with a dead child] about this sort of thing.
# On Community-Based Moderation

I do want to caution about something called "the fluff principle":
"The Fluff Principle: on a user-voted news site, the links that are easiest to judge will take over unless you take specific measures to prevent it." Source: Article by Paul Graham
What this means is basically the following: say you have two submissions:
An article - takes a few minutes to judge.
An image - takes a few seconds to judge.
So in the time it takes person A to read and judge the article, persons B, C, D, E, and F have already seen the image and made their judgment. Images rise to the top not because they are more popular, but simply because they take less time to vote on, so they gather votes faster.
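The arithmetic behind the fluff principle can be sketched like this (the function name, judge times, and approval rates are made-up numbers, just to illustrate the mechanic):

```python
def votes_gathered(window_s, judge_time_s, approval):
    """Votes a post collects in a time window, assuming a steady
    stream of viewers who each spend `judge_time_s` seconds judging
    the post, with a fraction `approval` of them upvoting."""
    viewers = window_s / judge_time_s
    return viewers * approval

# Ten minutes on the front page:
image = votes_gathered(600, 5, 0.5)      # quick to judge, only half like it
article = votes_gathered(600, 120, 0.9)  # slow to judge, 90% like it
# The quick-to-judge image out-votes the better-liked article.
```

Even though the article is approved by nearly twice as many of its viewers, the image wins simply because far more people can judge it in the same window.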
This unfortunately also applies to various types of unsavory/bigoted speech. In fact, I believe I remember reading that beehaw did de-federate from some other instances due to problems coming from them. So it seems you are aware of the principle, if only due to experience.
tl;dr Some waffling about moderation and me generally appreciating that thought is being put into it on this platform :)