The wrongful death lawsuit against several social media companies for allegedly contributing to the radicalization of a gunman who killed 10 people at a grocery store in Buffalo, New York, will be allowed to proceed.
I really don’t like cases like this, nor do I like how much the legal system seems to be pushing “guilty by proxy” rulings for a lot of school shooting cases.
It just feels very dangerous, and very likely to end badly, to set a precedent where, when someone commits an atrocity, essentially every person and thing they interacted with can be held accountable with nearly the same weight as if they had committed the crime themselves.
Obviously some basic civil responsibility is needed. If someone says "I am going to blow up XYZ school, here is how," and you hear that, yeah, that's on you to report it. But it feels like we're quickly slipping toward a point where you have to report vast numbers of people to the police en masse if they say anything even vaguely questionable, simply to avoid the potential fallout of being associated with someone who commits a crime.
It makes me really worried. I really think the internet has made it easy to 'justifiably' accuse almost anyone, or any business, of a crime whenever a person with enough power, or the state, needs them put away for a while.
I don't understand the comments suggesting this is "guilty by proxy". These platforms run algorithms designed to keep you engaged and, through their callousness, have allowed extremist content to remain visible.
Are we going to ignore all the anti-vaxxer groups who fueled vaccine hesitancy, which resulted in long-dead diseases making a resurgence?
To call Facebook anything less than complicit in the rise of extremist ideologies and conspiratorial beliefs is extremely short-sighted.
"But Freedom of Speech!"
If that speech causes harm, like convincing a teenager that walking into a grocery store and gunning people down is a good idea, you don't deserve to have that speech. Sorry, you've violated the social contract, and those people's blood is on your hands.
Please let me know if you want me to testify that Reddit actively protected white supremacist communities and even banned users who engaged in direct activism against those communities.
Back when I was on Reddit, I subscribed to about 120 subreddits. Starting a couple of years ago, though, I noticed that my front page really only showed content from 15-20 subreddits at a time, heavily weighted toward recent visits and interactions.
For example, if I hadn't visited r/3DPrinting in a couple of weeks, it slowly faded from my front page until it disappeared altogether. It got so bad that I ended up writing a browser automation script to visit all 120 of my subreddits at night and click the top link. This gave me back a more balanced front page that mixed in all of my subreddits and interests.
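For anyone curious, here is a minimal sketch of that kind of nightly script, written with Playwright in Python. The subreddit list is a placeholder, and the "a.title" selector is an assumption based on old.reddit.com's markup (which changes over time), so treat it as illustrative rather than a working implementation:

```python
# Rough sketch of the nightly "visit every subreddit" idea described above.
# Assumes Playwright is installed: pip install playwright; playwright install chromium
from playwright.sync_api import sync_playwright

SUBREDDITS = ["3DPrinting", "woodworking", "askscience"]  # placeholder list

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    for sub in SUBREDDITS:
        # Visit the subreddit's daily top listing.
        page.goto(f"https://old.reddit.com/r/{sub}/top/?t=day")
        top_link = page.locator("a.title").first  # selector is an assumption
        if top_link.count() > 0:
            top_link.click()  # register an "interaction" with the top post
            page.go_back()
    browser.close()
```

Run against a logged-in browser profile (not shown here), something like this would count as activity on every subscribed subreddit each night, which is what rebalanced the front page.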
My point is these algorithms are fucking toxic. They're focused 100% on increasing time on page and interaction with zero consideration for side effects. I would love to see social media algorithms required by law to be open source. We have a public interest in knowing how we're being manipulated.
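To put a toy model on that claim: the following is a made-up scoring function, not any platform's actual code, but it shows how ranking by predicted engagement multiplied by a recent-interaction decay makes unvisited communities effectively vanish. Every field name and constant here is invented for illustration:

```python
# Toy illustration only; invented fields and constants, no platform's real code.
import math
from dataclasses import dataclass

@dataclass
class Post:
    subreddit: str
    predicted_engagement: float  # the platform's guess at clicks / time-on-page

def feed_score(post: Post, hours_since_last_visit: float) -> float:
    # Recency of your last interaction with the community decays the score.
    recency_boost = math.exp(-hours_since_last_visit / 48)  # invented constant
    return post.predicted_engagement * recency_boost

# A sub visited an hour ago vs. one untouched for two weeks:
print(feed_score(Post("news", 1.0), 1))          # ~0.98
print(feed_score(Post("3DPrinting", 1.0), 336))  # ~0.0009, effectively invisible
```

Under a rule like this, nothing you subscribe to matters unless you keep interacting with it, which matches the fade-out behavior described above.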
I'd just like to show something about Reddit. Below is a post I made about how Reddit was literally harassing and specifically targeting me after I let slip in a comment one day that I was sober. I had never made such a comment before because my sobriety journey was personal, and I never wanted to define or pigeonhole myself as a "recovering person".
I reported the recommended subs and ads to Reddit Admins multiple times and was told there was nothing they could do about it.
I posted a screenshot to DangerousDesign and it flew up to like 5K+ votes in like 30 minutes before admins removed it. I later reposted it to AssholeDesign where it nestled into 2K+ votes before shadow-vanishing.
Yes, Reddit and similar platforms are definitely responsible for a lot of suffering and pain inflicted on humans in the pursuit of profit. After the post blew up and hit the front page, "magically" my home page didn't have booze-related ads/subs/recommendations any more! What a total mystery how that happened /s
The post in question was a perfect "outing" of how Reddit continually tracks and tailors the user experience specifically to exploit human frailty for its own gain.
Edit: Oh, and the hilarious part that many people won't let go of (when shown this) is that it says the recommendation was based on my activity in the Drunk subreddit, which I had never once visited, commented in, posted in, or even been aware of. So that just makes it worse.
"Noooo it's our algorithm we can't be held liable for the program we made specifically to discover what people find a little interesting and keep feeding it to them!"
I gave up reporting abuse on the major sites. Stuff that, if you said it in public with witnesses around, would get you investigated. Twitter was also bad about responding to reports with "this doesn't break our rules" when a) it clearly did, and b) it probably broke a few laws.
Sweet, I'm sure this won't be used by AIPAC to sue all the tech companies for somehow causing October 7th, like UNRWA, and force them to shut down or suppress all talk of Palestine. People hearing about a genocide might be radicalized by it; maybe we could get away with allowing discussion, but better safe than sorry: onto the banned-words list it goes.
This isn't going to end with the tech companies hiring a team of skilled moderators who understand the nuance between passion and radical intention and who try to preserve a safe space for political discussion; that costs money. This is going to end with a dictionary of banned and suppressed words.
I think there's definitely a case to be made that recommendation algorithms, etc. constitute editorial control and thus the platform may not be immune to lawsuits based on user posts.
Love Reddit's lies about taking down hateful content when they're 100% behind Israel's genocide of the Palestinians and will ban you if you say anything remotely negative about Israel's government. And the amount of transphobia on the site is disgusting. Let alone the misogyny.
I will testify under oath, with evidence, that Reddit, the company, has not only turned a blind eye to but also encouraged and intentionally enabled radicalization on their platform. It is the entire reason I am on Lemmy. It is the entire reason for my username. It is the reason I questioned my allyship with certain marginalized communities. It is the reason I tense up at the mention of turtles.
Honestly, good, they should be held accountable and I hope they will be. They shouldn't be offering extremist content recommendations in the first place.
So, I can see a lot of problems with this. Specifically the same problems that the public and regulating bodies face when deciding to keep or overturn section 230. Free speech isn't necessarily what I'm worried about here. Mostly because it is already agreed that free speech is a construct that only the government is actually beholden to. Message boards have and will continue to censor content as they see fit.
Section 230 basically stipulates that companies providing online forums (Meta, Alphabet, 4chan, etc.) are not liable for the content their users post. And part of the reason it works is that these companies adhere to strict guidelines with regard to content and, most importantly, moderation.
Section 230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
Reddit, Facebook, 4chan, et al. do have rules their users must follow in order to post. And for the most part, the communities on these platforms are self-policing. There just aren't enough paid moderators to make it work otherwise.
That being said, the real problem is that this indirectly challenges Section 230. It only barely skirts the question of whether the platforms can themselves be considered publishers, or are at all responsible for the content users post, but it very much attacks how users are presented with content to keep them engaged via algorithms (which is directly how these companies make their money).
Even if the lawsuits fail, this will still be problematic. It could lead to draconian moderation of what can be posted and by whom. So all race-related topics, regardless of whether they include hate speech, could be censored, for example. Politics? Censored. The discussion of potential new laws? Censored.
But I think it will be worse than that. The algorithm is what makes the ad space these companies sell so valuable. And this is a direct attack on that. We lack the consumer privacy protections to protect the public from this eventuality. If the ad space isn't valuable the data will be. And there's nothing stopping these companies from selling user data. Some of them already do. What these apps do in the background is already pretty invasive. This could lead to a furthering of that invasive scraping of data. I don't like that.
That being said, there is a point I agree with. These companies really do make their algorithms addictive, and those algorithms absolutely will push content at users. If that content is of an objectionable nature, so long as it isn't outright illegal, these companies do not care, because they gain from it monetarily.
What we actually need is data privacy protections. Holding these companies accountable for their algorithms is a good idea. But I don't agree that this is the way to do that constructively. It would be better to flesh out 230 as a living document that can change with the times. Because when it was written the Internet landscape was just different.
What I would like to see is platforms moderating content that is posted and represented as fact. We don't see that nearly enough on places like Reddit. Users can post anything as fact, and the echo chambers will rally around it if they believe it. It's not incredibly difficult to radicalise a person. But the platforms aren't doing that on purpose; the other users are, and the algorithms are helping them.
As much as I believe it is a breeding ground for right wing extremism, it's a little strange that 4chan is being lumped in with these other sites for a suit like this. As far as I know, 4chan just promotes topics based on the number of people posting to it, and otherwise doesn't employ an algorithm at all. Kind of a different beast to the others, who have active algorithms trying to drive engagement at any cost.
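A quick sketch of the distinction being drawn here, with invented field names: bump order is global and unpersonalized, while an engagement feed is ranked per user by a predictive model.

```python
# Illustrative contrast only (invented fields, not either site's real code):
# 4chan-style bump order vs. an engagement-driven personalized feed.
from dataclasses import dataclass

@dataclass
class Thread:
    title: str
    last_reply_at: float         # unix timestamp of the most recent reply
    predicted_engagement: float  # only a personalized feed computes this

def bump_order(threads: list[Thread]) -> list[Thread]:
    # Everyone sees the same board: most recently bumped thread first.
    return sorted(threads, key=lambda t: t.last_reply_at, reverse=True)

def engagement_feed(threads: list[Thread]) -> list[Thread]:
    # Personalized: ordered by whatever a model predicts keeps *you* scrolling.
    return sorted(threads, key=lambda t: t.predicted_engagement, reverse=True)
```

The first function has no notion of a user at all, which is the commenter's point about 4chan being a different beast.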
Are the platforms guilty or are the users that supplied the radicalized content guilty? Last I checked, most of the content on YouTube, Facebook and Reddit is not generated by the companies themselves.
Can we stop letting the actions of a few bad people be used to curtail our freedom on platforms we all use?
I don't want the internet to end up being policed by corporate AIs and poorly implemented bots (looking at you auto-mod).
The internet is already a husk of what it used to be, of what it could be. It used to be personal, customisable... dare I say it: messy and human...
...maybe that was serving a need that people now feel alienated from. Now we live as corporate avatars who risk being banned every time we comment anywhere.
What an excellent precedent to set; can't possibly see how this is going to become authoritarian. Oh, you didn't report someone? You're also guilty. Can't see any problems with this.
A New York state judge on Monday denied a motion to dismiss a lawsuit against several social media companies alleging the platforms contributed to the radicalization of a gunman who killed 10 people at a grocery store in Buffalo, New York in 2022, court documents show.
In her decision, the judge said that the plaintiffs may proceed with their lawsuit, which claims social media companies — like Meta, Alphabet, Reddit and 4chan — “profit from the racist, antisemitic, and violent material displayed on their platforms to maximize user engagement,” including the time then 18-year-old Payton Gendron spent on their platforms viewing that material.
“They allege they are sophisticated products designed to be addictive to young users and they specifically directed Gendron to further platforms or postings that indoctrinated him with ‘white replacement theory’,” the decision read.
“It is far too early to rule as a matter of law that the actions, or inaction, of the social media/internet defendants through their platforms require dismissal,” said the judge.
“While we disagree with today’s decision and will be appealing, we will continue to work with law enforcement, other platforms, and civil society to share intelligence and best practices,” the statement said.
“We are constantly evaluating ways to improve our detection and removal of this content, including through enhanced image-hashing systems, and we will continue to review the communities on our platform to ensure they are upholding our rules.”
They're appealing the denial of the motion to dismiss, huh? I agree this case really doesn't have legs, but I didn't know that kind of denial was an interlocutory appeal they could take. They'd win at summary judgment regardless.
Eh... anyone can be "radicalized" by anything. Is anyone suing Mecca when Islamic fundamentalists wage jihad against someone or something? Is anyone suing the Catholic Church over Christian fundamentalists doing the same thing?
Holding tech companies liable because some crazy dumbshit did a bad thing is disingenuous at best. The judge's ruling isn't going to stand.
So now anyone who says things is going to be held accountable for crazy people being crazy?
What a lovely world we live in. That's worse than what CNN kept saying about the Joker after the mass shooting at the theater that happened to be showing "The Dark Knight Rises" at the time.
What is it with libs and their inability to understand what the term "radicalize" means?
It's really simple. The white supremacist content this little alt-right child terrorist was watching wasn't radical; it was reactionary. I.e., right-wing and therefore anti-radical.
Come on libs, tell me... is this really so hard to get?
Lol no. Social media isn't responsible; the people on it are. I fucking hate this brain-dead logic of "well, punishing the bad person isn't enough, go after the manufacturer!"
Yeah, fuck it: next time someone is beaten to death with a power tool, hold DeWalt accountable. Next time someone plays loud music during their murder, hold Spotify accountable. So fucking retarded.