Based on that one Senate hearing, it looks like big companies like Facebook, Discord, and Twitter are aiming for the maximum possible rate of false positives and false negatives when it comes to CSAM.
The only thing I know about that screenshot is that it used to say "show results anyway," which is probably worse in most cases.
With any luck this will destroy them and funnel disgruntled users our way, where the servers are too numerous to ever fully take down and many aren't even US-based anyway.
Unfortunately, I don't think so. Most of the politicians were virtue signaling, asking impossible questions and demanding timetables they were never going to get. One woman actually had some half-decent data prepared, but I don't think anybody else was really taking it seriously.
Now if there was some legislation passed, specifically stuff that wasn't KOSA, that would be something else. KOSA seems prepped to simply destroy free speech on the internet, and it would mostly harm smaller social media networks that don't have lawyers and around-the-clock moderators to police every single comment and post.
Here's a hot tip. If you're on Android, open the developer settings and turn on "demo mode" before taking screenshots. It makes the battery and signal display as 100% so you don't get judged by internet commenters who don't go outside.
I got caught in a horrible recommendations loop because I'd liked family photos of my nieces and cousins running and doing gymnastics.
I never reach that point on Facebook. I scroll through about 5 posts to see what my family and friends might be up to, get too frustrated with unmoderated spam, report it as spam, close the tab, and move on.
This is the key reason why I don't have Instagram. People who call TikTok "creepy" are, I think, genuinely just Western xenophobes trying to hide how disgusting Instagram and Snapchat are... and I have never used TikTok other than through links I receive from friends.
One of the biggest problems with the internet today is that bad actors know how to manipulate or dodge content moderation to avoid punitive consequences. The big social platforms are moderated by the most naive people in the world. It's either that or willful negligence. Has to be. There's just no way these tech bros who spent their lives deep in internet culture are so clueless about how to moderate content.
I know them. I worked in this industry. They're not naive. What basis do you have for these comments?
I think you're conflating them with the business executives running said social and gaming companies. Stop calling them tech bros. Meta is not a tech startup. It's a transnational corporation with capitalist execs running it.
The thing is that words can have a very broad range of meaning depending on who uses them and how (among many other factors), and you can't accurately code all of that into a form computers can understand. Even ignoring bad actors, it makes certain things very difficult, like if you ever want to search for something that just happens to share words with something completely different that is very popular.
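To make the point concrete, here's a toy sketch (not any platform's actual system; the blocked term is hypothetical, borrowed from elsewhere in the thread) of why a keyword blacklist with no notion of context is doomed to false positives:

```python
# A naive blacklist filter: it only sees tokens, never intent,
# so any query sharing a word with a blocked term gets flagged.
BLOCKED_TERMS = {"halo"}  # hypothetical blocked term for illustration

def is_blocked(query: str) -> bool:
    """Flag a query if any of its words appear in the blacklist."""
    return any(token in BLOCKED_TERMS for token in query.lower().split())

print(is_blocked("Halo Infinite speedrun tips"))   # flagged: a game search
print(is_blocked("renaissance halo painting"))     # flagged: an art search
print(is_blocked("master chief cosplay"))          # not flagged
```

Both innocent searches trip the filter because the matcher has no idea which sense of the word is meant, which is exactly the "shares a word with something popular" problem.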
Auto-moderation is lazy and only going to get worse. Not saying there isn't some value in hard-banning things (like very specific spam, shit that just keeps responding to everything with the same thing non-stop). But these mega outlets/sites want to use full automation to ban shit without any human interaction at all. At least unless you or another corp has connections on the inside to get a person or people to fix it. Just like how they make it so fucking hard to ever reach a person when calling (or even trying to find) a support line.
This automated shit just blacklists more and more and can completely fuck over people who use those sites for income (and they can't even reach a person when their income is cut off for false reasons, and don't get back-pay for the period of a strike/ban). The bad guys will always just keep moving to a new word or phrase as the old ones get banned. So we as users are actually losing words and phrases while the actual shit just moves on to the next one without issues.
I agree with you is the TL;DR, and the rest is just my mad ranting opinions about companies being allowed to just auto-censor us. So feel free to completely ignore the rest. lol.
It's like banning words and phrases just because bad people use them has become the norm. I really, really can't stand the way channels on YT constantly have to self-censor basically everything (even if the video is just reporting on or trying to explain bad shit that is or has happened). And it never seems to actually stop the issues from happening. It just means the bad people move on to a new word or phrase that is then itself banned. It isn't about actually stopping fucked-up shit from happening. It's just about making sure advertisers and other sources of money don't throw a fit.
We always hear about how places like China are bad in part for censoring words and speech. But in the US and other Western nations we pretend we are allowed to speak freely, uncensored. We have always had censoring of speech; it's just that the real rulers of the country are allowed to do it instead. Keeps the government's hands legally clean of enforcing it on us. Shit like CP is fucked, and it should be handled for what it is, but allowing for-profit companies, and especially their algorithms/AI, to decide what we can and can't say or search for without any human interaction, which very much leads to false bans, is also fucked.
It is waaaay too easy for all the mega corps to completely take down channels and block creators from revenue from their own work, fully automated. But the accused channel can't ever get a real person to explain what and who is attacking them, or to hear why their strikes/bans aren't valid. I have heard that even channels that have gotten written/legal permission from a big studio to use a music clip or video segment (music being the worst) will STILL catch automated strikes for copyright violations.
We don't need actual government censors, because the mega corps with all the money are allowed to do it for them. We have rights, but they don't really matter if a private company, or an org made up of people from various mega corps, is allowed to do the censoring instead.
At one point, "cheese pizza" was a term they apparently used on YouTube videos etc due to it having the same abbreviation as CP (Child Pornography).
This in turn was why the Podesta emails led to the whole Pizzagate thing. There were a bunch of emails with weird phrasings, like going to "do cheese pizza" for a couple of hours, that just aren't how people talk or write, so internet weirdos thought it was pedo code, and then it kinda went insane from there.
Remember, searching for "halo" is banned because it could potentially be linked to pedophilia, but editing a video of the president to look like a pedophile is fine because "it wasn't done with AI."
Biden was edited to look like he was groping his granddaughter for an extended amount of time instead of quickly putting a pin above her breast. It was posted to Facebook/Instagram/Meta. AI wasn't used.
I actually don't know. I'm not sure it's even possible (I never used Instagram; the search might be auto-submitting for all I know), but intentionally flagging yourself as a potential child abuser, for clout, is a bit extreme...
Barely 2 years ago I noticed that people were posting porn on Insta, and it was publicly visible just because they tagged #cum as #cüm. I don't think this is possible now, but basically corporations are dumb and people posting disallowed content can be creative as hell.
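The "#cüm" trick works because a literal string match doesn't know that "ü" is just "u" with an accent. A minimal sketch of both the evasion and one standard countermeasure (Unicode NFKD normalization plus stripping combining marks), assuming nothing about Instagram's actual pipeline:

```python
import unicodedata

BANNED_TAGS = {"cum"}  # hypothetical blacklist for illustration

def fold(tag: str) -> str:
    """Decompose accented characters, then drop the combining marks:
    "cüm" -> "cu" + U+0308 + "m" -> "cum"."""
    decomposed = unicodedata.normalize("NFKD", tag)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print("cüm" in BANNED_TAGS)        # literal match misses the evasion
print(fold("cüm") in BANNED_TAGS)  # normalized form is caught
```

Of course this only closes one trick; people move on to leetspeak, homoglyphs from other scripts, zero-width characters, and so on, which is the whack-a-mole the rest of the thread is complaining about.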
I had a post of mine flagged for multiple days on there because it had an illustration of a woman in a full-length wool coat, completely covering her and not in any way sexual. Shit is so stupid.
We beat KOSA before, we can beat it again. Contacting your reps matters. Voting matters, especially in primaries and locals. So does being active politically in other ways.
It's the "Kids Online Safety Act". Basically it's using the old "think of the children!" move, but in reality conservatives are trying to push anything queer back into the dark.
I'm fine with abducting children for a Super-Soldier program. But I draw the line at having photos of them on Instagram. Honestly, a deserved warning. Be better 👏