The Editors Protecting Wikipedia from AI Hoaxes

WikiProject AI Cleanup is protecting Wikipedia from the same kind of misleading AI-generated information that has plagued the rest of the internet.

LLMs should die in a fire.
I'd be on board if it were actually useful and accurate, but it has proven time and time again to be hot garbage 99% of the time, even as they shove it down everyone's throat. They keep talking about a new age of AI and how it's going to change the world, but so far it has only made the internet a worse place.
Just like with crypto and NFTs.
For me it's useful; the 99% garbage is hype and misuse. I'd like the exploitative nature of LLMs to die, rather than the technology itself.
Honestly the problem is that it simultaneously works too well and not well enough.
The truth is, it's proven time and time again to be hot garbage about 85% of the time. But that 15% of the time when it works great is why it's being shoved down our throats. That's what's ruining this for everyone: the fact that, on rare occasions, it does actually work...
It’s changing the world, but not in the way we want it to.
The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.
There's nothing fundamentally wrong with LLMs. Users just need to know their capabilities and limitations and use them correctly. Just like any other tool.
As far as Wikipedia is concerned, there is pretty much no way to use LLMs correctly, because nearly every major model probably includes Wikipedia in its training dataset, and using WP to improve WP is... not a good idea. It shouldn't require an essay to explain why creating and mechanising a feedback loop of bias in an encyclopedia is bad.
There's nothing fundamentally wrong with LLMs.
Disagree.
Just my daily dose of LLMentalist.
No matter how you look at it, Wikipedia is one of the modern wonders of the world; those who maintain and defend it are doing holy work. The availability of free, high-quality, publicly indexed and equitably accessible information about our modern world is such an under-appreciated gift.
Education is a powerful tool, but when most people hear "knowledge is power" they think of personal success or political might. Its true power, though, is on an evolutionary scale.
No other species in the history of our (known) universe has the capability to study the world and then share those conclusions with the next generation with high precision, like we do. It's absolutely fascinating. It's what sets us apart from the rest. It defines the human experience.
The reality is that the integrity of this mechanism (or rather, the democratization of said mechanism) is under threat. It always has been, but the nature of the threat has changed, and it's scary. I'm glad it is being protected, at least for now.
I swear there is rarely ever a time when Wikipedia isn't the best source to at least start looking into something. Definitely a modern wonder, assuming what you're looking for is on there.
Google, Microsoft, OpenAI, Anthropic and co should help fight this fight, their tools are the problem here.
No thank you. I would prefer if they had as little influence over anything in the world as possible.
OK, but we are only in this mess because of their products; it's only fair that they pick up the broom and help clean.
404 does not miss