
Credulous coverage of AI slop on Wikipedia

Everybody loves Wikipedia, the surprisingly serious encyclopedia and the last gasp of Old Internet idealism!

(90 seconds later)

We regret to inform you that people write credulous shit about "AI" on Wikipedia as if that is morally OK.

Both of these pages are somewhat less bad than they were when I first noticed them, but they're still pretty bad. I'm puzzled that the latter even exists: I had thought there were rules against making a whole page about a neologism, but either I'm wrong about that or the "rules" aren't enforced very strongly.

  • Just in case you needed to induce vomiting:

    The Universal AI University has implemented a novel admissions process, leveraging the Metaverse and Artificial Intelligence (AI) technologies. This system integrates optimization algorithms, crowd-generating tools, and visual enhancement technologies within the Metaverse, offering a unique and technologically advanced admissions experience for students.

  • Reflection (artificial intelligence) is dreck of a high order. It cites one arXiv post after another, along with marketing materials directly from OpenAI and Google themselves... How do the people who write this shit dress themselves in the morning without pissing into their own socks?

    • I also really don't enjoy the AI boom article.

      GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. [...] An upgraded version called GPT-3.5 was used in ChatGPT, which later garnered attention for its detailed responses and articulate answers across many domains of knowledge.

      Who wrote this? OpenAI marketing?

      • Let's see, it cites Scott Computers, a random "AI Safety Fundamentals" website, McKinsey (four times!), a random arXiv post...

    • and of course, not a single citation for the intro paragraph, which has some real bangers like:

      This process involves self-assessment and internal deliberation, aiming to enhance reasoning accuracy, minimize errors (like hallucinations), and increase interpretability. Reflection is a form of "test-time compute," where additional computational resources are used during inference.

      because LLMs don't do self-assessment or internal deliberation, nothing can stop these fucking things from hallucinating, and the only articles I can find for "test-time compute" are blog posts from all the usual suspects that read like ads, plus some arXiv post apparently too shitty to use as a citation.
