Stubsack: weekly thread for sneers not worth an entire post, week ending 1st June 2025

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Interesting (in a depressing way) thread by author Alex de Campi about the fuckery at Unbound/Boundless (crowdfunding for publishing, which segued into financial incompetence and stealing royalties), whose latest incarnation might be trying to AI its way out of the hole it has dug for itself.

    From the liquidator’s proposals:

    We are also undertaking new areas of business that require no funds to implement, such as starting to increase our rights income from book to videogaming by leveraging our contacts in the gaming industry and potentially creating new content based on our intellectual property *utilizing inexpensive artificial intelligence platforms*.

    (emphasis mine)

    They don’t appear to actually own any intellectual property anymore (due to defaulting on contracts), so I can’t see this ending well.

    Original thread, for those of you with bluesky accounts: https://bsky.app/profile/alexdecampi.bsky.social/post/3lqfmpme2722w

  • Loose Mission Impossible Spoilers

    The latest Mission Impossible movie features a rogue AI as one of the main antagonists. On the other hand, the AI's main powers are lies, fake news, and manipulation, and it only gets as far as it does because people let fear make them manipulable and because it relies on human agents to do much of its work. So rather than promoting the doomer narrative, the movie could actually be read as opposing it, in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) against AIs that ultimately have only moderate power.

    Adding to the post-LLM-hype predictions: I think that after the LLM bubble pops, "Terminator"-style rogue-AI movie plots won't go away, but will take on a different spin. Rogue AIs' strengths are going to be narrower, their weaknesses are going to get more comical and absurd, and idiotic human actions are going to be more of a factor. For weaknesses it will be less "failed to comprehend love" or "cleverly constructed logic bomb breaks its reasoning" and more "forgets what it was doing after getting drawn into too long a conversation". For human actions it will be less "its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement" and more "its makers disabled every safety and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster".

  • So far away we wait for the AGI
    For the billions all wasted and gone
    We feel the pain of compute time lost in a few thousand days
    Through the sneering and the flames we carry on

  • Another critihype article from the BBC, far too credulous toward the idea of supposed AI consciousness at the cost of covering the harms of AI as things stand, e.g. the privacy, environmental, and data-set bias problems:

    https://www.bbc.com/news/articles/c0k3700zljjo

    • Tried to read it, ended up glazing over after the first or second paragraph, so I'll fire off a hot take and call it a day:

      Artificial intelligence is a pseudoscience, and it should be treated as such.

      • Every AI winter, the label AI becomes unwanted and people go with other terms (expert systems, machine learning, etc.)... and I've come around to thinking this is a good thing, as it forces people to specify what it is they actually mean, instead of using a nebulous label with many science fiction connotations that lumps together decent approaches and paradigms with complete garbage and everything in between.

      • I'm gonna be polite, but your position is deeply sneerworthy; I don't really respect folks who don't read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:

        There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as "blindly optimistic and driven by human exceptionalism." … "We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn't mean they go together in general, for example in animals."

        At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:

        In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won't be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. "It will mean that we trust these things more, share more data with them and be more open to persuasion." But the greater risk from the illusion of consciousness is a "moral corrosion", he says. "It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives" – meaning that we might have compassion for robots, but care less for other humans.

        A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it's a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I'll try to salvage your position:

        Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it's definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.
