Stubsack: weekly thread for sneers not worth an entire post, week ending 16th March 2025

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

54 comments
  • The Columbia Journalism Review does a study and finds the following:

    • Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
    • Premium chatbots provided more confidently incorrect answers than their free counterparts.
    • Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences (i.e., they ignored robots.txt; see the sketch after this list).
    • Generative search tools fabricated links and cited syndicated and copied versions of articles.
    • Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.
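
    (For reference: the Robot Exclusion Protocol is just a plain-text robots.txt file at a site's root, and a well-behaved crawler checks it before every fetch. Here's a minimal sketch of that check using Python's standard library — the "ExampleBot" user-agent and URLs are made up for illustration:)

      from urllib import robotparser

      # Fetch and parse the site's robots.txt (stdlib, no extra dependencies).
      rp = robotparser.RobotFileParser("https://example.com/robots.txt")
      rp.read()

      # A compliant crawler asks this question before every single request;
      # CJR's finding is that several chatbots' crawlers fetched pages they
      # had been told not to.
      if rp.can_fetch("ExampleBot", "https://example.com/some-article"):
          print("allowed to fetch")
      else:
          print("disallowed by robots.txt")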
    • this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)

      but also, hoo boy what a painful talk page

      • it's not actually any more painful than any wikipedia talk page, it's surprisingly okay for the genre really

        remember: wikipedia rules exist to keep people like this from each other's throats, no other reason

  • A hackernews doesn't think that LLMs will replace software engineers, but does think they'll replace structural engineers:

    https://news.ycombinator.com/item?id=43317725

    The irony is that most structural engineers are actually de jure professionals, and an easy way for them to both protect their jobs and ensure future buildings don't crumble to dust or get constructed without sprinkler systems is to simply ban LLMs from being used. No such protection exists for software engineers.

    Edit: the LW post under discussion makes a ton of good points, to the level of being worthy of posting to this forum, and then nails its colors to the mast with this idiocy:

    At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.

    Gotta reaffirm the dogma!

    • but A LOT of engineering has a very very real existential threat. Think about designing buildings. You basically just need to know a lot of rules / tables and how things interact to know what's possible and the best practices

      days since an orangeposter (incorrectly) argued with certainty, from 3 seconds of thought, about what they think is involved in a process: [0]

      it's so fucking frustrating to know how easy this bullshit is to see if you know a slight bit of anything, and doubly frustrating how much of the software world is this thinking. I know it's nothing particularly new and that our industry has been doing this for years, but scream

      • You basically just need to know a lot of rules / tables and how things interact to know what’s possible and the best practices

        And to be a programmer you basically just need to know a lot of languages / libraries and how things interact, really easy, barely an inconvenience.

        The actual irony is that this is more true of programming than of any other engineering profession, since programmers uniquely are not held to any standards whatsoever, so you can have skilled engineers and complete buffoons coexisting, often within the same office. There should be a Programmers' Guild or something where the experienced master would just slap you and throw you out if you tried something idiotic like using LLMs for code generation.

  • Huggingface cofounder pushes back against LLM hype, really softly. Not especially worth reading except to wonder whether high-profile skepticism pieces indicate a vibe shift that can't come soon enough. On the plus side, it's kind of short.

    The gist is that you can't go from a text synthesizer to superintelligence, framed as: a straight-A student who's really good at learning the curriculum at the teacher's direction can't really be extrapolated into an Einstein-type, think-outside-the-box genius.

    The word 'hallucination' never appears once in the text.
