
Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 - awful.systems

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

174 comments
  • Evan Urquhart:

    I had to attend a presentation from one of these guys, trying to tell a room full of journalists that LLMs could replace us & we needed to adapt by using them. I couldn't stop thinking that an LLM could never be a trans journalist, but it could probably replace the guy giving the presentation.

  • Copy/pasting a post I made in the DSP driver subreddit that I might expand over at morewrite because it's a case study in how machine learning algorithms can create massive problems even when they actually work pretty well.

    It's a machine learning system, not an actual human boss. The system is set up to try to find the breaking point: if you finish your route on time it assumes you can handle a little bit more, and if you don't it backs off (a minimal sketch of this feedback loop follows below).

    The real problem is that everything else in the organization is set up so that finishing your routes on time is a minimum standard, while the algorithm that creates the routes is designed to make doing so just barely possible. Because it's not fully individualized, this means that doing things like skipping breaks and waiving your lunch (which the system doesn't appear to recognize as options) effectively push the edge of what the system thinks is possible out a full extra hour, and then the rest of the organization (including the decision-makers about who gets to keep their job) turns that edge into the standard. And that's how you end up where we are now, where actually taking your legally-protected breaks is at best a luxury for top performers or people who get an easy route for the day, rather than a fundamental part of keeping everyone doing the job sane and healthy.

    Part of that organizational problem is also in the DSP setup itself, since it allows Amazon to avoid taking responsibility or accountability for those decisions. All they have to do is make sure their instructions to the DSP don't explicitly call for anything illegal and they get to deflect all criticism (or LNI inquiries) away from themselves and towards the individual DSP, and if anyone becomes too much of a problem they can pretend to address it by cutting that DSP.
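
    For anyone who wants the mechanism spelled out, here's a minimal sketch of that staircase-style feedback loop. The function name, step size, and numbers are all hypothetical illustrations, not Amazon's actual system:

        def adjust_route_load(current_stops: int, finished_on_time: bool) -> int:
            """If the driver finished on time, assume they can handle a bit
            more; otherwise back off. The equilibrium is the breaking point."""
            if finished_on_time:
                return current_stops + 5   # push the load up a notch
            return current_stops - 5       # back off

        # The organizational failure mode: drivers skip breaks and waive
        # lunch to keep "finishing on time", so the loop keeps ratcheting
        # the target upward, and that inflated target becomes the standard.
        stops = 150
        for week in range(4):
            stops = adjust_route_load(stops, finished_on_time=True)
        print(stops)  # 170: the skipped breaks are now baked in

    The point isn't the step size; it's that the only signal the loop ever sees is "finished on time", so every off-the-books sacrifice gets read back as capacity.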

  • Ian Lance Taylor (of gold linker, Go, and other tech fame) had a take on chatbots and AGI that I was glad to see from an influential figure in computing. https://www.airs.com/blog/archives/673

    The summary: chatbots are not AGI, treating the current AI wave as the usher of AGI is misguided, and he all around dislikes, in a very polite way, that chatbot LLMs are seen as AI.

    Apologies if this was already posted back when it was published.

  • Sex pest billionaire Travis Kalanick says AI is great for more than just vibe coding. It's also great for vibe physics.

    • @TinyTimmyTokyo @blakestacey He has more dollars than sense, as they say. (Funnier if you say it out loud.)

    • My guess is that vibe-physics involves brute-forcing a problem until you find a solution. That method sorta works, but it's wholly inefficient and rarely robust/general enough to be useful.

      • Nah, he's just talking to an LLM.

        “I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”

        And I don't think you can brute-force physics in general: having to experimentally confirm or disprove every random-ass intermediary hypothesis the brute-force generator comes up with seems like quite the bottleneck (a toy sketch of just how lopsided that gets follows below).

      • If infinite monkeys with typewriters can compose Shakespeare, then infinite monkeys with slop machines can produce Einstein (but you need to pump infinite amounts of money into my CodeMonkeyfy startup first, just in case).
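
        Putting toy numbers on that bottleneck (everything here is invented for the sneer): generating hypotheses is effectively free, verifying them experimentally is not, and the ratio is the whole problem.

            import itertools
            import random

            def generate_hypothesis(i: int) -> str:
                # An LLM will emit these endlessly, at near-zero cost.
                return f"hypothesis #{i}"

            def run_experiment(hypothesis: str) -> bool:
                # Stand-in for months of lab time per test; almost every
                # random-ass intermediary hypothesis is simply wrong.
                return random.random() < 1e-9

            CHEAP_GUESSES = 1_000_000      # an afternoon of vibe physics
            EXPERIMENTS_PER_YEAR = 100     # the actual bottleneck

            guesses = (generate_hypothesis(i) for i in range(CHEAP_GUESSES))
            tested = itertools.islice(guesses, EXPERIMENTS_PER_YEAR)
            confirmed = [h for h in tested if run_experiment(h)]
            print(confirmed or "no breakthroughs this year")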

  • Remember last week when that study on AI's impact on development speed dropped?

    A lot of peeps' takeaway from this little graphic was "see, the impact of AI on sw development is a net negative!" I think the real takeaway is that METR, the AI safety group running the study, is a motley collection of deeply unserious clowns pretending to do science, and their experimental setup is garbage.

    https://substack.com/home/post/p-168077291

    "First, I don’t like calling this study an “RCT.” There is no control group! There are 16 people and they receive both treatments. We’re supposed to believe that the “treated units” here are the coding assignments. We’ll see in a second that this characterization isn’t so simple."

    (I am once again shilling Ben Recht's substack. A toy version of the design he's poking at is sketched below.)
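
    Concretely, in toy form (all numbers invented; the 1.2 slowdown factor only loosely echoes the study's headline result): the same 16 developers receive both treatments, so the "treated units" are coding assignments, not people, and there is no separate control group.

        import random

        random.seed(0)

        developers = [f"dev{i}" for i in range(16)]
        assignments = []
        for dev in developers:
            for _task in range(10):
                arm = random.choice(["AI", "no-AI"])  # both arms, same person
                hours = random.gauss(8, 2) * (1.2 if arm == "AI" else 1.0)
                assignments.append((dev, arm, hours))

        # A naive per-arm average treats this like a two-group experiment,
        # even though the same 16 people generated both arms and the tasks
        # in each arm aren't comparable units.
        for arm in ("AI", "no-AI"):
            hours = [h for _, a, h in assignments if a == arm]
            print(arm, round(sum(hours) / len(hours), 2))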

  • Previously sneered:

    The context for this essay is serious, high-stakes communication: papers, technical blog posts, and tweet threads.

    More recently, in the comments:

    After reading your comments and @Jiro's below, and discussing with LLMs on various settings, I think I was too strong in saying....

    It's like watching people volunteer for a lobotomy.

  • Haven't really kept up with the pseudo-news of VC-funded companies acquiring each other, but it seems Windsurf (previously courted by OpenAI) is now gonna be purchased by the bros behind Devin.
