
Stubsack: weekly thread for sneers not worth an entire post, week ending 6th July 2025

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, happy 4th July in advance...I guess.)

  • https://lists.w3.org/Archives/Public/public-swicg/2025Feb/0025.html

    found this while stalking @self@awful.systems's Mastodon: the people working on ActivityPub want to shoehorn AI into it somehow.

    • including possible effects on the protocols from issues like such as AI fuzzing attempts, to social engineering by AI's,

      "You know those massive problems we already had going back decades? Well what if the same problems happened in the future but with the letters 'A' and 'I' prepended? Scary!"

      through to how we deal with and approach and facilitate Avatars and Agents.

      Gosh darn it, don't tell me LLM hype is going to ruin the existing definition of "agent" already well established in web standards.

  • Aella popped up on doomscroll - https://youtu.be/r7WL6kaTJnw

    E: oh man the comments are great

    E2:

    At 1:08:02: "There's a lot of discussion among the rationalist community about the uneven distribution of IQ and its correlation with race. Why is this a topic that people fixate on, if they're also convinced of this ultra-intelligence, an AGI that's like smarter than every human on the planet? Why are these marginal differences so important to people?"

    • Highlights from the comments: @wjpmitchell3 writes,

      Actual psychology researcher: the problem with IQ is A) We don't really know what it's measuring, B.) We don't really know how it's useful, C.) We don't really know how context-specific it is, D.) When people make arguments about IQ, it's often couched around prejudiced ulterior motives. No one actually cares about IQ; they care about what it's a proxy measure of and we don't have good evidence yet to say "This is a reliable and broadly-encompassing representation of intelligence." or whatever else, so if you are trying to use IQ differences to say that there are race differences in intelligence, you have no grounds. The best you can say is there are race differences in this proxy measure that we're still trying to understand. It's dangerous to use an unreliable and possibly inaccurate representation of a phenomena to make policy changes or inform decisions around race. The evidence threshold has to be extremely high because we're entering sensitive ethical spaces, which is something that rationalist don't do well in because their utilitarian calculus has difficulty capturing the intangibles.

      @arnoldkotlyarevsky383 says,

      Nothing wrong with being self educated but she comes across as being not as far along as you would want someone to be in their self-education before being given a platform.

      @User123456767 observes,

      You can kind of tell she grew up as a Calvinist because she still seems to think she's part of the elect she's just replaced an actual big G God with some sort of AI God.

      @jaredsarnie3712 begins,

      I feel like so much of what she says boils down to finding bizarre hypothetical situations where child sexual abuse is morally acceptable.

      And from @Fruuuuuuuuuck:

      Doomscroll gooner arc

      • One thing I have wondered about: the rats always have that graphic where the gap between Einstein's IQ and the village idiot's is almost imperceptible next to the IQ of the super robo god. If that's the case, why the hell do we only want our best and brightest doing "alignment research"? The village idiot should be almost just as good!

  • Actually burst a blood vessel last weekend raging: Gary Marcus was bragging about his prediction record in 2024 being flawless.

    Gary continuing to have the largest ego in the world. Stay tuned for his upcoming book "I am God" when 2027 comes around and we are all still alive. Imo some of these are kind of vague and I wouldn't argue with someone who said reasoning models are a substantial advance, but my God, the LW crew fucking lost their minds. Habryka wrote a goddamn essay about how Gary is a fucking moron and a threat to humanity for underplaying the awesome power of super-duper intelligence, and a worse forecaster than the big-brain rationalists. To be clear, Habryka's objections are, overall, extremely fucking nitpicky, totally-missing-the-point dogshit in my pov (feel free to judge for yourself):

    https://xcancel.com/ohabryka/status/1939017731799687518#m

    But what really made me want to drive a drill into the brain was the LW brigade rallying around the claim that AI companies are profitable. Are these people straight up smoking crack? OAI and Anthropic do not make a profit, full stop. In fact they are setting billions of VC money on fire?! (Strangely, some LWers in the comments seemed genuinely surprised that this was the case when shown the data; just how unaware are these people?) Oliver tries and fails to do Olympic-level mental gymnastics by saying TSMC and NVIDIA are making money, so therefore AI is extremely profitable. In the same way, I presume gambling is extremely profitable for degenerates like me, because the casino letting me play is making money. I rank the people of LW as minimally truth-seeking and big dumb out of 10. Also, weird fun little fact: in Daniel K's predictions from 2022, he said that by 2023 AI companies would be so incredibly profitable that they would be easily recouping their training costs. So I guess monopoly money that you can't see in any earnings report is the official party line now?

    • Gary Marcus has been a solid source of sneer material and debunking of LLM hype, but yeah, you're right. Gary Marcus has been taking victory laps over a bar set so, so low by promptfarmers and promptfondlers. Also, side note: his negativity towards LLM hype shouldn't be misinterpreted as general skepticism towards all AI... in particular, Gary Marcus is pretty optimistic about neurosymbolic hybrid approaches; it's just that his predictions and hypothesizing are pretty reasonable and grounded relative to the sheer insanity of LLM hypesters.

      Also, new possible source of sneers in the near future: Gary Marcus has made a lesswrong account and started directly engaging with them: https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai

      Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He'll start to use lesswrong lingo and terminology, and to quote P(some event) figures based on numbers pulled out of his ass. Maybe he'll even start to be "charitable" to meet their norms and avoid downvotes (I hope not; his snark and contempt are both enjoyable and deserved, but I'm not optimistic, based on how the skeptics and critics within lesswrong itself learn to temper and moderate their criticism on the site). Lesswrong will moderately upvote his posts when he is sufficiently deferential to their norms and window of acceptable ideas, but won't actually learn much from him.

    • "I wouldn’t argue with someone who said reasoning models are a substantial advance"

      Oh, I would.

      I've seen people say stuff like "you can't disagree the models have rapidly advanced" and I'm just like yes I can, here: no they didn't. If you're claiming they advanced in any way please show me a metric by which you're judging it. Are they cheaper? Are they more efficient? Are they able to actually do anything? I want data, I want a chart, I want a proper experiment where the model didn't have access to the test data when it was being trained and I want that published in a reputable venue. If the advances are so substantial you should be able to give me like five papers that contain this stuff. Absent that I cannot help but think that the claim here is "it vibes better".

      If they're an AGI believer then the bar is even higher, since in their dictionary an advancement would mean the models getting closer to AGI, at which point I'd be fucked to see the metric by which they describe the distance of their current favourite model to AGI. They can't even properly define the latter in computer-scientific terms, only vibes.

      I advocate for a strict approach: like a physicist dismissing any claim containing "quantum" but no maths, I will immediately dismiss any AI claims if you can't describe the metric you used to evaluate the model and isolate the changes between the old and new versions to evaluate their efficacy. You know, the bog-standard shit you'd always put in the Experimental section of any CS systems paper. A minimal sketch of what I mean is below.
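      To put numbers on that demand, here's a toy sketch of the bog-standard experiment: score the old and new model on the same held-out test set, then run a paired bootstrap to see whether the claimed improvement survives resampling. Everything here (the "models", the data, the accuracy metric) is a made-up stand-in, not any real benchmark.

      ```python
      import random

      random.seed(0)

      # Held-out test set the models never saw during training (simulated here).
      test_set = [(f"question {i}", f"answer {i}") for i in range(500)]

      def old_model(question: str) -> str:
          # Stand-in: answers ~70% of questions correctly.
          q_id = question.split()[-1]
          return f"answer {q_id}" if random.random() < 0.70 else "wrong"

      def new_model(question: str) -> str:
          # Stand-in: answers ~74% of questions correctly.
          q_id = question.split()[-1]
          return f"answer {q_id}" if random.random() < 0.74 else "wrong"

      def score(model, data):
          # 1 if the model's answer matches the reference answer, else 0.
          return [int(model(q) == a) for q, a in data]

      old_scores = score(old_model, test_set)
      new_scores = score(new_model, test_set)
      n = len(test_set)

      # Paired bootstrap over test items: does the new model's advantage
      # survive resampling, or is it within noise?
      diffs = []
      for _ in range(10_000):
          idx = [random.randrange(n) for _ in range(n)]
          diffs.append(sum(new_scores[i] - old_scores[i] for i in idx) / n)
      diffs.sort()
      lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]

      print(f"old accuracy: {sum(old_scores) / n:.3f}")
      print(f"new accuracy: {sum(new_scores) / n:.3f}")
      print(f"95% CI for the improvement: [{lo:+.3f}, {hi:+.3f}]")
      ```

      If that interval straddles zero, the "rapid advance" is indistinguishable from noise.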

      • To be clear, I strongly disagree with the claim. I haven't seen any evidence that "reasoning" models actually address any of the core blocking issues: reliably working within a given set of constraints, being dependable enough to perform symbolic algorithms, or any serious solution to confabulations. I'm just not going to waste my time with curve-pointers who want to die on the hill of NeW sCaLiNG pArAdIgM. They are just too deep in the kool-aid at this point.

    • It's kind of a shame to have to downgrade Gary to "not wrong, but kind of a dick" here. Especially because his sneer game as shown at the end there is actually not half bad.

  • An interesting takedown of "superforecasting" from Ben Recht: a 3-part series on his substack where he accuses so-called superforecasters of gaming scoring rules rather than actually being precogs. First (and least technical) part linked below...

    https://www.argmin.net/p/in-defense-of-defensive-forecasting

    "The term Defensive Forecasting was coined by Vladimir Vovk, Akimichi Takemura, and Glenn Shafer in a brilliant 2005 paper, crystallizing a general view of decision making that dates back to Abraham Wald. Wald envisions decision making as a game. The two players are the decision maker and Nature, who are in a heated duel. The decision maker wants to choose actions that yield good outcomes no matter what the adversarial Nature chooses to do. Forecasting is a simplified version of this game, where the decisions made have no particular impact and the goal is simply to guess which move Nature will play. Importantly, the forecaster’s goal is not to never be wrong, but instead to be less wrong than everyone else.*

    *Yes, I see what I did there."
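    To make the scoring-rule point concrete: forecasters in this game are graded with a proper scoring rule, typically the Brier score, and "less wrong than everyone else" just means a lower average score. A toy sketch (all forecasts and outcomes below are made up):

    ```python
    # Brier score: mean squared error between forecast probabilities
    # and the 0/1 outcomes. Lower is better; a perfect forecaster scores 0.
    def brier(forecasts, outcomes):
        return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(outcomes)

    outcomes = [1, 0, 0, 1, 1, 0, 1, 0]                # what Nature played
    hedger = [0.7, 0.3, 0.4, 0.6, 0.8, 0.2, 0.7, 0.3]  # calibrated, cautious
    zealot = [1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0]  # maximally confident

    print(f"hedger Brier score: {brier(hedger, outcomes):.3f}")  # 0.095
    print(f"zealot Brier score: {brier(zealot, outcomes):.3f}")  # 0.125
    ```

    The hedger never commits fully and is never exactly right, yet beats the supremely confident zealot the moment the zealot whiffs a single call, which is roughly the dynamic Recht accuses the superforecasters of optimizing for.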

  • Ed's got another banger: https://www.wheresyoured.at/make-fun-of-them/

    What's extra fun is that HN found it: https://news.ycombinator.com/item?id=44424456

    There's at least one (if not two if you handle the HN response separately) good threads that could be made from this. Don't have the time personally at the moment.

    I will say that I'm shocked to see some reasonable shit in the HN comments: people saying the post is too long or that its tone is unacceptable are getting told off rather respectably, with some good explanations (effectively: this was written this way intentionally, you dolt). Broken clock and all that, I guess.

    • Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while it's not that the rabbit hole doesn't go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.

      I'm not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can't do what they do so that you don't ask the incredibly obvious questions about why it's so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don't know what kinda excuse the business idiots and political bullshitters are going to come up with.

      • You're absolutely right that the computer is still a black box to a lot of people, but throughout the personal computing era, there has at least been a pathway to mastery for the tools it offers. Furthermore, the touchscreen/smartphone era has roped in mechanisms of touch and proprioception that make the devices a more intimate, if deeply imperfect, extension of the self. Up until sometime late last decade, the Steve Jobs "bicycle for the mind" concept was still a driving force in the field.

        I still don't think most people grasp what a subtle, but fundamental, break it is that these AI products demand you confront them as a wholly separate entity from yourself. The path to mastery, and the feedback loop that builds that path, is so obscure it may as well not exist. If you wish to retrain a model, you've got to invest huge amounts of time and resources, as well as what remains a specialized (and not well-specified, as Ed highlights) skillset... and since it's a probabilistic process, you're still not going to get consistent results.

        I am more and more convinced that one of the damning core flaws of the current crop of AI technologies is that they are designed to incentivize use of centralized computing resources. Their designers are simply asking completely the wrong questions for the people the technologies are being imposed upon. But you can't say that someplace like HN, or even some parts of Bluesky, because so many people's salaries still depend on the rents from centralized computing.
