Skip Navigation

InitialsDiceBearhttps://github.com/dicebear/dicebearhttps://creativecommons.org/publicdomain/zero/1.0/„Initials” (https://github.com/dicebear/dicebear) by „DiceBear”, licensed under „CC0 1.0” (https://creativecommons.org/publicdomain/zero/1.0/)BL
Posts
78
Comments
881
Joined
2 yr. ago

  • You're dead right on that.

    Part of me suspects STEM in general (primarily tech, the other disciplines look well-protected from the fallout) will have to deal with cleaning off the stench of Eau de Fash after the dust settles, with tech in particular viewed as unequipped to resist fascism at best and out-and-proud fascists at worst.

  • I wrote yesterday about red-team cybersecurity and how the attack testing teams don’t see a lot of use for AI in their jobs. But maybe the security guys should be getting into AI. Because all these agents are a hilariously vulnerable attack surface that will reap rich rewards for a long while to come.

    Hey, look on the bright side, David - the user is no longer the weakest part of a cybersecurity system, so they won't face as many social engineering attempts on them.

    Seriously, though, I fully expect someone's gonna pull off a major breach through a chatbot sooner or later. We're probably overdue for an ILOVEYOU-level disaster.

  • SneerClub @awful.systems

    Sarah Lyons on AI doom crankery

  • Tante fires off about web search:

    There used to be this deal between Google (and other search engines) and the Web: You get to index our stuff, show ads next to them but you link our work. AI Overview and Perplexity and all these systems cancel that deal.

    And maybe - for a while - search will also need to die a bit? Make the whole web uncrawlable. Refuse any bots. As an act of resistance to the tech sector as a whole.

    On a personal sidenote, part of me suspects webrings and web directories will see a boost in popularity in the coming years - with web search in the shitter and AI crawlers being a major threat, they're likely your safest and most reliable method of bringing human traffic to your personal site/blog.

  • MoreWrite @awful.systems

    Some Quick-and-Dirty Thoughts on Technological Determinism

  • Well, what’s next, and how much work is it?

    I'm not particularly sure myself. By my guess, I don't expect one specific profession to be "what's next", but a wide variety of professions becoming highly lucrative, primarily those which can exploit the fallout of the AI bubble to their benefit. Giving some predictions:

    • Therapists and psychiatrists should find plenty of demand, as mental health crisis and cases of AI psychosis provide them a steady stream of clients.
    • Those in writing related jobs (e.g. copywriters) can likely squeeze hefty premiums from clients with AI-written work that needs fixing.
    • Programmers may find themselves a job tearing down the mountains of technical debt introduced by vibe-coding, and can probably crowbar a premium out of desperate clients as well. (This one's probably gonna be limited to senior coders, though - juniors are likely getting the shaft on this front)

    As for which degrees will come into high demand, I expect it will be mainly humanities degrees that benefit - either directly through netting you a profession that can exploit the AI fallout, or indirectly through showing you have skills that an LLM can't imitate.

    I didn’t want to be a computing professional. I trained as a jazz pianist

    Nice. You could probably earn some cash doing that on the side.

    At some point we ought to focus on the real problem: not STEM, not humanities, but business schools and MBA programs.

    You're goddamn right.

  • Not only that, the reported development of post-quantum cryptography (with NIST having released some finalised encryption standards last year) could give cybersec professionals a headstart on protecting everything if it fully comes to fruition (assuming said cryptography lives up to its billing).

    You want me to take a shot in the dark, I expect zero-knowledge proofs will manage to break into the mainstream before quantum computing becomes a thing - minimising the info you give out is good for protecting your users' privacy, and minimises the amount of info would-be attackers could work with.

  • The security guys aren’t interested in quantum computing either. Because it doesn’t exist yet. The report’s author seems surprised that “several interviewees believe that quantum computing is in an overhyped phase.”

    If and when quantum computing does start making waves, I expect the security guys will start loudly crowing about it.

    Its ostensible ability to break most regular encryption schemes over its knee would be a complete fucking nightmare for them, that's for sure.

  • Thomasaurus has given their thoughts on using AI, in a journal entry called "I tried coding with AI, I became lazy and stupid)". Unsurprisingly, the whole thing is one long sneer, with a damning indictment of its effectiveness at the end:

    If I lose my job due to AI, it will be because I used it so much it made me lazy and stupid to the point where another human has to replace me and I become unemployable.

    I shouldn't invest time in AI. I should invest more time studying new things that interest me. That's probably the only way to keep doing this job and, you know, be safe.

  • To extend that analogy a bit, the dunkfest I noted suggests that a portion of the public views STEM as perfectly okay with the orphan grinder's existence at best, and proud of having orphan blood on their hands at worst.

    As for the motorised orphan grinder you mention, it looks to me like the public viewed its construction as STEM voting for the Leopards Eating People's Faces Party (with predictable consequences).

  • Quick update: I've checked the response on Bluesky, and it seems the general response is of schadenfreude at STEM's expense. From the replies, I've found:

    Plus one user mocking STEM in general as "[choosing] fascism and “billions must die”" out of greed, and another approving of others' dunks on STEM over past degree-related grievances.

    You want my take on this dunkfest, this suggests STEM's been hit with a double-whammy here - not only has STEM lost the status their "high-paying" reputation gave them, but that reputation (plus a lotta built-up grievances from mockery of the humanities) has crippled STEM's ability to garner sympathy for their current predicament.

  • New article from the New York Times reporting on an influx of compsci graduates struggling to find jobs (ostensibly caused by AI automation). Found a real money shot about a quarter of the way through:

    Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York. That is more than double the unemployment rate among recent biology and art history graduates, which is just 3 percent.

    You want my take, I expect this article's gonna blow a major hole in STEM's public image - being a path to a high-paying job was one of STEM's major selling points (especially compared to the "useless" art/humanities degrees), and this new article not only undermines that selling point, but argues for flipping it on its head.

  • In more low-key news, the New Yorker's given public praise to Blood in the Machine, pulling a year-old review back into the public spotlight.

    Its hardly anything new (the Luddites' cultural re-assessment has been going on since 2023), but its hardly a good sign for the tech industry at large (or AI more specifically) that a major newspaper's decided to give some positive coverage to 'em.

    With that out the way, here's a sidenote:

    When history looks back on the Luddites' cultural re-assessment, I expect the rise of generative AI will be pointed to as a major factor.

    Beyond being a blatant repeat of what the Luddites fought against (automation being used to fuck over workers and artisans), its role in enabling bosses to kill jobs and abuse labour in practically every field imaginable (including fields that were thought safe from automation) has provided highly fertile ground for developing class solidarity.

  • Sam Altman is touting GPT-5 as a “Ph.D level expert.” You might expect a Ph.D could count.

    So let’s try the very first question: how many R’s are the in the word strawberry? GPT-5 can do the specific word “strawberry.” Cool.

    But I suspect they hard-coded that question, because it fails hard on other words: [ChatGPT]

    I LITERALLY SPECIAL-CASED THIS BASIC FUCKING SHIT TEN FUCKING MONTHS AGO AND I'M FUCKING DOGSHIT AS A PROGRAMMER HOW THE EVER-LOVING FUCK DID THEY COMPLETELY FUCKING FAIL TO SPECIAL-CASE THIS ONE SPECIFIC SITUATION WHAT THE ACTUAL FUCK

    (Seriously, this is extremely fucking basic stuff, how the fuck can you be so utterly shallow and creatively sterile to fuck this u- oh, yeah, I forgot OpenAI is full of promptfondlers and Business Idiots like Sam Altman.)

  • SneerClub @awful.systems

    AI disagreements - Brian Merchant on our very good friends

    TechTakes @awful.systems

    Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 10th August 2025

    MoreWrite @awful.systems

    Some Off-The-Cuff Predictions about the Next AI Winter

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 27th July 2025

    NotAwfulTech @awful.systems

    Godot Showcase - Dogwalk

    MoreWrite @awful.systems

    A Mini-Essay on Newgrounds' Resistance to AI Slop

    TechTakes @awful.systems

    Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 13th July 2025

    TechTakes @awful.systems

    Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 6th July 2025

    TechTakes @awful.systems

    Rolling the ladder up behind us - Xe Iaso on the LLM bubble

    TechTakes @awful.systems

    Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 29th June 2025

    MoreWrite @awful.systems

    Some More Quick-and-Dirty Thoughts on AI's Future

    NotAwfulTech @awful.systems

    We started porting LEGO Island to... everything?

    SneerClub @awful.systems

    The Psychology Behind Tech Billionaires

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 22nd June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 15th June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 8th June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 1st June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 25th May 2025