
Posts 78 · Comments 881 · Joined 2 yr. ago

  • In other news, the mainstream press has caught on to "clanker" (originally coined for use in the Star Wars franchise) getting heavy use, with Rolling Stone, Gizmodo and Axios putting out articles on it, and NPR featuring it as its Word of the Week.

    You want my take? I expect it will retain heavy usage going forward - as I've said before (more than once), AI is no longer viewed as a "value-neutral" tool/tech, but as an enemy of humanity, whose use expresses contempt for humanity.

  • On a personal note, part of me expects this will see some adoption as an anti-scraping measure - unlike tarpits like Iocaine and Nepenthes, it won't take a significant amount of resources to implement, and its ability to crash AI scraper bots both wastes the AI corps' time by forcing them to reboot said scrapers and encourages them to avoid your website entirely.

  • New article from Matthew Hughes, about the sheer stupidity of everyone propping up the AI bubble.

    Orange site is whining about it, to Matthew's delight:

    Someone posted my newsletter to Hacker News and the comments are hilarious, insofar as they're upset with the tone of the piece.

    Which is hilarious, because it precisely explains why those dweebs love generative AI. They're absolutely terrified of human emotion, or passion, or naughty words.

  • They're allegedly useful for visually impaired users, but I strongly doubt Zuckerberg's thought of that particular untapped market, or is interested in taking advantage of it.

    I've already predicted Meta Rayban wearers would be assaulted in the street, but I wouldn't be shocked if visually impaired peeps steered clear of them as well - whatever accessibility boons they may grant, they aren't worth getting called a creepshotter and/or getting your ass beaten.

    (Sidenote: This is the second tech-accessibility-related shitshow I've seen so far - the first was some alt-text drama I ran into three days ago.)

  • Wikipedia also just upped their standards in another area - they've updated their speedy deletion policy, enabling the admins to bypass standard Wikipedia bureaucracy and swiftly nuke AI slop articles which meet one of two conditions:

    • "Communication intended for the user”, referring to sentences directly aimed at the promptfondler using the LLM (e.g. "Here is your Wikipedia article on…,” “Up to my last training update …,” and "as a large language model.”)
    • Blatantly incorrect citations (examples given are external links to papers/books which don't exist, and links which lead to something completely unrelated)

    Ilyas Lebleu, who contributed to the policy update, has described it as a "band-aid" that leaves Wikipedia in a better position than before, but not a perfect one. Personally, I expect this solution will be sufficient to permanently stop the influx of AI slop articles. Between promptfondlers' utter inability to recognise low-quality/incorrect citations, and their severe laziness and lack of care for their """work""", the risk of an AI slop article being subtle enough to avoid speedy deletion is virtually zero.

  • Cloudflare has publicly announced the obvious about Perplexity stealing people's data to run their plagiarism, and responded by de-listing them as a verified bot and adding heuristics specifically to block their crawling attempts.

    Personally, I'm expecting this will significantly hamper Perplexity going forward, considering Cloudflare's just cut them off from roughly a fifth of the Internet.

  • I’m sure you can think of hypothetical use cases for Google Glass and Meta AI RayBans. But these alleged non-creepshot use cases already failed to keep Google Glass alive. I predict they won’t be enough to keep Meta AI RayBans alive.

    It's not an intentional use case, but it's an easy way to identify people I should keep far, far away from.

    It turns out normal people really do not like this stuff, and I doubt the public image of tech bros has improved between 2014 and 2025. So have fun out there with your public pariah glasses. If you just get strong words, count yourself lucky.

    On a wider note, I wouldn't be shocked if we heard of Rapist RayBan wearers getting beaten up or shot in the street - if the torching of Waymos in anti-ICE protests, the widespread vandalism of Cybertrucks, and Luigi Mangione's status as a folk hero are anything to go by, I'd say the conditions are right for cases of outright violence against anyone viewed as supporting the techbros.

    EDIT: Un-fucked the finishing sentence, and added a nod to Cybertrucks getting fucked up.

  • Ran across a pretty solid sneer: Every Reason Why I Hate AI and You Should Too.

    Found a particularly notable paragraph near the end, about the people fixating on "prompt engineering":

    In fear of being replaced by the hypothetical ‘AI-accelerated employee’, people are forgoing acquiring essential skills and deep knowledge, instead choosing to focus on “prompt engineering”. It’s somewhat ironic, because if AGI happens there will be no need for ‘prompt-engineers’. And if it doesn’t, the people with only surface level knowledge who cannot perform tasks without the help of AI will be extremely abundant, and thus extremely replaceable.

    You want my take? I'd personally go further and say the people who can't perform tasks without AI will wind up borderline-unemployable once this bubble bursts - they're gonna need a highly expensive chatbot to do anything at all, they're gonna be less productive than AI-abstaining workers whilst falsely believing they're more productive, they're gonna be hated by their coworkers for using AI, and they're gonna flounder if forced to come up with a novel/creative idea.

    All in all, any promptfondlers still around after the bubble will likely be fired swiftly and struggle to find new work, as they end up becoming significant drags on any company's bottom line.

  • I don’t need that, in fact it would be vastly superior to just “steal” from one particularly good implementation that has a compatible license you can just comply with. (And better yet to try to avoid copying the code and to find a library if at all possible). Why in the fuck even do the copyright laundering on code that is under MIT or similar license? The authors literally tell you that you can just use it.

    I'd say it's a combo of them feeling entitled to plagiarise people's work and fundamentally not respecting the work of others (a point OpenAI's Studio Ghibli abomination machine demonstrated at humanity's expense).

    On a wider front, I expect this AI bubble's gonna cripple the popularity of FOSS licenses - the expectation of properly credited work was a major aspect of the current FOSS ecosystem, and with that expectation kneecapped by the automated plagiarism machines, programmers are likely gonna be much stingier with sharing their work.

  • Ran across a notable post on Bluesky recently - seems there's some alt-text drama that managed to slip by me:

    On a wider note, I wouldn't be shocked if the AI bubble dealt some setbacks to accessibility in tech - given the post I mentioned earlier, there are signs it's stigmatised alt-text as being an AI Bro Thing™.

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 18th May 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 11th May 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 4th May 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 27th April 2025

    NotAwfulTech @awful.systems

    AI crawler blocker Anubis gets deployed by the United Nations

    TechTakes @awful.systems

    "OpenAI Is A Systemic Risk To The Tech Industry"

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 20th April 2025

    TechTakes @awful.systems

    Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”

    NotAwfulTech @awful.systems

    EasyRPG Player 0.8.1 "Stun" released

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 13th April 2025

    SneerClub @awful.systems

    Big Tech Backed Trump for Acceleration. They Got a Decel President Instead

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 6th April 2025

    NotAwfulTech @awful.systems

    ReactOS 0.4.15 released

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 30th March 2025

    SneerClub @awful.systems

    "The questions ChatGPT shouldn’t answer"

    NotAwfulTech @awful.systems

    Tiny3D - High-Res Textures

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd March 2025

    TechTakes @awful.systems

    Yud follows up Sammy Boy's AI-Generated "Metafiction"

    MoreWrite @awful.systems

    Some More Off-The-Cuff Predictions about the AI Bubble

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 16th March 2025