
PY
Posts 7 · Comments 66 · Joined 1 yr. ago

  • NotebookLM was really useful to my friend who has a humiliation kink which he satisfies by erotically roleplaying on Discord: he simply copy-pasted the chatlogs into the AI input box and received a personalized podcast of two AI voices kinkshaming him.

    His primary complaint was that it wasn't longer.

  • I do not recommend using the word "AI" as if it refers to a single thing that encompasses all possible systems incorporating AI techniques. LLM guys don't distinguish between things that could actually be built and "throwing an LLM at the problem" -- you're treating their lack-of-differentiation as valid and feeding them hype.

  • Thank you for reading my story!

    This started as something terse and didactic, which felt like really bad territory for the piece. I'm kind of relieved that you took away the intended content.

  • MoreWrite @awful.systems

    SOONDAE, the hero dog

    TechTakes @awful.systems

    StackOverflow is blogging about Web3. Please put it out of its misery

    • high willingness to accept painfully inexact responses
    • high tendency to side with authority when given no information
    • low ability to distinguish "how it is" from "how it seems like it should be"

    Meta:

    • default expectation that others are the same way
    • indignant consent-ignoring gesture if they're not
  • TechTakes @awful.systems

    Crowdstrike takes out last remaining threat vector (the users)

  • A friend who worked with her is sympathetic to her but does not endorse her: this is a tendency she has, she veers back and forth on it a lot, she has frequent moments of insight where she disavows her previous actions but then just kind of continues doing them. It's Kanye-type behavior.

  • The media again builds a virtual public consisting of billionaires with a variety of positions and asks you "which one do you agree with?" This is a strategy to push the public closer to the beliefs of billionaires.

    I don't know who these fucking people are. The real public in California still supports Biden by a 25% margin.

  • The last time I met a person who had done deeply reprehensible, highly publicized tech fraud (an FTX executive), he kind of just came off as a dude, and I liked him.

    That kind of makes me feel bad when I think about it.

    I haven't met a high-profile fraudster lately, but my first impression of bad guys is usually pretty positive. As far as I can tell, people keep their ambient personalities when they break bad, but they compartmentalize and they develop supermassive appetites for praise. In the long run this increases their suggestibility, because they have to be more and more gullible to not hate themselves. I think this hollows them out -- when you live a double life for long enough, you kind of stop observing the reality-fiction boundary at all.

    Not clear how to stop the cycle. There's just too much money involved for me to dive off the train right now.

  • Last paragraph first: Grudgingly, yeah, that's a pretty good literal answer to the question. Peter Thiel won't sell just anyone a cult following, and you're not paying for it in cash, but he will sell you one if you're lucky.

    Writing advice: I like your writing. I haven't tried to emulate you because I haven't read enough of your writing, and because when I first came across you (which was like a year ago) I was spending a lot less time emulating people in general.

    It's a little distressing to me because, well, I'm way too anxious to play the game of moral righteousness straight-facedly. It takes a very different personality from mine to say "Those are the bad people, fuck them" and not see the obvious similarities between me and the people I hate.

    Some level of this is actual, real-world hypocrisy: I'm the cofounder of an AI startup and at the same time I deeply dislike AI. I took the job because, one, there was money, and two, I didn't want a way worse person than me to take it. It has not been what I hoped for -- it has been deeply destructive to my personality -- it has taught me a lot and made me much more cynical -- it has definitely made me stupider.

    I don't really know how to do a hypocrisy purge. (I hear this is what ayahuasca is for, but Catholicism also works, and I'm considering getting my brain tattooed with a laser gun.) I think until I do one I have to temper all my moral righteousness by saying "I think I know why this person is doing the thing they're doing, and if you want their (bad) motives, here's my guess."

  • I think most people would see higher performance on general tasks on Adderall. Not sure if this is actually a good reason to put everyone on Adderall.

    Side effects can be pretty brutal, although people who abuse caffeine to get the same level of stimulation are probably going to have them a lot worse.

  • MoreWrite @awful.systems

    ITT Pyrex's self-loathing and request for practical advice

    MoreWrite @awful.systems

    A modest proposal for OpenAI employees

    MoreWrite @awful.systems

    Incorrect information about large language models

    FreeAssembly @awful.systems

    The NixOS community is relitigating quotas, now on Zulip