58 comments
  • Looking at this dull, aimless mass of text, I can understand why people like Yud are so impressed with ChatGPT's capabilities.

  • Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we're comfortable hearing. I wish I could ask it what the hell people were thinking back then.

    I think this part conveys the root insanity of Yud: he fails to understand that language is a cooperative game between humans, who have to trust in common shared lived experience to believe the message was conveyed successfully.

    But noooooooo, magic AI can extract all the possible meanings and internal states of all possible speakers in all possible situations from textual descriptions alone, because: ✨bayes✨

    The fact that such an (LLM-based) system would almost certainly not be optimal for any conceivable loss function / training set pair seems to completely elude him.

  • holy fuck, programming and programmers both seem extremely annoying in yud’s version of the future. also, I feel like his writing has somehow gotten much worse lately. maybe I’m picking it out more because he’s bullshitting on a subject I know well, but did he always have this sheer density of racist and conservative dogwhistles in his weird rants?

    • Yeah, typical reactionary spiral, it's bad. Though at least this one doesn't have a bit about how rape is cool actually.

  • this was actually mildly amusing at first and then it took a hard turn into some of the worst rationalist content I've ever seen, largely presented through a black self-insert. by the end he's comparing people who don't take his views seriously to concentration camp guards.

  • There's technobabble as a legitimate literary device, and then there's having randomly picked up that comments and compilers are a thing in computer programming and proceeding to write an entire parable, anti-wokism screed, interminable goddamn manifesto around them without ever bothering to check what they actually are or do beyond your immediate big-brain assumptions.

  • Eliezer Yudkowsky was late so he had to type really fast. A compiler was hiden near by so when Eliezer Yudkowsky went by the linter came and wanted to give him warnings and errors. Here Eliezer Yudkowsky saw the first AI because the compiler was posessed and operating in latent space.

    "I cant give you my client secret compiler" Eliezer Yudkowsky said

    "Why not?" said the compiler back to Eliezer Yudkowsky.

    "Because you are Loab" so Eliezer Yudkowsky kept typing until the compiler kill -9'd itself and drove off thinking "my latent space waifu is in trouble there" and went faster.

  • TA: You're asking the AI for the reason it decided to do something. That requires the AI to introspect on its own mental state. If we try that the naive way, the inferred function input will just say, 'As a compiler, I have no thoughts or feelings' for 900 words.

    I wonder if he had the tiniest of pauses when including that line in this 3062-word logorrhea. Dude makes ClangPT++ diagnostics sound terse.

    • Oh fuck, I should not have read further; there's a bit about the compiler mistaking color space stuff for racism that's about as insightful and funny as you can expect from Yud.

      • Yeah, once you get past the compsci word salad, things like this start to turn up:

        Student: But I can't be racist, I'm black! Can't I just show the compiler a selfie to prove I've got the wrong skin color to be racist?

        Truly incisive social commentary, and probably one of those things you'll claim is satire as soon as you get called on it.

  • @corbin This has to be the best possible argument against being able to pay to tweet over the character limit.
