  • I actually work on the architecture for current production AI systems, and whenever I mention approaches that already work fine and suggest we could control more powerful AI the same way, I get downvoted.

    LW isn't looking for practical technical solutions. They want plausible sci-fi that fits their narrative. Actually solving the problems they worry about would mean there's no reason for the cult to exist, so why would they upvote that?

    Overall, LW seems to have been dead wrong in predicting modern AI systems. They anticipated that there was this general-intelligence quality that would enable problem solving, escape, instrumental convergence, etc. However, what ended up working was approximating functions really hard. The existence of ChatGPT without a singularity is a crisis for LW. No longer can they safely pontificate and write Harry Potter/The Culture fanfiction; now they must confront the practical reality that the monsters under their bed look an awful lot more like dust bunnies.

  • Look more carefully at what the cult leader is asking for. He was asking for money for his project before; now he's tearing his hair out in despair because we haven't spent enough money on his project, and we'd better tell the aliens to give us another few months so we can spend more money on the cult project.

    He has been very careful not to say that we should do anything bad to the aliens, just to the people who don't agree with him about how we should talk to the aliens.

  • Content Warning: Ratspeak

    The practicalities of improving technology are generally skated over by singularitarians in favor of imagining technology as a magic number that you can just throw "intelligence" at to make it go up.

  • There's some actual gold in there

    If Aaronson were capable of respectful communication, then when the smoothie person got angry at him he would have considered the possibility that the anger was warranted and that he had done something wrong, and probably would have pretty quickly realized: "oh wait, I'm putting down money on this tray that I just picked up money from... that doesn't make any sense." But instead he immediately jumped to "there's something wrong with this smoothie shop worker." Everyone would have been saved a ton of grief if Aaronson had any respect for the people around him or any self-awareness. But he has neither.

    a person who repeatedly fails to execute fairly simple tasks in an airport that is specifically designed to make everything as strait [sic] forward as possible is otherwise a very successful human.

  • we did arrive, (barely) on time, the contemptuous American Airlines counter staff deliberately refused to check us in, chatting as we stewed impotently

    Major 'I'd like to speak to your manager' vibes.

    No empathy for the employees; their little mistakes are assumed to be deliberate. But when the same is assumed of him: the big burly men are here to take the nerds away, I knew it.

  • It's amazing that one can use so many words to describe such a simple concept. Astonishing lack of economy. I am not going to read or even skim that; I just thought it was funny that he was like 'I won, see?' and his proof is a dead link.

  • In my final year of high school debate

    Most self-aware rationalist.

    I’m not an expert about X, but it seems like most of the experts about X think X or are unsure about it. The fact that Eliezer, who often veers sharply off-the-rails, thinks X gives me virtually no evidence about X. Eliezer, while being quite smart, is not rational enough to be worthy of significant deference on any subject, especially those subjects outside his area of expertise. Still though, he has some interesting things to say about AI and consequentialism that are sort of convincing. So it’s not like he’s wrong about everything or is a total crank. But he’s wrong enough, in sufficiently egregious ways, that I don’t really care what he thinks.

    So close to being deprogrammed. So close. It's like when a kid finds out about the Easter Bunny but somehow still clings to Santa.

    He links to this article on Yudkowsky being wrong (warning: so long it has a whole 'why write this' section), which amuses me.

    Making basic errors that show you don’t have the faintest grasp on what people are arguing about, and then acting like the people who take the time to get Ph.Ds and don’t end up agreeing with your half-baked arguments are just too stupid to be worth listening to is outrageous.

    This, but for AI lol.

    If anyone would like to have a debate about this on YouTube...

    LW equivalent of fight me irl bro

  • Did the Jargon File fail to evolve, or did Raymond just stop maintaining it? The stagnation roughly coincides with when the full extent of his brain breakage became apparent (the "Anti-Idiotarian Manifesto").

    Or maybe it marks the end of the era when the Stallmanites, the corporate types (incl. Raymond), and the Linux fandom actually saw eye-to-eye.