
Posts 27 · Comments 967 · Joined 1 yr. ago

  • That 'quote' in the title is not even remotely accurate.

    We’re not going to hit the climate goals anyway because we’re not organized to do it — and the way to do it is with the ways that we’re talking about now — and yes, the needs in this area will be a problem. But I’d rather bet on AI solving the problem than constraining it and having the problem, if you see my point.

  • Okay, admittedly 'a normal person' is quite a low bar. A reasonable, ethical person, then.

    They totaled a several-hundred-thousand-dollar car in front of a million people, and the insurance won’t cover it because they were texting. If he’s capable of learning from mistakes, I’m sure he has.

  • I get what you're saying, but to me that still just sounds like a timescale issue. I can't think of a scenario where we've improved something so much that there's absolutely nothing left to improve. With AI, we only need to reach the point where it has human-level cognitive capabilities; from there on, it can improve itself.

  • You seem to be talking about LLMs now, and I'm not. LLMs being a dead end is perfectly compatible with what I just said; we'll just try a different approach next. Even realising they're a dead end is itself another step towards AGI.

  • A chess engine is intelligent at one thing: playing chess. That narrow intelligence doesn't translate to any other skill, even if it's sometimes superhuman at that one task, like a calculator.

    Humans, on the other hand, are generally intelligent. We can perform a variety of cognitive tasks that are unrelated to each other, with our only limitations being the physical ones of our "meat computer."

    Artificial General Intelligence (AGI) is the artificial version of human cognitive capabilities, but without the brain's limitations. It should be noted that AGI is not synonymous with AI. AGI is a type of AI, but not all AI is generally intelligent. The next step from AGI would be Artificial Super Intelligence (ASI), which would not only be generally intelligent but also superhumanly so. This is what the "AI doomers" are concerned about.

  • Also is anyone else stupid enough to buy one?

    Because I'm pointing out that people don't only buy vehicles based on what's wise or optimal. For some, it's also a hobby, and they have different preferences as to what to drive. At the time I bought my current truck, a wagon would've been sufficient; I just went with what's essentially my childhood-dream car instead. I've since developed an actual need for one too, but even now, a van would be a little more practical. The truck is simply more fun and nicer looking, while being the same size.

  • If there were a giant asteroid hurtling toward Earth, set to impact sometime in the next 20 to 200 years, I’d say there’s definitely a need for urgency. A true AGI is something of an asteroid impact in itself.

  • AGI is inevitable unless:

    1. General intelligence is not substrate independent, and what the brain does cannot be replicated in silicon. However, since both are made of matter, and matter obeys the laws of physics, I see no reason to assume this.
    2. We destroy ourselves before we reach AGI.

    Other than that, we will keep incrementally improving our technology, and it's only a matter of time until we get there. It may take 5 years, 50, or 500, but it seems pretty inevitable to me.