Posts: 0 · Comments: 65 · Joined: 1 day ago

AI 2027

  • You are probably quite right, which is a good thing, but the authors take that into account themselves:

    "Our team’s median timelines range from 2028 to 2032. AI progress may slow down in the 2030s if we don’t have AGI by then."

    They cite an essay on this topic, which elaborates on the points you just mentioned:
    https://www.lesswrong.com/posts/XiMRyQcEyKCryST8T/slowdown-after-2028-compute-rlvr-uncertainty-moe-data-wall

    I will open a bottle of champagne if there is no breakthrough in the next few years, because then the pace will slow down significantly.
    But it still won't stop, and that is the thing.
    I myself might not be around any more if AGI arrives in 2077 instead of 2027, but my children will, so I am taking the possibility seriously.

    And pre-2030 is also not completely out of the question. Everyone has been quite surprised by how well LLMs work.
    There might be similar surprises in store for the other missing components, like world models and continuous learning, which is a somewhat scary prospect.

    And alignment is already a major concern even now; just think of "Mecha-Hitler", crazy fake videos, and bot armies pushing someone questionable's agenda...
    So it seems like a good idea to press for control and regulation, even if the more extreme scenarios are decades away, if they happen at all...

  • This is hilarious and puzzling at the same time. I mean, Meta has urged users to use their real names for years, and Neil does just that, and now he apparently gets shut down because he not only acts as himself but is named as himself on top of that!!1!

  • I think the point is not that it is really going to happen at that pace, but to show that it very well might happen within our lifetime. And afaik the authors have since adjusted the earliest possible point of a hard-to-stop runaway scenario to 2028.

    Kind of like the atomic Doomsday Clock, which has been oscillating between a quarter to twelve and a minute to twelve over the last decades, depending on active nukes and current politics. It helps to illustrate an abstract but nonetheless real risk with the maximum possible impact (annihilation of mankind; not fond of the idea...)

    Even if it looks like AI has been hitting some walls for now (which I am glad about) and is overhyped, it might not stay that way. So although AGI seems unlikely at the moment, taking the possibility into account, and perhaps slowing down to make sure we are not recklessly risking our own destruction, is still a good idea, which is exactly the authors' point.

    Kind of like scanning the sky with telescopes and doing DART-style asteroid research missions is still a good idea, even though the probability of an extinction-level meteorite event is low.

  • I am from Germany and actually still have several trillion marks lying around in my attic, inherited from my great-grandfather. Inflation was so high at some point that he couldn't spend his earnings fast enough once the curve went steeply exponential, and the money became basically worthless within days...

  • I'm using openrouter.ai, a service that gives access to a wide range of models and lets you easily switch between them on the fly (a rough sketch of what that looks like is below).

    Besides the major players, I can also use cloud-hosted instances of open models. These are often incredibly cheap, and you can select the ones that don't use your data for training.

    Typical use cases include language learning and copilot stuff for programming.
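
    For what it's worth, here is a minimal sketch of what the on-the-fly switching looks like, assuming OpenRouter's OpenAI-compatible endpoint and the Python openai package; the model IDs and the prompt are just illustrative, not a recommendation:

    ```python
    # Minimal sketch: talking to OpenRouter through its OpenAI-compatible API.
    # Model IDs are illustrative; see https://openrouter.ai/models for current ones.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # OpenRouter endpoint
        api_key="sk-or-...",                      # your OpenRouter API key
    )

    # Switching models on the fly is just a matter of changing the `model` string.
    for model in ("anthropic/claude-3.5-sonnet", "mistralai/mistral-7b-instruct"):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Explain the German dative case in one sentence."}],
        )
        print(model, "->", reply.choices[0].message.content)
    ```

    Since the API surface is the same for every model, comparing an expensive frontier model against a cheap open one is a one-line change.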