
  • When arguing with people like Yudkowsky, you can never decisively 'win' or change his mind, because he and other doomers can always retreat to the classic hole: "You can't prove X is impossible! Nature isn't already perfectly optimal!" Searching for some kind of "hard limit" on how nature or technology can evolve will always come up empty-handed. Lots of really awful things are possible. (Lots of super fascinating things are also possible.) Searching for some singular hard reason why nature as it is will stay totally safe from future threats or change will likewise come up empty-handed.

    Capability is not interesting. Capability is not the real test. Economics is the real master of it, and specifically the open-system economics of the entire environment in which something is embedded. It's why Voyager, a probe planned, built, and launched with decades-old techniques and knowledge, is still SOTA for space exploration and contribution to science, while Starship remains a huge dark hole for money and talent.

    If I want to understand historical biology, I do not go looking for the alien intelligence and engineering capability that built it; I look for the environmental forces that contributed to it, and eventually supported its homeostasis.

  • Yes, I agree. My personal view is also that long-term energy maximization is synonymous with regulatory systems and with dealing with the complications of energy use. Paradoxically, long-term maximization is defeated by any naive short-term abuse. Only a naive understanding of physics supports the idea that you can simply produce and use more energy, just like that.

    Which is why these takes don't mean anything. It's just a rationalization for wanting money and doing stupid things without consequence.

  • I had kind of the same thought. Whoa, maximize long-term energy production? How novel. Let's get our best people right on that; thanks for mentioning it. Gosh, it didn't occur to anyone.

    I wonder when it will finally occur to them that the monetary system is literally a proxy for energy production and consumption, and that their entire philosophy might as well read: "make more $$$." I'll have to ask the stupid question again: what material difference is there between e/acc, EA, and delusion?

  • The irony in all this is that if they just dropped the utilitarianism and were just honest about feelings guiding their decision making, they could be tolerable. "I'm not terribly versed in the details of the gun violence issue, but I did care about malaria enough to donate to some functional causes." Ok, fine, you're now instantly just a normal person.

  • This whole debate is really just the question of closed systems vs. open ones. That's it. If you want a dystopia because you see yourself as the winner of the final optimization, or you demand that the outcome of the universe be knowable to you specifically, you will focus on closed-system thermodynamics. If you enjoy the creative beauty of nature and have the capacity to change your perspective in response to the unforeseen, you embrace open-system thermodynamics.

    So yeah, as with abuses of Bayesian logic, your desired outcome always reflects back on which assumptions you adopt. These takes tell you more about the person spouting them than any meaningful observation of life.

  • Takes like this are one of the many things I point to when showing how naive and misguided most x-risk-obsessed people are. And especially Mr. Altman.

    Despite widespread fears of synthetic gain-of-function attacks, it turns out it's actually really hard to create a new virus meaningfully stronger than the standard endemic ones that already exist. Many countries and labs have legitimately tried; there are lots of papers and research. It's really, really hard to beat nature at the microbiological scale: a virus has to be not only virulent, it has to contend with extremely unpredictable intermediate environments. The current endemic viruses got there through many mutations and adaptations inside environments where they were already at least somewhat successful (not in vitro). And in the end, what would be the point? Once a virulent virus breaks out, you have very little control. Either it works really well and backfires, or, far more likely, it doesn't do much at all but does piss other nations off.

    It's not impossible. But honestly, yeah, I don't comprehend x-riskers who obsess over this.

  • Desperation of delusion. "End of all value" => "I don't understand things, so I better at least have control!" I wonder if these kinds of people would send literal Nazis to my doorstep if I suggested that I don't have any stake either way in the "coin flipping on the end of my world view."

  • This is the push/pull abusive dynamic: feign sensitivity, deny negative implications as not their intention, but demand positive feedback for dangerous takes. EA believes that not being wrong or held accountable is the most important optimization, so all their positions come from having absolutely no stake in the real world consequences.

  • There's a difference between "can" and "cost". Code is syntactic and formal, true, but what about pseudocode that is perfectly intelligible to a human? There is, after all, a difference between sharing "compiled" code that is meant to be fed directly into a computer and sharing "conceptual" code that is meant to be contextualized into knowledge. After all, isn't "code" just the formalization of language, with a different purpose and trade-offs?
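    A toy sketch of that distinction (the example is my own, not from the thread): the same search idea written first as human-readable "conceptual" pseudocode, then as code formal enough for a machine to actually run.

    ```python
    # Conceptual form: intelligible to a human, but not runnable.
    #   to find x in a sorted list:
    #     look at the middle element; if it's too small, search the
    #     right half; if too big, the left half; repeat until found.

    # Formal form: the same idea, syntactically strict enough to execute.
    def binary_search(items, target):
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid          # found: return the index
            elif items[mid] < target:
                lo = mid + 1        # discard the left half
            else:
                hi = mid - 1        # discard the right half
        return -1                   # not present
    ```

    The comment above states the idea in four lines anyone can read; the formal version costs extra precision (indices, loop bounds, a sentinel return value) in exchange for being machine-checkable.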