look AI doom is all very well, but we have extremely important message board drama to be getting on with

Three Missing Cakes, or One Turbulent Critic? — LessWrong

Some of the comments are, uh, really telling:
The irony is completely lost on them.
The OP replies that they meant the former... the latter is a better answer; Death with Dignity is kind of a big reveal of a lot of flaws with Eliezer and MIRI. To recap, Eliezer basically concluded that since he couldn't solve AI alignment, no one could, and everyone is going to die. It is like a microcosm of Eliezer's ego and approach to problem solving.
Yeah, no shit secrecy is bad for scientific inquiry and open and honest reflections on failings.
...You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer's self-hype), I would be even more pissed at him and sneer even harder at him. He basically set himself up as a critical savior to mankind, one of the only people clear-sighted enough to see the real dangers and the most important question... and then he totally failed to deliver. Not only that, he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!
As the cherry on top of this shit sundae, the bubble caused by said hype dealt devastating damage to the Internet and the world at large in spite of failing to create the unaligned AGI Yud was doomsaying about, and made people more vulnerable to falling for the plagiarism-fueled lying machines behind said bubble.
And some people are crediting Eliezer as if he predicted this devastating damage and not something completely different. Or they compare LLMs spewing shit to his scenarios of agentically and intelligently dangerous and manipulative AGIs.
So not only has Yud failed to properly align AI, he also failed to align the AI aligners. Time to burn down the sequences and start over.