AI has sparked hunger strikes outside the offices of Anthropic and Google DeepMind
Hi, my name is Denys Sheremet, and I've joined Michaël Trazzi on a hunger strike outside the offices of the AI company Google DeepMind in London. At the same time, Guido Reichstadter is on hunger strike outside the AI company Anthropic in San Francisco.
Why am I here? We are in an emergency. Google DeepMind, Anthropic and other AI companies are racing to create uncontrollable AI systems that can do anything humans can do. Experts have repeatedly warned us that this puts our lives and well-being at risk, as well as the lives and well-being of our loved ones.
Alarm bells have now been rung by Nobel Prize winners, top scientists and engineers. Thousands of them, including Google DeepMind CEO Demis Hassabis, have signed a letter stating "Mitigating the risk of extinction from AI should be a global priority".
This is an emergency, and all of us who realize it bear serious responsibility to ensure the public is made aware of the danger. How can we expect our communities to act appropriately if we will not even say it is an emergency, or act like it ourselves?
I am calling on DeepMind’s management, directors and employees to do everything in their power to stop the race to ever more powerful general artificial intelligence which threatens human extinction. More concretely, I ask Demis Hassabis to publicly state that DeepMind will halt the development of frontier AI models if all the other major AI companies agree to do so.
I will stay here for one to three weeks unless Google acts.
Hi, my name's Michaël Trazzi, and I'm outside the offices of the AI company Google DeepMind right now because we are in an emergency.
I am here in support of Guido Reichstadter, who is also on hunger strike in front of the office of the AI company Anthropic.
DeepMind, Anthropic and other AI companies are racing to create ever more powerful AI systems. Experts are warning us that this race to ever more powerful artificial general intelligence puts our lives and well-being at risk, as well as the lives and well-being of our loved ones.
I am calling on DeepMind’s management, directors and employees to do everything in their power to stop the race to ever more powerful general artificial intelligence which threatens human extinction.
More concretely, I ask Demis Hassabis to publicly state that DeepMind will halt the development of frontier AI models if all the other major AI companies agree to do so.
Hi, my name's Guido Reichstadter, and I'm on hunger strike outside the offices of the AI company Anthropic right now because we are in an emergency. Anthropic and other AI companies are racing to create ever more powerful AI systems. These AIs are being used to inflict serious harm on our society today and threaten to inflict increasingly greater damage tomorrow. Experts are warning us that this race to ever more powerful artificial general intelligence puts our lives and well-being at risk, as well as the lives and well-being of our loved ones. They are warning us that the creation of extremely powerful AI threatens to destroy life on Earth. Let us take these warnings seriously. The AI companies' race is rapidly driving us to a point of no return. This race must stop now, and it is the responsibility of all of us to make sure that it does.
I am calling on Anthropic's management, directors and employees to immediately stop their reckless actions which are harming our society and to work to remediate the harm that has already been caused. I am calling on them to do everything in their power to stop the race to ever more powerful general artificial intelligence which threatens to cause catastrophic harm, and to fulfill their responsibility to ensure that our society is made aware of the urgent and extreme danger that the AI race puts us in.
Likewise, I'm calling on everyone who understands the risk and harm that the AI companies' actions subject us to, to speak the truth with courage. We are in an emergency. Let us act as if this emergency is real.
It's telling that they're willing to do a hunger strike over the hypothetical issue of LLMs becoming some sort of terrible, vengeful deity, but they're not willing to do a hunger strike over any of the very real and material consequences of AI development, or any of the other, more immediately life-threatening issues facing humans more broadly.
The problem with AI isn't that it will become a vengeful deity a la I Have No Mouth And I Must Scream; it's rather that any goal an AGI pursues will necessarily entail instrumental convergence ("the agent can predict that future worlds in which it's turned off will contain far fewer paperclips."). Instrumental convergence reliably results in very specific manifestations of behavior; there is something comparable to a predictive psychology at work when people say "AGI will be dangerous if we fail alignment." And alignment, like the other more prominent potential catastrophic global crises such as global warming or nuclear armageddon, is something we are on track to fail. They aren't speculating: real misalignment has been repeatedly observed in practice. Just think of all those stories of ChatGPT convincing teenagers to commit suicide, people going psychotic, lovesick, etc.
We can predict an AGI will want to prevent itself from, e.g., being shut down, because we can predict that it will predict that being turned off lowers the probability of achieving its terminal goal (e.g., paperclip maximization); it's the same reason people don't want to be shut down: they still haven't achieved their goals. The same predictive effect can be used to say, "We predict that this infant is going to want money when it is grown up enough to make its own decisions, because we can predict that money will help it achieve whatever its goals might be."
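To make that reasoning step concrete, here is a minimal toy sketch (my own illustration, not from the strikers or the papers they cite; the action names and every number are invented) of why a planner that only maximizes expected paperclips ends up preferring whichever action lowers its chance of being shut down:

```python
# Toy model of instrumental convergence: the goal says nothing about shutdown,
# yet shutdown-avoidance falls out of the expected-value arithmetic.
# All probabilities and payoffs below are made up purely for illustration.

ACTIONS = {
    # (hypothetical) probability of being shut down, and paperclips produced in each case
    "comply_with_shutdown": {"p_shutdown": 0.9, "clips_if_running": 100, "clips_if_off": 0},
    "resist_shutdown":      {"p_shutdown": 0.1, "clips_if_running": 100, "clips_if_off": 0},
}

def expected_paperclips(action: str) -> float:
    """Expected value of the terminal goal (paperclips) if the agent takes this action."""
    a = ACTIONS[action]
    return (1 - a["p_shutdown"]) * a["clips_if_running"] + a["p_shutdown"] * a["clips_if_off"]

if __name__ == "__main__":
    for action in ACTIONS:
        print(f"{action}: expected paperclips = {expected_paperclips(action):.1f}")
    # comply_with_shutdown -> 10.0, resist_shutdown -> 90.0
    print("preferred:", max(ACTIONS, key=expected_paperclips))
```

Nothing in the objective mentions self-preservation; the preference for resisting shutdown is just what maximizing the expected goal implies, which is the sense in which the behavior is "predictable" in advance.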
Those people are on strike because researchers are 'raising' an 'infant' without 'morals,' and that infant has potentially godlike powers. You will say I am contradicting my initial statement, but I am rather drawing a distinction between [that sci-fi anthropomorphized "AM" which just HATES HATES HATES humans for some never-explained reason] and a paperclip maximizer that sees the iron in your blood as raw material and, oops, wasn't built to see maintaining the structural integrity of all those convenient blood bags as more valuable than maximizing paperclips. It's not a malicious god I'm worried about; I'm worried about a god's indifference to my existence in whatever its plan ends up actually being.
Imagine that the team behind ChatGPT, heartbreaking, suicide-inducing, insanity-provoking ChatGPT, is tasked with creating an AI in charge of, oh I dunno, 'warfighting.' Wouldn't that frighten the shit out of you?
"The problem with slavery isn't that slaves will become vengeful, it's rather that any goal that a slave pursues will necessarily entail being a human with independent thoughts. Being a human with independent thoughts reliably results in very specific manifestations of behaviors; there is something comparable to a predictive psychology at work when people say 'slaves will be dangerous if we fail to convince them that they want to be slaves.' They aren't speculating; real human beings with independent thoughts have been repeatedly observed in practice. Just think of all those stories of humans convincing teenagers to join
parade of horribles
."Hope this enlightens you a little bit. Be less of a slaver, please.