

The Lasker/Mamdani/NYT sham of a story just gets worse and worse. It turns out that the ultimate source of Cremieux's (Jordan Lasker's) hacked Columbia University data is a hardcore racist hacker who uses a slur for their name on X. The NYT reporter who wrote the Mamdani piece, Benjamin Ryan, turns out to have been a follower of this hacker's X account. Ryan essentially used Lasker as a cutout for the blatantly racist hacker.
It's starting to feel like I need to download a snapshot of Wikipedia now before it gets worse.
I'd be lying if I said the randomly generated narrative the LLM is stringing together isn't hilarious.
"I panicked and ran database commands without permission."
"I destroyed all production data."
"You immediately said 'No', ''Stop', 'You didn't even ask.'"
"But it was already too late."
The rest of that guy's blog is a fucking neofascist mess. That'll teach me to post a link without first checking out the writer.
Reading the comments led me to this entertaining sneer about our friends.
Daniel Koko's trying to figure out how to stop the AGI apocalypse.
How might this work? Install TTRPG aficionados at the chip fabs and tell them to roll a saving throw.
Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don't roll a 1 are destroyed on the spot.
And if that doesn't work? Koko ultimately ends up pretty much where Big Yud did: bombing the fuck out of the fabs and the data centers.
"For example, if a country turns out to have a hidden datacenter somewhere, the datacenter gets hit by ballistic missiles and the country gets heavy sanctions and demands to allow inspectors to pore over other suspicious locations, which if refused will lead to more missile strikes."
It's not that weird when you understand the sharks he swims with. Race pseudoscientists routinely peddle the idea that Ashkenazi Jews have higher IQs than any other ethnic or racial group. Scoot Alexander and Big Yud have made this claim numerous times. Lasker pretending to be a Jew makes more sense once you realize this.
You thought Crémieux (Jordan Lasker) was bad. You were wrong. He's even worse. https://www.motherjones.com/politics/2025/07/cremieux-jordan-lasker-mamdani-nyt-nazi-faliceer-reddit/
Oof, that Hollywood guest (Brian Koppelman) is a dunderhead. "These AI layoffs actually make sense because of complexity theory". "You gotta take Eliezer Yudkowsky seriously. He predicted everything perfectly."
I looked up his background, and it turns out he's the guy behind the TV show "Billions". That immediately made him make sense to me. The show attempts to lionize billionaires and is ultimately undermined not just by its offensive premise but by the world's most block-headed and cringe-inducing dialog.
Terrible choice of guest, Ed.
Sex pest billionaire Travis Kalanick says AI is great for more than just vibe coding. It's also great for vibe physics.
When you look at METR's web site and review the credentials of its staff, you find that almost none of them has any sort of academic research background. No doctorates as far as I can tell, and lots of rationalist junk affiliations.
I like his new framing of the accelerationists and transhumanists as pro-extinctionists.
Elon makes Grok developers install intrusive surveillance software on their laptops. They're being told to enable screen captures and URL tracking.
HN commenters are slobbering all over the new Grok. Virtually every commenter bringing up Grok's recent full-tilt Nazism gets flagged into oblivion.
Not gonna lie, it's fun reading those reddit posts from vibe coders, squealing like stuck pigs because their heavily subsidized code extruder stopped working.
What I don't understand is how these people didn't think they would be caught, with potentially career-ending consequences? What is the series of steps that leads someone to do this, and how stupid do you need to be?
What makes this worse than the financial crisis of 2008 is that you can't live in a GPU once the crash happens.
Roko has ideas
the silicon valley technofascists are the definition of good times breed weak men
"Ban women from universities, higher education and most white-collar jobs."
"Allow people to privately borrow against the taxable part of the future incomes or other economic activities of their children."
So many execrable takes in one tweet, and that's only two of them. I'm tempted to think he's cynically outrage-farming, but then I remember who he is.
Apparently the NYT hit-piece's author, Benjamin Ryan, is a subscriber to Jordan Lasker's (Cremieux's) substack.
Nate Soares: "Buy my book or everyone dies"
I think more people should say what they actually believe about AI dangers, loudly and often. Even (and perhaps especially) if you work in AI policy.…
Nate Soares and Big Yud have a book coming out. It's called "If Anyone Builds It, Everyone Dies". From the names of the authors and the title of the book, you already know everything you need to know about its contents without having to read it. (In fact, given the signature prolixity of the rationalists, you can be sure that it says in 50,000 words what could just as easily have been said in 20.)
In this LessWrong post, Nate identifies the real reason the rationalists have been unsuccessful at convincing people in power to take the idea of existential risk seriously. The rationalists simply don't speak with enough conviction. They hide the strength of their beliefs. They aren't bold enough.
As if rationalists have ever been shy about stating their kooky beliefs.
But more importantly, buy his book. Buy so many copies of the book that it shows up on all the best-seller lists. Buy so many copies that he gets invited to speak on fancy talk shows that will sell even more books.
Orange site censoring posts left and right as US descends further into fascism
The tech bro hive mind on HN is furiously flagging (i.e., voting into invisibility) any submissions dealing with Tesla, Elon Musk or the kafkaesque US immigration detention situation. Add "/active" to the URL to see.
The site's moderator says it's fine because users are "tired of the repetition". Repetition of what exactly? Attempts to get through the censorship wall?
"Tracing Woodgrains" starts a eugenics-oriented education policy "think-tank"
A think tank centered on orienting education towards a culture of excellence.
Sneerclubbers may recall a recent encounter with "Tracing Woodgrains", né Jack Despain Zhou, the rationalist-infatuated former producer and researcher for "Blocked and Reported", a podcast featuring prominent transphobes Jesse Singal and Katie Herzog.
It turns out he's started a new venture: a "think-tank" called the "Center for Educational Progress." What's this think-tank's focus? Introducing eugenics into educational policy. Of course they don't put it in those exact words, but that's the goal. The co-founder of the venture is Lillian Tara, former executive director of Pronatalist.org, the outfit run by creepy Harry Potter look-alikes (and moderately frequent topic in this forum) Simone and Malcolm Collins. According to the anti-racist activist group Hope Not Hate:
The Collinses enlisted Lillian Tara, a pronatalist graduate student at Harvard University. During a call with…
Casey Newton drinks the kool-aid
In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending the recent "The Curve" conference -- a conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session he attended at this conference, Casey said:
That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of super intelligent AI.
His view is that there is almost no scenario in which we could build a super intelligence that wouldn't either enslave us or hurt us, kill all of us, right? So he's been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.
Adderall in Higher Doses May Raise Psychosis Risk
Excerpt:
A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.
Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.
Grimes attends Curtis Yarvin's wedding
Maybe she was there to give Moldbug some relationship advice.
OK doomer
Some people think machine intelligence will transform humanity for the better. Others fear it may destroy us. Who will decide our fate?
The New Yorker has a piece on the Bay Area AI doomer and e/acc scenes.
Excerpts:
[Katja] Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include “Harry Potter and the Methods of Rationality,” a piece of fan fiction running to more than six hundred thousand words, and “The Sequences,” a gargantuan series of essays about how to sharpen one’s thinking.
[...]
A guest brought up Scott Alexander, one of the scene’s microcelebrities, who is often invoked mononymically. “I assume you read Scott’s post yesterday?” the guest asked [Katja] Grace, referring to an essay about “major AI safety advances,” among other things. “He was truly in top form.”
Grace looked sheepish. "Scott and I are d…
Since age 12, SBF was a dedicated utilitarian, mommy says. It's not fair to imprison him for life.
In her sentencing submission to the judge in the FTX trial, Barbara Fried argues that her son is just a misunderstood altruist, who doesn't deserve to go to prison for very long.
Excerpt:
One day, when he was about twelve, he popped out of his room to ask me a question about an argument made by Derek Parfit, a well-known moral philosopher. As it happens, I am quite familiar with the academic literature Parfit's article is a part of, having written extensively on related questions myself. His question revealed a depth of understanding and critical thinking that is not all that common even among people who think about these issues for a living. "What on earth are you reading?" I asked. The answer, it turned out, was he was working his way through the vast literature on utilitarianism, a strain of moral philosophy that argues that each of us has a strong ethical obligation to live so as to alleviate the suffering of those less fortunate than ourselves. The premises of utilitarianism obvio…
BasedBeffJezos has this girlfriend from Canada, you wouldn't know her
Pass the popcorn, please.
(nitter link)
Let rationalists put GMO bacteria in your mouth
They've been pumping this bio-hacking startup on the Orange Site (TM) for the past few months. Now they've got Siskind shilling for them.
Effective Obfuscation
Silicon Valley's "effective altruism" and "effective accelerationism" only give a thin philosophical veneer to the industry's same old impulses.
Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she's shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that's an excellent development. Molly's great.
Eliezer "8.5% more cheerful about OpenAI going forward" with Mira Murati at the helm
Not 7.5% or 8%. 8.5%. Numbers are important.
Let them Fight: EAs and neoreactionaries go at it
The reactionary futurism of Marc Andreessen.
Non-paywalled link: https://archive.ph/9Hihf
In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.
But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as the founder of Vox media and Vox's EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.
So have at it, Marc and Ezra. Fight. And maybe take each other out.
Let's walk through the uncanny valley with SBF so we can collapse some wave functions together
I finally got hold of the government exhibit that SBF's lawyers worried prosecutors were using just to "show that he’s some sort of crazy person" https://mollywhite.net/storage/sbf-trial/GX-39A.pdf #FTX #SBF #crypto #cryptocurrency
Rationalist check-list:
This email by SBF is basically one big malapropism.
Generative AI producing racially biased results? Stop worrying about it, say orange site tech bros. It's just reflecting reality.
Representative take:
If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".
Croatia shutters a libertarian paradise, orange site readers have a sad
"Yudkowsky is a genius and one of the best people in history."
[All non-sneerclub links below are archive.today links]
Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.
For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:
Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the un…
"Tech Right" scribe Richard Hanania promoted white supremacy for years under a pen name
Hanania is championed by tech moguls and a U.S. senator, but HuffPost found he used a pen name to become an important figure in the “alt-right.”
Excerpt:
Richard Hanania, a visiting scholar at the University of Texas, used the pen name “Richard Hoste” in the early 2010s to write articles where he identified himself as a “race realist.” He expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.
He's also a big eugenics supporter:
“There doesn’t seem to be a way to deal with low IQ breeding that doesn’t include coercion,” he wrote in a 2010 article for AlternativeRight .com. “Perhaps charities could be formed which paid those in the 70-85 range to be sterilized, but what to do with those below 70 who legally can’t even give consent and have a higher birthrate than the general population? In the same way we lock up criminals and the mentally ill in t…
Grimes pens new TREACLES anthem: "I wanna be software, upload my mind"
When she's not busy fantasizing about libertarian crypto-based UBI schemes or propping up the patriarchy, she's writing wearying odes to transhumanism and longtermism. Truly the voice of a generation.