None of my acquaintances who have Wikipedian insider experience have much familiarity with the "Did you know" box. It seems like a niche within a niche that operates without serious input from people who care about the rest of the project.
"In The News" is apparently also an editor clique with its own weird dynamics, but it doesn't elevate as many weird tiny articles to the Main Page because the topics there have to be, you know, in the news.
Reflection (artificial intelligence) is dreck of a high order. It cites one arXiv post after another, along with marketing materials directly from OpenAI and Google themselves... How do the people who write this shit dress themselves in the morning without pissing into their own socks?
Counterpoint: I get to complain about whatever I want.
I could write a lengthy comment about how a website that is nominally editable by "anyone" is in practice a walled garden of acronym-spouting rules lawyers who will crush dissent by a thousand duck nibbles. I could elaborate upon that observation with an analogy to Masto reply guys and FOSS culture at large.
Or I could ban you for fun. I haven't decided yet. I'm kind of giddy from eating a plate of vegan nacho fries and a box of Junior Mints.
It goes without saying that the AI-risk and rationalist communities are not morally responsible for the Zizians any more than any movement is accountable for a deranged fringe.
When the mainstream of the movement is "ve zhould chust bomb all datacenters," maaaaaybe they are?
Yudkowsky was trying to teach people how to think better – by guarding against their cognitive biases, being rigorous in their assumptions and being willing to change their thinking.
No he wasn't.
In 2010 he started publishing Harry Potter and the Methods of Rationality, a 662,000-word fan fiction that turned the original books on their head. In it, instead of a childhood as a miserable orphan, Harry was raised by an Oxford professor of biochemistry and knows science as well as magic.
No, Hariezer Yudotter does not know science. He regurgitates the partial understanding and the outright misconceptions of his creator, who has read books but never had to pass an exam.
Her personal philosophy also draws heavily on a branch of thought called “decision theory”, which forms the intellectual spine of Miri’s research on AI risk.
This presumes that MIRI's "research on AI risk" actually exists, i.e., that their pitiful output can be called "research" in a meaningful sense.
“Ziz didn’t do the things she did because of decision theory,” a prominent rationalist told me. She used it “as a prop and a pretext, to justify a bunch of extreme conclusions she was reaching for regardless”.
Having plotted many graphs on "war" and "genocide" in my two books on violence, I closely tracked the definitions, and it's utterly clear that the war in Gaza is a war (e.g., the Uppsala Conflict Program, the gold standard, classifies the Gaza conflict as an "internal armed conflict," i.e., war, not "one-sided violence," i.e., genocide).
You guys! It's totes not genocide if it happens during a war!!
Let's see, it cites Scott Computers, a random "AI Safety Fundamentals" website, McKinsey (four times!), a random arXiv post...