
  • I tried doing what I occasionally do, browsing LessWrong for something to point and laugh at, but opening it in a Firefox private window, I can only scroll down a little way before the page breaks. The bottom post appears and disappears, and it scrolls no further.

  • what are the requirements for consciousness

    10 coffee

    20 goto 10

  • The video title "A Pragmatist's Take on Small Talk" would be much better if it were William James giving advice on navigating the social niceties. Step 1: this hat.

  • People simping for Microsoft makes my '90s-kid brain boggle.

  • To date, the largest working nuclear reactor constructed entirely of cheese is the 160 MWe Unit 1 reactor of the French nuclear plant École nationale de technologie supérieure (ENTS).

    "That's it! Gromit, we'll make the reactor out of cheese!"

  • All it takes is one frivolous legal threat to shut down a small website by putting it on the hook for legal costs it can't afford. Facebook gets away with awful shit not because of the law, but because they are stupidly rich. Change the law, and they will still be stupidly rich. Indeed, the "sunset Section 230" path will make it open season for Facebook's lobbyists to pay for the replacement law that they want. I do not see that leading anywhere good.

  • I have to wonder, though, if the fact that Google is generating this text themselves rather than just showing text from other sources means they might actually have to face some consequences in cases where the information they provide ends up hurting people.

    Darn good question. Of course, since Congress is thirsty to destroy Section 230 in the delusional belief that this will make Google and Facebook behave without hurting small websites that lack massive legal departments (cough, fedi instances)…

  • I thought about assembling a kind of anti-Sequence reading list about quantum mechanics, a view from outside the cult shit that the Sequences try to drown you in, with their bad history, caricatured philosophy, and mathematics that ranges from turgid to incorrect. The trouble is that a better understanding is not written all in one place, and even the good papers don't necessarily convey the "everything Yud taught you is wrong" emotional hook. The literature does not lead to cracking many smiles, though I did appreciate Adrian Kent's eel remark in this book review.

    Some papers that have a bit more zing than average:

    And, if you really want to dive into the waters and open your eyes below the surface:

  • If natively fluent speakers of the English language use "beg the question" in the "wrong" way time and time again, finding the "incorrect" meaning a natural fit with their understanding of the verb "to beg", then the "incorrect" meaning may well be the one we should roll with.

  • Hmm, a xitter link, I guess I'll take a moment to open that in a private tab in case it's passingly amusing...

    To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties—

    OK, you have my attention now.

    To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties—

    During my twenties in Silicon Valley, I ran among elite tech/AI circles through the community house scene. I have seen some troubling things around social circles of early OpenAI employees, their friends, and adjacent entrepreneurs, which I have not previously spoken about publicly.

    It is not my place to speak as to why Jan Leike and the superalignment team resigned. I have no idea why and cannot make any claims. However, I do believe my cultural observations of the SF AI scene are more broadly relevant to the AI industry.

    I don't think events like the consensual non-consensual (cnc) sex parties and heavy LSD use of some elite AI researchers have been good for women. They create a climate that can be very bad for female AI researchers, with broader implications relevant to X-risk and AGI safety. I believe they are somewhat emblematic of broader problems: a coercive climate that normalizes recklessness and crossing boundaries, which we are seeing play out more broadly in the industry today. Move fast and break things, applied to people.

    There is nothing wrong imo with sex parties and heavy LSD use in theory, but combined with the shadow of 100B+ interest groups, they lead to some of the most coercive and fucked up social dynamics that I have ever seen. The climate was like a fratty LSD version of 2008 Wall Street bankers, which bodes ill for AI safety.

    Women are like canaries in the coal mine. They are often the first to realize that something has gone horribly wrong, and to smell the cultural carbon monoxide in the air. For many women, Silicon Valley can be like Westworld, where violence is pay-to-play.

    I have seen people repeatedly get shut down for pointing out these problems. Once, when trying to point out these problems, I had three OpenAI and Anthropic researchers debate, in a Google document, whether I was mentally ill. I have no history of mental illness, and this incident stuck with me as an example of blind spots and groupthink.

    I am not writing this on behalf of any interest group. Historically, many of the OpenAI-adjacent shenanigans have been blamed on groups with weaker PR teams, like Effective Altruism and rationalists. I actually feel bad for the latter two groups for taking so many undeserved hits. There are good and bad apples in every faction. There are so many brilliant, kind, amazing people at OpenAI, and there are so many brilliant, kind, and amazing people in Anthropic/EA/Google/[insert whatever group]. I'm agnostic. My one loyalty is to the respect and dignity of human life.

    I'm not under an NDA. I never worked for OpenAI. I just observed the surrounding AI culture through the community house scene in SF, as a fly on the wall, hearing insider information and backroom deals, befriending dozens of women and allies and well-meaning parties, and watching many of them get burned. It's likely these problems are not really specific to OpenAI but symptomatic of a much deeper rot in the Valley. I wish I could say more, but probably shouldn't.

    I will not pretend that my time among these circles didn’t do damage. I wish that 55% of my brain was not devoted to strategizing about the survival of me and of my friends. I would like to devote my brain completely and totally to AI research— finding the first principles of visual circuits, and collecting maximally activating images of CLIP SAEs to send to my collaborators for publication.
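
    (A side note on that last bit: "collecting maximally activating images" for a sparse-autoencoder feature just means running a pile of images through the model, reading off one SAE feature's activation per image, and keeping the top scorers. A toy sketch of the idea follows; the stand-in clip_activations and sae_encode functions and every shape in it are illustrative assumptions, not anyone's actual pipeline.

        import numpy as np

        rng = np.random.default_rng(0)

        def clip_activations(image):
            # Stand-in for a CLIP vision-encoder forward pass: returns a 512-d vector.
            return rng.standard_normal(512)

        def sae_encode(act, W, b):
            # One-layer sparse autoencoder encoder: ReLU(W @ act + b) -> sparse features.
            return np.maximum(W @ act + b, 0.0)

        images = [rng.random((224, 224, 3)) for _ in range(100)]  # toy "dataset"
        W = rng.standard_normal((4096, 512)) * 0.02               # 4096 hypothetical SAE features
        b = np.full(4096, -0.1)

        feature, k = 123, 9  # feature to interpret; how many top images to keep
        scores = [sae_encode(clip_activations(img), W, b)[feature] for img in images]
        top_k = np.argsort(scores)[-k:][::-1]  # indices of the maximally activating images
        print("top images for feature", feature, ":", top_k.tolist())

    The top-k images are what you would then eyeball, or send to collaborators, to guess what the feature represents.)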

  • Kludging an "objective reduction" process into the dynamics is throwing out quantum mechanics and replacing it with something else. And because Orch-OR is not quantum mechanics, every observation that a quantum effect might be biologically important somewhere is irrelevant. Orch-OR isn't "quantum biology", it's pixie-dust biology.
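
    To spell out the "something else": textbook quantum mechanics evolves the state unitarily under the Schrödinger equation, while Penrose-style objective reduction bolts on a non-unitary collapse step that no Hamiltonian generates. In rough LaTeX shorthand (the τ ≈ ħ/E_G collapse timescale is Penrose's usual formulation, not something stated in this comment):

        i\hbar \, \partial_t |\psi\rangle = \hat{H} |\psi\rangle   % unitary evolution: all of standard QM
        \tau \approx \hbar / E_G                                   % Orch-OR's postulated collapse timescale

    where E_G is the gravitational self-energy of the difference between the superposed mass distributions; after roughly time τ, the superposition is supposed to reduce to one branch. That reduction is the step standard quantum mechanics does not contain.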

  • Who needs usernames when you have "context clues" instead? :-P