
  • Gah. I've been nerd sniped into wanting to explain what LessWrong gets wrong.

  • There's a "critique of functional decision theory"... which turns out to be a blog post on LessWrong... by "wdmacaskill"? That MacAskill?!

  • If you want to read Yudkowsky's explanation for why he doesn't spend more effort on academia, it's here.

    spoiler alert: the grapes were totally sour

  • If you go over to LessWrong, you can get some ideas of what is possible

  • You might think that this review of Yud's glowfic is an occasion for a "read a second book" response:

    Yudkowsky is good at writing intelligent characters in a specific way that I haven't seen anyone else do as well.

    But actually, the word "intelligent" is being used here in a specialized sense to mean "insufferable".

    Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.

    Ah, the book that isn't actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn't sufficiently self-aware to know that's what she was writing.

  • Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.

    I'm trying, but I can't not donate any harder!

    The most popular LessWrong posts, SSC posts or books like HPMoR are usually people's first exposure to core rationality ideas and concerns about AI existential risk.

    Unironically the better choice: https://archiveofourown.org/donate

  • The post:

    I think Eliezer Yudkowsky & many posts on LessWrong are failing at keeping things concise and to the point.

    The replies: "Kolmogorov complexity", "Pareto frontier", "reference class".

  • The lead-in to that is even "better":

    This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We've never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

    "The reason for optimism is that we can cozy up to fascists!"

  • The collapse of FTX also caused a reduction in traffic and activity of practically everything Effective Altruism-adjacent

    Uh-huh.

  • An interesting thing came through the arXiv-o-tube this evening: "The Illusion-Illusion: Vision Language Models See Illusions Where There are None".

    Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something "really is" and how something "appears to be", and this gap helps us understand the mental processing that leads to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perception fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.
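    To make the setup concrete, the probe is roughly: show a vision-language model a picture that merely resembles a famous illusion and see whether it insists the illusion is present anyway. Here's a minimal sketch of that idea (not the paper's code; the model name, prompt, image file, and the "fooled" heuristic below are all my own assumptions), using an OpenAI-style chat API with image input:

    # Hypothetical sketch of an "illusion-illusion" probe. The model name,
    # prompt, image path, and scoring heuristic are illustrative assumptions,
    # not taken from the paper.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def ask_about_image(path: str, question: str) -> str:
        """Send a local image plus a question to a vision-language model."""
        with open(path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("ascii")
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: any VLM with image input would do
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    # An "illusion-illusion": two circles that really are different sizes,
    # arranged to superficially resemble the Ebbinghaus illusion.
    answer = ask_about_image(
        "different_sized_circles.png",
        "Are the two orange circles the same size, or different sizes?",
    )

    # Crude check: a model pattern-matching on the famous illusion will tend
    # to answer "same size" even though these circles genuinely differ.
    print("fooled by the illusion-illusion"
          if "same size" in answer.lower()
          else "answered based on the actual image")

    The point of the inversion is that any correct answer here requires looking at the image rather than retrieving "ah yes, the Ebbinghaus illusion, they're actually the same size" from training data.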

  • Governments have criminalized the practice of managing your own health.

    I have the feeling that they're not a British trans person talking about the NHS, or an American in a red state panicking about dying of sepsis because the baby they wanted so badly miscarried.

  • I must have been living under a rock/a different kind of terminally online, because I had only ever heard of Honey through Dan Olson's riposte to Doug Walker's The Wall, which describes Doug Walker delivering "an uncomfortably over-acted ad for online data harvesting scam Honey" (35:43).

  • I saw this floating around fedi (sorry, don't have the link at hand right now) and found it an interesting read, partly because it helped codify why editing Wikipedia is not the hobby for me. Even when I'm covering basic, established material, I'm always tempted to introduce new terminology that I think is an improvement, or to highlight an aspect of the history that I feel is underappreciated, or just to make a joke. My passion project — apart from the increasingly deranged fanfiction, of course — would be something more like filling in the gaps in open-access textbook coverage.

  • ruh roh

  • "I'm extremely left-leaning, but I do have concerns about the (((globalists))) in finance"

  • As a person whose job has involved teaching undergrads, I can say that the ones who are honestly puzzled are helpful, but the ones who are confidently wrong are exasperating for the teacher and bad for their classmates.

  • I am too tired to put up with people complaining about "angies" and "woke lingo" while trying to excuse their eugenicist drivel with claims of being "extremely left leaning". Please enjoy your trip to the scenic TechTakes egress.

  • "If you don't know the subject, you can't tell if the summary is good" is a basic lesson that so many people refuse to learn.

  • From the replies:

    In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere, messes up, the company and authorities theoretically should be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus how often it’s confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.

    Also suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that they were not infringing on the other company’s IP, but you don’t have that here. What if someone gets hurt? Do you really want to make the case that you just gave ChatGPT a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs, are they going to include listening to ChatGPT in them? If you do, then you need to make sure that OpenAI holds their program to the same documentation standards and certifications that you have, and I don’t think they want to tangle with the FDA at the moment.

    There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.

    And a good sneer:

    With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.