
Posts: 5 · Comments: 259 · Joined: 2 yr. ago

  • Yay! Some nice

        Dont  Dead
        Open Inside

    Abundance Agenda horrifying strings of words.

    How to save liberalism (without being boring)

    Congratulations! You appear to be failing so far—on both counts!

  • I love that their stated "Pitch us!" suggestion box email address pitches@theargument.com doesn't appear to have any registered MX records.

    I wonder if this is an intentional shredder-meme situation (I doubt it), and if not, how long it will take them to notice. (I'm assuming that's the domain they wanted but haven't quite been able to buy yet, which is not very serious.)

    EDIT: Fixed already.

  • Because of course why have a data center when you can have an ecumenskatasphaira.

  • Pressing F for doubt, looks like a marketing scam to me.

  • the oldest elements of TESCREAL appear to date back to cyberpunk science fiction in the 1980s

    Nitpick: Cosmism was birthed in 19th century Russia, complete with "Death is the enemy", "Let's resurrect everyone" (using science), "Let's conquer the universe", and proto-eugenics of the "common project of humanity as transforming all into great men".

  • I attempted a point-by-point sneer, but there is a bit too much silliness and not enough cohesion to produce something readable.

    So focusing on "Post-critique":

    OP misspells the names of some of his "enemy" authors, in a way directly cribbed from Wikipedia, suggesting no real analysis.

    [...], such texts included Ricouer's Freud and Philosophy: An Essay on Interpretation, Wittgenstein's Philosophical Investigations and On Certainty, Merleau-Ponty's Phenomenology of Perception, Hannah Arendt's The Human Condition, and Kierkegaard's works [...]

    Ricouer should be Ricœur, or at the very least Ricoeur. (Incidentally, OP also gives a very poor summary of his work.)

    Complete and arbitrary marriage of epistemic post-critique and literary post-critique, which as far as I can see have nothing to do with each other beyond sharing a name, and in fact even seem a bit at odds with each other in how they relate to recontextualisation.

    I would say this is obviously bot vomit, but I have known humans to be this lazy and thickheaded.

  • PS: We also think that there existing a wiki page for the field that one is working in increases one's credibility to outsiders - i.e. if you tell someone that you're working in AI Control, and the only pages linked are from LessWrong and Arxiv, this might not be a good look.

    Aha so OP is just hoping no one will bother reading the sources listed on the article...

  • I think a big difference between Thiel and Musk is that Thiel views himself as an "intellectual" and derives prestige from "intellectualism". I don't believe for a minute he's genuinely Christian, but his wankery about an end-of-times eschatology of armageddon = big-left-government is a bit too confused to be purely cynical; I think sniffing his own farts feeds his ego.

    Of course a man who would promote open doping olympics isn't sober.

  • I'm under the impression that he essentially stated as much, though I'm a bit too lazy to go quote mining.

  • Oof on the part of the author though:

    Eliezer Yudkowsky: Nope.

    Algernoq (the blogpost author): I assume this is a "Nope, because of secret author evidence that justifies a one-word rebuttal" or a "Nope, you're wrong in several ways but I have higher-value things to do than retype the sequences". (Also, it's an honor; I share your goal but take a different road.) [...]

    Richard_Kennaway: What goal do you understand yourself to share with Eliezer, and what different road?

    Algernoq: I don't deserve to be arrogant here, not having done anything yet. The goal: I had a sister once, and will do what I can to end death. The road: I'm working as an engineer (and, on reflection, failing to optimize) instead of working on existential risk-reduction. My vision is to build realistic (non-nanotech) self-replicating robots to brute-force the problem of inadequate science funding. I know enough mechanical engineering but am a few years away from knowing enough computer science to do this.

  • And the extension of this to characters, and I don't actually remember at this point, if this exact way of phrasing it is original to me or not, is that you might think of a three dimensional character as one who contains at least two two-dimensional characters.

    Ahhh! No! I can't! Just... NO. Two stereotypes don't make a full person! (screams into a pillow)

  • Funnily enough it isn't even required by their purported Bayesian doctrine (which proves none of them do the math): you could simply "update forward" again based on the new evidence that the text is part-fictional.
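    The "update forward" move is just a second application of Bayes' rule, nothing more. A toy sketch (all numbers here are hypothetical, invented for illustration, not anyone's actual math):

```python
# Toy sequential Bayesian update (hypothetical numbers throughout).
# Hypothesis H: "the claim made in the text is true."

def update(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# First update: the text asserts the claim.
p = update(prior=0.5, likelihood_h=0.8, likelihood_not_h=0.4)

# "Update forward": new evidence arrives that the text is part-fictional,
# so its assertion is weaker evidence than assumed (likelihoods move
# closer together); just condition again with the prior set to the old
# posterior.
p = update(prior=p, likelihood_h=0.55, likelihood_not_h=0.5)
```

    Nothing in the doctrine forces you to throw the whole posterior away; the second update simply discounts how much the first piece of evidence was worth.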

  • Counter-theory: the now completely irrelevant search results and the idiotic summaries are a one-two punch combo that plunges the user into despair and makes them close the browser out of disgust.

  • Subjectively speaking:

    1. Pre-LLM summaries were, for the most part, actually short.
    2. They were more directly lifted from human-written sources. I vaguely remember lawsuits, or the threat of lawsuits, by newspapers over Google infoboxes and copyright infringement in pre-2019 days, but I couldn't find anything very conclusive with a quick search.
    3. They didn't have the sycophantic—hey look at me, I'm a genius—overly (and wrongly) detailed tone that the current batch has.

  • This is obviously a math olympiad gold medal performance, Fields medal worthy even!

  • It can't be that stupid, you haven't read the sequences hard enough.

  • I mean if you want to be exceedingly generous (I sadly have my moments), this is actually remarkably close to the "intentional acts" and "shit happens" distinction, in a perverse Rationalist way. ^^

  • But code that doesn’t crash isn’t necessarily code that works. And even for code made by humans, we sometimes do find out the hard way, and it can sometimes impact an arbitrarily large number of people.

  • Did you read any of what I wrote? I didn't say that human interactions can't be transactional, I quite clearly—at least I think—said that LLMs are not even transactional.


    EDIT:

    To clarify, and maybe to put it in terms closer to your interpretation:

    With humans: Indeed you should not have unrealistic expectations of workers in the service industry, but you should still treat them with human decency and respect. They are not there to fit your needs; they have their own self, which matters. They are more than meets the eye.

    With AI: While you should also not have unrealistic expectations of chatbots (which I would really recommend avoiding altogether), it's that where humans are more than meets the eye, chatbots are less. Inasmuch as you still choose to use them, by all means remain polite—for your own sake, rather than for the bot's. There's nothing below the surface.

    I don't personally believe that taking an overly transactional view of human interactions is desirable or healthy; I think it's more useful to frame it as respecting other people's boundaries and recognizing when you might be a nuisance (or when to be a nuisance, when there is enough at stake). Indeed, I think—not that this appears to be the case for you—that being overly transactional could lead you to believe that affection can be bought, or that you can be owed affection.

    And I especially don't think it's healthy to essentially be saying: "have the same expectations of chatbots and of service workers".


    TLDR:

    You should avoid catching feelings for service workers because they have their own world and wants, and bringing unsolicited advances makes you a nuisance; it's not just about protecting yourself, it's also about protecting them.

    You should never catch feelings for a chatbot, because it doesn't have its own world or wants; projecting feelings onto it is cutting yourself off from humanity. It is mostly about protecting yourself, though I would argue it also protects society (by keeping it healthy).