
  • take 2 minutes to think of precisely the information I need

    I can’t even put into words the full nonsense of this statement. How do you think this would work? This is not how learning works. This is not how research works. This is not how anything works.

    This part threw me as well. If you can think of it, why read for it? It didn’t make sense, so I stopped looking into this particular abyss until you pointed it out again.

    I think the only interpretation of what this person said that approaches some level of rationality on their part is essentially a form of confirmation bias. They aren’t thinking of information that is in the text; they’re thinking “I want this text to confirm X for me”, then they prompt and get what they want. LLMs are biased to be people-pleasers and will happily spin up whatever hallucinated tokens the user is fishing for. That’s my best guess (there’s a quick sketch of that loop at the end of this comment).

    That you didn’t think of the above just goes to show the failure of your feeble mind’s logic and reason to divine such a truth. Just kidding, sorta, in the sense that you can’t expect to understand an irrational thought process using rationality.

    But if it’s not that, I’m still thrown.
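
    For the curious, here’s a minimal sketch of that “prompt for what you want to hear” loop, assuming the official `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name and sample text are my own illustrations, not anything from the thread.

    ```python
    # Minimal sketch of leading vs. neutral prompting, assuming the
    # official `openai` package (pip install openai) and an
    # OPENAI_API_KEY set in the environment. Model name and sample
    # text are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    TEXT = "The study found no significant effect of the treatment."

    # Neutral framing: ask what the text actually says.
    neutral = f"Based only on this text, what did the study find?\n\n{TEXT}"

    # Leading framing: tell the model what you want the text to confirm.
    leading = (
        "This text confirms the treatment works, right? "
        f"Summarize how it shows that.\n\n{TEXT}"
    )

    for label, prompt in [("neutral", neutral), ("leading", leading)]:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---")
        print(resp.choices[0].message.content)
    ```

    A sufficiently people-pleasing model will tend to hand the leading version back exactly the confirmation it was fed, which is the bias in action.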

  • Children really shouldn’t be left with the impression that chatbots are some type of alternative person instead of ass-kissing Google replacements that occasionally get some code right.

    I agree! I’m thinking more of the case where a kid overhears what they think is a phone call when it’s actually someone being mean to Siri or whatever. I mean, there are more options than “be nice to digital entities” if we’re trying to teach children to be good humans, don’t get me wrong. I don’t give a shit about the non-feelings of the LLMs.

  • hey dawg, if you want to be anti-capitalist that’s great, but please interrogate yourself on who exactly is developing LLMs and who is running their PR campaigns before you start simping for AI and pretending that a hallucination engine is a helpful tool in general, let alone a tool for helping people understand complex topics where precision and nuance are needed and hallucinations definitely are not. Please be serious and for real.

  • Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you’re being observed by a child or someone else impressionable; it models good behaviour for if/when they ask a real person a question or for help. But you also shouldn’t be using those things anyhoo.

  • Got curious and wanted to see if I could beat the Atari 2600. Found an online emulator here.

    "Easiest" difficulty appears to be 8, followed by 1, then increasing in difficulty up to 7. I can beat 8, and the controls and visuals are too painful for me to try anything more than this.