Posts: 0 · Comments: 113 · Joined: 2 yr. ago

  • It's a bit tangential, but using ChatGPT to write a press release and then being unable to answer any critical questions about it is a little like following an app up a mountain in shorts and flip-flops without checking the weather first, and then being unable to climb back down once the inevitable thunderstorm hits.

  • A while ago, I uploaded a .json file to a chatbot (MS Copilot, I believe). It was a perfectly valid .json from which I had removed a single delimiter character. The chatbot was unable to identify the problem. Instead, it claimed to have found various other "errors" in the file. It would be interesting to know whether other models (such as GPT-5) would perform any better here, since to me (as a layperson) this sounds somewhat similar to the letter-counting problem.
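
    For comparison, here's a minimal sketch of what a deterministic parser does with that kind of file (the snippet is a hypothetical stand-in for my upload, assuming the removed character was a comma):

    ```python
    import json

    # Hypothetical stand-in for the uploaded file: valid JSON
    # with one comma deliberately removed (after "test").
    broken = '{"name": "test" "version": 1}'

    try:
        json.loads(broken)
    except json.JSONDecodeError as e:
        # The parser names the missing delimiter and its exact position:
        # Expecting ',' delimiter: line 1 column 17 (char 16)
        print(f"{e.msg}: line {e.lineno} column {e.colno} (char {e.pos})")
    ```

    A conventional parser pinpoints the exact line and column of the first error, which makes the chatbot's invented "errors" all the more telling.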

  • Turns out I had overlooked the fact that he was specifically seeking to replace chloride rather than sodium, for whatever reason (I'm not a medical professional). If Google Search (not Google's AI) is to be believed, this doesn't seem to be a very common idea, though. If people turn to chatbots for questions like these (for which very few actual resources may be available), the danger could be even higher, I guess, especially if the chatbots have been trained to avoid disappointing responses.

  • At first glance, this also looks like a case where a chatbot confirmed a person's biases. Apparently, this patient believed that eliminating table salt from his diet would make him healthier (which, to my understanding, generally isn't true; consuming too little or no salt can be even more dangerous than consuming too much). He then went looking for a "perfect" replacement, which, to my knowledge, doesn't exist. ChatGPT suggested sodium bromide, possibly while mentioning that it would only be suitable for purposes such as cleaning (not for food). I guess the patient is at least partly to blame here. Nevertheless, ChatGPT seems to have supported his nonsensical idea more strongly than an internet search would have, which in my view is one of the more dangerous flaws of current-day chatbots.

    Edit: To clarify, I absolutely hate chatbots, especially the idea that they could somehow replace search engines. Yet, regarding the example above, some AI bros would probably argue that the chatbot wasn't entirely in the wrong, provided it never actually suggested adding sodium bromide to food. Nevertheless, I would still assume that the chatbot's sycophantic communication style significantly exacerbated the problem at hand.

  • Maybe my analogy is a little too silly and too obvious, but I think wanting a humanoid robot (rather than one designed in whatever way best suits the purpose) is somewhat akin to wanting a mechanical horse rather than a car. On the one hand, this may sound like a reasonable idea if saddles, carriages, stables and blacksmiths are already available. On the other hand, the mechanical horse is going to be a lot slower than a car and a lot more uncomfortable to ride. It is also still going to need charging stations or gas stations (since it won't eat oats) and dedicated repair shops (since veterinarians won't be able to fix it). And its technology might be a lot more complex and difficult to repair than that of a car (especially in the early models).

  • I guess both chatbots and humanoid robots are basically about the fantasy of automating human labor away effortlessly. In the past, most successful automation probably required a strong understanding not just of the tech but also of the tasks themselves, and often a complete overhaul of processes, internal structures etc. In the end, there was usually still a need for human labor, just with different skill sets than before. Many people in the C-suite aren't very good at handling these challenges, even if they want to make everyone believe otherwise. This is probably why the promise of reaping all the rewards of automation without having to do the work sounds compelling to many of them.

  • the reason is they’re selling sci fi dreams of robot servants even though these dreams are lies.

    We've seen the same with chatbots, I guess. Objectively speaking, they perform worse at most tasks than regular search engines, databases, dedicated machine-learning-based tools etc. However, they sound human (like overly sycophantic human office workers, to be more precise), hence the hype.

  • It's also very difficult to get search results in English when English isn't set as your first language in Google, even if your entire search term is in English. Even "Advanced Search" doesn't seem to work reliably here, and of course, it always brings up the AI overview first, even if you clicked Advanced Search from the "Web" tab.
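
    One partial workaround, assuming Google still honors its long-standing URL parameters (I can't vouch for every region): appending hl=en (interface language) and lr=lang_en (restrict results to English) to the search URL, e.g. https://www.google.com/search?q=example&hl=en&lr=lang_en. Adding udm=14 supposedly jumps straight to the "Web" tab without the AI overview, though any of this may stop working at any time.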

  • I guess the question here really boils down to: Can (less-than-perfect) capitalism solve this problem somehow (by allowing better solutions to prevail), or is it bound to fail due to the now-insurmountable market power of existing players?

  • Somehow this makes me think of the times before modern food safety regulations, when adulteration with substances such as formaldehyde or arsenic was apparently common: https://pmc.ncbi.nlm.nih.gov/articles/PMC7323515/

    We may be in a similar age regarding information now. Of course, this has always been a problem with the internet, but I would argue that AI (and the way oligopolistic companies are shoving it into everything) is making it infinitely worse.

  • If I'm not mistaken, even in pre-LLM days, Google had some kind of automated summaries ("featured snippets," I believe) that were sometimes wrong. Those bothered me less. The AI hallucinations appear to be on a whole new level of wrong (or is this just my personal impression? Are there any statistics on this?).

  • Most searchers don’t click on anything else if there’s an AI overview — only 8% click on any other search result. It’s 15% if there isn’t an AI summary.

    I can't get over that. An oligopolistic company imposes a source on its users that is very likely either hallucinating or plagiarizing or both, and most people seem to eat it up (out of convenience or naiveté, I assume).

  • Maybe we humans possess a somewhat hardwired tendency to "bond" with a counterpart that acts like this. In the past, this was not a huge problem because only other humans were capable of interacting in this way, but that is now changing. However, I suppose this needs to be researched more systematically (beyond what is already known about the ELIZA effect etc.).

  • At first glance it seems impossible once N≥2, because as soon as you bring a boat across to the right bank, one of you must pilot a boat back—leaving a boat behind on the wrong side.

    In this sentence, the bot appears to sort of "get" it (not entirely, though; the wording is weird). From there, however, it definitely goes downhill...