
Recent AI failures are cracks in the magic

109 comments
  • There's magic?

    56
  • Yea, try talking to ChatGPT about things that you really know in detail. It will fail to show you the hidden, niche things (unless you mention them yourself), it will make up lots of stuff that you would not pick up on otherwise (and once you point it out, the bloody thing will "I knew that" you, sometimes even when you are the one who's wrong), and it is very shallow in its details. Sometimes it just repeats your question back to you as a well-written essay. And that's fine... it is still a miracle that it manages to be as reliable and entertaining as some random bullshitter you talk to in a bar, and it's good for brainstorming too.

    43
  • Good. It's dangerous to view AI as magic. I've had to debate way too many people who think LLMs are actually intelligent. It's dangerous to overestimate their capabilities, lest we use them for tasks they can't perform safely. They're very powerful, but the fact that they're non-deterministic and unpredictable means we need to very carefully design systems that rely on LLMs, with heavy guardrails.

    30
  • Those recent failures only come across as cracks for people who see AI as magic in the first place. What they're really cracks in is people's misperceptions about what AI can do.

    Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it's not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don't need to jump straight to that level to still get dramatic changes to society and the economy out of it.

    I get strong "everything is amazing and nobody is happy" vibes from this sort of thing.

    26
  • I hope it collapses in a fire and we can just keep our foss local models with incremental improvements, that way both techbros and artbros eat shit

    15
  • As I often mention when this subject pops up: while the current statistics-based generative models might see some application, I believe they'll eventually be replaced by better models that are actually aware of what they're generating, instead of simply reproducing patterns, with the current models coming to be seen as "that cute '20s toy".

    In text generation (currently dominated by LLMs), for example, this means that the main "bulk" of the model would do three things:

    • convert input tokens into sememes (units of meaning)
    • perform logic operations with the sememes
    • convert sememes back into tokens for the output
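    The three stages above can be sketched as a toy pipeline. Everything here is invented for illustration (the lexicon, the function names, the "logic operation"); no real system works this way, it just makes the tokens → sememes → logic → tokens idea concrete:

```python
# Hypothetical sketch of the proposed pipeline: tokens -> sememes ->
# logic operations -> tokens. The lexicon and all names are made up.

# Map surface tokens to sets of sememes (units of meaning).
LEXICON = {
    "bachelor": {"HUMAN", "MALE", "UNMARRIED"},
    "woman": {"HUMAN", "FEMALE", "ADULT"},
    "married": {"MARRIED"},
}

def tokens_to_sememes(tokens):
    """Stage 1: convert input tokens into sememes."""
    sememes = set()
    for tok in tokens:
        sememes |= LEXICON.get(tok, set())
    return sememes

def find_contradictions(sememes):
    """Stage 2: a stand-in 'logic operation' - flag sememe pairs
    that cannot both hold, which token-chaining alone can't see."""
    incompatible = [{"MARRIED", "UNMARRIED"}, {"MALE", "FEMALE"}]
    return [pair for pair in incompatible if pair <= sememes]

def sememes_to_tokens(sememes):
    """Stage 3: convert sememes back into surface tokens."""
    return sorted(tok for tok, s in LEXICON.items() if s <= sememes)

# "married bachelor" is fine as a token sequence, but the sememe
# layer catches the contradiction in meaning.
sems = tokens_to_sememes(["married", "bachelor"])
print(find_contradictions(sems))  # -> [{'MARRIED', 'UNMARRIED'}]
```

    The point of the sketch: a pure token-chainer happily emits "married bachelor", while a meaning-aware middle stage can reject it before generation.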

    Because, as it stands, LLMs are only chaining tokens. They might do this in an incredibly complex way, but that's it. That's obvious when you look at what LLM-fuelled bots output as "hallucinations": they aren't the result of some internal error; they're simply an undesired product of a model that sometimes outputs desirable stuff too.

    Swap "tokens" and "sememes" for "pixels" and "objects" and this probably holds true for image-generating models, too.

    Now, am I some sort of genius for noticing this? Probably not; I'm just some nobody with a chimp avatar, rambling in the Fediverse. Odds are that the people behind those tech giants noticed the same thing ages ago, and at least some of them reached the same conclusion: that better gen models need more awareness. If they are not doing this already, it means this shit would be painfully expensive to implement, so the "better models" I mentioned at the start will probably not appear any time soon.

    Most cracks will stay there; Google will hide them with an obnoxious band-aid, OpenAI will leave them in plain daylight, but the magic trick will still not be perfect, at least in the foreseeable future.

    And some might say "use MOAR processing power!", or "input MOAR training data!", in the hopes that the current approach will "magically" fix itself. For those, imagine yourself trying to drain the Atlantic with a bucket: does it really matter if you use more buckets, or larger buckets? Brute-forcing problems only goes so far.

    Just my two cents.

    13
  • There are quite a lot of AI-sceptics in this thread. If you compare the situation to 10 years ago, isn't it insane how far we've come since then?

    Image generation, video generation, self-driving cars (Level 4, so the driver doesn't need to pay attention at all times), and capable text comprehension and generation, whether used for translation, help with writing reports, or coding. And to top it all off, we have open source models that are at least in a similar ballpark to the closed ones, and those models can be run on consumer hardware.

    Obviously AI is not a solved problem yet and there are lots of shortcomings (especially with LLMs and logic where they completely fail for even simple problems) but the progress is astonishing.

    13
  • Trying to make real, productive use of generative AI models is where the cracks in the magic show.

    8
  • It's well worth reading the longer newsletter the above link quotes: https://www.wheresyoured.at/sam-altman-fried/

    I kinda agree we are probably cresting the peak of the hype cycle right now.

    2
  • "This post is for paid subscribers"

    (Also that page has a script I had to override just to copy and paste that)

    2