17 comments
  • No shit, that's how LLMs work.

    • This gets me often: I keep finding papers and studies examining things I thought were well understood, which ends up revealing corporate hype that had passed me by.

      So it turns out that letting an LLM self-prompt for a while before responding makes it a bit tighter in some ways, but not self-aware, huh? I have learned that this was a thing people were unclear about, and nothing else.

  • I'm a bit torn on this. On one hand: obviously LLMs do this, since they're essentially just huge pattern-recognition and prediction machines, and basically anyone probing them with novel, complex problems has already made that exact observation. On the other hand: a lot of everyday things we humans do are not that dissimilar from recognizing a pattern and recalling a solution, and it feels like doing this step well is a reasonable intermediate step towards AGI, and not as hugely far off as this article makes it out to be.
