One indication that drawing the boundary is hard is how badly current LLMs hallucinate. An LLM almost never says “I don't know” or “I am unsure”, at least not in any meaningful way. Ask it about something that's widely known to be an unsolved problem and it'll tell you so, but ask it about anything obscure and it'll come up with some plausible-sounding bullshit.
And I think that's a failure to recognize the boundary between what it knows and what it doesn't.
I think this is a fair argument. Current AIs are quite bad at "knowing whether they know". I think it's likely that we can and will solve this problem, but I don't have any particularly compelling reason to believe so, and I agree that my argument fails if it never gets solved.