You have activated the Falsifiability trap card - LLMs as tutors = lol
User @theneverfox@pawb.social posed a challenge while trying to argue that... wait, what?
> The biggest, if rarely used, use case [of LLMs] is education - they’re an infinitely patient tutor that can explain things in many ways and give you endless examples.
Lol what.
> Try reading something like Djikstra’s algorithm on Wikipedia, then ask one to explain it to you. You can ask for a theme, ask it to explain like you’re 5, or provide an example to check if you understood and have it correct any mistakes
> It’s fantastic for technical or dry topics, if you know how to phrase things you can get quick lessons tailored to be entertaining and understandable for you personally. And of course, you can ask follow up questions
Well, usually AI claims are just unverifiable gibberish but this? Dijkstra's algorithm is high school material. This is a verifiable claim. And oh boi, do AI claims have a long and storied history of not standing up to scrutiny...

I sincerely didn't expect it to take so little time, but, well, this is patently wrong. One, this is not Dijkstra's algorithm. Two, always picking the shortest road is obviously incorrect, see this map:

Green path is the shortest to candy. Red path is what you get by always following the shortest road.
Dijkstra's algorithm picks the closest node seen so far and tries to make paths better by expanding from there; the idea is that if some node is far away, then paths through it are going to be long, so we don't need to look at them until there's no other option. In this case it'll immediately see A at distance 1 and Candy at distance 2, expand from A (since it's closer) to get B at distance 2; after that it will look at B and Candy, see that it cannot improve from there, and terminate.
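To make the difference concrete, here's a quick Python sketch of both ChatGPT's greedy "always take the shortest road" and actual Dijkstra, run on the map above. The weight of the B-Candy road isn't stated in the text, so the 2 below is my assumption; any positive weight makes the same point.

```python
import heapq

# The candy map: Start-A costs 1, A-B costs 1, the direct Start-Candy
# road costs 2. The B-Candy weight of 2 is assumed (see above).
graph = {
    "Start": {"A": 1, "Candy": 2},
    "A":     {"Start": 1, "B": 1},
    "B":     {"A": 1, "Candy": 2},
    "Candy": {"Start": 2, "B": 2},
}

def greedy_shortest_road(graph, start, goal):
    """ChatGPT's "algorithm": at every node, take the cheapest road to
    somewhere you haven't been yet. Not Dijkstra, and not correct."""
    node, cost, visited = start, 0, {start}
    while node != goal:
        options = {v: w for v, w in graph[node].items() if v not in visited}
        if not options:
            return None  # dead end, no candy for you
        node = min(options, key=options.get)
        cost += options[node]
        visited.add(node)
    return cost

def dijkstra(graph, start):
    """Actual Dijkstra: always expand the closest node seen so far,
    and never revisit a node once it's finalized."""
    dist = {v: float("inf") for v in graph}  # any bound above the sum of all weights works too
    dist[start] = 0
    heap, done = [(0, start)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, w in graph[u].items():
            if v not in done and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(greedy_shortest_road(graph, "Start", "Candy"))  # 4 -- the red path
print(dijkstra(graph, "Start")["Candy"])              # 2 -- the green path
```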
Let's see what ChatGPT will tell me when I confront it with this counterexample to its stupid algorithm.

It fucking doubles down! It admits it was wrong and then gives the same stupid algorithm, just with "map" changed to "maze" and "candy" changed to "toy"! And I wanted candy!
Okay, maybe saying "like I'm 5" was wrong, let's try to recreate something closer to what @theneverfox wanted.

Okay, at least it's not incorrect; there are no lies in this, although I would nitpick two things:
- It doesn't state what the actual goal of the algorithm is. It says "fundamental method used in computer science for finding the shortest paths between nodes in a graph", but that's not precise; it finds the shortest paths from one node to all other nodes, whereas the wording could be taken to imply it's between two nodes.
- "infinity (or a very large number)" is very weird without explanation. Dijkstra doesn't work if you put "a very large number", you have to make sure it's larger than any possible path length (for example, sum of all weights of edges would work).
Those are rather pedantic and I can excuse them. The bigger issue is that it doesn't really tell you anything you wouldn't get from the Wikipedia article? It lifts sentences from there, changing the structure, but it doesn't make anything clearer. Actually, Wikipedia has an example in the text describing the "Iterative Process" steps, but ChatGPT threw it away. What's the value here, exactly?
Let's try asking something non-obvious that I didn't get at first when learning Dijkstra:

What?! This is nonsense! Gibberish! Bollocks!
It does really well at first, no wonder, since the first sentences are regurgitated from Wikipedia. Then it gives a frankly idiotic example of a two-vertex graph where Dijkstra does give the correct answer, since it's trivial and there's only one edge. But it's really easy to come up with an actual counterexample, so I asked for it directly, and got... Jesus Christ. If images are better for you, here is the graph described by ChudGPT:

Dijkstra here correctly picks the shortest path to C:
- Distances = { A: 0, B: ∞, C: ∞ }, active = [A at 0], pick edges from A
- Distances = { A: 0, B: 1, C: 4 }, active = [B at 1, C at 4], pick edges from B
- Distances = { A: 0, B: 1, C: -1 }, active = [C at -1], pick edges from C
- Distances = { A: 0, B: 1, C: -1 }, end.
This is not a counterexample to Dijkstra. ChatGPT even says that! Its step 3 clearly finds the distance -1 to C! And then it says the actual shortest path is 4! A fucking 7-year-old can see this is wrong!
It's very easy to change this into an actual counterexample as well: just replace the weight on A->B with 5. The shortest path is then 3, but because of how Dijkstra works it will visit C first, save the distance of 4, and then never revisit C. This is the actual reason Dijkstra doesn't work with negative edge weights.
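You can watch this happen in code. Here's the same visit-once Dijkstra from the earlier sketch (redefined so this snippet runs on its own), on both graphs with the weights exactly as described:

```python
import heapq

def dijkstra(graph, start):
    # Same as the earlier sketch; the crucial property for this example is
    # that a node popped from the queue is final and never revisited.
    dist = {v: float("inf") for v in graph}
    dist[start] = 0
    heap, done = [(0, start)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, w in graph[u].items():
            if v not in done and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# ChatGPT's "counterexample": A->B = 1, B->C = -2, A->C = 4.
gpt_graph = {"A": {"B": 1, "C": 4}, "B": {"C": -2}, "C": {}}
# An actual counterexample: bump A->B up to 5.
real_graph = {"A": {"B": 5, "C": 4}, "B": {"C": -2}, "C": {}}

print(dijkstra(gpt_graph, "A")["C"])   # -1: correct, Dijkstra handles this graph just fine
print(dijkstra(real_graph, "A")["C"])  # 4: wrong, the shortest path A->B->C costs 5 + (-2) = 3
```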
It fails miserably to explain the basics, it fails spectacularly to answer a non-obvious question an actual student just introduced to Dijkstra might have, and I left my spécialité for the end:

The more computer-science-savvy among you are surely already laughing. ChatGPT just solved P=NP! With Floyd-Warshall!
Again, it starts off good -- Dijkstra indeed cannot find longest paths. The next sentence is technically correct, though rather hollow.
"Finding the longest path in a graph is a more complex problem and typically involves different algorithms or approaches." Ye, that's correct, it's extremely complex -- it's what we call an NP-complete problem 1! It's currently unknown whether these problems are solvable in reasonable time. It then gives the "negate the weights" approach and correctly remarks it doesn't actually work, and then it absolutely clowns itself by saying you can solve it with Floyd-Warshall. You can't. That's just plain dumb. How would it?
I'm not going to delve deeper into this. This is a bullshit generator that has a passing knowledge of the Wikipedia article (since it trained on it), but shows absolutely no understanding of the topic it covers. It can repeat the basic sentences it found, but it cannot apply them in any new context, it cannot provide sensible examples, and it stumbles over itself when trying to explain a graph with three fucking vertices. If it were a student on an oral exam for Intro to Algorithms, I would fail it.
And as a teacher? Jesus fucking Christ, if a guy stumbled into a classroom to teach first-year students, told them that you can find shortest paths by greedily choosing the cheapest edge, then gave a counter-counterexample to Dijkstra, and finally said that you can solve Longest Path in O(n³), he'd better also be fucking drunk, 'cause otherwise there'd be no excuse! That's malpractice!
None of this is surprising, ChudGPT is just spicy autocomplete after all, but apparently it bears laying out. The work of an educator, especially in higher education, requires flexibility of mind and deep understanding of the covered topics. You can't explain something in simple words if you don't actually get it, and you can't provide students with examples and angles that speak to them and help in their learning process if you don't understand the topic from all those angles yourself. LLMs can't do that, fundamentally and by design.
> It’s fantastic for technical or dry topics
Give me a fucking break.
[1] Pedantically, it's NP-hard; the decision version is NP-complete. This footnote is to prevent some smartass from correcting me in the comments...