The main use case for LLMs is writing text nobody wanted to read. The other use case is summarizing text nobody wanted to read. Except they don’t do that either. The Australian Securities and…
ATTN: If you're coming into this thread to say, "The output of AI is bad because your prompts suck," I'm just proud that you managed to figure out how to use the internet at all. Good job, you!
They certainly do. For a while it was common to see AI-generated summaries under links to articles on Lemmy, so I got a feel for them. Seems to me you would not need any fancy artificial intelligence to do equally well: just take random excerpts, or maybe read every third sentence.
Could it be because a statistical relation isn't the same as a semantic one? No, I must be prompting it wrong. I'll just add "engineer" to my title and then everyone will take me seriously.
I had GPT-3.5 break down 6x 45-minute verbatim interviews into bulleted summaries and it did great. I even asked it to anonymize people’s names and it did that too. I did re-read the summaries to make sure there was no duplicated info or hallucinations, and it only needed a couple of corrections.
Is it just me, or is the linked article short on details & reaching a conclusion from only 2 examples? This is important & I need to hear more, & I’m generally biased against AI at this point, but the article isn’t doing enough to convince me.
You could use them to get an idea of what a text is about, and whether it's worth your reading time. In that situation it's fine if the AI makes shit up, as you aren't reading its output for the information itself anyway; and the distinction between a summary and a shortened version becomes moot.
However, here's the catch. If a text is long enough to warrant the question "should I spend my time reading this?", it should contain an introduction for that very purpose. In other words, if the text is well-written you don't need this sort of "Gemini/ChatGPT, tell me what this text is about" in the first place.
EDIT: I'm not addressing documents in this. My bad, I know. [In my defence I'm reading shit on a screen the size of an ant.]
The problem is not the LLMs, but what people are trying to do with them.
They are currently spoons, but people are desperately wishing they were katanas.
They work really well for soup, but they can't cut steak. Yet they're being hyped as super ninja steak knives, and people are getting pissed when they can't cut steak.
If you give them watery, soupy tasks they can do successfully, they can lighten your workload, as long as you're aware of what they are and aren't good at.
What people want LLMs to be able to do, i.e. "steak" tasks:
- write complex documents
- apply complex knowledge/rules to a situation
- write complex code and create entire programs from a vague description
What LLMs can currently do, i.e. "soup" tasks:
- check this document and fix all spelling, punctuation and grammatical errors
- summarise this paragraph as dot points
- write a python program that sorts my photographs into folders based on the year they were taken (see the sketch after this list)
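For what it's worth, that last one really is spoon-sized. Here's a minimal sketch of the kind of script an LLM will happily produce for it, assuming Pillow is installed (`pip install Pillow`); the `photos` and `sorted` folder names are just placeholders, and it falls back to the file's modification time when a photo has no EXIF date:

```python
import shutil
from datetime import datetime
from pathlib import Path

from PIL import Image

SOURCE = Path("photos")  # placeholder: folder with the unsorted photos
DEST = Path("sorted")    # placeholder: where the year folders get created

def year_taken(photo: Path) -> str:
    """Year the photo was taken: EXIF DateTime if present, else file mtime."""
    try:
        exif = Image.open(photo).getexif()
        stamp = exif.get(306)  # tag 306 = DateTime, "YYYY:MM:DD HH:MM:SS"
        if stamp:
            return str(stamp)[:4]
    except OSError:
        pass  # unreadable or not an image; fall through to mtime
    return str(datetime.fromtimestamp(photo.stat().st_mtime).year)

for photo in SOURCE.glob("*.jpg"):
    folder = DEST / year_taken(photo)
    folder.mkdir(parents=True, exist_ok=True)
    shutil.move(str(photo), str(folder / photo.name))
```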
Half of Lemmy is hyping katanas, the other half is yelling "Why won't my spoon cut this steak?!! AI is so dumb!!!"
Update: wow, the pure vitriol pouring out of the replies is just stunning. Seems there are a lot of you out there who have, in one way or another, tied your ego very strongly to either the success or failure of AI.
Take a step back, friends, and go outside for a while.
Ok? I don't have another human available to skim a shitload of documents for me to find the answers I need, and I don't have time to do it myself. AI is my best option.
I keep having to remind people: ChatGPT is only as good as the prompt you give it. I am astounded at the amount of garbage that some people get, but I also know that it's generally because their prompts are garbage.
Sometimes its output sucks even with good input. But more likely, if the output is bad, the input was bad.