Suddenly it dawned on me that I can plaster my CV with AI and win over actual competent people easy peasy
What were you doing between 2020 and 2023? I was working on my AI skillset. Nobody will even question me, because they have no fucking idea what it is themselves, only that they want it.
There are things that chatgpt does well, especially if you temper your expectations to the level of someone who has no valuable skills and is mostly an idiot.
Hi, I'm an idiot with no valuable skills, and I've found chatgpt to be very useful.
I've recently started learning game development in godot, and the process of figuring out why the code that chatgpt gives me doesn't work has taught me more about programming than any teacher ever accomplished back in high school.
Chatgpt is also an excellent therapist, and has helped me deal with mental breakdowns on multiple occasions, while they were happening. I can't find a real therapist's phone number, much less schedule an appointment.
I'm a real shitty writer, and I'm making a wiki of lore for a setting and ruleset for a tabletop RPG that I'll probably never get to actually play. ChatGPT is able to turn my inane ramblings into coherent wiki pages, most of the time.
If you set your expectations to what was advertised, then yeah, chatgpt is bullshit. Of course it was bullshit, and everyone who knew half of anything about anything called it. If you set realistic expectations, you'll get realistic results. Why is this so hard for people to get?
This is something I already mentioned previously. LLMs have no way of fact checking, no measure of truth or falsity built in. In the training process, every piece of text is effectively accepted as true. This is very different from how our minds work. When faced with a piece of text, we have many ways to deal with it, ranging from accepting it as-is, to going on the internet to verify it, to actually designing and conducting experiments to prove or disprove the claim. So, yeah, what ChatGPT outputs is probably bullshit.
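To make the point concrete, here's a toy sketch (my own illustration, not how any real LLM is implemented): a tiny bigram "language model" trained purely by next-token counting, the same objective shape LLMs use. The corpus and sentences are made up. Notice that the training loop never asks whether a sentence is true; a false statement shapes the model exactly as much as a true one.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy corpus: one true sentence, one false one.
# Training treats them identically -- there is no truth label anywhere.
corpus = [
    "the sky is blue".split(),   # true
    "the sky is green".split(),  # false, but trained on all the same
]

# "Train": count bigram transitions, the toy analogue of next-token prediction.
counts = defaultdict(Counter)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        counts[prev][nxt] += 1

def prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

def nll(sentence):
    # Negative log-likelihood: the only quantity this kind of training minimizes.
    return -sum(math.log(prob(p, n)) for p, n in zip(sentence, sentence[1:]))

# Both sentences come out with the same loss: the objective has no truth signal.
print(nll("the sky is blue".split()))
print(nll("the sky is green".split()))
```

Both calls report the same loss, because the objective only rewards matching the statistics of the training text, which is exactly why nothing in the setup distinguishes true claims from false ones.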
Of course, the solution would be to train ChatGPT on text labelled with some measure of truth. But LLMs need so much data that labelling it all would be extremely slow and expensive; the fast-moving world of AI would screech almost to a halt, which would be unacceptable to the investors.
Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.
This is actually a really nice insight into the quality of the output of current LLMs. It also says a lot about how they work and what goals their creators gave them.
They are not trained to produce factual information, but to talk about topics while sounding like a competent expert.
For LLM researchers this means that they need to figure out how to train LLMs for factuality as opposed to just sounding competent. But that is probably a lot easier said than done.