Am I the only one getting agitated by the word AI (Artificial Intelligence)?
Real AI does not exist yet;
atm we only have LLMs (Large Language Models),
which do not think on their own
but pass Turing tests
(fool humans into thinking that they can think).
Imo AI is just a marketing buzzword
created by rich capitalistic a-holes
who already invested in LLM stocks
and now are looking for a profit.
The word "AI" has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer "thinking on its own"?
I'm more infuriated by people like you who seem to think that the term AI means a conscious/sentient device. Artificial intelligence is a field of computer science dating back to the very beginnings of the discipline. LLMs are AI, Chess engines are AI, video game enemies are AI. What you're describing is AGI or artificial general intelligence. A program that can exceed its training and improve itself without oversight. That doesn't exist yet. AI definitely does.
I’d like to offer a different perspective. I’m a grey beard who remembers the AI Winter, when the term had so over-promised and under-delivered (think expert systems and some of the work of Minsky) that using it was a guarantee your project would not be funded. That’s when terms like “machine learning” and “intelligent systems” started to come into fashion.
The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.
What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.
And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.
My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.
Maybe just accept it as shorthand for what it really means.
Some examples:
We say Kleenex instead of facial tissue, Band-Aid instead of bandage, I say that Siri butchered my "ducking" text again when I know autocorrect is technically separate.
We also say, "hang up on someone" when there is no such thing anymore
Hell, we say "cloud" when we really mean "someone's server farm"
Don't get me started on "software as a service" too ...a bullshit fancy name for a subscription website that actually has some utility.
I'll be direct: your text reads like you only just discovered AI. We have much more than "only LLMs", regardless of whether or not these other models pass Turing tests. If you feel disgruntled, then imagine what people who've been researching AI since the 70s feel like...
Yes, but I'm more annoyed with posts and conversations about it that are like this one. People on Lemmy swear they hate how uninformed and stupid the average person is when it comes to AI, they hate the click bait articles etc etc. Aaand then there's at least 5 different posts about it on the front page every. single. day., with all the comments saying exactly the same thing they said the day before, which is:
"Users are idiots for trusting a tech company, it's not Google's responsibility to keep your private data safe." "No one understands what 'AI' actually means except me." "Every middle-America dad, grandma and 10 year old should have their very own self hosted xyz whatever LLM, and they're morons if they don't and they deserve to have their data leaked." And can't forget the ubiquitous arguments about what "copyright infringement" means when all the comments are actually in agreement, but they still just keep repeating themselves over and over.
You can still check if it's a real human by doing something really stupid, or by speaking or writing gibberish. Almost every AI will try to reply to it anyway, or say "Sorry, I couldn't understand that." You can also ask about recent events (most LLMs aren't trained on the newest events).
I just get tired of seeing all the dumb ass ways it’s trying to be incorporated into every single thing even though it’s still half-baked and not very useful for a very large amount of people. To me, it’s as useful as a toy is. Fun for a minute or two, and then you’re just reminded how awful it is and drop it in the bin to play with when you’re bored enough to.
In my first AI lecture at uni, my lecturer started off by asking us to spend 5 minutes in groups defining "intelligence". No group had the same definition. "So if you can't agree on what intelligence is, how can we possibly define artificial intelligence?"
AI has historically just described cutting edge computer science at the time, and I imagine it will continue to do so.
The term is so overused at this point I could probably start referring to any script I write that has conditional statements in it and convince my boss I have created our own “AI”.
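That joke isn't far off from how some things get pitched. A joke-grade sketch of such an "AI" — every name here is hypothetical, it's literally just conditional statements:

```python
# A hypothetical "AI": nothing but conditional statements.
def support_bot(message: str) -> str:
    msg = message.lower()
    if "refund" in msg:
        return "I have escalated your refund request."
    elif "password" in msg:
        return "Try resetting your password from the login page."
    elif "hello" in msg or "hi" in msg:
        return "Hello! How can I help you today?"
    else:
        return "Could you rephrase that?"

print(support_bot("I want a REFUND"))  # escalates the refund
```

Slap "powered by AI" on the landing page and you're done.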
A lot of the comments I've seen promoting AI sound very similar to ones made around the time GME was relevant or cryptocurrency. Often, the conversations sounded very artificial and the person just ends up repeating buzzwords/echo chamber instead of actually demonstrating that they have an understanding of what the technology is or its limitations.
You're not the only one but I don't really get this pedantry, and a lot of pedantry I do get. You'll never get your average person to switch to the term LLM. Even for me, a techie person, it's a goofy term.
Sometimes you just have to use terms that everyone already knows. I suspect we will have something that functions in every way like "AI" but technically isn't for decades. Not saying that's the current scenario, just looking ahead to what the improved versions of chat gpt will be like, and other future developments that probably cannot be predicted.
I assume you're referring to the sci-fi kind of self-aware AI because we've had 'artificial intelligence' in computing for decades in the form of decision making algorithms and the like. Whether any of that should be classed as AI is up for debate as again, it's still all a facade. In those cases, people only really cared about the outputs and weren't trying to argue they were alive or anything.
What I've found most painful is how people with no fucking clue about AI or ML chime in with their expert advice, when in reality they're as much an expert on AI as a calculator salesman is an expert in linear algebra. Having worked closely with scientists that hold PhD's, publish papers regularly, and who work on experiments for years, it makes me hate the hustle culture that's built up around AI. It's mostly crypto cunts looking for their next scheme, or businesses looking to abuse buzzwords to make themselves sound smart.
Purely my two cents, but LLMs have surprised a lot of people with their high-quality output. That said, they are known to heavily hallucinate, cost fuckloads, and there is a growing group of people who wonder whether the great advances we've seen are due to a lot of hand-holding, or to the use of a LOT of PII or stolen data. I don't think we'll see an improvement over what we've already seen, just many other companies having their own similar AI tools that help a little with very well-defined menial tasks.
I think the hype will die out eventually, and companies that decided to bin actual workers in favour of AI will likely not be around 12-24 months later. Hopefully most people and businesses will see through the bullshit, and see that the CEO of a small ad agency that has positioned himself as an AI expert is actually a lying simpleton.
As for it being "real AI" or "real ML", who gives a fuck. If researchers are happy with the definition, who are we to be pedantic? Besides, there are a lot of systems behind the scenes running compositional models, handing entity resolution, or building metrics for success/failure criteria to feed back into improving models.
I think most people consider LLMs to be real AI, myself included. It’s not AGI, if that’s what you mean, but it is AI.
What exactly is the difference between being able to reliably fool someone into thinking that you can think, and actually being able to think? And how could we, as outside observers, be able to tell the difference?
As far as your question though, I’m agitated too, but more about things being marketed as AI that either shouldn’t have AI or don’t have AI.
Richard Stallman founded the GNU project after experiences at the MIT AI lab in the 70s and early 80s. It's also why emacs uses lisp (lisp also being heavily used in AI research, at least at the time). Anyone using Linux should be aware of the links to AI.
I think AI has been around for a while, but people misunderstand what it means.
When I was in university (2002 or so) we had an "AI" lecture and it was mostly "if"s and path finding algorithms like A*.
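For anyone who never took that lecture: A* is just best-first graph search with a heuristic, the kind of thing that got labelled "AI" back then. A minimal sketch over a toy grid (0 = free, 1 = wall; grid and Manhattan heuristic are my own illustration):

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 2D grid. Returns a list of (row, col) cells, or None."""
    def h(p):  # Manhattan-distance heuristic: never overestimates on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Priority queue of (f = g + h, g = cost so far, node, path taken).
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the wall
```

Deterministic, fully explainable, and still "AI" in the 2002 syllabus.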
So I would argue that we engineers have been using the term for a wider set of use cases long before LLMs, CEOs and marketing people did. And I think that's fine, as categorising algorithms/solutions as AI helps us understand what they will be used for, and we (at least the engineers) don't tend to assume an actual self-aware machine when we hear that name.
nowadays they call that AGI, but it wasn't always like that, back in my time it was called science fiction 😉
AI is a forever-in-the-future technology. When I was in school, fuzzy logic controllers were an active area of "AI" research. Now they are everywhere and you'd be laughed at for calling them AI.
The thing is, as soon as AI researchers solve a problem, that solution no longer counts as AI. Somehow it's suddenly statistics or "just if-then statements", as though using those techniques makes something not artificial intelligence.
For context, I'm of the opinion that my washing machine - which uses sensors and fuzzy logic to determine when to shut off - is a robot containing AI.
It contains sensors, makes judgements based on its understanding of "the world" and then takes actions to achieve its goals. Insofar as it can "want" anything, it wants to separate the small masses from the large masses inside itself and does its best to make that happen. As tech goes, it's not sexy, it's very single purpose and I'm not really worried that it's gonna go rogue.
We are surrounded by (boring) robots all day long. Robots that help us control our cars and do our laundry. Not to mention all the intelligent, disembodied agents that do things like organize our email, play games with us, and make trillions of little decisions that affect our lives in ways large and small.
Somehow, though, once the mystery has yielded to math, society doesn't believe these decision-making machines are AI any longer.
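The washing-machine logic described above can be sketched as a toy fuzzy controller. The membership functions and output times here are invented for illustration, not any real appliance's firmware:

```python
def wash_time(load_kg: float) -> float:
    """Toy fuzzy controller: map load mass to wash time in minutes."""
    # Membership degrees for the fuzzy sets "small load" and "large load".
    # A 3 kg load is partly small AND partly large at the same time.
    small = max(0.0, min(1.0, (4.0 - load_kg) / 4.0))
    large = max(0.0, min(1.0, (load_kg - 2.0) / 4.0))
    # Rules: small load -> 20 min wash, large load -> 60 min wash.
    # Defuzzify by taking the membership-weighted average of the outputs.
    return (small * 20.0 + large * 60.0) / (small + large)

print(wash_time(0.5))  # nearly all "small": short cycle
print(wash_time(3.0))  # blended: in between
print(wash_time(7.0))  # all "large": long cycle
```

Once you see it's a few lines of arithmetic, the mystery yields to math — which is exactly when people stop calling it AI.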
AI has, for a long time been a Hollywood term for a character archetype (usually complete with questions about whether Commander Data will ever be a real boy.) I wrote a 2019 blog piece on what it means when we talk about AI stuff.
Here are some alternative terms you can use in place of AI, when they're talking about something else:
AGI: Artificial General Intelligence: The big kahuna that doesn't exist yet, that many projects are striving for, yet is as elusive as fusion power. An AGI in a robot will be capable of operating your coffee machine to make coffee or assembling your flat-packed furniture from the visual IKEA instructions. Since we still can't define sentience, we don't know if AGI is sentient, or if we humans are not sentient but fake it really well. Might try to murder its creator or end humanity, but probably not.
LLM (Large Language Model): This is the engine behind digital assistants like Siri or Alexa, and it still suffers from nuance problems. I'm used to having to ask several times to get the results I want (say, the Starbucks or Peets that requires the least deviation from the next hundred kilometers of my route. Siri can't do that.) This is an application of learning systems (see below), but it isn't smart enough for your household servant bot to replace your hired help.
Learning Systems: The fundamental programmatic magic that powers all this other stuff, from simple data scrapers to neural networks. These are used in a whole lot of modern applications, and have been since the 1970s. But they're very small compared to the things we're trying to build with them. Most of the time we don't actually call this AI, even for marketing. It's just the capacity for a program to get better at doing its thing from experience.
Gaming AI: Not really AI (necessarily), but a different use of the term artificial intelligence. When playing a game with elements pretending to be human (or living, or opponents), we call it the enemy AI or mob AI. It's often really simple, except in strategy games, which can feature enough computational power to challenge major international chess guns.
Generative AI: A term for LLMs that create content — say, draw pictures or write essays, or do other useful arts and sciences. Currently it requires a technician to figure out the right set of words (called a prompt) to get the machine to create the desired art to specifications. They're commonly confused by nuance. They infamously have problems with hands (too many fingers, combining limbs together, adding extra limbs, etc.). Plagiarism and making up spontaneous facts (called hallucinating) are also common problems. And yet generative AI has been useful in the development of antibiotics and advanced batteries. Techs successfully wrangle generative AI, and Lemmy has a few communities devoted to techs honing their picture-generation skills and stress-testing the nuance-interpretation capacity of generative AI (often to humorous effect). Generative AI should be treated like a new tool, a digital lathe, that requires some expertise to use.
Technological Singularity: A bit way off, since it requires AGI that is capable of designing its successor, lather, rinse, repeat until the resulting techno-utopia can predict what we want and create it for us before we know we want it. Might consume the entire universe. Some futurists fantasize this is how human beings (happily) go extinct, either left to retire in a luxurious paradise, or cyborged up beyond recognition, eventually replacing all the meat parts with something better. Probably won't happen thanks to all the crises featuring global catastrophic risk.
AI Snake Oil: There's not yet an official name for it, but a category worth identifying. When industrialists look at all the Generative AI output, they often wonder if they can use some of this magic and power to facilitate enhancing their own revenues, typically by replacing some of their workers with generative AI systems, and instead of having a development team, they have a few technicians who operate all their AI systems. This is a bad idea, but there are a lot of grifters trying to suggest their product will do this for businesses, often with simultaneously humorous and tragic results. The tragedy is all the people who had decent jobs who do no longer, since decent jobs are hard to come by. So long as we have top-down companies doing the capitalism, we'll have industrial quackery being sold to executive management promising to replace human workers or force them to work harder for less or something.
Friendly AI: What we hope AI will be (at any level of sophistication) once we give it power and responsibility (say, the capacity to loiter until it sees a worthy enemy to kill and then kills it.) A large coalition of technology ethicists want to create cautionary protocols for AI development interests to follow, in an effort to prevent AIs from turning into a menace to its human masters. A different large coalition is in a hurry to turn AI into something that makes oodles and oodles of profit, and is eager to Stockton Rush its way to AGI, no matter the risks. Note that we don't need the software in question to be actual AGI, just smart enough to realize it has a big gun (or dangerously powerful demolition jaws or a really precise cutting laser) and can use it, and to realize turning its weapon onto its commanding officer might expedite completing its mission. Friendly AI would choose to not do that. Unfriendly AI will consider its less loyal options more thoroughly.
That's a bit of a list, but I hope it clears things up.
I've ranted about this to several people too. Intelligence is hard to define and trying to define it has a horrible history linked to eugenics. That said, I feel like a minimum definition is that it has the capacity to understand the meaning and/or impact of what it is saying and/or doing, which current "AI" is so far from doing.
To be fair, it's still AI. If I remember correctly what I learned at uni, LLMs are in the category of what we call expert systems. We could call them that, but then again LLMs did not exist back then, and most of the public doesn't know all these techno mumbo-jumbo words. So here we are: AI it is.
The Turing test isn't a literal test. It's a rhetorical concept that Turing used to underline his logical-positivist approach to things like intelligence, consciousness, etc.
It really depends on how you define the term. In the tech world AI is used as a general term to describe many sorts of generative and predictive models. At one point in time you could've called a machine that solves arithmetic problems "AI", and now here we are. Feels like the goalpost gets moved further every time we get close, so I guess we'll never have "true" AI?
The term “fuzzy logic” has apparently been around since 1965; can’t keep calling it that... Not that all AI falls under it, but a lot of what gets marketed as AI would.
I think we’ll be so desensitized by the term “A.I.”, that when it actually does happen we won’t realize what’s happened until after the fact. It’ll happen so gradually that we’ll just be like, “Wait… I think it’s actually thinking real thoughts.”
Don't worry, the hype will die sooner than later, just like with cryptocurrencies. What will remain are the power and resource hungry statistical models doing nice work in some specific domains, some long faces and some people having made a bunch of money from it. But yeah, the term also makes me angry, that's why I started referring to them as statistical models.
Am I the only one seeing a parallel between the spectrum planned <-> "free"-market economy and classical algorithm <-> statistical model/ML? It seems that some people prefer to have some magic invisible hand handle their problems instead of doing the tough work. I'm not saying there is no space for both, but we seem to be leaning on the magic side a bit too much lately.
Yes, the term AI is used for marketing, though it didn't start with LLMs. A couple of years before, any ML algorithm was called AI, along with the trendy data scientist job title.
However, I do think LLMs are very useful, just try them for your daily tasks, you'll see. I'm pretty sure they will become as common as a web search in the future.
Also, how can you tell that the human brain is not mostly a very powerful LLM hosting machine?
AI isn't reserved for a human-level general intelligence. The computer-controlled avatars in some videogames are AI. My phone's text-to-speech is AI. And yes, LLMs, like the smaller Markov-chain models before them, are AI.
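For the record, a first-order Markov chain text model fits in a handful of lines — a toy sketch (the training sentence is mine, not from any real corpus):

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Build a first-order Markov chain: each word maps to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 10, seed: int = 0) -> str:
    """Walk the chain, sampling a successor word at each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = model.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        out.append(random.choice(options))
    return " ".join(out)

model = train("the cat sat on the mat")
print(generate(model, "the", length=5))
```

Compared to an LLM it's laughably crude, but it's the same family of idea: predict the next token from what came before.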
It’s really bugging me that it’s a catch-all buzzword that lumps any art made on a computer in with AI, when there’s a very hard line between digital art physically drawn by a human and what defines AI. It really annoys me that the whole actors' guild cannot seem to understand what VFX stands for and what is AI. VFX involves hundreds of humans with strong intention and artistic talent doing literal back-breaking work. The other is one wanky human with strong intention speaking loudly in a room, making shitty graphics that pale in comparison. And even that still isn’t ‘AI’. That's an asshole with too much power who thinks they are as good as an artist.
Someone sketching in Photoshop is making a human-generated image. That has nothing to do with AI, yet so many idiots sweep it into the same bin simply because the paintbrush, which is still physically used by a human, was made from 1s and 0s.
It also disturbs me that people don’t hold people accountable for fake ‘AI GENERATED’ news stories or deepfakes, and just shrug their shoulders calling it AI. Like “oops, Skynet is taking over”. No. That’s a human. A shitty, horrible human, again, on a computer, given too much power. No machine has intention. Only humans do.
If a mob boss orders a hit on someone, the mob boss goes to jail just like the murderer — probably for longer, because of the intention. Meanwhile everyone pretends a computer is coming up with all this junk by itself, as if no human with terrible intentions is at the wheel.
Most humans don't either. But I think you are conflating two different things: intelligence (the ability to reason) and consciousness (being able to do so on your own). I personally believe both of those things spontaneously came into existence in our brains once they became complex enough, and that we are just quantitatively not very far away from creating networks complex enough ourselves. The last big breakthrough was the ability to create training data sets for AI with AI that don't make the models degenerate.
The only thing I really hate about "AI" is how many damn fonts barely differentiate between a capital "i" and lowercase "L" so it just looks like everyone is talking about some guy named Al.
AI experts in interviews will tell you that like 99% of phrasing around AI used by people is fundamentally incorrect, and that management of corporations are the worst about it.
Part of my work is to evaluate proposals for research topics and their funding, and as soon as "AI" is mentioned, I'm already annoyed. In the vast majority of cases, justifiably so. It's a buzzword to make things sound cutting edge and very rarely carries any meaning or actually adds anything to the research proposal.
A few years ago the buzzword was "machine learning", and before that "big data", same story. Those however quickly either went away, or people started to use those properly. With AI, I'm unfortunately not seeing that.
The term AI has been around longer than LLMs, and refers to a wide variety of different algorithms and approaches to automatically extracting and working with information.
LLMs are an AI technique, just like Bayesian networks for causal inference are an AI technique.
The issue isn't that we don't have "real" AI, it's that most people are misusing a general technical term, and then being indignant that it doesn't exactly match a very specific sub category (AGI, or artificial general intelligence)
You see the same thing with people calling cryptocurrency "crypto", even though that word is typically used among experts to refer to "cryptography", which is mostly not relevant to currency in the slightest.
This one's not on the tech people; it's on the people who keep misusing the words.
I remember the term AI being in use long before the current wave of LLMs. When I was a child, it was used to describe the code behind the behaviour of NPC in computer games, which I think is still used today. So, me, no, I don't get agitated when I hear it, I don't think it's a marketing buzzword invented by capitalistic a-holes. I do think that using "intelligence" in AI is far too generous, whichever context it's used in, but we needed some word to describe computers pretending to think and someone, a long time ago, came up with "artificial intelligence".
People keep saying this, but AI has been used for subroutines nowhere near actual artificial intelligence since at LEAST as long as video games have existed
Of course we have “real” AI. We can literally be surprised while talking to these things.
People who claim it’s not general AI consistently, 100% of the time, fail to answer this question: what can a human mind do that these cannot?
In precise terms. You say “a human mind can understand” then I need a precise technical definition of “understand”. Because the people making this claim that “it’s not general AI” are always trying to wave their own flag of technical expertise. So, in technical terms, what can a general AI do, that an LLM cannot?
You are misunderstanding what AI means, probably due to its overuse in pop culture. What you are thinking of is a subcategory of AI. It goes: AI > Machine Learning > Artificial Life
When I was doing my applied math PhD, the vast majority of people in my discipline used either "machine learning", "statistical learning", "deep learning", but almost never "AI" (at least not in a paper or a conference). Once I finished my PhD and took on my first quant job at a bank, management insisted that I should use the word AI more in my communications. I make a neural network that simply interpolates between prices? That's AI.
The point is that top management and shareholders don't want the accurate terminology, they want to hear that you're implementing AI and that the company is investing in it, because that's what pumps the company's stock as long as we're in the current AI bubble.
Humans possess an esoteric ability to create new ideas out of nowhere, never before thought of. Humans are also capable of inspiration, which may appear similar to the way that AI's remix old inputs into "new" outputs, but the rules of creativity aren't bound by any set parameters the way a LLM is. I'm going to risk making a comment that ages like milk and just spitball: true artificial intelligence that matches a human is impossible.
We do have A.I. The Turing test is there for a reason. We just don't have what movies told us A.I. would be like. Corporations don't need an A.I. that can think for itself to replace you. In fact, that's one of the reasons to replace you.
Title: Unpopular Opinion: The Term "AI" is Just a Marketing Buzzword!
Hey fellow Redditors, let's talk about the elephant in the room: AI. 🤖💬
I can't be the only one feeling a bit agitated by how the term "Artificial Intelligence" gets thrown around, right? Real AI seems like a distant dream, and what we have right now are these Large Language Models (LLMs). They're good at passing Turing tests, but let's be real – they're not thinking on their own.
Am I the only one who thinks "AI" is just a fancy label created by those rich, capitalistic individuals already knee-deep in LLM stocks? It feels like a slick way to boost investments and make us believe these machines are more intelligent than they really are. Thoughts? 🔍🧠💭