Google CEO Sundar Pichai says problems with its AI can't be solved because hallucinations are an inherent problem in these AI tools.
You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)
Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."
They keep saying it's impossible, when the truth is it's just expensive.
That's why they won't do it.
You could train the AI only on good sources (scientific literature, not social media) and then pay experts to talk with it for long periods of time, giving it feedback directly.
Essentially, if you want a smart AI you need to send it to college, not drop it off at the mall unsupervised for 22 years and hope for the best when you pick it back up.
If you can't fix it, then get rid of it, and don't bring it back until we reach a time when it's good enough to not cause egregious problems (which is never, so basically don't ever think about using your silly Gemini thing in your products ever again)
Has No Solution for Its AI Providing Wildly Incorrect Information
Don't use it??????
AI has no means to check the heaps of garbage data it has been fed against reality, so even if someone were to somehow code one to be capable of deep, complex epistemological analysis (at which point it would already be something far different from what the media currently calls AI), as long as there's enough flat-out wrong stuff in its data there's a growing chance of it screwing up.
Wow, in the 2000s and 2010s my impression was that Google was an amazing company where brilliant people worked to solve big problems to make the world a better place.
In the last 10 years, all I was hoping for was that they would just stop making their products (search, YouTube) worse.
Now they're just blindly riding the AI hype train, because "everyone else is doing AI".
It is probably the most telling demonstration of the terrible state of our current society that one of the largest corporations on earth, which got where it is today by providing accurate information, is now happy to knowingly provide incorrect, and even dangerous, information in its own name, and not give a flying fuck about it.
The best part of all of this is that now Pichai is going to really feel the heat from all of his layoffs and other anti-worker policies. Google was once a respected company and a place where people wanted to work. Now they're just some generic employer with no real lure to bring people in. That worked fine when all he had to do was increase the prices on their current offerings and stuff in more ads, but when it comes to actual product development, they are so hopelessly adrift that it's pretty hilarious watching them flail.
You can really see that consulting background of his doing its work. It's actually kinda poetic because now he'll get a chance to see what actually happens to companies that do business with McKinsey.
these hallucinations are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."
Then what made you think it’s a good idea to include that in your product now?!
So if a car maker releases a car model that randomly turns abruptly to the left for no apparent reason, you simply say "I can't fix it, deal with it"? No, you pull it out of the market, try to fix it, and, if that is not possible, retire the model before it kills anyone.
This is so wild to me... as a software engineer, if my software doesn't work 100% of the time as requested in the specification, it fails tests, doesn't get released and I get told to fix all issues before going live.
AI is basically another word for unreliable software full of bugs.
This is what happens every time society goes along with tech bro hype. They just run directly into a wall. They are the embodiment of "Didn't stop to think if they should" and it's going to cause a lot of problems for humanity.
That's ok, we were already used to not getting what we wanted from your search and are already working on replacing you since you opted to replace yourselves with advertising instead of information, the role you were supposed to fulfill which you betrayed.
die in ignominy. Open source is the only way forward.
So you have a product that you've made into a system for getting answers. And then you couldn't be bothered to try and sanitize training data enough to get your answer system's new headline feature from spreading blatantly incorrect information? If it doesn't work, maybe don't ship it.
I mean, yeah... if he had a solution, they would actually have the revolutionary AI tool the tech writers write about.
It's kinda written like a "gotcha" but it's really the fundamental problem with AI. We call it hallucinations now but a few years ago we just called it being wrong or returning bad results.
It's like saying we have teleportation working in that we can vaporize you on the spot but are just struggling to reconstruct you elsewhere. "It's halfway there!"
Until the AI is trustworthy enough not to require fact-checking afterwards, it's just a toy.
Let's turn that frown upside down! Instead of saying "Google failed to generate a useful LLM to bolster its search feature," say "Google successfully replicated the output of an average Reddit troll!"
God I'm fucking sick of this loss leading speculative investment bullshit. It's hit some bizarre zenith that has infected everybody in the tech world, but nobody has any actual intention of being practical in the making of money, or the functionality of the product. I feel like we should just can the whole damned thing and start again.
Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."
That's a lot of """quotation marks""" for something that is a very well established fact, and absolutely should not be a shock to anyone.
Yes, it's an unsolved problem. It always will be, because there is no algorithm for truth. All we can do is get incrementally better.
"Are we making progress? Yes, we are," he added. "We have definitely made progress when we look at metrics on factuality year on year. We are all making it better, but it’s not solved."
Let’s be fair with our headlines!
CEO of Google Says It Is Still Solving for Its AI Providing Wildly Incorrect Information [and is okay with people dying from rattlesnake bite misinformation in the meantime]
What y'all are forgetting is that when it comes to dominating a technology space, historically, it's not providing the better product, it's providing the cheapest/most widely available product. The goal is to capture enough of the market to get and retain that dominant position. Nobody knows what the threshold is for that until years later when the dust has settled.
So from Google's perspective if a new or current rival is going to get there first, then just push it out and fix it live. What are people going to do? Switch to Bing?
So if you want Google to stop doing this dumb broken LLM shite, use the network effect against them. Switch to a different search provider and browser and encourage all of your friends and family to do so as well.
Maybe Google should put up a disclaimer... warning people it's not 100% accurate. Or... just take down the technology, because clearly their AI is chit tier.
The "solution" is to curate things, invest massive human resources in it, and ultimately still get accused of tailoring the results and censoring stuff.
Let's put that toy back in the toy box, and keep it to the few things it can do well instead of trying to fix every non-broken thing with it.
I've seen suggestions that the AI Overview is based on the top search results for the query, so the terrible answers may have more to do with Google Search just being bad than with any issue with their AI. The AI Overview just makes things a bit worse by removing the context, so you can't see that the glue-on-pizza suggestion was a joke on Reddit, or that eating rocks was a suggestion from The Onion.
I just realized that Trump beat them to the punch. Injecting cleaning solution into your body sounds exactly like something the AI Overview would suggest to combat COVID.
Think I'll try that glue pizza. An odd taste choice, sure. But Google wouldn't recommend actually harmful things. They're the kings of search, baby! They would have to be legally responsible as individuals for the millions of cases brought against them. They know that as rich people, they will face the harshest consequences! If anything went wrong, they'd find themselves in a.......STICKY situation!!!!
These models are mad libs machines. They just decide on the next word based on input and training. As such, there isn’t a solution to stopping hallucinations.
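To make the "decide on the next word based on input and training" point concrete, here's a minimal sketch of that idea: a toy bigram model that picks the next word purely from co-occurrence counts. This is nothing like a real LLM in scale or architecture, and the corpus here is made up for illustration, but it shows why a pure next-word machine has no concept of truth, only of what word tends to follow.

```python
import random
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, word):
    """Return the most frequent follower seen in training, or None."""
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Feed it text containing a bad joke, and the "model" will happily
# continue the joke -- it only knows word statistics, not facts.
corpus = "put glue on pizza to keep cheese on pizza to keep cheese from sliding"
model = train_bigram(corpus)
print(next_word(model, "glue"))   # "on" -- statistics, not understanding
print(next_word(model, "pizza"))  # "to"
```

The model will "hallucinate" anything its training text makes statistically plausible; there's no separate truth-checking step anywhere in the loop, which is the commenter's point.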
So crazy that humanity has so far allowed the idea of "hallucinations", even just the term, to be normalized and accepted at any level in a product that's being forced into every layer of our daily existence.
Stop just going with it. Call out hallucinations on their face.
I got a solution, stop being a lil baby and turn off the AI and go on to the next big thing. CRISPR, maybe? Not techbro enough? Make it like Crypto Crispr, only you own this little piece of DNA, and all the corporations that can read the ledger and get your biometrics
I have a solution! Employ a human to verify the work of AI, perhaps you need more than one with all the junk AI might produce. Maybe you will even need an entire department to do that, and maybe you should just not use AI.
I'm curious, are these hallucinations very prevalent? I'm outside the US so haven't seen the feature yet. But I have noticed that practically every article references the same glue incident.
So I'm not sure if the hallucinations are happening all the time, or everyone is just jumping on a handful of mistakes the AI made. If the latter, the situation reminds me of how every single accident involving a Tesla was reported on back in the day.
But this week’s debacle shows the risk that adding AI – which has a tendency to confidently state false information – could undermine Google’s reputation as the trusted source to search for information online.
The problem with all these chat AIs is that they're just a glorified autocorrect. It never knew what it was saying from the beginning. That's why it "hallucinates".
You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries?
Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."
So expect more of these weird and incredibly wrong snafus from AI Overviews despite efforts by Google engineers to fix them, such as this big whopper: 13 American presidents graduated from University of Wisconsin-Madison.
Despite Pichai's optimism about AI Overviews and its usefulness, the errors have caused an uproar online, with many observers showing off various instances of incorrect information being generated by the feature.
And it's staining the already soiled reputation of Google's flagship product, Search, which has already been dinged for giving users trash results.
"Google’s playing a risky game competing against Perplexity & OpenAI, when they could be developing AI for bigger, more valuable use cases beyond Search."
The original article contains 344 words, the summary contains 183 words. Saved 47%. I'm a bot and I'm open source!
just like there's no solution for punishing YouTubers who follow the rules while allowing doxxers and pedos to use YouTube to dox people and lure little girls into their houses.
If its job is to write a fan fic on what may or may not be true about what you asked for, then it does a great job. But typically people search for information, and getting what is essentially a glorified autocomplete isn't useful. It's like big tech has learned nothing from the massive issue of disinformation and just added fuel to the fire of an unsolved problem we're still very much trying to figure out.
If you want smart AI, then you have to have smart people teach it facts. You have to send it to college, not train it on gigs of Reddit sh*tposts and hope for the best.
Good lord what is wrong with the people in this thread. The guy is literally owning up to the hard limitations of LLMs. I'm not a fan of him or Google either, but hey kudos for being honest this once. The entire industry would be better off if we didn't treat LLMs like something they're not. More of this please!