Google has sent internet into ‘spiral of decline’, claims DeepMind co-founder

Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.
Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”
He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.
Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.
The part about Google isn't wrong.
But the second half of the article, where he says AI chatbots will replace Google search because they give more accurate information, simply isn't true.
I'd say they at least give more immediately useful info. I've got to scroll past 5-8 sponsored results and then the next top results are AI generated garbage anyways.
Even though I think he's mostly right, the AI techbro gameplan is obvious: position yourself as a better alternative to Google search, burn money by the barrelful to capture the market, then begin enshittification.
In fact, enshittification has already begun; responses are comparatively expensive to generate, so the more users they onboard, the more they have to scale back the quality of those responses.
ChatGPT is already getting worse at code commenting and programming.
The problem is that enshittification is basically a requirement in a capitalist economy.
I mean, most top search results are AI-generated bullshit nowadays anyway. Adding Reddit to a search is basically the only decent way to get a proper answer. But those answers aren't much more reliable than ChatGPT's. You have to apply the same skepticism and fact-checking regardless.
Google has really gotten horrible over the years.
Most of the results after the first page on Google are just the usable results from page one, mirrored on some shady site full of ads and malware.
I already go to ChatGPT more than Google. If you pay for it, the latest version can access the internet, and if it doesn't know the answer to something it'll search the web for you. Sometimes I come across a long clickbait page and just give ChatGPT the link and tell it to pull the information out of it for me.
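For what it's worth, you can do roughly the same thing outside the ChatGPT UI. This is only a sketch of that "give it the link and let it pull out the facts" workflow via the API, assuming you have an OpenAI API key and the requests/beautifulsoup4/openai packages installed; the model name and prompt are placeholders I picked, not the built-in browsing feature the comment above describes.

```python
# Rough sketch: fetch a page, strip the markup, and ask a chat model to
# answer a question using only what the page actually says.
# The model name and prompt wording below are assumptions, not anything
# specific to the ChatGPT browsing feature.

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

def extract_info(url: str, question: str) -> str:
    page = requests.get(url, timeout=10)
    text = BeautifulSoup(page.text, "html.parser").get_text(" ", strip=True)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, swap for whatever you use
        messages=[
            {"role": "system",
             "content": "Answer only from the supplied page text. "
                        "If the page does not contain the answer, say so."},
            {"role": "user",
             "content": f"Page text:\n{text[:15000]}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

# e.g. extract_info("https://example.com/clickbait-recipe",
#                   "What is the actual ingredient list?")
```

Capping the page text and telling the model to answer only from what was supplied keeps it closer to the page and makes the "it's not actually in there" cases explicit instead of hallucinated.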
Do you fact-check the answers?
give it time, algos will fuck those results as well
ChatGPT powers Bing Chat, which can access the internet and find answers for you, no purchase necessary (if you're not on Edge you might need to install a browser extension to access it, as they're still trying to push Edge).
Do you fact-check the answers?
It's already happening at my work. Many are using Bing AI instead of Google.
Don't worry, they'll start monetizing LLMs and injecting ads into them soon enough, and we'll be back to square one.
ChatGPT flat-out hallucinates quite frequently in my experience. It never says "I don't know / that is impossible / no one knows" in response to queries that simply don't have an answer. Instead, it opts to give a plausible-sounding but completely made-up answer.
A good AI system wouldn't do this. It would be honest, and return nothing when the information simply doesn't exist. However, that is quite hard for LLMs, as they are essentially glorified next-word predictors. The loss they're trained on doesn't measure accuracy of information; it measures how plausible the next word sounds in context.
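A toy illustration of that last point, assuming nothing about any real model, just the standard next-token cross-entropy objective: the loss only scores how likely the observed next word was, never whether the finished sentence is true, so a confident wrong continuation can score better than an honest "nobody".

```python
# Toy example (made-up vocabulary and probabilities, not any real model)
# of why a pure next-word predictor rewards plausibility rather than truth:
# the training loss is cross-entropy over the next token, so whatever
# continuation the model finds likely looks "good" to the objective,
# even if it is factually wrong.

import math

# Hypothetical model probabilities for the next word after
# "The first person to walk on Mars was ..."
next_word_probs = {
    "Armstrong": 0.55,   # plausible-sounding, but nobody has walked on Mars
    "Aldrin": 0.20,
    "unknown": 0.05,     # the honest answer gets little probability mass
    "nobody": 0.03,
    "<other>": 0.17,
}

def cross_entropy(target: str) -> float:
    """Loss the model would receive if `target` were the training label."""
    return -math.log(next_word_probs.get(target, 1e-9))

# The objective only asks "how likely was the observed next word?"
# It never asks "was the completed sentence true?"
for word in ("Armstrong", "nobody"):
    print(f"{word:10s} loss = {cross_entropy(word):.2f}")

# The confident, wrong continuation ("Armstrong") gets the lower loss,
# so generation prefers it over an honest "nobody" / "unknown".
```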
I suspect that client-side AI might actually be the kind of thing that filters the crap from search results and actually gets you what you want.
That would only need to be a chat-style AI if it turns out that natural-language queries capture what the user is looking for better than hand-crafted traditional query strings.
I'm thinking each person could train their AI on which results they went for in unfiltered queries, with some kind of user-provided suitability feedback to account for clickbait (i.e. somebody selecting a result because it looks good, but it turns out it's not).
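A minimal sketch of what that could look like, with a purely hypothetical feature set and data: a tiny logistic-regression re-ranker that lives on the client, learns from which results you picked plus an explicit suitable/not-suitable signal, and so learns to push down clickbait you clicked but then flagged as useless.

```python
# Minimal client-side re-ranker sketch. Features and training data are
# hypothetical stand-ins, not any real search API.

import math
from collections import defaultdict

def features(result: dict) -> dict:
    """Very crude features: the result's domain plus the words in its title."""
    feats = {f"domain:{result['domain']}": 1.0}
    for word in result["title"].lower().split():
        feats[f"word:{word}"] = 1.0
    return feats

class PersonalRanker:
    def __init__(self, lr: float = 0.5):
        self.weights = defaultdict(float)
        self.lr = lr

    def score(self, result: dict) -> float:
        """Estimated probability that this result is actually suitable."""
        z = sum(self.weights[f] * v for f, v in features(result).items())
        return 1.0 / (1.0 + math.exp(-z))

    def feedback(self, result: dict, suitable: bool) -> None:
        """Logistic-regression update from one piece of user feedback.

        suitable=False covers the clickbait case: the user clicked,
        but afterwards said the page wasn't actually useful.
        """
        error = (1.0 if suitable else 0.0) - self.score(result)
        for f, v in features(result).items():
            self.weights[f] += self.lr * error * v

# Usage: learn from past sessions, then re-rank raw (unfiltered) results.
ranker = PersonalRanker()
ranker.feedback({"domain": "stackoverflow.com",
                 "title": "How to parse JSON in Python"}, suitable=True)
ranker.feedback({"domain": "content-farm.example",
                 "title": "You won't BELIEVE this Python trick"}, suitable=False)

raw_results = [
    {"domain": "content-farm.example", "title": "10 unbelievable Python tricks"},
    {"domain": "stackoverflow.com", "title": "Parse nested JSON in Python"},
]
for r in sorted(raw_results, key=ranker.score, reverse=True):
    print(f"{ranker.score(r):.2f}  {r['domain']}  {r['title']}")
```

The explicit feedback step is what separates "clicked it" from "it was actually useful", which is exactly the clickbait case mentioned above.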
If you aren't paying for ChatGPT, take a look at perplexity.ai; it's free.
You'll see that sources are referenced and linked.
Don't judge based on the free version of ChatGPT.
Edit: Why the hell are you guys downvoting a legit suggestion of a new technology in the technology community? What do you expect to find here? Comments on steam engines?
Okay, but the problem with that is that LLMs not only don't have any fidelity at all, they can't. They are analogous to the language-planning centre of your brain, which has to be filtered through your conscious mind to check whether it's talking complete crap.
People don't realise this and think the bot is giving them real information, but it's actually just giving them spookily realistic word-salad, which is a big problem.
Of course you can fix this if you add some kind of context engine so they truly grasp the deeper and wider meaning of your query. The problem is that if you do that, you've basically created an AGI. That may, first of all, be extremely difficult and far in the future, and second of all, it has ethical implications that go well beyond how effective a search engine it is.