I have no doubt that an ML chatbot is perfectly capable of being as useless as an untrained human first-level supporter with a language barrier.
And the dude in the article basically admits that's what his call center was like:
Suumit Shah never liked his company’s customer service team. His agents gave generic responses to clients’ issues. Faced with difficult problems, they often sounded stumped, he said.
So evidently good support outcomes were never the goal.
Agreed. Should we also mourn for the horse and buggy drivers? The gas station attendants? And the whole slew of jobs that have become obsolete over the centuries?
I do think we need something like UBI and I'm not ignoring the lost jobs, but shit jobs shouldn't have to exist. I'll mourn for the workers but not for the job. Continuing to employ people to do thankless/hard/dangerous/etc. jobs is just silly.
Doubt. These large language models can't produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn't in their dataset they can't help, just like all those cheap Indian call centers operating off a script. It's just a bigger script. They'll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it's an upgrade for their shit automated call systems.
Most call centers have multi-level teams where the lower ones are just reading off a script and make up the majority. You don't have to replace every single one to implement AI. It's gonna be the same for a lot of other jobs as well, and many will lose jobs.
I'd say at best it's an upgrade to scripted customer service. A lot of the scripted ones are slower than AI and often staffed by people with stronger accents, making it more difficult for the customer to understand the script entry being read back to them, leading to more frustration.
If your problem falls outside the realm of the script, I just hope it recognises the script isn't solving the issue and redirects you to a human. Oftentimes I've noticed ChatGPT not learning from the current conversation (though if you ask it about this, it will claim it does). In this scenario it just regurgitates the same 3 scripts back to me when I tell it it's wrong. In my case this isn't so bad, as I can just turn to a search engine, but in a customer service scenario it would be extremely frustrating.
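The safeguard described above (notice the bot is looping, then hand off) can be sketched with a simple repetition check. This is a hypothetical illustration, not how any real support bot works; all names and the similarity threshold are made up.

```python
# Hypothetical sketch: if the bot keeps giving near-identical answers,
# stop looping and hand the conversation off to a human.
from difflib import SequenceMatcher


def is_repeat(reply: str, history: list[str], threshold: float = 0.9) -> bool:
    """True if the reply is nearly identical to one already given."""
    return any(
        SequenceMatcher(None, reply, old).ratio() >= threshold
        for old in history
    )


def respond(reply: str, history: list[str]) -> str:
    if is_repeat(reply, history):
        # Regurgitating the same script -> escalate instead of looping.
        return "ESCALATE_TO_HUMAN"
    history.append(reply)
    return reply


history: list[str] = []
print(respond("Try restarting the app.", history))  # normal first answer
print(respond("Try restarting the app!", history))  # near-duplicate -> escalate
```

A real system would compare embeddings rather than raw strings, but the design point is the same: the loop-detection lives outside the model, so the bot can't talk itself in circles forever.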
Your description of AI limitations sounds a lot like the human limitations of the reps we deal with every day. Sure, if some outlier situation comes up then it has to go to a human, but let's be honest: those calls are usually going to a manager anyway, so I'm not seeing your argument. An escalation is an escalation. The article itself says it's not a literal 100% replacement of humans.
You can doubt it all you want; the fact of the matter is that AI is provably more than capable of taking over the roles of humans in many work areas, and it already does.
A lot of that abuse is because customer service has been gutted to the point that it is infuriating to a vast number of customers calling about what should be basic matters. Not that the abuse is justified; it's just that it doesn't necessarily have to be such a draining job if not for the greed that puts reps in that situation.
Cheap as hell until you flood it with garbage, because there is a dollar amount assigned to every single interaction.
Also, I’m not confident that ChatGPT would be meaningfully better at handling the edge cases that always make people furious with phone menus these days.
I've worked in this field for 25 years and don't think that ChatGPT by itself can handle most workloads, even if it's trained on them.
There are usually transactions which must be done and often ad hoc tasks which end up being the most important things because when things break, you aren't trained for them.
If you don't have a feedback loop to solve those issues, your whole business may just break without you knowing.
I ordered Chipotle for delivery and I got the wrong order. I don't eat meat so it's not like I could just say whelp, I'm eating this chicken today I guess.
The only way to report an issue is to chat with their bot. And it is hell. I finally got a voucher for a free entree but what about the delivery fee and the tip back? Impossible.
I felt like Sisyphus.
I waited for the transaction to post and disputed the charge on my card and it credited me back.
There are so many if-and-or-else scenarios that no amount of scraping the world's libraries lets today's AI sort them all out.
Yes, these kinds of transactions really need to be hand-coded to be handled well. LLMs are very poorly suited to this kind of thing (though I doubt you were dealing with an LLM at Chipotle just yet).
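One way to picture the split being argued for here: let a model (or any classifier) only label the customer's intent, while anything that touches money stays in deterministic, auditable hand-coded rules. A minimal sketch, with entirely hypothetical names and a keyword stand-in where an LLM call would go:

```python
# Hypothetical sketch: keep refund logic hand-coded; the model only
# classifies intent. All function and field names are illustrative.

def classify_intent(message: str) -> str:
    """Stand-in for an LLM call that maps free text to a known intent."""
    text = message.lower()
    if "wrong order" in text or "missing item" in text:
        return "wrong_order"
    if "refund" in text:
        return "refund_request"
    return "unknown"


def handle_wrong_order(order: dict) -> dict:
    """Deterministic refund rules -- no model in the money path."""
    refund = order["item_total"]
    if order["delivered"]:
        # A wrong delivered order refunds fees and tip too -- exactly
        # the gap described in the Chipotle story above.
        refund += order["delivery_fee"] + order["tip"]
    return {"action": "refund", "amount": round(refund, 2)}


def route(message: str, order: dict) -> dict:
    intent = classify_intent(message)
    if intent == "wrong_order":
        return handle_wrong_order(order)
    # Anything the rules don't cover goes to a person, not a guess.
    return {"action": "escalate_to_human"}


order = {"item_total": 11.50, "delivery_fee": 3.99, "tip": 2.00,
         "delivered": True}
print(route("I got the wrong order and I don't eat meat", order))
# -> {'action': 'refund', 'amount': 17.49}
```

The point of the design is that the refund amount comes from fixed rules you can test and audit, so a confidently wrong model can never invent (or withhold) money.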
On one hand, they're crap jobs. On the other hand, in most economies we have crap jobs not because they're necessary for productivity, but to give us an excuse to pay people to live.
Maybe if enough jobs are lost to automation, we'll start to rethink the structure of a society that only allows people to live if they're useful to a rich person.
Essentially, we're just still doing feudalism with extra steps, and it's high time we cut that nonsense out.
I think once workers can be replaced, there will be some virus that wipes out most of humanity. No point keeping billions of people around if they aren't needed.
Username checks out... suffice to say that a time of increasing social unrest is on the way, when it's even easier for the haves to sideline the have nots than it already was.
We have crappy jobs because jobs need doing and it was still cheaper to get humans to do it without a substantial loss in functionality. They don't exist because of some form of social altruism, as evidenced by the fact that as soon as a semi-viable alternative is offered then the jobs are gone.
With the dynamic shifting to automation, prematurely I would add, then employers are seeing a much cheaper way to achieve 80% of what they currently offer.
When I think of crappy jobs I think of a number of different sets.
Busywork for the extra hands in the clerical pool. This is the stuff that defines the careers of a lot of people in developed countries: they're hired and trained, may even work on projects for a while, and then are dropped into a holding cubicle and tasked with something benign but probably useless (say, entering archived paper files from decades ago into the new data system in case we need them someday -- I did that). Here in the States (and, according to anecdotes, the UK) we have a lot of this kind of work, and while it should only be a temporary measure between company projects, entire department clerical pools have been stuck in such holding patterns for years at a time.
It happens for two reasons I've seen. One, the economy tanks, such as during the subprime mortgage crisis of 2008, in which lower management, in a grand effort of humanitarian desperation, tells upper management that no, my crew is working hard and very necessary, in hopes that theirs is not a department that gets eliminated during the downsizing. (These managers are covering their own butts too, but the ones I talked with recognized that anyone they dismissed would be eating ramen in a month.) And two, the mismanagement of responsibility in linking tasks that need to be done with worker pools capable of doing them. Either the managers tasked with making such links are overwhelmed, or the process of connecting pools to duties is distributed so broadly that it's de-prioritized by everyone. If those tasks are particularly odious (say they involve interacting with a toxic upper manager), then lower management will find reasons that their own pool is not able to help, and so the company has simultaneous worker shortages and surpluses. For large multinational conglomerates, this sort of thing is routine.
Jobs that are facades to cover for social or moral obligations that are expected of the company, but (from the perspective of shareholders) are too expensive to actually do, such as the faux tech-support services the US exported to phone banks in India that are limited to some very short troubleshooting trees rather than someone actually familiar with the technical aspects of the product. This is (I think) what the business owner of the article is talking about replacing.
Now what he should be doing is hiring a tech service and including the troubleshooting tree in the manual, as is typically done with household appliances. The workers on that phone bank are set up to be pressured by angry customers to offer productive solutions, while also getting pressure from management to placate those customers, for which they have insufficient facilities. I'm reminded of my own experience being told by upper management that I should spend only fifteen minutes explaining to customers how to install CD-ROM drives (under MS-DOS, mind you), when it usually took forty-five minutes to an hour to walk a non-geek through the process.
Such jobs shouldn't exist, rather the company should actually hire real departments to deal with social responsibilities, rather than front veneers and marketing campaigns, but that's a problem intrinsic to the system and not one that will be solved with LLMs given the same short troubleshooting trees. (An LLM with a big troubleshooting tree developed by a serious tech team might work, but would require ongoing development and maintenance, and the occasional tech-support call with a human being. Also a better LLM than we have.)
Jobs that are odious because they're labor intensive, hazardous, tedious, frustrating or otherwise taxing on the worker -- and yes, there are a lot of necessary tasks that fall into these categories. So when you say "We have crappy jobs because jobs need doing," I assume you're talking about these.
Because we're in a capitalist system that mandates shareholder primacy, our companies first seek out a labor pool they can exploit, since they don't feel they have any other choice. This is classified as bonded servitude, id est slavery, but we don't like to call it that when an enterprise uses human beings like interchangeable, disposable parts. Historically, we've hired children, exploited prison populations and immigrants, invoked a truck system, a culture of obligatory productivity -- whatever, anything to force our fellow human beings to toil under cruel conditions.
Without an exploitable population, enterprises face labor unrest (unions are the least violent version of this we know) aimed at improving conditions and compensation, leaving industries either to capitulate, pay extra and provide proper gear, or to automate wherever they can.
I imagine that in collectives, everyone eventually gets pissed off from drawing straws and starts working on ways to make odious tasks less odious, either through automation or by improving the conditions of the task until it's no longer odious, e.g. making actual cleaning as close to PowerWash Simulator as possible.
We don't need to keep all bullshit jobs around. The printing press putting scribes out of jobs was a good thing. This is similar. New jobs will be created that will hopefully involve more productive work.
I mean, if you go to your credit card provider with a copy of the chat log with their rep, and the rep says "I authorize a refund," you can at least make the argument.
Any company scummy enough to trust an AI for this wouldn't give it that authority, though.
Hopefully it'll be the end of capitalism. How is the economic model supposed to function when nobody is working? Where are people supposed to get money from? How is anything going to be taxed?
Realistically though it'll somehow push capitalism into hyperdrive and enslave the global population under the control of the AI owners.
A lot of jobs are just busywork that does nothing and makes nothing. Talking about automating them misses the point of why the jobs exist in the first place.
"I see that you are throwing a ball at a target that is connected to a platform with a human sitting above a tank of water. Here is an AI-generated picture of a random human underwater to sate your needs. Yay! I have made this process 200% more efficient!"
It's crazy how people seem fundamentally incapable of looking at the big picture and asking themselves things like, "What even is the purpose of society? Is this the best society humanity is able to come up with? What if I am not ready to accept society as it is presented to me, what are my alternatives, do I even have any? What are my obligations towards a society that marginalizes me and treats me like a second- or third-tier human, without any hope of ever improving my lot?"
Ask people if they would rather be free and get everything they want without having to work for it. The answers you'll get will boggle your mind.
We've been permeated by the idea that "you have to be financially productive to be a decent human" for so long, even people against excessive/useless work still sometimes miss the point of this crazy race toward making more benefit regardless of anything else.
Sometimes, reaching the "it works" point is enough, but higher-ups never stop there. It always has to be "better/more".
I'm surprised by the number of workaholics that exist, like why do you want to work so much? Go explore the world, learn things, make things, but people want to work instead?
You still need to employ some humans as a backup when the AI catastrophically fucks up, but for the most part it makes sense. Not all jobs need to continue to exist.
Not every customer service employee should worry about being replaced, but those who simply copy and paste responses are no longer safe, according to Shah.
Working conditions in this industry are not great. The turnover rate can reach 80% sometimes. It can be a difficult, stressful and low paid job that few people enjoy. At the same time, the demand for this work keeps increasing as more and more of consumer activity shifts online and remote. It seems to me that the technology may be a net benefit in this case. The public and its regulatory authority should, however, keep a close eye on developments to make sure humans are not left behind.
I've been working with gpt-4 since the week it came out, and I guarantee you that even if it never became any more advanced, it could already put at least 30% of the white collar workforce out of business.
The only reason it hasn't is because companies have barely started to comprehend what it can do.
Within 5 years the entire world will have been revolutionized by this technology. Jobs will evaporate faster than anyone is talking about.
If you're very smart, and you begin to use gpt-4 to write the tools that will replace you, then you MIGHT have 10 good years left in this economy before humans are all but obsolete.
If you're not staying up nights, scared shitless by what's coming, it's because you don't really understand what gpt-4 can do.
You sound like one of those idiots preaching the apocalypse from a street corner. Humans obsolete in 10 years? Yeah sure buddy, right after all those profits trickle down. This is just another tool, an interesting one to be sure, but still just a tool. If you're staying up nights worrying about this, you don't really understand the technology, or maybe you're just worried someone is going to realize you don't do shit.
I work with AI stuff, just getting into LLMs, but I have been doing Stable Diffusion (SD) work since the public release last year. In just over a year, SD has gone from drawing a passable 512x512 image of a cat on a reasonably powerful graphics card to creating 4K images on the same cards that are nearly indistinguishable from actual photos and paintings. It is the single fastest adoption and development of a technology I have seen in my 30 years in tech.

I have actually been tracking the job market and the impacts this will have, and he is not all that far off in his estimate. The current push in AI development is a nearly ubiquitous existential threat to employment as we view it in the society of the United States. Everyone is on the chopping block, and you'd best believe that the C-level executives want to eliminate as many positions as possible. Labor is viewed as an atrocious expense and the first place cuts should be made.

I challenge you to come up with a list of 10 jobs, each employing more than 100,000 people in the country, that you think would be safe from AI, and I will see how many of them I can find someone already actively working on eliminating.
Companies don't want employees, only paying customers. If they can eliminate employees, they will. Hence self-checkouts in grocers, pay at the pump for gas stations, order kiosks at McDonald's, mobile ordering for virtually every fast food place, the list goes on and on. These are all recent non-AI replacements that have cut into the employment prospects for people.
I think once SAP and Jira start implementing a lot more AI and make it simpler to use, it could cut down a lot of corporate jobs. Not the hands-on stuff, but a lot of the simpler jobs like purchasing and inventory staff could be shrunk down to fewer people and fewer cubicles. At least that's what we talked about at our company: how everyone is adjusting to the new world, especially advertising, now that everything will be served to you by a bot instead of a search.
You sound like one of those peasants standing on street corners saying, "horses replaced with fuming metal boxes in 10 years? Hah, yeah, sure buddy, right after we put a man on the moon! Getoutta here, you loon!"
I'm a senior Linux sysadmin who's been following the evolution of AI over this past year just like you, and just like you I've been spending my days and nights tinkering with it nonstop, and I have come to more or less the same conclusion as you have.
The downvotes are from people who haven't used the AI, and who are still in the Internet 1.0 mindset. How people still don't get just how revolutionary this technology is, is beyond me. But yeah, in a few years that'll be evident enough, time will show.