AI slows down some experienced software developers, study finds
Yeah... It's useful for summarizing searches but I'm tempted to disable it in VSCode because it's been getting in the way more than helping lately.
I work for an adtech company and I'm pretty much the only developer for the JavaScript library that runs on client sites and shows our ads. I don't use AI at all because it keeps generating crap.
I have to use it for work by mandate, and overall I hate it. Sometimes it can speed up certain aspects of development, especially if the domain is new or the project is small, but those gains are temporary. They steal time from the learning that I would be doing during development and push it back to later in the process, and they are nowhere near good enough to make it so that I never have to do the learning at all.
"Explain this to me, AI." It reads back exactly what's on the screen, including the comments, somehow with more words but less information. "Ok..."
"Ok, this is tricky. AI, can you do this refactoring so I don't have to keep track of everything? No... That's all wrong... Yeah, I know it's complicated, that's why I wanted it refactored. No, you can't do that... Fuck, now I can either toss all your changes and do it myself or spend the next 3 hours rewriting it."
Yeah I struggle to find how anyone finds this garbage useful.
This was the case a year or two ago, but now, if you have an MCP server for docs and your project and goals outlined properly, it's pretty good.
Not to sound like one of the ads or articles, but I vibe coded an iOS app in like 6 hours. It's not so complex that I don't understand it, it's multi-featured, I learned a LOT, and I got a useful thing instead of doing a tutorial with a sample project. I don't regret having that tool. I do regret the lack of any control, oversight, and public ownership of this technology, but that's the timeline we're on; let's not pretend it's gay space communism (sigh). But since AI is probably driving my medical care decisions at the insurance company level, I might as well get something to play with.
You shouldn't think of "AI" as intelligent and ask it to do something tricky. The boring stuff that's mostly just typing, that's what you get the LLMs to do. "Make a DTO for this table <paste>." "Interface for this JSON <paste>." I just have a bunch of conversations going where I can paste stuff into, and it will generate basic code. Then it's just connecting things up, but that's the fun part anyway.
Most IDEs have done the boring stuff with templates and code generation for like a decade, so that's not so helpful to me either, but if it works for you.
If you give it the right task, it’s super helpful. But you can’t ask it to write anything with any real complexity.
Where it thrives is being given pseudo code for something simple and asking for the specific language code for it. Or translate between two languages.
That’s… about it. And even that it fucks up.
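To be concrete, the pseudo-code-to-real-code case I mean is something like this (pseudo code into Python; all the names here are made up):

```python
# Hypothetical translation of pseudo code ("for each order, sum the line
# totals, skipping cancelled orders") into Python. The dict keys are
# placeholders, not from any real system.
def total_revenue(orders: list[dict]) -> float:
    total = 0.0
    for order in orders:
        if order.get("status") == "cancelled":
            continue
        total += sum(line["qty"] * line["unit_price"] for line in order["lines"])
    return total

print(total_revenue([
    {"status": "paid", "lines": [{"qty": 2, "unit_price": 5.0}]},
    {"status": "cancelled", "lines": [{"qty": 1, "unit_price": 99.0}]},
]))  # 10.0
```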
I bet it slows down the idiot software developers more than anything.
Everything can be broken into smaller, easily defined chunks, and for that AI is amazing.
Give me a function in Python that if I provide it a string of XYZ it will provide me an array of ABC.
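As a made-up illustration of what that kind of prompt gets you back (XYZ and ABC standing in for whatever formats you actually have):

```python
# Made-up illustration of the "string of XYZ in, array of ABC out" prompt:
# here, a comma-separated string in, a list of ints out. XYZ/ABC are
# placeholders from the prompt above, not real formats.
def parse_ids(raw: str) -> list[int]:
    return [int(part) for part in raw.split(",") if part.strip()]

print(parse_ids("1, 2, 3"))  # [1, 2, 3]
```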
The trick is knowing how it fits in your larger codebase. That's where your developer skill is. It's no different now than it was when coding was offshored to India. We replaced Ravinder with ChatGPT.
Edit - what I hate about AI is the blatant lying. I asked it for some ServiceNow code Friday and it told me to use the sys_audit_report table which doesn't exist. I told it so and then it gave me the sys_audit table.
The future will be those who are smart enough to know when AI is lying and know how to fix it when it is. Ideally you are using AI for code you can do, you just don't want to. At least that's my experience. In that, it's invaluable.
I have asked questions, had conversations for company, and generated images for role-playing with AI.
I've been happy with it, so far.
That's kind of outside the software development discussion but glad you're enjoying it.
Sounds like you just need to find a better way to use AI in your workflows.
Github Copilot in Visual Studio for example is fantastic and offers suggestions including entire functions that often do exactly what you wanted it to do, because it has the context of all of your code (if you give it that, of course).
Experienced software developer, here. "AI" is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.
And... that's about it. It sucks at code review, and will break shit in your repo if you let it.
On that last note, an important thing they left out here (this being general news reporting on tech stuff) is that the study was specifically about bug-fixing tasks. It can typically only provide the broadest of advice on that, and it's largely incapable of tackling problems holistically, when you often need to be thinking big picture while tackling a bug.
Interesting that the AI devs thought they were being quicker though.
Not a developer per se (mostly virtualization, architecture, and hardware) but AI can get me to 80-90% of a script in no time. The last 10% takes a while but that was going to take a while regardless. So the time savings on that first 90% is awesome. Although it does send me down a really bad path at times. Being experienced enough to know that is very helpful in that I just start over.
In my opinion AI shouldn’t replace coders but it can definitely enhance them if used properly. It’s a tool like everything. I can put a screw in with a hammer but I probably shouldn’t.
Like I said, I do find it useful at times. But not only shouldn't it replace coders, it fundamentally can't. At least, not without a fundamental rearchitecting of how they work.
The reason it goes down a "really bad path" is that it's basically glorified autocomplete. It doesn't know anything.
On top of that, spoken and written language are very imprecise, and there's no way for an LLM to derive what you really wanted from context clues such as your tone of voice.
Take the phrase "fruit flies like a banana." Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?
It's a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we've got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don't.
I have limited AI experience, but so far that's what it means to me as well: helpful in very limited circumstances.
Mostly, I find it useful for "speaking new languages" - if I try to use AI to "help" with the stuff I have been doing daily for the past 20 years? Yeah, it's just slowing me down.
And the only reason it's not slowing you down on other things is that you don't know enough about those other things to recognize all the stuff you need to fix.
I like the saying that LLMs are good at stuff you don’t know. That’s about it.
Same. I also like it for basic research and helping with syntax for obscure SQL queries, but coding hasn't worked very well. One of my less technical coworkers tried to vibe code something and it didn't work well. Maybe it would do okay on something routine, but generally speaking it would probably be better to use a library for that anyway.
I actively hate the term "vibe coding." The fact is, while using an LLM for certain tasks is helpful, trying to build out an entire, production-ready application just by prompts is a huge waste of time and is guaranteed to produce garbage code.
At some point, people like your coworker are going to have to look at the code and work on it, and if they don't know what they're doing, they'll fail.
I commend them for giving it a shot, but I also commend them for recognizing it wasn't working.
Everyone on Lemmy is a software developer.
Sometimes I get an LLM to review a patch series before I send it, as a quick once-over. I would estimate about 50% of the suggestions are useful and about 10% are based on "misunderstanding". Last week it was suggesting a spelling fix I'd already made, because it didn't understand that the "-" in the diff meant I'd changed the line already.
I've found it to be great at writing unit tests too.
I use github copilot in VS and it's fantastic. It just throws up suggestions for code completions and entire functions etc, and is easily ignored if you just want to do it yourself, but in my experience it's very good.
Like you said, using it to get the meat and bones of an application from scratch is fantastic. I've used it to make some awesome little command line programs for some of my less technical co-workers to use for frequent tasks, and then even got it to make a nice GUI over the top of it. Takes like 10% of the time it would have taken me to do it - you just need to know how to use it, like with any other tool.
Exactly what you would expect from a junior engineer.
Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.
Something something craftsmen don’t blame their tools
AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.
The difference being junior engineers eventually grow up into senior engineers.
Exactly what you would expect from a junior engineer.
Except junior engineers become seniors. If you don't understand this ... are you HR?
I agree with the depicted actual developers, but this is still funny
No shit. AI will hallucinate shit, I'll hit tab by accident and spend time undoing that, or it'll hijack tab on new lines inconsistently.
Fun how the article concludes that AI tools are still good anyway, actually.
This AI hype is a sickness
LLMs are very good in the correct context; forcing people to use them for things they are already great at is not the correct context.
Upper management said a while back that we need to use Copilot. So far I've just used Deepseek to fill out the stupid forms that management keeps getting us to fill out.
Writing code is the easiest part of my job. Why are you taking that away?
For some of us that's more useful. I'm currently playing a DevSecOps role, and one of the defining characteristics is that I need to know all the tools. On Friday, I was writing some Java modules, then some Groovy glue, then spent the afternoon writing a Python utility. While I'm reasonably good about jumping among languages and tools, those context switches are expensive. I definitely want AI help with that.
That being said, AI is just a step up from search or autocomplete; it's not magical. I've had the most luck with it generating unit tests, since they tend to be simple and repetitive (also a major place for the juniors to screw up). AI doesn't know whether the slop it's pumping out is useful: you do need to guide it and understand it, and you really need to cull the dreck.
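The simple, repetitive kind of test it handles fine looks roughly like this (a made-up pytest example, with the function defined inline so it stands alone):

```python
# Hypothetical example of the simple, repetitive unit tests an LLM can crank
# out. slugify and the cases are placeholders, defined inline so the example
# is self-contained.
import re
import pytest

def slugify(raw: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", raw.strip().lower()).strip("-")

@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  Already-slugged  ", "already-slugged"),
        ("Symbols & Spaces!", "symbols-spaces"),
    ],
)
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```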
I think about how much the planet is heating up because people like me are a little too lazy to be competent. I am glad my nieces and nephews get to pay the price we are raising every day on their behalf, supposedly to improve their world with our extra productivity, right?
I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren't detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.
Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution--a silver bullet--and it's not.
This leads to my biggest fear for the AI field of Computer Science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML use cases will be halted, and real academic research will dry up.
My fear for the software industry is that we'll end up replacing junior devs with AI assistance, and then in a decade or two, we'll see a lack of mid-level and senior devs, because they never had a chance to enter the industry.
That's happening right now. I have a few friends who are looking for entry-level jobs and they find none.
It really sucks.
That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we'll earn a lot more in a few years.
100% agreed. It should not be used as a replacement but rather as an augmentation to get the real benefits.
They can be helpful when using a new library or development environment which you are not familiar with. I've noticed a tendency to make up functions that arguably should exist but often don't.
Couldn't have said it better myself. The amount of pure hatred for AI that's already spreading is pretty unnerving when we consider future/continued research. Rather than direct the anger towards the companies misusing and/or irresponsibly hyping the tech, they direct it at the tech itself. And the C Suites will of course never accept the blame for their poor judgment so they, too, will blame the tech.
Ultimately, I think there are still lots of folks with money that understand the reality and hope to continue investing in further research. I just hope that workers across all spectrums use this as a wake up call to advocate for protections. If we have another leap like this in another 10 years, then lots of jobs really will be in trouble without proper social safety nets in place.
People specifically hate having tools they find more frustrating than useful shoved down their throats, having the internet filled with generative AI slop, and melting glaciers in the context of climate change.
This is all specifically directed at LLMs in their current state and will have absolutely zero effect on any research funding. Additionally, OpenAI etc. would be losing less money if they weren't selling (at a massive loss) the hot garbage they're selling now and focused on research instead.
As far as worker protections, what we need actually has nothing to do with AI in the first place and has everything to do with workers/society at large being entitled to the benefits of increased productivity that has been vacuumed up by greedy capitalists for decades.
Excellent take. I agree with everything. If I give Claude a function signature, types and a description of what it has to do, 90% of the time it will get it right. 10% of the time it will need some edits or efficiency improvements but still saves a lot of time. Small scoped tasks with correct context is the right way to use these tools.
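For a made-up example of that workflow: the signature and docstring below are the kind of spec you hand it, and the body is roughly what comes back (none of this is from a real codebase):

```python
# Hypothetical example of the "signature + types + description" prompt style:
# the signature and docstring are the spec, the body is the kind of thing
# that comes back. Placeholders only, not from any real project.
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Return the number of weekdays (Mon-Fri) strictly between start and end."""
    if end <= start:
        return 0
    days = 0
    current = start + timedelta(days=1)
    while current < end:
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
        current += timedelta(days=1)
    return days
```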
They aren’t detail oriented enough to write full applications or complicated scripts.
I'm not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT, very rarely did I have to step in and do things myself.
In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.
Yep, I agree with that.
There are definitely people misusing AI, and there is definitely lots of AI slop out there which is annoying as hell, but they also can be pretty capable for certain things too, even more than one might think at first.
Greenfielding webapps is the easiest, most basic kind of project around. That's something you task a junior with and expect that they do it with no errors. And after that you instantly drop support, because webapps are shovelware.
Code reviews take up a lot of time, and if I know a lot of code in a review is AI generated I feel like I'm obliged to go through it with greater rigour, making it take up more time. LLM code is unaware of fundamental things such as quirks due to tech debt and existing conventions. It's not great.
Code reviews seem like a good opportunity for an LLM. It seems like they would be good at it. I’ve actually spent the last half hour googling for tools.
I've spent literally a month in reviews for this junior guy on one stupid feature, and so much of it has been so basic. It's a combination of him committing AI slop without understanding or vetting it, and being too junior to consider maintainability or usability. It would have saved so much of my time if AI could have done some of those review cycles without me.
This has been solved for over a decade. Include a linter and static analysis stage in the build pipeline. No code review until the checkbox goes green (or the developer has a specific argument for why a particular finding is a false positive)
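A rough sketch of that kind of gate, assuming ruff and mypy as the example linter and static analyzer (swap in whatever your stack uses):

```python
# Minimal sketch of a pre-review gate: fail the pipeline if the linter or
# static analyzer reports anything. Assumes ruff and mypy as example tools;
# substitute your own.
import subprocess
import sys

def main() -> int:
    for cmd in (["ruff", "check", "."], ["mypy", "."]):
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{cmd[0]} failed; fix the findings before requesting review")
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(main())
```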
I’ve used cursor quite a bit recently in large part because it’s an organization wide push at my employer, so I’ve taken the opportunity to experiment.
My best analogy is that it’s like micro managing a hyper productive junior developer that somehow already “knows” how to do stuff in most languages and frameworks, but also completely lacks common sense, a concept of good practices, or a big picture view of what’s being accomplished. Which means a ton of course correction. I even had it spit out code attempting to hardcode credentials.
I can accomplish some things “faster” with it, but mostly in comparison to my professional reality: I rarely have the contiguous chunks of time I’d need to dedicate to properly ingest and do something entirely new to me. I save a significant amount of the onboarding, but lose a bunch of time navigating to a reasonable solution. Critically that navigation is more “interrupt” tolerant, and I get a lot of interrupts.
That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.
That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.
This is the most frustrating problem I have. With a few exceptions, LLM use seems to be inversely proportional to skill level, and having someone tell me "chatgpt said ___" when asking me for help, because clearly ChatGPT is not doing it for their problem, makes me want to just hang up.
Just the other day I wasted 3 min trying to get AI to sort 8 lines alphabetically.
I had to sort over 100 lines of data hardcoded into source (don’t ask) and it was a quick function in my IDE.
I feel like “sort” is common enough everywhere that AI should quickly identify the right Google results, and it shouldn’t take 3 min
By having it write a quick function to do so or to sort them alphabetically within the chat? Because I've used GPT to write boilerplate and/or basic functions for random tasks like this numerous times without issue. But expecting it to sort a block of text for you is not what LLMs are really built for.
That being said, I agree that expecting AI to write complex and/or long-form code is a fool's hope. It's good for basic tasks to save time and that's about it.
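For what it's worth, the "quick function" route is only a few lines (a made-up sketch; the sample text is a placeholder):

```python
# Tiny sketch of the "just ask it for a function" route to sorting lines
# alphabetically; the sample text is a placeholder.
def sort_lines(text: str) -> str:
    return "\n".join(sorted(text.splitlines(), key=str.lower))

print(sort_lines("banana\nApple\ncherry"))  # Apple, banana, cherry
```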
I’ve actually had a fair bit of success getting GitHub Copilot do things like this. Heck I even got it to do some matrix transformations of vectors in a JSON file.
The tool I use can rewrite code given basic commands. Other times I might say, "Write a comment above each line" or "Propose better names for these variables" and it does a decent job.
I wouldn’t mention this to anyone at work. It makes you sound clueless
My boss insists I use it and I insist on telling him when it can't do the simplest things.
Great! Less productivity = more jobs, more work security.
"Using something that you're not experienced with and haven't yet worked out how to best integrate into your workflow slows some people down"
Wow, what an insight! More at 8!
As I said on this article when it was posted to another instance:
AI is a tool to use. Like with all tools, there are right ways and wrong ways and inefficient ways and all other ways to use them. You can’t say that they slow people down as a whole just because some people get slowed down.
I use GitHub Copilot as it does speed things up, but you have to keep tight reins on it. It works because it sees my code, sees what I'm trying to do, etc. So a good chunk of the time it helps.
Now something like Claude AI? Yeah... no. Claude doesn't know how to say "I don't know." It simply doesn't. It NEEDS to provide you a solution, even if the vast majority of the time it's one it just creates off the top of its head. It simply cannot say it doesn't know, and it'll get desperate. It'll provide you with libraries or repos that have been orphaned for years. It'll make stuff up, saying that something can magically do what you're really looking for when in truth it can't do that thing at all and was never intended to. As long as it sounds good to Claude, then it must be true. It's a shit AI and absolutely worthless. I don't even trust it to simply build out a framework for something.
ChatGPT? It's good for providing me with placeholder content or bouncing ideas off of. That's it. Like you said, they're simply tools, not anything that should be replacements for... well... anything.
Could they slow people down? I think so, but that person has to be an absolute moron or have absolutely zero experience with whatever they're trying to get the AI to do.