AI coders think they’re 20% faster — but they’re actually 19% slower
96 comments
  • I feel this -- we had a junior dev on our project who started using AI for coding, without management approval, BTW (it was a small company and we didn't yet have a policy specifically for it. Alas).

    Months later, I got the fun task of going through an entire component that I'm almost certain was vibe coded: it "worked" the first time the main APIs were called, but leaked and crashed on every subsequent call. It used double and even triple pointers to data structures that, per even a casual reading of the API vendor's documentation, could all have been declared statically and reused (this was an embedded system); needless arguments; mallocs and frees everywhere for no good reason (again, all due to the unneeded dynamic storage behind those double/triple pointers). It was a horrible mess (a minimal sketch of the pattern follows at the end of this comment).

    It should never have gotten through code review, but the senior devs were themselves overloaded with work (another, separate problem) ...

    I took two days and cleaned it all up: much simpler, no memory leaks, and it could actually be, you know, used more than once.

    Fucking mess. LLMs (don't call it "AI") just let those who are lazy and/or inexperienced skate through short-term tasks, leaving huge technical debt for those who have to clean up after them.

    If you're doing job interviews, ensure the interviewee is not connected to an LLM in any way and have them write the code themselves. No exceptions. Consider blocking LLMs from your corp network as well, and ban locally installed tools like Ollama.
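    For anyone who hasn't had the pleasure, here's a minimal C sketch of the pattern described above (all names hypothetical, not the vendor's actual API): per-call heap allocation behind an extra pointer level that the caller must unwind every time, versus one statically declared buffer that can simply be reused:

      #include <stdlib.h>
      #include <string.h>

      typedef struct { int id; char payload[64]; } frame_t;

      /* Anti-pattern: a fresh heap allocation behind a double pointer on
         every call; the caller must free it after each use or it leaks. */
      int get_frame_dynamic(frame_t **out) {
          *out = malloc(sizeof **out);
          if (*out == NULL) return -1;
          memset(*out, 0, sizeof **out);
          return 0;
      }

      /* Simpler: one statically declared frame, reused across calls.
         Nothing to free, so repeated calls cannot leak. */
      static frame_t g_frame;

      frame_t *get_frame_static(void) {
          memset(&g_frame, 0, sizeof g_frame);
          return &g_frame;
      }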

  • I'll quote myself from some time ago:

    The entire article is based on the flawed premise that "AI" would improve the performance of developers. From my daily observation, the only people increasing their throughput with "AI" are inexperienced and/or bad developers. So: create terrible code faster with "AI". Suggestions from Copilot are >95% garbage (even for trivial stuff) and just slow me down in writing proper code (obviously I disabled it for precisely that reason). And I spend more time on PRs filtering out the "AI" garbage inserted by juniors and idiots. "AI" is killing the productivity of the best developers even if they don't use it themselves, decreasing code quality (leading to more bugs, i.e. more time wasted) and reducing maintainability (more time wasted).

    At this point I assume ignorance and incompetence on the part of everybody talking about the benefits of "AI" for software development. Oh, you have 15 years of experience in the field and "AI" has improved your workflow? Then you sucked at what you've been doing for 15 years, and "AI" increases the damage you do, which later has to be fixed by people who are more competent.

    • from some time ago

      It's a fair statement of personal experience, but the question is whether this changes with tool changes and user experience -- which is what makes studies like the OP's important.

      Your >95%-garbage claim may very well be an isolated issue due to the tech, the libs, your LLM usage patterns, or whatnot. And it may change over time, with different models or tooling.

      • At this point I assume ignorance and incompetence on the part of everybody talking about the benefits of "AI" for software development.

  • It's hard to even call them specialists; they're at the level of cashiers, for whom the computer does everything, and at most they do something at the level of talking to customers, and that's all. I'm certainly not a professional, but I think the main message is clear.