I remember that about ten months ago, ChatGPT used to output some surprisingly top-tier code. I'd ask it to create a method with some required functionality and it would output the code, fully commented and everything. I didn't have to edit the code; it just worked, and it was more or less efficient.
Now? I can't even get it to write comments for code I give to it.
The free version or the paid version? Part of it is that they're trying to push people towards the paid version, which is a much more sophisticated model.
Maybe it's because I'm only using it as plan B or C (after the documentation has already failed me), but I have never gotten any usable code out of ChatGPT.
And yet Copilot is able to finish my code perfectly after I type the first few characters... even though they're the same model.
I think Copilot works better because it has the context of the whole project for reference when suggesting autocompletions. I've gotten a lot of unusable junk from it too, though.
ChatGPT is amazing for describing what you want, getting a reasonable output, and then rewriting nearly the whole thing to fit your needs. It's a faster (shittier) Stack Overflow.
I normally have it output toy examples of the syntax I don't want to bother learning and then remix that into what I need. IMO it's better than Stack Overflow, because Stack Overflow code is more likely to be not quite what you were searching for, or to not actually run because the author didn't bother testing it and there's a typo or something.
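For example, here's a toy sketch of the kind of thing I mean (my own illustration, not actual ChatGPT output): a fetch-with-timeout snippet in TypeScript showing AbortController syntax, which I'd then remix into whatever I'm actually building.

    // Toy example: fetch with a timeout via AbortController.
    // The URL and timeout are placeholders, not anything real.
    async function fetchWithTimeout(url: string, ms: number): Promise<Response> {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), ms);
      try {
        // Abort the request if it takes longer than `ms` milliseconds.
        return await fetch(url, { signal: controller.signal });
      } finally {
        clearTimeout(timer);
      }
    }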
It's funny when it starts to just invent things. Like packages, complete with version numbers, that do not exist.
Or when it outputs code that never actually uses the variables it defines.
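Something like this (a made-up sketch, not real model output) is typical of that failure mode:

    // The variables are dutifully computed, then ignored:
    function average(numbers: number[]): number {
      const total = numbers.reduce((a, b) => a + b, 0); // computed...
      const count = numbers.length;                     // ...and counted...
      // ...and then the same work is redone inline anyway:
      return numbers.reduce((a, b) => a + b, 0) / numbers.length;
    }

It compiles and even works; total and count just sit there unused.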
The most annoying thing, IMHO, is that it keeps explaining everything all the time. Even when I prompt "you have a working app with Vue.js..." and similar, it sometimes still explains how to set up the app.
That said: the tool has become a staple in my workflow whenever I need a starting point, or have to do some math-heavy algorithmic things.
I feel like for simple algorithms ChatGPT could be good, like as a reference for how to code something. But if it's simple code, I often find it faster to just write it myself than to reorganize whatever it makes to work with and match the style of other code in my codebase. And if it's complex code, I often find it harder to describe what I want than to just make it.
In my experience, what makes GPT-4 great for coding is its astonishing knowledge of available software libraries, built-in interface features, etc.
I'll tell it the task I want done, and it will tell me where to find, and how to install, the necessary dependencies.
With zero experience in browser extension design, GPT-4 helped me to build an incredibly complicated Chrome extension: using a vector database, creating a custom cloud-based server, web scraping with headless browsers, voice recognition, speech synthesis, wake-word capabilities, and a sophisticated user interface. I had ZERO experience with ANY of these.
For me, using GPT-4 was like collaborating with a just-okay programmer, but one who had extensive experience with literally every programming language, API, protocol, etc.
And it was a collaboration. We would talk through problems together. I would make observations and guesses about why a block of code wasn't working, and it would tell me why I was wrong, or alternatively tell me I was right and produce a fixed version.
This is so apt! Though it does help to get difficult syntax for small fragments working quickly, so you can get some proof of concept instead of struggling with syntax errors for an hour.
Unless it's Microsoft documentation, in which case it feels more like Bill Gates beating me over the head with a frying pan until I give up and find an alternative way to achieve my goal.