
computer science is dead now that LLMs can do what people can’t: write trivial crossword apps

https://archive.ph/IuzyF

this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in presumably JavaScript for what sounds like a basic crossword app:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

Fine: commands like those are notoriously fussy, and everybody looks them up anyway.

ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random item between 0 and the array’s length minus 1, and maybe storing that index in a second array if you want to guarantee uniqueness. there’s definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.

fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-treaded toy and example code. wonder why that is? (check out the author’s other articles for a hint)

I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is

most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy

48 comments
  • But I knew the task would be tricky

    Is it just me, or is this not even that tricky (just a bit of work, so I agree with him on the free evening thing, esp when you are a bit rusty)? Anyway, note how he does give a timeframe for doing this himself (an evening) but doesn't mention how long he worked on the chatgpt stuff, nor does he mention if he succeeded at his project at all

    E: anyway what he needs is an editor.

    17
  • This response is going to be rambling.

    For the example problem: If the dictionary file comfortably fits in memory and this was just a one-off hack, I probably wouldn't even have to think about the solution; it's a bash one-liner (or a couple lines of Python) and I can certainly write it faster than I could prompt an LLM for it. If I'm reading the file on a Raspberry Pi or the file is enormous, I'd use one of the reservoir sampling algorithms. If performance isn't all that important I'd just do the naive one (which I could probably hack up in a couple of minutes), if I needed an optimal one I'd have to look at some of my old code (or search the internet). An LLM could probably do the optimal version faster than I could (if prompted specifically to do so) ... but obviously I'd have to check if it got it right, anyway, so I'm not sure where the final time would land.
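    for the curious: the naive reservoir version mentioned above really is tiny -- here's a sketch of the classic Algorithm R in Python (assumes any iterable stream of lines; names are mine, not from the article):

```python
import random

def reservoir_sample(stream, k):
    """Keep k items from a stream of unknown length, each with equal
    probability, in a single pass and O(k) memory (Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Item i survives with probability k / (i + 1); if it does,
            # it evicts a uniformly chosen current resident.
            j = random.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```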

    I am sure, however, that it'd be less enjoyable. And this (like I think the author is trying to express) is saddening. It's neat that the hardware guy in the story could also solve a software problem, but a bit sad that he can do it without actually learning anything, just by prompting a machine built out of appropriated labour - I imagine this is what artists and illustrators feel about the image generators. It feels like skills it took a long time to build up are devaluing, and the future the AI boosters are selling - one where our role is reduced to quality controlling AI-generated barf, if there's a role left for us at all - is a bleak one. I don't know how well-founded this feeling actually is: In a world that has internet connections, Stack Overflow, search engines and libraries for most of the classic algorithms, the value of being able to blam out a reservoir sampling algorithm from memory was very close to zero anyway.

    It sure wasn't that ability I got hired for: I've mentioned before that I've not had much luck trying to use LLMs for things that resemble my work. I help maintain an open-source OS for industrial embedded applications. The nice thing about open source is that whenever we need to solve some problem someone else already solved and put under an appropriate license, we can just use their solution directly without dragging anything through an LLM. But this also definitionally means that we spend pretty much all our time on problems that haven't been solved publicly (and that LLMs haven't seen examples of). For us, at the moment, LLMs don't help with any of the tasks we actually could use help with. Neither does Stack Overflow.

    But the explicit purpose of generative AI is the devaluation of intellectual and creative labour, and right now, a lot of money is being spent on an attempt to make people like me redundant. Perhaps this is just my anxiety speaking, but it makes me terribly uneasy.

    14
    • I’ve been conducting DevOps and SRE interviews for years now. There’s a huge difference between someone that can copypasta SO code and someone that understands the SO code. LLMs are just another extension of that. GitHub Copilot is great for quickly throwing together an entire Terraform file. Understanding how to construct the project, how to tie it all together, how to test it, and the right things to feed into Copilot requires actually having some skill with the work.

      I might hire this person at a very junior level if they exhibited a desire to actually understand what’s going on with the code. Here an LLM can serve as a “mentor” by spitting out code very quickly. Assuming you take the time to understand that code, it can help. If you just commit, push, deploy, you can’t figure out the deeper problems that span files and projects.

      To me the only jobs that might not be safe are for executives a good programmer probably doesn’t want to work for.

      11
    • I help maintain an open-source OS for industrial embedded applications.

      fuck yes. there’s something weirdly exciting about work like that — not only is it a unique set of constraints, but it’s very likely that an uncountable number of people (myself possibly included) have interacted with your code without ever knowing they did

      But the explicit purpose of generative AI is the devaluation of intellectual and creative labour, and right now, a lot of money is being spent on an attempt to make people like me redundant. Perhaps this is just my anxiety speaking, but it makes me terribly uneasy.

      absolutely same. I keep seeing other programmers uncritically fall for poorly written puff pieces like this and essentially do everything they can to replace themselves with an LLM, and the pit drops out of my stomach every time. I’ve never before seen someone misunderstand their own career and supposed expertise so thoroughly that they don’t understand that the only future in that direction is one where they’re doing a much more painful version of the same job (programming against cookie cutter LLM code) for much, much less pay. it’s the kind of goal that seems like it could only have been dreamed up by someone who’s never personally survived poverty, not to mention the damage LLM training is doing to the concept of releasing open source code or even just programming for yourself, since there’s nothing you can do to stop some asshole company from pilfering your code.

      9
  • shuf -n 100 /usr/share/dict/words

    Master hacker

    I would have expected the JS standard library to contain something along the lines of random.sample, but apparently not. A similar thing exists in something called underscore.js, and I gotta say it's incredibly in-character for JavaScript to outsource such common utility functions to a module called "_".

    Language bashing aside, there's something to enjoy about these credulous articles proclaiming AI superiority. It's not the writing itself, but the self-esteem boost regarding my own skills. I have little trouble doing these junior dev whiteboard interview exercises without LLM help, guess that's pretty impressive after all!

    13
    • Absolutely this; shuf would easily come up in a normal Google search (even with Google's deteriorated relevancy).

      For fun, "two" lines of bash + jq can easily achieve the result even without shuf (yes I know this is pointlessly stupid)

      cat /usr/share/dict/words | jq -R > words.json
      cat /dev/urandom | od -A n -D | jq -r -n '
        import "words" as $w;
        ($w | length) as $l |
        label $out | foreach ( inputs * $l / 4294967295 | floor ) as $r (
          {i:0,a:[]} ;
          .i = (if .a[$r] then .i  else .i + 1 end) | .a[$r] = true ;
          if .i > 100 then break $out else $w[$r] end
        )
      '
      

      Incidentally, this is code that ChatGPT would be utterly incapable of producing, even as a toy example, given how niche this use of jq is.

      5
    • It's so incredibly easy to randomly select a few lines from a file that it really doesn't need to be in the standard library. Something like 4 lines of code could do it. Could probably even do it in a single unreadable line of code.

      2
  • “This was a detailed problem, one that could easily consume the better part of an evening.”

    what

    13
  • ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this),

    It's worse than that, because there have been incredibly simple, efficient ways to k-sample a stream with all sorts of guarantees about its distribution, with no buffering required, for centuries. And it took me all of 1 minute to use a traditional search engine to find all kinds of articles detailing this.

    If you can't bother learning a thing, it isn't surprising when you end up worshiping the magic of the thing.

    11
    • reading back, I wonder if they were looking for a bash command or something that’d do it? which both isn’t programming, and makes their inability to find an answer in seconds much worse

      6
    • So I haven't programmed in a long time but like isn't a simple approach for this sort of thing (if want low numbers like 100) just something like:

      from distribution-I-like(0, len(file)), get 100 samples; read line at sample, forall samples

      or if file big

      sort samples, stream file, if line = current sample add line to array, remove sample from other array.

      Like that is literally off the top of my head. I'm sure there are real approaches, but if googling is too hard, isn't shit like that obvious?

      edit: wait, you'd have to dedupe this. also the real approach is called: (unspellable French word for pit of holy water etc) sampling

      4
  • Our puzzle generator printed its output in an ugly text format... I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem

    I'm so confused. how could the generator have output something that amounts to a crossword, but not in such a way that this task is trivial? does he mean that his puzzle generator produces an unsorted list of words? what the fuck is he talking about

    10
    • you know, you’re fucking right. I was imagining taking a dictionary and generating every valid crossword for an N x N grid from it, but like you said he claims to already have a puzzle generator. how in fuck is that puzzle generator’s output just a list of words (or a list of whatever the fuck "s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a" is supposed to mean, cause it’s not valid syntax for a list of single characters with words delimited from * in most languages, and also why is that your output format for a crossword?) if it’s making valid crossword puzzles?

      fractally wrong is my favorite kind of wrong, and so many of these AI weirdos go fractal

      5
      • I... think (hope??) the "*" is representing filled in squares in the crossword and that he has a grid of characters. But in that case the problem is super easy, you just need to print out HTML table tags between each character and color the table cell black when the character is "*". It takes like 10 minutes to solve without chatgpt already. :/
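        that ten-minute version, sketched in Python under the same assumption (the grid arrives as a list of row strings with "*" for blocked squares; the function name is mine):

```python
def grid_to_html(rows):
    # Emit one <tr> per grid row; "*" becomes a blacked-out cell,
    # anything else a plain cell holding the letter.
    out = ["<table>"]
    for row in rows:
        tds = "".join(
            '<td style="background:black"></td>' if ch == "*"
            else "<td>%s</td>" % ch
            for ch in row
        )
        out.append("<tr>%s</tr>" % tds)
    out.append("</table>")
    return "\n".join(out)
```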

        5
  • Ah yes, the future of coding. Instead of directly searching for stack overflow answers, we raise the sea level every time we need to balance a tree.

    AI chuds got the wrong message about the guy that tried to use tensorflow to write fizzbuzz.

    Edit: I looked it up. Tensorflow fizzbuzz guy is also an AI chud it seems

    10
  • I felt like there was a 100% chance that there was a python library that you could just import and use in two lines.

    Turns out it's like 4 lines depending on which of the multiple ones you use.

    I do love internet people who make cool things because they are smarter than me and share.

    8
  • @self This was the point where I started wanting to punch things:

    “At one company where I worked, someone got in trouble for using HipChat, a predecessor to Slack, to ask one of my colleagues a question. “Never HipChat an engineer directly,” he was told. We were too important for that.”

    Bless his heart. That, dearie, isn’t “engineers are so special”, it’s managers wanting to preserve old-fashioned lines of communication and hierarchy because they fear becoming irrelevant. Gatekeeping access to other people’s knowledge to make yourself important goes back millennia.

    8
    Really love the bit about how gpt is only able to tackle the simple stuff. If that's an original insight, I take my hat off to you. I came to the edge of it myself, but never quite saw it the way you point it out.

    6
    • if you never have, find YouTube videos of folks trying to use an LLM to generate code for a mildly obscure language. one I watched that gave the game away was where someone tried to get ChatGPT to write a game in Commodore BASIC, which they then pasted directly into a Commodore 64 emulator to run. not only did the resulting “game” perform like a nonsensical mashup of the simple example code from two old programming books, there was a gigantic edit in the middle of the video where they had to stop and make a significant number of fixes to the LLM’s output, where it either fictionalized something like a line number or constant, or where the mashup of the two examples just didn’t function. after all that programming on their part for an incredibly substandard result, their conclusion was still (of course) that the LLM did an amazing job

      7
  • If anyone else can't load archive.ph due to having normie Google DNS like me, here is the URL: https://www.newyorker.com/magazine/2023/11/20/a-coder-considers-the-waning-days-of-the-craft

    5
  • Yah I skimmed thru this article a couple days ago when I came across it.

    The author did not bother doing any legwork.

    He claims to be a programmer, but doesn't want to spend a little bit of time investigating a tool that helps him code faster.

    4
  • @self

    >I keep thinking of Lee Sedol. Sedol was…

    Okay. Lee is his family name! Come on, how could the New Yorker fuck this up?

    2