
My thoughts on AI

I think I'm the type of person who gets into things after everyone else. In that regard AI is no different, and for a long time I considered LLMs a toy - this was truer of older models, such as the original ChatGPT models that came out in 2022-2023.

The discourse has understandably evolved over time and it's clear that AI is not going anywhere. It's like quadcopters in warfare, or so many other new technologies before it. As much as we'd like them not to be used or not to exist, they still will be. Refusing to adopt new advancements means being left behind and giving oneself a disadvantage on purpose.

Ultimately the problems around AI stem from capitalism. Yes, there are excesses. But this is true of humans too.

AI - especially LLMs, which I have more experience with - is great at some tasks and absolutely abysmal at others. Just like some people are good at their job and others don't know the first thing about it. I used to get an ad on Twitter about some guy's weird messianic book, and in it he showed two pages. It was the most meaningless AI bullshit, just faffing on and on while saying nothing, written in the most eye-rolling way.

That's because LLMs currently aren't great at writing prose for you. Maybe if you prompt them just right they can be, but that's also a skill in itself. So we see that there is bottom-of-the-barrel quality and there is better quality, and that exists with or without AI. I think the over-reliance on AI to do everything regardless of output will eventually be pushed out, and the people who do it will stop finding success (if they even found it in the first place; don't readily believe people when they boast about their own success).

I use AI to code, for example. It's mostly simpler stuff, but:

1- I would have to learn entire coding languages to do it myself, which takes years. AI can do it in 30 minutes and better than I could in years, because it knows things I don't. We can talk about security for example, but would a hobbyist programmer know how to write secure web code? I don't think so.

2- You don't always have a coder friend available. In fact, the reason I started using AI to code my solutions is because try as we might to find coders to help, we just never could. So it was either don't implement cool features that people will like, or do it with AI.

And it works great! I'm not saying it's the top-tier quality I mentioned, but it's a task that AI is very good at. Recently I even gave DeepSeek all the JS code it previously wrote for me (and even some handwritten code) and asked it to refactor the entire file, and it did. We went from a 40kb file to 20kb after refactoring, and 10kb after minifying. It's not a huge file of course, but it's something AI can do for you.

There is of course the environmental cost. To that I want to say that everything has an environmental cost. I don't necessarily deny AI is a water-hog, just that with the way we go about it in capitalism, everything is contributing to climate change and droughts. Moreover, to be honest, I've never seen actual numbers and studies; everyone just says "generating this image emptied a whole bottle of water". It's just something people repeat idly, like so many other things; and without facts, we cannot find truth.

Therefore the problem is not so much with AI but with the mode of production, as expected.

Nowadays it's possible to run models on consumer hardware that doesn't need to cost $10,000 (though you might have seen that post about the $2,000 rig that can run the full DeepSeek model). DeepSeek itself is very efficient, and even more efficient models are being made, to the point that soon it will be more costly (and resource-intensive) to meter API usage than to give it out for free.

I think your place as a user is finding where AI can help you individually. People also like to say AI fries your brain, that it incentivizes you to shut your brain off and just accept the output. I think that's a mistake, and it's up to you not to do that. I've learned a lot about how Linux works, how to manage a VPS, and how to work on MediaWiki with AI help. Just like you should eat your vegetables and not so many sweets, you should be able to say "this is wrong for me" and stop yourself from doing it.

If you're a professional coder and work better with handwritten code, then continue with that! When it comes to students relying on AI for everything, schools need to find other methods. Right now they're going backwards to pen-and-paper tests. Maybe we should rethink the entire testing method? When I was in school, years before AI, my schoolmates and I could already tell that rote memorization was torture and a 19th-century way of teaching. I think AI is just the nail in the coffin for a very, very outdated method of teaching. Why do kids use AI to do their homework for them? That is a much more important question than how they are using it.

As a designer I've used AI to help get me started on some projects, because this is my weakness. Once I get the ball rolling it becomes very easy for me, but getting it moving in the first place is the hard part. If you're able to prompt it right (which is definitely something I lament; it feels like you have to say the right magic words or it doesn't work), it can help with that, and then I can do my thing.

Personally, part of my initial unwillingness to get into AI came from the evangelists who like to say literally every new tech thing is the future. Segways were the future, crypto was the future, VR was the future, NFTs were the future, Google Glass was the future... They make money saying these things, so of course they have an incentive to say them. It still bothers me that they exist, if you were wondering (and if they bother you too lol), but ultimately you have to ignore them and focus on your own thing.

Another part of it, I think, is how much mysticism there is around it, with companies and, let's say, AI power users who are unwilling to share their methods or explain how LLMs actually work. They keep information to themselves, or lead people to think this is magic and does everything.

Is AI coming for your job? Yes, probably. But burying our heads in the sand won't help. I see a lot of translators talking about the soul of their art - everything has a soul and is art now (I even saw a programmer call their work that to explain why they don't use AI); we've gone full circle back to base idealism to "explain" how human work is different from AI work. AI already handles some translation work very well, and professionals are already losing work to it. Saying "refuse to use AI" is not materially sound; it is not going to save their client base. In socialism getting your job automated is desirable, but not in capitalism of course. But this is not new either: machines have replaced human workers for centuries now, as far back as the printing press, to name just one. Yet nobody today is saying "return to scribing monks".

I think it would be very useful to have an AI guide written for communists by communists. Something that everyone can understand, written from a proletarian perspective - not the philosophy of it but more like how the tech works, how to use it, etc. I can put it up on the ProleWiki essays space if someone wants to write it; we've put up guides before, e.g. a nutrition and fitness guide written from a communist perspective, if you want to see one.

48 comments
  • I would have to learn entire coding languages to do it myself, which takes years. AI can do it in 30 minutes and better than I could in years

    this is an overstatement. once you learn the basics of one programming language (which does not take a full year), you can apply the knowledge to other programming languages, many of which are almost identical to one another.

    There is of course the environmental cost. To that I want to say that everything has an environmental cost. I don't necessarily deny AI is a water-hog, just that with the way we go about it in capitalism, everything is contributing to climate change and droughts. Moreover, to be honest, I've never seen actual numbers and studies; everyone just says "generating this image emptied a whole bottle of water". It's just something people repeat idly, like so many other things; and without facts, we cannot find truth.

    according to a commonly-cited 2023 study:

    training the GPT-3 language model in Microsoft’s state-of-the-art U.S. data centers can directly evaporate 700,000 liters of clean freshwater, but such information has been kept a secret

    the global AI demand is projected to account for 4.2 – 6.6 billion cubic meters of water withdrawal in 2027, which is more than the total annual water withdrawal of 4 – 6 Denmark or half of the United Kingdom.

    GPT-3 needs to “drink” (i.e., consume) a 500ml bottle of water for roughly 10 – 50 medium-length responses, depending on when and where it is deployed.

    there's also the energy costs:

    according to google's 2024 environmental report:

    In 2023, our total GHG emissions were 14.3 million tCO2e, representing a 13% year-over-year increase and a 48% increase compared to our 2019 target base year. This result was primarily due to increases in data center energy consumption and supply chain emissions. As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute, and the emissions associated with the expected increases in our technical infrastructure investment.

    according to the mit technology review:

    The carbon intensity of electricity used by data centers was 48% higher than the US average.

    and

    [by 2028] AI alone could consume as much electricity annually as 22% of all US households.

    there's also this article by the UN, but this comment is getting kinda long and the whole thing is relevant imo so it is left as an exercise to the reader

    i have my own biases against ai, so i'm not gonna try to write a full response, but this is what stood out to me

    • this is an overstatement. once you learn the basics of one programming language (which does not take a full year), you can apply the knowledge to other programming languages, many of which are almost identical to one another.

      I've tried getting into JavaScript at different points. My brain doesn't like OOP for some reason. Then after that you have to learn jQuery, then apparently React or Vue.js... That's when I stopped looking lol, because as much as knowing web dev is useful in my job, I'm not a frontend dev either.

      I could maybe get something working after 6-9 months on it, if I don't give up. But it would be inefficient, amateurish and might not even work the way I want it to.

      I'm not even talking about full apps with GUIs yet, just simple-ish scripts that do specific things.

      Or I can send the process to AI and it does it in five minutes. By passing it documentation and the code base it can also stay within its bounds, and I can have it refactor the code afterwards. People say it has a junior dev level and I agree, but it may not stay that way for much longer and it's better than my amateur level.

      To say "you must learn programming it'd the only way" was true only before 2022. I would still say it's good/necessary to know how code and computers work so you know how to scope the AI but aside from that like I said we don't always have a programmer friend around to teach us or make our scripts for us (as much as I love them)

    • I think it is helpful to put some things in perspective. For electricity usage, data centers only take up 1-1.5% of global electricity use, as stated here: https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks

      What is the role of data centres and data transmission networks in clean energy transitions?

      Rapid improvements in energy efficiency have helped limit energy demand growth from data centres and data transmission networks, which each account for about 1-1.5% of global electricity use. Nevertheless, strong government and industry efforts on energy efficiency, renewables procurement and RD&D will be essential to curb energy demand and emissions growth over the next decade.

      To also cite from that article, there is also this mention:

      Data centres and data transmission networks are responsible for 1% of energy-related GHG emissions

      So even for overall GHG, data centers in general account for very little. Of course, with this technology being used more, electricity usage will rise a bit more, but it will still likely be small in the grand scheme of things. Another question is how much of that is specifically AI, as opposed to data centers in general? One cited figure is that 10-20% of data center energy is designated to AI usage, like here: https://time.com/6987773/ai-data-centers-energy-usage-climate-change/

      Porter says that while 10-20% of data center energy in the U.S. is currently consumed by AI, that percentage will likely “increase significantly” going forward.

      So a lot of data center capacity is just being used for lots of other things, like cloud services for example, but the share used by AI is growing.

      Besides that, to get to the water usage: that is a problem, especially when data centers in general are built in areas that can't really sustain such things. However, this applies to data centers as a whole, and it was happening before the AI boom of the last two years. I think it is also worth mentioning that Google and the rest are able to buy water rights, which completely fucks over First Nations who don't get a say in these things.

      To quote Kaffe, who I think is also on here too??

      Instead of weaponizing climate anxiety to attack AI merely to defend property law and labor aristocracy, let's cut to specific issues like Meta's and Google's ability to purchase water in violation of treaties.

      https://xcancel.com/probablykaffe/status/1905480887594361070#m

    • I've been doing programming for a long time, and I can tell you that learning to use a language effectively takes a long time in practice. The reality is that it's not just syntax you have to learn, but the tooling around the language, the ecosystem, its libraries, best practices, and so on. Then, there are families of languages. If you know one imperative language then core concepts transfer well to another, however they're not going to be nearly as useful if you're working with a functional language. The effort in learning languages should not be trivialized. This is precisely the problem LLMs solve because you can focus on what you want to do conceptually, which is a transferable skill, and the LLM knows language and ecosystem details which is the part that you'd be spending time learning.

      Meanwhile, studies about GPT-3 are completely meaningless today. The efficiency has already improved dramatically and models that outperform ones which required a data centre even a year ago can now be run on your laptop. You can make the argument that the aggregate demand for using LLM tools is growing, but that just means these tools are genuinely useful and people reach for them more than other tools they used to use. It's worth noting that people are still discovering new techniques for optimizing models, and there's no indication that we're hitting any sort of a plateau here.

      • The efficiency has already improved dramatically

        the mit article was written this may, and as it notes, ai datacenters still use much more electricity than other datacenters, and that electricity is generated through less environmentally-friendly methods. openai, if it is solvent long enough to count, will

        build as many as 10 data centers (each of which could require five gigawatts, more than the total power demand from the state of New Hampshire)

        even the most efficient models take several orders of magnitude more energy to create than to use:

        it’s estimated that training OpenAI’s GPT-4 took over $100 million and consumed 50 gigawatt-hours of energy

        and overall, ai datacenters use

        millions of gallons of water (often fresh, potable water) per day

        i'm doubtful that the uses of llms justify the energy cost for training, especially when you consider that the speed at which they are attempting to create these "tools" requires that they use fossil fuels to do it. i'm not gonna make the argument that aggregate demand is growing, because i believe that the uses of llms are rather narrow, and if ai is being used more, it's because it is being forced on the consumer in order for tech companies to post the growth numbers necessary to keep the line growing up. i know that i don't want gemini giving me some inane answer every time i google something. maybe you do.

        if you use a pretrained model running locally, you know the energy costs of your queries better than me. if you use an online model running in a large datacenter, i'm sorry but doubting the environmental costs of making queries seems to be treatler cope more than anything else. even if you do use a pretrained model, the cost of creation likely eclipses the benefit to society of its existence.

        EDIT: to your first point, it takes a bit to learn how to write idiomatic code in a new paradigm. but if you're super concerned about code quality you're not using an llm anyway. at least unless they've made large strides since i last used one.

  • Regarding the energy cost, my 65W CPU takes ~20 seconds to generate an image with SDXL Turbo (4 cycles at 3.66 seconds each + decoding). That's 2769 images / kWh.
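
    For reference, a quick back-of-the-envelope check of that figure (the wattage and timing come from the numbers above; the rest is just arithmetic):

    ```js
    // Rough energy cost per locally-generated image, using the numbers above.
    const cpuWatts = 65;         // CPU power draw while generating
    const secondsPerImage = 20;  // ~4 cycles of 3.66 s each, plus decoding

    const whPerImage = (cpuWatts * secondsPerImage) / 3600; // 1300 J ≈ 0.36 Wh
    const imagesPerKwh = Math.round(1000 / whPerImage);     // ≈ 2769

    console.log(`${imagesPerKwh} images per kWh`);
    ```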

  • If you are writing code that could easily introduce security vulnerabilities, you HAVE to understand it, no matter whether you AI-generate it or not. So you have to learn the language either way. If you are good enough with a language to understand its code, it will most likely be easier to write your code manually than to generate it and completely understand it.

    If you are only doing small PHP plugins and the AI doesn't do anything critical like reading/writing files or taking user input, it should be fine though.

  • I very much agree with all that. This is already a very useful tool, and it can save you a lot of time once you learn how to apply it effectively. As with any tool, it takes time to develop intuition for the cases where it works well, and how to use it to get the results you want. I get the impression that a lot of people try using LLMs out of spite, already having a bias that the tool is not useful; then they naturally fail to produce good results on the first try and declare it to be useless.

    As you point out, it's an excellent tool for learning to work with new languages, to discover tricks for system configuration, and so on. I've been doing software development for over 20 years now professionally, and I know some languages well and others not so much. With LLMs, I can basically use any language like an expert. For example, I recently had to work on a JS project, and I haven't touched the language in years. I wasn't familiar with the ecosystem, current best practices, or popular libraries. Using an LLM allowed me to get caught up on that very quickly.

    I'm also not too worried about the loss of skill or thinking capacity, because the really useful skills lie in understanding the problem you're trying to solve conceptually and designing a solution that will solve it. High-level architecture tends to be the really important skill, and I find that's basically where the focus is when working with agents. The LLM can focus on the nitty-gritty aspects of writing the code, while I focus on the structure and the logic flow. One approach I've found very effective is to stub out the functions myself, and have the agent fill in the blanks for me. This helps focus the LLM and prevent it from going off into the weeds.
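
    As a rough illustration of the stubbing approach (the function and data shape here are invented for the example, not from a real project):

    ```js
    // Stub written by hand: the name, inputs, outputs and intent are all fixed.
    // The agent only fills in the body, which keeps it from wandering.

    /**
     * Group wiki edits by editor and return the N most active editors.
     * @param {{editor: string, timestamp: string}[]} edits
     * @param {number} topN
     * @returns {{editor: string, count: number}[]} sorted by count, descending
     */
    function topEditors(edits, topN) {
      // TODO: agent fills this in
      throw new Error("not implemented");
    }
    ```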

    Another trick I've found handy is to ask the agent to first write a plan for the solution. Then I can review the plan and tell the agent to adjust it as needed before implementing. Agents are also pretty good at writing tests, and tests are much easier to evaluate for correctness because good tests are just independent functions that do one thing and don't have a deep call stack. My current approach is to get the LLM to write the plan, add tests, and then focus on making sure I understand the tests and that they pass. At that point I have a fairly high degree of confidence that the code is indeed doing what's needed. The tests act as a contract for the agent to fill.
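
    For example, a test like this (reusing the invented stub from the sketch above) is trivial to eyeball and pins down exactly what the implementation has to do:

    ```js
    const assert = require("node:assert");

    // Two edits by alice, one by bob, topN = 1 → alice with a count of 2.
    assert.deepStrictEqual(
      topEditors(
        [
          { editor: "alice", timestamp: "2024-01-01" },
          { editor: "bob", timestamp: "2024-01-02" },
          { editor: "alice", timestamp: "2024-01-03" },
        ],
        1
      ),
      [{ editor: "alice", count: 2 }]
    );
    ```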

    I suspect that programming languages might start shifting in the direction of contracts in general. I can see stuff like this becoming the norm, where you simply specify the signature for the function. You could also specify parameters like computational complexity and memory usage. The agent could then try to figure out how to fill the contract you've defined. It would be akin to a genetic algorithm approach where the agent could converge on a solution over time. If that's the direction things are moving in, then current skills could be akin to being able to write assembly by hand. Useful in some niche situations, but not necessary the vast majority of the time.
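
    Something along these lines, say, where the comment block is the whole specification (the @contract and @complexity tags are invented for the sketch, not real JSDoc):

    ```js
    /**
     * Contract only: the body is left for the agent to converge on.
     * @contract
     * @param {number[]} values  unsorted input, may contain duplicates
     * @param {number} k         how many results to return
     * @returns {number[]}       the k largest values, sorted descending
     * @complexity time O(n log k), memory O(k)
     */
    function topK(values, k) {
      // agent-generated implementation goes here
    }
    ```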

    Finally, it's very helpful to structure things using small components components that can be tested independently and composed together to build bigger things. As long as the component functions in the intended way, I don’t necessarily care about the quality of the code internally. I can treat them as black boxes as long as they’re doing what’s expected. This is already the approach we take with libraries. We don't audit every line of code in a library we include in a project. We just look at its surface level API.

    Incidentally, I'm noticing that the functional style seems to work really well here. Having an assembly line of pure functions naturally breaks up a problem into small building blocks that you can reason about in isolation. It's kind of like putting Lego blocks together. The advantage over stuff like microservices here is that you don't have to deal with the complexity of orchestration and communication between the services.

    • This is exactly how I use LLMs to code too. I'm good at laying out the steps to solving the problem, not so good a coder (I basically hand-code HTML and CSS because it's faster for me than using an LLM, but I never learned JS and never felt like learning it, even before AI was a thing).

      I also have it create constituent components, e.g. for the reading mode on ProleWiki I had it trigger on a press of the 0 key and apply a class to <html>, which I then customize myself in the CSS file. After that I had it write other functions for a page progress bar and a hoverline. The hoverline was actually an idea from an LLM, to keep track of which line you are on. Finally, just recently I gave DeepSeek these three different functions and told it to refactor and optimize for efficiency, and it did just that. It doesn't do everything in one step yet, but if you know even passably well what it's capable of, you can have it do it in several steps.
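
      For what it's worth, the keypress part of that only takes a few lines. A minimal sketch of the idea (the class name is made up here; the real file does more):

      ```js
      // Toggle reading mode when "0" is pressed, by flipping a class on <html>.
      // The actual styling for the class lives in the CSS file, not here.
      document.addEventListener("keydown", (event) => {
        // Ignore keystrokes while typing in inputs or the wiki edit box
        const target = event.target;
        if (target instanceof Element && target.closest("input, textarea, [contenteditable]")) return;
        if (event.key === "0") {
          document.documentElement.classList.toggle("reading-mode");
        }
      });
      ```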

      edit - and of course just asking the AI to answer questions about itself. "Write as a Midjourney prompt", for example. That's why I think it would be important to have a proletarian guide to AI, so that everyone could start somewhere, because a lot of the knowledge is gatekept individually.

      What do you use for agents? I downloaded agent0 and it runs but gets stuck on 'checking memory' every time. I'm not on a great rig to be running local models right now but apparently this is a problem several people are facing.

      • If you've just been using the web UI for DeepSeek, I highly recommend checking out tools that let you run models on the actual codebase you're working with. It's a much better experience because the model has a lot more context to work with.

        There are two broad categories of tools. One is a REPL-style interface where you start a chat in the terminal, and the agent manages all the code changes while you prompt it with what you want to do. You don't have as much control here, but the agents tend to do a pretty good job of analyzing the codebase holistically. The two main ones to look at are Aider and Plandex.

        The other approach is editor integration, as seen with Cursor. Here you're doing most of the driving and high-level planning, and then use the agent contextually to add code, like writing individual functions. You have a lot more granular control over what the agent is doing this way. It's worth noting that you also have a chat mode here as well, and you can get the agent to analyze the code, find things in the project, etc. I find this is another aspect that's often underappreciated: you can use the LLM to find the relevant code you need to change. A couple of projects to look at are Continue and Roo-Code.

        All these projects work with ollama locally, but I've found DeepSeek API access is pretty cheap and you do tend to get better results that way. Obviously, the caveat is that you are sending your code to their servers.

  • I think it would be very useful to have an AI guide written for communists by communists. Something that everyone can understand, written from a proletarian perspective - not the philosophy of it but more like how the tech works, how to use it, etc. I can put it up on the ProleWiki essays space if someone wants to write it; we've put up guides before, e.g. a nutrition and fitness guide written from a communist perspective, if you want to see one.

    This is not comprehensive, but here is a draft write-up of sorts for the time being. There are probably others who would know better how to write about the best use of chat models like DeepSeek; I'm not very familiar with them in detail:

    What is AI?: AI stands for Artificial Intelligence, which can have a broad connotation and could be applied to a number of forms of automation. More recently, it has become synonymous with generative AI, a specific subset of AI in which an AI can be given a prompt (some input from the user) and will generate something based on that prompt (some output for the user). For the purposes of this article, I will be using AI primarily to refer to generative AI.

    Is AI actually "intelligent"?: What is defined as intelligence and whether any AI falls within that may be up for some debate, but instead I want to address the nature of associations with intelligence, autonomy, and behavior. There is no evidence that AI has a "mind of its own" in the way that human beings have a mind of their own and when you consider that it is some math happening on a GPU trained to impersonate human things, not something with a sensate material form interacting with the physical world and then bouncing that back to an inner world, it makes sense that even if it were to have anything resembling intelligence, there is no reason to think it would look similar to the human experience. The point here is not to weigh in on the whole nature of consciousness and intelligence, but that you don't want to be fooled by an AI acting convincingly human. That's what it is trained for, but that doesn't mean it is like a human on a material level. It is still fundamentally a GPU doing some math and shares no actual material traits with you.

    AI can be confidently wrong, even when it has high confidence in the right answer: An AI model effectively has a certain amount of probabilistic confidence in what should come next in any given input fed into it. It continues token by token, and a token is "however the tokenizer breaks things up", which varies by tokenizer, but most of them handle a lot of tokens as whole words. Sometimes there are components of words too, such as "-ly". Tokenizing aims to maximize the semantic literacy of the AI, i.e. it can better understand why some words follow other words if the components give context. A tokenizer that breaks everything up into letters would probably end up more like the word game Hangman than word association. Further, the model's base token probabilities get sampled through various statistical algorithms, which can dramatically alter the end probability of a given token. In effect, this means that it could have 95% confidence that "purple" is an accurate continuation of "red and blue make " and still choose "yellow."
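
    To make that last point concrete, here is a toy sketch of sampling with temperature (the numbers are invented, and real samplers add things like top-k and top-p on top of this):

    ```js
    // Toy next-token sampling. Even when "purple" holds ~95% of the probability,
    // a higher temperature flattens the distribution enough that other tokens
    // win a meaningful share of the time.
    const logits = { purple: 5.0, yellow: 1.6, green: 1.2 }; // made-up raw scores

    function softmax(scores, temperature) {
      const scaled = Object.entries(scores).map(([t, v]) => [t, Math.exp(v / temperature)]);
      const total = scaled.reduce((sum, [, v]) => sum + v, 0);
      return Object.fromEntries(scaled.map(([t, v]) => [t, v / total]));
    }

    function sample(probs) {
      let r = Math.random();
      for (const [token, p] of Object.entries(probs)) {
        if ((r -= p) < 0) return token;
      }
      return Object.keys(probs)[0]; // guard against floating-point rounding
    }

    console.log(softmax(logits, 1.0)); // { purple: ~0.95, yellow: ~0.03, green: ~0.02 }
    console.log(softmax(logits, 3.0)); // { purple: ~0.62, yellow: ~0.20, green: ~0.18 }
    console.log(sample(softmax(logits, 3.0))); // usually "purple", but not always
    ```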

    AI has no ideology, but it does have biases: Because AI does not have a mind of its own in the way that we do, I think it's better not to think of it like it has an ideology or belief system. However, this doesn't prevent it from having biases. The probabilities it learns are based on what kind of material it was given in training data. If everything in its training is saying "Stalin did nothing wrong," it probably will too. If everything in its training is saying "commie bad", it probably will too. Further, some companies may consciously try to tune a model's biases in a particular direction, or block off certain paths of conversation, for this reason. The same goes for image generation. If most hair in its training is blonde, unspecified hair color will likely be blonde. If most skin is white, well... you get the idea.

    AI, like anything, is limited by material constraints: Contrary to how it may feel at times, AI is not magic and its capabilities do have limits. If you ask it about theoretical physics, it might have a very good answer depending on how well it has been trained on material written by experts in theoretical physics. If you ask it about a TV show that came out yesterday, chances are it has never seen anything about that show in training and so it will have no idea what you are talking about. Like a human, if it has never encountered something before, it's not going to magically be able to know what it is. However, unlike a human, you can't just explain something to it that it doesn't know and it will now know going forward. More on that below.

    An AI model can't remember what you said to it: This is tricky wording in a way, because there are text services that offer some form of automated simulation of memory, where the AI can draw from some kind of record of certain things that you have said and these are supposed to play into how it responds. But the model itself is still not remembering what you said. Training AI models is expensive, and training them to be different for a specific user would mean needing tons of variations on a model, which would get absurd fast and might result in an overall worse model for any given user (training can be a sensitive process and easily go wrong). Instead, most of what models are working with is a sort of "short-term memory", sometimes called context; this is still not really any kind of actual working memory in the human sense, it's just a way of thinking about input. The AI generates from input, and so if part of the input is "I already said that, what are you talking about?", that hints at a kind of conversation where a misunderstanding has occurred. This doesn't mean the AI will "know" what you already said; it can only "see" what is given as input, plain and simple, and generate based on that. If what you already said is part of the input, it might be able to reference it. Otherwise, it can't.

    AI doesn't know there is a "you" and an "it": Some chat-focused AI can be very convincing at acting like a real person, but this doesn't mean that the AI really "knows" there are separate entities talking to each other. The standard text model is a continuation model, which means it continues the given input, i.e. adds onto it. So to get an AI to simulate being a chat partner, special design behind the scenes forces it to stop writing before it would start writing your side of the conversation. Without these rails, you could watch an AI simulate a conversation between two people, doing both sides of it.
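
    To picture the last two points, here is a toy sketch of what the model actually "sees" on a given turn (not any particular service's real format):

    ```js
    // The "conversation" is just one block of text that the service rebuilds and
    // re-sends every turn. Whatever is not in this string does not exist for the model.
    const messages = [
      { role: "User", text: "What's a good name for a fox?" },
      { role: "Assistant", text: "Maybe Vulpes, or just Rusty." },
      { role: "User", text: "I already said that, what are you talking about?" },
    ];

    const prompt =
      messages.map((m) => `${m.role}: ${m.text}`).join("\n") + "\nAssistant:";

    // Left alone, the model would keep writing both sides of the chat from here.
    // The service passes a stop string such as "\nUser:" so generation is cut off
    // before the model starts writing your side - that's the "rail".
    console.log(prompt);
    ```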

    Most AI is not private: This is not to say that people are necessarily interested in reading your specific chats with your username attached, but most AI companies use your chat data in some form, probably anonymized but nevertheless plaintext data they can read, in order to further tune their models. There are rare exceptions, such as NovelAI (which uses a special system of encryption) and ChubAI (possibly, based on the wording of their policy), but you need to read carefully through Privacy Policies and how things are worded. A policy that simply says "your stuff is private" doesn't necessarily mean it can't be seen; it might just mean that there is no explicit policy in place for members of the company to be looking at what is on your profile.

    Steering AI and avoiding unwanted outcomes: Most AI can go in a wide variety of different directions. This opens up a lot of possibility, but it also means there are a lot of ways in which the AI can fail to know what you want and go down a path you weren't expecting and would rather it didn't go down. Getting it to head toward the realm you want can be subtle or overt, and with models that are tuned for instruction, overt may be easier; but either way, expect that AI will sometimes not meet your expectations. And if it goes down a path you really don't want, it's usually better to move on as fast as possible, rather than getting bogged down in telling it how wrong it is. Remember that it's just continuing based on input, so if it sees "argument", it may figure "more arguing fits here". It doesn't know when to quit, it doesn't get weary, and the more there is of something in the input, the more likely it will be to double down on it.

    P.S. These are the points to cover that I could think of at the moment, based on what I know about generative AI. If there are other points you are thinking of when you talk about writing a guide, I'm curious to know. Can't guarantee I'd be able to cover it all myself, but yeah. I'm also curious whether this comes across clearly enough for a layperson. I'm not an ML researcher or the like, but I've picked up a fair bit on it over time.

    • I think this is a great start. For a guide I would personally see it divided into sections/chapters and subsections, with the concepts broken down even more, with examples, down to their basic elements. I could also imagine including diagrams. Think of the stuff you would have liked to know starting out. I had also thought it could include example prompts showing good and bad practices, and a part on self-hosting a model.

      To give you an idea, the nutrition and fitness guide took me an entire weekend to write and then several more days of refining and fine-tuning x)

      If you put it up on a Google Doc (or similar) we can more easily copy and paste it to a ProleWiki page afterwards!

  • I would say that you should not count on AI to write security critical code. This is pretty likely to result in vulnerabilities. AI often has oversights. At the very least you need to learn what common practices are considered secure and work towards putting them into practice. AI can help you do this with code examples, but you should try to limit this to things that the AI can prove to work correctly by running them. And especially do not try to roll your own cryptography.

    We already have tools for doing very common tasks in programming like authentication - they are called libraries, and they are at least an order of magnitude more reliable than what you get from AI. AI has its place too, but it's not useful for everything and can be especially harmful in certain situations like security-critical code, where you absolutely need the code to have certain properties that the AI cannot reason about.

    • I second that, anything security related should absolutely be reviewed by a human. Public APIs, authentication, authorization, and so on, is all very sensitive code that needs to be carefully designed.

  • I'd like to write an article on this or co-author it. We need a proletarian POV, as you said.

    • I can probably set up a Google doc or similar (I like cryptpad but it's slow) and send you and memorablename the link to collaborate on it, and anyone else who wants to participate. Though in my experience it's good to limit the authors to 3 or 4.

  • You should not count on AI if you make open-source software. Many FOSS devs are against AI. Relying on AI would ruin years-old contributions to open-source software that were written manually by various contributors.

    • What are their arguments against it?

    • I don't see how that follows. If I get a contribution to an open source project I'm maintaining, I'm going to review it and evaluate it on the quality of the code being submitted, whether it has tests, and so on. How that was done is not really my concern, nor would I likely know whether an LLM was used in the process or not.
