My thoughts on AI
I think I'm the type of person who gets into things after everyone else. In that regard AI is no different, and for a long time I considered LLMs a toy - this was truer of older models, such as the original ChatGPT models that came out in 2022-2023.
The discourse has understandably evolved over time and it's clear that AI is not going anywhere. It's like quadcopters in warfare, or so many other new technologies before it. As much as we'd like them not to be used or to exist, they will be anyway. To refuse to adopt new advancements is to be left behind and to give oneself a disadvantage on purpose.
Ultimately the problems around AI stem from capitalism. Yes, there are excesses. But this is true of humans too.
AI - especially LLMs, which I have more experience with - is great at some tasks and absolutely abysmal at others. Just like some people are good at their job and others don't know the first thing about it. I used to get an ad on Twitter for some guy's weird messianic book, and in it he showed two pages. It was the most meaningless AI bullshit, faffing on and on while saying nothing, written in the most eye-rolling way.
That's because LLMs currently aren't great at writing prose for you. Maybe if you prompt them just right they can manage it, but that's a skill in itself. So we see that there is bottom-of-the-barrel quality and there is better quality, and that divide exists with or without AI. I think the over-reliance on AI to do everything regardless of output quality will eventually be pushed out, and people who do it will stop finding success (if they even found it in the first place - don't readily believe people when they boast about their own success).
I use AI to code, for example. It's mostly simpler stuff, but:
1- I would have to learn entire programming languages to do it myself, which takes years. AI can do it in 30 minutes, and better than I could after years, because it knows things I don't. Take security, for example: would a hobbyist programmer know how to write secure web code? I don't think so.
2- You don't always have a coder friend available. In fact, the reason I started using AI to code my solutions is because try as we might to find coders to help, we just never could. So it was either don't implement cool features that people will like, or do it with AI.
And it works great! I'm not saying it's the top-tier quality I mentioned, but it's a task that AI is very good at. Recently I even gave DeepSeek all the JS code it had previously written for me (plus some handwritten code) and asked it to refactor the entire file, and it did. We went from a 40 kb file to 20 kb after refactoring, and 10 kb after minifying. It's not a huge file of course, but it's something AI can do for you.
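To give a sense of what that kind of refactor looks like, here is a toy sketch (not the actual code from my file, just an invented example): near-duplicate helpers collapsed into one parameterized function, which is how a pass like this can roughly halve a file's size.

```javascript
// Before (sketch): three near-duplicate helpers, the kind of
// repetition that creeps into a hand-grown script.
//   function formatUser(u)     { return u.name.trim().toLowerCase(); }
//   function formatTag(t)      { return t.label.trim().toLowerCase(); }
//   function formatCategory(c) { return c.title.trim().toLowerCase(); }

// After: one parameterized helper replaces all three.
const normalize = (field) => (obj) => obj[field].trim().toLowerCase();

const formatUser = normalize('name');
const formatTag = normalize('label');
const formatCategory = normalize('title');
```

Minifying then shrinks it further by stripping whitespace, comments, and long names - the part tools (or an LLM) can do mechanically.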
There is of course the environmental cost. To that I want to say that everything has an environmental cost. I don't necessarily deny that AI is a water hog, just that under capitalism everything is contributing to climate change and droughts. Moreover, to be honest, I've never seen actual numbers and studies; everyone just says "generating this image emptied a whole bottle of water". It's one of those things people repeat idly, like so many others; and without facts, we cannot find truth.
Therefore the problem is not so much with AI but with the mode of production, as expected.
Nowadays it's possible to run models on consumer hardware that doesn't need to cost $10,000 (though you might have seen that post about the $2,000 rig that can run the full DeepSeek model). DeepSeek itself is very efficient, and even more efficient models are being made, to the point that soon it will cost more (in money and resources) to meter API usage than to give it out for free.
I think the place you have as a user is finding where AI can help you individually. People also like to say AI fries your brain, that it incentivizes you to shut your brain off and just accept the output. I think that's a mistake, and it's up to you not to do that. I've learned a lot about how linux works, how to manage a VPS, and how to work on mediawiki with AI help. Just like you should eat your vegetables and not so many sweets, you should be able to say "this is wrong for me" and stop yourself from doing it.
If you're a professional coder and work better with handwritten code, then continue with that! When it comes to students relying on AI for everything, schools need to find other methods. Right now they're going backwards, returning to pen-and-paper tests. Maybe we should rethink the entire testing method? When I was in school, years before AI, my schoolmates and I could already tell that rote memorization was torture and a 19th century way of teaching. I think AI is just the nail in the coffin for a very, very outdated method of teaching. Why do kids use AI to do their homework for them? That is a much more important question than how they are using it.
As a designer I've used AI to help get me started on some projects, because this is my weakness. Once I get the ball rolling it becomes very easy for me, but getting it moving in the first place is the hard part. If you're able to prompt it right (which is definitely something I lament - it feels like you have to say the right magic words, and they don't always work), it can help with that, and then I can do my thing.
Personally, part of my initial unwillingness to get into AI came from the evangelists who like to say literally every new tech thing is the future. Segways were the future, crypto was the future, VR was the future, NFTs were the future, Google Glass was the future... They make money on saying these things, so of course they have an incentive to say them. It still bothers me that they exist, if you were wondering (if they bother you too lol), but ultimately you have to ignore them and focus on your own thing.
Another part of it I think is how much mysticism there is around it, with companies and let's say AI power users who are so unwilling to share their methods or how LLMs actually work. They retain information for themselves, or lead people to think this is magic and does everything.
Is AI coming for your job? Yes, probably. But burying our heads in the sand won't help. I see a lot of translators talking about the soul of their art - everything has a soul and is art now (I even saw a programmer call their work that to explain why they don't use AI); we've gone full circle back to base idealism to "explain" how human work is different from AI work. AI already handles some translation work very well, and professionals are already losing work to it. Saying "refuse to use AI" is not materially sound; it is not going to save their client base. Under socialism getting your job automated is desirable - not under capitalism, of course. But this is not new either: machines have replaced human workers for centuries now, as far back as the printing press, to name just one. Yet nobody today is saying "return to scribing monks".
I think it would be very useful to have an AI guide written for communists by communists. Something that everyone can understand, written from a proletarian perspective - not the philosophy of it but more like how the tech works, how to use it, etc. I can put it up on the ProleWiki essays space if someone wants to write it, we've put up guides before, e.g. if you want to see a nutrition and fitness guide written from a communist perspective.
this is an overstatement. once you learn the basics of one programming language (which does not take a full year), you can apply the knowledge to other programming languages, many of which are almost identical to one another.
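as a small illustration of that point (a generic example, not from the thread): the basic constructs really do carry over almost verbatim between mainstream imperative languages. this counted-loop sum in JavaScript is nearly line-for-line the same in C or Java once you add type annotations.

```javascript
// Summing an array with a counted loop: swap `function`/`let` for
// type annotations and this is valid C or Java almost unchanged.
function sum(xs) {
  let total = 0;
  for (let i = 0; i < xs.length; i++) {
    total += xs[i];
  }
  return total;
}
```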
according to a commonly-cited 2023 study:
there's also the energy costs:
according to google's 2024 environmental report:
according to the mit technology review:
and
there's also this article by the UN, but this comment is getting kinda long and the whole thing is relevant imo so it is left as an exercise to the reader
i have my own biases against ai, so i'm not gonna try to write a full response, but this is what stood out to me
I've tried getting into JavaScript at different points. My brain doesn't like OOP for some reason. Then after that you have to learn jQuery, then apparently React or Vue.js... That's when I stopped looking lol, because as useful as knowing web dev is in my job, I'm not a frontend dev either.
I could maybe get something working after 6-9 months on it, if I don't give up. But it would be inefficient, amateurish and might not even work the way I want it to.
I'm not even talking about full apps with GUIs yet, just simple-ish scripts that do specific things.
Or I can send the task to AI and it does it in five minutes. By passing it documentation and the code base, it can also stay within its bounds, and I can have it refactor the code afterwards. People say it's at a junior-dev level and I agree, but it may not stay that way for much longer, and it's better than my amateur level.
To say "you must learn programming, it's the only way" was true only before 2022. I would still say it's good/necessary to know how code and computers work so you can scope the AI, but aside from that, like I said, we don't always have a programmer friend around to teach us or write our scripts for us (as much as I love them).
To add to the last part, my preferred way is to get acquainted with the library/framework if I'm gonna be using it a lot, and then complete everything else with AI. That way I still learn and know how it works under the hood so I can also guide the AI if it starts getting off topic.
It's a teaching-by-example tool. I don't necessarily read or review the code, but I ask it: why do it this way? Wait, I didn't think you would do it like that - explain?
A lot of the time documentation is severely lacking or meant for other devs. I remember getting into Bootstrap years ago took me weeks. With AI I could probably get around to it in an hour.
yeah, i guess it's fair to do vibe coding or whatever. idk, when it comes to existing codebases i hate the thought of having ai contributions mixed in with real contributions. but i guess realistically, if there are no developers anyway, and if the model is running locally, none of my hangups apply
I think it is helpful to put some things in perspective: for electricity usage, data centers only account for 1-1.5% of global electricity usage, as stated here https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks
To cite from that article, there is also this mention:
So even for overall GHG emissions, data centers in general account for very little. Of course, with this technology being used more, electricity usage will rise a bit more, but it will still likely be small in the grand scheme of things. Another question is how much of that is specifically AI, as opposed to data centers in general? One cited figure is that 10-20% of data center usage is designated to AI. See here https://time.com/6987773/ai-data-centers-energy-usage-climate-change/
So a lot of data centers are just being used for other things - cloud services, for example - but the share used by AI is growing.
Besides that, on water usage: that is a problem, especially when data centers in general are built in areas that can't really sustain them. But this applies to data centers as a whole, and it was happening before the last two years of AI. I think it is also worth mentioning that Google and the rest are able to buy water rights, which also completely fucks over First Nations who don't get a say in these things.
To quote Kaffe, who I think is also on here too??
https://xcancel.com/probablykaffe/status/1905480887594361070#m
the IEA report was made in mid-2023, and i would imagine ai electricity usage has skyrocketed since then. as mentioned in the mit source, dating to may 2025, electricity usage by ai is 48% dirtier than the us average. my problem with ai isn't that it violates intellectual property rights, it's that llms are a net-negative to society because of their climate effects. if ai datacenters were built using clean energy and cooled using dirty water, it would likely be little more than a mild annoyance for me. as it stands, we are putting the global south underwater so that people who are surrounded by yes-men can have yes-robots too.
I've been doing programming for a long time, and I can tell you that learning to use a language effectively takes a long time in practice. The reality is that it's not just syntax you have to learn, but the tooling around the language, the ecosystem, its libraries, best practices, and so on. Then, there are families of languages. If you know one imperative language then core concepts transfer well to another, however they're not going to be nearly as useful if you're working with a functional language. The effort in learning languages should not be trivialized. This is precisely the problem LLMs solve because you can focus on what you want to do conceptually, which is a transferable skill, and the LLM knows language and ecosystem details which is the part that you'd be spending time learning.
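To make the paradigm point concrete, here's a generic example (mine, not from the thread) of the same task written both ways in JavaScript, which supports both styles:

```javascript
// Imperative style: mutate an accumulator step by step. This
// transfers almost directly between C-family languages.
function totalPriceImperative(items) {
  let total = 0;
  for (const item of items) {
    if (item.inStock) {
      total += item.price;
    }
  }
  return total;
}

// Functional style: describe the result as a pipeline of
// transformations with no mutation. Thinking this way is the kind
// of shift that takes real relearning when moving to a functional
// language.
const totalPriceFunctional = (items) =>
  items.filter((i) => i.inStock).reduce((acc, i) => acc + i.price, 0);
```

Both return the same answer; the gap between them is conceptual, not syntactic, which is why "knowing one language" doesn't automatically carry you across paradigms.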
Meanwhile, studies about GPT-3 are completely meaningless today. Efficiency has already improved dramatically, and models that outperform ones that required a data centre even a year ago can now run on your laptop. You can make the argument that aggregate demand for LLM tools is growing, but that just means these tools are genuinely useful and people reach for them more than the tools they used to use. It's worth noting that people are still discovering new techniques for optimizing models, and there's no indication that we're hitting any sort of plateau here.
the mit article was written this may, and as it notes, ai datacenters still use much more electricity than other datacenters, and that electricity is generated through less environmentally-friendly methods. openai, if it is solvent long enough to count, will
even the most efficient models take several orders of magnitude more energy to create than to use:
and overall, ai datacenters use
i'm doubtful that the uses of llms justify the energy cost for training, especially when you consider that the speed at which they are attempting to create these "tools" requires that they use fossil fuels to do it. i'm not gonna make the argument that aggregate demand is growing, because i believe that the uses of llms are rather narrow, and if ai is being used more, it's because it is being forced on the consumer in order for tech companies to post the growth numbers necessary to keep the line growing up. i know that i don't want gemini giving me some inane answer every time i google something. maybe you do.
if you use a pretrained model running locally, you know the energy costs of your queries better than me. if you use an online model running in a large datacenter, i'm sorry but doubting the environmental costs of making queries seems to be treatler cope more than anything else. even if you do use a pretrained model, the cost of creation likely eclipses the benefit to society of its existence.
EDIT: to your first point, it takes a bit to learn how to write idiomatic code in a new paradigm. but if you're super concerned about code quality you're not using an llm anyway. at least unless they've made large strides since i last used one.