Have you used so-called "AI" for actually productive work?
With Copilot included in professional-grade Office 365, and some politicians claiming their governments should use AI to be more efficient, I'm curious whether any of you have used "AI" to get productive things done, or if it's still mostly a toy for you.
I use it as a glorified Google search since Google search is absolute dogshit these days. But that's about it. ChatGPT is some of the most overhyped bullshit I've ever seen in my life.
You shouldn't use it for search like that. They (Gemini and ChatGPT) love to be confidently incorrect. Their perfect grammar tricks you into believing their answers, even when they are wildly inaccurate.
I use GPT in the sense of "I need to solve X problem, are there established algorithms for this?" which usually gives me a good starting point for actual searching.
Most recent use-case was judging the similarity of two strings: I had never heard of "Levenshtein distance" before, but once I had that keyword it was easy to work from there.
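For reference, a minimal Python sketch of the Levenshtein distance named above — my own illustration of the algorithm, not code from the thread:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertions,
    deletions, substitutions) needed to turn a into b."""
    # prev[j] holds the distance between the current prefix of a
    # and b[:j]; we only ever need one previous row of the DP table.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # → 3
```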
I think I'm going to disagree with the accuracy statement.
Yes - AIs can be famously inaccurate. But so can web pages - even reputable ones. In fact, any single source of information is insufficient to be relied upon, if accuracy is important. And today, deliberate disinformation on the internet is massive - it's something we don't even know the scale of because the tools to check it may be compromised. </tinfoilhat>
It takes a lot of cross-referencing to be certain of something, and most of us don't bother if the first answer from either method 'feels right'.
AI does get shown off when it's stupidly wrong, which is to be expected, but the world doesn't care when it's correct time and again. And each iteration gets better at self-checking facts.
Copilot is actually linked directly into their search engine and it provides the links it pulls its data from. But you're correct, ChatGPT is not hooked into the live internet and should not be used for such things. I'm not sure if Gemini is or not since I haven't used it or looked into it much, so I can't comment on it.
Edit: I stand corrected, ChatGPT is hooked into the live web now. It didn't use to be, and I haven't used it in a while since my work has our own privately trained model running that we're supposed to use instead.
Absolutely agree!! LLMs are good for quick "shallow" search for me (stuff I would find on google in a few minutes). Bad for "deeper" learning (because it's not capable of doing it). It's overhyped.
Pretty useful for software engineering, particularly helpful in writing a test suite; you still need to actually check the output though, ofc.
Also made use of it for writing my end-of-year review to solve the blank-page problem; I find it a lot easier to edit down than to start HR stuff like that entirely from scratch.
I use it all the time, and not just for myself or for work. Yesterday I fed my son's study guide into ChatGPT and had it create a CSV file with flash cards for Anki. It's great at any kind of transformation / summarizing or picking out specific information.
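For illustration, Anki's importer accepts a plain two-column CSV of front/back pairs, so the output being described is just a file like this — a sketch with invented example cards, not the commenter's actual study guide:

```python
import csv

# Hypothetical flash cards of the kind an LLM might extract
# from a study guide (front, back).
cards = [
    ("What is the capital of France?", "Paris"),
    ("H2O is the chemical formula for what?", "Water"),
]

# Anki can import this file directly as front/back note fields.
with open("flashcards.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(cards)
```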
When school sends me overly verbose messages about everything that's going on, I can feed the message into ChatGPT and have it create an ical file that has events for the important stuff happening in school in the coming week.
I used it to write a greeting card for my dad on his birthday ("I'm giving him X, these are his interests, give me ten suggestions for greeting cards").
I have it explain the reasons behind news stories (by searching for previous information and relating it to the news story). I ask tons of questions about anything I wonder about in the world such as chemical processes, the differences between oil frying and air frying, finding scientific papers about specific things, how to factory reset my Bose headphones... the list goes on.
I don’t think it can get the information for this with 100% accuracy unless the process is the same for all Bose headphones. How did it go?
Why not? I told it the model (Bose 700). It searched the web for information for that model, found an article that described how to do it, and provided me with the key points without having to scroll past tons of ads and noisy language. Of course it sometimes gives me the wrong info (usually because the sources are incorrect), but I'll notice soon enough.
How did this go? It can hallucinate stuff even when you post static data to it, last time I tried.
It went perfectly. Again, there are certainly times when it makes errors / hallucinates, but I can fix those manually. In my example of producing flash cards for my son, we obviously had to proofread the cards but that's much faster than writing all the cards by hand. One out of the 20 flash cards had a nonsensical question/answer so we just removed it.
I wanted to update the logo for my business, so I tried hiring an artist through a number of people I know from having worked various comic cons. No luck, so I went to friends I knew who were artists, got strung along with no results. Tried hiring via Craigslist and Reddit, got garbage.
I was out $1600 with nothing to show for it except some sketches that were nowhere near what I wanted. So I tried using AI. It was horrid. Anything that was half decent was cropped and unusable.
FYI, in the future, just use Fiverr. I had the same problems when helping my wife get her business logo created. People either wanted a ridiculous amount of money for a simple logo ($1500+ and formal contracts) or like $100-200 just for prototyping to start and then another $100-200 for the final image (the latter was commonplace on the freelance artist subreddits). I went with a couple artists on Reddit and they completely missed what she wanted, despite us providing ample examples and our own rough sketch ideas.
She ended up finding a local artist through a friend who captured the logo exactly how she pictured it, and it ended up costing around $150. Ironically, I didn't know she did that, and I'd hit up a random artist on Fiverr who came very close to what the local artist did and it was only like $50.
Sorry for the tangent, I was just somewhat surprised at how complicated and potentially expensive getting a simple logo created can be. I know artists gotta eat, but some of them wanted more than what plumbers or electricians ask for, which is crazy to me.
I used AI to generate random fake data to use in training in Excel, and also to understand various concepts in my field of study and to answer my sudden random questions.
I would say that I have used an LLM for productive tasks unrelated to work. I run a superhero RPG weekly, and have been using Egyptian & North African myths as the origin for my monsters of the week. The LLM has taken my research and the monster-creating phase of my prep from being multiple hours to sometimes under one hour - I do confirm everything the LLM tells me with either Wikipedia or a museum. I can also use an LLM to generate exemplary images of the various monsters for my players, as a visual aid.
That means I have more time to focus on the "game" elements - like specific combats, available maps, and the like. I appreciate the acceleration it provides by being a combined natural-language search engine and summary tool. Frankly, if Ask Jeeves (aka ask(dot)com) was still good at parsing questions and providing clear results, I would be just as happy using it.
I think ChatGPT and other AI can be a fantastic tool if you use it responsibly. It’s a great help for learning and practicing things. I’ve very recently made my first server and it’s great at answering all my simple questions that I sometimes feel hesitant to bother people with, and little things like that. Sometimes I’ll ask it to give me a simple kind of structure or bullet-point list of topics I need to make sure to hit in my writing, or weirdly enough it’s pretty good at helping me with substitutions in recipes, or other little things like that. And I personally think that all of this is fine.
But I’m entirely against using it to create any kind of final product. Having it do any kind of actual final work is just stupid and lazy. I truly don’t think AI is capable of making anything that’s really worth people's time, and in the amount of time you’ll spend meticulously explaining everything that you want or need for whatever it is you have it generating for you, you could’ve just made it yourself and done a far better job. That’s where I draw the line. I don’t think AI has to be inherently evil or anything, because it can be a great tool, but you can’t rely on it to actually do things for you. Maybe others will disagree with me; I know many people, especially in Fediverse circles, are very very strongly anti-AI in all facets, but that’s just my thoughts.
My work often involves taking a lot of observation notes, and I used to spend a lot of time sifting through them to make the actual summaries and analyses. Now AI basically does my first draft, and I can even ask it to highlight examples of different things from my notes. It honestly saves me a lot of time and effort, but also proves to me that on its own, AI still isn't good enough to beat a real human expert; it's just WAY faster and gets me like 70~80% of the way there in seconds. I was at a conference just a few weeks back and found at least one other person in my field doing the same, and a lot more people were looking to adopt it for this kind of use specifically after our discussions.
This practice is banned at our company and it is a fireable offence. We also do not allow for code to be shown or shared on Teams. If there is ANY confidential information or even proprietary internal subject matters in your notes, you are essentially feeding it to the AI to plagiarize.
Nothing that would be proprietary; I don't work in software or tech. And a simple find-and-replace-all gets rid of any confidential or personal information before I paste it into any AI. Redacting and/or concealing confidential info has been a thing I've had to do since way before AI.
Used it as a toy for the longest time, but recently I had to do a lot of coding and was actually able to make good use of code-completion AI.
Saved me about a quarter of my time. Definitely worth something. (FYI I use supermaven free tier).
Also I'm using ChatGPT to ask dumb questions because that way I don't have to constantly interrupt other people. And also as a starting point to research something. I usually start with ChatGPT, then Google specific jargon and depending on the depth of the topic I will read either studies, articles or forum threads afterwards.
It did take me a long time to figure out which AI and when to use it, so mandating this onto the entire government is a gong show more than anything.
No, AI is not useless, but it's always a very specific use case.
If you're interested, I suggest using the free ChatGPT version to ask dumb questions together with Google to get a feel for what you get. Then you can better decide if it's worth it for you.
The amount of shit we have to clean up from devs using AI generated code nowadays is insane. Some people are using it to generate the bulk of their code and the output can be trash tier.
I was supposed to have a nice long weekend to rest and I spent most of it cleaning up after clients who pushed AI generated code into production that froze all data processing. Even after we found the problem and fixed it for them, the data didn't catch up until yesterday afternoon. The entire holiday I had to spend with a laptop a few feet away on a Teams call because a dev used AI-gen code.
I am not saying that it isn't helpful in your situation. What I am saying is that a growing number of outfits are starting to depend on "devs" whose code is mostly LLM generated, and they push it without understanding what it does or how it interacts with the environment they are deploying it in.
Yeah. I think AI literacy is a real thing and should be taken seriously. Before generating, everyone should internalize the boundaries and limitations of any model used.
If you have a hammer, everything's a nail. That reflex exists with AI as well, so everyone who uses it has to be careful in that regard.
I use it all the time. I can't imagine my work or my life without it. I need it to work as a designer. I use it to brainstorm, I use image generation for everything from moodboards to helping me with final designs. I've started to also program, because for a lot of tasks I don't need a developer anymore.
I use it to help me write emails, social media posts, to translate,... I use it to ask for any trivia I'm interested in during the day. I use it for music recommendations,...
I could probably go on... But my work, my job is completely different than it was a year ago.
I'm using Hugo to design a new website, and Gemini has been useful in finding the actually useful documentation that I need. Much faster and more accurate than trawling the official pages, and it does a better job of providing relevant examples. It's also really good at sensing what I'm actually asking, even if I'm clumsy with the phrasing.
And for those who continue to say AI isn't really useful for learning - another thing I've been using it for. "write perl to convert a string to contain only lowercase, converting any non-alpha chars to dashes" - I've learned how to do stuff like that over and over again, but the exact syntax falls out of my head after a few months of not doing it. AI is good at providing a quick refresher. I've already learned perl properly (including from paper books - yes, I first wrote perl a quarter of a century ago) - and forgotten it so many times. AI doesn't prevent me learning, just makes it faster.
Much as I dislike praising AI, I must admit I got some good results using an AI-powered search engine for academic articles to find sources for a term paper I'm writing for a seminar class I'm taking for my master's degree.
When I'm stuck in debugging and can't think of what could be going wrong, LLM chats are quite useful. I can ask for possibilities, and often I find something meaningful that didn't come to mind. These kinds of things are hard to do with search engines.
(If I'm debugging something unfamiliar this becomes very counterproductive though, as I can't filter hallucinations by looking).
A smart text formatter
Simple bash one liner, boilerplate code generation.
I tried it, but for non-trivial or longer code generation it effectively gets slower again, as I find myself fixing or working around AI mistakes quite often.
Every day. The company I work for has all its code in SAS, and I use our LLM to translate it to Python. I also write my Python scripts and ask the LLM to refactor and optimize them. Sometimes it saves me 2 seconds, so I just use my own code, which is usually simple, but other times it saves me half an hour.
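As a hypothetical illustration of the kind of SAS-to-Python translation described — the SAS step and all column names here are invented, not the commenter's actual code:

```python
# Hypothetical SAS data step being translated:
#   data high_sales;
#     set sales;
#     where amount > 100;
#     revenue = amount * price;
#   run;
sales = [
    {"amount": 50,  "price": 2.0},
    {"amount": 150, "price": 3.0},
    {"amount": 200, "price": 1.5},
]

# Filter rows (the WHERE clause) and derive a new column
# (the revenue assignment) in one pass.
high_sales = [
    {**row, "revenue": row["amount"] * row["price"]}
    for row in sales
    if row["amount"] > 100
]
print(high_sales)
```

In practice a translation like this would more likely target pandas, but the row-by-row semantics of a SAS data step map onto a plain comprehension just as directly.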
I’m a radiologist and our group uses an LLM tool to assist with generating reports on imaging studies. Our reports have a body that includes all of the imaging findings (which we dictate) and then a conclusion/summary calling out what is most important (and serving as a tl;dr for other physicians). The LLM tool analyzes the body to generate that summary of important findings. It certainly is not perfect and frequently requires some editing. Overall it is faster than me creating the summary each time though.
I've used llama3 to help me rewrite my ancient CV and gotten good results so far. I took what it suggested and changed things myself to make a bit more sense. I also use it to summarize things occasionally but that's about it.
I use it to outline and lay out big documents and reports. I give it a list of tasks I did and it writes the long-form text in the approved style. I use it anytime I need to translate my thoughts or process into corporate jargon. And occasionally my bosses ask me for a report on something totally unrelated to what we are doing, and I'll ask GPT to do the first pass on the topic and then I'll come back and re-write it iteratively as I figure out what part of the topic the boss really cares about.
I have used it as a nicer version of web search, mostly for "How do I write code using this library I'm not yet familiar with?" It provides passable tutorials when the library's documentation is sparse (I get it) or poorly written (they tried 🤷♂️).
Absolutely. I've used it to write basic scripts that I didn't feel like spending time on. I've also used it to write cover letters. I always make sure to peruse through it to see what it did and make sure it works or sounds right.
I basically use it on rare rare occasion to help get me "unstuck" with creative tasks, I don't really use what it produces in the end, I wind up dismantling it entirely and rewriting it "properly" but it has a use you know?
Really worth listening to this podcast as well. It's a guy teaching corporate teams to make the best use of AI. He goes over how to get really great results by using it as a discussion, rather than just asking it a question and expecting an accurate answer in the first instance.
AI has been most useful for tech support for me. I wouldn't have been able to switch to Linux completely if AI didn't instantly find solutions for me, rather than being told by the community to read tomes of documentation.
I also use it a lot to find how to get office apps to do what I want.
I'm famous at work for being a poet, when I actually just ask AI to write a short witty poem.
You can use image generators to make nice personalised cards to share on special events.
AI can make mind maps and things like that if you tell it what you want.
I've used it once to suggest a specific term that I'm going to use in my comic. I was utterly incapable of formulating a conventional search query for a search engine so, after endlessly browsing various thesauri, in the end I resorted to asking perplexity ai. Still took a bit and I had to fight it to get it to understand what I was asking but I did eventually find a term that fits. Felt dirty afterwards. Does that count as "productive"?
The only other thing was the title of a book I read 30 years ago and had only vague memory of. So I gave it an approximate description including a plot point I thought I remembered. The first result it gave me wasn't it. But it claimed the plot involved the thing I remembered. I then asked again and the second result actually was the correct book - turns out I had almost completely misremembered the plot point but it still said "yep, this happens in this book". Very weird experience.
When I had a mold problem it was affecting my mind. I couldn’t think straight or focus, so I had ChatGPT make me a step-by-step plan for dealing with it, and had it break each step down into nested sub-steps until no step was more than five minutes of effort, then I had it format the plan to copy-paste into workflowy.
It was really helpful. I could have made that plan myself, except that I was fucked up.
ChatGPT as an interactive search. Last one was about EU GDPR compliance checklist to give a quick answer on what areas need to be looked at. I use it like once a week for work.
Productive in other ways: I use it once a month for recipes. Recipes are probably my favourite, since I can say "Write it using grams and ml" and "give me some options to replace eggs" and it writes out a legit recipe based on those millions of annoying blog recipes.
JetBrains AI autocomplete for programming, which is slowly getting better, and I'm getting the hang of using it. It's really good for cases where I have a common thing whose syntax I don't remember: I just type the name of a variable like "cspHeaderValue" and it will fill in a format that's very annoying to look up, based on some values I wrote above.
I'm not a 10x engineer because of it; it's more like +10% overall and really depends on the task. I can see it going up to around +50%, but an AI plateau might come before then.
I started using Debian full-time a year and a half ago. It was a very frustrating experience initially and I leaned on LLMs heavily for advice. It was pretty hit or miss at first, and still occasionally gives wrong advice, but it has become much more helpful as the models have progressed. I have been able to restore a broken bup backup, learn the innards of systemd, troubleshoot scripts not launching correctly, optimize my Wayland config, correct fstab boot errors, configure my OpenWrt router, etc. Obviously I could just blindly copy/paste, but because I ask questions and try to tie things together, I learn along the way as well.
Currently taking a stats course and use the paid version of Claude to check/correct my workings when I do an exercise outside of the course. Also, it's great for explaining concepts in relatable terms. For example I was having trouble understanding confidence intervals, but told Claude to explain it using Steph Curry's 3pt shooting % as example.
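For illustration, a minimal Python sketch of the concept mentioned above — a 95% confidence interval for a shooting percentage, using the normal (Wald) approximation. The numbers are invented for the example, not Curry's actual stats:

```python
import math

def proportion_ci(successes: int, attempts: int, z: float = 1.96):
    """Normal-approximation (Wald) confidence interval for a
    proportion: p +/- z * sqrt(p * (1 - p) / n)."""
    p = successes / attempts
    margin = z * math.sqrt(p * (1 - p) / attempts)
    return p - margin, p + margin

# Invented example: 200 made threes out of 470 attempts.
low, high = proportion_ci(200, 470)
print(f"95% CI for 3pt%: {low:.3f} to {high:.3f}")
```

The interpretation the commenter was after: the sample percentage (200/470 ≈ 42.6%) is only an estimate, and the interval quantifies how far the true shooting percentage could plausibly be from it given the number of attempts.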
There are going to be a lot of people left behind because they haven't kept up w/ the rate of progress and still see LLMs as they were when they first launched.
My physics professor has us compare our answers to physics problems with an LLM's output. Somehow, the AI is even worse at physics than I am; it once simplified 4π² to 4.
I mainly use it to get a general direction/names/sources when I want to learn about something but don't know where to start. So far it's the only use case for which I've found it reliably useful.
GitHub Copilot for fancy find and replace at work (rewriting a database migration from the old schema to the new schema). I pasted in the old migration, started the pattern and the AI finished for me.