Have you found any cool uses/life hacks for AI?
I’m piping my in-house camera feed to Gemini. Funny how it comments on our daily lives. I should turn the best of it into a book or something.
Another one:
Night Hall Motion Detected, you left the broom out again, it probably slid a little against the wall. I bemoan my existence, is this what life is about? Reporting on broom movements?
Yeah I have a full collection of super sarcastic shit like that.
Do you take any precautions to protect your privacy from Google or are you just like, eh, whatever?
Absolutely “whatever”. I became quite cynical after working for a while in telco/intelligence/data and AI. The small addition of a few pics just adds a few contextual clues to what they already have.
LLMs are pretty good at reverse dictionary lookup. If I'm struggling to remember a particular word, I can describe the term very loosely and usually get exactly what I'm looking for. Which makes sense, given how they work under the hood.
I've also occasionally used them for study assistance, like creating mnemonics. I always hated the old mnemonic I learned in school for the OSI model because it had absolutely nothing to do with computers or communication; it was some arbitrary mnemonic about pizza. Was able to make an entirely new mnemonic actually related to the subject matter which makes it way easier to remember: "Precise Data Navigation Takes Some Planning Ahead". Pretty handy.
On this topic, it's also good at finding an acronym whose full form spells out a specific thing you want. Like if you want your software's name to spell out your name or some fun word, but still have a full form related to what it does, AI can be useful.
Before it was hot, I used ESRGAN and some other stuff for restoring old TV. There was a niche community that finetuned models just to, say, restore classic SpongeBob or DBZ or whatever they were into.
These days, I am less into media, but keep Qwen3 32B loaded on my desktop… pretty much all the time? For brainstorming, basic questions, making scripts, an agent to search the internet for me, a ‘dumb’ writing editor, whatever. It’s a part of my “degoogling” effort, and I find myself using it way more often since it’s A: totally free/unlimited, B: private and offline on an open source stack, and C: doesn’t support Big Tech at all. It’s kinda amazing how “logical” a 14GB file can be these days, and I can bounce really personal/sensitive ideas off it that I would hardly trust anyone with.
…I’ve pondered getting back into video restoration, with all the shiny locally runnable tools we have now.
Do you run this on NVIDIA or AMD hardware?
Nvidia.
Back then I had a 980 Ti. Right now I am lucky enough to have snagged a 3090 before they shot up.
I would buy a 7900, or a 395 APU, if they were even reasonably affordable for the VRAM, but AMD is not pricing their stuff well…
But FYI you can fit Qwen 32B on a 16GB card with the right backend/settings.
How do you get it to search the internet?
The front end.
Some UIs (like Open WebUI) have built-in “agents” or extensions that can fetch and parse search results as part of the context, allowing LLMs to “research.” There are in fact some finetunes specializing in this, though these days you are probably best off with regular Qwen3.
This is sometimes called tool use.
I also (sometimes) use a custom python script (modified from another repo) for research, getting the LLM to search a bunch of stuff and work through it.
But fundamentally the LLM isn’t “searching” anything, you are just programmatically feeding it text (and maybe fetching its own requests for search terms).
The backend for all this is a TabbyAPI server, with 2-4 parallel slots for fast processing.
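The core idea above (the LLM never browses; the script fetches text and pastes it into the prompt) can be sketched in a few lines. This is a minimal illustration, not the actual script from the thread; the search endpoint and the JSON field names are placeholders I made up:

```python
# Sketch of programmatic "web search" for an LLM: we fetch results
# ourselves and assemble them into the prompt text the model sees.
import json
import urllib.parse
import urllib.request


def build_search_prompt(question, snippets):
    # The model only ever receives this assembled text.
    context = "\n\n".join(snippets)
    return (
        "Use the search results below to answer the question.\n\n"
        f"Search results:\n{context}\n\n"
        f"Question: {question}\n"
    )


def fetch_snippets(query, endpoint="https://example.invalid/search"):
    # Placeholder fetch; a real setup would hit a SearxNG instance,
    # a search API, etc., and the response schema would differ.
    url = f"{endpoint}?q={urllib.parse.quote(query)}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [r["snippet"] for r in data["results"]]
```

The prompt-assembly half is the part that matters: from the model’s point of view, “searching” is just extra context appearing in its input.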
Do you have any recommendations for a local Free Software tool to fix VHS artifacts (bad tracking etc., not just blurriness) in old videos?
That work well out of the box? Honestly, I’m not sure.
Back in the day, I’d turn to VapourSynth (or AviSynth+) filters and a lot of hand editing: basically go through the trouble sections one by one and see which combination of VHS-specific correction and regeneration looks best.
These days, we have far more powerful tools. I’d probably start by training a LoRA for Wan 2B or something, then use it to straight up regenerate damaged test sections with video-2-video. Then I’d write a script to detect them, and mix in some “traditional” vapoursynth filters.
…But this is all very manual, like python dev level with some media/ml knowledge, unfortunately. I am much less familiar with, like, a GUI that could accomplish this. Paid services out there likely offer this, but who knows how well they work.
Great for giving incantations for ffmpeg, imagemagick, and other power tools.
"Use ffmpeg to get a thumbnail of the fifth second of a video."
Anything where syntax is complicated, lots of half-baked tutorials exist for the AI to read, and you can immediately confirm if it worked or not. It does hallucinate flags, but fixes if you say "There is no --compress flag" etc.
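For that exact thumbnail prompt, the answer usually boils down to an `ffmpeg -ss 5 … -frames:v 1` invocation. A small sketch that just builds the command as a list (filenames are placeholders; putting `-ss` before `-i` makes ffmpeg seek fast instead of decoding from the start):

```python
def ffmpeg_thumbnail_cmd(video, out, second=5):
    # -ss before -i: fast input seek to the timestamp
    # -frames:v 1: write exactly one video frame, then stop
    return [
        "ffmpeg",
        "-ss", str(second),
        "-i", video,
        "-frames:v", "1",
        out,
    ]

# e.g. pass this to subprocess.run(...) to actually execute it:
# subprocess.run(ffmpeg_thumbnail_cmd("input.mp4", "thumb.jpg"), check=True)
```

And as the comment above says: if the output file is wrong or missing, you know immediately, which is what makes this class of task a good fit.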
This is the way.
Tailored boilerplate code
I can write code, but it's only a skill I've picked up out of necessity and I hate doing it. I am not familiar with deep programming concepts or specific language quirks and many projects live or die by how much time I have to invest in learning a language I'll never use again.
Even self-hosted LLMs are good enough at spitting out boilerplate code in popular languages that I can skip the deep dive and hit the ground running, you know, be productive.
I do this as well—I'm currently automating a repetitive workflow for work using python. What's the latest project you've generated boilerplate code for?
It's good for boring professional correspondence: responding to bosses' emails and filling out self-evaluations that waste my time.
I've done lots of cool things with AI. Image manipulation, sound manipulation, some simple videogames.
I've never found anything cool to do with an LLM.
Care to expand on sound manipulation? Are you talking about for removing background noise from recordings or something else?
Some speech recognition work, some selective gain adjustments: not just amplifying certain bands of frequencies, but trying to write a robot that can identify a specific instrument and amplify or mute just that. Also fun with throwing cellular automata at sound files. And with throwing cellular automata at image files to turn them into sound files.
Employment. I got a job with one of the big companies in the field. Very employee-focused. Good pay. Great benefits. Commute is 8 miles. Smart, pleasant, and capable co-workers.
As far as using the stuff - nope. Don't use it at all.
Legitimately, no. I tried to use it to write code and the code it wrote was dog shit. I tried to use it to write an article and the article it wrote was dog shit. I tried to use it to generate a logo and the logo it generated was both dog shit and a raster graphic, so I wouldn’t even have been able to use it.
It’s good at answering some simple things, but sometimes even gets that wrong. It’s like an extremely confident but undeniably stupid friend.
Oh, actually it did do something right. I asked it to help flesh out an idea and turn it into an outline, and it was pretty good at that. So I guess for going from idea to outline and maybe outline to first draft, it’s ok.
Crappy but working code has its uses. Code that might or might not work also has its uses. You should primarily use LLMs in situations where you can accept a high error rate. For instance, in situations where output is quick to validate but would take a long time to produce by hand.
The output is only as good as the model being used. If you want to write code then use a model designed for code. Over the weekend I wrote an Android app to be able to connect my phone to my Ollama instance from off my network. I've never done any coding beyond scripts, and the AI walked me through setting up the IDE and a git repository before we even got started on the code. 3 hours after I had the idea I had the app installed and working on my phone.
I didn’t say the code didn’t work. I said it was dog shit. Dog shit code can still work, but it will have problems. What it produced looks like an intern wrote it. Nothing against interns, they’re just not gonna be able to write production quality code.
It’s also really unsettling to ask it about my own libraries and have it answer questions about them. It was trained on my code, and I just feel disgusted about that. Like, whatever, they’re not breaking the rules of the license, but it’s still disconcerting to know that they could plagiarize a bunch of my code if someone asked the right prompt.
(And for anyone thinking it, yes, I see the joke about how it was my bad code that it trained on. Funny enough, some of the code I know was in its training data is code I wrote when I was 19, and yeah, it is bad code.)
My experience is that while it's not great at creating code from scratch, it's pretty alright if you give it a script and ask it to modify it to do something else.
For instance, I have a cron job that runs every 15 minutes and attempts to extract .rar files in a folder, emailing me if extraction fails. Problem is, if something does go wrong, it emails me every 15 minutes until I fix it. This is especially annoying if it's stuck copying a rar at 99%.
I asked DeepSeek to store failed file names in a file and have the script ignore those files for an increasing amount of time after each failure. It did a pretty good job, although it changed the name of a variable halfway through (easy fix) and added a comment saying it fixed a typo despite changing nothing about that line. I probably would have written almost identical code, but it definitely saved me time and effort.
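The “ignore for an increasing amount of time” part is just per-file exponential backoff. A minimal sketch of the logic (not the actual generated script; the failure-record format is my own assumption, and the 15-minute base matches the cron interval above):

```python
BASE_DELAY = 15 * 60  # the cron interval, in seconds


def next_allowed_time(last_failure_ts, failure_count, base=BASE_DELAY):
    # Double the wait after every failure: 15 min, 30 min, 60 min, ...
    return last_failure_ts + base * (2 ** (failure_count - 1))


def should_retry(name, failures, now):
    # failures maps file name -> (last_failure_timestamp, failure_count);
    # files with no recorded failure are always fair game.
    if name not in failures:
        return True
    ts, count = failures[name]
    return now >= next_allowed_time(ts, count)
```

A real version would persist the `failures` dict to disk (e.g. as JSON) between cron runs and email only when a file first fails or finally succeeds.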
I'm an author working on an online story series. Just finished S04. My editing was shit and I could not afford to pay someone to do it for me.
So I write the story, rewrite the story, put it through GPT to point out irregularities, grammatical errors, inconsistencies etc, then run it through Zoho's Zia for more checks and finally polish it off with a final edit of my own. This whole process takes around a year.
Overall, quality improved, I was able to turn around stuff quicker and it made me a lot more confident about the stuff I am putting out there.
I also use Bing image creator for the artwork and have seen my artwork improve dramatically from what Dream (Wombo) used to generate.
Now I am trying to save up to get a good GPU so that I can run Stable Diffusion so that I can turn it into a graphic novel.
Naturally I would like to work with an artist, since I can't draw, but everyone I meet asks for a 20–30k dollar deposit to do the thing. Collaborations have been discussed, and what I've learnt is that as times get tough, people request greater shares in the project than I, the originator, have. At one point when I was discussing with an artist, he was sidelining me and becoming the main character. I'm not saying that all artists are like this, but dang, people can be tough to deal with.
I respect that people have to eat, but I can't afford that and I have had this dream for years so finally I get a chance to pull it off. My dream can't die without me giving it my best so this is where I am with AI.
You don’t strictly need a huge GPU. These days, there are a lot of places for free generations (like the AI Horde), and a lot of quantization/optimization that gets things running on small VRAM pools if you know where to look. Renting GPUs on vast.ai is pretty cheap.
Also, I’d recommend languagetool as a locally runnable (and AI free/CPU only) grammar checker. It’s pretty good!
As for online services, honestly Gemini Pro via the AI Studio web app is way better and way more generous than ChatGPT, or pretty much anything else. It can ingest an entire story for context, and stay coherent. I don’t like using Google, but if I’m not paying them a dime…
Well, if I am going to push this into the project I envision, privacy is going to be key, so everything will be done locally. I have privacy concerns about running on someone else's hardware regardless of the provided guardrails and layers of protection I can provide for myself.
I used to use LanguageTool and Scribens but found my current working model to be the best for me at the moment. I will definitely look at options as I move to the next chapter, so LanguageTool is still an option. Also, I believe they went AI too? At least online?
ChatGPT kind of sucks but is really fast. DeepSeek takes a second but gives really good or hilarious answers. It’s actually good at humor in English and Chinese. Love that it’s actually FOSS too
Getting my ollama instance to act as Socrates.
It is great for introspection, also not being human, I'm less guarded in my responses, and being local means I'm able to trust it.
I bought a cheap barcode scanner and scanned all my books and physical games and put it into a spreadsheet. I gave the spreadsheet to ChatGPT and asked it to populate the titles and ratings, and genre. Allows me to keep them in storage and easily find what I need quickly.
One day I'm going to get around to hooking a local smart speaker to Home Assistant with ollama running locally on my server. Ideally, I'll train the speech to text on Majel Barrett's voice and be able to talk to my house like the computer in Star Trek.
Pasting code and error messages in saves time in debugging stupid mistakes.
Finding specific words in an MP3 log file for our radio station. Free app called Vibe transcribes locally.
It's good at paraphrasing paragraphs to contain no 'fifth glyphs'
That's a big bound forward from last I was looking at it! Avoiding that nasty glyph was notably not in its portfolio of tricks. It would say it was avoiding that fifth glyph, but still slip many through.
Assuming that this discussion is about LLMs, anyway.
I had to instruct it to consult a script to know how many words did contain fifth glyphs, but it did work with that.
What's a "fifth glyph?"
That glyph post D in our ABC
I love fantasy worldbuilding and write a lot. I use it as a grammar checker and sometimes use it to help gather my thoughts, but never as the final product.
Made a product search script that sorts eBay listings based on total per unit price (including shipping). Good for finding the cheapest multi-pack, lot, bundle, etc. by unit. Using Qwen 3 4B and feeding it a single listing at a time to parse.
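The sort itself is simple once the LLM has parsed each listing into numbers. A rough sketch, assuming the model emits price, shipping, and unit count per listing (the field names are my own, not the actual script's):

```python
def per_unit_price(listing):
    # Total landed cost divided by how many units the listing contains
    return (listing["price"] + listing["shipping"]) / listing["units"]


def cheapest_first(listings):
    # Best per-unit deal first, regardless of pack size
    return sorted(listings, key=per_unit_price)
```

The LLM's job here is only the messy part: turning free-form listing titles like "3-pack + free shipping" into those structured fields, one listing at a time.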
Do you self host or use one of the Free™ cloud services?
Self host. Just Ollama running on a machine without a GPU! I never said it was fast. :D
I've used LLMs to generate dialogue trees for a game and to generate data with coordinates describing the layout of the game world. In some ways it can replace procedural generation code.
Table top games?
video game
I use it for book/movie/music/game recommendations (at least while it isn't used for ads...). You can ask for an artist similar to X, or a short movie in genre X. The more demanding you are the better, like "a funny sci-fi book in the YA genre with a zero-to-hero plot".
Nope. Any use case I have tried with it, I usually find that either a python script, database, book, or piece of paper can always accomplish the same job but usually with a better end result and with a more reliably reproducible outcome.
Apart from avoiding it?
Indeed. I can proudly say that I managed to renew my unfortunately required M365 without the unfortunately included CoPilot trash. And that’s no mean feat, it is a veritable quest through an everchanging maze of clickables to get it this way.
Makes a good litmus test
Good for gaining an outside perspective/insight on an argument, discussion, or other form of communication between people. I fed my friend’s and their ex’s text conversation to it (with permission), and it was able to point out emotional manipulation in the text when asked neutrally about it:
Please analyze this conversation between A and B and tell me what you think of their motivations and character in this conversation. Is there gaslighting? Emotional manipulation? Signs of an abusive communication style? Etc. Or is this an example of a healthy communication?
It is essential not to ask a leading question that frames A or B in particular as the bad or the good guy. For best results, ask neutral questions.
It would have been quite useful for my friend to have this when they were in that relationship. It may be able to spot abusive behaviors from your partner before you and your rose-colored glasses can.
Obvious disclaimers about believing anything it says are obvious. But having an outside perspective analyze your own behavior is useful.
It’s helping me understand how I think so that I can create frameworks for learning, problem solving, decision making etc. I’m neurodivergent.
This is a very rare use case, but one where I definitely found them very useful. Similar to another answer mentioning reverse-dictionary lookup, I used LLMs for reverse song/movie lookup. That is, I describe what I know about the song/movie (or whatever else, could be many things) and it gives me a list of names that I can then manually check or just directly recognize.
This is useful for me because I tend to not remember names, artists, actor names, etc.
Very effective at translating between different (human) languages. Best if you can find a native speaker to double-check the output. Failing that, reverse translate with a couple different models to verify the meaning is preserved. Even this sometimes fails though; e.g. two words with similar but subtly different definitions might trip you up. For instance, I'm told "the west" refers to different regions in English and Japanese, but translating and reverse translating didn't reveal this error.
I use a model in the app SherpaTTS to read articles from the RSS aggregator Feedme.
I use it for alt text for photos. Mostly because I just don't know how to describe my images.
Writing HomeAssistant scripts.
I thought they would reject it, but my band friends and their peers all like to use AI to brainstorm and draft songs, then go from there making their own songs.
I thought that was interesting. I've asked them about it a few times, about the lazy way of using AI to just make slop, and yeah, they're against that.
I don't have any close friends who are drawing artists though I know a few through mutual hobbies on discord. They don't seem to be using AI as tools from what I can tell.
My dad and his circle are definitely churning slop though but says it's mostly for in-group joking and shooting the shit, so I guess that's fine
Me personally, I'm still hesitant to use it. I'm an "everything" consultant who hates his place in the small IT company, but I'm riding my BPD II wave too much to change it. Everyone around me is fine using AI to help analyze documents and whatnot to help them work. I can see how they are useful once you know how to ask the thing, but I just don't want to.
As a DJ with ADHD, it's great for helping me decide what to play next when I forget where I was going with the set, and mix myself into a corner. That said, it's not very good at suggesting songs with a compatible BPM and key, but it works well enough for finding tunes with a similar vibe to what I'm already playing. So I just go down the list until I find a tune that can be mixed in.
As for the usual boring stuff, I'm learning how to code by having it write programs for me, and then analyzing the code and trying to figure out how it works. I'm learning a lot more than I would from studying a textbook.
I also used to use it for therapy, but not so much anymore when I figured out that it will just tell you what you want to hear if you challenge it enough. Not really useful for personal growth.
One thing it's useful for is learning how stuff works, using metaphors comparing it to subjects I already understand.
I've used them both a good bit for D&D/TTRPG campaigns. The image generation has been great for making NPC portraits and custom magic item images. LLMs have been pretty handy for practicing my DM-ing and improv, by asking one to act like a player and reacting to what it decides to do. And sometimes in reverse, by asking it to pitch interesting ideas for characters/dungeons/quest lines. I rarely took those in their entirety, but there would often be bits and pieces I'd use.
I’ve used LLMs to reverse engineer some recipes.
Can you give an example?
I can’t be too specific without giving away my location, but I’ve recreated a sauce that was sold by a vegan restaurant I used to go to that sold out to a meat-based chain (and no longer makes the sauce).
The second recipe was the seasoning used by a restaurant from my home state. In this case the AI was rather stupid: its first stab completely sucked, and when I told it so, it said something along the lines of “well, employees say it has these [totally different] ingredients”, then got it right.