If you are paying to use "AI", who are you paying and what are your regular usecases?
I use Midjourney for work. I'm a video editor for a financial YouTuber. Sometimes stock footage doesn't have the B-roll that fits the narrative, so I use Midjourney to fill in the gaps.
I pay for one to goon. I use the free one to parse error messages while programming.
Create Anything to prototype app/web project ideas. Leonardo AI for images for my 3 TTRPG games.
I think technically I'm paying for Google's, because it comes free with the 2 TB Google Drive subscription my family uses for photos.
Currently paying for a chatbot program similar to Character.AI, but primarily marketed for sexual content. I started supporting it as a fun single-developer app, and now I find it a pretty useful writing tool for bouncing around ideas.
I'm trialing writing and grammar apps because I have mild dyslexia, but haven't found any paid apps that work much better than just chucking writing into ChatGPT and asking it to find the mistakes. The streamlined UI is the only benefit I've found so far. And one, I think called Pro Writing Aid, kept crashing my writing app, Scrivener, whenever I tried to use it.
What do you use the chatbot program to goon to?
I don't. It was the first example I found where you could write a character to talk to. I had fun seeing how different personality traits changed their responses. At the time most AI was censored, so I was using it like a less powerful ChatGPT that could swear, sometimes with the personality of Hannibal Lecter. And sometimes as William Howard Taft. Among others.
If you've read any amount of erotica you'll know the sex scenes can be fairly boilerplate and repetitive; it's the set-up that makes it interesting. For that reason I found the bots kinda boring for that purpose.
Plenty of people have fun using it as intended, though. Lotta incest bots on there.
The developer is a fairly active, positive guy who built up a nice community, so I like supporting him.
Anthropic chat, but only because my employer paid for it. I use it for high-level technical questions sometimes. For example, I had to manually patch a binary executable, and it helped me understand its structure.
I also have some API tokens for OpenAI, Claude, and Perplexity. I've built an app that uses LLMs to parse some complex data, and I had to test integrations with different API providers.
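For a sense of what those integration tests exercise, here's a rough sketch of the kind of thin wrapper involved. The endpoint URLs and headers are the providers' documented REST APIs, but the ask() helper and the PROVIDERS table are my own illustration, not any official SDK:

```python
# Sketch: one provider-agnostic call for integration testing.
# The wrapper itself is hypothetical; the endpoints are real.
import os
import requests

PROVIDERS = {
    # OpenAI-style chat completions (OpenAI; Perplexity uses the same shape).
    "openai": {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": lambda key: {"Authorization": f"Bearer {key}"},
        "body": lambda model, prompt: {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        "text": lambda r: r["choices"][0]["message"]["content"],
    },
    # Anthropic's Messages API: different auth header and response shape.
    "anthropic": {
        "url": "https://api.anthropic.com/v1/messages",
        "headers": lambda key: {
            "x-api-key": key,
            "anthropic-version": "2023-06-01",
        },
        "body": lambda model, prompt: {
            "model": model,
            "max_tokens": 512,  # required by Anthropic
            "messages": [{"role": "user", "content": prompt}],
        },
        "text": lambda r: r["content"][0]["text"],
    },
}

def ask(provider: str, model: str, prompt: str) -> str:
    """Send one prompt to the named provider and return the reply text."""
    p = PROVIDERS[provider]
    key = os.environ[f"{provider.upper()}_API_KEY"]
    resp = requests.post(p["url"], headers=p["headers"](key),
                         json=p["body"](model, prompt), timeout=60)
    resp.raise_for_status()
    return p["text"](resp.json())
```

The point of normalizing to one ask() signature is that the same parsing test suite can run against every provider unchanged.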
I pay OpenRouter; they give me access to pretty much all models from all providers at the market rate per token.
I have it hooked up to Open WebUI, which I use for finding almost all information, since it has a SearXNG tool to search for things.
I also use KiloCode, an agentic code editor for VSCodium.
As someone who is completely blind, I pay for OpenRouter in order to have AI describe images to me. If more people bothered with alt text, I wouldn't have to. But it is what it is.

I suspect there are models I could run locally that would do what I need; on iOS, Apple handles all image descriptions locally on the phone, and they're perfectly adequate. But on Windows, nobody has created an easy way to get a local model running in the open-source NVDA screen reader (https://www.nvaccess.org/), though there are multiple addons that work with OpenRouter. NVDA is written entirely in Python, so it should actually be pretty easy to do.

The main reason I haven't tried it myself is that I have no idea what local model to use. None of the benchmarks really tell me "this model would be good at describing images to blind people", whereas the giant cloud models are semi-okay at everything, so everyone just uses those. But if we could use a smaller model, we might even be able to fine-tune it for the specific use case of blind people. Maybe someday!
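To be clear, the plumbing really is simple; it's the model choice that stumps me. The core of such an addon is basically one HTTP call. A minimal sketch against OpenRouter's OpenAI-compatible chat completions endpoint (the model slug is just a placeholder, not a recommendation):

```python
# Sketch: ask a vision model on OpenRouter to describe an image.
import base64
import os
import requests

def describe_image(path: str, model: str = "google/gemini-2.5-flash") -> str:
    """Return a screen-reader-friendly description of the image at `path`."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": model,
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe this image for a blind screen reader "
                             "user. Be concise and factual."},
                    # Image goes in as a base64 data URL.
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```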
I recommend Google's gemma-3-4b-it-qat in LM Studio.
gemma-3-4b-it-qat:

Okay, let's describe this image for someone who can’t see it. Imagine you're standing in a lush, green meadow filled with wildflowers – lots of tiny bursts of color like little dandelions and daisies scattered across the grass. The ground feels soft and yielding under your feet, like a thick carpet of moss and blades.

Now, dominating the scene is a giant tortoise. It’s enormous, much bigger than any turtle you've ever seen! Its shell is rough and textured, covered in patches of moss and lichen – it feels cool and damp to touch. You can sense its weight, a solid, ancient presence. The shell isn’t perfectly smooth; there are cracks and ridges, like the surface of an old wooden table.

Growing out of this massive tortoise's back is a tiny little house! It looks incredibly rustic – made of weathered wood with a steeply pitched roof covered in moss and small stones. You can almost feel the age of the wood, imagining it’s been standing there for a very long time. There’s a small window on the second floor, and a little chimney puffing out wisps of smoke—it smells faintly like woodsmoke and maybe something sweet.

Perched on a mossy stump just to the side of the tortoise is a small bird. It's a vibrant blue color – imagine a brilliant sky captured in feathers! You can almost hear its gentle chirping, a quiet sound in the stillness of the meadow.

The overall feeling of the image is one of peace and tranquility. It’s like a fairytale scene—a slow-moving giant carrying his home on his back, surrounded by nature's beauty. There’s a sense of timelessness and magic to it. Would you like me to focus on any particular aspect of the image in more detail, or perhaps describe something else about the overall feeling it evokes?

70.58 tok/sec • 397 tokens • 1.51s to first token • Stop reason: EOS Token Found, on an i7-13650HX / RTX 4060 laptop.
The scene is actually just a normal-sized turtle with a house built on top of the shell, and a bird sitting on a tree stump. The AI didn't do the best job, BUT it is only 4B params; it can run on phones, or really fast on a GPU. Naturally, with all their YouTube video (and comment) data, Google has some of the best computer vision models.
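And since LM Studio can serve whatever model is loaded over a local OpenAI-compatible API (enable it in the Developer tab; default port 1234), the screen-reader use case above could run fully offline with roughly the same call, just pointed at localhost. A minimal sketch, assuming the server is running with gemma-3-4b-it-qat loaded and a photo.png on disk:

```python
# Sketch: image description against LM Studio's local server.
# LM Studio ignores the api_key, but the openai client requires one.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

with open("photo.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("ascii")

resp = client.chat.completions.create(
    model="gemma-3-4b-it-qat",  # whatever vision model you have loaded
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image for a blind user."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```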
How heavy is your usage, and how much does it cost? I've always thought this is literally the best thing AI is actively doing.
It really depends. For images that are graphs and infographics I use GPT-5 or Gemini 2.5 Pro. For anything with adult content I have to use Grok, because it's the only model that won't refuse. For stuff that's just text in an image, the cheap models from Microsoft are fine. Also, sometimes OpenRouter has limited-time deals where some models are free. I'd say overall I spend between $2 and $5 a month on it. But I do allow OpenRouter to train on the data, so I get a discount of a few percent as well.
OpenRouter is pay-per-token, so cost depends on usage and which model is being used.
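For a rough sense of scale (illustrative rates, not any specific model's pricing): a model at $0.25 per million input tokens and $1.00 per million output tokens, used for 200k input and 50k output tokens in a month, comes to 0.2 × $0.25 + 0.05 × $1.00 = $0.10. Frontier models can cost tens of times more per token, which is how usage like the above lands in the $2-5/month range.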
usecases
Not a word, my dude.
I pay for Cursor, OpenAI, and Anthropic. I was paying for Google Gemini as well, but it was returning too many errors, so I canceled it. I also pay for Google Workspace, Microsoft Office, and Adobe subscriptions. They inject their own AI into their services, but I end up ignoring them or turning them off.
Mostly I use it for coding in Cursor, but occasionally for research into the state of AI and to make MCP extensions. It's been worth the investment so far, given how many of the mundane coding tasks get done just by supervising it. I also had it update a WordPress theme because I had no interest in learning the innards.
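For anyone curious what "making an MCP extension" actually involves: with the official Python SDK it's mostly decorators. A minimal sketch (the tool itself is a toy example, not anything I ship):

```python
# Minimal MCP server sketch using the official `mcp` Python SDK
# (pip install "mcp[cli]"). Cursor and other MCP clients can launch
# this as a subprocess and call the tool over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count whitespace-separated words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```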
I never let them loose in 'agentic' mode, as they inevitably destroy all the work. I can run decent-sized models locally through LM Studio and Cline, but they're much slower than just using Cursor and a cloud model.
Outside coding, the only usable one I've found is Adobe Firefly, accessed inside Photoshop (to remove material) and Illustrator (to generate simple SVGs and icons from prompts).
Every single other one, when I've put it to a non-coding use has been a pile of slop. If all LLMs go away tomorrow, the only one I'll miss is the Adobe SVG creator.
I paid for several, to test features and see if there is value. There is little value in paying, close to zero. The difference between the best paid models and the best free ones is minuscule. We are hitting a plateau now, with diminishing returns. Talking with AI is a nice UX feature, and is mostly a paid one, but Qwen has this for free.
Image and video generation is not useful. Coding is slightly useful in very specific cases, but mostly useless: it gives a false sense of fast progress, and in the long term it harms productivity. So it works for simple proofs of concept and for inspiration/exploring solutions. If you are a junior in a language or in software development, avoid it; otherwise the road to becoming a senior will be long and hard.
The most useful case is "creativity": exploring ideas, inspiration, and getting started with something when you don't know where to begin. If I want to know something about a topic, I find it a useful but untrustworthy starting point; nothing from AI can be trusted, but it can introduce you to a subject so that you know what to look for.
So don't pay; just use the free ones. If you have a PC with a mid/high-end GPU, you can also install Jan.ai for free and run models locally.
I currently use Mistral and Qwen mostly. I still have a subscription to Mistral, and am waiting for it to run out.
Paying: nothing.
Uses:
When I have a stupid, detailed idea, I put it into my local Stable Diffusion.
I try to use Claude for simple tasks and "I can't quite find this solution" moments; it is often wrong or needs rework, but it feels like reaching out to a colleague for my personal projects.
I use the Google Meet assistant when I've stopped paying attention and the topic is starting to feel relevant.
I use the AI search summary to do further searches with refined search terms.
I listen to AI-generated music from time to time, but I was doing it before the LLM craze went nuts, with Dadabots' covers of Krallice and Meshuggah.
I used to pay for Suno, then after that Udio. Super cool for making theme songs for D&D, not much real use elsewhere outside goofy haha AI songs and pathetic attempts at gaming the Spotify system. They just... don't update the models fast enough. I'm still waiting on Udio 2.0; it's fallen behind Suno at this point, which is mildly annoying, but I also don't care enough beyond that, so ehh.
It's also a rights nightmare, which the world seems to just be ignoring. On one hand, any chump can get a song written to their lyrics, which is cool and I have definitely done before; on the other... it's all quite fake, and sounds fake too. Which is why I don't pay for it anymore.
I also tried replit for the first time yesterday just to see what it's all about and it straight up made a frickin video game... vibe coding is getting scary.
I'm using openrouter.ai, a service that allows the use of a wide range of models and lets you easily switch between them on the fly.
Besides the major players, I can also use cloud-hosted instances of open models. These are often incredibly cheap, and you can select the ones that don't use your data for training.
Typical use cases include language learning and copilot stuff for programming.
The usual dev work.
Claude is primary.
JetBrains is on the fence, but can be nice for autocomplete and backup.
OpenRouter is for API/script usage.
I'm still on my free month of full-featured Copilot, and I'm considering subscribing after it ends ($10/month). Mostly coding and bash scripting.
I am paying for Cursor, which can use various models to help with software development. I use the AI to help me create design documents for software ideas I have, then use those documents to guide it in developing the code. I've tried free models on my own hardware and they don't come close, mostly because I don't have a spare $5k for the right GPUs.
$20 per month for Cursor to write code