Your public ChatGPT queries are getting indexed by Google and other search engines
BreadstickNinja @lemmy.world
Yes, Ollama or a range of other backends (Ooba, Kobold, etc.) can run LLMs locally. Hugging Face hosts a huge number of models suited to different tasks: coding, story writing, general-purpose chat, and so on. If you run both the backend and the frontend locally, then no one monetizes your data.
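For anyone curious what that looks like in practice, here's a minimal sketch (Python, assuming you've installed Ollama and pulled a model, using "llama3" as an example tag) that sends a prompt to Ollama's local HTTP API. Nothing in this round trip leaves your machine:

```python
import requests

# Ollama serves a local HTTP API on port 11434 by default.
# "llama3" is just an example model tag; swap in whatever you've pulled.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Explain what a context window is in one paragraph.",
        "stream": False,  # return the full reply at once instead of streaming tokens
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
```

The other backends expose their own local APIs, but the point is the same: the whole request/response loop stays on your own hardware.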
The part I'd argue the previous poster is glossing over a little bit is performance. Unless you have an enterprise-grade GPU cluster sitting in your basement, you're going to make compromises on speed and/or quality relative to the giant models that run on commercial services.