This is what the fight over 3rd-party API access was really all about.
When API access was allowed, all Reddit content was effectively free: they needed to ban 3rd-party apps so they could sell the accumulated content.
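Just to illustrate what "effectively free" meant in practice: Reddit's public .json endpoints hand back structured post data to anyone with an HTTP client. Rough sketch below (Python with `requests`; rate limits and the current API terms obviously constrain this kind of bulk access now):

```python
# Minimal sketch: pulling public posts via Reddit's .json endpoints.
# Illustrative only; today's API terms and rate limits restrict bulk collection.
import requests

def fetch_top_posts(subreddit: str, limit: int = 25) -> list[dict]:
    """Return title/score/selftext for the top posts of a subreddit."""
    resp = requests.get(
        f"https://www.reddit.com/r/{subreddit}/top.json",
        params={"limit": limit, "t": "all"},
        headers={"User-Agent": "demo-script/0.1"},  # Reddit rejects default UAs
        timeout=10,
    )
    resp.raise_for_status()
    children = resp.json()["data"]["children"]
    return [
        {
            "title": c["data"]["title"],
            "score": c["data"]["score"],
            "selftext": c["data"].get("selftext", ""),
        }
        for c in children
    ]

if __name__ == "__main__":
    for post in fetch_top_posts("AskHistorians", limit=5):
        print(post["score"], post["title"])
```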
I expect using content to train AI also factors into it.
Reddit is a trove of user-built content under the guise of community. What Spez did was say "thanks for all the free work, suckers!", put a price sticker on it, and laugh all the way to the bank.
And this is why I'm not active on any Internet community anymore. Never mind, I guess I just can't help myself...
Considering some of the very wrong yet upvoted domain-specific knowledge I've seen on Reddit over the years, I'm not sure the training data is going to be useful for much beyond what every other model can already do.
Of all the things to hate Reddit for, giving data to AI isn't one fediverse users can really criticize it for; making money from it, perhaps.
Remember: all data on federated platforms is available for free and is likely already being compiled into datasets. Don't be surprised if this post and its comments end up in GPT-5 or GPT-6 training data.
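To make that concrete, here's a rough sketch of how little effort it takes to collect public fediverse posts through Mastodon's standard timeline API (the instance name is a placeholder; a few instances require auth for this endpoint, most don't):

```python
# Sketch: collecting public posts from a Mastodon-compatible instance.
# "mastodon.example" is a placeholder; many instances serve this endpoint without auth.
import requests

def fetch_public_timeline(instance: str, limit: int = 40) -> list[dict]:
    """Grab recent public statuses (author + HTML content) from an instance."""
    resp = requests.get(
        f"https://{instance}/api/v1/timelines/public",
        params={"limit": limit, "local": "true"},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"author": s["account"]["acct"], "html": s["content"]}
        for s in resp.json()
    ]

if __name__ == "__main__":
    for status in fetch_public_timeline("mastodon.example", limit=5):
        print(status["author"], "--", status["html"][:80])
```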
I wish there were a GPL-like license for content stating that if you use the content to train generative AI, the model must be open source. I'm not sure that would be legally enforceable, though, due to fair use.
I'm not certain about this, but I think LLMs are a technological dead end. They might get some use now, but eventually the industry will shift towards better models for machine text generation. And if those models rely on a tiny corpus of hand-reviewed data instead of shoving as much text as possible into the model (the first "L" in "LLM" stands for "large"), then Reddit posts and comments will become outright useless.
In other words: Reddit is further degrading its userbase's trust, and it might not even get much in return.
Good thing I had multiple bots overwrite my content before I deleted it all. Not that someone couldn't recover it; I'm not naive. But the AI bots should miss me.
I feel like AI companies have been scraping Reddit for their datasets since the beginning, without permission. In fact, unless there's been a regulatory change I'm not aware of, I'm not sure why they would have Reddit "sign away" the data when they can just scrape it.
I'm also dubious that the current form of AI has a future. It seems like it should revolutionize every sector when you look at its capabilities, but in practice its applications might be more limited than we thought.
Anyway, if Reddit does go public, I will be deleting my account within the hour. The only reason I haven't yet is that I've been a moderator of the same subreddit for eight years, and it's the only thing that's been consistent in my life in that time; I'm kind of attached. The reason I will is that I didn't sign up to create value for shareholders, I signed up to create value for a community.
They say it’s $60 million on an annualized basis. I wonder who’d pay that, given that you can probably scrape it for free.
Maybe it's the AI Act in the EU; that might cause trouble in that regard. The US is seeing a lot of rent-seeker PR too, of course, which might cause some to hedge their bets.
Maybe some people haven't realized it yet, but limiting fair use doesn't just benefit the traditional media corporations; it also benefits the likes of Reddit, Facebook, Apple, etc. Making "robots.txt" legally binding would only benefit the tech companies.
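Worth remembering that robots.txt is currently nothing more than a voluntary convention: the crawler decides whether to even look at it. A rough sketch of that voluntary check (GPTBot and CCBot are real crawler user agents; example.com is a placeholder):

```python
# Sketch: robots.txt is a voluntary convention -- the crawler chooses whether to check it.
# GPTBot (OpenAI) and CCBot (Common Crawl) are real crawler user agents; example.com is a placeholder.
from urllib.robotparser import RobotFileParser

def allowed_to_crawl(site: str, path: str, user_agent: str) -> bool:
    """Return True if the site's robots.txt permits user_agent to fetch path."""
    rp = RobotFileParser()
    rp.set_url(f"https://{site}/robots.txt")
    rp.read()  # fetch and parse the file
    return rp.can_fetch(user_agent, f"https://{site}{path}")

if __name__ == "__main__":
    for agent in ("GPTBot", "CCBot", "*"):
        print(agent, allowed_to_crawl("example.com", "/some/post", agent))
```

Nothing stops a scraper from skipping that check entirely, which is exactly why making it legally binding would mostly hand leverage to the platforms hosting the content.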
Just like that? No thought or anything put into what makes good vs bad training data?
Good luck lmfao.
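To be fair, even a naive filtering pass counts as "thought", though not much. Purely illustrative sketch with made-up thresholds; real pipelines use far more elaborate heuristics, and upvotes still don't track accuracy:

```python
# Naive sketch of "good vs bad" training-data filtering; the thresholds are invented
# for illustration and say nothing about whether a comment is actually correct.
import hashlib

def filter_comments(comments: list[dict]) -> list[dict]:
    """Keep comments that clear basic score/length checks and drop exact duplicates."""
    seen_hashes: set[str] = set()
    kept = []
    for c in comments:
        text = c.get("body", "").strip()
        if c.get("score", 0) < 5:            # hypothetical: drop low-karma comments
            continue
        if len(text) < 40 or text == "[deleted]":
            continue
        digest = hashlib.sha256(text.lower().encode()).hexdigest()
        if digest in seen_hashes:            # exact-duplicate removal only
            continue
        seen_hashes.add(digest)
        kept.append(c)
    return kept

if __name__ == "__main__":
    sample = [
        {"body": "[deleted]", "score": 120},
        {"body": "Confidently wrong but heavily upvoted advice about tax law.", "score": 300},
        {"body": "Confidently wrong but heavily upvoted advice about tax law.", "score": 12},
    ]
    print(len(filter_comments(sample)), "of", len(sample), "kept")
```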
Makes you wonder how hard it would be to clog up the training data with outputs from other AI models, to bake in that echo defect they all seem to have to some extent, as fast as possible. Wouldn't that suck!