Now we are facing an unprecedented growth of AI as a whole. Do you think it is time for the FSF to elaborate a new version of the GPL to incorporate the new challenges of AI in software development and keep protecting users' freedom?
I keep saying "no" to this sort of thing, for a variety of reasons.
"You can use this code for anything you want as long as you don't work in a field that I don't like" is pretty much the opposite of the spirit of the GPL.
The enormous companies slurping up all content available on the Internet do not care about copyright. The GPL already forbids adapting and redistributing code without licensing under the GPL, and they're not doing that. So another clause that says "hey, if you're training an AI, leave me out" is wasted text that nobody is going to read.
Making "AI" an issue instead of "big corporate abuse" means that academics and hobbyists can't legally train a language model on your code, even if they would otherwise comply with the license.
The FSF has never cared about anything unless Stallman personally cared about it on his personal computer, and they've recently proven that he matters to them more than the community, so we probably shouldn't ever expect a new GPL.
The GPL has so many problems (because it's been based on one person's personal focuses), problems the FSF either ignores or isolates into random silos (like the AGPL, as if the web were still a fringe thing), that AI barely seems relevant.
I mean, I get it. The language-model people are exhausting, and their disinterest in copyright law is unpleasant. But asking an organization that doesn't care to add restrictions to a license that the companies don't read isn't going to solve the problem.
The problem of recent AI is about fair use of data, not about copyright. To solve the AI problem, we need laws to stop abuse of data rather than to stop copying of code.
Some portion of the "data" fed into these models is copyrighted, though. GitHub's Copilot is trained on code. Does it violate the GPL to train an AI model on all the GPL source code published out there?
Too soon. The GPL is a license aligning prevalent copyright laws to some ideological goals. There are no prevalent copyright laws regarding AI yet, so there is nothing to base a copyright license on.
First step: introduce AI into copyright law (and pray The Mouse doesn't introduce it first).
It might be time to start thinking about it; however, it will depend on the consensus within the legal system on whether you need to provide attribution through AI.
There is already consensus, it just hasn't been concluded explicitly yet.
There is no "AI" and there's no "learning", so there's no new unbeaten path in law, like some would make you believe. LLMs are data processing software that take input data and output other data. In order to use the input data you have to conform to its licensing, and you can't hide behind arguments like "I don't know what the software is doing with the data" or "I can't identify the input data in the output data anymore".
LLM companies will eventually be found guilty of copyright infringement and they'll settle and start observing licensing terms like everybody else. There are plenty of media companies with lots of money with a vested interest in copyright.
The FSF is a non-working organization which refuses to let go of its horrible founder. I hoped it would move on, it didn't and refused to despite massive amounts of community backlash. I no longer believe they should have any role in representing the Free Software movement.
The GPL is a license made by the FSF, so I'm not sure who else could make a new version other than them. Other entities make their own licenses, which might or might not be compatible with the GPL.
Anyone can make a license and make it compatible with the GPL but yes, I forgot that the FSF doesn't allow modifications of the GPL which is pretty fucking weird thinking more about it.
As in to keep it from being used to train AI? I think GPL and especially AGPL already cover that, but nobody cares and FSF can't afford to litigate it.
GPLv3 already covers all of that.
Programs that train AI have normal licensing applied.
Programs that were modified by AI must be under the GPL too.
The neural network itself is not a program; it's a format and is always modifiable anyway, as there is no source code. You can take any neural network and train it further without the data it was trained on before.
There's also the fact that GPL is ultimately about using copyright to reduce the harm that copyright can cause to people's rights.
If we look through the cases that could exist with AI law:
Training can legally use copyrighted materials without a licence, but models cannot be copyrighted: This probably is a net win for software freedom - people can train models on commercial software even and generate F/L/OSS software quickly. It would undermine AGPL style protection though - companies could benefit from F/L/OSS and use means other than copyright to undermine rights, but there would be nothing a licence could do to change that.
Training can legally use copyrighted materials without a licence, models can be copyrighted: This would allow companies to benefit heavily from F/L/OSS, but not share back. However, it would also allow F/L/OSS to benefit from commercial software where the source is available.
Training cannot legally use copyrighted materials without complying with the licence, models cannot be copyrighted (or models can be copyrighted but outputs can't): This is probably the worst for F/L/OSS, because proprietary software couldn't be used for training, yet proprietary software could still use a model someone else trained on F/L/OSS.
Training cannot legally use copyrighted materials without complying with the licence, models can be copyrighted, outputs can be copyrighted: In this case, GPLv2 and GPLv3 probably make the model and its outputs a derivative work, so it is more or less the status quo.