28 comments
  • Simple solution: all AI output is copyleft.

    60
  • Typical corporate attitudes.

    You take from us, you have to pay us or we will come for you.

    We take from you…well, too bad.

    44
  • Having to pay for training data would rapidly skyrocket costs, making it impossible for open source projects or even smaller for-profit companies to survive. We are rapidly going to find ourselves in an AI-driven economy, and this would cement Google and Microsoft owning it.

    Not to mention that not a dime would go to individuals. Companies like Reddit, Getty, Adobe and Penguin have all the data, we already gave it to them a long time ago.

    They write strongly worded letters so we play right into their hands but the big AI companies are drooling at the thought of it. It would fuck us hard.

    27
  • Regardless of the AI companies, copyright has been broken for decades thanks to Disney. The system has been redesigned to benefit and serve large corporations with more money than anyone else.

    12
  • And I agree with them. When I learn to paint or take a cool picture, I may learn from and be inspired by copyrighted materials. No one asks successful artists to audit the training materials that inspired them. But start telling AI companies they must do that, and I guarantee the precedent will be set to go after a human for learning from them. Don’t you dare tell people who you were inspired by when you make it big in your craft.

    When I pay AI companies for anything, it’s not a proxy for copyrighted material; it’s for a service they provide: serving, processing, or training the model. We will still require artists and creative people, even if all they do is skillfully prompt an AI tool to render art. But doing only that will be banal and not the pinnacle of what can be achieved with AI-assisted art creation. Art will still require the toil and circumstance that it always has.

    Restricting AI from training on copyrighted materials is a vain and pointless exercise, but one of many that are meant to bring us to fear and loathe AI. It is one of many fears that the powerful want us to adopt… This is a technology that can and will lift us all if we can stop fearing it. But if we can’t do that, it won’t simply go away… It will only be driven into the bowels of the rich and powerful, so that they alone will benefit from it.

    All the shovel journalism out there has a very strong purpose… to scare us, so this great equalizer will not be open and free and accessible. Don’t let them do this.

    6
    • I was going to make this exact point. Very well said.

      If we start saying intelligence owes fees to its training data, we're basically saying humans are restricted by the licenses of the things they've been exposed to.

      It's only a matter of time until artificial intelligence matches biological intelligence, and if the precedent is set now, it's going to make the future very sticky.

      -1
    • So tired of this bs argument.

      When I learn to paint

      ... you will never be able to generate millions of paintings per day, so why even pretend it's relevant here?

      -4
      • Your argument is tired. Have you ever simply prompted a generative txt2img model and told it to make 100 or even 200 images in a batch? You might have 1 or 2 that shine and are interesting without any touch-up. But almost every one will require inpainting, Photoshop work, or other creative modifications to be worth a damn. And even then some won’t be.

        Like I said in my comment. It will be banal without real creativity. It doesn’t even take millions of “paintings” to get there. No one will care about cheaply manufactured junk after the novelty wears off. We will demand more than that.

        Ultimately it will be a tool that extends all our creativity. It already is. But if we fear it because of arguments like yours then laws will be made to keep it out of the hands of the common plebe. But it won’t disappear. You can bet your ass it won’t. It will just be used in dark places by powerful people, and not just for banal image prompting. And then you can fear it rightfully.

        4
      • @admin

        It was a hypothetical, I was just using myself as an example. Here's one that's not hypothetical:

        I'm already practiced in 3D modelling, UV unwrapping, texturing, lighting, rendering, compositing, etc. I could recreate a painting, pixel for pixel, in 3D space.

        If I just hit render, is that my art now? It took a lot of research to learn how to do this, I should be able to make money on that effort, right?

        I can do that millions of times and get the same result. I can set it on a loop and get as many as I want. It's the same as copying the first render's file, it just takes longer.

        Now I decide to change the camera angle. Almost the entire image is technically different now, but the composition is the same. The colors, the subjects, relative placement in the scene, all the same, but it's not really the same image anymore. Is it mine yet?

        I can set the camera to a random X,Y,Z position, and have it point at a random object in the scene (so it never points off into blank space). Are those images mine? It's never the same twice, but it still has the original artist's style of subjects and lighting. I can even randomize each subject's position, size, hue, direction, add a modifier that distorts them to be wobbly or cubic... I can start generating random objects and throwing them in too; let's call those "hallucinations", that's a fun word...
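        That randomize-the-camera loop is trivial to write. Here's a minimal plain-Python sketch (hypothetical names, no real 3D API) of picking a random camera position and aiming it at a random subject:

```python
import math
import random

def random_camera(subjects, bounds=10.0):
    # Random X,Y,Z position inside the scene bounds.
    pos = tuple(random.uniform(-bounds, bounds) for _ in range(3))
    # Aim at a randomly chosen subject so the view never
    # points off into blank space.
    target = random.choice(subjects)["position"]
    direction = tuple(t - p for p, t in zip(pos, target))
    length = math.sqrt(sum(c * c for c in direction))
    look_at = tuple(c / length for c in direction)  # unit look-at vector
    return pos, look_at

scene = [
    {"name": "subject_a", "position": (0.0, 0.0, 0.0)},
    {"name": "subject_b", "position": (2.0, 1.0, -3.0)},
]
pos, look_at = random_camera(scene)  # a new, unique framing every call
```

        Hook that up to a renderer's camera and a loop, and you get endless "unique" images with zero artistic decisions made per frame.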

        At what specific point in this madness does the imagery go from someone else's work to mine?

        I absolutely can generate millions of unique images all day. Without using machine learning, based on work I recreated with my own human hands, and code I write uniquely from my experience and abilities. None of the work - artistically - is mine. I made no decisions on composition, style, meaning, mood, color theory, etc.

        You may want to try to write these questions off, but I can tell you with certainty that other artists won't.

        4
    • @burliman

      You can prompt an image generator to just spit out the original art it trained on.

      Imagine I had been classically trained as a painter. I study works from various artists. I become so familiar with those works - and skilled as a renderer of art in my own right - that I can reproduce, say, the Mona Lisa from memory with exacting accuracy. Should I be allowed to claim it as my art? Sign my name to it? Sell it as my own?

      Now let's say we compare the original and my work at the micron level. I'm human; there's no way I can match the original stroke for stroke, bristle to bristle. However small, there are differences. When does the work become transformative?

      Let's switch to an image generator. I ask for a picture of a smiling woman, renaissance style. The model happens to be biased toward da Vinci, and it spits out almost exactly the same work as the Mona Lisa. Let's say as a prompt engineer, I've never heard of or seen the Mona Lisa. I take the image, decide "meh, good enough for what I need right now", and use it in some commercial product (say, a t-shirt). Should I be able to do that? What if it's not the Mona Lisa, but a work from a living artist?

      What if it's not an image? Say I tell some model to make a song and it accidentally produces Green Day's Basket Case (which itself is basically just a modified Pachelbel's Canon); can I put that on a record and sell it? Whose responsibility is it to make sure that a model's output is unique or transformative? Shit, look at all the legal cases where musicians are suing other musicians because the chord progression is similar in two songs. What happens when it's exactly the same because the prompt engineer for a music generation model isn't paying attention?

      You might have noticed that I haven't referred to this technology as AI. That's because it's not. It's Machine Learning. It has no intelligence. It neither seeks to create beautiful, original art, nor does it intend to rip someone off. It has no plans, no aspirations, no context, no whims. It's a parrot, spitting out copies of things we ask it for. In general, these outputs are mixtures of various things, but sometimes they aren't. They just output some of the training data, because that's the output that - statistically - was the best match for the prompt.

      As an artist myself, I don't fear machine learned models. I fear that these greedy fuckin' companies will warehouse any and every bit of data they can get their hands on, train their models on other people's work, never pay them a dime, and rip off the essence of their art without any regard for what will happen to the original artists after some jackass execs tell all their advertising/webdesign/programming/scriptwriting/etc departments to just ask the "AI" to "design" everything.

      You can already see this happening with game studios. Writers went on strike over it.

      -5
      • You can prompt an image generator to just spit out the original art it trained on.

        This is a common misconception. It's not true, except in the extremely rare case of "overfitting" that all AI trainers work very hard to avoid. It's considered a bug, because why would anyone spend millions of dollars and vast computer resources poorly replicating what can be accomplished with a simple copy/paste operation? That completely misses the point of all this.

        If an AI art generator spits out copies of its training data, it's a failure of an AI art generator.

        5
      • You can prompt an image generator to just spit out the original art it trained on.

        This is incorrect. Have you tried doing it?

        That's not how AI works. It's not magic, nor does it create "copies". It creates entirely original works, with influences from other works (similar to what other humans do).

        4
  • It’s an interesting question. To me, it only makes sense that AI companies should respect artist copyrights - especially since AI’s purpose is to replace and minimize/eliminate the need for artists. On the other hand, licensing fees would quickly add up and be absolutely enormous. Only the biggest, wealthiest corporations (the ones we love to hate) could afford to invest in AI. Small, new companies won’t be able to afford it.

    (Sorry if this is covered in the article. I haven’t read it yet. It’s late and I’m falling asleep!)

    Edit: okay, now I’ve read it, and the situation is about as bad as I was expecting.

    Meta’s argument that copyright holders wouldn’t get much money anyway makes me want to punch someone. It’s about respecting creators, not just money, you dipshits! Congrats on missing the point!

    To balance things out, we’ve got Andreessen Horowitz crying “won’t someone please think about the billionaires?!?” That one made me laugh.

    Adobe gets points for actually citing case law. I still don’t agree with their reasoning, but I appreciate the effort to keep the discussion professional.

    6
    • It’s about respecting creators

      Is it, though? Copyright holders and creators are completely different things.

      Before you can pay those copyright holders their capital income, you have to know who they are. Which means you can't just download random pictures off the internet. You need pictures with a known provenance. Well, it turns out that there are corporations dedicated to providing just such pictures. How lucky for them if society were to choose to "respect creators" in this way. The payment to even a prolific stock photographer may be tiny, but they'd get a cut from each one.

      It may not be about money for you, but the people who pay to push that talking point may have a different attitude.

      1
      • That’s a good point. You’re entirely correct. I had a much simpler idea in mind - I was only thinking of small, independent artists who posted their images online and were the copyright holders of their own work.

        1
  • This is the best summary I could come up with:


    The US Copyright Office is taking public comment on potential new rules around generative AI’s use of copyrighted materials, and the biggest AI companies in the world had plenty to say.

    We’ve collected the arguments from Meta, Google, Microsoft, Adobe, Hugging Face, StabilityAI, and Anthropic below, as well as a response from Apple that focused on copyrighting AI-written code.

    There are some differences in their approaches, but the overall message for most is the same: They don’t think they should have to pay to train AI models on copyrighted work.

    The Copyright Office opened the comment period on August 30th, with an October 18th due date for written comments regarding changes it was considering around the use of copyrighted data for AI model training, whether AI-generated material can be copyrighted without human involvement, and AI copyright liability.

    There’s been no shortage of copyright lawsuits in the last year, with artists, authors, developers, and companies alike alleging violations in different cases.

    Here are some snippets from each company’s response.


    The original article contains 168 words, the summary contains 168 words. Saved 0%. I'm a bot and I'm open source!

    5