• Motavader@lemmy.world · 1 year ago

    I’m absolutely on the side of the artists here, but I do wonder if the AI company’s defense will be that the software is no different from an artist drawing inspiration from earlier works. Every art student studies the masters and has assignments to produce works in their style, and current artists have absolutely been influenced by their contemporaries. No one evolves their creative style in a vacuum: that’s impossible, short of living on a deserted island.

    But this is a fundamentally different problem, since the AI can produce millions of tailored works quickly, replacing vast numbers of creatives and threatening their livelihoods. That’s not as much of a concern with individual artists creating one-off works similar to something they saw earlier (even if the underlying concept is the same).

    This is going to be a really interesting legal case.

      • Chee_Koala@lemmy.world · 1 year ago

        My experience with image AI gave me almost the exact opposite feeling; it’s more like it somehow pinpoints the important aspects of a certain style or artist and can then just jam with that limitlessly (DALL-E, in this case). How did you find it closer to tracing? Did you play around with any of the image AIs?

      • LoafyLemon@kbin.social · 1 year ago

        That’s not true at all. AI uses latent noise as a medium to draw images; there’s nothing left of the original images from its training dataset.
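
        To make that concrete, here is a rough toy sketch of the idea (not any real model’s code; the “denoiser” below is an invented stand-in for the trained network): generation starts from pure random noise and repeatedly refines it, so the output is driven by learned weights rather than by any stored copy of a training image.

        ```python
        import numpy as np

        # Toy stand-in for a diffusion denoiser: it only nudges the current
        # "latent" toward smoother values. A real model replaces this with a
        # trained neural network that predicts the noise to remove each step.
        def toy_denoise(x, strength=0.1):
            blurred = (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0) +
                       np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1)) / 4.0
            return (1 - strength) * x + strength * blurred

        rng = np.random.default_rng(seed=42)
        latent = rng.standard_normal((64, 64))   # start from pure random noise

        for step in range(50):                   # iteratively refine the noise
            latent = toy_denoise(latent)

        # The result comes from the starting noise plus the (toy) denoiser;
        # no stored picture is pasted in at any point.
        print(latent.shape, round(float(latent.mean()), 4))
        ```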

        • Peanut@sopuli.xyz · 1 year ago (edited)

          Legitimately, it’s like these people have no understanding of the actual technology.

          The other response you’ve received talked about a very small subset of overtrained images, which makes sense as to why they can be replicated: any model trained on a specific image a million times over would be able to replicate that image easily. Even then, it takes a lot of luck and effort to accurately reproduce the exact image to any degree.
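
          As a hedged toy analogy (ordinary curve fitting, not an actual diffusion trainer, and every number invented), this is roughly why a sample repeated a huge number of times gets reproduced almost exactly, while everything else only shapes a general trend:

          ```python
          import numpy as np

          rng = np.random.default_rng(0)

          # A small "training set": 50 ordinary samples plus ONE sample duplicated 1,000 times.
          x_normal = rng.uniform(-1, 1, 50)
          y_normal = np.sin(3 * x_normal) + rng.normal(0, 0.1, 50)

          x_dup = np.full(1000, 0.5)     # the heavily repeated ("overtrained") example
          y_dup = np.full(1000, 2.0)     # an outlier value the fit will memorize

          x = np.concatenate([x_normal, x_dup])
          y = np.concatenate([y_normal, y_dup])

          # Fit a flexible model (here just a polynomial) by least squares.
          model = np.poly1d(np.polyfit(x, y, deg=9))

          print("at the duplicated point:", round(model(0.5), 3))   # close to 2.0 (memorized)
          print("at an ordinary point:   ", round(model(-0.5), 3))  # follows the general trend
          ```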

          If you are not specifically trying to recreate an overly popular image, then there is practically no element of any particular image left that could be considered stolen to any meaningful extent.

          Considering that it is effectively acting on a pareidolia-like interpretation of static, shaped by countless possible prompt and setting combinations, the copyright issue should only really be relevant when people use the tool specifically to recreate a particular work. Literally any other paint program would be more effective for that style of theft.

          As an artist, with regard to the pareidolia aspect, I do virtually the same thing when illustrating an image. Disney/Warner can already afford as many peasants as they need to learn or recreate whatever styles they want. I can’t afford a team of lackeys. I can, however, use an open source diffusion model to create entirely unique, personally tailored and designed illustrations that suit my artistic objective.

          The existing concept of copyright does not work for this scenario, and if people should argue anything, it should be that wealthy businesses specifically bear much more restriction and responsibility in their use of these tools and in their excessive control of the artistic market.

          I’m personally excited for a future where peasant artists can also create complex beautiful works using these tools.

          Think about ending up with a holodeck level of personal creative freedom, and being able to create things in that experience that you can share with others.

          The current system already robs and suppresses actual art.

          Just like every other aggressive reaction to AI, the focus is misdirected and not actually helpful for anyone in any way.

        • DekkerNSFW@kbin.social · 1 year ago

          There’s usually nothing left of the original image. But sometimes a specific image pops up in the dataset more often and gets overtrained, which is why you can get a pretty close copy of the Starry Night from vanilla SD. But yeah, it’s not tracing.
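
          If you want to poke at that yourself, here is a rough sketch using the Hugging Face diffusers library. The checkpoint ID, prompt, seed and settings are just examples, and how close the output gets to the real painting varies a lot between checkpoints, seeds and prompts.

          ```python
          # Rough sketch with Hugging Face diffusers; the checkpoint ID and settings
          # are examples only, and results vary by checkpoint, seed and prompt.
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",        # example "vanilla" SD checkpoint
              torch_dtype=torch.float16,
          ).to("cuda")

          generator = torch.Generator("cuda").manual_seed(1234)  # fixed seed for repeatability

          image = pipe(
              "The Starry Night by Vincent van Gogh",
              num_inference_steps=30,
              guidance_scale=7.5,
              generator=generator,
          ).images[0]

          image.save("starry_night_probe.png")
          # Compare the saved file against the actual painting to judge how much,
          # if anything, the model has memorized.
          ```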

          • FaceDeer@kbin.social · 1 year ago

            Those instances are considered a flaw and trainers work hard to prevent them. When they do occur you have to know they’re in there in order to dredge them back out.
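
            One common mitigation is deduplicating the training set before training; here is a minimal sketch of the idea using a simple perceptual average hash. The folder name, hash size and distance threshold are arbitrary, and real pipelines use far more sophisticated near-duplicate detection.

            ```python
            # Minimal sketch of near-duplicate filtering before training.
            # Folder name, hash size and threshold are arbitrary examples.
            from pathlib import Path

            import numpy as np
            from PIL import Image

            def average_hash(path, size=8):
                """Tiny perceptual hash: downscale, grayscale, threshold at the mean."""
                img = Image.open(path).convert("L").resize((size, size))
                pixels = np.asarray(img, dtype=np.float32)
                return (pixels > pixels.mean()).flatten()

            def hamming(h1, h2):
                return int(np.count_nonzero(h1 != h2))

            seen = []    # hashes of images already accepted
            kept = []    # paths that will actually be used for training

            for path in sorted(Path("training_images").glob("*.jpg")):  # hypothetical folder
                h = average_hash(path)
                # Treat anything within a small Hamming distance as a near-duplicate.
                if any(hamming(h, prev) <= 5 for prev in seen):
                    continue
                seen.append(h)
                kept.append(path)

            print(f"kept {len(kept)} unique-ish images for training")
            ```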

      • shuzuko@midwest.social · 1 year ago

        Yes, at best the AI works would still be infringing derivative works. If a human made that art and tried to make money off it, courts would almost assuredly say it lacked “sufficient transformative creative effort” to allow it to be copyrighted itself or to protect it from being considered an infringement. There’s a big difference between “inspired by” and “trying to copy”.

        Further, if all these works were being used for non-commercial purposes, like, just to print and hang up in their homes or something, it would still suck for artists (because they would lose the individual end-sale market) but it wouldn’t be nearly as harmful. The big problem is that people and corporations are currently trying to use AI art to sidestep paying creatives for their work and then using that AI generation for commercial purposes or to loophole the art out of things like Patreon. It’s a deliberate attempt to deprive hardworking creatives of the money they are due for their work.

    • SlopppyEngineer@lemmy.world · 1 year ago

      Looking at it a bit simplistically: ask the AI to produce a number of pictures and videos in the style of Disney, and both you and the AI builder will get slammed with a lawsuit. Copyright still matters if you’re big enough.

      I’m sure they could also develop an AI to analyze the similarity between works and pay a small royalty to the author(s) based on the ratio of that similarity above a certain cutoff, but before that happens, someone big enough needs to sue first.
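
      Just to make that idea concrete, here is a toy sketch of such a scheme: compare an embedding of the generated work against a catalog, keep only similarities above a cutoff, and split a royalty pool in proportion. The embeddings, cutoff and amounts are entirely made up.

      ```python
      import numpy as np

      def cosine(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      # Made-up embeddings standing in for some learned representation of each work.
      rng = np.random.default_rng(7)
      catalog = {f"artist_{i}": rng.standard_normal(512) for i in range(5)}

      # Pretend the generated work borrows heavily from artist_0 and a bit from artist_1.
      generated = (0.7 * catalog["artist_0"] + 0.3 * catalog["artist_1"]
                   + 0.1 * rng.standard_normal(512))

      CUTOFF = 0.2          # arbitrary similarity threshold
      ROYALTY_POOL = 1.00   # e.g. one dollar per generated work, also arbitrary

      scores = {name: cosine(generated, emb) for name, emb in catalog.items()}
      eligible = {name: s for name, s in scores.items() if s > CUTOFF}

      total = sum(eligible.values())
      payouts = {name: round(ROYALTY_POOL * s / total, 3) for name, s in eligible.items()}

      print(payouts)   # each eligible author gets a share proportional to similarity
      ```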

    • thebestaquaman@lemmy.world · 1 year ago

      I would argue that it is not the work produced by the AI, but the trained model itself, that infringes on copyright.

      The model cannot be regarded as an artist, but rather as a product, commercial or otherwise, that has been created by stealing copyrighted work.