It feels like a new privacy threat has emerged in the past few years, and this year especially. I kind of think of the privacy threats over the past few decades as coming in waves:

  1. First we were concerned about governments spying on us. The way we fought back (and continue to fight back) was through encrypted and secure protocols.
  2. Then we were concerned about corporations (Big Tech) taking our data and selling it to advertisers to target us with ads, or otherwise manipulate us. This is still a hard battle being fought, but we’re fighting it mostly by avoiding Big Tech (“De-Googling”, switching from social media to communities, etc.).
  3. Now we’re in a new wave. Big Tech is now building massive GPTs (ChatGPT, Google Bard, etc.) and it’s all trained on our data. Our reddit posts and Stack Overflow posts and maybe even our Mastodon or Lemmy posts! Unlike with #2, avoiding Big Tech doesn’t help, since they can access our posts no matter where we post them.

So for that third one…what do we do? Anything that’s online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you’ve provided? If you do care, do you think there’s any reasonable way we can fight back? Can we poison their training data somehow?

    • duncesplayed@lemmy.one (OP) · 6 points · 1 year ago

      I have a similar kind of idea. I think if it had been a free/open-source/community project that made the headlines, I would have been all like “this is so awesome”.

      I guess what I don’t like is the economic system that makes that impractical. In order to build one of those giant GPTs, you need tonnes of hardware (capital), so the community projects are always going to be playing catchup, and quite serious catchup in this arena. So the economic system requires that instead of our posts going to a “collective hive mind” that aids human knowledge, they go to some walled garden owned by OpenAI, which filters and controls it for us and gives us little bits of access to our own data, as long as we use it only in approved ways (i.e., ways that benefit them).

      • IDe@lemmy.one · 9 points · edited · 1 year ago

        Most of the data used in training GPT-4 was gathered through open initiatives like Wikipedia and CommonCrawl, both of which are freely accessible to anyone. As for building datasets and models, there are many non-profits like LAION and EleutherAI involved that release their models for free for others to iterate on.

        While actually running the larger models at a reasonable scale will always require expensive computational resources, you only need to do the expensive base-model training once. So the cost is not nearly as high as one might first think.
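
        As an illustration, anyone can pull one of those freely released base models and build on it without redoing the pretraining. Here’s a minimal sketch using the Hugging Face transformers library (the model size and prompt are just illustrative choices):

        ```python
        # Minimal sketch: reusing a freely released base model instead of
        # training one from scratch. Assumes `pip install transformers torch`.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # EleutherAI publishes its models openly on the Hugging Face Hub;
        # the 125M-parameter GPT-Neo is small enough to run on a laptop CPU.
        model_name = "EleutherAI/gpt-neo-125m"
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)

        # Generate a continuation; the expensive pretraining is already done.
        inputs = tokenizer("Open datasets like CommonCrawl are", return_tensors="pt")
        outputs = model.generate(inputs.input_ids, max_new_tokens=40, do_sample=True)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))
        ```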

        Any head start OpenAI may have gotten is quickly diminishing, and it’s not like they actually have any super-secret sauce behind the scenes. The situation is nowhere near as bleak as you make it sound.

        Fighting against the use of publicly accessible data is ultimately the same kind of self-sabotaging Luddism as fighting against encryption.

    • Uniquitous@lemmy.one · 3 points · 1 year ago

      I agree. I’ve always thought that AI would be our successor species, humanity’s child. I like that I might be some small part of its heritage.

    • OrthoStice@feddit.it · 2 points · 1 year ago

      Interesting point of view; I never thought of it that way. While the idea of the “collective hive mind” is really cool, I despise the idea that a big corporation is going to profit from it.

    • FirstPaladin@lemmy.one · 2 points · 1 year ago

      Exactly. If you’re posting something on the internet for the world to see, you can’t get upset when people, or in this case AI, read it.

  • GreyBeard@lemmy.one · 15 points · 1 year ago

    I’ve been posting publicly for years. I expect that anything I post can be viewed and used by anyone, at any time, for anything. AI hasn’t changed that.

  • bpudding@lemmy.one · 14 points · 1 year ago

    Regardless of how anyone feels about their writing being used for model training, there’s definitely nothing anyone can do to prevent it other than just not writing anything visible to the public.

    • Nero@lemmy.one · 4 points · 1 year ago

      Not yet, I think. If AI is regulated more strictly, users might get the chance to set permissions on their data, however that will end up looking. I hope it’s better than the cookie opt-out or do-not-track setting in your browser, though.
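
      One early shape such permissions are taking is crawler opt-outs in robots.txt; OpenAI, for example, documents a “GPTBot” user agent that sites can disallow. A rough sketch of checking such an opt-out with Python’s standard library (the domain is a placeholder):

      ```python
      # Rough sketch: checking whether a site opts out of an AI crawler
      # via robots.txt, using only the Python standard library.
      from urllib import robotparser

      rp = robotparser.RobotFileParser()
      rp.set_url("https://example.com/robots.txt")  # placeholder domain
      rp.read()

      # "GPTBot" is OpenAI's documented crawler user agent; a compliant
      # crawler is expected to honor a Disallow rule for it.
      if rp.can_fetch("GPTBot", "https://example.com/some-post"):
          print("No opt-out found: a compliant crawler may fetch this page.")
      else:
          print("robots.txt disallows GPTBot: the site has opted out.")
      ```

      Like do-not-track, though, this only works if crawlers volunteer to honor it.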

  • jonah@lemmy.one (mod) · 14 points · 1 year ago

    The biggest problem to me is what I just saw you post in another reply, that these models built upon our knowledge exist almost solely within proprietary ecosystems.

    “and maybe even our Mastodon or Lemmy posts!”

    The Washington Post published a great piece that lets you search which websites were included in the “C4” dataset published in 2019. I searched for my personal blog jonaharagon.com and sure enough it was included, and the C4 dataset is practically minuscule compared to what is being compiled for larger models like ChatGPT. If my tiny website was included, Mastodon and Lemmy posts (which are actually very visible and SEO-optimized, tbh) are 100% being scraped as well; there’s no maybe about it.
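
    If you’d rather check yourself instead of relying on the Post’s tool, the C4 dataset is public on the Hugging Face Hub and can be scanned in streaming mode. A rough sketch (a full scan takes a long time; the cutoff below is arbitrary):

    ```python
    # Sketch: scanning the public C4 dataset for a given domain.
    # Assumes `pip install datasets`; streaming avoids downloading the
    # full corpus (hundreds of GB) up front.
    from datasets import load_dataset

    ds = load_dataset("allenai/c4", "en", split="train", streaming=True)

    domain = "jonaharagon.com"  # the domain searched for above
    for i, record in enumerate(ds):
        if domain in record["url"]:  # each C4 record keeps its source URL
            print("Found:", record["url"])
            break
        if i >= 1_000_000:  # arbitrary cutoff so this sketch terminates
            print("Not found in the first million records")
            break
    ```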

    • Schedar@beehaw.org · 5 points · 1 year ago

      Thanks for linking to that, I hadn’t seen that article before. It’s interesting seeing it broken down like that and being able to search for a website to see if it was part of the training data.

  • cdiv@lemmy.blahaj.zone · 14 points · 1 year ago

    If they want to train their artificial stupidity model on my posts, go for it. If they’re looking for artificial intelligence, on the other hand, they might want a smarter dataset.

  • mainfrog@beehaw.org · 13 points · 1 year ago

    It depends on whether the data is suitably anonymized or not. If my data can’t be reconstructed word for word in a way that directly links back to me? I don’t know that I’d mind that any more than I’d mind someone reading content I wrote and taking inspiration from it.

    On the topic of privacy - how do people feel Lemmy compares to Reddit for privacy? I don’t really like the way Lemmy handles deleted content for example.

  • manitcor@lemmy.intai.tech · 8 points · 1 year ago

    Anything you post publicly is always going to be fair game, but we also give up way more data than we should. I’m worried less about my social posts and more about all the breached data AIs are going to run with.

  • BacardiT@lemmy.one · 8 points · 1 year ago

    I’m okay with it as long as I’m aware of it. If the platforms are up front about it, then users can choose for themselves whether they want to potentially contribute to training data. It will be interesting to watch the next few years.

    • Sentau@lemmy.one · 2 points · 1 year ago

      How will we be able to choose whether we want to give access to our data when we don’t own the data in the first place (at least with how data ownership works now)?

  • Kalkaline@lemmy.one · 7 points · 1 year ago

    Do I care? Sure, a little: someone is going to get paid, and it’s not going to be me. But there’s nothing I can do about it, and my boss gets paid for my work too.

  • unfazedbeaver@lemmy.one · 5 points · 1 year ago

    I’m considering using Power Delete Suite to delete my account, overwrite my previous comments, and maybe leave a couple of my top comments up regarding tech support, so people can still find information on troubleshooting.
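
    For anyone curious, what tools like Power Delete Suite do is roughly this: edit each comment to junk first (so the overwritten text is what lands in any later scrape), then delete it. A sketch of that approach with the PRAW library; the credentials are placeholders:

    ```python
    # Sketch of the overwrite-then-delete approach used by tools like
    # Power Delete Suite. Assumes `pip install praw`.
    import praw

    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # placeholder credentials
        client_secret="YOUR_CLIENT_SECRET",
        username="YOUR_USERNAME",
        password="YOUR_PASSWORD",
        user_agent="comment-scrubber/0.1",
    )

    for comment in reddit.user.me().comments.new(limit=None):
        comment.edit(".")  # overwrite first, so the edit is what persists
        comment.delete()   # then delete the now-junk comment
    ```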

    • curioushom@lemmy.one · 2 points · 1 year ago

      The issue is that most of the content posted is archived fairly quickly. Deleting/rewriting only hurts the humans that might have gone looking for it. The way I look at it, if the data is searchable/indexable by search engines (as a proxy for all other tools) at any point in its life cycle, then it’s essentially permanent.
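
      You can see this for yourself: the Internet Archive exposes a public availability endpoint that reports whether a URL already has a saved snapshot. A quick sketch (the page URL is a placeholder):

      ```python
      # Sketch: checking whether a page already has an archived snapshot,
      # via the Internet Archive's public availability API.
      import json
      from urllib.parse import quote
      from urllib.request import urlopen

      page = "https://example.com/some-post"  # placeholder URL
      api = "https://archive.org/wayback/available?url=" + quote(page, safe="")

      with urlopen(api) as resp:
          data = json.load(resp)

      # If a snapshot exists, deleting the original no longer removes it.
      snapshot = data.get("archived_snapshots", {}).get("closest")
      print(snapshot["url"] if snapshot else "No snapshot found")
      ```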

      • unfazedbeaver@lemmy.one · 1 point · 1 year ago

        That’s all true. The idea isn’t to remove yourself from the internet; once you post to the internet, it’s there forever. No, what I’m proposing is to hurt Reddit’s chances of being a viable first-party resource for training AI.

        • curioushom@lemmy.one · 1 point · 1 year ago

          Unless you’re able to compel a platform to remove your data through something like the EU’s right to be forgotten, the data will remain (in training sets or otherwise). If third parties are able to archive your data, Reddit will surely have access to its own archival data and will use both the original and the edited content for training, letting machine learning sort it out.

          I’m not saying this to be a defeatist; we need better data ownership and governance laws. Retroactively obfuscating the data will not serve that purpose and provides a false sense of control, which I contend is worse.

  • CatherineHuffman@burggit.moe · 5 points · 1 year ago

    I mean, the Internet Archive is already scraping this data, so if these companies want my data they can get it from there, unfortunately. Although when possible I will set auto-delete for 2 weeks to make it harder to find 😃

  • SirSauceLordtheThird@lemmy.one · 5 points · 1 year ago

    I’m conflicted, because on one hand I’d like my data left alone. On the other, I realize how important Reddit posts are for tech issues and other troubleshooting topics. When I try to fix my Linux issues, for example, the most helpful results are from Reddit.

  • DevCat@lemmy.world · 4 points · 1 year ago

    GIGO: garbage in, garbage out. I asked ChatGPT to write a short essay and include a bibliography with URLs. Every URL was a 404, and when I looked up the bibliographic entries, they were nonexistent as well.

    • Limivorous@lemmy.one · 8 points · 1 year ago

      That’s because you don’t understand the tool you’re using, and you use tech-sounding language in the wrong context to look like you do.

      GPT models generate text based on the patterns of tokens they learned during training. The URLs they give you don’t work because they only have to look legit. It’s all statistical patterns.

      It’s not because they fed it garbage during the semi-supervised training; it’s because that is literally what the tool is meant to do. Use the right tool, like Google Scholar, if what you need is sources.
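
      To make that concrete, here’s a toy illustration (a hand-made probability table, not GPT’s actual model): sampling token-by-token from learned frequencies happily produces URL-shaped strings that point at nothing.

      ```python
      # Toy illustration of why generated URLs only *look* legit: text is
      # sampled token-by-token from learned probabilities, with no check
      # that the result names a real resource. (Hand-made table, not GPT.)
      import random

      # Hypothetical next-token probabilities "learned" from training data.
      next_tokens = {
          "https://": [("www.", 0.6), ("journals.", 0.4)],
          "www.": [("sciencedirect", 0.5), ("nature", 0.5)],
          "journals.": [("plos", 1.0)],
          "sciencedirect": [(".com/article/pii/S0123456789", 1.0)],
          "nature": [(".com/articles/s41586-021-0", 1.0)],
          "plos": [(".org/plosone/article?id=10.1371/journal.pone.02", 1.0)],
      }

      def sample_url() -> str:
          token = out = "https://"
          while token in next_tokens:
              choices, weights = zip(*next_tokens[token])
              token = random.choices(choices, weights=weights)[0]
              out += token
          return out  # URL-shaped and statistically plausible, but fake

      print(sample_url())
      ```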