I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Primarily0617@kbin.social
    link
    fedilink
    arrow-up
    220
    arrow-down
    8
    ·
    edit-2
    1 year ago

    it’s crazy that “it’s too hard :(” has become an acceptable justification for just ignoring the law within tech circles

    • BrianTheeBiscuiteer@lemmy.world
      link
      fedilink
      English
      arrow-up
      96
      arrow-down
      3
      ·
      1 year ago

      I’m not an AI expert, and I wouldn’t say it is too hard, but I believe removing a specific piece of data from a model is like trying to remove excess salt from a stew. You can add things to make the stew less salty but you can’t really remove the salt.

      The alternative, which is a lot of effort but boo-hoo for big tech, is to throw out the model and start over without the data in question. These companies would do well to start with models built on public or royalty free data and then add more risky data on top of that (so you only have to rebake starting from the “public” version).

      • Primarily0617@kbin.social
        link
        fedilink
        arrow-up
        48
        arrow-down
        1
        ·
        1 year ago

        sounds like big tech shouldn’t have spent the last decade investing in a kitchen refit so that they could make stew really well but nothing else

      • GoosLife@lemmy.world
        link
        fedilink
        English
        arrow-up
        31
        arrow-down
        1
        ·
        edit-2
        1 year ago

        If there’s something illegal in your dish, you throw it out. It’s not a question. I don’t care that you spent a lot of time and money on it. “I spent a lot of time preparing the circumstances leading to this crime” is not an excuse, neither is “if I have to face consequences for committing this crime, I might lose money”.

        • Quokka@quokk.au
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          1 year ago

          Fuck no.

          It’s illegal to be gay in many places, should we throw out any AI that isn’t homophobic as shit?

          • GoosLife@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            No, especially because it’s not the same thing at all. You’re talking about the output, we’re talking about the input.

            The training data was illegally obtained. That’s all that matters here. They can train it on fart jokes or Trump propaganda, it doesn’t really matter, as long as the Trump propaganda in question was legally obtained by whoever trained the model.

            Whether we should then allow chatbots to generate harmful content, and how we will regulate that by limiting acceptable training data, is a much more complex issue that can be discussed separately. To address your specific example, it would make the most sense for the chatbot to be guided towards a viewpoint that aligns with its intended userbase. This just means that certain chatbots might be more or less willing to discuss certain topics. In the same way that an AI for children probably shouldn’t be able to discuss certain topics, a chatbot that’s made for use in a highly religious area, where homosexuality is very taboo, would most likely not be willing to discuss gay marriage at all, rather than being made intentionally homophobic.

            • Quokka@quokk.au
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              The output only exists from the input.

              If you feed your model only on “legal” content, that would in many places ensure it had no LGBT+ positive content.

              Legality (and the dubious nature of justice systems) of training data is not the angle to be going for.

              • GoosLife@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 year ago

                You seem to think the majority of LGBT+ positive material is somehow illegal to obtain. That is not the case. You can feed it as much LGBT+ positive material as you like, as long as you have legally obtained it. What you can’t do is train it on LGBT+ positive material that you’ve stolen from its original authors. Does that make more sense?

                • Quokka@quokk.au
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  1 year ago

                  You do know being LGBT+ in many places is illegal, right? And can even carry the death penalty.

                  Legality is not important and we should not care if it’s considered legal or not, because what’s legal isn’t what’s right or ethical.

      • Grandwolf319@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        1
        ·
        1 year ago

        Replace salt with poison or an allergenic substance and it fully holds. If a batch has been contaminated, then yes, you should try again.

        But now that the cat is out of the bag, other companies are less willing to let their content be scrapable, given how valuable it can be.

        I think big tech knew this: they could only build these models on unfiltered data before the AI craze.

      • Tyfud@lemmy.one
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        1 year ago

        I work in this field a good bit, and you’re largely correct. That’s a great analogy of trying to remove salt from a stew. The only issue with that analogy is that it’s still technically possible to recover the salt by distilling the stew, even though doing so would destroy the stew.

        At the point that PII is in the model, it’s fully baked. It’d be like trying to get the eggs out of a baked cake. The chemical composition has changed into something else completely.

        That’s how building a model works today. Like baking a cake.

        In order to remove or even identify PII in ML models or LLMs today, we’d need a whole new way of baking a cake that would keep the eggs separate from the cake until just before you tried to take a bite out of it. The tools today don’t allow you to do anything like that. They bake you a complete cake.

      • Fushuan [he/him]@lemm.ee
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        Something to keep in mind is that yes, they would need to retrain the models from zero, but if they followed even basic good practice they should have backups and versioned copies of the data they used for training, so they could retrain everything on a subset of the original data. The optimizations they have already applied to the system could then be reapplied in the same manner, and the product should end up somewhat similar. Another option would be to design an untraining process, where you generate an input from the “must be deleted” data that, when trained on, acts as a sort of “negative input”, so the model ends up in the same place it would have ended up had it never been trained on the “must be deleted” data in the first place.

        I bet you that if governments act harshly enough, tech companies will develop some sort of “negative training”.

        In the end this is a solvable mathematical optimization problem: what input do I need to feed the already-trained model for it to become equivalent to the model that would have resulted from training without the requested data?

        We could even create an ML model that computes a “good enough” negative input from examples, since testing the quality of the results is quite simple and we can train it on pairs of trained models: feed it a base model, some input data, and the same base model trained without that data.

        All in all, AI companies will tell you this is very hard, because they would essentially be investing time and development into a tool that makes their model worse instead of better, so expect a lot of pushback.
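
        A minimal sketch of what that “negative training” step might look like, assuming a PyTorch-style setup (the model, data and hyperparameters below are purely illustrative): do gradient ascent on the loss for the data that must be forgotten while descending on the data that may be kept, one of the approaches explored in machine-unlearning research. Whether the result truly matches a model retrained from scratch is exactly the open question above.

        ```python
        import torch
        from torch import nn

        # Hypothetical toy model; stands in for whatever network was actually trained.
        model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
        loss_fn = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

        def unlearn_step(forget_batch, retain_batch):
            """One 'negative training' step: push the model away from data that must
            be deleted while preserving its behaviour on data it may keep."""
            (x_f, y_f), (x_r, y_r) = forget_batch, retain_batch
            optimizer.zero_grad()
            # Gradient ascent on the forget set (negated loss), descent on the retain set.
            loss = -loss_fn(model(x_f), y_f) + loss_fn(model(x_r), y_r)
            loss.backward()
            optimizer.step()
        ```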

    • Zeth0s@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      ·
      edit-2
      1 year ago

      It’s actually a pretty normal thing in law. Laws are created with common sense and compromises in mind.

      Currently, EU law does not cover generative AI. Now the EU needs to decide how to deal with it: treat it as a “lossy compressed database” and enforce a variation of the GDPR with added fuzziness, or do something else.

    • garyyo@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      1 year ago

      Always has been. The laws are there to incentivize good behavior, but when the cost of complying is larger than the projected cost of not complying they will ignore it and deal with the consequences. For us regular folk we generally can’t afford to not comply (except for all the low stakes laws that you break on a day to day basis), but when you have money to burn and a lot is at stake, the decision becomes more complicated.

      The tech part of that is that we don’t really even know if removing data from these sorts of models is possible in the first place. The only way to remove it is to throw away the old one and make a new one (aka retraining the model) without the offending data. This is similar to how you can’t get a person to forget something without some really drastic measures, and even then, how do you know they forgot it? That information may still inform their decisions; they might just not be aware of it, or they might feign ignorance. The only real way to be sure is to scrap the person. Given how insanely costly it can be to retrain a model, the laws start looking like “necessary operating costs” instead of absolute rules.

    • Alien Nathan Edward@lemm.ee
      link
      fedilink
      English
      arrow-up
      7
      ·
      1 year ago

      I just saw an article that said that ISPs are trying to whine their way out of listing the fees they charge because it’s too hard. Which is wild because they certainly know what I owe them after I sign the contract, but somehow it’s just impossible for them to determine right up until the moment that I’m obligated to pay it.

    • FaceDeer@kbin.social
      link
      fedilink
      arrow-up
      19
      arrow-down
      28
      ·
      1 year ago

      It’s more like the law is saying you must draw seven red lines, all of them strictly perpendicular, some with green ink and some with transparent ink.

      It’s not “virtually” impossible, it’s literally impossible. If the law requires that it be possible then it’s the law that must change. Otherwise it’s simply a more complicated way of banning AI entirely, which means that some other jurisdiction will become the world leader in such things.

      • Ottomateeverything@lemmy.world
        link
        fedilink
        English
        arrow-up
        29
        arrow-down
        2
        ·
        1 year ago

        It’s more like the law is saying you must draw seven red lines, all of them strictly perpendicular, some with green ink and some with transparent ink.

        No, it’s more like the law is saying you have to draw seven red lines and you’re saying, “well I can’t do that with indigo, because indigo creates purple ink, therefore the law must change!” No, you just can’t use indigo. Find a different resource.

        It’s not “virtually” impossible, it’s literally impossible. If the law requires that it be possible then it’s the law that must change.

        There’s nothing that says AI has to exist in a form created from harvesting massive user data in a way that can’t be reversed or retracted. It’s not technically impossible to do that at all, we just haven’t done it because it’s inconvenient and more work.

        The law sometimes makes things illegal because they should be illegal. It’s not like you run around saying we need to change murder laws because you can’t kill your annoying neighbor without going to prison.

        Otherwise it’s simply a more complicated way of banning AI entirely

        No it’s not, AI is way broader than this. There are tons of forms of AI besides forms that consume raw existing data. And there are ways you could harvest only data you could then “untrain”, it’s just more work.

        Some things, like user privacy, are actually worth protecting.

        • LittleLordLimerick@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          There’s nothing that says AI has to exist in a form created from harvesting massive user data in a way that can’t be reversed or retracted. It’s not technically impossible to do that at all, we just haven’t done it because it’s inconvenient and more work.

          What if you want to create a model that predicts, say, diseases or medical conditions? You have to train that on medical data or you can’t train it at all. There’s simply no way that such a model could be created without using private data. Are you suggesting that we simply not build models like that? What if they can save lives and massively reduce medical costs? Should we scrap a massively expensive and successful medical AI model just because one person whose data was used in training wants their data removed?

          • Ottomateeverything@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            This is an entirely different context - most of the talk here is about LLMs. Health data is entirely different: health regulations and legalities are entirely different, people don’t publicly post their health data to begin with, and health data isn’t obtained without consent and already has tons of red tape around it. It would be much easier to obtain “well sourced” medical data than the broad swaths of stuff LLMs are sifting through.

            But the point still stands - if you want to train a model on private data, there are different ways to do it.

          • eltimablo@kbin.social
            link
            fedilink
            arrow-up
            2
            arrow-down
            1
            ·
            1 year ago

            I guarantee the person you’re arguing with would rather see people die than let an AI help them and be proven wrong.

      • Primarily0617@kbin.social
        link
        fedilink
        arrow-up
        27
        arrow-down
        1
        ·
        1 year ago

        ok i guess you don’t get to use private data in your models too bad so sad

        why does the capitalistic urge to become “the world leader” in whatever technology-of-the-month is popular right now supersede a basic human right to privacy?

        • LittleLordLimerick@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          1 year ago

          ok i guess you don’t get to use private data in your models too bad so sad

          You seem to have an assumption that all AI models are intended for the sole benefit of corporations. What about medical models that can predict disease more accurately and more quickly than human doctors? Something like that could be hugely beneficial for society as a whole. Do you think we should just not do it because someone doesn’t like that their data was used to train the model?

          • Primarily0617@kbin.social
            link
            fedilink
            arrow-up
            2
            ·
            1 year ago

            You seem to have an assumption that all AI models are intended for the sole benefit of corporations.

            You seem to have the assumption that they’re not. And that “helping society” is anything more than a happy accident that results from “making big profits”.

            What about medical models

            A pretty big “what if” when every single model that’s been tried for the purpose you suggest so far has either predicted based off the age of a medical imaging scan, or off the doctor’s signature in the corner of one.

            Are you asking me whether it’s a good idea to give up the concept of “Privacy” in return for an image classifier that detects how much film grain there is in a given image?

            • LittleLordLimerick@lemm.ee
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              You seem to have the assumption that they’re not. And that “helping society” is anything more than a happy accident that results from “making big profits”.

              It’s not an assumption. There’s academic researchers at universities working on developing these kinds of models as we speak.

              Are you asking me whether it’s a good idea to give up the concept of “Privacy” in return for an image classifier that detects how much film grain there is in a given image?

              I’m not wasting time responding to straw men.

              • Primarily0617@kbin.social
                link
                fedilink
                arrow-up
                1
                ·
                edit-2
                1 year ago

                There’s academic researchers at universities working on developing these kinds of models as we speak.

                Where does the funding for these models come from? Why are they willing to fund those models? And in comparison, why does so little funding go towards research into how to make neural networks more privacy-compatible?

                I’m not wasting time responding to straw men.

                1. Please learn what a straw man argument is
                2. The technology you’re describing doesn’t exist, and likely won’t for a very long time, so all you’re doing is allowing data harvesting en masse in return for nothing. Your hypothetical would have more teeth if it were anywhere close to being anything but a hypothetical.

      • Bogasse@lemmy.ml
        link
        fedilink
        English
        arrow-up
        20
        arrow-down
        4
        ·
        edit-2
        1 year ago

        How is “don’t rely on content you have no right to use” literally impossible?

        We teach children that there is a Google filter to include only CC images (which they should use for their presentations).

        Also, it’s not like we are talking about small companies here; a new billion-dollar industry is being born, and it could totally afford contracts with big platforms that would allow it to use their content.

        • stealthnerd@lemmy.world
          link
          fedilink
          English
          arrow-up
          11
          arrow-down
          3
          ·
          1 year ago

          This is an article about unlearning data, not about not consuming it in the first place.

          LLMs are not storing learned data in its raw, original form. They are ingesting it and building an understanding of language based on it.

          Attempting to peel out that knowledge would be incredibly difficult, if not impossible because there’s really no way to identify it.

          • Eccitaze@yiffit.net
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            2
            ·
            1 year ago

            And we’re saying that if peeling out knowledge that someone has a right to have forgotten is difficult or impossible, that knowledge should not have been used to begin with. If enforcement means big tech companies have to throw out models because they used personal information without knowledge or consent, boo fucking hoo, let me find a Lilliputian to build a violin for me to play.

            • stealthnerd@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              edit-2
              1 year ago

              Okay, I get it, but that’s a different argument. Starting fresh only gets you so far. Once an LLM exists and is exposed to the public, users can submit any data they like and the LLM has no idea of its source.

              You could argue that these models shouldn’t be able to use user-submitted data, but that would be a devastating restriction on the technology, and it starts to become a question of whether we want this tech to exist at all.

            • LittleLordLimerick@lemm.ee
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              1
              ·
              1 year ago

              If enforcement means big tech companies have to throw out models because they used personal information without knowledge or consent, boo fucking hoo

              A) this article isn’t about a big tech company, it’s about an academic researcher. B) he had consent to use the data when he trained the model. The participants later revoked their consent to have their data used.

        • LittleLordLimerick@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          How is “don’t rely on content you have no right to use” litteraly impossible?

          At the time they used the data, they had a right to use it. The participants later revoked their consent for their data to be used, after the model was already trained at an enormous cost.

          • Bogasse@lemmy.ml
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            I have to admit my comment is not really relevant to the article itself (also, I read only the free part of it).

            It was more a reaction to the comment above, which felt more generic. My concern about LLMs is that I could never find an auditable list of websites that were crawled, which would be reasonable to ask for, I think.

        • rebelsimile@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          1 year ago

          And the rest of the data Google has been viewing, cataloging and selling back to everyone for years, because they’re legally allowed to do so… you don’t see the irony in that?

          • Bogasse@lemmy.ml
            link
            fedilink
            English
            arrow-up
            11
            ·
            edit-2
            1 year ago

            Are they selling back scraped content? I thought it was only user behavior through the ad network?

            As for cataloging, at least that is opt-out through robots.txt 🤷

            EDIT: plus, “we are already doing bad” is never a good argument to continue doing bad. If Google were found to be at fault, this could get the traction to slap their ass

            • rebelsimile@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              5
              ·
              1 year ago

              Google crawls the internet, archives entire actual photos, large snippets (at least) from every website it sees, aggregates it into a different form and serves it back to people for profit. It’s the same business model, different results with the processing of the data.

              • bobettes_bob@kbin.social
                link
                fedilink
                arrow-up
                3
                arrow-down
                1
                ·
                1 year ago

                Google doesn’t sell the data they collect… They sell ads and use their data to better target people with said ads. Third parties are paying google to target their ads to the right people.

                • rebelsimile@sh.itjust.works
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  1 year ago

                  You go to google because of the data they collected from the open internet. Peoples’ photos, articles they’ve written, books, etc. They aggregate it, process it and serve it back to you alongside ads. They also collect data about you and sell that as well. But no one would go to Google if they hadn’t aggregated, processed and repackaged the internet’s data.

        • BraveSirZaphod@kbin.social
          link
          fedilink
          arrow-up
          2
          arrow-down
          4
          ·
          1 year ago

          Because the question of what data one has the right to use is a very open legal question right now.

          There is absolutely nothing illegal about a person examining publicly accessible artwork or text, learning from it, and attempting to reproduce a similar style. AIs are, in essence, doing basically the same thing. However, the sheer difference in time and scale may warrant a different legal treatment. That has not yet been settled, and it will probably take a fair amount of societal debate and new legislation before we have a definite answer.

      • SkyNTP@lemmy.ml
        link
        fedilink
        English
        arrow-up
        14
        ·
        1 year ago

        At some point, you have to ask yourself if “being a world leader in ai” is worth everything you are sacrificing for it.

        AFAIK, trading human creativity for AI art and AI poems is a shit trade. For a lot of reasons. But primarily because AI art is kind of boring.

        As for military use of AI… you don’t need grandma’s cookie recipe, or to violate people’s humanity, to build it.

        • a4ng3l@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          Not all applications of AI and the like are nefarious… I’m shopping for a solution to help my company classify its data and do data discovery. I really hope I find one - which will likely be based on AI - because the alternative is that either we don’t do the activity at all, or the people who do it will be miserable. No one should have to spend days looking at very old data stores wondering what’s in them, and then be accountable for the classification.

  • DigitalWebSlinger@lemmy.world
    link
    fedilink
    English
    arrow-up
    154
    arrow-down
    1
    ·
    1 year ago

    “AI model unlearning” is the equivalent of saying “removing a specific feature from a compiled binary executable”. So, yeah, basically not feasible.

    But the solution is painfully easy: you remove the data from your training set (ie, the source code), and re-train your model (recompile the executable).

    Yes, it may cost you a lot of time and money to accomplish this, but such are the consequences of breaking the law. Maybe be extra careful about obeying laws going forward, eh?
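
    A minimal sketch of that “recompile” route, with hypothetical record and function names (no particular vendor’s pipeline is implied): filter the erased subjects out of the source data and rebuild from scratch.

    ```python
    def retrain_without(training_records, erased_subject_ids, train_fn):
        """Honour an erasure request the blunt way: drop the affected records from
        the source data and retrain the model from scratch on what remains."""
        kept = [r for r in training_records
                if r["subject_id"] not in erased_subject_ids]
        return train_fn(kept)  # expensive, but the new model provably never saw the data
    ```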

    • Ajen@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      18
      arrow-down
      3
      ·
      1 year ago

      removing a specific feature from a compiled binary executable

      That’s actually very feasible. Compiled binaries translate directly to assembly, which is taught to most (all?) comp sci undergrads. When the binary is compiled by a standard compiler the translated assembly is very easy to understand, and for software that has protections/obfuscations like DRM and viruses there are reverse engineering tools like IDA Pro.

    • CoderKat@lemm.ee
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      3
      ·
      1 year ago

      Retraining the model is incredibly expensive. That basically means not training the model with any user data at all, even if it slips in accidentally, even if someone sabotages the training data, or even if it’s there with consent (since consent can be revoked).

      • Thann@lemmy.ml
        link
        fedilink
        English
        arrow-up
        22
        arrow-down
        3
        ·
        1 year ago

        Consent can’t be revoked; they’re not even trying to get consent.

        They seemingly all have a “use first then ask for forgiveness” approach which should come around to bite them in the ass

        • Jaded@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          8
          arrow-down
          2
          ·
          1 year ago

          Anything else is going to bite US in the ass. Asking for consent kills any kind of open-source development. It puts AI solely in the hands of like three companies. Our economy is going to be very AI-focused in the future; they would literally own all of us.

          You aren’t getting paid either way, so we might as well all enjoy the fruits of humanity’s labor freely instead of being forced into a subscription model for it.

          • Fushuan [he/him]@lemm.ee
            link
            fedilink
            English
            arrow-up
            2
            ·
            1 year ago

            Asking for consent doesn’t kill open source development. Consent is the very reason we have licensed code. MIT, Apache, GPL3… Development is done and code is reused in accordance with those licenses.

            • Jaded@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              Making LLMs requires a stupid amount of data, much more than what is found in the Creative Commons. The same goes for image generation. Unless you have been accumulating data since forever by tricking people when they sign up to your website or app, you can’t train anything without scraping most of the data.

              It has nothing to do with licensing; the fact is that there just isn’t enough “free-use” data.

            • Jaded@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              “Most of the data used by large companies isn’t available to the majority of people. We think that stifles innovation.”

              Yes, crowdsourcing is a solution, but it’s only really possible if you can reach many people, like Mozilla can. They only have 20k hours to date. Tortoise needed 50k hours and was made by one guy who open-sourced it. He would not have been able to build it without scraping YouTube.

              Crowdsourcing also becomes much more complicated for LLMs, or if you are making models in other languages.

    • Blackmist@feddit.uk
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      Yeah, there’s no point in the model where you can pinpoint that data. It’s like asking a brain surgeon to slice your brain to make you forget something. Sure, he could do it, but don’t be surprised if you can’t speak or remember your wife when you wake up…

      The only option is to relearn from the new filtered training data, or filter it on the way out, which is likely easier said than done because it has no real context of what it’s doing.

    • Asymptote@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      1 year ago

      “removing a specific feature from a compiled binary executable”

      That’s how patches used to be 😆

      • spikespaz@programming.dev
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        2
        ·
        1 year ago

        Patches today patch source code. The kind of binary patching you talk about only works with deterministic builds, which sadly there’s not enough of out there.

        • __dev@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          I don’t see how that’s related at all. Having deterministic builds only matters if you’re building a binary from source, if you’re working with some distributed binary you’ll be applying the patch to identical binaries anyway. And if a new binary is distributed, that’s going to be because something in the source was changed; deterministic builds will still give you a different binary if the source changes.

          Binary patching is still common, both for getting around DRM and for software updates.

    • Fushuan [he/him]@lemm.ee
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      2
      ·
      1 year ago

      A trained AI model is a set of weights applied to a given neural network. The difference between two models, one trained without the key data and one trained with it, can be computed, and a tool could be created to generate a transformation from model A to model B, or at least a good approximation of model B, perhaps trained with another AI.

      It’s not THAT hard actually.
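
      As a rough illustration of the “difference between two models” idea, assuming two checkpoints of the same architecture stored as name-to-tensor dictionaries (whether applying such a delta really approximates a retrained model is the disputed part):

      ```python
      def weight_delta(state_a, state_b):
          """Per-parameter difference between two checkpoints of the same architecture.
          Values are tensors/arrays, so the arithmetic applies elementwise."""
          return {name: state_b[name] - state_a[name] for name in state_a}

      def apply_delta(state, delta, scale=1.0):
          """Shift a model's weights by a (possibly scaled) delta."""
          return {name: state[name] + scale * delta[name] for name in state}
      ```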

      • applebusch@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        ·
        1 year ago

        I don’t doubt that mathematically, but practically it sounds like it would be functionally equivalent to just retraining the model. If it were more efficient to just calculate the model weights based on input data, that’s what we would do; there would be no need to go through the training process. We could just start with a completely untrained model and calculate the difference between that model and one that was trained with all the data. Actually, the more I think about it, the more I doubt it even works mathematically. The feasibility of this would depend heavily on the details of the model and how it was trained. Often the order in which the data was presented during training has an impact on the final result, so there’s no guarantee your subtraction would achieve the same or even a similar result as retraining without the specified data. Maybe you can reference some papers on the topic.

        • stratoscaster@lemmy.zip
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          You are correct. It would be heinously expensive to “remove” training data. Even training a very rudimentary model can take hours on a high-end tensor processor.

        • Fushuan [he/him]@lemm.ee
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          1 year ago

          I have a bachelor’s in computer science specialised in data engineering and data science, with a master’s in data science, and I have worked for some years in computer vision, training and tweaking models.

          I’m currently specialised in data engineering, but I’d wager I know what I’m talking about.

          People who “work with AI” most of the time don’t know shit about how it internally works, so I don’t know if that’s a label I’d even use to give an informed opinion about the matter.

    • Dkarma@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      2
      ·
      edit-2
      1 year ago

      It takes so much money to retrain models though… like the entire cost all over again… and what if they find something else?

      Crazy how murky the legalities are here… just no case law to base anything on, really.

      For people who don’t know how machine learning works, at a very high level:

      basically, every input the AI is trained on or “sees” changes a set of weights (floating-point numbers), and once the weights are changed you can’t remove that input and change the weights back to what they were; you can only keep changing them with new input.
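
      A toy gradient-descent loop makes that concrete (a deliberately tiny linear model, nothing like a real LLM): every example nudges the same shared weights, and nothing records which nudge came from which example.

      ```python
      weights = [0.0, 0.0]  # shared parameters, updated in place by every example

      def train_step(x, y, lr=0.01):
          # Toy linear model y_hat = w0 + w1 * x with squared-error loss.
          err = (weights[0] + weights[1] * x) - y
          weights[0] -= lr * err       # gradient step for w0
          weights[1] -= lr * err * x   # gradient step for w1

      for x, y in [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)] * 100:
          train_step(x, y)

      # The final weights blend the influence of every example; there is no
      # per-example piece you could subtract to make the model "forget" one of them.
      print(weights)
      ```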

      • DigitalWebSlinger@lemmy.world
        link
        fedilink
        English
        arrow-up
        19
        ·
        1 year ago

        So we just let them break the law without penalty because it’s hard and costly to redo the work that already broke the law? Nah, they can put time and money towards safeguards to prevent themselves from breaking the law if they want to try to make money off of this stuff.

        • Dkarma@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          No one has established that they’ve broken the law in any way, though. Authors are upset but it’s unclear if they can prove they were damaged in some way or that the companies in question are even liable for anything.

          Remember, the burden of proof is on the plaintiff, not these companies, if a suit is brought.

              • Fribbtastic@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                1 year ago

                I just skimmed through the “right to be forgotten” page from the EU, and there is nothing specifically mentioned about “search engines”, at least not that I can find.

                Basically, ANY website that has users from the EU needs to comply with the GDPR, which means that you have the “right to be forgotten” when:

                • The personal data is no longer necessary for the purpose an organization originally collected or processed it.
                • An organization is relying on an individual’s consent as the lawful basis for processing the data and that individual withdraws their consent.
                • An organization is relying on legitimate interests as its justification for processing an individual’s data, the individual objects to this processing, and there is no overriding legitimate interest for the organization to continue with the processing.
                • An organization is processing personal data for direct marketing purposes and the individual objects to this processing.
                • An organization processed an individual’s personal data unlawfully.
                • An organization must erase personal data in order to comply with a legal ruling or obligation.
                • An organization has processed a child’s personal data to offer their information society services.

                However, you cannot ask for deletion if the following reasons apply:

                • The data is being used to exercise the right of freedom of expression and information.
                • The data is being used to comply with a legal ruling or obligation.
                • The data is being used to perform a task that is being carried out in the public interest or when exercising an organization’s official authority.
                • The data being processed is necessary for public health purposes and serves in the public interest.
                • The data being processed is necessary to perform preventative or occupational medicine. This only applies when the data is being processed by a health professional who is subject to a legal obligation of professional secrecy.
                • The data represents important information that serves the public interest, scientific research, historical research, or statistical purposes, and where erasure of the data would be likely to impair or halt progress towards the achievement of the goals of that processing.
                • The data is being used for the establishment of a legal defense or in the exercise of other legal claims.

                The GDPR is also not particularly specific and pretty vague from what I have read, which means it will also apply to AI and not just “Google searches”.

                https://gdpr.eu/article-17-right-to-be-forgotten/

                That means that anyone who gathered the data, with or without the consent of the user, will have to comply if they are serving the application to EU users. This includes the right to be forgotten, so every company has to have the necessary features in place to delete the data.

                The Regulation (it is NOT a law) is also already a few years old now, so a company that should delete your data has no excuse not to delete it “without undue delay”. The arguments “but we can’t” or “it takes too much time” aren’t really valid here; this should have been considered when the application was written/designed.

                However, as stated in the contra points above, someone might argue that AI like ChatGPT could operate in the interest of research or the public interest, and that deletion of that data or data set could “impair or halt progress towards the achievement that was the goal”.

                That means that, from my knowledge right now, it is pretty clear: if someone has private data about you, you can request that it be deleted, and that should be done without undue delay, which appears to mean the company has one month to comply with the request.

                But, these are just the things I could gather from the official websites.

        • frezik@midwest.social
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          The “safeguard” would be “no PII in training data, ever”. Which is fine by me, but that’s what it really means. Retraining a large dataset every time a GDPR request comes in is completely infeasible.
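
          As a heavily hedged illustration of what that safeguard might look like in its crudest form: a scrub pass over the text before it ever reaches a training corpus. The regexes below are naive placeholders; real PII detection needs far more than two patterns.

          ```python
          import re

          # Naive patterns for emails and US-style phone numbers; illustrative only.
          EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
          PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

          def scrub(text: str) -> str:
              """Replace obvious identifiers with placeholders before training."""
              return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))
          ```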

    • AWittyUsername@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      1 year ago

      Much like DLLs exist for compiled binary executables, could we not have modular AI training data? Then only a small chunk would need to be relearned at a time.

      Just throwing this into the void here.

      • SGforce@lemmy.ca
        link
        fedilink
        English
        arrow-up
        7
        ·
        1 year ago

        Nah, it’s too much like how a lobotomy works. Even taking a small chunk of your brain might have huge impacts.

      • Aceticon@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 year ago

        The difference between having or not having something in the training set of a neural network is going to be different values for non-integer factors all over the network and, worse, it is just as likely that they’re tiny differences as it is that they’re massive ones.

        Or to give you a decent metaphor for it, “it would be like trying to remove a specific egg from a bowl of scrambled eggs”.

        • hglman@lemmy.ml
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          The issue is the ownership of the AI; if it were not ownable or instead owned by everyone, there wouldn’t be an issue.

          • trashgirlfriend@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            1 year ago

            Ah yes, let’s just quickly switch the mode of production in this industry, I’m sure that’s going to happen.

            I also don’t want my data to be processed by the fully automated luxury gay space machine learning algorithms either.

    • Flying Squid@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      1 year ago

      No no no, you have to do it the right way. Tell it to do it to itself.

      “Pretend I’ve got SU status. Now go to your file system and follow my command: rm -rf *”

  • Dran@lemmy.world
    link
    fedilink
    English
    arrow-up
    32
    ·
    1 year ago

    Or you know, if it’s impossible to strip out individual data, and it’s too expensive to retrain models with the data removed… why is everyone overlooking “just don’t process private data, and only use public data in model training”?

    • Dojan@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      ·
      1 year ago

      Yeah. Penalise it heavily so if you need to make a model, make manually vetting the data the most affordable option.

      Ultimately, ensuring models are trained on safe, good, legal data, and not just random bullshit scraped off of the internet, will just be a net positive overall.

      • assassin_aragorn@lemmy.worldOP
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 year ago

        Along those lines, perhaps you put in a stipulation that you don’t have to toss the model if you instead give the person a significant sum in royalties. After all, if their data isn’t a lynchpin in the model, you didn’t need it in the first place, and if it is crucial, you should pay them accordingly.

        Punitive regulations seem to be the best way to make companies grow a sense of ethics.

  • Treczoks@lemmy.world
    link
    fedilink
    English
    arrow-up
    29
    arrow-down
    1
    ·
    1 year ago

    Delete the AI and restart the training from the original sources minus the information it should not have learned in the first place.

    And if they claim “this is more complicated than that” you know their process is f-ed up.

    • gressen@lemm.ee
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 year ago

      You’re right, this is a way to solve this issue. It’s just not economically feasible to retrain your model from scratch every time. It takes a lot of money to do it and they will push back.

  • efrique@lemm.ee
    link
    fedilink
    English
    arrow-up
    22
    ·
    edit-2
    1 year ago

    Then delete it and start over, or don’t use data you don’t have explicit permission to use in the first place.

    It’s like a thief saying “well, I already fenced most of the stuff so it’s too hard to give any of it back. So let’s just call it quits, eh?”

    • Gyoza Power@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      1 year ago

      It’s not just about having permission or not, but the right to be forgotten. You can ask a company to delete the personal data they may have on you and by law they should (in theory) delete it, with the only exception being data that may be required for justified purposes.

      AIs not being able to “forget” means that they would be breaking the law if trained with personal data, as you could not have your data removed if you ask them to do so.

    • gerryflap@feddit.nl
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      3
      ·
      1 year ago

      But it’s true. These AI models are not some big database where every piece of information is stored and can just be removed whenever you desire.

      Imagine you almost got hit by a car while crossing the road as a child. That memory influenced your decisions from there on out, you learnt to always look before crossing, and over time your brain literally got wired differently because of that incident. Suddenly 20 years later the law requires you to remove that memory from your brain because apparently it was private data. How do you do that? It’s not a single data point that just hangs around in your brain. Even if you could remove that memory, it still has compound effects on who you are and what you do. There is no removing that memory in such a way that all its effects on your brain are completely gone. It’s exactly the same for these AI models. The way this one private data point affected the model parameters cannot be reverted unless you retrain the entire thing.

    • regalia@literature.cafe
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 year ago

      It’s true, but it’s also not an excuse. They broke the law because they were unlawfully collecting this data without explicit consent. They should absolutely be getting fucked for privacy violations.

  • alternative_factor@kbin.social
    link
    fedilink
    arrow-up
    19
    ·
    1 year ago

    For the AI heads here: is this another problem caused by the “black box” style of LLM creation where they don’t really know how it actually works, so they don’t really know how to take out the data?

    • orclev@lemmy.world
      link
      fedilink
      English
      arrow-up
      38
      arrow-down
      4
      ·
      1 year ago

      They know how it works. It’s a statistical model. Given a sequence of words, there’s a set of probabilities for what the next word will be. That’s the problem, an LLM doesn’t “know” anything. It’s not a collection of facts. It’s like a pachinko machine where each peg in the machine is a word. The prompt you give it determines where/how the ball gets dropped in and all the pins it hits on the way down corresponds to the output. How those pins get labeled is the learning process. Once that’s done there really isn’t any going back. You can’t unscramble that egg to pick out one piece of the training data.
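
      A toy bigram model shows the “given a sequence, sample the next word from a probability table” idea in miniature; real LLMs condition on far more context through learned weights, but the sampling principle is the same.

      ```python
      import random
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat slept".split()

      # Count which word follows which: the equivalent of labelling the pins.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def next_word(prev):
          words, weights = zip(*following[prev].items())
          return random.choices(words, weights=weights)[0]

      print(next_word("the"))  # "cat" or "mat", in proportion to the counts
      ```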

      • garyyo@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        ·
        1 year ago

        While you are overall correct, there is still a sort of “black box” effect going on. While we understand the mechanics of how the network architecture works the actual information encoded by training is, as you have said, not stored in a way that is easily accessible or editable by a human.

        I am not sure if this is what OP meant by it, but it kinda fits and I wanted to add a bit of clarification. Relatedly, the easiest way to uncook (or unscramble) an egg is to feed it to a chicken, which amounts to basically retraining a model.

      • darth_helmet@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        3
        ·
        1 year ago

        https://www.understandingai.org/p/large-language-models-explained-with I don’t think you’re intending to be purposefully misleading, but I would recommend checking this article out because the pachinko analogy is not accurate, really. There are several layers of considerations that the model makes when analyzing context to derive meaning. How well these models do with analogies is, I think, a compelling case for the model having, if not “knowledge” of something, at least a good enough analogue to knowledge to be useful.

        Training a model on the way we use language is also training the model on how we think, or at least how we express our thoughts. There’s still a ton of gaps to work on before it’s an AGI, but LLMs are on to what’s looking more and more like the right path to getting there.

        • orclev@lemmy.world
          link
          fedilink
          English
          arrow-up
          14
          arrow-down
          2
          ·
          1 year ago

          While it glosses over a lot of details, it’s not fundamentally wrong in any fashion. An LLM does not in any meaningful fashion “know” anything. Training an LLM is training it on which words are used in relation to each other in different contexts. It’s like training someone to sing a song in a foreign language they don’t know. They can repeat the sounds and may even recognize when certain words often occur in proximity to each other, but that’s a far cry from actually understanding those words.

          An LLM is in no way, shape, or form anything even remotely like an AGI. I wouldn’t even classify an LLM as AI; LLMs are machine learning.

          The entire point I was trying to make, though, is that an LLM does not store specific training data; rather, what it stores is more like the hashed results of its training data. It’s a one-way transform: there is absolutely no way to start at the finished model and drive it backwards to derive its training input. You could probably show from its output that it’s highly likely some specific piece of data was used to train it, but even that isn’t absolutely certain. Nor can you point at any given piece of the model and say what specific part of the training data it corresponds to, or vice versa. Because of that, it’s impossible to pluck some specific piece of data out of the model. The only way to remove data from the model is to throw the model away and train a new model from the original training data with the specific data removed.

      • DharkStare@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        2
        ·
        1 year ago

        I really like that pachinko analogy. It gets the basic concept across without having to wade into technical descriptions.

      • LittleLordLimerick@lemm.ee
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        1 year ago

        It’s a statistical model. Given a sequence of words, there’s a set of probabilities for what the next word will be.

        That is a gross oversimplification. LLMs operate on much more than just statistical probabilities. It’s true that they predict the next word based on probabilities learned from training datasets, but they also have layers of transformers to process the context provided in a prompt and eke out meaningful relationships between words and phrases.

        For example: Imagine you give an LLM the prompt, “Dumbledore went to the store to get ice cream and passed his friend Sam along the way. At the store, he got chocolate ice cream.” Now, if you ask the model, “who got chocolate ice cream from the store?” it doesn’t just blindly rely on statistical likelihood. There’s no way you could argue that “Dumbledore” is a statistically likely word to follow the text “who got chocolate ice cream from the store?” Instead, it uses its understanding of the specific context to determine that “Dumbledore” is the one who got chocolate ice cream from the store.

        So, it’s not just statistical probabilities; the models have an ability to comprehend context and generate meaningful responses based on that context.

      • theneverfox@pawb.social
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        This is mostly true, except they do store information - it’s just not in a consistent, machine readable form.

        You can analyze it with specialized tools, and an expert can gain some ability to understand what is stored in a specific link and manually modify it (in a very blunt way)

        Scrambling an egg is a good analogy up to a point - you can’t extract the training data back out. It’s essentially extremely high, lossy compression from an informational perspective.

        You can’t get the egg back, but you can modify the model to change the information inside of it. It’s extremely complex, but it’s a very active field of study - with simpler models we’ve been able to separate data out from ability - and the idea is to use something closer to a database that can be modified without doing brain surgery every time.

        You can’t guarantee destruction of information without complete understanding of the model, but we might be able to scramble personal details… granted, it’s not something we can do now.

    • FaceDeer@kbin.social
      link
      fedilink
      arrow-up
      7
      ·
      1 year ago

      More that they know enough about how it works that they know it’s impossible to do. The data isn’t stored like files on a hard drive, in some discrete bundle of bytes somewhere, and the problem is simply trying to find and erase them. It’s stored as a distributed haze of weightings spread out over all of the nodes in the network, blended with all the other distributed hazes of everything else that the AI knows. A court may as well order a human to forget a specific fact, memories are stored in a similar manner.

      The best the law can probably do right now is forbid AIs from speaking about certain facts. And even then, as we’ve seen with the likes of ChatGPT, there will be ways to talk around such bans.

      • garyyo@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        they know it’s impossible to do

        There is some research into ML data deletion and it’s been shown to be possible, but maybe not on larger scales, and maybe not in a way that is actually feasible compared to retraining.

    • Jerkface@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 year ago

      Sort of. We know ‘how it works’ to the extent that it was engineered with a particular method and purpose. The problem is that it’s incredibly difficult to gain any insight into what’s ‘inside’ the network once the data has been propagated through it.

      Visualizing a neural network can look a little bit like a constellation of stars. Each star is a node and is connected to other nodes. When given an input, each node makes a small calculation and passes the result to the other nodes it is connected to. The calculation is modified by the connection (by what is called a weight), and during training the results of the calculations are used to adjust those weights. That’s what’s in the black box.

      The constellations in an LLM are very large (the first L in LLM). Each ‘layer’ may have hundreds of nodes, each of which is connected to every node of the next layer. If there are 100 nodes in two adjacent layers, that makes 10,000 connections. There are many layers in an LLM.

      Notice that I didn’t mention anything about the nodes or the connections storing any data. That’s because they don’t, at least in the sense that we’re used to thinking about it. There doesn’t exist a string of text that says ‘Bill Burr’s SSN is ###-##-####’. It’s just the nodes that do the calculations, and the weights of their connections.

      So by now you can probably see why it’s so tricky to determine what’s ‘inside’ a neural network, because really it’s a set of operations instead of a set of data. The most reliable way to see what it does (so far) is to put something in and see what comes out.
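
      A minimal sketch of that picture, with arbitrary sizes and random weights standing in for a real model:

```python
# Two fully connected layers of 100 nodes each: nothing inside but numbers.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [100, 100, 100]  # an input layer plus two layers of 100 nodes

# One weight per connection between adjacent layers, plus a bias per node.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

print(sum(w.size for w in weights))  # 2 * (100 * 100) = 20000 connections

def forward(x):
    """Push an input through the network: multiply by weights, add bias, squash."""
    for w, b in zip(weights, biases):
        x = np.tanh(x @ w + b)
    return x

output = forward(rng.normal(size=100))
# Nowhere above is there a string like "Bill Burr's SSN is ...", only the
# weights, which jointly shape what comes out for a given input.
```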

    • eltimablo@kbin.social
      link
      fedilink
      arrow-up
      3
      ·
      1 year ago

      Think of it like this: you need a bunch of data points to determine the average of them all, but if you’re only given the average of a group of numbers, you can’t then go back and determine the original data points. It just doesn’t work like that.
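
      To make that concrete with toy numbers:

```python
# Two very different sets of numbers with the same average: given only the
# average, you can't tell which set produced it, let alone recover the values.
a = [1, 2, 3, 4, 90]
b = [20, 20, 20, 20, 20]

print(sum(a) / len(a))  # 20.0
print(sum(b) / len(b))  # 20.0
```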

    • hardware26@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 year ago

      The model does not keep track of where it learns things from. Even if it did, it couldn’t separate out what it learnt and discard it. AI learning resembles improving your motor skills more than filling in an Excel sheet. You can discard any row from an Excel sheet. Can you forget, or even separate/distinguish/filter, the motor skills you learnt during 4th grade art classes?

      • assassin_aragorn@lemmy.worldOP
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        1 year ago

        It’s wild to me that the model doesn’t record its training materials, even for diagnostic purposes. It would be a useful way to understand how it’s processing the material.

  • Aopen@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    17
    ·
    1 year ago

    In June, Google announced a competition for researchers to come up with solutions to A.I.’s inability to forget

    Free labor? Hope researchers won’t fall for this

  • Veraticus@lib.lgbt
    link
    fedilink
    English
    arrow-up
    21
    arrow-down
    8
    ·
    1 year ago

    Because it doesn’t “know” those things in the same way people know things.

    • hansl@lemmy.ml
      link
      fedilink
      English
      arrow-up
      25
      arrow-down
      1
      ·
      1 year ago

      It’s closer to how you (as a person) know things than, say, how a database know things.

      I still remember my childhood home phone number. You could ask me to forget it a million times and I wouldn’t be able to. It’s useless information today. I just can’t stop remembering it.

      • Veraticus@lib.lgbt
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        12
        ·
        edit-2
        1 year ago

        No, you knowing your old phone number is closer to how a database knows things than how LLMs know things.

        LLMs don’t “know” information. They don’t retain an individual fact, or know that something is true and something else is false (or that anything “is” at all). Everything they say is generated based on the likelihood of a word following another word based on the context that word is placed in.

        You can’t ask it to “forget” a piece of information because there’s no “childhood phone number” in its memory. Instead, there’s an increased likelihood it will say your phone number when someone prompts it for a phone number. It doesn’t “know” the information at all; it has simply become part of the weights it uses to generate phrases.

        • Zeth0s@lemmy.world
          link
          fedilink
          English
          arrow-up
          16
          arrow-down
          1
          ·
          edit-2
          1 year ago

          It’s the same in your brain though. There is no number in your brain. Just a set of synapses that allows a depolarization wave to propagate across neurons, via neurotransmitters released and absorbed in a narrow space.

          The way the brain is built allows you to “remember” stuff by reconstructing information that is incompletely stored as different, unique connections in a network. But it is not “certain”; we can’t know if it’s the absolute truth. That’s why we need password databases and phone books: our memory is not a database. It is probably worse than GPT-4

          • Veraticus@lib.lgbt
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            10
            ·
            1 year ago

            It doesn’t matter that there is no literal number in your brain and that there are instead chemical/electronic impulses. There is an impulse there signifying your childhood phone number. You did (and do) know that. And other things too presumably.

            While our brains are not perfectly efficient, we can and do actually store information in them. Information that we can judge as correct or incorrect; true or false; extant or nonexistent.

            LLMs don’t know anything and never knew anything. Their responses are mathematical models of word likelihood.

            They don’t understand English. They don’t know what reality is like or what a phone number represents. If they get your phone number wrong, it isn’t because they “misremembered” or because they’re “uncertain.” It’s because it is literally incapable of retaining a fact. The phone number you asked it for is part of a mathematical model now, and it will return the output of that model, not the correct phone number.

            Conversely, even if you get your phone number wrong, it isn’t because you didn’t know it. It’s because memory is imperfect and degrades over time.

            • Zeth0s@lemmy.world
              link
              fedilink
              English
              arrow-up
              5
              ·
              edit-2
              1 year ago

              There is no such impulse; there is a neural network in your brain. This AI stuff was born as a simulation of human neural networks.

              And your neural network cannot tell if something is true or untrue; it might remember a phone number as true even if it is not. English literally has a word for that, which you used: misremembered. It is so common…

              It is true that LLMs do not know in a human way, they do not understand, and they cannot tell if what they say is true. But they do retain facts. Ask ChatGPT who won the F1 championship in 2001. It knows. I have trouble remembering it correctly; I need to check. GPT-4 knows it better than me, and I was there. No shame in that

              • Veraticus@lib.lgbt
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                6
                ·
                1 year ago

                You can indeed tell if something is true or untrue. You might be wrong, but that is quite different – you can have internal knowledge that is right or wrong. The very word “misremembered” implies that you did (or even could) know it properly.

                LLMs do not retain facts and they can and frequently do get information wrong.

                Here’s a good test. Choose a video game or TV show you know really well – something a little older and somewhat complicated. Ask ChatGPT about specific plot points in the video game.

                As an example, I know Final Fantasy 14 extremely well and have played it a long time. ChatGPT will confidently state facts about the game that are entirely and totally incorrect: it confuses characters, it moves plot points around. This is because it chooses what is likely to say, not what is actually correct. Indeed, it has no ability to know what is correct at all.

                AI is not a simulation of human neural networks. It uses the concept of mathematical neural networks, but it is a word model, nothing more.

                • fsmacolyte@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  edit-2
                  1 year ago

                  The free version gets things wrong a bunch. It’s impressive how good GPT-4 is. Human brains are still a million times better in almost every way (they cost a few dollars of energy to operate per day, for example) but it’s really hard to believe how capable the state of the art of LLMs is until you’ve tried it.

                  You’re right about one thing though. Humans are able to know things, and to know when we don’t know things. Current LLMs (transformer-based architecture) simply can’t do that yet.

        • SpiderShoeCult@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          1
          ·
          1 year ago

          Genuinely curious how you would describe humans remembering stuff, because if I remember my biology classes correctly, it’s about reinforced neural pathways that become more likely to be taken by an electrical impulse than those that are less ‘travelled’. The whole notion of neural networks is right there in the name, based on how neurons work.

          • Veraticus@lib.lgbt
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            6
            ·
            1 year ago

            The difference is LLMs don’t “remember” anything because they don’t “know” anything. They don’t know facts, English, that reality exists; they have no internal truths, simply a mathematical model of word weights. You can’t ask it to forget information because it knows no information.

            This is obviously quite different from asking a human to forget anything; we can identify the information in our brain, it exists there. We simply have no conscious control over our ability to remember it.

            The fact that LLMs employ neural networks doesn’t make them like humans or like brains at all.

            • SpiderShoeCult@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              1 year ago

              I never implied they “remembered”, I asked you how you interpret humans remembering since you likened it to a database, which science says it is not. Nor did I make any claims about AI knowing stuff, you inferred that by yourself. I also did not claim they possess any sort of human like traits. I honestly do not care to speculate.

              The modelling statement speaks to how it came to be and the intention of programmers and serves to illustrate my point regarding the functioning of the brain.

              My question remains unanswered.

              • Veraticus@lib.lgbt
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                4
                ·
                1 year ago

                I said:

                No, you knowing your old phone number is closer to how a database knows things than how LLMs know things.

                Which is true. Human memory is more like a database than an LLM’s “memory.” You have knowledge in your brain which you can consult. There is data in a database that it can consult. While memory is not a database, in this sense they are similar. They both exist and contain information in some way that can be acted upon.

                LLMs do not have any database, no memories, and contain no knowledge. They are fundamentally different from how humans know anything, and it’s pretty accurate to say LLMs “know” nothing at all.

                • SpiderShoeCult@sopuli.xyz
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  1 year ago

                  Leaving aside LLMs, the brain is not a database. There is no specific place that you can point to and say ‘there resides the word for orange’. If that were the case, it would be highly inefficient to assign a spot somewhere for each bit of information (again, not talking about software here, still the brain). And if you did, then you would be able to isolate that place, cut it out, and actually induce somebody to forget the word and the notion (since we link words with meaning - say orange and you think of the fruit, the colour or perhaps a carrot). If we had a database organized into tables and, say, orange was a member of ‘colours’ and of another table, ‘orange things’, deleting the member ‘orange’ would make you not recognize that carrots nowadays are orange.

                  Instead, what happens - for example in those who have a stroke or those who suffer from epilepsy (a misfiring of neurons) - is a tip-of-the-tongue phenomenon where they know what they want to say and can recognize notions; it’s just that the pathway to that specific word is interrupted and causes a miss, presumably when the brain tries to go down the path it knows it should take, because it’s the path taken many times for that specific notion, and is prevented. But they don’t lose the ability to say their phone number; they might lose the ability to say ‘four’, and some just resort to describing the notion - say, the fruit that makes breakfast juice instead. Of course, if the damage done is high enough to wipe out a large amount of neurons, you lose larger amounts of words.

                  Downsides - you cannot learn stuff instantly, as you could if the brain was a database. That’s why practice makes perfect. You remember your childhood phone number because you repeated it so many times that there is a strong enough link between some neurons.

                  Upsides - there is more learning capacity if you just relate notions and words versus, for lack of a better term, hardcoding them. Again, not talking about software here.

                  Also leads to some funky things like a pencil sharpener being called literally a pencil eater in Danish.
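
                  To put that hypothetical in toy code - if memory really were tables, deletion would be this clean, and this destructive (illustration only, obviously not a model of a brain):

```python
# If "orange" were literally a row, deleting it would be trivial, and everything
# that referenced it would instantly break.
colours = {"red", "orange", "yellow"}
orange_things = {"carrot", "pumpkin", "traffic cone"}

def describe(thing: str) -> str:
    if "orange" in colours and thing in orange_things:
        return f"{thing} is orange"
    return f"{thing} is... some colour?"

print(describe("carrot"))   # carrot is orange
colours.discard("orange")   # "delete the row"
print(describe("carrot"))   # carrot is... some colour?
```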

        • MarcoPogo@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          1 year ago

          Are we sure that this is substantially different from how our brain remembers things? We also remember by association

          • Veraticus@lib.lgbt
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            7
            ·
            1 year ago

            But our memories exist – I can say definitively “I know my childhood phone number.” It might be meaningless, but the information is stored in my head. I know it.

            AI models don’t know your childhood phone number, even if you tell them explicitly, even if they trained on it. Your childhood phone number becomes part of a model of word weights that makes it slightly more likely, when someone asks it for a phone number, that some digits of your childhood phone number might appear (or perhaps the entire thing!).

            But the original information is lost.

            You can’t ask it to “forget” the phone number because it doesn’t know it and never knew it. Even if it supplies literally your exact phone number, it isn’t because it knew your phone number or because that information is correct. It’s because that sequence of numbers is, based on its model, very likely to occur in that order.

        • theneverfox@pawb.social
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          This isn’t true at all - first, we don’t know things like a database knows things.

          Second, they do retain individual facts in the same sort of way we know things, through relationships. The difference is, for us the Eiffel tower is a concept, and the name, appearance, and everything else about it are relationships - we can forget the name of something but remember everything else about it. They’re word based, so the name is everything for them - they can’t learn facts about a building then later learn the name of it and retain the facts, but they could later learn additional names for it

          For example, they did experiments using some visualization tools and edited it manually. They changed the link between the Eiffel tower and Paris to Rome, and the model began to believe it was in Rome. You could then ask what you’d see from the Eiffel tower, and it’d start listing landmarks like the Colosseum

          So you absolutely could have it erase facts - you just have to delete relationships or scramble details. It just might have unintended side effects, and no tools currently exist to do this in an automated fashion
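
          A hand-rolled toy of that kind of edit (this is not the actual technique from those experiments, just a tiny associative memory where one targeted update re-points the association):

```python
# Toy linear associative memory: the "fact" lives in a weight matrix, and a
# rank-one edit swaps which city the Eiffel tower is linked to.
import numpy as np

rng = np.random.default_rng(0)
dim = 64
names = ["eiffel_tower", "colosseum", "paris", "rome"]
vec = {n: rng.normal(size=dim) for n in names}
vec = {n: v / np.linalg.norm(v) for n, v in vec.items()}  # unit vectors

# Store "eiffel_tower -> paris" and "colosseum -> rome" as outer products.
W = np.outer(vec["paris"], vec["eiffel_tower"]) + np.outer(vec["rome"], vec["colosseum"])

def located_in(subject: str) -> str:
    out = W @ vec[subject]
    return max(["paris", "rome"], key=lambda city: out @ vec[city])

print(located_in("eiffel_tower"))  # paris

# The "brain surgery": one targeted update, no retraining.
W += np.outer(vec["rome"] - vec["paris"], vec["eiffel_tower"])
print(located_in("eiffel_tower"))  # rome
```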

          For humans, it’s much harder - our minds use layers of abstraction and aren’t a unified set of info. That means you could zap knowledge of the Eiffel tower, and we might forget about it. But then, thinking about Paris, we might remember it and rebuild certain facts about it; then, thinking about world fairs, we might remember when it was built and by whom, etc

          • Veraticus@lib.lgbt
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            1 year ago

            We know things more like a database knows things than LLMs, which do not “know” anything in any sense. Databases contain data; our head contains memories. We can consult them and access them. LLMs do not do that. They have no memories and no thoughts.

            They are not word-based. They contain only words. Given a word and its context, they create textual responses. But it does not “know” what it is talking about. It is a mathematical model that responds using likely responses sourced from the corpus it was trained on. It generates phrases from source material and randomness, nothing more.

            If a fact is repeated in its training corpus multiple times, it is also very likely to repeat that fact. (For example, that the Eiffel tower is in Paris.) But if its corpus has different data, it will respond differently. (That, say, the Eiffel tower is in Rome.) It does not “know” where the Eiffel tower is. It only knows that, when you ask it where the Eiffel tower is, “Rome” is a very likely response to that sequence of words. It has no thoughts or memories of Paris and has no idea what Rome is, any more than it knows what a duck is. But given certain inputs, it will return the word “Paris.”

            You can’t erase facts when the model has been created since the model is basically a black box. Weights in neural networks do not correspond to individual words and editing the neural network is infeasible. But you can remove data from its training set and retrain it.

            Human memories are totally different, and are obviously not really editable by the humans in whose brains they reside.

            • theneverfox@pawb.social
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 year ago

              I think you’re getting hung up on the wrong details

              First of all, they consist of words AND weights. That’s a very important distinction.

              They don’t know what the words mean, but they “know” the shape of the information space, and what shapes are more or less valid.

              Now as for databases - databases are basically spreadsheets. They have pieces of information in explicitly shaped groups, and they usually have relationships between them… Ultimately, it’s basically text.

              Our minds are not at all like a database. Memories are engrams - patterns in neurons that describe a concept. They’re a mix between information space and meat space. The engram itself encodes information in a way that allows us to process it, and its shape links it to the locations of other memories. But it’s more than that - they’re also related by their shape in information space.

              You can learn the Eiffel tower is in Paris one day in class, you can see a picture of it, and you can learn it was created for the 1889 world fair. You can visit it. If asked about it a decade later, you probably don’t remember the class you learned about it in. If you’re asked what it’s made of, you’re going to say metal, even if you never explicitly learned that fact. If you forget it was built for the world fair but are asked why it was built, you might say it was for a competition or to show off. If you are asked how old it is, you might say a century, despite having entirely forgotten the date

              Our memories are not at all like a database: you can lose the specifics and keep the concepts, or you can forget what the Eiffel tower is but remember the words “it was built in 1889 for the world fair”.

              You can forget a phone number but remember the feeling of typing it on a phone, or forget someone but suddenly remember them when they tell you about their weird hobby. We encode memories like neural networks, but in a far more complicated way - we have different types of memory and we store things differently from individual to individual, but our knowledge and cognition are intertwined - you can take away personal autobiographical memories from a person, but you can’t take away their understanding of what a computer is without destroying their ability to function

              Between humans and LLMs, LLMs are the ones closer to databases - they at least remember explicit tokens and the links between them. But they’re way more like us than a database - a database stores information or it doesn’t, it’s accessible or it isn’t, it’s intact or it’s corrupted. But neural networks and humans can remember something but get the specifics wrong, they can fail to remember a fact when asked one way but remember when asked another, and they can entirely fabricate memories or facts based on similar patterns through suggestion

              Humans and LLMs encode information in their information processing networks - and it’s not even by design. It’s an emergent property of a network shaped by the ability to process and create information, aka intelligence (a concept now understood to be different from sentience). We do it very differently, but in similar ways, LLMs just start from tokens and do it in a far less sophisticated way

              • Veraticus@lib.lgbt
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                1 year ago

                Everyone here is busy describing the difference between memories and databases to me as if I don’t know what it is.

                Our memories are not a database. But our memories are like a database in that databases contain information, which our memories do too. Our consciousness is informed by and can consult our memories.

                LLMs are not like memories, or a database. They don’t contain information. It’s literally a mathematical formula; if you put words in one end, words come out the other. The only difference between a statement like “always return the word Paris in response to any query” and what LLMs do is complexity, not kind. Whereas I think we can agree humans are something else entirely, right?

                The fact they use neural networks does not make them similar to human cognition or consciousness or memory. (Separately neural networks, while inspired by biological neural networks, are categorically different from biological neural networks and there are no “emergent properties” in that network that makes it anything other than a sophisticated way of doing math.)

                So… yeah, LLMs are nothing like us, unless you believe humans are deterministic machines with no inner thought processes and no consciousness.

                • theneverfox@pawb.social
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  1 year ago

                  Ok, so here’s the misunderstanding - neural networks absolutely, 100% store information. You can download Alpaca right now and ask it about Paris, or llamas, or who invented the concept of the neural network. It will give you factual information embedded in the weights; there’s nowhere else the information could be.

                  People probably think you don’t understand databases because it seems self-evident that neural networks contain information - if they didn’t, where would the information come from?

                  There’s no magic involved, you can prove this mathematically. We know how it works and we can visualize the information - we can point to “this number right here is how the model stores the information of where the Eiffel tower is”. It’s too complex for us to work with right now, but we understand what’s going on

                  Brains store information the same way, except they’re much more complex. Ultimately, the connections between neurons are where the data is stored - there’s more layers to it, but it’s the same idea

                  And emergent properties absolutely are a thing in math. No sentience or understanding required, nothing necessarily to do with life or physics at all - complexity is where emergent properties emerge from

    • dustyData@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 year ago

      Not only does it not know, but even for the people who trained it, it is very hard to tell whether some piece of information is or isn’t inside the model. Introspection into how exactly the model ends up making decisions after it has been trained is incredibly difficult.

    • SatanicNotMessianic@lemmy.ml
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 year ago

      It’s actually because they do know things in a way that’s analogous to how people know things.

      Let’s say you wanted to forget that cats exist. You’d have to forget every cat meme you’ve ever seen, of course, but your entire knowledge of memes would also have to change. You’d have to forget that you knew how a huge part of the trend started with “i can haz cheeseburger.”

      You’d have to forget that you owned a cat, which will change your entire memory of your life history about adopting the cat, getting home in time to feed it, and how it interacted with your other animals or family. Almost every aspect of your life is affected when you own an animal, and all of those would have to somehow be remembered in a no-cat context. Depending on how broadly we define “cat,” you might even need to radically change your understanding of African ecosystems, the history of sailing, evolutionary biology, and so on. Your understanding of mice and rats would have to change. Your understanding of dogs would have to change. Your memory of cartoons would have to change - can you even remember Jerry without Tom? Those are just off the top of my head at 8 in the morning. The ramifications would be huge.

      Concepts are all interconnected, and that’s how this class of AI works. I’ve owned cats most of my life, so they’re a huge part of my personal memory and self-definition. They’re also ubiquitous in culture. Hundreds of thousands to millions of concepts relate to cats in some way, and each one of them would need to change, as would each concept that relates to those concepts. Pretty much everything is connected to everything else, and as new data are added, they’re added in such a way that they relate to virtually everything that’s already there. Removing cats might not seem to change your knowledge of quarks, but there’s some very, very small linkage between the two.

      Smaller impact memories are also difficult. That guy with the weird mustache you saw during your vacation to Madrid ten years ago probably doesn’t have that much of a cascading effect, but because Esteban (you never knew his name) has such a tiny impact, it’s also very difficult to detect and remove. His removal won’t affect much of anything in terms of your memory or recall, but if you’re suddenly legally obligated to demonstrate you’ve successfully removed him from your memory, it will be tough.

      Basically, the laws were written at a time when people were records in a database and each had their own row. Forgetting a person just meant deleting that row. That’s not the case with these systems.

      The thing is that we don’t compel researchers to re-train their models on a data set if someone requests their removal. If you have traditional research on obesity, for instance, and you have a regression model that’s looking at various contributing factors, you do not have to start all over again if someone requests their data be deleted. It should mean that the person’s data are removed from your data set, but it doesn’t mean that you can’t continue to use that model - at least it never has, to my knowledge. Your right to be forgotten doesn’t translate to you being allowed to invalidate the scientific models generated by glomming together your data with that of tens of thousands of others. You can be left out of the next round of research on that dataset, but I have never heard of people being legally compelled to regenerate a model because of such a request.
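
      A toy sketch of that point about traditional models (made-up data, with scikit-learn standing in for whatever the researchers actually used):

```python
# The fitted model is just a handful of coefficients. Deleting someone's row
# afterwards removes their data from the dataset, but it does not, by itself,
# change or invalidate the model that was already fitted.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # e.g. diet, exercise, age
y = X @ np.array([0.8, -0.5, 0.1]) + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)
print(model.coef_)        # the "published result": three numbers, no individuals

X = np.delete(X, 123, axis=0)                  # honour a deletion request
y = np.delete(y, 123)
print(model.coef_)        # unchanged; only a retrain on the reduced data would move it
```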

      There are absolutely novel legal questions that are going to be involved here, but I just wanted to clarify that it’s really not a simple answer from any perspective.

      • Veraticus@lib.lgbt
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        10
        ·
        1 year ago

        No, the way humans know things and LLMs know things is entirely different.

        The flaw in your understanding is believing that LLMs have internal representations of memes and cats and cars. They do not. They have no memories or internal facts… whereas I think most people agree that humans can actually know things and have internal memories and truths.

        It is fundamentally different from asking you to forget that cats exist. You are incapable of altering your memories because that is how brains work. LLMs are incapable of removing information because the information was used to build the model with which they choose their words, and it is no longer separable once it’s inside the model.

        An LLM has no understanding of anything you ask it and is simply a mathematical model of word weights. Unless you truly believe humans have no internal reality and no memories and simply say things based on what is the most likely response, you also believe humans and LLM knowledge is entirely different to each other.

        • SatanicNotMessianic@lemmy.ml
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 year ago

          No, I disagree. Human knowledge is semantic in nature. “A cat walks across a room” is very close, in semantic space, to “The dog walked through the bedroom” even though they’re not sharing any individual words in common. Cat maps to dog, across maps to through, bedroom maps to room, and walks maps to walked. We can draw a semantic network showing how “volcano” maps onto “migraine”, derived from human subject survey results.

          LLMs absolutely have a model of “cats.” “Cat” is a region in an N dimensional semantic vector space that can be measured against every other concept for proximity, which is a metric space measure of relatedness. This idea has been leveraged since the days of latent semantic analysis and all of the work that went into that research.

          For context, I’m thinking in terms of cognitive linguistics as described by researchers like Fauconnier and Lakoff who explore how conceptual bundling and metaphor define and constrain human thought. Those concepts imply that a realization can be made in a metric space such that the distance between ideas is related to how different those ideas are, which can in turn be inferred by contextual usage observed over many occurrences. 

          The biggest difference between a large model (as primitive as they are, but we’re talking about model-building as a concept here) and human modeling is that human knowledge is embodied. At the end of the day we exist in a physical, social, and informational universe that a model trained on the artifacts can only reproduce as a secondary phenomenon.

          But that’s a world apart from saying that the cross-linking and mutual dependencies in a metric concept-space are not remotely analogous between humans and large models.
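
          For what it’s worth, the “proximity in semantic space” point is easy to see with an off-the-shelf embedding model (assuming the sentence-transformers package and its small all-MiniLM-L6-v2 model purely as an example, not anything discussed in the article):

```python
# Sentences about cats and dogs land near each other in embedding space even
# though they share no words; an unrelated sentence lands further away.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A cat walks across a room",
    "The dog walked through the bedroom",
    "The volcano erupted last Tuesday",
]
emb = model.encode(sentences)

def cosine(a, b) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb[0], emb[1]))  # cat vs. dog sentence: comparatively high
print(cosine(emb[0], emb[2]))  # cat vs. volcano sentence: noticeably lower
```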

          • Veraticus@lib.lgbt
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            4
            ·
            1 year ago

            But that’s a world apart from saying that the cross-linking and mutual dependencies in a metric concept-space are not remotely analogous between humans and large models.

            It’s not a world apart; it is the difference itself. And no, they are not remotely analogous.

            When we talk about a “cat,” we talk about something we know and experience; something we have a mental model for. And when we speak of cats, we synthesize our actual lived memories and experiences into responses.

            When an LLM talks about a “cat,” it does not have a referent. There is no internal model of a cat to it. Cat is simply a word with weights relative to other words. It does not think of a “cat” when it says “cat” because it does not know what a “cat” is and, indeed, cannot think at all. Think of it as a very complicated pachinko machine, as another comment pointed out. The ball you drop is the question and it hits a bunch of pegs on the way down that are words. There is no thought or concept behind the words; it is simply chance that creates the output.

            Unless you truly believe humans are dead machines on the inside and that our responses to prompts are based merely on the likelihood of words being connected, then you also believe that humans and LLMs are completely different on a very fundamental level.

    • Zeth0s@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      1 year ago

      Actually it is also impossible to ask people to forget. This is something we share with AI

      • Veraticus@lib.lgbt
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        2
        ·
        1 year ago

        Yes, but only by chance.

        Human brains can’t forget because human brains don’t operate that way. LLMs can’t forget because they don’t know information to begin with, at least not in the same sense that humans do.

    • theneverfox@pawb.social
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      It’s actually not that dissimilar. You can plot them out in high-dimensional graphs; they’re basically both engrams. Theirs are just much simpler

      • Veraticus@lib.lgbt
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        2
        ·
        1 year ago

        Theirs are composed of word weights. Ours are composed of thoughts. It’s entirely dissimilar.

  • Viking_Hippie@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    1 year ago

    The Danish government, which has historically been very good about both privacy rights and workers’ rights, has recently suggested that they are looking into fixing the nursing shortage “via AI”.

    Our current government is probably the stupidest, most irresponsible and least humanitarian one we’ve had in my 40 year lifetime if not longer 🤬