Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology’s ability…

    • killerinstinct101@lemmy.world · 1 year ago

      This is what was addressed at the start of the comment: you can just roll back to a previous version. It’s heavily ingrained in CS to keep every single version of your software forever.

      • CaptainAniki@lemmy.flight-crew.org · 1 year ago

        I don’t think it’s that easy. These are LLMs that feed back on themselves to produce “better” results, and these models don’t have single point-release cycles. It’s a constantly evolving blob of memory and storage orchestrated across a vast number of disk arrays and cabinets of hardware.

        [e] I was wrong; the models are version-controlled and do have releases.

        • drspod@lemmy.ml · 1 year ago

          That’s not how these LLMs work. There is a training phase, which takes a large amount of compute power; training generates a model, which is a set of weights that could easily be backed up and version-controlled. The model is then used for inference, a less compute-intensive process that runs on much smaller hardware than the training phase.

          The inference architecture does use feedback mechanisms, but the feedback does not modify the model weights that were generated at training time.
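
          In code terms, that split looks something like this (a toy sketch to illustrate the point, not OpenAI’s actual stack; the tiny model and the file name are illustrative stand-ins):

          ```python
          # Toy sketch: training produces a weights file that can be
          # backed up and version-controlled; inference loads it frozen.
          import torch
          import torch.nn as nn

          model = nn.Linear(4, 1)  # stand-in for a large language model
          opt = torch.optim.SGD(model.parameters(), lr=0.01)

          # Training phase: compute-heavy, the weights change every step.
          for _ in range(100):
              x = torch.randn(8, 4)
              loss = (model(x) - 1.0).pow(2).mean()
              opt.zero_grad()
              loss.backward()
              opt.step()

          # The trained model is just a set of weights: save and version it.
          torch.save(model.state_dict(), "model-v1.pt")

          # Inference phase: load the frozen weights. Nothing below ever
          # writes back to them, feedback mechanisms or not.
          frozen = nn.Linear(4, 1)
          frozen.load_state_dict(torch.load("model-v1.pt"))
          frozen.eval()
          with torch.no_grad():
              print(frozen(torch.randn(1, 4)))
          ```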

            • drspod@lemmy.ml · 1 year ago

              They list the currently available models that users of their API can select here:

              https://platform.openai.com/docs/models/overview

              They even say that while the main models are being continuously updated (read: re-trained), there are snapshots of previous models that will remain static.

              So yes, they are storing and snapshotting the models and they have many different models available with which to perform inference at the same time.
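
              For example, with the OpenAI Python client you can pin a request to one of those static snapshots rather than the continuously updated alias (a sketch; the dated snapshot name below is one from that docs page and may be retired over time):

              ```python
              # Sketch: pinning inference to a dated snapshot vs. the
              # moving alias. Model names are examples from the docs page.
              from openai import OpenAI

              client = OpenAI()  # reads OPENAI_API_KEY from the environment

              # Dated snapshot: a fixed set of weights that won't change.
              pinned = client.chat.completions.create(
                  model="gpt-4-0613",
                  messages=[{"role": "user", "content": "Is 17077 a prime number?"}],
              )

              # Bare alias: tracks whatever the latest retrained model is.
              tracking = client.chat.completions.create(
                  model="gpt-4",
                  messages=[{"role": "user", "content": "Is 17077 a prime number?"}],
              )
              ```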

            • Lukecis@lemmy.world · 1 year ago

              Makes me wonder how exactly they curate said data; it’s such an insane amount that even teams of thousands of human programmers sifting through it 24/7 wouldn’t be able to fact-check or assess all of it for years. Presumably they use AI to go over the scraped data thrown into the model, since I can’t imagine any human being able to curate it all.

              I’ve heard from various videos detailing the topic that many of the developers have little to no clue what’s going on inside the LLM once it’s assembled and set about its work of training and whatnot, and I’m inclined to believe them. The human programmers simply set up the parameters and the system, then the system eats all the data loaded into it and immediately becomes a sort of black box; nobody knows exactly what’s going on inside it to produce the output it does.

        • agent_flounder@lemmy.one · 1 year ago

          Even so, surely they can take snapshots. If they’re that clueless about rudimentary IT-operations practices, then it’s just a matter of time before an outage wipes everything. I find it hard to believe nobody considered a way to do backups, rollbacks, or any of that.