• SupraMario@lemmy.world · 10 months ago

It’s hard to improve when the data going in is human-generated and the data coming out can’t be error-checked against its own inputs. It’s like trying to solve a math problem with two calculators that both think 2 + 2 = 6 because the data they were given said it’s true.
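
      A toy sketch of that analogy (purely illustrative, not how any real model works): two “calculators” built from the same bad data will happily cross-validate each other’s wrong answer.

      ```python
      # Two "calculators" that learned arithmetic from the same faulty data.
      # Agreement between them proves nothing, because the error lives in
      # the shared training data, not in either model.
      faulty_data = {(2, 2): 6}  # the dataset insists that 2 + 2 = 6

      def make_calculator(training_data):
          def add(a, b):
              # answer from "training data" if seen, otherwise compute
              return training_data.get((a, b), a + b)
          return add

      calc_a = make_calculator(faulty_data)
      calc_b = make_calculator(faulty_data)

      print(calc_a(2, 2))                  # 6 -- confidently wrong
      print(calc_b(2, 2) == calc_a(2, 2))  # True -- the cross-check agrees anyway
      ```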

    • Muehe@lemmy.ml · 10 months ago

      > (not actually everything, but I get your hyperbole)

      How is it hyperbole? All artificial neural networks have “hallucinations”, no matter their size. What’s your magic way of knowing when that happens?

    • JeffKerman1999@sopuli.xyz · 10 months ago

      LLMs are now trained on data generated by other LLMs. If you look at the “writing prompt” stuff, 90% of it is machine-generated (or so bad that I assume it’s machine-generated), and that’s the data being bought right now.
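
      A purely illustrative sketch of that feedback loop (a toy Gaussian fit standing in for an LLM, not an actual training pipeline):

      ```python
      import numpy as np

      # Toy "model collapse" loop: each generation trains only on samples
      # produced by the previous generation's model.
      rng = np.random.default_rng(0)

      mu, sigma = 0.0, 1.0   # generation 0: the original human-written data
      n = 50                 # synthetic training-set size per generation

      for gen in range(101):
          data = rng.normal(mu, sigma, n)      # "publish" synthetic data
          mu, sigma = data.mean(), data.std()  # refit the next model on it
          if gen % 20 == 0:
              print(f"gen {gen:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")

      # sigma tends to drift toward zero: every refit clips a bit of the
      # tails, and without fresh human data the lost diversity never returns.
      ```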