• silence7@slrpnk.netOPM · 21 hours ago

    That’s still not into the realm where I trust it; the underlying model is a language model. What you’re describing is a recipe for ending up with paltering a significant fraction of the time.

    • jkintree@slrpnk.net · 19 hours ago

      Did you even try diffy.chat to test how factually correct it is and how well it cites its sources? How good does it have to be to be useful? How bad does it have to be to be useless?

      • silence7@slrpnk.netOPM · 19 hours ago

        I tried it. It produces reasonably accurate results a meaningful fraction of the time. The problem is that when it’s wrong, it still uses authoritative language, and you can’t tell the difference without underlying knowledge.

        • jkintree@slrpnk.net · 18 hours ago

          There does need to be a mechanism that keeps humans in the loop, so that people with the underlying knowledge can correct the knowledge base. Perhaps, when a correction is made, a notification should be sent to everyone who previously viewed the incorrect information.
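
          A minimal sketch of what such a mechanism could look like, assuming a simple in-memory store; the Fact, KnowledgeBase, and notify names are hypothetical illustrations, not anything diffy.chat actually implements:

          ```python
          from dataclasses import dataclass, field


          @dataclass
          class Fact:
              fact_id: str
              text: str
              viewers: set[str] = field(default_factory=set)  # users who have been shown this fact


          def notify(user: str, message: str) -> None:
              # Stand-in for a real delivery channel (email, DM, feed item, ...).
              print(f"[to {user}] {message}")


          class KnowledgeBase:
              def __init__(self) -> None:
                  self.facts: dict[str, Fact] = {}

              def record_view(self, fact_id: str, user: str) -> str:
                  """Serve a fact and remember who has seen it."""
                  fact = self.facts[fact_id]
                  fact.viewers.add(user)
                  return fact.text

              def correct(self, fact_id: str, new_text: str, editor: str) -> list[str]:
                  """Apply a human correction, then notify everyone who saw the old version."""
                  fact = self.facts[fact_id]
                  old_text, fact.text = fact.text, new_text
                  recipients = sorted(fact.viewers)
                  for user in recipients:
                      notify(user, f"{editor} corrected a statement you viewed: "
                                   f"'{old_text}' is now '{new_text}'")
                  return recipients


          # Usage: alice views a stale entry, bob fixes it, alice gets notified.
          kb = KnowledgeBase()
          kb.facts["f1"] = Fact("f1", "old, incorrect statement")
          kb.record_view("f1", "alice")
          kb.correct("f1", "corrected statement", editor="bob")
          ```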