“The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.”

It’s getting worse. And because it’s a black-box model, they don’t know why. The computer science professor here likens it to how human students make mistakes… but human students make mistakes because they don’t have perfect recall, mishear what they’re told, or are tired and/or not paying attention… a bunch of reasons that basically come down to having a human body that needs food, rest, and water. A thing a computer does not have.

The only reason ChatGPT should be getting math wrong is that it’s getting inputs that are wrong, but without visibility into the model they can’t figure out where it’s going wrong or who fed it the wrong info.

  • CarrieForle@kbin.social
    1 year ago

    And there are more and more offline GPT-style AIs available for free. Now anyone with an above-average computer can run their own ChatGPT.

    • BarbecueCowboy@kbin.social
      1 year ago

      It’s still pretty rough to self-host an LLM. You can get one that’s kind of okay on an average computer, but to run a really competitive one locally at a good speed, you need an amount of RAM (or VRAM, for GPU-based setups) that is still beyond most average users.

      I’ve been trying to get Vicuna going and the RAM usage is rough: 60 GB is suggested, I’ve got 64, and honestly I think I need a lot more.
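
      The RAM figures above roughly track model size. A back-of-the-envelope sketch of why (the 33B parameter count, bytes-per-parameter figures, and 20% overhead factor are illustrative assumptions, not numbers from this thread):

      ```python
      # Rough estimate of RAM needed to hold an LLM's weights in memory.
      # Assumptions: weights dominate the footprint; activations and the
      # KV cache are folded into a flat ~20% overhead factor.

      def model_ram_gb(params_billion: float, bytes_per_param: float,
                       overhead: float = 1.2) -> float:
          """RAM in GB for the weights alone, plus a rough overhead margin."""
          return params_billion * 1e9 * bytes_per_param * overhead / 1e9

      # A hypothetical 33B-parameter model at fp16 (2 bytes/param)
      # versus 4-bit quantization (0.5 bytes/param):
      print(f"fp16:  {model_ram_gb(33, 2.0):.0f} GB")   # ~79 GB
      print(f"4-bit: {model_ram_gb(33, 0.5):.0f} GB")   # ~20 GB
      ```

      This is why quantized builds are the usual route for home hardware: dropping from 16-bit to 4-bit weights cuts the footprint roughly fourfold, at some cost in output quality.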