Outside of English, ChatGPT makes up words, fails logic tests, and can’t do basic information retrieval.

  • FaceDeer@kbin.social · 1 year ago

    ChatGPT is actually able to translate information it learns in one language into other languages, so if it's having trouble speaking Bengali and such, it must simply not know the language itself very well. I recall a study where an LLM was trained on some new information using English training data and then asked about it in French, and it was able to talk about what it had learned in French.
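
    A rough way to probe this yourself, sketched below with the openai Python client; the model name is just a placeholder, and this is a sketch rather than a rigorous test:

    ```python
    # Ask the same factual question in two languages and compare answers.
    # If cross-lingual transfer works, both should agree, even though most
    # of the relevant training data is presumably in English.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompts = {
        "en": "In what year was the Eiffel Tower completed?",
        "fr": "En quelle année la tour Eiffel a-t-elle été achevée ?",
    }

    for lang, prompt in prompts.items():
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(lang, "->", resp.choices[0].message.content)
    ```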

      • GenderNeutralBro@lemmy.sdf.org · 1 year ago

        That’s an important point you raise. I feel like a big problem with the LLM projects we see today, including ChatGPT, Bard, etc., is that the developers have tunnel vision. Rather than using the LLM as one component of a system with many well-researched traditional algorithms doing what they do best, they want to do everything within the network.

        This makes sense from a research perspective. It doesn’t make sense from an end-product perspective.

        The more I play with LLMs, the more I feel like their true value is as something like “regular expressions on crack”.
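
        For instance, here is a minimal sketch of what I mean, assuming the openai Python client; the model name, prompt, and helper are placeholders. The LLM only turns messy free text into JSON, and ordinary deterministic code does everything downstream:

        ```python
        # Use the LLM purely as a fuzzy parser ("regex on crack");
        # the rest of the system is plain, well-understood code.
        import json
        from openai import OpenAI

        client = OpenAI()

        def extract_event(text: str) -> dict:
            """Hypothetical helper: normalize free text into structured JSON."""
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",  # placeholder model name
                messages=[{
                    "role": "user",
                    "content": 'Reply with only JSON like {"city": ..., '
                               '"date": "YYYY-MM-DD"} for: ' + text,
                }],
            )
            # Sketch only: assumes the model returns clean JSON.
            return json.loads(resp.choices[0].message.content)

        # Traditional logic handles the structured result:
        event = extract_event("meetup next tues in SF, the 12th of March 2024")
        if event["city"].lower() in {"sf", "san francisco"}:
            print("Local event on", event["date"])
        ```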

    • Dudewitbow@lemmy.ml · 1 year ago

      Of course. But translation brings mistranslations, especially with tier 3+ languages (the hardest tiers for an English speaker to learn). The data is subject to the accuracy of the translation, and ChatGPT's translation is still pretty far from perfect.

      • FaceDeer@kbin.social · 1 year ago

        Ah, I had interpreted your comment to mean that you thought ChatGPT wouldn’t know how to answer a question in Bengali unless the information it needed to solve the problem had been part of its Bengali training set. My bad.