• §ɦṛɛɗɗịɛ ßịⱺ𝔩ⱺɠịᵴŧ@lemmy.ml
    2 years ago

    The fact that ChatGPT doesn’t provide sources is detrimental. We’re in the early stages, where errors are not uncommon, and citing the information used to generate its paraphrases would help instill confidence in its responses. This is a primary reason I prefer perplexity.ai over ChatGPT — plus there’s no need to create an account with Perplexity!

    • Ephera@lemmy.ml
      2 years ago

      I mean, I just looked at one example: https://www.perplexity.ai/?s=u&uuid=aed79362-b763-40a2-917a-995bf2952fd7

      While it does provide sources, it partially did not actually take information from them (the second source contradicts the first), or it lost a lot of context on the way — for example, the first source only talks about black ants, not all ants.

      That’s not the point I’m trying to make here. I just didn’t want to dismiss your suggestion without even looking at it.

      Because yeah, my understanding is that we’re not in “early stages”. We’re in the late stages of what these Large Language Models are capable of. And we’ll need significantly more advances in AI before they truly start to understand what they’re writing.
      Because these LLMs are just chaining words together that seem to occur together in the wild. Some fine-tuning of datasets and training techniques may still be possible, but the underlying problem won’t go away.
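      To make that "chaining words together" point concrete, here’s a minimal sketch of the idea using a toy bigram model — the simplest possible version of next-word prediction. Real LLMs use learned neural networks over tokens, not raw counts, but the principle is the same: pick a likely continuation, with no model of whether it’s *true*. The corpus and names here are made up for illustration:

      ```python
      import random
      from collections import Counter, defaultdict

      # Toy corpus standing in for "words that occur together in the wild".
      corpus = "ants are insects ants are small ants eat sugar".split()

      # Count which word follows which (a bigram model).
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def next_word(word, rng=random):
          """Pick a continuation weighted by how often it followed `word`."""
          candidates = follows[word]
          return rng.choices(list(candidates), weights=list(candidates.values()))[0]

      # "ants" was followed by "are" twice and "eat" once, so "are" is the
      # likeliest continuation -- no understanding of ants is involved.
      print(follows["ants"].most_common(1)[0][0])  # -> are
      ```

      Nothing in that table knows what an ant is; it only knows which strings tend to co-occur. Scaling the statistics up makes the output far more fluent, but it doesn’t add the kind of source-comparing judgment described below.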

      You need to actually understand what these sources say in order to compare them and determine that they conflict, and why. It’s also not enough to pick one of the sources and summarize it: as a human, an essential part of research is reading multiple sources to get a proper feel for the ‘correct’ answer — and for why there is often not just one correct answer.