Is there a non-AI-enhanced one that would be less prone to random hallucinations?
Non-AI options can also have “hallucinations”, i.e., false positives and false negatives, so if the AI one has a lower false positive/false negative rate, I’m all for it.
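For concreteness, here’s a minimal sketch (in Python, with made-up counts, not from any real study) of the two error rates being compared:

    # Hypothetical confusion-matrix counts for a screening tool
    # (illustrative numbers only).
    tp, fp = 90, 15   # true positives, false positives
    tn, fn = 880, 15  # true negatives, false negatives

    fpr = fp / (fp + tn)  # false positive rate: healthy flagged as sick
    fnr = fn / (fn + tp)  # false negative rate: sick missed as healthy

    print(f"FPR = {fpr:.1%}, FNR = {fnr:.1%}")

Whichever tool, AI or not, scores lower on both is the better screen, regardless of how it works internally.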
Non-AI options are comprehensible so we can understand why they’re failing. When it comes to AI systems we usually can’t reason about why or when they’ll fail.
But people are getting dumb and would prefer a magic box they don’t understand over a method where they can know when it’s wrong.
Are hallucinations a problem outside of LLMs?
I wouldn’t trust a program that can’t accurately count fingers to detect diseases.
But that’s just me.