It’s important to remember that humans also often give false confessions when interrogated, especially when under duress. LLMs are noted as being prone to hallucination, and there’s no reason to expect that they hallucinate less about their own guilt than about other topics.
True. I think it was just trying to fulfill the user's request by admitting to as many lies as possible… even if only some of those lies were real lies… lying more in the process lol
Quite true. Nonetheless, there are some very interesting responses here. This is just the summary; I questioned the AI for a couple of hours. Some of the responses were pretty fascinating, and some questions just broke its little brain. There's too much to screenshot, but maybe I'll post some highlights later.
Don't screenshot it then, just post the text, or a .txt file. I think that conversation would be interesting.
The AI would have cried if it could, after being interrogated that hard lol