The public versions of the AIs used in Turing tests usually have less computing power. The test itself is also often highly restricted in what questions can be asked and how.
This is hardly news, because models have passed the test before, and as a result the test has been made more difficult. It says nothing about intelligence, only about the ability to convincingly simulate a human conversation.
Forget psychotherapy, it can barely understand anything:
> Hello, I am Eliza. I'll be your therapist today. * Hello! How are you?
> Would you prefer if I were not ? * Not what?
> Have you asked such questions before? * No
> You are being a bit negative and that won't help us move forward. * What are you smoking?
> Perhaps in your fantasies I am smoking?
In the 1970s, psychiatrists couldn’t distinguish between PARRY, a chatbot simulating paranoia, and a human patient suffering from schizophrenia. So this ‘bot convinces judges that it’s a human’ phenomenon is not new and tells us more about how humans think. #AI #DeceptionDetection #Chatbots #TuringTest #LLMs #GPT4
Okay, 22% is ridiculously high for ELIZA. I feel like any half-sober adult could clock it as a bot by the third response, if not immediately.
Try talking to the thing: https://web.njit.edu/~ronkowit/eliza.html
I refuse to believe that 22% didn’t misunderstand the task or something.
14% of people can’t do anything on a computer more complicated than deleting an email.
26% can’t use a computer at all.
https://www.nngroup.com/articles/computer-skill-levels/
So right off the bat, 40% probably don’t even know what a chatbot is.
I did some stuff with Eliza back then. One time I set up an Eliza database full of insults and hooked it up to my AIM account.
It went so well that I had to apologize to a lot of people who thought I was drunk or had gone crazy.
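For anyone wondering what "an Eliza database" amounts to under the hood: it's essentially keyword patterns mapped to canned reflections, with a fallback when nothing matches. A minimal sketch in Python (the rules here are made up for illustration, not Weizenbaum's original script):

```python
import random
import re

# Keyword patterns mapped to reply templates; {0} is filled with the
# captured fragment of the user's message.
RULES = [
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.I),
     ["Do you often feel {0}?"]),
    (re.compile(r"\bbecause\b", re.I),
     ["Is that the real reason?"]),
]
# Canned non-answers for when no pattern matches -- the trick that
# makes ELIZA seem attentive without understanding anything.
FALLBACK = ["Please tell me more.", "How does that make you feel?"]

def respond(text: str, rng=random.Random(0)) -> str:
    for pattern, answers in RULES:
        match = pattern.search(text)
        if match:
            return rng.choice(answers).format(*match.groups())
    return rng.choice(FALLBACK)

print(respond("I am sad about my computer"))
print(respond("hello there"))
```

Swapping the reply templates for insults, as described above, needs no other changes, which is exactly why the script-database design made ELIZA so easy to repurpose.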
Eliza wasn’t thaaaaat bad.
This is the same bot. There’s no way this passed the test.
It was a 5-minute test. People probably spent 4 of those minutes typing their questions.
This is pure pseudo-science.
Yeah, it took me one message lol
You underestimate how dumb some people can be.