with early “grieftech” entrepreneur Helena Blavatsky

  • ShakingMyHead@awful.systems · 4 months ago

    TBF what’s the difference between an “AI” séance and a “real” one? It’s not like one is less of a grift than the other.

      • ShakingMyHead@awful.systems · 4 months ago

        That’s the problem.

        Maybe I’m just reading the room wrong, but the consensus of the comment section seems to be “haha he thinks AI can replace psychics” but all psychics do is take what people said, reframe it and maybe add some nonsense after that can sound vaguely correct. For the most part, that is LLMs. LLMs could replace the psychic industry because unlike other professions there’s zero obligation to be correct about anything.

        Is it unethical? Absolutely. But psychics are already frauds so it’s not like a legitimate profession is being replaced.

        Of course, I could just be misreading the room.

  • barsquid@lemmy.world · 4 months ago

    Neat little vignette about a vile sociopath scamming people in mourning. And, of course, zero consideration for the grieving people being scammed. “ChatGPT please help me feel like he’s still here,” “actually he is already in hell.”

  • o7___o7@awful.systems · edited · 4 months ago

    All my homies hate Helena Blavatsky. Her grifty bullshit has caused so much human misery.

    This Rohrer character is a worthy successor.

  • luciole (he/him)@beehaw.org · 4 months ago

    Oh no. I mainly knew Jason Rohrer for his video games. For a few minutes I hoped this was some elaborate prank, but apparently he drank the Kool-Aid. He wrote that Project December’s chatbot was arguably the first machine with a soul. I preferred the playful minimalist existentialism of Passage.

    • 200fifty@awful.systems · 4 months ago

      ngl his stuff always felt a bit cynical to me, in that it seemed to exist more to say “look, video games can have a deep message!” than it did to just have such a message in the first place. Like it existed more to gesture at the concept of meaningfulness rather than to be meaningful itself.

  • zbyte64@awful.systems · 4 months ago

    Obviously they should have trained the AI on text from John Edwards’ Crossing Over. /s

    But seriously, I worry when one of these hacks figures out how to make the word multipliers approximate a cold reading.

  • FermiEstimate@lemmy.dbzer0.com · 4 months ago

    Addressing the “in hell” response that made headlines at Sundance, Rohrer said the statement came after 85 back-and-forth exchanges in which Angel and the AI discussed long hours working in the “treatment center,” working with “mostly addicts.”

    We know 85 is the upper bound, but I wonder what Rohrer would consider the minimum number of “exchanges” acceptable for telling someone their loved one is in hell? Like, is 20 in “Hey, not cool” territory, but it’s all good once you get to 50? 40?

    Rohrer says that when Angel asked if Cameroun was working or haunting the treatment center in heaven, the AI responded, “Nope, in hell.”

    “They had already fully established that he wasn’t in heaven,” Rohrer said.

    Always a good sign when your best defense of the horrible thing your chatbot says is that it’s in context.

    • self@awful.systems · 4 months ago

      it’s very telling that 85 messages is considered a lot. your grief better resolve quick before the model loses coherency and starts digging quotes out of a plagiarized horror movie script

      fuck it’s gross how one of the common use cases for LLMs is targeting vulnerable people with the hope they’ll develop a parasocial relationship with your service, so you can keep charging them forever