• saltesc@lemmy.world
    18 hours ago

    “Terrifying” is not the word to describe a wrong-number scenario. And the rest of the conversation was with someone who doesn’t understand LLMs, shared with other people who don’t understand LLMs.

    There’s a very thin curtain hiding the “magic”; just go take a peek behind it so you’re more aware and less upset when it lets you down.

  • hendrik@palaver.p3x.de
    18 hours ago

    Lots of anthropomorphism going on here. Most of the time you can’t ask a chatbot where it got a number from, and pressing on achieves nothing. Unlike a human, who knows something about his or her workplace and which database they used, or whether they made something up, an LLM does not. It’s just embedded into some framework, and I seriously doubt Meta taught it about the internal structures and what kinds of databases there are. Why would they? At best the AI can tell what tool it used, if there are any. But I’d say in this case it likely just made up a number that happened to belong to someone. If the number is on some website, it could have been scraped and be in the training data as well. And since he demanded the AI explain itself, it just went ahead and made up some random excuses.