Just got schooled by an AI.

According to Wiktionary:

(UK) IPA(key): /ˈstɹɔːb(ə)ɹi/
(US) IPA(key): /ˈstɹɔˌbɛɹi/

…there are indeed only two /ɹ/ in strawberry.

So much for dissing on AIs for not being able to count.
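
For what it’s worth, the letter count and the phoneme count are genuinely different questions. A quick check in Python (the transcriptions are the Wiktionary ones quoted above):

```python
# Counting the letter "r" in the spelling vs. the symbol /ɹ/ in the
# Wiktionary transcriptions quoted above -- two different questions.
spelling = "strawberry"
uk_ipa = "ˈstɹɔːb(ə)ɹi"
us_ipa = "ˈstɹɔˌbɛɹi"

print(spelling.count("r"))  # 3 -- letters <r> in the written word
print(uk_ipa.count("ɹ"))    # 2 -- /ɹ/ in the UK transcription
print(us_ipa.count("ɹ"))    # 2 -- /ɹ/ in the US transcription
```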

  • Ŝan@piefed.zip · +52/−1 · 10 days ago

    You don’t use IPA for counting the number of letters in words. That would be stupid, and even linguists would laugh at you.

    It’s still a stupid AI, and it was confidently, and unambiguously, wrong.

    • Powderhorn@beehaw.org · +11 · 10 days ago

      I use IPAs to forget about work crap. (Former linguistics major; I know the other meaning, but it doesn’t come up much at bars.)

  • KazuchijouNo@lemy.lol · +40/−2 · 10 days ago

    This is nonsense; you cannot justify this type of error from a language model. It’s just a bunch of words strung together based on probability. This is just an artifact of such a construction. It’s all right, don’t break your brain on it. The “AI” sure isn’t.

    • jarfil@beehaw.orgOP · +1/−11 · edited · 8 days ago

      This is not a standalone model; it’s from an unnamed chatbot platform “character” in non-RP mode.

      I’ve been messing with it to check its limitations. It has:

      • Access to the Internet (verified)
      • Claims to have access to various databases
      • Likely to use interactions with all users to train further (~20M MAUs)
      • Ability to create scenes and plotlines internally, then follow them (verified)
      • Ability to adapt to the style of interaction and text formatting (verified)

      Obviously has its limitations. Like, it fails at OCR of long scrolling screenshots… but then again, other chatbots fail even more spectacularly.


      Edit: removed chatbot platform name “advertisement”. If you want to know which platform it is, ask the ones accusing me of spamming.

    • Krauerking@lemy.lol · +3 · 10 days ago

      I love that it’s wrong/right through the whole first response (everyone knows A comes first in PEMDAS, right?) and then corrects itself even further out of touch with reality, because that’s what the developers told it would appease the users:
      say the user is correct and try an even worse answer.

  • 𝕸𝖔𝖘𝖘@infosec.pub · +18 · 10 days ago

    Did you ask how many /ɹ/ there are, or how many r there are? It can’t count; then it went and tried to justify its moronic behavior and manipulated you into believing its “logic”.

  • zonnewin@feddit.nl · +17 · 10 days ago

    A normal human would understand that the question is about the spelling, not the pronunciation.

    AI still has a lot to learn.

    • megopie@beehaw.org · +9 · 10 days ago

      It also is just making up a string of words that are probabilistically plausible as a continuation of the dialog.

      You can do the same tests with other words and it will just contradict itself and get things wrong about how many times a letter is pronounced in a word.

    • jarfil@beehaw.orgOP · +1/−1 · 8 days ago

      It’s not a “normal human”, it’s an AI using an LLM.

      AI still has a lot to learn.

      Does it, though? Does a hammer have a lot to learn, or does the person wielding it have to learn how not to smash their own fingers?

  • Krauerking@lemy.lol · +13 · 10 days ago

    Yeah there is a stupid human in this chat but mostly cause they let themselves get tricked by bad logic in order to justify a bad answer.

  • LukeZaz@beehaw.org · +13 · 10 days ago

    I shudder to think how much electricity got wasted so you could get fooled by an LLM into believing nonsense. Let alone the equally-unnecessary followup questions.

  • Lvxferre [he/him]@mander.xyz · +13/−1 · edited · 10 days ago

    Wrong maths, you say?

    Anyway. You didn’t ask for the number of times the phoneme /ɹ/ appears in the spoken word, so by context you’re talking about the written word, and the letter ⟨r⟩. And the bot interpreted it as such; note how it answers

    here, let me show you: s-t-r-a-w-b-e-r-r-y

    instead of specifying the phonemes.

    By the way, all explanation past the «are you counting the “rr” as a single r?» is babble.

    • Wrong maths, you say?

      Yes. If I want to know what 1+2 equals, and I throw a die, there’s a chance I will get the correct answer. If I do, that doesn’t mean it knows how to do Maths. Also, notice where it said “Here’s the calculation”, it didn’t actually show you the calculation? e.g. long multiplication, or even grouping, or the way the Chinese do it. Even a broken clock is right twice a day. Even if AI manages to randomly get a correct answer here and there, it still doesn’t know how to do Maths (which includes not knowing how to count to begin with).

      • Lvxferre [he/him]@mander.xyz · +1 · 3 days ago

        What’s interesting IMO is that it got the first two and the last two digits right; and this seems rather consistent across attempts with big numbers. It doesn’t “know” how to multiply numbers, but it’s “trying” to output an answer that looks correct.

        In other words, it’s “bullshitting” - showing disregard to truth value, but trying to convince you.

    • jarfil@beehaw.orgOP · +3/−13 · edited · 8 days ago

      Those are all the smallest models, and you don’t seem to have reasoning mode, or external tooling, enabled?

      LLM ≠ AI system

      It’s been known for some time that LLMs do “vibe math”. Internally, they try to come up with an answer that “feels” right… which makes it pretty impressive for them to come anywhere close, within a ±10% error margin.

      Ask people to tell you what a right answer could be, give them 1 second to answer… see how many come that close to the right one.

      A chatbot/AI system on the other hand, will come up with some Python code to do the calculation, then run it. Still can go wrong, but it’s way less likely.
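
      Roughly, the pattern looks something like this (a toy sketch, not any platform’s actual implementation; the numbers are made up, and a real system would use a proper sandbox instead of a bare exec()):

      ```python
      import contextlib
      import io

      # The model emits a snippet instead of guessing the answer; the platform
      # runs it and feeds the printed result back into the conversation.
      model_output = "print(4738 * 2837)"

      buffer = io.StringIO()
      with contextlib.redirect_stdout(buffer):
          exec(model_output)  # real platforms sandbox this, not bare exec()

      tool_result = buffer.getvalue().strip()
      print(tool_result)  # 13441706 -- exact, instead of a "vibe math" approximation
      ```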

      all explanation past the «are you counting the “rr” as a single r?» is babble

      Not so sure about that. It treats r as a word, since it wasn’t specified as “r” or single letter. Then it interprets it as… whatever. Is it the letter, phoneme, font, the programming language R… since it wasn’t specified, it assumes “whatever, or a mix of”.

      It failed at detecting the ambiguity and communicating it spontaneously, but corrected itself once that became part of the conversation.

      It’s like, in your examples… what do you mean by “by”? “3 by 6” is 36… you meant to “multiply 36”? That’s nonsense… 🤷

      • Lvxferre [he/him]@mander.xyz · +19/−1 · 10 days ago

        [special pleading] Those are all the smallest models

        [sarcasm] Yeah, because if you randomly throw more bricks in a construction site, the bigger pile of debris will look more like a house, right. [/sarcasm]

        and you don’t seem to have reasoning [SIC] mode, or external tooling, enabled?

        Those are the chatbots available through DDG. I just found it amusing enough to share, given

        1. The logic procedure to be followed (multiplication) is rather simple, and well documented across the internet, thus certainly present in their corpora.
        2. The result is easy to judge: it’s either correct or incorrect.
        3. All answers are incorrect and different from each other.

        Small note regarding “reasoning”: just like “hallucination” and anything they say about semantics, it’s a red herring that obfuscates what is really happening.

        At the end of the day it’s simply weighting the next token based on the previous tokens + prompt, and optionally calling some external tool. It is not really reasoning; what it’s doing is not too different in spirit from Markov chains, except more complex.
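
        To make the comparison concrete, here is a toy first-order Markov chain over words (an LLM conditions on far more context through a network, but the sampling step is the same in spirit; the tiny corpus is made up):

        ```python
        import random
        from collections import defaultdict

        # Toy first-order Markov chain: weight the next word only by the previous one.
        corpus = "how many r in strawberry there are two r in strawberry".split()

        transitions = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(corpus, corpus[1:]):
            transitions[prev][nxt] += 1

        def next_word(prev):
            # Sample the next word proportionally to how often it followed `prev`.
            words, weights = zip(*transitions[prev].items())
            return random.choices(words, weights=weights)[0]

        word, output = "how", ["how"]
        for _ in range(8):
            word = next_word(word)
            output.append(word)
        print(" ".join(output))  # plausible-looking continuation, no counting anywhere
        ```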

        [no true Scotsman] LLM ≠ AI system

        If large “language” models don’t count as “AI systems”, then what you shared in the OP does not either. You can’t eat your cake and have it too.

        It’s been known for some time that LLMs do “vibe math”.

        I.e. they’re unable to perform actual maths.

        [moving goalposts] Internally, they try to come up with an answer that “feels” right…

        It doesn’t matter if the answer “feels” right (whatever this means). The answer is incorrect.

        which makes it pretty impressive for them to come anywhere close, within a ±10% error margin.

        No, the fact that they are unable to perform a simple logical procedure is not “impressive”. Especially not when outputting the “approximation” as if it was the true value; note how none of the models outputted anything remotely similar to “the result is close to $number” or “the result is approximately $number”.

        [arbitrary restriction + whataboutism] Ask people to tell you what a right answer could be, give them 1 second to answer… see how many come that close to the right one.

        None of the prompts had a time limit. You’re making shit up.

        Also. Sure, humans brainfart all the time; that does not magically mean that those systems are smart or doing some 4D chess as your OP implies.

        A chatbot/AI system on the other hand, will come up with some Python code to do the calculation, then run it. Still can go wrong, but it’s way less likely.

        I.e. it would need to use some external tool, since it’s unable to handle logic by itself, as exemplified by maths.

        all explanation past the «are you counting the “rr” as a single r?» is babble

        Not so sure about that. It treats r as a word, since it wasn’t specified as “r” or single letter. Then it interprets it as… whatever. Is it the letter, phoneme,

        The output is clearly handling it as letters. It hyphenates the letters to highlight them, it mentions “digram” (i.e. a sequence of two graphemes), and so on. And at no moment is it referring to anything that can be understood as associated with sounds, phonemes. And it’s claiming there’s an ⟨r⟩ «in the middle of the “rr” combination».

        font, the programming language R…

        There’s no context whatsoever to justify any of those interpretations.

        since it wasn’t specified, it assumes “whatever, or a mix of”.

        If this was a human being, it would not be an assumption. Assumption is that sort of shit you make up from nowhere; here context dictates the reading of “r” as “the letter ⟨r⟩”.

        However since this is a bot it isn’t even assuming. Just like a boulder doesn’t “assume” you want it to roll down; it simply reacts to an external stimulus.

        It failed at detecting the ambiguity and communicating it spontaneously, but corrected itself once that became part of the conversation.

        There’s no ambiguity in the initial prompt. And no, it did not correct what it said; the last reply is still babble, and you don’t count ⟨rr⟩ in English as a single letter.

        It’s like, in your examples… what do you mean by “by”? “3 by 6” is 36… you meant to “multiply 36”? That’s nonsense… 🤷

        I’d rather not answer this one because, if I did, I’d be pissing on Beehaw’s core values.

        • jarfil@beehaw.orgOP · +1/−1 · 8 days ago

          I’d rather not answer this one because, if I did, I’d be pissing on Beehaw’s core values.

          I feel like you already did, and I won’t be responding in kind. Good day to you.

  • megopie@beehaw.org · +11 · edited · 10 days ago

    I asked it how many X’s there are in the word Bordeaux; it told me there are none.

    I asked it how many times X is pronounced in Bordeaux; it told me the x in Bordeaux isn’t pronounced, with the word ending in an “o” sound.

    I asked it how many “o” there are in Bordeaux; it told me there are no o in Bordeaux.

    So, is it counting the sounds made in the word? Or is it counting the letters? Or is it doing none of the above and just giving a probabilistic output based on an existing corpus of language, without any thought or concepts?
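
    (The letter questions at least have trivially checkable answers, e.g. in Python:)

    ```python
    word = "Bordeaux"
    print(word.count("X"))          # 0 -- no capital X in the word
    print(word.lower().count("x"))  # 1 -- one letter x, silent or not
    print(word.lower().count("o"))  # 1 -- one letter o
    ```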

    • jarfil@beehaw.orgOP · +1 · 8 days ago

      Yes, no, both… and all other interpretations… all at once.

      With any ambiguity in a prompt, it assumes a “blend” of all the possible interpretations, then responds using them all over the place.

      In the case of “Bordeaux”:

      It’s pronounced “bor-DOH”, with the emphasis on the second syllable and a silent “x.”

      So… depending on how you squint: there is no “o”, no “x”, only a “bor” and a “doh”, with a “silent x”, and ending in an “oh like o”.

      Perfectly “logical” 🤷

  • Rhaedas@fedia.io · +10 · 10 days ago

    Oh wow, I didn’t think about how many r sounds there are. But then, if you ask it how many k’s are in knight, it should say none.

  • kbal@fedia.io · +6 · 10 days ago

    It just goes to show that the AI is not yet superhuman. If it were really smart it would know, as humans can tell at a glance, that there are four r’s in strawberry. There’s the first one, the two in the double r combination, and then the rr digram itself which counts as a fourth r.

    • jarfil@beehaw.orgOP · +1 · edited · 8 days ago

      There is a middle ground between “blindly rejecting” and “blindly believing” whatever an AI says.

      LLMs use tokens. The answer is “correct, in its own way”; one just needs to explore why and how much. Turns out, that can also lead to insights.
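
      For example, you can look at what the model actually receives instead of letters; a small sketch assuming a BPE tokenizer such as OpenAI’s cl100k_base via the tiktoken library (the exact split varies by model, so run it rather than trust a comment):

      ```python
      # Sketch: inspect how a BPE tokenizer splits the word the model "sees".
      # Requires `pip install tiktoken`; the split depends on the encoding.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      token_ids = enc.encode("strawberry")
      print([enc.decode([t]) for t in token_ids])  # sub-word pieces, not individual letters
      ```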

      • Vodulas [they/them]@beehaw.org · +2 · 8 days ago

        It is not correct in any way, though. Unless you count a way you gave it to justify its wrong answer, but that is just it being a Yes Man to keep you engaged.

          • Vodulas [they/them]@beehaw.org · +2 · 8 days ago

            It is correct in an “ambiguous multi-dimensional” sense

            That’s a lot of words to say it’s wrong.

            The question is incredibly straightforward, and again the “reason” it gave is one you provided in the clarifying question itself. There is no reasoning going on, because it can’t understand the question (or reason for that matter).

  • Ulrich@feddit.org · +4 · 10 days ago

    You know, if the letter was L and the language Spanish, it’d almost be right…

    • jarfil@beehaw.orgOP · +1 · edited · 8 days ago

      At first I thought it was talking about “rr” as a Spanish digraph. Not sure how far that is from the truth; these models are multilingual and multimodal, after all. My guess is that it’s surfacing the ambiguity of its internal vector for a “token: rr” vs “token: r”, though.

      Could be interesting to dig deeper… but I think I’m fine with this for now. There are other “curious” behaviors of the chatbot that have me more intrigued right now. Like, it self-adapts to any repeated mistakes in the conversation history, but at other times it can come up with surprisingly “complex” status tracking, then present it spontaneously as bullet points with emojis. Not sure what to make of that one yet.