• /home/pineapplelover@lemm.ee
    link
    fedilink
    arrow-up
    3
    ·
    7 hours ago

    Ima be honest, I’m not surprised. Introduce AI into your critical systems, go ahead. Just don’t be surprised when it fucks shit up.

  • orca@orcas.enjoying.yachts
    link
    fedilink
    arrow-up
    29
    ·
    1 day ago

    If working with AI has taught me anything, ask it absolutely NOTHING involving numbers. It’s fucking horrendous. Math, phone numbers, don’t ask it any of that. It’s just advanced autocomplete and it does not understand anything. Just use a search engine, ffs.

    • Mustakrakish@lemmy.world
      link
      fedilink
      arrow-up
      1
      arrow-down
      2
      ·
      7 hours ago

      You’d think it’d be able to do math right, since, ya know, we’ve kinda had calculators working for a long time.

    • jim3692@discuss.online
      link
      fedilink
      arrow-up
      9
      arrow-down
      5
      ·
      edit-2
      21 hours ago

      What models have you tried? I used local Llama 3.1 to help me with university math.

      It seemed capable of solving differential equations and doing Laplace transforms. It made some mistakes during the calculations, like a math professor in a hurry.

      What I found best was getting a solution from Llama and validating each step using WolframAlpha.
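
      That validation step doesn’t strictly need WolframAlpha, either. As a rough sketch (the transform and the numbers below are purely illustrative, not from my actual exam), a claimed Laplace transform can be sanity-checked numerically, since L{f}(s) is just the integral of e^(−st)·f(t) from 0 to ∞:

      ```python
      import math

      def laplace_numeric(f, s, upper=50.0, n=200_000):
          """Crude trapezoidal estimate of the integral of e^(-s*t) * f(t) on [0, upper]."""
          h = upper / n
          total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
          for i in range(1, n):
              t = i * h
              total += math.exp(-s * t) * f(t)
          return total * h

      s = 3.0
      claimed = 1 / (s + 2)  # say the model asserts L{e^(-2t)}(s) = 1/(s + 2)
      estimate = laplace_numeric(lambda t: math.exp(-2 * t), s)
      assert abs(estimate - claimed) < 1e-6  # the claimed step checks out
      ```

      For decaying functions like this, truncating the integral at a large upper limit is fine, because the e^(−st) factor makes the tail negligible.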

      • Chais@sh.itjust.works
        link
        fedilink
        arrow-up
        17
        arrow-down
        6
        ·
        edit-2
        21 hours ago

        Or, and hear me out on this, you could actually learn and understand it yourself! You know, the thing you go to university for? What would you say if, say, it came to light that an engineer had outsourced the static analysis of a bridge to some half-baked autocomplete? I’d lose all trust in that bridge and all respect for that engineer, and I’d hope they’re stripped of their title and held personally responsible.

        These things are currently worse than useless precisely because they are sometimes right: it gives people the wrong impression that you can actually rely on them.

        • Ansis100@lemmy.world
          link
          fedilink
          arrow-up
          7
          arrow-down
          1
          ·
          15 hours ago

          Now, I’m not saying you’re wrong, but having AI explain a complicated subject in simple terms can be one of the best ways to learn. Sometimes the professor is just that bad and you need a helping hand.

          Agreed on the numbers, though. Just use WolframAlpha.

          • pinkapple@lemmy.ml
            link
            fedilink
            arrow-up
            1
            ·
            5 hours ago

            Anyone being patronizing about “not fully learning and understanding” subjects while calling neural networks “autocomplete” is an example of what they preach against. Even if they’re the crappiest AI around (they can be), LLMs have literally nothing to do with n-grams (autocomplete, basically), Markov chains, regex parsers, etc. I guess people just lazily read popular “anti-AI hype” articles and mindlessly parrot them instead of bothering with layered perceptrons, linear algebra, decoders, etc.
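
            For what it’s worth, the “autocomplete” being contrasted here is roughly an n-gram model. A toy bigram sketch (corpus invented for illustration) shows how shallow that mechanism is: it only ever counts adjacent word pairs.

            ```python
            import random
            from collections import defaultdict

            # Toy bigram "autocomplete": predict the next word purely from counts
            # of adjacent word pairs in the training text. No representations,
            # no attention, no learning beyond a lookup table.
            corpus = "the cat sat on the mat the cat ate the fish".split()

            bigrams = defaultdict(list)
            for prev, nxt in zip(corpus, corpus[1:]):
                bigrams[prev].append(nxt)

            def complete(word):
                # Pick any word that ever followed `word`; context is one token deep.
                return random.choice(bigrams[word]) if word in bigrams else None

            print(complete("the"))  # one of: "cat", "mat", "fish"
            ```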

            The technology itself is promising, and it shouldn’t be gatekept by corporations. It’s usually corporate fine-tuning that makes LLMs incredibly crappier than they could be. There’s math-gpt (unrelated to OpenAI afaik; double-check to be sure) and customizable models on Hugging Face besides Wolfram; ideally, a local model is preferable for privacy and customization.

            They’re great at explaining STEM-related concepts. That’s unrelated to using generic models for computation, getting bad results, and dunking on the entire concept, even though there are provers and reasoning models that do great at that task. Khan Academy is also customizing an AI, because they can be great for democratizing education, but it needs work. Too bad they’re using OpenAI models.

            And like, the one doing statics for a few decades now is usually a gentleman called AutoCAD or Revit, so I don’t know, I guess we all need to thank Autodesk for bridges not collapsing. It would be very bizarre if anyone used non-specialized tools like random LLMs, but people thinking that engineers actually do all the math by hand on paper, especially for huge projects, is kinda hilarious. Even more hilarious is that Autodesk has incorporated AI automation into newer versions of AutoCAD, so yeah, not exactly, but they kinda do build bridges lmao.

          • Chais@sh.itjust.works
            link
            fedilink
            arrow-up
            1
            ·
            8 hours ago

            Getting an explanation is one thing; getting a complete solution is another, even if you then verify it with a better-suited tool. It’s still not your solution, and you didn’t fully understand it.

        • jim3692@discuss.online
          link
          fedilink
          arrow-up
          6
          arrow-down
          8
          ·
          21 hours ago

          It was the last remaining exam before my removal from the university. I wish I could have attended the lectures, but, due to work, it was impossible. Also, my degree is not fully related to my field of work: I work as a software developer, and my degree is in electronics engineering. I just need a degree to get promoted.

      • Squizzy@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        edit-2
        20 hours ago

        Copilot and ChatGPT suuuuck at basic maths. I was doing coupon discount shit, and it failed every one of them. It presented the right formula sometimes but still fucked up really simple stuff.

        I asked Copilot to reference an old sheet, take column A, find its percentage completion in column B, and add ten percent to it in the new sheet. I ended up with everything showing 6000% completion.

        Copilot is integrated into Excel, and it’s woeful.
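
        For comparison, the same transformation is deterministic and trivial outside of Copilot. A minimal sketch (the column semantics are assumed from my description above, and the sample rows are made up):

        ```python
        # Assumed reading of the task: column A = total, column B = completed,
        # percentage completion = B / A * 100, then add ten percentage points.
        rows = [
            {"A": 200, "B": 50},
            {"A": 80,  "B": 80},
            {"A": 40,  "B": 10},
        ]

        for row in rows:
            pct = row["B"] / row["A"] * 100
            row["pct_plus_10"] = pct + 10

        print([row["pct_plus_10"] for row in rows])  # [35.0, 110.0, 35.0]
        ```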

    • Flames5123@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      2
      ·
      1 day ago

      I asked my work’s AI to just give me a comma-separated list of the strings that I gave it, and it returned a list of strings in which every entry was “CREDIT_DEBIT_CARD_NUMBER”. The numbers were 12 digits, not 16. I asked three times for the raw numbers and had to say exactly “these are 12 digits long not 16. Stop obfuscating it” before it gave me the right thing.

      I’ve even had it be wrong about simple math. It’s just awful.
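
      For the record, joining values into a comma-separated list is a deterministic one-liner that never needed a model in the loop. A minimal sketch (the 12-digit sample values are made up):

      ```python
      # Hypothetical 12-digit identifiers, standing in for the real data.
      numbers = ["123456789012", "210987654321", "112233445566"]

      # The entire requested task:
      csv_line = ", ".join(numbers)
      print(csv_line)  # 123456789012, 210987654321, 112233445566

      # Sanity check: 12 digits, so these cannot be 16-digit card numbers.
      assert all(len(n) == 12 for n in numbers)
      ```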

      • catloaf@lemm.ee
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        1
        ·
        edit-2
        24 hours ago

        Yeah, because it’s a text generator. You’re using the wrong tool for the job.

        • Flames5123@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          21 hours ago

          Exactly. But they tout this as “AI” instead of an LLM. I need to improve my kinda ok regex skills. They’re already better than almost anyone else on my team, but I can improve them.
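
          On the regex front, the card-number mix-up above is exactly the kind of thing a few lines of regex settle deterministically. An illustrative sketch (pattern written for this example, not taken from any product):

          ```python
          import re

          # A 16-digit card-like number, optionally grouped in fours by spaces
          # or hyphens. A bare 12-digit string will not match.
          card_re = re.compile(r"^(?:\d{4}[ -]?){3}\d{4}$")

          assert card_re.match("4111111111111111")      # 16 digits: matches
          assert card_re.match("4111-1111-1111-1111")   # grouped form: matches
          assert not card_re.match("123456789012")      # 12 digits: no match
          ```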

      • orca@orcas.enjoying.yachts
        link
        fedilink
        arrow-up
        8
        ·
        1 day ago

        It’s really crappy at trying to address its own mistakes. I find that it will get into an infinite error loop where it hops between 2-4 answers, none of which are correct. Sometimes it helps to explicitly instruct it to format the data provided and not edit it in any way, but I still get paranoid.

      • kameecoding@lemmy.world
        link
        fedilink
        arrow-up
        4
        arrow-down
        2
        ·
        edit-2
        18 hours ago

        Either you are bad at ChatGPT, or I am a machine whisperer, but I have a hard time believing Copilot couldn’t handle that. I regularly have it rewrite SQL code, reformat Java code, etc.

  • Grimy@lemmy.world
    link
    fedilink
    arrow-up
    74
    arrow-down
    2
    ·
    edit-2
    1 day ago

    which it turned out belonged to James […] whose number appears on his company website.

    When Smethurst challenged that, it admitted: “You’re right,” and said it may have been “mistakenly pulled from a database”.

    but the overreach of taking an incorrect number from some database it has access to is particularly worrying.

    I really love this new style of journalism where they bash the AI for hallucinating and making clear mistakes, and then take anything it says about itself at face value.

    It’s a number on a public website. The guy googled it right after and found it. It’s simply in the training data; there is nothing “terrifying” about this imo.

    • pinball_wizard@lemmy.zip
      link
      fedilink
      arrow-up
      5
      ·
      edit-2
      16 hours ago

      It’s a number on a public website. The guy googled it right after and found it. It’s simply in the training data; there is nothing “terrifying” about this imo.

      Right. There’s nothing terrifying about the technology.

      What is terrifying is how people treat it.

      LLMs will cough up anything they have learned to any user. But they do it while successfully giving all the social cues of an intelligent human who knows how to keep a secret.

      This often creates trust for the computer that it doesn’t deserve yet.

      Examples like this story, which show how obviously misplaced that trust is, can be terrifying to people who fell for modern LLMs’ intelligence signaling.

      Today, most chat bots don’t do any permanent learning during chat sessions, but that is gradually changing. This trend should be particularly terrifying to anyone who previously shared (or keeps habitually sharing) things with a chatbot that they probably shouldn’t.

    • davel [he/him]@lemmy.ml
      link
      fedilink
      English
      arrow-up
      29
      ·
      1 day ago

      It’s as if some people will believe any grammatically & semantically intelligible text put in front of their faces.

        • davel [he/him]@lemmy.ml
          link
          fedilink
          English
          arrow-up
          8
          arrow-down
          2
          ·
          edit-2
          1 day ago

          Some eat up pro-AI drivel, others anti-AI drivel. Tech bubbles are a wild ride. At least it’s not a bullshit bubble like crypto or web3/NFT/metaverse.

    • Dran@lemmy.world
      cake
      link
      fedilink
      arrow-up
      9
      arrow-down
      1
      ·
      edit-2
      1 day ago

      Also, the first five digits were the same between the two numbers. Meta is guilty, but they’re guilty of grifting, not of giving a rogue AI access to some shadow database of personal details… yet? Lol

  • letrasetOP
    link
    fedilink
    arrow-up
    48
    ·
    1 day ago

    Waiting on the platform for a morning train that was nowhere to be seen, he asked Meta’s WhatsApp AI assistant for a contact number for TransPennine Express. The chatbot confidently sent him a mobile phone number for customer services, but it turned out to be the private number of a completely unconnected WhatsApp user 170 miles away in Oxfordshire.

    Ah yes, what else to expect from »the most intelligent AI assistant that you can freely use«.