LLMs certainly hold potential, but as we’ve seen time and time again in tech over the last fifteen years, the hype and greed of unethical pitchmen have gotten way out ahead of the actual locomotive. A lot of people in “tech” are interested in money, not tech. And they’re increasingly making decisions based on how to drum up investment bucks, get press attention, and bump their stock price, not on actually improving anything.

The result has been a ridiculous parade of rushed “AI” implementations that are focused more on cutting corners, undermining labor, or drumming up sexy headlines than on improving lives. The resulting hype cycle isn’t just building unrealistic expectations and tarnishing brands; it’s often distracting many tech companies from foundational reality and from more practical, meaningful ideas.

  • bstix · 5 months ago

    Those mistakes would be easily solved by something that doesn’t even need to think. Just add a filter of acceptable orders (something like the sketch below), or hire a low-wage human who does not give a shit about the customers’ special orders.

    In general, AI really needs to set some boundaries. “No” is a perfectly good answer, but it doesn’t ever do that, does it?
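
    For illustration, a minimal version of that filter might look like this (the menu, quantity cap, and order format are all invented):

    ```python
    # Minimal sketch of a whitelist filter between the model and the till.
    # The menu, quantity cap, and order format are invented for illustration.
    VALID_ITEMS = {"burger", "fries", "ice cream", "soda"}
    MAX_QTY = 10  # nobody legitimately orders 200 of anything

    def validate_order(order):
        """Return a list of problems; an empty list means the order passes."""
        problems = []
        for line in order:
            item, qty = line.get("item"), line.get("qty", 0)
            if item not in VALID_ITEMS:
                problems.append(f"unknown item: {item!r}")
            elif not 1 <= qty <= MAX_QTY:
                problems.append(f"suspicious quantity of {item!r}: {qty}")
        return problems

    # Anything that fails the check gets bounced to a human, not the kitchen.
    issues = validate_order([{"item": "burger", "qty": 200}])
    if issues:
        print("escalate to staff:", issues)
    ```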

    • Lvxferre@mander.xyz · 5 months ago

      > Those mistakes would be easily solved by something that doesn’t even need to think. Just add a filter of acceptable orders, or hire a low-wage human who does not give a shit about the customers’ special orders.

      That wouldn’t address the bulk of the issue, only the most egregious examples of it.

      For every funny output like “I asked for 1 ice cream, it’s giving me 200 burgers”, there are likely tens, hundreds, or thousands of outputs like “I asked for 1 ice cream, it’s giving me 1 burger” that sound sensible but are still the same problem.

      It’s simply the wrong tool for the job. Using LLMs here is like hammering screws, or screwdriving nails. LLMs are a decent tool for things that you can supervise (not the case here), or where a large number of false positives and negatives is not a big deal (not the case here either).
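
      Back-of-envelope, with invented numbers, just to show why the quiet failures are the expensive ones:

      ```python
      # Back-of-envelope with invented numbers: quiet errors dominate the cost.
      daily_orders = 1_000_000   # hypothetical kiosk orders per day, chain-wide
      silent_error_rate = 0.01   # plausible-looking but wrong orders
      cost_per_fix = 5.00        # assumed cost of remaking/refunding one order

      wrong = daily_orders * silent_error_rate
      print(f"quietly wrong orders per day: {wrong:,.0f}")  # 10,000
      print(f"cost per day: ${wrong * cost_per_fix:,.0f}")  # $50,000
      ```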

    • chrash0@lemmy.world · 5 months ago

      sure it does. it won’t tell you how to build a bomb or demonstrate explicit biases that have been fine-tuned out of it. the problem is McDonald’s isn’t an AI company and is probably just using ChatGPT on the backend, and GPT doesn’t give a shit about bacon ice cream out of the box.
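
      for illustration, the bare-minimum guardrail looks something like this (assumes the official openai python client; the prompt wording and menu are made up):

      ```python
      # Rough sketch of the bare-minimum guardrail a ChatGPT-backed kiosk needs.
      # Assumes the official openai Python client; prompt and menu are invented.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      SYSTEM_PROMPT = (
          "You take drive-thru orders. The only items that exist are: "
          "burger, fries, ice cream, soda. If the customer asks for anything "
          "else (e.g. bacon on ice cream), refuse with a plain 'No'. "
          "Never accept more than 10 of any item."
      )

      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system", "content": SYSTEM_PROMPT},
              {"role": "user", "content": "Add bacon to my ice cream."},
          ],
      )
      print(resp.choices[0].message.content)  # ideally a polite "No"
      ```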

      • dch82@lemmy.zip · 5 months ago (edited)

        They really should have used a genetic algorithm to optimise their menu items for maximum ~~customer satisfaction~~ profits instead of using an LLM!

        The execs do know algorithms other than LLMs exist, right?
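
        Just to illustrate the point, a toy version (menu, profits, and the bloat penalty are all invented):

        ```python
        # Toy genetic algorithm: evolve a menu subset that maximises a made-up
        # profit score, lightly penalised for menu bloat. Everything invented.
        import random

        MENU = ["burger", "fries", "ice cream", "soda", "salad", "nuggets"]
        PROFIT = {"burger": 2.0, "fries": 1.5, "ice cream": 1.0,
                  "soda": 2.5, "salad": 0.5, "nuggets": 1.8}

        def fitness(combo):
            return sum(PROFIT[item] for item in combo) - 0.4 * len(combo)

        def mutate(combo):
            # Toggle one random item in or out, keeping the menu non-empty.
            toggled = set(combo) ^ {random.choice(MENU)}
            return frozenset(toggled) if toggled else combo

        population = [frozenset(random.sample(MENU, 3)) for _ in range(20)]
        for _ in range(100):  # generations: keep the fittest half, refill
            population.sort(key=fitness, reverse=True)
            survivors = population[:10]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(10)]

        print("best menu:", sorted(max(population, key=fitness)))
        ```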

        EDIT: prob replied to wrong thread

      • bstix · 5 months ago (edited)

        So, what happens if you order a bomb at the McD?