As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • Orphie Baby@lemmy.world · 1 year ago

    Good. It’s not even AI. That word is just used because ignorant people eat it up.

    • FaceDeer@kbin.social · 1 year ago

      It is indeed AI. Artificial intelligence is a field of study that encompasses machine learning, along with a wide variety of other things.

      Ignorant people get upset about that word being used because all they know about “AI” is from sci-fi shows and movies.

      • Orphie Baby@lemmy.world · 1 year ago

        Except that for all the intents and purposes people keep talking about, it simply isn’t. It’s not about technicalities, it’s about how most people are freaking confused. If most people are freaking confused, then by god do we need to re-categorize and come up with some new words.

        • FaceDeer@kbin.social · 1 year ago

          “Artificial intelligence” is well-established technical jargon that’s been in use by researchers for decades. There are scientific journals named “Artificial Intelligence” that are older than I am.

          If the general public is so confused they can come up with their own new name for it. Call them HALs or Skynets or whatever, and then they can rightly say “ChatGPT is not a Skynet” and maybe it’ll calm them down a little. Changing the name of the whole field of study is just not in the cards at this point.

          • pexavc@lemmy.world · 1 year ago

            Never really understood the gatekeeping around the phrase “AI”. At the end of the day, the field itself is difficult for the general public to understand. So shouldn’t we actually be happy that it’s a mainstream term? That it’s educating people on concepts they would otherwise ignore?

          • Orphie Baby@lemmy.world · 1 year ago

            If you haven’t noticed, the people we’re arguing with, including the pope and James Cameron, are people who think this generative pseudo-AI and a Terminator are the same thing. But they’re not even remotely similar, or even remotely similar in capability. That’s the problem. If you want to call them both “AI”, that’s technically semantics. But as far as pragmatics goes, generative AI is not intelligent in any capacity; and calling it “AI” is one of the most confusion-causing things we’ve done in the last few decades, and it can eff off.

            • FaceDeer@kbin.social · 1 year ago

              The researchers who called it AI were not the ones who are the source of the confusion. They’ve been using that term for this kind of thing for more than half a century.

              I think what’s happening here is that people are panicking, realizing that this new innovation is a threat to their jobs and to the things they had previously been told were supposed to be a source of unique human pride. They’ve been told their whole lives that machines can’t replace that special spark of human creativity, or empathy, or whatever else they’ve convinced themselves is what makes them indispensable. So they’re reduced to arguing that it’s just a “stochastic parrot”, it’s not “intelligent”, not really. It’s just mimicking intelligence somehow.

              Frankly, it doesn’t matter what they call it. If they want to call it a “stochastic parrot” that’s just mindlessly predicting words, that’s probably going to make them feel even worse when that mindless stochastic parrot is doing their job or has helped put out the most popular music or create the most popular TV show in a few years. But in the meantime it’s just kind of annoying how people are demanding that we stop using the term “artificial intelligence” for something that has been called that for decades by the people who actually create these things.

              Rather than give in to the ignorant panic-mongers, I think I’d rather push back a bit. Skynet is a kind of artificial intelligence. Not all artificial intelligences are skynets. It should be a simple concept to grasp.

              • Orphie Baby@lemmy.world · 1 year ago

                You almost had a good argument until you started trying to tell us that it’s not just a parrot. It absolutely is a parrot. In order to have creativity, it needs to have knowledge. Not sapience, not consciousness, not even “intelligence” as we know it— just knowledge. But it doesn’t know anything. If it did, it wouldn’t put 7 fingers on a damn character. It doesn’t know that it’s looking at and creating fingers, they’re just fucking pixels to it. It saw pixel patterns, it created pixel patterns. It doesn’t know context to know when the patterns don’t add up. You have to understand this.

                So in the end, it turns out that if you draw something unique and purposeful, with unique context and meaning— and that is preeeetty easy— then you’ll still have a drawing job. If you’re drawing the same thing everyone else already did a million times, AI may be able to do that. If it can figure out how to not add 7 fingers and three feet.

                • FaceDeer@kbin.social · 1 year ago

                  As I said, call it a parrot if you want, denigrate its capabilities, really lean in to how dumb and mindless you think it is. That will just make things worse when it’s doing a better job than the humans who previously ran that call center you’re talking to for assistance with whatever, or when it’s got whatever sort of co-writer byline equivalent the studios end up developing to label AI participation on your favourite new TV show.

                  How good are you at drawing hands? Hands are hard to draw, you know. And the latest AIs are actually getting pretty good at them.

                  • ZagTheRaccoon@reddthat.com · 1 year ago

                    It is accurate to call it a parrot in the context of it essentially being used as an ambiguated plagiarism machine to avoid paying workers.

                    Yes, it is capable of that. Yes, that word means something else in the actual field. But you need to understand that people are talking about this technology in terms of its political relationship with power, and pretending that prioritizing that form of analysis is just people being uninformed about the REAL technology, and that that’s their fault, is itself missing the point. This isn’t about pride and hurt feelings that a robot is doing something humans do. It’s about the fact that it’s a tool to undermine the entire value of the creative sector.

                    And these big companies aren’t calling it AI because it’s an accurate descriptor; it could also be called a generative language model. They are calling it that because the common misunderstanding of the term is valuable to hype culture and VC investment. Like it or not, the average understanding of the term carries different weight than it does inside the field. And it turns the conversation into a pretty stupid one about sentience and humanity, as well as legitimizing the practice by arguing that it’s fundamentally beyond the reach of the regulations we already have on plagiarism, which it really isn’t.

                    People who are trying to rebrand it aren’t doing so because they misunderstand the technical usage of the word AI. They are arguing that the terminology plays into the goals of our (hopefully shared) political enemies, who are trying to bulldoze through a technology they think should get special privileges by implying that the technology is something it isn’t. This is about optics and social power, and the term “AI” is contributing to further public misunderstanding of how it actually works, which is something we should oppose.

          • shy@reddthat.com · 1 year ago

            We should call them LLMAIs (la-mize, like llamas) to really specify what they are.

            And to their point, I think the “intelligence” in the modern wave of AI is severely lacking. There is no reasoning or learning, just a brute-force fuzzy training pass that remains fixed at a specific point in time and only approximates what an intelligent actor would respond with by referencing massive amounts of “correct response” data. I’ve heard AGI bandied about as the thing people really meant when they said AI a few years ago, but I’m kind of hoping the AI term stops being watered down with this nonsense. ML is ML; it’s wrong to say that it’s a subset of AI when AI has its own separate connotations.
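
            To make the “fixed at a specific point in time” part concrete, here’s a minimal sketch (Python with the Hugging Face transformers library, using “gpt2” purely as a stand-in checkpoint) of what generation actually is under the hood: frozen weights repeatedly predicting one next token.

                # Minimal sketch: generation is next-token prediction with frozen weights.
                # "gpt2" is only an example checkpoint; any causal LM behaves the same way.
                import torch
                from transformers import AutoModelForCausalLM, AutoTokenizer

                tokenizer = AutoTokenizer.from_pretrained("gpt2")
                model = AutoModelForCausalLM.from_pretrained("gpt2")
                model.eval()  # weights stay exactly as the training pass left them

                ids = tokenizer("The AI market is", return_tensors="pt").input_ids
                with torch.no_grad():  # nothing is learned at inference time
                    for _ in range(20):
                        logits = model(ids).logits[:, -1, :]           # scores for the next token only
                        next_id = logits.argmax(dim=-1, keepdim=True)  # greedy pick of one token
                        ids = torch.cat([ids, next_id], dim=-1)
                print(tokenizer.decode(ids[0]))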

            • FaceDeer@kbin.social · 1 year ago

              LLaMA models are already a common type of large language model.

              but I’m kind of hoping the AI term stops being watered down with this nonsense.

              I’m hoping people will stop mistaking AI for AGI and quit complaining about how it’s not doing what they imagined that they were promised it would do. I also want a pony.

              • shy@reddthat.com · 1 year ago

                You appear to have strong opinions on this, so it’s probably not worth arguing further, but I disagree with you completely. If people are mistaking it, that’s because the term is being used improperly, as the very language of the two words does not apply. AGI didn’t even gain traction as a term until recently, when people who were actually working on strong AI had to figure out a way to keep communicating about what they were doing, because AI had lost all of its original meaning.

                Also, LLaMA is one of the LLMAIs, not a “common type” of LLM. Pretty much confirms you don’t know what you’re talking about here…

                • FaceDeer@kbin.social · 1 year ago

                  Also, LLaMA is one of the LLMAIs, not a “common type” of LLM. Pretty much confirms you don’t know what you’re talking about here…

                  Take a look around Hugging Face; LLaMA models are everywhere. They’re a very popular base model because they’re small and have open licenses.
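
                  For example, here’s a minimal sketch of pulling one down with the transformers library (the repo id below is only an illustration; LLaMA checkpoints are gated, so substitute whichever one you actually have access to):

                      # Example only: load a LLaMA-family base model from Hugging Face.
                      from transformers import AutoModelForCausalLM, AutoTokenizer

                      repo = "meta-llama/Llama-2-7b-hf"  # illustrative repo id, gated behind a license
                      tokenizer = AutoTokenizer.from_pretrained(repo)
                      model = AutoModelForCausalLM.from_pretrained(repo)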

                  You’re complaining about ambiguous terminology, and your proposal is to use LLMAIs (pronounce like llamas) as the general term for the thing that LLaMAs (pronounced llamas) are? That’s not particularly useful.

        • Prager_U@lemmy.world · 1 year ago

          The real problem is folks who know nothing about it weighing in like they’re the world’s foremost authority. You can arbitrarily shuffle around definitions and call it “Poo Poo Head Intelligence” if you really want, but it won’t stop ignorance and hype reigning supreme.

          To me, it’s hard to see what kowtowing to ignorance by “rebranding” this academic field would achieve. Throwing your hands up and saying “fuck it, the average Joe will always just find this term too misleading, we must use another” seems defeatist and even patronizing. It seems like it would be better to try to ensure that half-assed science journalism and science “popularizers” actually do their jobs.

    • Not_Alec_Baldwin@lemmy.world · 1 year ago

      I’ve started going down this rabbit hole. The takeaway is that if we define intelligence as “ability to solve problems”, we’ve already created artificial intelligence. It’s not flawless, but it’s remarkable.

      There’s the concept of Artificial General Intelligence (AGI), or artificial consciousness, which people are somewhat obsessed with: the idea that we’ll create an artificial mind that thinks the way a human mind does.

      But that’s not really how we do things. Think about how we walk, and then look at a bicycle. A car. A train. A plane. The things we make look and work nothing like we do, and they do the things we do significantly better than we do them.

      I expect AI to be a very similar monster.

      If you’re curious about this kind of conversation I’d highly recommend looking for books or podcasts by Joscha Bach, he did 3 amazing episodes with Lex.

      • Orphie Baby@lemmy.world · 1 year ago

        Current “AI” doesn’t solve problems. It doesn’t understand context. It can’t see fingers and say “those are fingers, make sure there’s only five”. It can’t tell the difference between a truth and a lie. It can’t say “well, that can’t be right!” It just regurgitates an amalgamation of things humans have shown it or said, with zero understanding. “Consciousness” and certainly “sapience” aren’t really relevant factors here.

        • magic_lobster_party@kbin.social · 1 year ago

          You’re confusing AI with AGI. AGI is the ultimate goal of AI research. AI are all the steps along the way. Step by step, AI researchers figure out how to make computers replicate human capabilities. AGI is when we have an AI that has basically replicated all human capabilities. That’s when it’s no longer bounded by a particular problem.

          You can use the more specific terms “weak AI” or “narrow AI” if you prefer.

          Generative AI is just another step along the way, just like how the emergence of deep learning was one step some years ago. It can clearly produce stuff that previously only humans could make, which in this case is convincing text and pictures from arbitrary prompts. It’s accurate to call it AI (or weak AI).

          • Orphie Baby@lemmy.world · 1 year ago

            Yeah, well, “AGI” is not the end result of this generative crap. You’re gonna have to start over with something different one way or another. This simply is not the way.

          • Orphie Baby@lemmy.world · 1 year ago

            No? There’s a whole lot more to being human than being able to separate one object from another, recognize it, and say “my database says there should only be two of these in this context”. Current “AI” can’t even do that much, especially not with art.

            Do you know what “sapience” means, by the way?