• simple@lemm.ee · 2 months ago

    Outside of benchmarks it’s really not as big of a deal as OpenAI wants you to think it is. In most cases it’s slightly better than Claude, except it uses 50x the tokens repeating info to itself and is way slower. There are a lot of people who tried o1 online and are posting screenshots of it making basic mistakes or gaslighting itself in its chain of thought. Not to mention you only get 30 messages PER WEEK since it’s such a waste of energy.

    It’s a desperate attempt by OpenAI to stay relevant now that competitors and even free models are catching up.

    • Scrubbles@poptalk.scrubbles.tech · 2 months ago

      I’ve learned all AI marketing is greatly embellished, focusing only on best cases. Sure, it can solve a coding problem, if you work on the prompt for an hour, give excruciating detail, and run it a few dozen times.

  • just_another_person@lemmy.world · 2 months ago

    Eh. Even if it is what they say it is, it’s still not going to replace a solid engineer. It’s also still not capable of developing novel reasoning and logic solutions, so the threat of not being able to own the IP is still a thing. There’s also still this bullshit of OpenAI hinting that they want to retain rights to created works through certain licenses somehow?

    Fuck these assholes up and down the aisle.

    • Lauchs@lemmy.world · 2 months ago

      Why?

      A half decade ago we would’ve laughed at a machine passing the Turing test…

        • Lauchs@lemmy.world · 2 months ago

          Why could it not replace an engineer?

          The previous limits of technology exploded less than half a decade ago, seems wild to assume that’s the end of that kind of growth.

          • vinnymac@lemmy.world · 2 months ago

            Eventually, we might get there, sure. But I don’t see any reason to believe this is it, and I use AI to assist in my programming every day.

            If you instead said that some engineers will be replaced by AI, I’d definitely agree, and without a doubt they’ll try, repeatedly.

          • lemmydividebyzero@reddthat.com · 2 months ago

            In its current state?

            Writing code is a small part of being a software engineer. Unlike coding tasks that come with very detailed instructions about the input, constraints, and output (even with examples), real tasks are usually missing lots of information you have to find out from different people, and there is a huge codebase that can’t be transferred to the model.

            If it can fully replace a software developer, it can replace almost anyone’s job.

          • Floey@lemm.ee · 2 months ago

            Technology is always progressing, but nobody can say what the next big thing will be; if you really were that prescient, you could make loads of cash predicting things. Companies are hungry for the next big thing, though, and will do everything to convince us that they have it. AI is an enticing grift because it’s so misunderstood. The next big thing wasn’t AR or VR or the metaverse, and I don’t think it’s going to be generative AI either; it’s already plateauing and it isn’t profitable, even with billions of dollars behind it.

          • Valmond@lemmy.world · 2 months ago

            Sure, make it drive a car first, a thing 99% of the population can do, before attempting coding ^^

            Coding is actually quite complicated, especially in old existing codebases. Add to that that they train the models on any crap code they can find…

            • stephen01king@lemmy.zip · 2 months ago

              No way are you going to convince me 99% of the population can drive. Go get more accurate statistics before trying to use them to dismiss something.

              • Valmond@lemmy.world · 2 months ago

                It’s an example, and most adults (if I have to spell it out) can drive, or can learn how to.

                Coding not so much.

                So AI that can’t even drive can suddenly code? I don’t think so.

                Better like that?

                • stephen01king@lemmy.zip · 2 months ago

                  Most adults could also learn to code, if they actually tried. If you’re gonna add the argument that most people can’t code proficiently, well, most people can’t drive proficiently, either.

                  Also, driving and coding are completely different sets of skills, so it’s kinda worthless to compare them. Some people can code just fine but never learned how to drive because they didn’t need to, so treating driving as a prerequisite skill for coding doesn’t make sense.