• Akatsuki Levi@lemmy.world · 1 day ago

    I still don’t get it, like, why tf would you use AI for this kind of thing? It can barely write a basic Python script, let alone handle a proper codebase or detect a vulnerability, even the most obvious vulnerability ever.

    • emzili@programming.dev · 22 hours ago

      It’s simple, actually: curl has a bug bounty program where reporting even a minor legitimate vulnerability can land you a minimum of $540.

      • Taleya@aussie.zone · 2 hours ago

        If they ever actually identify one, make a very public post stating that, since it was identified using AI, no bounty will be paid.

      • zygo_histo_morpheus@programming.dev · 12 hours ago

        What are the odds that you’d actually get a bounty out of it? It seems unlikely that an AI would hallucinate a genuinely valid bug.

        Maybe the people doing this are much more optimistic than I am about how useful LLMs are for this, but it’s also possible that there’s some more malicious idea behind it.

    • kadup@lemmy.world · 23 hours ago (edited)

      We’ve already seen several scientific articles get published and later turn out to have been generated via AI.

      If somebody is willing to ruin their academic reputation, something that takes years to build, don’t you think people are also using AI to cheat at job interviews and land high-paying IT jobs?

    • milicent_bystandr@lemm.ee · 24 hours ago

      I think it might be the developers of that AI letting their system file bug reports to train it, seeing what works and what doesn’t (as is the way with training AI), and not caring about the people hurt in the process.