What are the odds that you're actually going to get a bounty out of it? It seems unlikely that an AI would hallucinate a genuinely valid bug.
Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am, but it's also possible there's some more malicious idea behind it.