Constantly, unfortunately.
I work in Cyber Security and you can’t swing a Cat-5 'o Nine Tails without hitting some vendor talking up the “AI tools” in their products. Some of them are kinda OK. Mostly, these are language models surfacing relevant documentation or code snippets, stuff that was previously found with a bit of googling.

The problem is that AI has also been stuffed into network and system analysis to look for anomalous activity, and every single one of those models is complete shit. They do find anomalies, but mostly because they alert on so much stuff, generating so many false positives, that they get one right by blind chance.

If you want to make money on a model, sell it to a security vendor. Those of us who have to deal with the tools will hate you, but CEOs and CISOs are eating that shit up right now. If you want to make something actually useful, make a model that identifies and tunes out false positives from other models.
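To put rough numbers on why that happens, here's a quick base-rate sketch. The figures are mine and purely illustrative, not from any specific product: even a detector with decent hit and false-alarm rates drowns in noise when real incidents are rare.

```python
def alert_precision(prevalence: float, tpr: float, fpr: float) -> float:
    """Fraction of alerts that are real incidents, via Bayes' rule."""
    true_alerts = prevalence * tpr           # real incidents that get flagged
    false_alerts = (1 - prevalence) * fpr    # benign events that get flagged
    return true_alerts / (true_alerts + false_alerts)

# Hypothetical numbers: 1 in 10,000 events is malicious, the model
# catches 90% of those, and false-alarms on only 1% of benign traffic.
print(alert_precision(prevalence=1e-4, tpr=0.90, fpr=0.01))  # ~0.009
```

At those (generous) rates, under 1% of alerts are real. That's the blind-chance effect: the analyst wades through a hundred-odd false positives for every true one.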