• Rhaedas@kbin.social · 18 points · 1 year ago

    Just wait until the captchas get too hard for the humans, but the AI can figure them out. I’ve seen some real interesting ones lately.

    • OpenStars@kbin.social · 24 points · 1 year ago

      There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.

    • Biran@lemmy.world · 13 points · 1 year ago

      I’ve seen many where the captchas are generated by an AI…
      It’s essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?

        • Bizarroland@kbin.social · 1 point · 1 year ago

          So what you’re saying is that we should train an AI to detect AIs, so that only the human beings could survive on the site. The problem is how do you train the AI? It would need some sort of meta interface where it could analyze the IP address of every single person that posts and the time frames in which they post.

          It would make some sense that a large portion of bots would be run from relatively similar locations, IP-wise, since it’s a lot easier to run a large bot farm from a data center than from 1,000 different people’s houses.

          You could probably filter out the most egregious bot farms by doing that, but some would still slip through.
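
A toy sketch of that subnet-filtering idea, assuming posts come tagged with source IPs; the post log, function name, and the /24-with-3-accounts threshold are all invented for illustration:

```python
from ipaddress import ip_network

# Hypothetical post log of (username, ip) pairs -- purely illustrative data.
posts = [
    ("bot_01", "203.0.113.5"), ("bot_02", "203.0.113.17"),
    ("bot_03", "203.0.113.201"), ("bot_04", "203.0.113.44"),
    ("alice", "198.51.100.7"), ("bob", "192.0.2.9"),
]

def flag_dense_subnets(posts, prefix=24, threshold=3):
    """Group accounts by /prefix subnet and flag subnets hosting an
    unusually high number of distinct accounts (a data-center smell)."""
    members = {}
    for user, ip in posts:
        net = ip_network(f"{ip}/{prefix}", strict=False)
        members.setdefault(net, set()).add(user)
    return {str(net): sorted(users)
            for net, users in members.items() if len(users) >= threshold}

print(flag_dense_subnets(posts))
```

A real deployment would group by ASN or hosting provider rather than a fixed prefix, which is exactly why this only catches the egregious farms.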

          After that you would need to train it on heuristics so it could identify the kinds of conversations these bots would have with each other, not knowing that the others are bots, knowing that each of them is using LLaMA or GPT and the kinds of conversations that would start.
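
One of the simplest such heuristics is flagging accounts that post near-identical boilerplate at each other. A minimal sketch using character-trigram overlap; the sample comments and threshold-free comparison are invented for illustration:

```python
def ngram_set(text, n=3):
    """Character trigrams of a lowercased text."""
    t = text.lower()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def similarity(a, b):
    """Jaccard overlap between the trigram sets of two texts."""
    sa, sb = ngram_set(a), ngram_set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical exchange: two accounts emitting near-duplicate LLM-style
# boilerplate score high against each other; a human reply scores low.
bot_a = "As an avid enthusiast, I truly appreciate this insightful post!"
bot_b = "As an avid enthusiast, I truly appreciate this insightful reply!"
human = "lol yeah the squiggly letter ones are the worst"

print(similarity(bot_a, bot_b))  # high: near-duplicate boilerplate
print(similarity(bot_a, human))  # low: unrelated human chatter
```

Real detectors use far richer signals (timing, perplexity, embedding similarity), but the shape of the idea is the same.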

          I guess the next step would be giving people an opportunity to prove that they’re not bots if they ended up accidentally saying something the way a bot would say it, but then you get into the whole “you need to either pay for access or provide government ID” issue, and that’s its own can of worms.

      • Unaware7013@kbin.social · 1 point · 1 year ago

        Adversarial training is pretty much the MO for a lot of the advanced machine-learning algorithms you’d see for this sort of task. It helps the model learn, and attacking the algorithm yourself helps you protect against a real malicious actor attacking it.
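
A minimal sketch of that adversarial loop, using a toy perceptron rather than a real detector; the data, the sign-based evasion step, and every name here are invented for illustration:

```python
import random

random.seed(0)

# Toy data: "human" accounts cluster near (0, 0), "bots" near (3, 3).
def sample(label, n=50):
    c = 0.0 if label == 0 else 3.0
    return [((c + random.gauss(0, 0.5), c + random.gauss(0, 0.5)), label)
            for _ in range(n)]

data = sample(0) + sample(1)

def predict(w, b, x):
    """Linear classifier: 1 = bot, 0 = human."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(data, epochs=20, lr=0.1):
    """Plain perceptron updates over the dataset."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, b, x)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def attack(w, x, eps=0.5):
    """Evasion step: nudge a bot point against the weight vector,
    pushing it toward the 'human' side of the boundary."""
    return (x[0] - eps * (1 if w[0] > 0 else -1),
            x[1] - eps * (1 if w[1] > 0 else -1))

# Round 1: train, then attack the detector's weights with the bot points.
w, b = train(data)
attacked = [(attack(w, x), 1) for x, y in data if y == 1]

# Round 2: fold the adversarial examples back in and retrain.
w2, b2 = train(data + attacked)
acc = sum(predict(w2, b2, x) == y
          for x, y in data + attacked) / len(data + attacked)
print(round(acc, 2))
```

Production systems iterate those two rounds continuously, with a learned attacker instead of a fixed perturbation, which is the arms race the thread is describing.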

    • dani@lemmy.world · 6 points · 1 year ago

      The captchas that involve identifying letters underneath squiggles I already find nearly impossible. Uppercase? Lowercase? J j i I l L g 9 … and so on…