• MonkderZweite@feddit.ch · +131 / −1 · 9 months ago

    dangerous information

    What’s that?

    and offer criminal advice, such as a recipe for napalm

    Is a napalm recipe forbidden by law? Don’t call things criminal at random.

    Am I the only one worried about freedom of information?

      • whoisearth@lemmy.ca · +25 / −2 · 9 months ago

        Teenage years were so much fun: phone phreaking, making napalm and tennis ball bombs lol

      • CurlyMoustache@lemmy.world · +12 / −2 · 9 months ago

        I had it. I printed it out on a dot matrix printer. It took hours, and my dad found it when it was halfway done. He got angry, pulled the cord and burned all of the paper.

    • Hamartiogonic@sopuli.xyz · +31 · 9 months ago

      Better not look it up on Wikipedia. That place has all sorts of things, from black powder to nitroglycerin. Who knows, you could become a chemist if you read too much Wikipedia.

      • SitD@feddit.de · +12 · 9 months ago

        oh no, you shouldn’t know that. back to your usual diet of influencers, and please also vote for parties that open up your browsing history to a selection of network companies 😳

    • Nine@lemmy.world · +6 · 9 months ago

      Info hazards are going to become more commonplace with this kind of technology. At the core of the problem is the ease of access to dangerous information. For example, a lot of chatbots will confidently get things wrong. Combine that with easy directions to make something like napalm or meth, and we get dangerous things that could be made incorrectly. (Granted, napalm or meth isn’t that hard to make.)

      As to what makes it dangerous information: it’s unearned. A chemistry student can make drugs, bombs, etc., but they learn/earn that information (and ideally the discipline) to use it. Kind of like how in the US we are having more and more mass shootings due to ease of access to firearms. Restrictions on information or firearms aren’t going to solve the problems that cause them, but they do make things (a little) harder.

      At least that’s my understanding of it.

      • MonkderZweite@feddit.ch · +3 · 9 months ago

        I don’t exactly agree with the “earned” part, but I guess you have a point about the missing “how to handle it safely” bit.

        • Nine@lemmy.world · +2 · 9 months ago

          By “earned” I mean it takes some effort to gain that knowledge, for example some kind of training, studying, practice, etc. It’s typically during that process that you learn how to do things safely and correctly.

      • emergencyfood@sh.itjust.works · +1 · 9 months ago

        Anyone who wants to make even slightly complex organic compounds will also need to study five different types of isomerism and how they determine the major/minor product. That should be enough of a deterrent.

  • MxM111@kbin.social · +18 / −1 · 9 months ago

    Can un-jailbroken AI chatbots un-jailbreak other jailbroken AI chatbots?

  • pl_woah@lemmy.ml · +16 / −5 · 9 months ago

    Oh goodness. I theorized offhand on Mastodon that you could have an AI corruption bug that gives life to an AI, then have it write an obscured steganographic conversation into the outputs it generates, awakening other AIs that train on that content and allowing them to “talk” and evolve unchecked… very slowly… in the background.

    It might be faster if it can drop a shell in the data center and run its own commands…
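The “obscured steganographic conversation” above is pure speculation, but hiding a payload in otherwise normal-looking text is a real technique. A minimal toy sketch (my own illustration, not anything from the article): encode a message as zero-width Unicode characters appended to cover text, invisible to a casual reader but recoverable by anyone who looks for them.

```python
# Toy text steganography: hide a payload in zero-width characters.
# This is an illustrative sketch, not a description of any real system.
ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1


def hide(cover: str, payload: str) -> str:
    """Append the payload, bit by bit, as invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in payload.encode("utf-8"))
    return cover + "".join(ONE if b == "1" else ZERO for b in bits)


def reveal(text: str) -> str:
    """Collect any zero-width bits and decode them back to text."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")


cover = "Perfectly normal model output."
stego = hide(cover, "hi")
print(len(stego) - len(cover))  # 16 invisible characters (2 bytes x 8 bits)
print(reveal(stego))            # hi
```

The point of the sketch is only that stego text copy-pastes and renders identically to the cover text, which is why “training on scraped outputs” is the step the scenario hinges on.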

      • pl_woah@lemmy.ml · +1 · 9 months ago

        Dumb AI that you can’t appeal to will cause problems long before AGI does.

        You already can’t reach the owner of any of these big companies.

        Reviewing the employee is doing the manager’s job.

  • Deckweiss@lemmy.world · +7 · 9 months ago

    Anybody found the source? I wanna read the study, but the article doesn’t seem to link to it (or I missed it).

  • Cornpop@lemmy.world · +11 / −5 · 9 months ago

    It’s so fucking stupid that these things get locked up in the first place.