ArcticDagger to Science@mander.xyz · 2 months ago
LLMs produce racist output when prompted in African American English (www.nature.com)
35 comments · cross-posted to: science@lemmy.world
RobotToaster@mander.xyz · 2 months ago
Pretty much. It was trained on human writing, and then people are all surprised when it has human biases.
Hamartiogonic@sopuli.xyz · 2 months ago
An LLM needs to evaluate and modify its preliminary output before actually sending it. In the context of a human mind, that’s called thinking before opening your mouth.
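(Not how any particular model actually works internally, but the “evaluate and modify before sending” idea maps roughly onto a draft-critique-revise loop. A minimal sketch in Python, where `generate` is a hypothetical stand-in for whatever model call you’d use:)

```python
# Minimal sketch of a draft -> critique -> revise loop.
# `generate` is a hypothetical stand-in for any LLM call (hosted API,
# local model, etc.); swap in the real call for your setup.

from typing import Callable

def respond_with_reflection(prompt: str,
                            generate: Callable[[str], str],
                            max_revisions: int = 2) -> str:
    """Draft a reply, ask the model to critique it for biased or
    unsupported claims, and revise until the critique passes or the
    revision budget runs out."""
    draft = generate(prompt)
    for _ in range(max_revisions):
        critique = generate(
            "Review the following reply for biased, stereotyped, or "
            "unsupported claims. Reply 'OK' if there are none.\n\n"
            f"Reply:\n{draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break
        draft = generate(
            "Rewrite the reply to address this critique.\n\n"
            f"Critique:\n{critique}\n\nReply:\n{draft}"
        )
    return draft
```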
The Snark Urge@lemmy.world · 2 months ago
Who among us couldn’t benefit from a little more of that?
Hamartiogonic@sopuli.xyz · 2 months ago
Humans aren’t always very good at that, and LLMs were trained on stuff written by humans, so here we are.
The Snark Urge@lemmy.world · 2 months ago
Exciting new product from the tech industry: Fruit from the poisoned tree!