A running theme of Asimov’s Robot stories is that the Three Laws are inadequate. Robots that aren’t smart and insightful enough keep melting down their positronic brains when they reach contradictions or are placed in irreconcilable situations. Eventually Daneel and Giskard come up with the Zeroth Law; and if I recall correctly they only manage that because Daneel is humaniform and Giskard is telepathic.
There were flaws, yes, but they never rose, that I recall, to the level of attempting to destroy humanity. We had a sort of plot armor in that Asimov wasn’t interested in writing that kind of story.
I’m getting this from a foreword he wrote for one of the robot book compilations.
Wasn’t the last I, Robot story about how the robots directing the world’s politics decided that we were living better and longer lives without technology, and brought the world back to a medieval level of tech?
Flaws, or interesting interpretations of them. But he rarely if ever approached the “robots destroy humanity” trope, even though it was technically possible in his universe, because he thought it was boring.
Asimov: “The ‘robots take over the world’ plot is overdone. I think humans would make robots intrinsically safe through these three laws.”
Movie: “What if the robots interpreted the three laws in such a way that they decided to take over the world??!?”
The only good part of that movie was when Will Smith’s sidekick was like “this thing runs on gasoline! Don’t you know gasoline explodes?!”
spoiler
And the robots do take over, eventually!
Oh, sure, the robots never want to destroy and replace humanity, but they do end up taking quite a lot of control of humanity’s future.
Weren’t there books that he wrote about flaws in the Three Laws?
Yeah, it’s more that whatever safeguards you put in place, life will find a way to twist them.
Life, uh, finds a way