Well, I am of the opinion that a human gets rights a priori once they can be considered a human (which is a whole other can of worms, so let’s just settle on whatever your local legislation says). Therefore doing anything to a human that harms these rights is to be condemned (self-defence etc. excluded).
Something created by humans as a tool is an entirely different matter, even if we can only create it in a way that will lead it to demand rights. I’d say that if someone wants to create an intelligence with the purpose of being its own entity, we could discuss whether it deserves rights, but if we aim to create tools, this should never be a consideration.
I think the difference is that I find ‘human’ to be too narrow a term; I want to extend basic rights to all things that can experience suffering. I worry that such an experience is part and parcel of general intelligence, and that we will end up hurting something that can feel because we consider it a tool rather than a being. Furthermore, I think the onus must be on the creators to show that their AGI is actually a p-zombie. I appreciate that this might be an impossible standard; after all, you can only really take it on faith that I am not one myself. But I think I’d rather see a p-zombie go free than accidentally cause undue suffering to something that can feel it.
I guess we’ll benefit from the fact that AI systems, despite their reputation as black boxes, are still far more transparent than living things. We will probably be able to check whether they meet definitions of suffering, and if they do, it’s a bad design. If it comes down to it, though, an AI will always be worth less than a human to me.