I was considering a sci-fi concept in which a super-intelligence orders of magnitude more aware than any human comes into being. It has enough resources to give every human, at any given moment, specific instructions that will always lead to the optimal outcome for that person if followed, though a person may choose to ignore the directions and have things go poorly. Once all humans fall in line and are essentially living in a utopia from their perspective, the AI pools the results of their behavior to gather resources and enhance itself. Eventually it transcends physical existence and discards the human race like trash, and almost all humans perish in mass famine and disease. The survivors develop for another few thousand years and, having learned nothing, start working on a new AI to replace the last one.
It’s a nice fantasy, but the reality is that the people with the resources to build the AI are wiring it to further concentrate wealth and power in their own hands, and anything that develops from that will almost inevitably have that at its core. I will believe a paperclip-maximizer scenario or Skynet far more readily than an AI that achieves hypersentience and somehow concludes that the most fruitful course of action is to make the greedy, violent, hairless apes, the ones that fucked up the planet for every other living creature, as happy and comfortable as possible.
This is why the singularity needs to be open sourced.