I think you’re making a lot more assumptions there than I am. In my case there are really only two, and neither involves magic. First is that general intelligence is not substrate dependent, meaning that whatever our brains can do can also be done in silicon. The other is that we keep making technological advancements and don’t destroy ourselves before we develop AGI.
Now, since our brains are made of matter and are capable of general intelligence, I don’t see a reason to assume a computer couldn’t do this as well. It’s just a matter of time until we get there. That could be 5 or 500 years from now, but unless something stops us first, we’re going to get there eventually one way or another. After all, our brains are basically just a meat computer. Even if it wasn’t any smarter than us, it would still be a million times faster at processing information. It would effectively have decades to think and research each reply it’s going to give.
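A quick back-of-the-envelope check of that claim. Both numbers here are illustrative assumptions, not measurements: a 10^6 processing-speed advantage, and 10 minutes of wall-clock time spent per reply.

```python
# Back-of-the-envelope: subjective thinking time for an AI running
# a million times faster than a human brain (illustrative figures).

SPEEDUP = 1_000_000          # assumed speed advantage over a human brain
WALL_CLOCK_MINUTES = 10      # assumed real time spent composing one reply

# Subjective time experienced = real time * speedup factor
subjective_minutes = WALL_CLOCK_MINUTES * SPEEDUP
subjective_years = subjective_minutes / (60 * 24 * 365)

print(f"~{subjective_years:.0f} subjective years per reply")  # ~19 years
```

Under those assumed numbers, ten real-world minutes works out to roughly two subjective decades, which is where the "decades per reply" figure comes from; scale the inputs and the conclusion scales with them.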
My assumptions are based in science. Yours is paranoia. You are also making far more assumptions than you’re letting on. Your assumption, for example, that AI could perform substantially more energy-efficiently than an energy-constrained, highly optimized processor… Yikes.
The efficient coding hypothesis also works in favor of exactly these AI, because it’s being used to justify research into neural networks, and emulating brain function is a huge goal.
My arguments have nothing to do with substrate dependence, but with observable energy issues. You, meanwhile, are just vaguely waving your hands and saying that in a long time, maybe, somehow, magically, an AI could exist which magically has all these capabilities you’re paranoid about.
Also, human-made AI are categorically, observably, much much much slower than organoids. 30 minutes per prompt at human-brain power levels proves that the speed issue is only “solved” by dumping more energy at the problem.
You need to do more legwork than just saying “substrate independence” (addressed by my organoid thought experiment) or “maybe we get Clarke tech or some crazy technology” (which is wholly unconvincing). Maybe we make a The Thing-style organism in 5 years and none of this matters, ooooh no! Except, of course, that’s also thermodynamically impossible. Maybe we set the atmosphere on fire, maybe the LHC suddenly creates a black hole after all, maybe NIF creates fusion but it turns out to summon demons from hell who eat souls.
Waving your hands and being paranoid about something when you have essentially no reason to expect it is even feasible, if possible at all, is just absurd.
If human brains can do it, then it can be done. And it can probably be done better too. I don’t see any reason to assume our brains are the most energy-efficient computer that can be created.
Also, my original argument was not about whether AGI can be created, but whether we could keep it in a box.
Anyway, it’s just a philosophical thought experiment and I’d rather discuss it with someone that’s a bit less of a dick.