No.2534
Does anyone here actually believe improvements in artificial intelligence will lead to a runaway process ending in artificial super-intelligence (aka the singularity), and if so, can you explain why the feedback effect of an AI improving itself has to be runaway rather than simply lead to diminishing returns? Is it about AI's speed advantage and innate breadth of knowledge?
You could, I suppose, argue that if smart humans can do something, then a smart AGI could do it too, and faster. Therefore if smart humans could eventually create super-intelligence, and smart AGI is successfully created, the rest of the staircase gets climbed much quicker in something resembling a runaway process. But I think people who argue this assume artificial super-intelligence is possible in the first place. What if it's simply not possible, or at least not possible with techniques derived from current advances? How can people be so sure there isn't a hard wall not so far away?
Despite being intelligent, and despite arguably harboring low-level super-intelligence among us in the form of the rare genius, we haven't been able to improve our own brains at all… at best we have some blunt hammers in the form of psychiatric drugs, and we don't even know how some of them work.
No.2536
>>2534
I believe in the approaching singularity, and I never stopped to question whether or not there was simply a wall. It's an interesting thought. I guess if there is a wall, we'll find out soon enough.
I hope somebody gives you a better answer. Good luck!
No.2539
>>2538
What about distributing itself over a botnet? No one computer has to host it all.
Maybe I lack imagination but I don't think AI escape is going to be much of an issue in practice, even if we reach some level of AGI. We already have malware and the reason it doesn't overtake everything is because there are people working on both sides with the same tools. The same model that escapes and scans for vulnerabilities is going to be used in the very same way to plug the holes.
No.2541
>>2539
>AI escape isn't an issue
Until 5 computers in the botnet holding key resources are unreachable for whatever reason. Botnets are specifically used for DDoS and other less sophisticated attacks because even if some machines are turned off, the mass of them is still dangerous. An AI is not an automated script; it needs all its resources acting together, not to mention the VRAM and high storage requirements, whereas a botnet node only needs to open a webpage and hold a connection, which is possible on most computers. Very different design capabilities. Have you looked at the full size of open LLMs? They're gigantic. A real AGI would probably be a lot larger.
Also, WAN transfer speeds are not fast. You'd better hope this proto-indo Skynet doesn't get turned off before the transfer finishes lol
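To put rough numbers on the transfer-speed point: a quick back-of-envelope sketch, where both the model size (400 GB) and the uplink speed (50 Mbps) are made-up illustrative figures, not measurements of any real system.

```python
# Back-of-envelope: time to move a large model over a typical uplink.
# Both numbers below are assumptions for illustration only.

model_size_gb = 400   # assumed size of a large open-weight model
uplink_mbps = 50      # assumed residential upload speed

size_bits = model_size_gb * 8e9          # GB -> bits (decimal GB)
seconds = size_bits / (uplink_mbps * 1e6)  # bits / (bits per second)

print(f"{seconds / 3600:.1f} hours")  # -> 17.8 hours
```

So even under generous assumptions, copying itself out of one compromised machine is an hours-long operation per hop, with plenty of time for the transfer to be noticed or cut.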