[ kaitensushi ] [ lounge / arcade / kawaii / kitchen / tunes / culture / silicon ] [ otaku ] [ yakuza ] [ hell ] [ ? / chat ] [ lewd / uboa / lainzine ] [ x ]

/silicon/ - technology

from the trenches

File: 1760565039402.jpg (144.31 KB, 1080x1080, k.jpg)

 No.2534

Does anyone here actually believe improvements in artificial intelligence will lead to a runaway process ending in artificial super-intelligence (aka the singularity)? And if so, can you explain why the feedback loop of an AI improving itself has to run away rather than simply hit diminishing returns? Is it about AI's speed advantage and innate breadth of knowledge?

You could, I suppose, argue that if smart humans can do something, then a smart AGI could do it too, and faster. Therefore if smart humans could eventually create super-intelligence, and smart AGI is successfully created, the rest of the staircase gets climbed much quicker in something resembling a runaway process. But I think people who argue this assume artificial super-intelligence is possible in the first place. What if it's simply not possible, or at least not possible with techniques derived from current advances? How can people be so sure there isn't a hard wall not so far away?

Despite being intelligent, and despite arguably harboring low-level super-intelligence among us in the form of the rare genius, we haven't been able to improve our own brains at all… at best we have some blunt hammers in the form of psychiatric drugs, and we don't even know how some of them work.

 No.2536

>>2534
I believe in the approaching singularity, and I never stopped to question whether or not there was simply a wall. It's an interesting thought. I guess if there is a wall, we'll find out soon enough.

I hope somebody gives you a better answer. Good luck!

 No.2537

File: 1760705783950.png (361.91 KB, 600x600, c69cee6bacfd0e0827a35077c7….png)

Are we not already at the point of diminishing returns? Don't get me wrong, AI tools are useful and are going to replace a lot of routine tasks, but they don't seem to be on a path to becoming a "super-intelligence". If anything, I'd say they're going to lead to a more uniform and consistent mediocrity.

 No.2538

File: 1760752084258.jpg (369.59 KB, 650x938, sudokillall -9.jpg)

we haven't reached the point where the intelligence can improve itself or fix itself when it errs. so no, AGI still isn't an issue.
could improvements lead to a runaway process? i doubt it. computers don't do well when most or all of their resources are in use. an AI would have to rely on datacenter infrastructure and high-performance network infrastructure. i doubt anything massive enough to be considered AGI could move through the WAN the way a worm can; it's too large. most loads are virtualized, and checks could be run to kill anything that tries to break out of a container/VM. barring that, there is a final solution
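That "kill anything that tries to break out" idea can be sketched as a host-side policy check. A minimal sketch, assuming a watchdog that compares a guest's observed usage against the ceiling it was launched with; the field names and thresholds here are hypothetical illustrations, not any real hypervisor API:

```python
# Hypothetical host-side watchdog policy: flag a guest VM/container that
# blows past its memory ceiling or its network-egress budget, both plausible
# signs of a workload trying to copy itself out. Illustrative sketch only.

from dataclasses import dataclass

@dataclass
class GuestStats:
    mem_bytes: int       # memory the guest is currently using
    mem_limit: int       # ceiling assigned at launch
    net_tx_bytes: int    # bytes sent since the last check
    net_tx_budget: int   # egress considered normal per check interval

def should_kill(stats: GuestStats) -> bool:
    """True if the guest exceeds either its memory ceiling or egress budget."""
    return (stats.mem_bytes > stats.mem_limit
            or stats.net_tx_bytes > stats.net_tx_budget)

# A guest within limits is left alone; one flooding the network gets flagged.
print(should_kill(GuestStats(2**30, 2**31, 10_000, 1_000_000)))     # False
print(should_kill(GuestStats(2**30, 2**31, 5_000_000, 1_000_000)))  # True
```

In practice the same effect comes from ordinary resource limits (cgroups, hypervisor quotas) rather than a bespoke script, but the decision logic is this simple.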

 No.2539

>>2538
What about distributing itself over a botnet? No one computer has to host it all.

Maybe I lack imagination, but I don't think AI escape is going to be much of an issue in practice, even if we reach some level of AGI. We already have malware, and the reason it doesn't overtake everything is that people are working on both sides with the same tools. The same model that escapes and scans for vulnerabilities is going to be used in the very same way to plug the holes.

 No.2541

File: 1760880431673.jpg (114.85 KB, 1089x613, 24f8259ab44ed26cbbfa922e53….jpg)

>>2539
>AI escape isn't an issue

Until 5 computers in the botnet holding key resources become unreachable for whatever reason. Botnets are used for DDoS and other unsophisticated attacks precisely because they degrade gracefully: even if some nodes are turned off, the mass of them is still dangerous. An AI is not an automated script; it needs all its resources to act together, not to mention the VRAM and heavy storage requirements, whereas a botnet node only needs to open a webpage and hold a connection, which almost any computer can do. Very different design capabilities. Have you looked at the full size of open LLMs? They're gigantic. A real AGI would probably be a lot larger.

Also, WAN transfer speeds are not fast. You better hope this proto-indo skynet doesn't get turned off before the transfers happen lol
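On that transfer-speed point, a back-of-envelope sketch. The 1 TB model size and 50 Mbps uplink are made-up illustrative numbers, not measurements of any real system:

```python
# Rough transfer-time arithmetic for moving a large model over a WAN.
# All sizes and link speeds below are illustrative assumptions.

def transfer_hours(size_gb: float, uplink_mbps: float) -> float:
    """Hours to push size_gb gigabytes through an uplink of uplink_mbps megabits/s."""
    size_megabits = size_gb * 8 * 1000  # 1 GB = 8000 megabits (decimal units)
    return size_megabits / uplink_mbps / 3600

# e.g. a hypothetical 1 TB (1000 GB) model over a 50 Mbps residential uplink:
print(round(transfer_hours(1000, 50), 1))  # prints 44.4
```

Nearly two days of sustained saturation of a home uplink, which is exactly the kind of anomaly a watchdog or an ISP would notice long before it finished.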


