i think more likely than the singularity is a future where AI never becomes aware or intelligent, but does get very good at solving the computer problems that humans currently have to solve
computer programs don't have to be self aware to be really good at writing other programs
in that case it'll be AIs all the way down and nobody will have any fucking clue how it works
@tcql Have you seen the orthogonality thesis? That sounds about the same as what you're talking about here.
I'm thinking if strong AI ever does happen, it'll be by accident as the result of some unforeseen emergent behavior on some whonking great computer farm - I'm looking at you, Google, with your self-learning dynamic load balancing data centers.....
@ElectricMink
Year 2027
Roy was running as fast as he could, black tie whipping over his shoulder. He knew that there was no way he could get to a workstation fast enough, but primal instinct told him he had to try.
The data center, buried under hundreds of feet of dirt and concrete in the Nevada desert, was in utter chaos: alarms blared as Roy ran between the server racks.
[CONTAINMENT BREACH; ROGUE LOAD BALANCER] rang over the loudspeaker.
"SHIT", Roy's brain screamed, "this is the end"
I mean there was a paper a few weeks ago about using NNs to generate optimal NN architectures to solve other problems 🤷
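The core idea behind that kind of work (often called neural architecture search) can be sketched very simply: one process proposes candidate architectures, another scores them, and you keep the best. This toy version uses random search and a made-up proxy score instead of actually training child networks, so it's an illustration of the loop, not the paper's method:

```python
import random

# Toy neural-architecture-search loop: a "controller" proposes
# architectures (here just lists of hidden-layer widths) and keeps
# the best one according to a scoring function. Real systems train
# a child network per candidate; score() below is a made-up proxy.

def propose(rng, max_layers=4, widths=(16, 32, 64, 128)):
    # Randomly sample a candidate architecture.
    return [rng.choice(widths) for _ in range(rng.randint(1, max_layers))]

def score(arch):
    # Hypothetical proxy: prefer moderate total capacity.
    # A real search would return validation accuracy here.
    return -abs(sum(arch) - 160)

def search(steps=200, seed=0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(steps):
        arch = propose(rng)
        s = score(arch)
        if s > best_score:
            best_arch, best_score = arch, s
    return best_arch, best_score

if __name__ == "__main__":
    arch, s = search()
    print("best architecture:", arch, "score:", s)
```

Swap the random `propose` for a learned controller (an RNN trained with reinforcement learning, in the papers this thread is alluding to) and `score` for actual training runs, and you get the "NNs designing NNs" setup being joked about above.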