Some time ago, I saw a blog post by Draginol/Frogboy/Wardell talking about the dangers he saw in the combination of human greed and the approaching AI singularity. To my shame... I cannot find it.
The long and short of it, as best I recall, was that humanity, under those circumstances, was probably doomed, but that it would depend on what happened on the way there.
After a chap I know commented about AI and the singularity (he's picking 10 years, I'm picking longer...), I tried to find the post again and failed.
I have far too many questions at this point, but the first (hopefully the easiest!) is whether anyone can supply a link to the post.
Second, since an AI would seem to need a purpose, I don't necessarily see them going 'Skynet' (kill all humans), but I could easily see them deciding to 'guide' humanity from the shadows without our ever knowing (somewhere between Asimov's machine brains and the Samaritan AI from Person of Interest), especially given the increasingly authoritarian direction modern society has taken (horseshoe theory). What are your thoughts on this, and is there a way to preserve the individual freedom to choose a path in life in an age of eternal surveillance and machine interference (social media feeds, tailored advertising)?
Apologies if this is the wrong place to post this. Thanks in advance.