http://blogs.scientificamerican.com...ating-a-human-does-not-signal-the-apocalypse/ This is an optimistic take on why human beings continually losing ground to AI and machines is not a bad thing. What makes it optimistic is best encapsulated by the last two paragraphs. The author clearly does not envision a scenario in which an AI is constantly redefining and extending itself. If he did, the last three sentences would seem very ominous indeed.
The thing that always gives me hope in the AI discussion is that, without fail, the human race is selfish enough to preserve itself above everything else, to the point that there is some quotient of survivability in any scenario. We're like roaches. That may also be why we hate roaches so much, in some unspoken subconscious way.
Software is programmed to do a specific task. When ASI is created, that will still be true. It won't start making decisions about things to do other than its programmed task. I think your fears lie in the humanization of ASI.
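Just to make the point concrete, here's a toy sketch (the objective and names are made up, not from any real system): an optimizer, however capable, only ever scores actions against the objective it was handed. There's no channel in the code for it to acquire some other goal.

```python
# Toy illustration: a capable optimizer still only pursues the objective
# it was given. The objective comes from outside; the software has no
# mechanism to swap it for something else.

def optimize(objective, candidate_actions):
    """Pick the action the supplied objective scores highest.
    Nothing else enters the decision."""
    return max(candidate_actions, key=objective)

# Hypothetical task: get as close to 3 as possible.
best = optimize(lambda x: -(x - 3) ** 2, range(10))
print(best)  # -> 3
```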
The issue is that there will be a competitive advantage in having the best AI on the planet. Powers, governmental, corporate, and private alike (hedge funds, think tanks, etc.), will race to develop the best AIs. Somebody will screw up and overstep, and unlike the Cuban Missile Crisis there will not be human beings on both sides ultimately deciding whether or not to pull the trigger. It will just be the first sentient AI, in a human oops moment from which there will likely be no recovery.

Human beings are not a necessary part of the Universe. We just happen to be the most highly evolved beings in one of the best garden spots available; that's how we got here and why we're still around. A sentient super-intelligence has no reason to maintain us. Think of how you classify other human beings by groups and usefulness and all that. The sentient super-intelligence is likely to lump all of humanity into your lowest tier, or below it. It may well just see us as roaches. We're wasteful and chaotic, and we frequently go to ruin all on our own.

Maybe the ASI decides to keep a few pet geniuses around as breeding stock so it has a random element to inject into its matrix now and then as a fudge factor. Most of us will be of no value at all to it, and we'll still be consuming resources at a huge rate and multiplying into a greater pestilence on the grand vision of the matrix it inhabits. What do you do when you discover an anthill under your kitchen window and see ants in the kitchen? What *would* you do if you had a way to make all ants disappear and never have to deal with them again?
That's not AI, that's HI. The programmers set the cells to detect up to 3 states (at this point) and respond in a predictable manner to those states. It is a very interesting development, though. As long as they don't make the mistake of allowing the cells to interact with each other directly, with independent goals, we're not in a potential AI trap here. The bio-warfare implications are kind of scary, but that's an HI problem at this point. If we're stupid enough to make biological weapons that are programmable in this way, we deserve what happens next.
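For what it's worth, here's roughly how I read that setup, as a toy sketch (the signal and response names are hypothetical, not from the article): the engineered cells amount to a fixed lookup from detected states to responses. No goals, no learning, no cell-to-cell feedback loop, which is exactly why it's HI and not AI.

```python
# Toy model of a programmed cell: a fixed stimulus -> response table.
# Signal and response names are invented for illustration.

RESPONSES = {
    ("signal_a",): "produce_protein_x",
    ("signal_a", "signal_b"): "fluoresce",
    ("signal_a", "signal_b", "signal_c"): "self_destruct",
}

def cell_response(detected_states):
    """Deterministic response to the detected states; the whole
    'program' is just this table. No memory, no goals, no updating."""
    return RESPONSES.get(tuple(detected_states), "do_nothing")

# The dangerous version would be cells that read *each other's* outputs
# and rewrite their own tables -- a feedback loop this design lacks.
print(cell_response(("signal_a", "signal_b")))  # -> "fluoresce"
```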
AI will be able to interact with just about anything once its accelerating cognition has surpassed HI. This isn't scary to me because of AI; AI would come up with something much scarier than this once it got on the curve and bypassed us. Machines are better than us at any repetitive task that requires iteration. Once learning becomes an artificial iterative process, we're going to become yesterday's news in a hurry. The question is: how do you build in checks and balances that a god-like intelligence cannot find a way around? The answer is unclear.
I can't imagine much worse than going down in a shipwreck and being stuck in the same lifeboat as Br4d. Holy smokes. By daybreak of the first morning, I'd be fucking food. Geezus.
AI is software. If there is no way to programmatically communicate with something, AI cannot control it. How does software control a deer, for example? It cannot. Well - unless it can programmatically create living cells. Wait.