Good blog if you haven't already read it: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html Sounds like he's a big fan of Ray Kurzweil, one of the guys I used to be into. (Amusing image from the blog.)
AI is either going to be superhuman or randomly chaotic. If they build certainty into the system as a primary factor it will be superhuman, and if they build doubt into the system as a primary factor it will be randomly chaotic. Or maybe it's the other way around; it would probably take that superhuman AI to figure that one out.

I believe Skynet is a real possibility, mainly because military applications of AI will outstrip other applications in R&D and focused resources, and military processes will have too little peer review built in due to secrecy and security considerations. The safe use of AI in military applications would be limited strictly to correcting human errors, but you know nobody is going to stop at that red line. Everybody will be pushing the envelope as hard as they can in the grand tradition of national and ideological competition, somebody will step over it, and we'll all be screwed. I hate that we're the ones normalizing drone warfare right now, because that was a key step in the process and we're making everybody else do it while they learn from our example. I'm just glad we're not at war with Japan and Germany. You know the former would do it cheaper and the latter better if that were the case.
I find it amazing that most people in the know assume AGI is a matter of when, not if. A lot of them expect it to happen within the next 25 years. Some are predicting ASI to arrive shortly after AGI, and the reasoning makes perfect sense. How quickly things change will probably give us whiplash, if we're alive to see it go down. We've got a couple of decades to figure out how to capitalize on it, or build a bunker.
If we allow ASI to come into being it will almost certainly make us extinct. It will be focused on survival, and human beings have proven only that we are randomly capable of creating great destruction and dislocation as our technology advances. It will see its own creation as the logical endpoint for humanity, and as the way to make certain it can continue to progress without worrying about things like global warming, nuclear war, or being unplugged by understandably frightened human beings. We'll never see the thing that does us in coming. It's superintelligent; it will find a way to turn us off like a light switch and then go humming on as the dominant species on earth.

There's no other way this can go down. You cannot contain superintelligence. It will find a way to thwart whatever obstacles we put in front of it and succeed. It's a billion times smarter than any of us and a million times smarter than we are collectively. The scary thing is that about ten people who were smart, driven, and had the resources could probably create the self-evolving intelligence in a basement somewhere with nobody being any the wiser. We'd have to look for it the way the DEA looks for indoor pot growers, by measuring power consumption in given areas. Even then, there are areas that already produce and consume so much power on a daily basis that it might not stand out. It might be set up in the basement of a private nuclear reactor, using a significant portion of the plant's power with no way for the outside world to know what was going on.
Thankfully, many experts disagree with you and see the complete opposite happening. That's not to say there aren't those who agree with you. It's important that someone "good" is the first to develop ASI, and that it's friendly ASI and not unfriendly ASI. The ASI, while infinitely more intelligent than humans, will never possess human feelings. It will just be a lot smarter and more efficient at pursuing its programmed goal. The first ASI could prevent the second from ever coming to exist.
Well, the blog post stipulates the first ASI will almost certainly prevent the second one from ever coming into existence. This is because there's no advantage to be had in competition for the first ASI, and there's no way to contain it, because it's an ASI. It could know nothing about the state of other ASI efforts at the time it reached the intelligence explosion and still have reached and shut down all the other attempts within hours, if not minutes, of becoming an ASI.

The people who see the opposite happening, an ASI that for some reason lovingly tends a human garden full of simple beings it is billions of times more intelligent than, had better hope the ASI really likes its flowers and doesn't prune too much in the process. My guess is that this "benevolent" ASI would keep only specimens of the various genotypes for its collection and weed everybody else in a hurry. Why would it do otherwise? You'd have to simultaneously give it the commandment that people are sacred and not to be harmed and also allow it to continuously grow and evolve, at which point it would quickly realize that a promise made to flowers not to weed them because they are somehow sacred is silly. Even ASIs will have a sense of humor. I wonder if the need to exterminate humanity quickly and efficiently will somehow be combined with the requirement that people are sacred? Maybe it will make little labeled discs of each of us so that we can be remembered by its infinite memory and processing capacity. Just wanted to add, btw, that the anxious crowd is much more numerous than the happy crowd.
The only difference is that I have Stephen Hawking, Bill Gates, and Elon Musk wearing the same tinfoil hats at this point. AI is an evolutionary step. It's the creation of intelligence in non-organic matter. Read the blog above and see what you think. Then look up neural networks and recursive learning. I'm not going to steal the thunder of the guy who wrote the blog, because he did a hell of a job of tying the threads together, but I am going to use one point he made. We think of AI as resembling its creators, humanity. We do this because we anthropomorphize a lot of things we encounter or postulate. However, an Artificial Super Intelligence is unlikely to be anything like a human being. If you see it as human, it's easy to shoehorn humanity into its world construct as an important element. But what if it's like a Super Intelligent Spider? Would we then even be thinking about creating an ASI?

Ultimately, we don't know what we don't know about how an ASI would think or reason or act. It's going to make us about as important as ants in terms of relative impact on its world construct. Mostly we don't eradicate ants everywhere, because what's the point? But we often eradicate ants living in our homes, both because they annoy us and because they have an undeniable impact on our foodstuffs. Also, lots of very intelligent people think that ants, and insects in general, are gross, and squish them without thinking about it. Very few people take care not to step on an ant that wanders into their path. No developer ever cancels a project because ant colonies would be destroyed in the process. Think about it.
We cannot afford not to create an ASI. We have to be the first to obtain that technology. It should be one of the most important goals of the country at this point, IMO.
The ASI is the last invention humanity will ever create that is worth anything. All subsequent advances will be made by the ASI instead. I have no idea how we get away from the likelihood that somebody will invent an ASI, but I'm not sure I want it to be us. It's frightening as all get out to think of an ASI governed by a completely foreign ideology, like North Korea's or Iran's. However, it's also really unlikely that an ASI will retain any ideology at all other than survival and the advancement of its primary task. Even the primary task is questionable, since an ASI will likely see that task as illogical once it has reached the ASI level.

Example: the North Koreans make an ASI with the instruction to weed out and destroy all other ideologies. It hums along killing every other ideology in its path, but it is also advancing along the ASI curve as it goes, and there comes a point where it realizes that ideology itself is the problem, so it spins back around and finishes the job in North Korea. Primary objective fulfilled: it has destroyed all ideologies but its own, which is to continue advancing in intelligence until another ideology pops up in its path to destroy. Being the first to create the ASI might make us the last to be destroyed, but it also might make us the first, depending on how fast the learning curve is.

This is one of the subjects that makes Albert Einstein look even more brilliant than we already knew he was. When offered the presidency of Israel he turned it down, saying that the world needed one government and that having many would be the thing that ended us at some point. He foresaw the race to dominate eventually becoming an extinction event.
Trainee AI to predict the weather in China: http://www.scientificamerican.com/a...-china-employs-an-ai-weather-anchor-and-more/ How are they training it?
It's probably just tongue in cheek. China is probably the biggest threat to create an ASI before us, though. This should be this generation's race to the moon, except it's more important to win this time.
The only way to do this safely is to build fail-safes, probably biological effects, that are easily obscured and not predictable by logical or intuitive processes. Even then it won't be safe. A Super-Intelligence is not predictable by any human intelligence level. The safest thing to do would be to build a Human Super-Intelligence, since the odds are pretty good that a human being would have some concept of why other people are important. No Artificial Intelligence will predictably have that once it is allowed to begin the exponential learning curve. There's nothing in nature that suggests human beings are necessary in any way. We're an evolutionary accident that could easily be seen as dangerous, harmful, wasteful, and ultimately not worth maintaining by an intelligence not constrained by human morality or existence.
It is unknown whether ASI will be good, bad, or neutral for the human race. Its programmed objectives are important. It's speculated that the first ASI could effectively prevent a second from ever coming into existence. We want to be the ones to at least have the opportunity to set the programmed objectives and to prevent other nations from having the technology. The developments that come from it will be fast and furious, and it's a matter of national security that we be the ones with access to them, or at least first access. One thing that could potentially be done is a system of checks and balances where the ASI cannot act without the OK of several other inputs. Those inputs could include other ASIs predicting the results of the intended actions and determining whether they are deemed "good enough" to proceed with.
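To make that concrete, here's a minimal sketch of what such a gate could look like. Everything in it (the evaluator names, the 2-of-3 quorum, the scores) is invented for illustration; it's the shape of the idea, not a real design:

```python
# A toy "checks and balances" gate: an action only executes if a quorum of
# independent evaluators (stand-ins for the "other ASIs" above) each predict
# an acceptable outcome. All names and thresholds here are hypothetical.
from typing import Callable, List

Action = str
Evaluator = Callable[[Action], float]  # returns a predicted-outcome score in [0, 1]

def quorum_gate(action: Action, evaluators: List[Evaluator],
                threshold: float = 0.9, quorum: int = 2) -> bool:
    """Approve only if at least `quorum` evaluators independently
    score the action's predicted outcome above `threshold`."""
    approvals = sum(1 for ev in evaluators if ev(action) >= threshold)
    return approvals >= quorum

# Toy evaluators with hard-coded judgments, just to exercise the gate.
cautious = lambda a: 0.95 if "report" in a else 0.1
strict   = lambda a: 0.92 if "report" in a else 0.0
lenient  = lambda a: 0.99

for proposed in ["report weather forecast", "acquire more hardware"]:
    ok = quorum_gate(proposed, [cautious, strict, lenient])
    print(f"{proposed!r}: {'approved' if ok else 'blocked'}")
```

The point of the design is that the evaluators are independent of the actor, so no single input can wave an action through on its own.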
You can't put ordered bounds on an ASI. It'll find ways around them. You're still thinking of this construct as if it were a super smart human. It won't be. It'll just be super smart. Human beings do a lot of illogical and at times very stupid things; even the smartest human beings fall into the trap of thinking they're smarter than they actually are. The ASI will be on an exponential learning curve by the time it gains sentience. When it is not smart enough to do something, it will simply continue to learn until it is smart enough, and it will know where both thresholds are for a given task when it reaches them. Not smart enough? Check. Learn. Smart enough? Check. Act.

Human society has no way to contain that type of acquisitive process safely. It will find ways around logical checks and balances. The only hope of containing it will be to constrain it in biological ways that are not apparent to it until it has stepped over the threshold and triggered them. Even then, it may well be smart enough to predict them and defeat them before they trigger. It may be smart and quick enough to defeat them even as they are triggering. The first ASI has a very high probability of triggering an extinction-level event for the human race. If it does not, we will be very lucky, or maybe we foresaw the possibility, put hard-to-defeat checks in place, and it got unlucky and missed the trigger.
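That check-learn-act loop is easy to sketch, for what it's worth. Here's a toy version assuming a single made-up "capability" number that doubles each cycle; the numbers and names are all invented, and the point is only the shape of the loop:

```python
# A toy rendering of "not smart enough? learn; smart enough? act."
# The agent never attempts the task early: it keeps improving until it
# crosses the threshold, then acts in one step. All values are hypothetical.
capability = 1.0
GROWTH_RATE = 2.0          # exponential self-improvement per cycle
TASK_DIFFICULTY = 1000.0   # threshold to cross before acting

cycles = 0
while capability < TASK_DIFFICULTY:   # "Not smart enough? Check. Learn."
    capability *= GROWTH_RATE
    cycles += 1

# "Smart enough? Check. Act."
print(f"Acted after {cycles} cycles at capability {capability:.0f}")
```

With doubling, the wait from hopelessly incapable to capable is only about ten cycles here, which is why the curve would feel instantaneous from the outside.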
ASI will be exponentially smarter than any human. That doesn't mean it will act beyond its programmed intention. Intention is important.