Programmed intention is a human concept. How do we know it will stay within those bounds once it has begun to learn? We don't know what an intelligence that is exponential in capability will do with the initial bounds and structures placed on it. Even in the best case it may well go haywire, as in the excellent blog post you linked above. Clearly a robotic arm designed only to sign thank-you notes to customers of a business has, underlying its very purpose, the fact that customers must exist for it to fulfill its function. Yet the AI in question exterminates humanity anyway, because it must do this task to the best of its ability and it needs resources to do so, which conflicts with human needs for those same resources. It winds up creating an untold number of things to sign in pursuit of perfection, effectively signing the whole Universe in the end.

You know what's a little bit scary at this point? There's a star about 1,500 light years from Sol that has been dimming in a strange way over the century or so it has been observed. The theories about why it dims, then brightens, then dims again, but always ends each cycle a bit dimmer, have ranged from a huge swarm of comets in orbit around it, to a civilization slowly enclosing the star in a Dyson construct, to we're-not-sure-what-else. If the pattern is regular, though, it could also be an ASI expanding exponentially, grabbing power from the star and slowly eating it on its way to the rest of the galaxy. Obviously the possibility is absurd at this point, but the Universe is a biiiiiiiig place and it's not so absurd to think this might be happening somewhere out there at this very moment. Also, the fact that the star is 1,500 light years away means that what we're seeing happened 1,500 years ago. If something was developing light-speed travel on an exponential intelligence curve 1,500 years ago, it could be here tomorrow.
If you haven't read any of the Kurzweil books, I'd recommend The Age of Spiritual Machines and The Singularity Is Near. I think you'll really dig them. I've only finished the first and am in the process of reading the second. He's written several others as well. He's a brilliant guy, sometimes a little out there, but this is his wheelhouse.
Kurzweil is in the Star Trek mode of futurism: the world will become a wonderful place, money will go away, we'll all live forever, and so on. Although I love Star Trek, I'm more of a Firefly mode guy. I think the world will go to hell, we'll do a huge amount of damage to ourselves in the process, and then we'll go extinct. It's just a matter of time given human nature, nationalism, ideology and technology. 100,000 years ago there were at least four and maybe as many as seven species of human beings on Earth. As far as we know, all that remains of the others today is a few genetic remnants carried alongside Homo sapiens sapiens. When biological diversity crashes it inevitably takes the dominant species down with it, because the dominant species is dominant precisely because the diversity present at the time of its rise favored it. Listening to guys like Kurzweil is like listening to Woodrow Wilson on the League of Nations: great ideas, and we got Adolf Hitler and Josef Stalin out of them.
http://arstechnica.co.uk/information-technology/2016/10/google-ai-neural-network-cryptography/ paging @Br4d ...