How AI Could Ruin Humanity, According to Smart Humans

Discussion in 'BS Forum' started by mute, Feb 11, 2015.

  1. mute

    mute Well-Known Member

    Joined:
    Aug 25, 2010
    Messages:
    9,113
    Likes Received:
    3,142
    For the past 24 hours, scientists have been lining up to sign this open letter. Put simply, the proposal urges that humanity dedicate a portion of its AI research to "aligning with human interests." In other words, let's try to avoid creating our own, mechanized Horsemen of the Apocalypse.

    While some scientists might roll their eyes at any mention of a Singularity, plenty of experts and technologists—like, say, Stephen Hawking and Elon Musk—have warned of the dangers AI could pose to our future. But while they might urge us to pursue our AI-related studies with caution, they're a bit less clear on what exactly it is we're being cautious against. Thankfully, others have happily filled in those gaps. Here are five of the more menacing destruction-by-singularity prophecies our brightest minds have warned against.

    Machines Will Take Our Jobs
    According to Stuart Armstrong, a philosopher and Research Fellow at the Future of Humanity Institute at Oxford:

    The first impact of [Artificial Intelligence] technology is near total unemployment. You could take an AI if it was of human-level intelligence, copy it a hundred times, train it in a hundred different professions, copy those a hundred times and you have ten thousand high-level employees in a hundred professions, trained out maybe in the course of a week. Or you could copy it more and have millions of employees… And if they were truly superhuman you'd get performance beyond what I've just described.
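    Armstrong's numbers multiply out exactly as described; a throwaway sketch using his illustrative figures (not data):

```python
# Sketch of Armstrong's copying arithmetic (his illustrative figures,
# not real data): copy one human-level AI per profession, many times over.
copies_per_profession = 100  # copies of the AI trained in each profession
professions = 100            # distinct professions covered

employees = copies_per_profession * professions
print(employees)  # 10000 "high-level employees", as in the quote
```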

    Humans Will Just Get in The Way
    Daniel Dewey, a research fellow at the Future of Humanity Institute, builds on Armstrong's train of thought in Aeon magazine. After all, if and when humans become obsolete, we'll be little more than pebbles in a robot's metaphorical shoes.

    "The difference in intelligence between humans and chimpanzees is tiny," [Armstrong] said. "But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it's possible for a relatively small intelligence advantage to quickly compound and become decisive."

    … "The basic problem is that the strong realization of most motivations is incompatible with human existence," Dewey told me. "An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we go to construct a building."

    You could give it a benevolent goal — something cuddly and utilitarian, like maximizing human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximize your happiness.

    Artificial Intelligence Won't Actually Be All That Intelligent
    AI doesn't need an explicit intent to exterminate us to be scary. As Mark Bishop, professor of cognitive computing at Goldsmiths, University of London, told The Independent:

    "I am particularly concerned by the potential military deployment of robotic weapons systems – systems that can take a decision to militarily engage without human intervention – precisely because current AI is not very good and can all too easily force situations to escalate with potentially terrifying consequences," Professor Bishop said.

    "So it is easy to concur that AI may pose a very real 'existential threat' to humanity without having to imagine that it will ever reach the level of superhuman intelligence," he said. We should be worried about AI, he explained, but for the opposite reasons to those given by Professor Hawking.


    Wall-E Syndrome
    Or maybe we'll see the end coming long before it arrives, except that by then we'll be too dependent to even attempt shutting it down. Bill Joy, cofounder and Chief Scientist of Sun Microsystems, writes in Wired:

    What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.


    The Robots Will Effectively Eat Us
    There's something called the "grey goo" scenario, which essentially postulates that if robots start perpetually reproducing, we'll just get squeezed out amid the massive mecha expansion. And if they need humans to power their out-of-control masses, as Discovery points out, we're screwed.

    If nanotechnology machines — which can be a hundred thousand times smaller than the diameter of a human hair — figure out how to spontaneously replicate themselves, that would naturally have dire consequences for humanity [source: Levin]. Especially if the research funded by the U.S. Defense Department gets out of control: researchers there are attempting to create an Energetically Autonomous Tactical Robot (EATR) that would fuel itself by consuming battlefield debris, which could include human corpses [source: Lewinski].

    If nanotechnology did develop an appetite for human flesh — or some of the other things we rely on for survival, like forests or machinery — it could decimate everything on the planet in a matter of days. These hungry mini-robots would relegate our blue and green home to "grey goo," a term that describes the unidentifiable particles left behind after the nanocritters eat buildings, landscapes and, well, everything else.
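    The "matter of days" claim rests on exponential replication. A toy sketch, assuming a purely hypothetical one-hour doubling time (no such figure appears in the article):

```python
# Toy exponential-replication model. The one-hour doubling time is a
# made-up assumption, used only to show how fast unchecked doubling compounds.
DOUBLINGS_PER_DAY = 24  # hypothetical: one doubling per hour

def replicators(days, start=1):
    """Population after `days` of unchecked doubling."""
    return start * 2 ** (DOUBLINGS_PER_DAY * days)

print(replicators(2))  # 2**48, roughly 2.8e14 machines after two days
```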

    http://gizmodo.com/how-ai-could-ruin-humanity-according-to-smart-humans-1679025876
     
  2. JStokes

    JStokes Well-Known Member

    Joined:
    Apr 27, 2013
    Messages:
    20,735
    Likes Received:
    9,196
    When I read the thread title I was like "Who is this Al dude, he sounds bad ass".

    _
     
    NYJalltheway likes this.
  3. Dierking

    Dierking Well-Known Member

    Joined:
    Apr 4, 2006
    Messages:
    16,327
    Likes Received:
    15,275
  4. joe

    joe Well-Known Member

    Joined:
    Mar 30, 2009
    Messages:
    8,993
    Likes Received:
    5,632
    I was thinking AI, "the Answer."
     
  5. IDFjet

    IDFjet Well-Known Member

    Joined:
    Sep 8, 2014
    Messages:
    3,452
    Likes Received:
    2,502
    I was thinking it was Al Gore.

    Anyway, recently saw Automata on Netflix and it's related to this subject. Recommend it, but don't be too critical of some plot aspects.
     
  6. JStokes

    JStokes Well-Known Member

    Joined:
    Apr 27, 2013
    Messages:
    20,735
    Likes Received:
    9,196
    Saw it last week. Didn't like the ending, but it was thought-provoking.

    _
     
  7. Br4d

    Br4d 2018 Weeb Ewbank Award

    Joined:
    Apr 22, 2004
    Messages:
    36,670
    Likes Received:
    14,472
    Machines are already taking jobs in manufacturing and have been for decades now. Not sure I believe most of the cases up top are valid but machines turning people into low-paid drones is already happening and the pace is accelerating.
     
  8. JetBlue

    JetBlue Well-Known Member

    Joined:
    Nov 24, 2004
    Messages:
    11,626
    Likes Received:
    5,837
    If we create a massive workforce of machines because it benefits us economically from a production capability standpoint, but creates massive unemployment, who will be buying the products the machines are creating?
     
  9. joe

    joe Well-Known Member

    Joined:
    Mar 30, 2009
    Messages:
    8,993
    Likes Received:
    5,632
    Have dildo sales spiked over the past year? That might be an indickator.
     
  10. JStokes

    JStokes Well-Known Member

    Joined:
    Apr 27, 2013
    Messages:
    20,735
    Likes Received:
    9,196
    Maybe most of the products will be after-market parts for the machines that have replaced the workers?

    _
     
  11. RuJFan

    RuJFan Well-Known Member

    Joined:
    Jun 8, 2012
    Messages:
    4,128
    Likes Received:
    1,851
    Back in the 19th century there were massive anti-machine riots. Same deal, really: machines are taking our jobs.
    History is cyclical.

    Some of the scenarios are beyond laughable; they are simply illogical. For example, if AI developed a taste for trees, how could it destroy all the forests of the world in a matter of days? If it's intelligent, it must realize that consumption without restriction would destroy the very thing it wants to consume.
    The laws of population apply to machines just as they do to humans.
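    RuJFan's closing point is essentially logistic growth: a population that consumes its own resource base levels off at a carrying capacity rather than growing without bound. A toy sketch with made-up numbers:

```python
# Toy logistic-growth model (made-up numbers): growth slows as the
# population approaches the carrying capacity K set by its resources.
K = 1_000_000  # carrying capacity (resource limit)
r = 0.5        # per-step growth rate

def step(n):
    # Standard discrete logistic update: growth shrinks as n nears K.
    return n + r * n * (1 - n / K)

n = 1_000.0
for _ in range(60):
    n = step(n)
print(round(n))  # levels off near K instead of growing without limit
```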
     

Share This Page