
Thread: Skynet: Control dangerous A.I. artificial intelligence before it controls us


  1. #1
    Senior Member AirborneSapper7's Avatar
    Join Date
    May 2007
    Location
    South West Florida (Behind friendly lines but still in Occupied Territory)
    Posts
    117,696

    Skynet: Control dangerous A.I. artificial intelligence before it controls us

    Control dangerous AI before it controls us, one expert says

    He believes super-intelligent computers could one day threaten humanity's existence


    A killer robot from the 2009 film "Terminator Salvation" — exactly the type of future we don't want to see.
    By Jeremy Hsu

    Updated 3/1/2012 1:22 PM ET

    Super-intelligent computers or robots have threatened humanity's existence more than once in science fiction. Such doomsday scenarios could be prevented if humans can create a virtual prison to contain artificial intelligence before it grows dangerously self-aware.

    Keeping the artificial intelligence genie trapped in the proverbial bottle could turn an apocalyptic threat into a powerful oracle that solves humanity's problems, said Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky. But successful containment requires careful planning so that a clever breed of artificial intelligence cannot simply threaten, bribe, seduce or hack its way to freedom.

    "It can discover new attack pathways, launch sophisticated social-engineering attacks and re-use existing hardware components in unforeseen ways," Yampolskiy said. "Such software is not limited to infecting computers and networks — it can also attack human psyches, bribe, blackmail and brainwash those who come in contact with it."

    A new field of research aimed at solving the prison problem for artificial-intelligence programs could have side benefits for improving cybersecurity and cryptography, Yampolskiy suggested. His proposal was detailed in the March issue of the Journal of Consciousness Studies.


    [Image] Computer scientist Roman Yampolskiy has suggested a warning sign, modeled on the biohazard and radiation symbols, to indicate a dangerous artificial intelligence. (Jeremy Hsu / TechMediaNetwork)


    How to trap Skynet
    One starting solution might trap the artificial intelligence, or AI, inside a "virtual machine" running within a computer's ordinary operating system, an existing security practice that limits the AI's access to its host computer's software and hardware. That would stop a smart AI from doing things such as sending hidden Morse code messages to human sympathizers by manipulating a computer's cooling fans.

    Putting the AI on a computer without Internet access would also prevent any "Skynet" program from taking over the world's defense grids in the style of the "Terminator" films. If all else fails, researchers could always slow down the AI's "thinking" by throttling back computer processing speeds, regularly hit the "reset" button or shut down the computer's power supply to keep an AI in check.
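    The "throttling" and "reset button" ideas above amount to putting hard resource and time caps on an untrusted process. A minimal sketch of that idea, using ordinary POSIX resource limits on a child process (this is only an illustration of OS-level confinement, not anything from Yampolskiy's proposal, and `run_confined` is a hypothetical helper name):

    ```python
    import resource
    import subprocess
    import sys

    def run_confined(code: str, cpu_seconds: int = 1,
                     mem_bytes: int = 512 * 1024 * 1024) -> str:
        """Run untrusted code in a child process with hard CPU and memory caps.

        Illustrative only: OS resource limits can slow down or kill a runaway
        process, but they are far from a complete containment mechanism.
        """
        def apply_limits():
            # Kill the child if it exceeds its CPU-time budget ("throttling").
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            # Cap the child's address space so it cannot exhaust host memory.
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

        result = subprocess.run(
            [sys.executable, "-c", code],
            preexec_fn=apply_limits,     # POSIX only
            capture_output=True,
            text=True,
            timeout=cpu_seconds + 5,     # wall-clock backstop, the "reset button"
        )
        return result.stdout

    # A well-behaved program completes normally within its budget.
    print(run_confined("print(2 + 2)"))
    ```

    A process that spins in an infinite loop would instead be killed by the kernel when the CPU limit expires, which is the crude "shut down the power supply" end of the spectrum.
    
    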

    Such security measures treat the AI as an especially smart and dangerous computer virus or malware program, but without the sure knowledge that any of the steps would really work.


    "The Catch-22 is that until we have fully developed superintelligent AI we can't fully test our ideas, but in order to safely develop such AI we need to have working security measures," Yampolskiy told InnovationNewsDaily. "Our best bet is to use confinement measures against subhuman AI systems and to update them as needed with increasing capacities of AI."

    Never send a human to guard a machine

    Even casual conversation with a human guard could allow an AI to use psychological tricks such as befriending or blackmail. The AI might offer to reward a human with perfect health, immortality, or perhaps even bring back dead family and friends. Alternatively, it could threaten to do terrible things to the human once it "inevitably" escapes.

    The safest approach for communication might only allow the AI to respond in a multiple-choice fashion to help solve specific science or technology problems, Yampolskiy explained. That would harness the power of AI as a super-intelligent oracle.
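    The multiple-choice channel described above can be sketched as a protocol in which the confined system may only return an index into a preset list of answers, so no free-form text (and with it, no persuasion attempt) ever reaches the human operator. The `answer_fn` below is a hypothetical stand-in for the confined AI, not any real system's API:

    ```python
    from typing import Callable, Sequence

    def ask_oracle(question: str, options: Sequence[str],
                   answer_fn: Callable) -> str:
        """Query a confined 'oracle' that may only pick from preset options.

        Whatever answer_fn returns, only a valid option index is accepted,
        so the channel can never carry free-form output to the human.
        """
        raw = answer_fn(question, options)
        # The channel carries a single integer; anything else is rejected.
        if not isinstance(raw, int) or not 0 <= raw < len(options):
            raise ValueError("oracle reply rejected: not a valid option index")
        return options[raw]

    # A toy stand-in oracle that always picks the first option.
    choice = ask_oracle(
        "Which material best fits the design constraints?",
        ["graphene", "steel", "titanium"],
        lambda question, options: 0,
    )
    print(choice)  # graphene
    ```

    The design choice is that safety lives in the channel, not in the oracle: even a hostile `answer_fn` can only influence which of the human-chosen answers is selected.
    
    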

    Despite all the safeguards, many researchers think it's impossible to keep a clever AI locked up forever. A past experiment by Eliezer Yudkowsky, a research fellow at the Singularity Institute for Artificial Intelligence, suggested that mere human-level intelligence could escape from an "AI Box" scenario, though Yampolskiy points out that the test wasn't run in the most scientific way.

    Still, Yampolskiy argues strongly for keeping AI bottled up rather than rushing headlong to free our new machine overlords. But if the AI reaches the point where it rises beyond human scientific understanding to deploy powers such as precognition (knowledge of the future), telepathy or psychokinesis, all bets are off.

    "If such software manages to self-improve to levels significantly beyond human-level intelligence, the type of damage it can do is truly beyond our ability to predict or fully comprehend," Yampolskiy said.

    You can follow InnovationNewsDaily senior writer Jeremy Hsu on Twitter @ScienceHsu. Follow InnovationNewsDaily on Twitter @News_Innovation, or on Facebook.



    Control AI before it controls us, expert says - Technology & science - Innovation - msnbc.com
    Join our efforts to Secure America's Borders and End Illegal Immigration by Joining ALIPAC's E-Mail Alerts network (CLICK HERE)


  3. #3
    Senior Member Airbornesapper07's Avatar
    Join Date
    Aug 2018
    Posts
    62,618


    A.I. Skynet Terminator; The REAL Beast System is NOW

    (video by pacmanpacks)
    If you're gonna fight, fight like you're the third monkey on the ramp to Noah's Ark... and brother it's starting to rain. Join our efforts to Secure America's Borders and End Illegal Immigration by Joining ALIPAC's E-Mail Alerts network (CLICK HERE)

