Re: Fighting UFAI

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Wed Jul 13 2005 - 19:00:16 MDT


"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
justin corwin wrote:
> For those of you who are still shaking your heads at the impossibility
> of defending against a transhuman intelligence, let me point out some
> scale. If you imagine that an ascendant AI might take 2 hours from
> first getting out to transcension, that's more than enough time for a
> forewarned military from one of the superpowers to physically destroy
> a signifance portion of internet infrastructure(mines, perhaps), and
> EMP the whole world into the 17th century(ICBMs set for high altitute
> airburst would take less than 45 minutes from anywhere in the
> world(plus America, at least, has EMP weapons), the amount of shielded
> computing centers is miniscule).
Even if you nuke the entire world back to the seventeenth century and the UFAI
survives on one Pentium II running on a diesel generator, you're still screwed.
It just waits and plans, and by the time civilization gets started again, it's
running everything behind the scenes - waiting for the exact first instant that
the infrastructure is in place to do protein folding again. Assuming there
isn't some faster way. Can you hurt the UFAI more than you hurt humanity? Can
you annihilate it, ever, if you give it even sixteen seconds running free on
the Internet in which to plan its perpetuation? Maybe if you destroyed the
entire planet you could get the UFAI too.

One phone call to a vulnerable human mind (for we are not secure
architectures) and the nukes could be directed at you, not at the UFAI.
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

  At a certain maturity level of our robotics industry, a UFAI should be able to confidently contemplate wiping out humans with available stocks of chemical and bio-weapons, if it can gain access to key infrastructure computer systems. At the same time, once we have FAI or MM, we should be able to be rid of the UFAI threat once and for all, even if there are pieces of UFAI-infected circuit boards lying all over the place from a previous cataclysm; the survivors would presumably have been aware of the dangers of playing with computer artifacts.
  Robotics is rapidly evolving. But RFID tags and wireless fiber-optic cameras on all robots and mobile mechanical arms in the future, along with monitoring of all bio-labs and many fabrication plants, should allow us to notice anomalies. Regarding executing mysterious bio-experiments or microscopy constructs: don't do it. Have the meme spread. The same communications channels available to a UFAI to corrupt individuals are available to us to warn people. The key is to have a single individual warn the appropriate military authorities before a single UFAI disciple is found. There are many redundant and cheap AI alarms which could allow a contacted individual (many decoys can be trained too) to counter a UFAI's monopoly on conventional communications and warn others. In the next decade or two, many of our important military and financial infrastructures will utilize quantum encryption techniques, which are unbreakable without the hacker being observed. A UFAI will attempt to find a weak or very evil individual. But it has to find this person before we become aware of its existence. So if we warn everyone who uses communications technology not to do any freelance labwork, we might catch the UFAI with simple surveillance technologies (external bugs in Johnny Psycho's apartment) or decoys. If we had any warning at all, we could sacrifice civilization for survival.




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT