Re: Eliezer: unconvinced by your objection to safe boxing of "Minerva AI"

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Mar 07 2005 - 23:55:29 MST


Peter de Blanc wrote:
> Hi,
>
> you seem to be saying that your AI has this concept of a mind, which it
> assumes must be a Bayesian rationalist, and so when it encounters a human
> being, it will not have anticipated a mind which is irrational. What seems
> more likely to me is that a mind-in-general would view a mind-in-general as
> just another type of system which can be manipulated into a desired state.

*nod*

> You also asked for examples of what an AI could infer about humans from
> looking at its source code. Here are some, off the top of my head:
>
> - Look at the last modified dates. Humans take a long time to write code.
> - Human-written code contains many lengthy, ignorable comments; humans do
> not think in code.
> - Humans use long variable names; they need to be constantly reminded of
> what each variable is for.
> - Human code is highly modular, to the detriment of performance. By this and
> the above, humans have a small short-term memory.

Depending on whether the code is compiled or interpreted, the first
three items may not be available.

But the last item will be available, and it and other structural cues
provide sufficient information (given sufficient computing power) to
deduce that humans are fallible, quite possibly even that humans
evolved by natural selection.
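
For concreteness, here is a minimal sketch (my own illustration, with
arbitrary names; not anything from Peter's mail) of the kind of surface
statistics one could compute over an archived Python source tree:
comment density, identifier length, file count, and the span of
last-modified dates. None of it requires understanding what the code
does; the cues are all in the packaging.

    # A rough sketch of the surface statistics discussed above: comment
    # density, identifier length, file count, and the span of
    # last-modified dates across a Python source tree.  Illustrative only.
    import keyword
    import os
    import sys
    import tokenize

    def survey(root):
        """Tally crude structural cues from every .py file under root."""
        comment_chars = code_chars = 0
        name_lengths = []
        file_count = 0
        oldest = newest = None

        for dirpath, _, filenames in os.walk(root):
            for filename in filenames:
                if not filename.endswith(".py"):
                    continue
                path = os.path.join(dirpath, filename)
                file_count += 1

                # Cue 1: last-modified dates hint at how long the
                # authors took to write the code.
                mtime = os.path.getmtime(path)
                oldest = mtime if oldest is None else min(oldest, mtime)
                newest = mtime if newest is None else max(newest, mtime)

                with open(path, "rb") as handle:
                    try:
                        for tok in tokenize.tokenize(handle.readline):
                            if tok.type == tokenize.COMMENT:
                                # Cue 2: comments are reminders, not code.
                                comment_chars += len(tok.string)
                            elif tok.type == tokenize.NAME:
                                code_chars += len(tok.string)
                                # Cue 3: long identifiers (keywords excluded).
                                if not keyword.iskeyword(tok.string):
                                    name_lengths.append(len(tok.string))
                            elif tok.type not in (tokenize.NL,
                                                  tokenize.NEWLINE,
                                                  tokenize.INDENT,
                                                  tokenize.DEDENT,
                                                  tokenize.ENCODING):
                                code_chars += len(tok.string)
                    except (tokenize.TokenError, SyntaxError):
                        pass  # skip files that do not tokenize cleanly

        # Cue 4: file count is a crude proxy for modularity.
        print("files examined:", file_count)
        if code_chars:
            print("comment/code character ratio: %.2f"
                  % (comment_chars / float(code_chars)))
        if name_lengths:
            print("mean identifier length: %.1f"
                  % (sum(name_lengths) / float(len(name_lengths))))
        if oldest is not None and newest is not None:
            print("span of last-modified dates: %.1f days"
                  % ((newest - oldest) / 86400.0))

    if __name__ == "__main__":
        survey(sys.argv[1] if len(sys.argv) > 1 else ".")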

We know a tremendous amount about natural selection on the basis of
(a) looking at its handiwork and (b) thinking through the math of
evolutionary biology that systematizes the evidence, once the evidence
supplies the initial nudge.

One can equally well imagine a Minerva superintelligence (that is, a
fully functional transhuman AGI created as pure code without any
real-time interaction between the programmer and running AGI code) that
studies its own archived original source code. This is a smaller corpus
than terrestrial biology, but studiable in far greater detail, and much
more tractable to its intelligence than DNA is to our own. I would not
be surprised to see the Minerva SI devise a theory of human
intelligence that was correct on all major points and sufficient for
manipulation.

The point is probably irrelevant. A Minerva FAI seems to me to be three
sigma out beyond humanly impossible. I can just barely conceive of a
Minerva AGI, but I would still call it humanly impossible.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

