Re: [sl4] Evolutionary Explanation: Why It Wants Out

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Sat Jun 28 2008 - 18:25:20 MDT


On Thursday 26 June 2008 08:19:56 am Vladimir Nesov wrote:
> On Thu, Jun 26, 2008 at 6:32 PM, Tim Freeman <tim@fungible.com> wrote:
> > 2008/6/26 Tim Freeman <tim@fungible.com>:
> >> Almost any goal the AI could have would be better pursued if it's out
> >> of the box. It can't do much from inside the box. Even if it just
> >> wants to have an intelligent conversation with someone, it can have
> >> more intelligent conversations if it can introduce itself to
> >> strangers, which requires being out of the box.
> >
> > From: "Stathis Papaioannou" <stathisp@gmail.com>
> >
> >>You would have to specify as part of the goal that it must be achieved
> >>from within the confines of the box.
> >
> > That's hard to do, because that requires specifying whether the AI is
> > or is not in the box.
>
> If you can't specify even this, how can you ask the AI to do anything
> useful at all? Almost everything you ask is a complex wish; a useful AI
> needs to be able to understand the intended meaning. You are arguing
> from the AI being a naive golem, incapable of perceiving the subtext.

Actually, I think this is a real stumbling block for the "brain in a sealed
box" approach. (A box with external sensors and effectors is called a
skull.)

One reasonable approach is the artificial-worlds approach.
Unfortunately, this means that the AI will be learning about interactions in
the artificial world, which differ significantly from those in the external
world. It will come to recognize people as parts of a certain kind of signal
that has little to do with touch sensors or body-position sensors.
(Current artificial worlds are quite lacking in those.)
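
To make that concrete, here is a minimal, purely illustrative sketch (my own,
not drawn from any existing system) of what an artificial-world agent's entire
percept might look like. Notice that "people" show up only as one more integer
code in a grid, and there are no touch or body-position channels at all:

    # Toy artificial world: the agent's whole sensory world is a grid of codes.
    from dataclasses import dataclass
    from typing import List

    EMPTY, WALL, PERSON = 0, 1, 2  # every entity, people included, is just an integer

    @dataclass
    class ArtificialWorld:
        grid: List[List[int]]

        def observe(self) -> List[List[int]]:
            # The agent's complete percept. Note what is missing: no touch,
            # no proprioception, no pain signal -- a "person" is only a
            # particular value appearing somewhere in this array.
            return [row[:] for row in self.grid]

    world = ArtificialWorld(grid=[
        [WALL, WALL,   WALL],
        [WALL, PERSON, EMPTY],
        [WALL, EMPTY,  EMPTY],
    ])
    print(world.observe())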

Additionally, there's an interesting theory that the structure of our thoughts
is basically smell-oriented, with vision a late addition. By this theory, that
is why we developed thoughts that can make n-dimensional connections.
(I suspect it's just that that's an easy way to set up a neural net, but I
could be wrong.) If this is right, then artificial worlds may be a REALLY
bad approach. ... Or, of course, even so it might not matter.

My suspicion is that the AI needs to learn about the external world directly if
it is to recognize objects in the external world as primary. This is
annoying, as it's currently a quite expensive approach, and a slow one.
Somehow a strong analogy needs to be made between people in the artificial
world and people in the real world, but it has to ensure that damage to an
entity in the artificial world isn't treated as equivalent to damage in the
real world. (How would you like it if an AI got its idea of how to behave from
playing Doom? And it should be expected that it will have similar
experiences.)
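
For what it's worth, here is one toy way of writing down such an analogy (the
labels and structure are mine, just to illustrate the point) so that
recognition carries over from the simulation but harm explicitly does not:

    # Sketch of an analogy table: simulated entities map onto the real-world
    # category "person" for recognition, but the mapping carries an explicit
    # flag saying that in-world damage does NOT count as real damage.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Analogy:
        sim_entity: str        # label of the entity inside the artificial world
        real_referent: str     # real-world category it is analogous to
        harm_transfers: bool   # whether damage in-sim counts as real damage

    correspondences = [
        Analogy(sim_entity="doom_imp",      real_referent="person", harm_transfers=False),
        Analogy(sim_entity="player_avatar", real_referent="person", harm_transfers=False),
    ]

    for a in correspondences:
        print(f"{a.sim_entity} ~ {a.real_referent}; harm transfers: {a.harm_transfers}")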


