RE: AI in <what?>

From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 19 2002 - 07:41:30 MDT


> As I see it, there are four reasons an AI needs an environment:
>
> 1. For training the AImind to accept input.
> 2. For allowing the AImind to develop mental skills in an interactive
> setting (action/response kind of stuff).
> 3. Possibly for keeping the AImind close to us, in its mental landscape.
> While it may be possible to make a mind with an entirely disembodied
> intelligence, just I/O ports and internet access, such a mind may have
> problems relating to us, since so many of our language-objects are
> physically oriented.
> 4. To allow the AImind to be more effective when it begins acting in the
> real world. If it has to extrapolate 'everything' it'll take longer and
> be more error-prone.

You have left out the potential necessity of socialization for the
development of the self.

I guess it is somewhat implicit in points 2 and 3, but I mean something a
little stronger.

I mean that interacting with *other minds* is a key part of the process of
learning how to deal with *one's own mind*.

Socialization may not be the *only* path to self-understanding, but it is
*one* path, as shown by human developmental psychology; and I have an
intuitive feeling (perhaps overly anthropomorphic, hard to tell) that it is
a VERY GOOD path in an objective sense, not just for human beings.

> My basic conclusion is that the optimal tradeoff seems to be in a
> concrete instantiation of the AI in a virtual or sandboxed environment
> slightly lower in detail than our own. It seems to offer the best of all
> the options, while raising the complexity to a reasonable (if still
> ridiculous) level. I would like an AI that has a concrete concept of
> itself in space, and learns in an environment similar to my own. It seems
> that such an AI would be the most useful, relatable, and intelligent,
> given other tradeoffs.

My own intuition is that

1) Of course, a great diversity of powerful sense-inputs and actuators is a
*good thing*

2) Unlike Eliezer, I think that interacting with humans and software agents
on the Net [considered broadly, including financial datafeeds, biodatabases,
weather satellite data, etc., not just Web pages] will probably provide an
adequate environment for AGI, though it certainly won't lead to a human-like
mind

3) I think that in the early stages of an AGI project, it is best NOT to
focus on the building of elaborate perception and action systems. (And yes,
Novamente is *still* early-stage, because we don't have our mind-engine
fully implemented yet, not by a long shot. The Webmind AI Engine was almost
out of the early stage of implementation & software testing and into the
mid-stage of basic testing and teaching, but I think it would not have
passed through the mid-stage due to various implementation and design
issues.) There are tremendous resources devoted to perception and action
already in the academic and business worlds: robotics, computer vision,
etc. One thing the robotics and computer vision (etc.) algorithms out there
now LACK is serious feedback from adaptive cognition. I think it makes
sense to get cognition "basically working" in a very simple
perception/action environment, and then, as one enters the mid-stage of
seriously teaching one's AGI, THEN one works on more serious perception &
action modules, aimed at giving one's AGI a richer and more humanlike
subjective environment. Of course, it is also possible that by this stage
one has a deeper perspective on AGI, which tells one that so much
perception/action work is not so necessary ;->

Partly, one's view on this issue depends on how humanlike one wants one's
AGI to be. I am not aiming at a humanlike AGI, just a very smart one,
because I think that the latter is an easier problem. Compared to more
closely brain-inspired approaches like DGI and A2I2, my approach has less
data to draw on for motivation (since the human brain is only a loose
inspiration rather than a close guide), but far fewer problems to solve in
terms of efficient harmonization with current hardware platforms (though
these problems are *still* very severe even for Novamente, and we've put a
lot of work into them).

-- Ben G


