RE: friendly ai

From: Ben Goertzel (ben@webmind.com)
Date: Sun Jan 28 2001 - 15:24:37 MST


You're positing a silly "straw man AI" that has learning as its only goal, and hence learns "at random" -- but that's not something I or anyone else would build.

I feel this discussion has arrived at a fairly subtle mathematical/cognitive-science issue that is not going to be resolved through verbal e-mail discussions... The optimal weighting of different goals in an AI system is not a trivial matter by any means.
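
Just to pin down what I mean by "weighting of goals", here's a toy sketch in Python -- purely illustrative, not our actual design; the class, the numbers, and the goal names are invented for this example. The only point is that a subgoal's effective weight derives from the supergoals it serves, so where you place "learning" in the hierarchy changes how strongly it gets pursued.

# Hypothetical sketch of goal weighting -- illustrative only, not an actual
# Webmind design. Each subgoal inherits weight from the supergoals it serves,
# scaled by how much it is expected to contribute to them.

class Goal:
    def __init__(self, name, base_weight=0.0):
        self.name = name
        self.base_weight = base_weight   # nonzero only for supergoals
        self.links = []                  # (supergoal, expected_contribution) pairs

    def serves(self, supergoal, contribution):
        """Declare that this goal is a subgoal of `supergoal`."""
        self.links.append((supergoal, contribution))

    def weight(self):
        """Effective weight: own base weight plus weight inherited from supergoals."""
        return self.base_weight + sum(
            parent.weight() * contribution for parent, contribution in self.links
        )

# Two different placements of "learning" in the hierarchy:
friendliness = Goal("friendliness", base_weight=1.0)
learning_as_subgoal = Goal("learning")
learning_as_subgoal.serves(friendliness, contribution=0.8)

learning_as_supergoal = Goal("learning", base_weight=1.0)

print(learning_as_subgoal.weight())    # 0.8 -- pursued insofar as it serves Friendliness
print(learning_as_supergoal.weight())  # 1.0 -- pursued for its own sake

How one should actually set those contributions and base weights -- and whether any single fixed hierarchy is optimal at all -- is exactly the nontrivial part.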

If you have some kind of simple proof that keeping Friendliness as the ultimate supergoal is always the most efficient thing, I'd like to see it presented systematically -- and hopefully you'll attempt this in your write-up.

We seem to have different intuitions about what's going to be more or less effective in an AI goal system, and we can present those intuitions to each other till we're blue in the face without changing one another's minds...

If your write-up doesn't bridge the gap, perhaps an intensive F2F discussion will result in progress here, or else some careful, systematic, and hopefully mathematical analysis will.

I'll try to find the time to write up my point of view systematically, but I don't expect to get to it till Friday at the earliest -- it's looking like a busy week.

ben

> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf
> Of Eliezer S. Yudkowsky
> Sent: Sunday, January 28, 2001 4:16 PM
> To: sl4@sysopmind.com
> Subject: Re: friendly ai
>
>
> > > If the system isn't smart enough to see the massive importance of
> > > learning, use a programmer intervention to add the fact to the system
> > > that "Ben Goertzel says learning is massively important". If the
> > > system assumes that "Ben Goertzel says X" translates to "as a default,
> > > X has a high probability of being true", and a prehuman AI should make
> > > this assumption (probably due to another programmer intervention),
> > > then this should raise the weight of the learning subgoal.
> >
> > Yeah, this is basically what we've done by explicitly making learning
> > a system goal
>
> Then I predict your system will be a far less effective learner than a
> Friendly AI.
>
> Consider three young AIs, still in the laboratory. One has learning as a
> supergoal. One has learning as a subgoal of survival. One has learning
> as a subgoal of Friendliness.
>
> The first AI just learns things at random, depending on how you defined
> the internal programmatic predicate that determines whether a cognitive
> event is an instance of "learning".
>
> The second AI will try to learn things that ve predicts will be useful in
> survival... in other words, ve will try to learn things that are useful
> for modeling, predicting, and above all manipulating reality.
>
> The third AI will also try to learn things that are useful for modeling,
> predicting, and manipulating reality.
>
> But, unless the young survivalist AI encounters complex laboratory
> problems that threaten to wipe vis disk drive as often as the Friendly AI
> encounters complex laboratory humans, the Friendly AI is going to suck
> *much more* complexity and feedback out of the situation.
>
> That supergoal context isn't just there for looks, you know.
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence


