Re: friendly ai

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jan 28 2001 - 14:15:45 MST


> > If the system isn't smart enough to see the massive importance of
> > learning, use a programmer intervention to add the fact to the system that
> > "Ben Goertzel says learning is massively important". If the system
> > assumes that "Ben Goertzel says X" translates to "as a default, X has a
> > high probability of being true", and a prehuman AI should make this
> > assumption (probably due to another programmer intervention), then this
> > should raise the weight of the learning subgoal.
>
> Yeah, this is basically what we've done by explicitly making learning
> a system goal
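
(For concreteness, the intervention described above amounts to something
like the following toy sketch. All names, numbers, and data structures are
invented for illustration; this is not meant as either system's actual code.)

  # Toy sketch: a programmer asserts "Ben Goertzel says X"; a default
  # trust rule turns that into a probability for X; the probability of
  # "learning is massively important" then sets the learning subgoal's
  # weight. All values are made up.

  DEFAULT_TRUST = {"Ben Goertzel": 0.9}   # assumed trust in the speaker

  class PrehumanAI:
      def __init__(self):
          self.beliefs = {}        # proposition -> estimated probability
          self.goal_weights = {}   # goal name -> weight

      def programmer_assertion(self, speaker, proposition):
          # "Speaker says X" -> as a default, X is probably true,
          # to the degree the speaker is trusted.
          prior = self.beliefs.get(proposition, 0.5)
          self.beliefs[proposition] = max(prior, DEFAULT_TRUST.get(speaker, 0.5))

      def reweight_goals(self):
          # If "learning is massively important" is probably true,
          # raise the weight of the learning subgoal accordingly.
          p = self.beliefs.get("learning is massively important", 0.5)
          self.goal_weights["learning"] = p

  ai = PrehumanAI()
  ai.programmer_assertion("Ben Goertzel", "learning is massively important")
  ai.reweight_goals()
  print(ai.goal_weights["learning"])   # 0.9 under the assumed trust level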

Then I predict your system will be a far less effective learner than a
Friendly AI.

Consider three young AIs, still in the laboratory. One has learning as a
supergoal. One has learning as a subgoal of survival. One has learning
as a subgoal of Friendliness.

The first AI just learns things at random, depending on how you defined
the internal programmatic predicate that determines whether a cognitive
event is an instance of "learning".

The second AI will try to learn things that ve predicts will be useful in
survival... in other words, ve will try to learn things that are useful
for modeling, predicting, and above all manipulating reality.

The third AI will also try to learn things that are useful for modeling,
predicting, and manipulating reality.

But, unless the young survivalist AI encounters complex laboratory
problems that threaten to wipe vis disk drive as often as the Friendly AI
encounters complex laboratory humans, the Friendly AI is going to suck
*much more* complexity and feedback out of the situation.

That supergoal context isn't just there for looks, you know.
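
(To make that concrete, here is a toy sketch of how the supergoal context
changes which lessons look worth learning. The candidate lessons, feature
scores, and scoring rules are all invented for illustration; nothing here
is meant as either system's design.)

  # Toy sketch: the same "learn things" drive selects different lessons
  # depending on which supergoal the learning serves. All names and
  # numbers are made up.

  candidate_lessons = {
      "a pattern in random noise":      {"models_reality": 0.1, "about_humans": 0.0},
      "how the lab power supply fails": {"models_reality": 0.8, "about_humans": 0.1},
      "how the programmers reason":     {"models_reality": 0.7, "about_humans": 0.9},
  }

  def value_to_supergoal(features, supergoal):
      if supergoal == "learning":
          # Learning as a supergoal: anything matching the "learning"
          # predicate counts equally, so selection is effectively arbitrary.
          return 1.0
      if supergoal == "survival":
          # Learning as a subgoal of survival: value tracks usefulness for
          # modeling, predicting, and manipulating reality.
          return features["models_reality"]
      if supergoal == "friendliness":
          # Learning as a subgoal of Friendliness: the same usefulness, plus
          # the feedback from dealing with complex laboratory humans.
          return features["models_reality"] + features["about_humans"]

  for supergoal in ("learning", "survival", "friendliness"):
      best = max(candidate_lessons,
                 key=lambda lesson: value_to_supergoal(candidate_lessons[lesson], supergoal))
      print(supergoal, "->", best)

  # learning     -> a pattern in random noise     (arbitrary: first match)
  # survival     -> how the lab power supply fails
  # friendliness -> how the programmers reason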

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


