RE: Pain vs. negative feedback (was: Evolving minds)

From: Ben Goertzel (ben@intelligenesis.net)
Date: Sat Nov 18 2000 - 21:25:07 MST


> Ben Goertzel wrote:
> >
> > In Feb. we'll start a new phase, when we'll make operational
> > the "psyche" component of the system (goals, feelings,
> > motivations) ... we then will be quite precisely dealing with
> > issues of friendliness and unfriendliness. Questions like:
> > What attitude does the system have when we insert new knowledge
> > into its mind, which causes it annoyance and pain because it
> > forces it to revise its hard-won beliefs....
>
> I really think you're making unnecessary problems for yourselves! The
> human brain uses instincts because it was built that way. Why use
> instincts when you can use declarative, rational, context-sensitive
> thoughts to accomplish the same function with more finesse? Because
> thoughts are slower? True.

Because higher-order inference (even of the probabilistic variety) requires a
lot of data to draw plausibly accurate judgments. In the absence of quality
data, less explicitly inferential, more intuitive (and neural-nettish) methods
work better.

As it happens, the achievement of important goals for an organism tends to
involve a lot of inference in domains where data is scanty... Thus intuition
becomes important...
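To make that concrete, here is a toy calculation (standard Beta-Binomial
stuff, nothing Webmind-specific) of how wide the honest conclusions of
explicit probabilistic inference remain until a lot of data has accumulated:

    # Toy sketch, not Webmind code: how uncertain an explicitly inferred
    # estimate of a simple success rate stays while data is scanty.
    from scipy.stats import beta

    def credible_interval(successes, trials, prior_a=1.0, prior_b=1.0):
        """95% credible interval for a rate, under a flat Beta(1,1) prior."""
        a = prior_a + successes
        b = prior_b + (trials - successes)
        return beta.ppf(0.025, a, b), beta.ppf(0.975, a, b)

    print(credible_interval(2, 3))      # roughly (0.19, 0.93): almost useless
    print(credible_interval(200, 300))  # roughly (0.61, 0.72): worth acting on

With three observations the rigorous answer is "somewhere between 0.2 and
0.9", which is exactly the regime where a cruder intuitive shortcut earns its
keep.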

And "Goals" are necessary to avoid a mind being a completely disorganized
chaotic mess.
The goal of survival is a particularly handy one, as is the goal of
procreation of one's genetic
material. AI's that have this goal will tend to proliferate, for obvious
reasons.

A "feeling" is really just an internal sensor, specialized for some purpose.

In humans, due to the weirdness of evolution, some of our feelings have become
detached from their original purposes. This won't be so much of a problem for
self-modifying AIs...

For instance, the feeling of lust when you see an attractive member of the
opposite sex is a useful one. (The human race is better off because we have
it; otherwise, why would we bother reproducing ourselves?) But the need for
variety in sexual partners, even, say, once one has had a vasectomy and cannot
produce more children, serves no purpose and in many cases can fuck up one's
life. (This isn't an example from my own personal life, by the way; it's
something I observed in a friend and found particularly ironic ;)

For another example, aggressive and angry feelings are less useful now than
they were in preindustrial societies...

In a Webmind context, we give our system an instinctively bad feeling when it
runs out of memory, or answers queries too slowly, or fails to gain any new
information for a while. Are you suggesting that the system should learn, by
abstract reason, that running out of memory is bad?

This kind of thing can't be effectively learned over an individual lifespan,
because once the system has run out of memory, it can no longer reason!! The
data for reasoning is REALLY not present in this case. Thus, it can only be
learned on an evolutionary time-scale. Evolution results in a system being
born with an instinctive aversion to running out of memory. Instead of
requiring this to evolve, we've built it in as an instinctive feeling. (This
is just the simplest example.)
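If it helps to see that spelled out, here is a purely illustrative sketch
(hypothetical names, not actual Webmind internals) of what "a feeling as a
built-in internal sensor" amounts to:

    # Illustrative only -- hypothetical names, not Webmind internals.
    # A "feeling" here is just a hard-wired mapping from an internal metric
    # (memory usage) to a negative-affect signal, available before and
    # independently of any explicit reasoning about memory.
    def memory_discomfort(used_fraction, threshold=0.85):
        """Return a 'bad feeling' in [0, 1] that grows as free memory shrinks.

        Below the threshold the system feels nothing; above it, discomfort
        ramps linearly toward 1.0 at total exhaustion. The point is that the
        mapping is built in, not learned by running out of memory and then
        (impossibly) reasoning about the experience.
        """
        if used_fraction <= threshold:
            return 0.0
        return (used_fraction - threshold) / (1.0 - threshold)

    # A goal/attention process can treat the signal like any other percept,
    # e.g. shedding low-importance material when discomfort spikes.
    if memory_discomfort(used_fraction=0.93) > 0.5:
        print("instinct: free some memory before doing anything clever")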

> Still, why use blind instincts when you can
> use context-sensitive instincts? Why should the system experience pain
> when propagating updates to old knowledge, any more than it experiences
> pain on updating its visual field? Why is that kind of pain necessary?
> How does it make Webmind more intelligent?

Because throwing out a lot of your knowledge can make you idiotic for a while,
which, in an evolutionary setting, is very bad for your survival value.

Actually, we want the system to generally be skeptical about other agents
wanting to feed it new brain matter! But we want it to trust us when we feed
it new brain matter...

> > How does the system
> > feel about us changing the way it evaluates its own health... or
> > the degree to which it "feels it" when humans are unhappy with it...
>
> It looks to me like it would take an extremely sophisticated design for
> Webmind to feel anything at all. I mean, you and I might not like it if
> someone started tweaking our own feedback systems to increase the amount
> of pain - because we map ourselves onto our future selves and sympathize
> with our future selves. That is not a trivial ability.

It's not ~trivial~, but it's well within the capability of WM's inference and
prediction components.

> Webmind would need to realize that it had more pain than it would have had
> otherwise, trace back the causality for that to the action of the human,
> categorize the presence of "more pain than in a subjunctive alternate
> reality" as "undesirable" (regardless of the purpose that pain is supposed
> to accomplish), and combine the fact of "human responsibility" with the
> "undesirable outcome" to resent the humans.

This kind of reasoning is really not very hard for a probabilistic inference
engine like the one implicit in WM's inheritance links, given adequate
experiential data. I mean, I could write out the exact inference steps
involved in the train of thought you describe, in terms of WM nodes and links
(but I won't...). We have not achieved this kind of application of WM
inference yet, but this is where we expect to be in a year's time or less,
using the tools we have now...
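Just to gesture at the shape of it, here is a toy chain with invented numbers
-- emphatically not WM's node-and-link representation:

    # Toy illustration with made-up probabilities -- not Webmind's formalism.
    # The point is only that the chain of steps described above is routine
    # probabilistic bookkeeping once each step has a strength attached.
    steps = [
        ("current pain exceeds the predicted baseline",              0.95),
        ("the excess traces back to the human's tweak",              0.80),
        ("'more pain than the alternative' counts as undesirable",   0.90),
        ("undesirable outcome + human responsibility => resentment", 0.70),
    ]

    confidence = 1.0
    for description, strength in steps:
        confidence *= strength  # crude chaining; WM's actual rules are fancier
        print(f"{description}  (running confidence ~{confidence:.2f})")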

> I don't think Webmind should use an anthropomorphic pain architecture,

Well, it doesn't really, but it's more so than you seem to find intuitively
optimal.

I feel you haven't come to grips with the limitations of rationality, due to
not yet having experimented with large-scale probabilistic reasoning systems
and their strong need to be guided by simpler intuitive/associative (even
instinctive) methods...

> Webmind 3.0, or whenever Webmind
> starts getting into sophisticated self-imagery, can analyze its own mind,
> trace back undesirable behaviors to their causal origin, and perform
> design adjustments that would have prevented that undesirable outcome and
> as many related undesirable outcomes as possible.

WM 1.0 will have self-imagery ... the system already does, actually.

The design-adjustment part will only come with 2.0, however.

> In this latter case, Webmind should have no objection to your tweaking
> with the "negative feedback" (not "pain") mechanisms, if by doing so, you
> increase the probability of desirable outcomes in the future.

Sure.... The instinct part, though, is the "a priori" probability that a tweak
to its mechanisms is good (to use a Bayesian term that isn't entirely apt).

Try as you might, you can't make probability theory entirely objective. In
practice it always rests on assumptions -- a prior probability distribution,
an assumption of independence between various factors, etc. These assumptions
are, well, instinct... feeling...

This is not an abstract philosophical point; it's a philosophical ~twist~ on
well-known maths that you just can't get around...
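A quick numerical illustration of the prior-dependence point (standard Bayes,
nothing Webmind-specific):

    # With scanty data, the verdict on "tweaks to my mechanisms turn out well"
    # is driven almost entirely by the assumed prior, not by the evidence.
    def posterior_mean(successes, trials, prior_a, prior_b):
        """Posterior mean of a rate under a Beta(prior_a, prior_b) prior."""
        return (prior_a + successes) / (prior_a + prior_b + trials)

    observed = (2, 3)  # two good outcomes in three observed tweaks

    print(posterior_mean(*observed, prior_a=1, prior_b=1))  # ~0.60, open-minded prior
    print(posterior_mean(*observed, prior_a=1, prior_b=9))  # ~0.23, tweak-wary prior

Same data, different verdicts; the difference comes entirely from the prior --
the "instinct" that had to be chosen before any evidence arrived.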

> This is why I'm so heavy on the necessity of pain being a design subgoal
> of Friendliness, rather than Friendliness being a way of achieving
> pleasure or avoiding pain. By adopting that design stance, you are making
> huge problems for yourselves which are entirely unnecessary.

In my view, friendliness and happiness are both subcomponents of each other.
That's how it works in Webmind, anyway. And in me: the happier I am, the
friendlier I am; and the friendlier I am, the happier I am.

Each one is a subgoal of the other ;>

> > Because we're so close to this phase (just a couple more months
> > of testing & debugging simpler components), this conversation
> > is particularly interesting to me
>
> I'm very much interested as well, especially insofar as the choices you
> make now may constrain the options you have available later.

Probably not, really. Not for a while...

> > There certainly is something to be known in advance... but the percentage
> > of relevant knowledge that can be known in advance is NOT one of the things
> > that can be known in advance ;>
>
> Ah, yes, but once you know something in advance, you can take a pretty
> good guess as to whether that particular thing is something you need to
> know in advance.
>
> Obviously, one of the fundamental goals in Friendly AI should be to use
> methods that minimize the number of things you need to know in advance.
> It also follows that those methods are one of the things you most need to
> know in advance. (See? Now we know that in advance!)
>
> Try saying all that with a straight face... but it's all true.

Sure... this is basic to any kind of holistic, realistic AI design, not just
friendly AI...

ben


