Re: Happy Box

From: Samantha Atkins (sjatkins@gmail.com)
Date: Sat May 03 2008 - 22:55:14 MDT


On May 3, 2008, at 10:33 AM, Mikko Rauhala wrote:

> Sat, 2008-05-03 at 09:16 -0700, Samantha Atkins wrote:
>> I disagree. No supergoal being better / more intelligent than
>> another implies that the entities involved effectively have no
>> nature, that they
>
> I fail to see how this is implied. On the contrary, as intelligence
> is not a factor in primary goals, the primary goals of an entity
> pretty much define its nature (or if you like, the other way around,
> depending on your preferred semantics for "nature").

So does this entity + supergoal exist in a vacuum, or does the quality
of its supergoal affect its survivability and viability relative to
other beings it may have to compete and cooperate with? Given that
its goal structure directly impinges on the quality of its life
(speaking very loosely on purpose), it should occur to it that perhaps
a slight relaxation or reworking of its goal structure might produce
more and better results while remaining more or less in keeping with
its supergoal. In particular, I would expect a system with the
supergoal of maximizing human potential and well-being to have
considerable leeway in forming notions of what that might entail and
how best to achieve it. It would have to continuously monitor actual
results and consider alternative formulations to have a chance of
succeeding.

>
>
>> somehow exist in a vacuum such that whether their goals make
>> conditions they live in better or worse for themselves and those
>> around them is utterly and completely irrelevant. This is simply
>> and literally without foundation.
>
> If "this" refers to the claim that my words somehow imply the above,
> indeed it is true. Otherwise this is more straws for the strawman.
>

I don't think you can so easily dismiss the objection.

>> Much of our discussion presumes that a super-intelligence will not
>> do meta-abstraction on its own goals. This is very convenient, and
>> perhaps necessary for convincing ourselves we can somehow cause such
>> a being to be "Friendly". But it does not match any intelligent
>> systems existing to date and is not a foolproof assumption.
>
> As for systems existing to date, it's trivially explainable why we
> have to do that nasty stuff: our goal structure, such as it is, is a
> self-contradictory mess of overlapping and inconsistent values.
> However, for at least some of us, consistency and its friends are
> included in that mess (quite possibly as instrumental values rather
> than intrinsic ones). This enables, nay, even requires us to study,
> abstract and rank some of our goals in order to make some sense out
> of the spaghetti we've been dealt. (The apparent instrumentality
> stems largely from the fact that if we did not try to clean things
> up a bit, we'd be no good at achieving any of our goals, what with
> working internally against ourselves the whole time - as, indeed, we
> often do regardless.)

That is one trivial explanation. The jury is still out on whether any
intelligent, viable entity would require some amount of flexibility to
avoid stasis.

>
>
> On the other hand, a rational intelligent system with a clean and
> consistent goal architecture does not have to do things like this,
> and indeed will not, for rationality in pursuing goals _by
> definition_ rules out changing those goals. Of course, whether such
> a system can be built is another matter, where one has to rely on
> one's best guesstimates. Regardless, with our way of operation
> easily explained by our identified bugs (irrationality and
> inconsistency), your argument is easily nullified as a reason to
> think systems in general would have to be similarly buggy.
>

This is supposition. It is also supposition that there is a "clean
and consistent goal architecture" that produces much of interest in
the way of AGI. My argument is not yet nullified, as we cannot at
this point really claim that such a system can be built and be
viable. Flexibility is not equivalent to bugginess.

> Of course, I don't deny the possibility of hodgepodge
> superintelligent systems that are as irrational and inconsistent as
> us. In this case they would quite conceivably have to resort to the
> same kind of cognitive kludges as we do, and be rather dangerous.

It might be that our notion of rationality is a bit too tight to
produce a creative, ever-growing super-intelligence.

> No foolproof claims from me here, I just think it's worth a shot at
> making the former kind of AI, what with there being no solid
> arguments for the impossibility thereof. As well as having
> astronomical waste hanging in the balance and all that.
>

No foolproof claims from me either. Just a question and an uneasy
suspicion.

- samantha


