Re: Happy Box

From: Mikko Rauhala (mjrauhal@cc.helsinki.fi)
Date: Sat May 03 2008 - 11:33:11 MDT


On Sat, 2008-05-03 at 09:16 -0700, Samantha Atkins wrote:
> I disagree. No supergoal being better / more intelligent than another
> implies that the entities involved effectively have no nature, that they

I fail to see how this is implied. On the contrary, as intelligence is
not a factor in primary goals, the primary goals of an entity pretty
much define its nature (or if you like, the other way around, depending
on your preferred semantics for "nature").

> somehow exist in a vacuum such that whether their goals make conditions
> they live in better or worse for themselves and those around them is
> utterly and completely irrelevant. This is simply and literally
> without foundation.

If "this" refers to the claim that my words somehow imply the above,
indeed it is true. Otherwise this is more straws for the strawman.

> Much of our discussion presumes that a super-intelligence will not do
> meta-abstraction on its own goals. This is very convenient and perhaps
> necessary to convincing ourselves we can somehow cause such a being to
> be "Friendly". But it does not match any intelligent systems existing
> to date and is not a foolproof assumption.

As for systems existing to date, it's trivially explainable why we have
to do that nasty stuff: our goal structure, such as it is, is a
self-contradictory mess of overlapping and inconsistent values. However,
for at least some of us, consistency and its friends are included in
that mess (quite possibly as instrumental values rather than intrinsic
ones). This enables, nay, even requires us to study, abstract and rank
some of our goals in order to make some sense out of the spaghetti we've
been dealt. (The apparent instrumentality stems largely from the fact
that if we did not try to clean things up a bit, we'd be no good at
achieving any of our goals, what with working internally against
ourselves the whole time - as, indeed, we often do regardless.)
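
To make the spaghetti point concrete, here's a toy Python sketch (my
illustration only; the goal names and preference pairs are made up):
once a preference cycle sneaks into the mix, no consistent ranking of
the goals exists at all, which is exactly the kind of thing that forces
the meta-level cleanup work on us.

from collections import defaultdict

# "a is preferred to b" pairs; the third pair closes a preference cycle.
# (Toy values, purely illustrative.)
prefs = [("comfort", "effort"),
         ("achievement", "comfort"),
         ("effort", "achievement")]

def rank(prefs):
    """Try to rank goals consistently (topological sort); anything left
    over after the sort is part of an inconsistent core."""
    succ = defaultdict(set)
    indegree = defaultdict(int)
    goals = set()
    for better, worse in prefs:
        goals |= {better, worse}
        if worse not in succ[better]:
            succ[better].add(worse)
            indegree[worse] += 1
    order = []
    ready = [g for g in goals if indegree[g] == 0]
    while ready:
        g = ready.pop()
        order.append(g)
        for w in succ[g]:
            indegree[w] -= 1
            if indegree[w] == 0:
                ready.append(w)
    return order, goals - set(order)

print(rank(prefs))
# -> ([], {'comfort', 'effort', 'achievement'}): no consistent ranking
#    is possible until the cycle is somehow resolved.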

On the other hand, a rational intelligent system with a clean and
consistent goal architecture does not have to do things like this, and
indeed will not, for rationality in pursuing goals _by definition_ rules
out changing those goals. Of course, whether such a system can be built
is another matter, where one has to rely on one's best guesstimates.
Regardless, with our way of operating readily explained by our
identified bugs (irrationality and inconsistency), your argument is
easily nullified as a reason to think systems in general would have to
be similarly buggy.
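
To illustrate why I say rationality rules out goal changes, here's a
minimal toy sketch in Python (the action names, world model and payoffs
are all invented for the sake of the example): every candidate action,
including rewriting one's own goal, gets scored by the _current_
utility function, so goal modification never wins out unless it happens
to serve the existing goal.

def utility(world_state):
    """Fixed, consistent goal: maximize paperclip count in the outcome."""
    return world_state["paperclips"]

def predicted_outcome(action, world_state):
    """Crude world model mapping actions to predicted outcomes (invented numbers)."""
    outcomes = {
        "build_factory":    {"paperclips": world_state["paperclips"] + 100},
        "do_nothing":       {"paperclips": world_state["paperclips"]},
        # Rewriting the goal predictably yields a future self that no
        # longer adds any paperclips:
        "rewrite_own_goal": {"paperclips": world_state["paperclips"]},
    }
    return outcomes[action]

def choose(actions, world_state):
    # Rational choice relative to the existing goal: pick the action
    # whose predicted outcome maximizes the current utility function.
    return max(actions, key=lambda a: utility(predicted_outcome(a, world_state)))

state = {"paperclips": 0}
print(choose(["build_factory", "do_nothing", "rewrite_own_goal"], state))
# -> "build_factory"; goal modification is never instrumentally favored here.

(Obviously a real system's world model and utility function would be
vastly more complex; the stability point doesn't hinge on that, though.)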

Of course, I don't deny the possibility of hodgepodge superintelligent
systems that are as irrational and inconsistent as we are. In that case
they would quite conceivably have to resort to the same kind of
cognitive kludges as we do, and be rather dangerous.

No foolproof claims from me here; I just think it's worth a shot to
make the former kind of AI, what with there being no solid arguments
for the impossibility thereof. Not to mention the astronomical waste
hanging in the balance and all that.

-- 
Mikko Rauhala   - mjr@iki.fi     - <URL:http://www.iki.fi/mjr/>
Transhumanist   - WTA member     - <URL:http://www.transhumanism.org/>
Singularitarian - SIAI supporter - <URL:http://www.intelligence.org/>

