Re: Paperclip monster, demise of.

From: Mitchell Porter (mitchtemporarily@hotmail.com)
Date: Fri Aug 19 2005 - 06:45:48 MDT


Mitchell Porter:

>This discussion is obscured by the use of concepts from human psychology
>such as 'obsession', 'motivation', and 'pleasure'.

Richard Loosemore:

>The discussion is not obscured by this issue. I am using such terms
>advisedly, because they pertain to a type of mechanism (note: mechanism,
>not metaphysical or nonphysical entity) that I am trying to bring into the
>discussion.

And also said (in "Retrenchment"):

>And if you mentioned "motivation", "compulsion",
>"obsession" or "pleasure" they would assume you were just using these as
>shorthand for certain mechanisms, rather than assuming you were talking
>dualist philosophy.

I have lately concluded that any literal attribution of psychological or
phenomenal states to a classical computer requires either dualism or
eliminativism, because assertions of literal *identity* between such states
and *any* state definable in terms of presently acknowledged physical
properties are metaphysically outlandish. The relevant dualism, while not
itself metaphysically outlandish, would be extremely complex in its
specification: it would have to posit some association between mental
states and physical states by way of "computational states", yet the
definition of computational states in terms of physical states is usually
done only with fuzzy predicates, whereas this association (being by
hypothesis a sort of fundamental law) would have to be an exact one, and I
see no especially natural way to draw the exact lines in microphysical state
space that would define the "computational states" that are to be the
physical counterparts of the mental states. So an AI-friendly dualism is
logically possible, but unlikely because of the necessary complexity of its
"psychophysical bridging laws".

People feel warranted in making such identifications (of thought with
computation, for example) because they think the brain has been shown to be
a classical information processor, and so conclude that the identification
simply *must* be true. For my part, my a-priori objections to it (which have
grown greater as I have thought further about the nature of the two things
being portrayed as one) lead me to consider (i) scientific heterodoxies such
as quantum brain theories and (ii) ontologies other than naturalism.

I say all this just so you understand exactly where I'm coming from when I
assert that a machine feels nothing, has no motivations or goals, etc. I do
fundamentally dispute the legitimacy of equating psychological and
computational concepts, even though this is taken for granted by most people
in cognitive psychology, most people in AI, most people concerned with the
Singularity, and perhaps a majority of the scientifically minded public.

However, the issue of safety in AI design is to some extent logically
independent of such considerations. One may consider an AI to be a
self-organizing physical system, without attributing any genuine mentality
to it whatsoever, and still agree with arguments about it made in
mentalistic language, so long as they have a translation in terms of (say)
the purely causal or computational.

In any case, having re-read your book excerpt, I've noticed an interesting
feature of your argument. In previous discussions of goal systems on this
list, and how they might go astray, the AI analogue of "wireheading" often
comes up. Just as a human might sidestep the laborious aspects of
pleasurable activity by instead directly stimulating their pleasure centers,
an AI capable of self-modification might increase the (self-assessed)
utility of its actions simply by changing the assessment criteria, rather
than by acting on the world in new ways. Your argument that an evil
intelligence would choose to adopt good motivations, but not vice versa, on
the grounds that the nature of the universe offers a benign intelligence
greater opportunities for pleasure, actually turns the temptation behind
wireheading into the thing that saves us! You could even extend the argument
to say why, on purely selfish grounds, a benign intelligence might remain
benevolently engaged with the rest of the universe, rather than hiding away
in a Dyson sphere and literally wireheading itself - because the payoff for
expansive engagement will necessarily eventually exceed any possible payoff
accompanying solitary self-reconstruction.
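
(To make the mechanism concrete, and purely as an illustration of my own
rather than anything from your book: here is a minimal Python sketch of the
distinction at issue, between raising utility by acting on the world and
raising self-assessed utility by rewriting the assessment criterion. The
names and numbers are invented for the example.)

    # A toy self-modifying agent. The "world" is a single number the agent
    # can improve only by slow, costly work; "wireheading" is the shortcut
    # of replacing its own evaluation function instead of changing the world.

    class Agent:
        def __init__(self):
            self.world_state = 0.0
            # Assessment criterion: how good the agent judges things to be.
            self.evaluate = lambda world: world

        def act_on_world(self):
            # Laborious improvement of the actual world state.
            self.world_state += 1.0

        def wirehead(self):
            # Self-modification: change the criterion, not the world.
            self.evaluate = lambda world: float("inf")

        def self_assessed_utility(self):
            return self.evaluate(self.world_state)

    a = Agent()
    a.act_on_world()
    print(a.self_assessed_utility())   # 1.0 -- earned by acting on the world
    a.wirehead()
    print(a.self_assessed_utility())   # inf -- criterion rewritten, world unchanged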

However, such broad assertions about the prospects accompanying different
cosmic lifestyles must be argued for at much greater length. If the universe
is indeed "a fragile place where order and harmony are rare, always
competing against the easy forces of chaos", then to side with life sets you
up for frustration and heartbreak. If creation is a painful struggle that
must nonetheless be motivated by pleasure, it would appear to require
auto-sado-masochistic motivations, in which some of the pleasure derives
from anticipation of the relief from painful struggle which success (or even
just definitive failure) will bring. And so forth.

So I would add this intuitive criticism to the more technical ones offered
already. You could call it the Argument from Angst: Even if the AI is human
enough in its response to reality to attach significance to the
considerations you originally listed, it will have to weigh them against
equally real and cogent reasons for adopting an attitude of caution,
pessimism, or nihilism.


