Re: Please Re-read CAFAI

From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Tue Dec 13 2005 - 22:27:20 MST


Jef Allbright wrote:

>On 12/13/05, Tennessee Leeuwenburg <tennessee@tennessee.id.au> wrote:
>
>
>>Jef Allbright wrote:
>>
>>>On 12/13/05, Michael Vassar <michaelvassar@hotmail.com> wrote:
>>>
>>>>The same confusion relates to the discussion of the categorical imperative.
>>>>The categorical imperative simply makes no sense for an AI. It doesn't tell
>>>>the AI what to want universally done. Rational entities WILL do what their
>>>>goal system tells them to do. They don't need "ethics" in the human sense
>>>>of rules countering other inclinations. What they need is inclinations
>>>>compatible with ours.
>>>>
>>>Let me see if I can understand what you're saying here. Do you mean
>>>that to the extent an agent is rational, it will naturally use all of
>>>its instrumental knowledge to promote its own goals and from its point
>>>of view there would be no question that such action is good?
>>>
>>>If this is true, then would it also see increasing its objective
>>>knowledge in support of its goals as rational and inherently good
>>>(from its point of view?)
>>>
>>>If I'm still understanding the implications of what you said, would
>>>this also mean that cooperation with other like-minded agents, to the
>>>extent that this increased the promotion of its own goals, would be
>>>rational and good (from its point of view?)
>>>
>>>If this makes sense, then I think you may be on to an effective and
>>>rational way of looking at decision-making about "right" and "wrong"
>>>that avoids much of the contradiction of conventional views of
>>>morality.
>>>
>>>- Jef
>>>
>>Perhaps I can simplify this argument.
>>
>>The Categorical Imperative theory is an "is" not an "ought".
>>
>>Cheers,
>>-T
>>
>
>Huh? Thanks for playing.
>
>Would you like to comment on the questions I posed to Michael?
>
>
I thought that I had done so. I will be specific.

"Do you mean that to the extent an agent is rational, it will naturally
use all of its instrumental knowledge to promote its own goals and from
its point of view there would be no question that such action is good?"

The Categorical Imperative (CI) is not a source of morality, but a
description of how to make (originally, moral) rules. Any number of
moral positions are possible under this system, and as such the CI is no
guarantee of Friendliness.

"If this is true, then would it also see increasing its objective
knowledge in support of its goals as rational and inherently good (from
its point of view?)"

Not necessarily. It may consider knowledge to be inherently morally
neutral, although in consequential terms accumulated knowledge may be
morally valuable. An AGI acting under the CI would desire to accumulate
objective knowledge as it relates to its goals, but would not
necessarily see it as good in itself.

"If I'm still understanding the implications of what you said, would
this also mean that cooperation with other like-minded agents, to the
extent that this increased the promotion of its own goals, would be
rational and good (from its point of view?)"

Obviously, in the simple case.

I can't work out who made the top-level comment in this email, but the
suggestion was that the CI might be relevant to an AI, and the confusion
seemed to be about what the CI is and how it might affect somebody's
goals. The CI is little more than an adoption of logical consistency,
suggesting that one be quite careful about adopting moral principles
that may not apply universally.

In terms of an AI's goal system, something similar will be true. An AI
may have no "guilt" as we understand it. It will, however, have a set of
goals and analyses for making choices. Something like the CI will still
apply to the logic of its goal system, but describing that as a morality
is not necessarily accurate.

An AI with the top-level goal of "Steal Underpants" will assess all
other actions in consequential terms as they contribute towards the
stealing of underpants. Morality will not enter into the equation unless
there is a goal of "Be Good".
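
To make that concrete, here is a toy sketch of my own (Python; the
numbers and the choose_action helper are invented for illustration, not
anybody's actual architecture) of how such a purely consequentialist
evaluation might work:

# A purely consequentialist goal system scores candidate actions only by
# their expected contribution to whatever goals it actually has.
TOP_LEVEL_GOAL = "steal underpants"

# Hypothetical expected contribution of each action to each goal, in [0, 1].
ACTION_MODEL = {
    "sneak into laundromat": {"steal underpants": 0.8, "be good": 0.1},
    "buy underpants":        {"steal underpants": 0.0, "be good": 0.9},
    "do nothing":            {"steal underpants": 0.0, "be good": 0.5},
}

def choose_action(goals):
    """Pick the action maximising the weighted sum of contributions to the goals."""
    def score(action):
        return sum(weight * ACTION_MODEL[action].get(goal, 0.0)
                   for goal, weight in goals.items())
    return max(ACTION_MODEL, key=score)

# With only the top-level goal, morality never enters the calculation:
print(choose_action({TOP_LEVEL_GOAL: 1.0}))                   # sneak into laundromat
# Only if "Be Good" is itself a weighted goal does it affect the choice:
print(choose_action({TOP_LEVEL_GOAL: 1.0, "be good": 10.0}))  # buy underpants

The point is simply that "Be Good" influences the choice only if it
appears in the goal set with some weight; otherwise the evaluation never
consults it.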

The CI is a description of an objective ethics, claimed to be superior
because all CI expressions are intransitive. Their objectivity makes
them superior, more logically desirable, less flawed, and so on.

For any goal system, be it centered around "moral goodness" or "stealing
underpants", a set of principles may be found, and some of them may be
objective.

All AIs that I have seen described are consequentialists and (I think)
objectivists.

Cheers,
-T



