Re: Complexity, Ethics, Esthetics (was re: Defining Right and Wrong)

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Dec 05 2002 - 12:13:33 MST


Cliff Stabbert wrote:
> Wednesday, December 4, 2002, 5:23:46 AM, Samantha Atkins wrote:
>
> SA> Cliff Stabbert wrote:
>
>>>Tuesday, December 3, 2002, 1:46:23 PM, Samantha Atkins wrote:
>>
>
>>>SA> All the above said though, I have no right to choose for anyone
>>>SA> else. If they want the equivalent to being a wirehead then they
>>>SA> must have room to choose that although not to bind others to
>>>SA> supporting their decision directly.
>>>
>>>But here, we get into the subtle details (where Mr. S. is known to
>>>hang out) of how one determines what an entity wants. If you're a
>>>parent, you know that ultimately your child's happiness is better
>>>served by a healthy diet than always giving in to the child's
>>>*proclaimed* desire -- for McDonald's and candy, say (I am
>>>conveniently sidestepping media saturation influence here, which does
>>>play a big role).
>>>
>>
> SA> I was not speaking of children and I don't think a metaphor of
> SA> human adults as children relative to a FAI is at all
> SA> appropriate. A FAI worth its salt will know that Friendliness
> SA> relative to humans requires persuasion in non-coercive human
> SA> activities.
>
> Alright, I think we're talking past each other here. Of course I
> don't have the right to deny others the choice to be a wirehead.
> I was raising the issue of whether an _FAI_ would offer that option, and
> if it did, to what extent it should try to persuade people choosing it
> that there are better things.
>
> I was not trying to reduce the FAI-human relationship to a simple
> parent-child one, but there are analogous elements if:
> - self-actualization of human potential is "best" for humans
> - many humans will choose quick and shallow satisfaction over
> that deeper one
> - the FAI "knows better"
>

Full actualization is the goal. Self-actualization is one
interpretation of the principle of non-coercion. There is
nothing wrong with "leading the horse to water". Forcing the
horse to drink is coercive. Seeking to persuade the horse is not.

> Here in the US, it's certainly not just children who eat too much
> McDonald's and candy...
>
> I don't think an FAI should force anybody to do anything. But the
> question of where "persuasion" crosses that line is a bit tricky with
> a superintelligence.
>

Agreed, in the sense that there is an open question, in my mind
at least, about when it is or is not permissible to augment or
cure problems in individuals who are perhaps incapable of even
noticing those problems when they are carefully pointed out,
much less of requesting a cure for them. This problem exists
today in, for instance, our treatment of the allegedly mentally
ill.

>
>>>======
>>>A tangentially related issue:
>>>
>>>SA> Not to mention that the above is massively boring. You would
>>>SA> have to remove part of human intelligence to have many people
>>>SA> "happy" with simply continuous pleasure. Pleasure is also quite
>>>SA> relative for us. Too much of a "good thing" results in the
>>>SA> devaluation of that pleasure and even eventual repugnance.
>>>
>>>What if you could devise an "optimal path" -- the best rhythm of
>>>alternating ups and downs, challenges and rewards -- is that something
>>>a superintelligence should guide us along, or would that be _less than
>>>optimally rewarding_ because we hadn't chosen that path completely
>>>independently?
>>
>

I don't posit any requirement that we find the optimal path
completely independently. As a matter of fact, I do not believe
that human beings are powerful or clean enough information
processors to have a great likelihood of finding such a path
unaided.

One of my strongest desires for GAI and IA is to increase such
processing ability (and yes, wisdom to boot) to find much more
optimal paths than those proposed and followed today.

> SA> What if we stop thinking up rather low grade "solutions" and
> SA> think about higher level ones and how those may be made most
> SA> likely? Human actualization is not about getting the most
> SA> pleasure jollies.
>
> Yes, that's my point. That there may be an optimal path towards
> actualization, consisting of the right sequence of challenges and
> rewards, in any given instance. My question is whether we would feel
> cheated out of "real" challenge if offered such a path (should it
> exist).
>

Carrot and stick is not a good model, but I take your general
point. Perhaps some people will feel cheated, but I will not be
among them.

> Should an FAI offer such paths? Or should it just restrict itself to
> giving people freedom, i.e. disallowing the initiation of force?
>

It most certainly should offer such paths. An FAI enforcing
non-coercion whether or not people freely choose that regime is
itself a preemptive example.

> If it does more, then what is that more and where are the lines it
> shouldn't cross?
>

There are lines, but I doubt that either of us has sufficient
intelligence and wisdom to fully draw them. :-)

>
>>>Except maybe to point out that the notion of "objective ethics" is at
>>>least as difficult as the notion of "objective aesthetics".
>>
>
> SA> That is not a meaningful observation in this context.
>
> Perhaps it is for those who claim objective ethics are possible while
> they might agree if asked that beauty is in the eye of the beholder,
> or determined by (cultural, historical, personal) context. If
> aesthetics is context-dependent, surely ethics is.
>

That does not cleanly follow, and this line does not seem
fruitful to the main conversation.

>
>>>Somehow we
>>>have to reconcile the notion that "it's all subjective" with the
>>>notion that it's not _all_ _just_ subjective, that some things _are_,
>>>dammit, better/more artful than others.
>>
>
> SA> It is impossible to reconcile opposites. It is not all just
> SA> subjective so why should I reconcile what is to that spurious idea?
>
> Because humans hold contradictory ideas, which is why I used "the
> notion that" rather than "the fact that". If we're going to build
> superintelligences we need to get beyond that and other paradoxes.
>

We need to follow what is true to the best of our ability (and
eventually the AIs') to find it. We do not need to reconcile
what truth we have found with falsehoods.

>
>>>To tie this in with your
>>>earlier statement, perhaps the ethical as well as the aesthetical is
>>>that which increases your intelligence and / or the opportunities for
>>>actualizing its potential...words such as "uplifting" are often
>>>applied in such contexts.
>>>
>>
>
> SA> Perhaps a shorter statement would be that the Good is that which
> SA> actualizes the life/existence of the sentient beings involved.
> SA> The "Good" applies both to judging/providing a partial basis for
> SA> Ethics and Aesthetics.
>
> I can agree with that statement, and I'm curious what role you feel an
> FAI should play in regards to it.
>

I think the FAI will be much better than we are at
understanding what most fully actualizes the life/existence of
itself and other sentients.

- samantha
