From: Samantha Atkins (email@example.com)
Date: Wed Apr 02 2003 - 10:58:00 MST
>>So I agree with the general notion that there are
>>more ways of a FAI being present with and allowing
>>suffering than can easily be disposed of.
> Do you reject the idea that one person’s interests can
> ever be more important than another’s? Why? Do you
> think you know the best reason to do so?
I am not quite seeing the relevance of your line of questioning.
Generally speaking, persons are not of the same capabilities
or potentials, and I doubt that they ever will be. Things may
develop such that there is little or no perceived conflict of
interest between persons. That would be great. But if there is,
then it is quite possible that one person's interest, say in
drastically improving the lot of the world, might be more
important than another's interest, say in finding the next fix
for a drug habit.
What does "to do so" refer to in the above paragraph: choosing
between different persons' interests? But why should it be up
to any external entity to choose? In any condition of real
conflict, assuming both sets of interests are actually ethical,
I see no reason why some form of competition for resources to
satisfy the interests, or some means of bargaining or voting,
could not be used.
> Does it matter if someone does so for a reason other
> than the best reason? Do you understand the
> “disconnect” that can happen in such a case?
Does so what? I am a bit lost here.
> By what possible mechanism would you determine another
> individual’s interests? How would you like this to be
> determined in your case? And might the ideal method of
> conclusion vary between individuals then?
Again, I don't know what you are getting at with this.
> So the question is, "Why would active benevolence
> allow for involuntary suffering?" That's sure not an
> interest of mine.
What isn't, allowing for involuntary suffering? Perhaps it
cannot be removed without also removing a set of very much
wanted goods utterly in keeping with Friendliness. I simply do
not have sufficient arrogance, or in-depth understanding, to
claim boldly that it can and will be removed in all cases.
> Benevolence isn't supposed to have unintended
> consequences. And the whole ideal of “rejecting the
> idea that one person’s interests can ever be more
> important than another’s” is that each person’s
> "interests" are theirs alone. That’s what human
> benevolence is. I can’t wish any other definition of
> benevolence on anyone. Do you want a world where any
> mind can? Why would you?
Our world is full of benevolent intentions having unintended
consequences. Mainly this occurs when we attempt to muck with
each other "for our own good", with or without permission. I am
not at all convinced that an SAI can always, should always, or
will always choose to do such involuntary mucking about.
I don't want a world where any mind can impose its notion of
benevolence on me against my will.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT