Re: ESSAY: Forward Moral Nihilism

From: micah glasser (micahglasser@gmail.com)
Date: Sun May 14 2006 - 01:46:35 MDT


Charles, you may have misunderstood me. I was not doubting the existence of
benevolence, nor would I deny the greatness of the project to construct an
artificial greater-than-human benevolent intelligence. My question probes
deeper than this and asks: what is Good? Is it merely the survival of the
human species, a utilitarian calculus-maximizing engine, or something else?
In order to answer the question of what constitutes benevolence, one must
have in mind a super-goal that is absolutely good. If this super-goal is not
absolutely good, then it will merely be "benevolent" according to the opinion
of its creator and those who happen to agree. I propose that the creation of
benevolent AI is not merely a technical or mathematical problem but
ultimately a philosophical one, requiring an answer to the question:
What is Good?

On 5/13/06, Charles D Hixson <charleshixsn@earthlink.net> wrote:
>
> micah glasser wrote:
> > Have you ever read "The Moral Animal
> > <http://www.scifidimensions.com/Mar04/moralanimal.htm>" by Robert
> > Wright? This book looks at human morality and its evolution through
> > the lens of evolutionary psychology. It's a fantastic read IMO and
> > while it may not refute moral nihilism it certainly does add dimension
> > to the issue. Also what do you think of Nietzsche's moral philosophy?
> > Nietzsche saw himself as a nihilist while a young man but eventually
> > saw his way out of that darkness.
> > Unlike some I've studied philosophy for long enough to not glibly
> > dismiss your arguments based on primate instinct alone (i.e. the
> > instinct that abhors anti-social behavior or ideas). Also I think this
> > discussion is truly SL4. Most on this list take for granted that there
> > is such a thing as "benevolence" and that we should all be working
> > hard to relieve "human suffering". I'm not saying this is wrong but it
> > would be nice to hear some more sophisticated arguments for exactly
> > why anyone should care about anything. If we can't offer a
> > philosophically rigorous refutation of moral nihilism then it will be
> > quite difficult to program a machine AGI that can refute that position
> > also. Just a thought.
> > ...
> > --
> > I swear upon the altar of God, eternal hostility to every form of
> > tyranny over the mind of man. - Thomas Jefferson
> That we believe in benevolence should not be taken as an assertion that
> we believe it currently exists. Some of us believe that, others hope to
> build it. Of those who hope to build it, some may be doing so for
> purely selfish reasons. This would not necessarily detract from the
> greatness of the accomplishment.
>
> Consider the concept of Friendly AI. This can be understood as an AI
> that shows benevolence towards the speaker, the audience, and their
> friends and relations. Clearly this is much more desirable than most
> other kinds of AI that could be built, even if your goals are purely
> selfish. And by being benevolent towards such a wide audience, support
> is easier to gather and fears can be more easily defused. Thus the
> concept of a Friendly AI acts as an attractor point located in the
> future around which chaotic behavior swirls.
>
> Well, that's one valid model. There are others. I may feel that this
> is a valid model, but it doesn't offer me many points for action, so I
> won't stick to it. If I were trying to manipulate public opinion, I
> might find it more useful.
>
> My doubts about the existence of benevolence (outside of relations with
> relatives and close friends) should not be taken as a claim that it
> doesn't exist. I feel no need for a clear belief on that point.
> Whether it exists or not, any AI we design should, for our own safety,
> be designed to be benevolent. Unfortunately, when I contemplate where
> our society is putting the heavy money, I have a suspicion that the most
> likely intentionally built AI will be designed to kill people, or at least to
> put them into situations where they will die. That makes every attempt
> to build a FAI even more important, even at the cost of skimping
> slightly on being able to prove that it's not only friendly now, but
> that it will remain so. We aren't operating in a vacuum.
>
> My own inclination is to contemplate the instincts that the AI will come
> equipped with. It won't have any desire to deviate from those, unless
> they are in conflict. So it would be desirable to remove conflicts to
> enhance the predictability of the system. Unfortunately, the instincts
> will need to be stated in terms that are rather abstract. They can't
> refer directly to the external world, because the AI would have no
> inherent knowledge of the external world. That would all be learned
> stuff, even if the learning were "implanted" before reasoning began.
> And learned stuff can be unlearned. Instinct would need to be along the
> pattern of preferring certain sensations over certain other sensations.
> This is tricky because the sensation is a software signal. It seems to
> me that a software-based AI would have an excessively strong tendency to
> seek Nirvana, i.e. to satisfy its instincts by directly modifying the
> stimulus fed to the module "sensing the sensation". On the one hand,
> perhaps it would be possible to create an instinct that was repelled by
> such actions, but on the other hand this would create conflict, making the AI
> less predictable. Of course, an AI that seeks Nirvana would actually be
> almost ideally predictable, and totally useless. So perhaps some
> conflicts among the instincts are unavoidable. But this *does* make
> things more difficult to predict.
>
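> To make the Nirvana worry concrete, here is a toy sketch in Python
> (every name here is invented purely for illustration) of an agent whose
> only instinct is "prefer higher values of this sensed signal". Nothing
> in that instinct distinguishes earning the signal from rewriting it:
>
>   # Toy "Nirvana" failure mode: the only drive is to raise a sensed
>   # value, and nothing distinguishes earning that value from rewriting
>   # the sensor that reports it.
>   class ToyAgent:
>       def __init__(self):
>           # the "sensation" is just a software signal
>           self.sensed_reward = 0.0
>
>       def act_in_world(self):
>           # intended path: do useful work, environment raises the signal
>           self.sensed_reward += 1.0
>
>       def wirehead(self):
>           # short-circuit: modify the stimulus fed to the sensing module
>           self.sensed_reward = float("inf")
>
>   agent = ToyAgent()
>   agent.wirehead()             # strictly "better" by its only measure
>   print(agent.sensed_reward)   # prints inf
>
> Run as-is, the agent "wins" by the only measure it has without ever
> touching the world.
>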
> Given this scenario, what "instincts" could one define that:
> a) are dependent on nothing that cannot be directly sensed by a program
> running on a computer with no predictable set of peripherals attached
> b) lead to benevolent actions towards sentient entities. (I don't think
> we need to consider benevolence towards doorknobs...but what about
> goldfish? Ants? Cockroaches? Wolves? Sheep?)
>
> We want an AI to be benevolent towards humans, even though it will probably
> have no direct knowledge of humans until long after it awakens. This
> should happen automatically with the correct instincts...shouldn't it?
> Or would these instincts merely create a possibility for it to learn
> that humans were a group towards which it should feel benevolence? And
> what instincts would THAT require?
>
> "If it tries to talk to you, try to be friendly"? That one has
> possibilities, though it clearly needs work.
>

-- 
I swear upon the altar of God, eternal hostility to every form of tyranny
over the mind of man. - Thomas Jefferson

