RE: Ben's _Thoughts on AI Morality_

From: Ben Goertzel (ben@goertzel.org)
Date: Mon May 06 2002 - 23:19:43 MDT


> It's very traditional to view category structure as a hierarchy with
> more abstract concepts at the "top", although whether this convention
> reflects real properties of the system is debatable. However, goals
> tend to center around neither very abstract categories nor very
> concrete categories, but rather an intermediate level called
> basic-level categories. Basic-level categories have a wide range of
> interesting properties; they tend to be the most abstract categories
> for which you can still call up a specific mental image,

Ah, we do disagree here.

I think that some goals are definitely associated with more abstract
categories than the ones you call "basic-level."

*Compassion*, I feel, is an example. I find it hard to get a concrete feel
or image for what compassion really means, in the deepest sense, yet it's
still a deep-seated goal of mine.

Similar is the goal of *creativity*. This is a very abstract goal, not
tangible or easily imaged.

To me, the most important and deepest goals are the ones that go *beyond*
the domain of the easily and tangibly image-able.

I guess I am revealing my quasi-mystical streak here... but of course
"mysticism" and the notion of "higher goals" are an important part of human
psychology and experience...

And I think that much of what is screwed up about the HUMAN goal system is
that our high-level abstract goals sit at a level equal to (or often lower
than) very concrete goals like "Get me sex!", "Get me food!", "Keep me
alive!"

One thing I didn't say in that article, but I now think I should have, is
that the human goal structure DOES violate the dual network structure in
many ways, AND that this is, in my view, a large part of the reason why
we're so bloody fucked up...

However, we do NOT have very powerful self-modifying capabilities, which is
why we have persisted with a somewhat fucked-up goal structure for so
long...

The thing is, our goal structure is largely atavistic. When our goals of
sex, food and continued life emerged in our animal ancestors, they were
close to the top of our internal dual networks.

But then we developed much more abstract upper levels in our internal dual
networks, yet our goal systems have adapted to them only partially and
slowly.
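
To make the mismatch concrete, here is a minimal toy sketch in Python. It is
purely illustrative: the names, abstraction levels, numbers and the "harmony"
check are assumptions for the example, not a formal rendering of the dual
network idea; the point is only the ordering mismatch between abstraction and
priority described above.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    abstraction: int   # 0 = most concrete; higher = more abstract
    priority: float    # how strongly the system weights this goal

def harmonious(goals):
    """True if no goal is outweighed by a strictly less abstract goal."""
    return all(more.priority >= less.priority
               for more in goals
               for less in goals
               if more.abstraction > less.abstraction)

# A caricature of the human goal structure sketched above: the old,
# concrete drives still carry the highest weights.
human_goals = [
    Goal("keep me alive", abstraction=0, priority=1.0),
    Goal("get me food",   abstraction=0, priority=0.9),
    Goal("get me sex",    abstraction=0, priority=0.9),
    Goal("compassion",    abstraction=3, priority=0.4),
    Goal("creativity",    abstraction=3, priority=0.3),
]

print(harmonious(human_goals))   # False: the abstract goals are outranked

A dual-network-harmonious goal structure would make that check come out True;
ours does not.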

A strongly self-modifying superhumanly intelligent AI is going to be able to
adapt itself a lot faster... so it will not last as long with a
non-dual-network-harmonious goal structure...

Thanks for leading me down this line of thought; this will go in the next
revision of that essay ;>

-- Ben


