RE: Universal ethics

From: Mike (mikew12345@cox.net)
Date: Wed Oct 27 2004 - 07:26:26 MDT


> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf
> Of Ben Goertzel

>So the key to Friendly AI may well be engineering a situation in which
caring a little bit is enough.
>In other words: make sure the AI has a really big universe to play in,
so that it doesn't need to
>annihilate our patterns in order to make room for its. In this case
just a little bit of specially-
>focused compassion on us humans will be enough to keep us around.
http://www.goertzel.org/papers/UniversalEthics.htm

-- Ben G

Earlier in your paper you mentioned the idea of complementary patterns,
which I would interpret to include the "needy / need-satisfier" pair.
This suggests another route to safety for humans: ensuring the AI knows
that it needs us. Some possible AI needs that we could fulfill:

- We fill the unique role of being its creator. If nothing else, keep
  us around for historical value.
- We occupy a position in the food chain. Removing humans would disrupt
  the current balance of life on this planet (on second thought, from a
  macro perspective, that might not be such a bad thing for the world,
  so better not to mention this...).
- Next to the AI, we're the most intelligent species on the planet. It
  could be an interesting challenge for the AI to try to bring us up to
  its level, much as humans spend time trying to teach sign language to
  chimps.

Mike W.
