From: Christopher Healey (CHealey@unicom-inc.com)
Date: Sun Dec 02 2007 - 09:10:58 MST
> John K Clark wrote:
> And in the bizarre world where the friendly AI
> lives, human infants, embryos even, give orders
> to their parents and their parents obey without
> question. Perhaps if the Many Worlds interpretation
> of Quantum Mechanics is correct there is a universe
> where this happens, but it's not the one we live in.
As a matter of fact, my father shared with me that in almost every
decision he made regarding my upbringing, he did his best to do those
things I would have asked him to do, were I looking back upon them as my
future self. I believe many parents do this at least from time to time,
though certainly more implicitly. And where my mistakes were not so
serious as to excise huge regions of my future choices (or small but
critical ones), he let me make them and learn from them.
In an important and relevant sense, I believe you could say he did obey
me without question.
Most of us could be accused of allowing children to starve. It's not
because we're malevolent and wish them a swift demise; these mostly
distant youths are, more than anything, just not relevant in our daily
lives. However, they starve whether we justify it or not. If we're
simply not relevant to an AI with the potential ability to control most
or all resources, we'll likely "starve", too.
The only relevance we're ultimately going to have to an entity we
construct that is radically more intelligent than us is the relevance we
embed into the structure and content of the seed we plant (how tenuous
that embedding ends up being over iterated self-improvement will surely
vary as a function of the specific design). Any game-theoretical
explanations for emergent long-term cooperation seem to break down when
one agent can act with literal impunity. I'd expect any intelligent
actor seeking any goal to cooperate as long as that was the shortest
path to goal fulfillment, but at some power disparity, cooperation
starts to become just another needless hoop to jump through.
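That breakdown can be sketched with a toy iterated prisoner's dilemma. The payoff numbers and the tit-for-tat opponent here are illustrative assumptions, not anything from the post; the point is only that once one player's payoff stops depending on the other's response, retaliation loses its teeth and defection dominates:

```python
# Toy model (hypothetical payoffs): with retaliation in effect,
# cooperation pays over repeated rounds; under "impunity" (A's payoff
# no longer depends on B's move), defection strictly dominates for A.

# Standard prisoner's dilemma payoffs for the row player:
# (my_move, their_move) -> my payoff
PD = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, rounds, payoff_a):
    """Player A uses strategy_a; player B plays tit-for-tat.
    Returns A's total payoff."""
    a_hist, b_hist = [], []
    for _ in range(rounds):
        a = strategy_a(b_hist)
        b = tit_for_tat(a_hist)
        a_hist.append(a)
        b_hist.append(b)
    return sum(payoff_a[(a, b)] for a, b in zip(a_hist, b_hist))

# Retaliation matters: always-cooperate beats always-defect over time.
coop = play(lambda h: "C", 10, PD)    # mutual cooperation every round
defect = play(lambda h: "D", 10, PD)  # one exploit, then punished
assert coop > defect

# "Impunity": A's payoff is the same whatever B does in response.
IMPUNE = {("C", "C"): 3, ("C", "D"): 3, ("D", "C"): 5, ("D", "D"): 5}
coop2 = play(lambda h: "C", 10, IMPUNE)
defect2 = play(lambda h: "D", 10, IMPUNE)
assert defect2 > coop2  # cooperation is now a needless hoop
```

Under the ordinary payoff matrix the tit-for-tat opponent's retaliation makes defection a losing strategy across repeated rounds; remove the dependence, and the equilibrium flips.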
When Lord Acton said, "Power tends to corrupt, and absolute power
corrupts absolutely", he was definitely talking about game theory. And
it is our Friendliness to
other entities that is corrupted. At least *we* can often count on the
end result of our entire evolutionary path and its cultural expression
to ground us. But in constructing an intelligence, we have no a priori
need to maintain that deeply seated continuity. We can do it in any
number of ways that actually function.
FAI is, in a sense, about maintaining that arc of continuity (of which
we tend to consider ourselves an important part), if possible, as much
as possible. It may not be possible. But its possibility is the
difference between facilitating our future in force, or going silently
into the good night of a strange and final Mitochondrial Eve.
And at least if *that* has to happen, let's push something
non-degenerate through that bottleneck. It may turn out that some
processes that look like reasonable Super-AIs can be shown to (likely)
halt after making a solar system's worth of paperclips. Let's choose a
different design than that one, if we can. And better yet, one that
doesn't result in an AI posthumously concluding it would have been nice
to keep us around (we were so close!), once it's *really* done growing.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT