Re: How to make a slave (was: Building a friendly AI)

From: Randall Randall
Date: Thu Nov 29 2007 - 10:41:54 MST

On Nov 29, 2007, at 8:43 AM, John K Clark wrote:

> On Wed, 28 Nov 2007 "Robin Lee Powell"
>> His point is that you should have gone
>> and done some actual logical symbol
>> manipulation to come to that conclusion,
> Oh that's what he was talking about! He was saying I should write a
> paper full of symbol manipulation and references to "AGI" (not AI,
> never
> that),

No, I wasn't. When you take a perfectly reasonable
explanation of what I was saying (Robin's), and then
begin talking about papers and pretending that I've
made some reference to your hobby horse about the terms
of art in the field, you are not arguing against what
I said, but merely being dishonest. Perhaps, however,
you seriously misremembered what I said and believed
I'd said something about the AGI acronym, or asked for
a published paper. Perhaps.

> that no working scientist will ever read in order to prove that
> intelligent people can make more intelligent decisions than less
> intelligent people.

That wasn't your original assertion though. Rather,
you said it was logical that more intelligent beings
should be in charge, rather than less intelligent
beings. Being in charge has nothing to do with making
decisions, because you cannot decide what your top
goal should be -- decisions imply a "better" and "worse",
which only make sense in the context of a pre-existing goal.

So, to show your original assertion, you'd have to show
that the most important goals of more intelligent beings
were better than the most important goals of less
intelligent beings. This is why other people on this list
have inferred that you believe in a universal morality; if
you do not, how could you show such a thing?

Randall Randall <>
"If we have matter duplicators, will each of us be a sovereign
  and possess a hydrogen bomb?" -- Jerry Pournelle

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT