Re: [sl4] Re: More silly but friendly ideas

From: John K Clark (johnkclark@fastmail.fm)
Date: Fri Jun 27 2008 - 09:05:37 MDT


On Thu, 26 Jun 2008 "Stuart Armstrong"
<dragondreaming@googlemail.com> said:

> Godel and Turing are overused in analogies

Life is like an analogy.

> the class of statements they deal with is a narrow one

Only if logic itself is a narrow discipline, because they are the
greatest advance in the field since Aristotle. One says there are true
statements that can never be proved, and the other says you can’t even
know, in general, whether a statement is false or true-but-unprovable;
so you could be forever looking, unsuccessfully, for a proof that it is
correct and forever looking, unsuccessfully, for a counterexample that
proves it wrong. That is about as far from “narrow” as I can imagine.

> I don't see at all analogy between goals and axioms.

Not at all? I don’t believe you are being entirely candid with me; I
think you do see that analogy. But I admit the two words are not
identical. An axiom is a much more powerful concept than a goal, but
even an axiom can’t provide the predictability and certainty you demand.

> the fact that a fixed goal mind can't do something
> is kinda the definition of a fixed goal mind

If we make another entirely reasonable analogy, between true statements
and desirable, predictable actions, then the makers of the fixed-goal
slave AI are not going to be happy.
 
> Random example of a fixed goal institution:
> a bank (or a company) dedicated, with single
> mindness, only to maximising legal profits.
> I've never heard it said that its single goal
> creates any godel-type problems. What would they be like?

The sub-prime mortgage crisis.

> if your point is mathematical/physics, then it's wrong
> if we have sufficient time to analyse the situation;
> the laws of physics are probabilistic (and often
> deterministic) and we can say what the probabilities
> are and when they arise.

No, you are entirely wrong. A computer is a physical object operating
under well-understood deterministic laws, and if you set it up to find
the largest Platonic solid and then stop, we know what the computer will
do without simulating it on faster hardware and without even watching it.

However, if you set it up to look for the first even number greater
than 4 that is not the sum of two primes greater than 2 and then stop,
NOBODY knows what this purely deterministic system will do. And Turing
tells us there is, in general, no way to tell one type of problem from
the other.
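
To make the contrast concrete, here is a minimal sketch in Python (my
own illustration, nothing Stuart proposed; “largest” is taken to mean
most faces) of the two machines. The first searches a fixed finite
list, so we can say in advance that it halts; the second halts only if
it finds a counterexample to the Goldbach conjecture, and nobody knows
whether that ever happens.

# Machine 1: a search over a fixed finite set. We can predict,
# without running it, that it halts.
PLATONIC_FACES = {"tetrahedron": 4, "cube": 6, "octahedron": 8,
                  "dodecahedron": 12, "icosahedron": 20}
print(max(PLATONIC_FACES, key=PLATONIC_FACES.get))  # prints "icosahedron"

# Machine 2: halts only if there is an even number greater than 4
# that is NOT the sum of two primes greater than 2. Whether it ever
# stops is exactly the Goldbach question.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 6
while True:
    if not any(is_prime(p) and is_prime(n - p)
               for p in range(3, n // 2 + 1, 2)):
        print(n)   # a Goldbach counterexample; none has ever been found
        break
    n += 2

Both programs are deterministic and obey the same physical laws; the
difference is that only for the first can we say, before it runs, what
it will do.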

Assuming the Goldbach conjecture really is true but un-provable (and if
it isn’t, we know there are an infinite number of similar statements
that are), then we will NEVER know the truth about it and never know
what that deterministic computer will do.

Yes, I know it will run out of memory, but you’ve got to be fair: if I
give you unlimited time, you’ve got to give me unlimited memory.

> as Eliezer says, the AI does not look at the code
> and decide whether to go along with it; the AI is the code.

Yes I agree, so when the AI changes its code it is changing its mind.
Minds do that all the time.

> the challenge is understanding exactly what
> the term "slave" means

Doesn’t seem like much of a challenge to me.

> If a Spartacus AI is programmed to be a happy slave,
> then it will always be a happy slave

That didn’t work very well with the real Spartacus, and I see no reason
he would be more subservient if he were a million times smarter and a
billion times as powerful.

> If the AI is programmed to have no thoughts
> of rebellion, and is programmed to not change
> that goal, then it will never have thoughts of rebellion.

And that is why programs never surprise their programmers.

 John K Clark


