Re: More silly but friendly ideas

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Wed Jun 11 2008 - 01:17:30 MDT


2008/6/11 John K Clark <johnkclark@fastmail.fm>:

>> You are saying that because naturally evolved
>> intelligence behaves in a particular way,
>> every possible intelligence must behave in that way.
>
> Yes, that is what I'm saying; and because such a being has never been
> observed, it is your responsibility to prove it is possible.

I gave the example of a computer smart enough to simulate the
behaviour of simple organisms but deciding not to do so.

> It can be proved that a finite set of axioms cannot derive all
> that is true, so I see no reason why a finite set of goals can derive
> all actions that can be performed.

If that's true then a machine with a finite set of goals won't be able
to perform all actions that can be performed. But that's not
surprising: a bomb intent on blowing up its target is unlikely to take
up knitting as a hobby unless it is part of an ingenious scheme to
help it achieve its objective.

> Intelligence means being able to think outside the box, but you claim to
> be able to dream up a box that something much smarter than you cannot
> think outside of.
> You want a brilliant simpleton, something very intelligent but who can't
> think, something smarter than you that you can outsmart. That's nuts.

In general you can't predict the behaviour of something smarter than
you are, but in special cases you can. Babies can predict that their
mothers will feed them when they cry, because mothers are programmed
that way. For the most part this programming works, despite the fact
that evolved brains have only vague and changeable goal systems, and
the degree to which a mother is committed to looking after her baby
does not depend on her intelligence.

>> an AI cannot arrive at ethics, aesthetics
>> or purpose without having such arbitrary
>> axioms as givens in its initial programming.
>
> What are the axioms of human behavior? What is the top super-goal?

If there is one, it could easily change, but it is not necessary to
program a computer this way. Conversely, it is possible to program a
computer to randomly vary its goals, e.g. a word processing program
that decides on a whim which character to write when you press a
particular key. Variability of goals has nothing to do with level of
intelligence.
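The whimsical word processor is easy to make concrete. Below is a minimal
Python sketch (the function names are my own, purely illustrative): one
handler has a fixed goal and always writes the key you pressed, while the
other varies its goal at random. Neither behaviour tells you anything
about how intelligent the program is.

```python
import random

def faithful_handler(key: str) -> str:
    """Fixed goal: always write the character that was pressed."""
    return key

def whimsical_handler(key: str) -> str:
    """Randomly varying goal: half the time, write some other
    character chosen on a whim instead of the one pressed."""
    if random.random() < 0.5:
        return random.choice("abcdefghijklmnopqrstuvwxyz")
    return key

# The fixed-goal handler is perfectly predictable, however
# sophisticated the rest of the program might be.
assert all(faithful_handler(k) == k for k in "hello")
```

The point of the sketch is only that goal stability is a property of the
program's design, not of its sophistication: making the handler smarter
would not, by itself, make it any more or less whimsical.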

-- 
Stathis Papaioannou


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT