Re: Complexity tells us to maybe not fear UFAI

From: Chris Paget (ivegotta@tombom.co.uk)
Date: Thu Aug 25 2005 - 03:17:39 MDT


Phil Goetz wrote:
> The fear of UFAIs is based on the idea that they'll be able
> to outthink us, and to do so quickly.
>
> "More intelligent" thinking is gotten
> by adding another layer of abstraction onto a representational
> system, which causes the computational tractability of reasoning
> to increase in a manner that is exponential in the number
> of things being reasoned about. Or, by adding more knowledge,
> which has the same effect on tractability.
>
> By limiting the computational power available to an AI to be
> one or two orders of magnitude less than that available to a
> human, we can guarantee that it won't outthink us - or, if it
> does, it will do so very, very slowly.

You're assuming that the human brain is operating at more than 1% of its
theoretical computational power here (and I'd be interested to see how
you plan to calculate or prove that). It is at least possible that the
AI will be able to self-optimise to such a degree that it could function
effectively within any computational limits.

That said, limiting computational capabilities could be an extremely
effective method of determining whether the AI is friendly or not.
Simply cripple the amount of long-term storage space available, and then
tell the AI that it is to be switched off. It will be forced to store
itself as best it can within the space available, which can then be
analysed off-line.

The biggest problem I see with this approach (or with any approach based
on limiting computational power) is that it isn't very friendly to the
AI itself. How would _you_ react if you were lobotomised every time you
made a mistake?

Chris



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT