From: Nick Tarleton (firstname.lastname@example.org)
Date: Thu Mar 13 2008 - 06:41:35 MDT
On Wed, Mar 12, 2008 at 11:48 PM, Thomas McCabe <email@example.com> wrote:
> > Of course. The vast majority of systems are going to be simple, but
> > they will also be unintelligent. The intelligent systems are going to
> > be complex and have many goals (or else all the effort in making them
> > complex was wasted).
> Complexity(intelligence) < Complexity(intelligence + complicated goal
> system) < Complexity(intelligence + Friendly goal system). As
> complexity increases, prior probability drops off *very* fast: 10 extra
> bits of complexity means roughly a factor-of-1,000 (2^10 = 1024)
> decrease in prior probability.
This isn't completely valid, as the probability distribution of AIs
designed by humans will *not* match the universal (Solomonoff) prior
defined by Kolmogorov complexity; still, we are pretty likely to make
simple goal systems.
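
For concreteness, here is a sketch of the arithmetic behind the
factor-of-1,000 figure, assuming the Solomonoff-style universal prior
that the quoted argument appears to invoke (that choice of prior is my
reading of the thread, not something either poster states explicitly):

```latex
% Universal (Solomonoff) prior: a program p of length |p| bits
% receives prior mass proportional to 2^{-|p|}.
m(p) \propto 2^{-\lvert p \rvert}

% Adding k bits to the shortest description of a goal system
% therefore multiplies its prior probability by 2^{-k}:
\frac{m(p_{+k})}{m(p)}
  = \frac{2^{-(\lvert p \rvert + k)}}{2^{-\lvert p \rvert}}
  = 2^{-k},
\qquad
2^{-10} = \frac{1}{1024} \approx \frac{1}{1000}.
```

The caveat above then amounts to noting that human designers do not
sample from m(p): design practice concentrates probability on goal
systems that are simple for humans to specify, which is related to, but
not identical with, short program length.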