Re: Is complex emergence necessary for AGI?

From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Sep 20 2005 - 13:02:20 MDT


I am not going to reply to any more of this stuff, because almost
everything said here about Complex Systems is based on a complete
misunderstanding of what Complex Systems actually are.

Michael Wilson wrote:
> Ben Goertzel wrote:
>
>>An unpredictable emergent phenomenon in a system is a behavior in a whole
>>system that we know can in principle be predicted from the behavior of the
>>parts of the system -- but carrying out this prediction in practice is
>>extremely computationally difficult.
>
>
> I'm not going to continue criticising the 'we must use Complexity theory'
> position, as I think that debate is past the point of diminishing returns.
> However given the potential for confusion in the extended back-and-forth
> I think I should clarify my (and to a lesser extent, the SIAI's) position.
>
> 1. The requirement for a certain amount of strong predictability comes
> from three things: the need for Friendliness; analysis suggesting that
> unless you strongly constrain goal system evolution it will be highly
> unpredictable; and the simple fact that when humans are confident that
> something will work, without having a technical argument for why it
> will, we're usually wrong.
>
> 2. Thus the SIAI has the design requirement: the goal system trajectory
> must reliably stay within certain bounds, which is to say that the
> optimisation targets of the overall optimisation process must not drift
> out of a certain region. This is a very specific and limited kind of
> predictability; we don't need to predict specific AI behaviour or
> cognitive content. I agree that the task would be impossible if one were
> trying to predict much more than just the optimisation targets. I am
> happy to have all kinds of emergence and Complexity occurring as long as
> they stay within the overall constraints, though theory and limited
> experimental experience suggest to me that there will be a lot less of
> this than most people would expect.
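>
> To make 'must not drift out of a certain region' concrete, here is a
> minimal Python sketch. The vector-of-targets representation and the
> box-shaped region are illustrative assumptions of mine, not anyone's
> actual formalism:
>
>     # Hypothetical monitor: check that optimisation targets stay inside
>     # a fixed region across self-modification steps.
>     from typing import Callable, Sequence, Tuple
>
>     Bounds = Sequence[Tuple[float, float]]  # (low, high) per dimension
>
>     def within_region(targets: Sequence[float], bounds: Bounds) -> bool:
>         """True iff every target component lies inside its interval."""
>         return all(lo <= t <= hi for t, (lo, hi) in zip(targets, bounds))
>
>     def checked_step(targets, bounds, step: Callable):
>         """Apply one self-modification step, rejecting any step whose
>         resulting targets would leave the permitted region."""
>         proposed = step(targets)
>         if not within_region(proposed, bounds):
>             raise ValueError("goal trajectory left the permitted region")
>         return proposed
>
> Of course this only checks a trajectory as it happens; the design
> requirement above is the far harder one of establishing in advance that
> no reachable step can fail the check.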
>
> 3. If that turns out to be impossible, then we'd agree that AGI
> development should just go ahead using the best probabilistic methods
> available (or perhaps it would make sense to develop IA first in that
> case). But we shouldn't write something this important off as impossible
> without trying really hard first, and I think that many people are far
> too quick to dismiss this so that they can get on with the 'fun stuff',
> i.e. actual AGI design.
>
> 4. Various researchers including Eliezer have spent a fair amount of time
> on this, and so far it looks probable that it is possible given arbitrary
> AGI designs that have access to unbounded computing power. The critical
> question is whether there is a tractable design for an AGI that satisfies
> the structural requirements of these theories. This is something that I'm
> working on; unfortunately I'm not aware of anyone else working on it at
> present, though I certainly wish there were.
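>
> The flavour of 'possible given unbounded computing power' can be conveyed
> by brute-force reachability checking -- my own illustration, not a
> description of anyone's actual proof technique:
>
>     # With unlimited compute, verifying 'the system never enters an
>     # unsafe state' reduces to exhaustive search of reachable states.
>     def verify(initial, successors, safe):
>         """True iff every state reachable from `initial` satisfies
>         `safe`. Terminates only on finite state spaces, and tractably
>         only on small ones -- hence the tractability question."""
>         seen, frontier = set(), [initial]
>         while frontier:
>             state = frontier.pop()
>             if state in seen:
>                 continue
>             if not safe(state):
>                 return False
>             seen.add(state)
>             frontier.extend(successors(state))
>         return True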
>
> 5. Any system compatible with the known approaches to strong verification
> of Friendliness will need to be consistently rational, which is to say
> Bayesian from the ground up and have the structural property of being
> 'causally clean', although not necessarily driven by expected utility.
> When I first accepted these constraints, they seemed onerous to the point
> of making a tractable architecture impossible; all the 'powerful'
> techniques I knew of (improved GAs, stochastic codelets, dynamic-topology
> NNs, agent systems, etc.) were thoroughly probabilistic* and hence difficult
> to use or completely unusable. But after a period of research I now
> believe that there are acceptable and even superior replacements for all
> of these that are compatible with strong verification of Friendliness.
> I'm not going to defend that as anything more than a personal opinion at
> this time.
>
> * Annoying terminology conflict; 'probabilistic methods' are not the same
> thing as 'probabilistic logic'. The former are problem-solving techniques
> that don't reliably obey constraints and/or fail to show a reliable
> minimum performance in relation to normative decision theory; an analogy
> could be drawn to 'soft real time' instead of 'hard real time'. This is
> why saying 'Bayesian logic' to mean 'probabilistic logic' is not too bad
> an idea even if it causes people to fixate on one particular derivation.
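>
> A toy contrast between the two senses, purely my own illustration:
>
>     import random
>
>     def stochastic_maximise(f, x, steps=1000, scale=0.1):
>         """'Probabilistic method': hill climbing with random proposals.
>         No guaranteed error bound -- 'soft real time' in the analogy."""
>         best = x
>         for _ in range(steps):
>             candidate = best + random.gauss(0.0, scale)
>             if f(candidate) > f(best):
>                 best = candidate
>         return best  # hopefully near an optimum, but no proof of it
>
>     def bayes_update(prior, likelihood):
>         """'Probabilistic logic': exact posterior over hypotheses,
>         which satisfies sum(posterior) == 1 by construction."""
>         joint = {h: prior[h] * likelihood[h] for h in prior}
>         z = sum(joint.values())
>         return {h: p / z for h, p in joint.items()}
>
> The first reasons with randomness and offers no hard guarantee; the
> second reasons about uncertainty while reliably obeying its constraints.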
>
> 6. Basically, a rational system of this kind avoids unwanted interactions
> that would violate top-down constraints by constraining the way in which
> components can interact as you string them together. The resulting
> structure could reasonably be called fractal; combining any set of
> rational components in a rational framework produces a combined system
> that is still rational. Yes, I mean something specific and moderately
> complicated by 'rational' which I don't have space to fully describe.
> Yes, doing this without sacrificing tractability is hard, but at present
> I am optimistic that it will not turn out to be impossible. Yes, I am
> working on a practical experiment/demonstration, this will take some
> time, and I wish I had more resources to do it.
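>
> The closure-under-composition property can be caricatured in a few lines;
> the invariant-checking wrapper below is my own toy rendering of the idea,
> not the actual formal machinery:
>
>     # If every component maps valid states to valid states, then any
>     # pipeline built from them does too -- composition preserves the
>     # invariant 'for free'.
>     from typing import Callable, List
>
>     State = dict  # stand-in for whatever the real state type is
>
>     def checked(f: Callable[[State], State],
>                 valid: Callable[[State], bool]) -> Callable[[State], State]:
>         """Wrap f so it can never emit a state violating the invariant."""
>         def component(s: State) -> State:
>             out = f(s)
>             assert valid(out), "component broke the invariant"
>             return out
>         return component
>
>     def compose(parts: List[Callable[[State], State]]):
>         """The composite of checked components is itself checked,
>         which is the sense in which the structure is 'fractal'."""
>         def pipeline(s: State) -> State:
>             for part in parts:
>                 s = part(s)
>             return s
>         return pipeline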
>
> 7. Note that this introduces the notion of 'kinds of Complexity'; a
> system of this kind would be 'Complex' in some respects and non-Complex
> in others. There are plenty of existing technological systems that
> already look like this, so I see no reason to object to it.
>
> 8. Neither I nor the SIAI has claimed that this is the only way to build
> AGI; in fact if it were we'd sleep a lot safer at night. Unfortunately it
> seems entirely possible to build an AI using 'emergence', given enough
> brute force, neuroscience and/or luck. The SIAI's claim is that this is
> a /really bad idea/, because the result is highly likely to be inimical
> to human goals and morals. The claims that any transhuman intelligence
> will renormalise to a rational basis, and that this is actually a better
> way to develop AGI regardless of Friendliness concerns, are weaker ones
> and again stand only as opinion in public at this time.
>
> 9. No-one associated with the SIAI denies that the brain is an example
> of a 'Complex system', or that emergence as a concept will be useful
> for studying it. We do claim that it is a horrible mess, and that it
> isn't terribly relevant to the task of building an AGI compatible with
> strong Friendliness verification. The position that closely mimicking
> the brain isn't a good way to build AGI regardless of Friendliness is
> again opinion, but the position that an AGI built in this fashion will
> probably be Unfriendly is strongly justified from the previously
> mentioned arguments.
>
> 10. The issue of 'Friendliness content' is genuinely separate from
> 'Friendliness structure' and hence 'strong Friendliness verification'.
> The latter is perhaps a misnomer, as there is some theory that is
> applicable to any attempt to verify that an RPOP will do something
> specific, though it is true that there are some things we would probably
> want an FAI to do that require additional theory to describe and verify.
> Arguments about whether CV, or 'joy, choice and growth', or domain
> protection, or hedonism or Sysops or anything similar are a good idea
> are debates about Friendliness content. This is important, but it's
> well separated from issues of structural verification and tractable
> implementation, and different in character (because it involves what
> we want instead of how to do it).
>
> 11. Personally I am quite skeptical of Eliezer's ideas about
> Friendliness content, but I support his very important and (as far as
> I can see) valid work on structural verification. I do wish he'd
> publish more, but that criticism can be levelled against most people
> working on AGI, including me. It's true that neither Eliezer nor the
> SIAI has done much work on tractability, which is the main reason
> why I'm working on it. However I agree that the question of how to
> build something in the real world should follow that of how to build
> it in principle, and that people need to be convinced about the
> desirability and theoretical possibility of structural verification
> (of AGIs as general optimisers) before it makes sense to argue about
> whether we can do it with real software on contemporary hardware.
>
> 12. Finally, my objections to claims about the value of Complexity theory
> were summed up by one critic's comment that "Wolfram's 'A New Kind of
> Science' would have been fine if it had been called 'Fun With Graph
> Paper'". The field has produced a vast amount of hype, a small amount
> of interesting maths and very few useful predictive theories in other
> domains. Its proponents are quick to claim that their ideas apply to
> virtually everything, when in practice they seem to have actually been
> useful in rather few cases. This opinion is based on coverage in the
> science press and would be easy to change via evidence, but to date
> no-one has responded to Eliezer's challenge with real examples of
> complexity theory doing something useful. That said, general opinions
> such as this are a side issue; the specifics of AGI are the important
> part.
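>
> For anyone unsure what the 'graph paper' refers to: the canonical object
> of this kind is a one-dimensional cellular automaton such as Wolfram's
> Rule 110, whose entire update rule is an eight-entry lookup table yet
> which is known to be Turing complete. A minimal sketch:
>
>     # Rule 110: behaviour predictable from the parts in principle,
>     # but in practice only by running the simulation itself.
>     RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
>                (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
>
>     def step(row):
>         n = len(row)
>         return [RULE110[(row[(i-1) % n], row[i], row[(i+1) % n])]
>                 for i in range(n)]
>
>     row = [0] * 63 + [1]  # a single live cell on a ring of 64
>     for _ in range(30):
>         print(''.join('#' if c else '.' for c in row))
>         row = step(row)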
>
>
>>It may be that intelligence given limited resources intrinsically
>>requires stochastic algorithms, but that is a whole other issue.
>>Stochastic algorithms are not all that closely related to emergent
>>phenomena -- one can get both emergence and non-emergence from both
>>stochastic and non-stochastic algorithms.
>
>
> I agree, but in practice it does seem that stochastic systems are more
> likely to show/use emergence and vice versa.
>
> That said, I really must stop spending so much time writing emails.
>
> * Michael Wilson
>


