Re: Singularity Institute: Likely to win the race to build GAI?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Feb 15 2006 - 10:48:24 MST


> And if so, perhaps the Institute needs to put all its resources into
> researching and evangelizing Friendliness, then teaming up with the
> world's leading GAI researchers -- whether at MIT, Stanford, or
> wherever they are -- to add Friendliness to their development
> program.

You can't "add Friendliness". It adds requirements like determinism and
verifiability to the entire architecture. It rules out entire classes
of popular AI techniques, like evolutionary programming, plus all
combinatoric architectures based on mixing up a pot of tools and hoping
that intelligence pops out. When I added Friendliness to my list of
requirements, it took me an extra year (I was young) to notice that I
needed to throw out my entire AI theory and start over from scratch.
But I did. I wouldn't give good odds on any single pre-existing AI
project making that decision, out of the whole pool of candidates.

And even if 80% of them make that difficult decision, which is
improbable, the problem is still just as bad, because Friendly AI adds
extra effort and time to development, which the 20% of defectors don't
have to put in. So the good guys have to be smart enough to win even
with their handicaps, or you're screwed.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
