Re: Seed AI (was: How hard a Singularity?)

From: Michael Roy Ames (michaelroyames@hotmail.com)
Date: Sun Jun 23 2002 - 13:00:42 MDT


>
> > I think explicit education by humans will be an important part of
> > bootstrapping an AI to the level of being able to solve its own
> > problems. By the time human knowledge is even comprehensible to the
> > AI, most of the hard problems will have already been solved and the
> > AI will probably be in the middle of a hard takeoff.
>
> I doubt this is how things will go. I think human knowledge will be
> comprehensible to an AI *well before* the AI is capable of
> drastically modifying its own source code in the interest of vastly
> increased intelligence.
>

Above we have a number of different ideas all mushed together -->

1a) Source code

1b) Understanding the immediate/computational purpose of source code

1c) Understanding the meta-purpose of source code

1d) Improving source code while maintaining the immediate/computational
purpose

1e) Improving source code while maintaining the meta-purpose

---
2a) Human knowledge
2b) Understanding human knowledge
---
I think that the level of intelligence required for 1e) is
substantially the same as that required for 2b). [Purpose =
what-it-does; meta-purpose = the reasoning behind the design.]
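
To make the purpose/meta-purpose distinction concrete, here is a toy
Python sketch (hypothetical code of my own, purely for illustration):

import heapq

# Immediate/computational purpose: return the k smallest readings.
def k_smallest_v1(readings, k):
    return sorted(readings)[:k]

# A 1d)-style improvement: faster on large inputs, but the immediate
# purpose is preserved exactly (same output for the same input).
def k_smallest_v2(readings, k):
    return heapq.nsmallest(k, readings)

# A 1e)-style improvement needs the *meta*-purpose: suppose the design
# reasoning was "the caller only checks membership among the outliers".
# Then an unordered set serves that reasoning better, even though the
# immediate behaviour (a sorted list) is deliberately changed.
def outliers(readings, k):
    return set(heapq.nsmallest(k, readings))

The point: a 1d) improvement can be verified against the old
behaviour, but a 1e) improvement cannot, because you have to
understand *why* the code was written in the first place.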

One other thing: it would be prudent to design a take-off trajectory
in which "understanding human knowledge" carries significant (equal?)
value relative to "gaining intelligence". Being really smart is all
very nice, but without an understanding of what has come before, much
perspective is lost. We want not just *intelligence*, but the right
kind of intelligence.
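
Here is a toy sketch of what such trajectory-shaping might look like
(hypothetical numbers and names, nothing more than an illustration):

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    intelligence_gain: float    # expected gain in raw capability
    understanding_gain: float   # expected gain in grasp of human knowledge

# Equal weights encode the "significant (equal?) value" suggestion above.
def action_value(a, w_intelligence=1.0, w_understanding=1.0):
    return (w_intelligence * a.intelligence_gain
            + w_understanding * a.understanding_gain)

candidates = [
    Action("rewrite inference core", 0.9, 0.1),
    Action("study human knowledge corpus", 0.3, 0.9),
]

best = max(candidates, key=action_value)
print(best.name)   # -> "study human knowledge corpus" (1.2 vs. 1.0)

With the understanding weight set near zero, the same chooser happily
spends every cycle on raw capability, which is exactly the loss of
perspective I am worried about.
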
Michael Roy Ames
