From: Brian Atkins (firstname.lastname@example.org)
Date: Tue Jul 31 2001 - 13:27:49 MDT
Ben Goertzel wrote:
> > > Just as I myself was a human-equivalent intelligence well before I
> > > became a master of AI and computer science myself ;)
> > >
> > Yes, but no one doubled your total number of neurons around age 18.
> > And then doubled it again at 19, 20, 21... Do you think doubling a
> > human-level Webmind's mind resources would lead to little improvement
> > in effective intelligence, double improvement, or much more, and why?
> Hmmm. Sure, that's a good point. Of course, having more memory would lead
> to an improvement in effective intelligence. The problem is that what's
> needed to make a WM able to intelligently rewrite its own source is not just
> "raw intelligence", but knowledge about CS and AI theory. To get that
> knowledge it needs to either read and fully "get" human research papers and
> textbooks (which requires not only human-LEVEL intelligence, but a
> particular understanding of human language and the pragmatics of human
> discourse), or have a lot of long teaching sessions with humans.
Right, but raw intelligence dramatically affects learning speed. It can
even make the difference between what an AI can understand and what it
can't. Within just the very small range of human intelligence, we have
kids who graduate from college in their teens with physics degrees, and
we have people with IQs below 100 who could never do that even given 50
years. Now, what if we could exceed that small range dramatically, with
something of much higher raw intelligence than the kid genius that can
also read documents at computer speeds?
Do you think giving a human-level Webmind more resources (not just memory,
but ops) will increase its raw intelligence, or can this only be done
through it rewriting its code into new designs?
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/