RE: New Singularity-relevant book

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Oct 23 2002 - 09:35:43 MDT


Hi,

Bill Hibbard wrote:
> The difference between human and animal consciousness can be
> described in terms of whether animal minds include models of
> other animals' minds, of events tomorrow, etc. Similarly, I think
> a key difference between human and machine consciousness will
> be the machines' detailed model of billions of human minds, in
> contrast to our detailed model of about 200 human minds.
>
> Because of the physical limits of human brains, our models of
> billions of human minds are averaged out. But machine brains
> that exceed our physical limits will have detailed models
> of billions of human minds.

This ties in with a key point that has often been made on this list:
superintelligent AIs will have a superior ability to model and analyze
*themselves*.

I think that in the early stages of human-level general intelligence, the
ability of an AI to model billions of human minds will be useful to it.

Once it passes a certain level of intelligence, however, modeling human
minds will be no more interesting to it than modeling cockroach minds is to
a human.... Modeling other superintelligent minds (if indeed there is more
than one) will more likely be of interest...

> The statement "the essential property of consciousness in humans
> and animals is that it enables brains to process experiences that
> are not actually occurring" says something pretty rigorous. The
> simplest animal brains can only process events as they happen.
> But at some level of evolution, brains break free of "now".
>
> And the temporal credit assignment problem is a well-known
> and rigorous problem. There has been some very exciting
> neuroscience research into how brains solve this problem, at
> least when delays between behaviors and rewards are short and
> predictable, in the paper:
>
> Brown, J., Bullock, D., and Grossberg, S. How the Basal Ganglia
> Use Parallel Excitatory and Inhibitory Learning Pathways to
> Selectively Respond to Unexpected Rewarding Cues. Journal of
> Neuroscience 19(23), 10502-10511. 1999.
>
> This is available on-line at:
>
> http://cns-web.bu.edu/pub/diana/BroBulGro99.pdf
>
> I think that the need to solve the temporal credit assignment
> problem when delays between behaviors and rewards are not
> short and predictable was the selectional force behind the
> evolution of consciousness. Any known effective solution to
> this problem requires a simulation model of the world.

I basically agree with these comments. In our work with Webmind and
Novamente, we found that what we call "experiential schema learning" was the
hardest problem we faced. And temporal credit assignment is one of the
tricky parts of experiential schema learning....
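
For readers who haven't run into the problem before, here is a minimal
sketch of the standard short-delay machinery -- TD(lambda) with
eligibility traces, which is roughly the computation the basal ganglia
paper above models. The toy random-walk environment and all parameter
values are invented for illustration; this is emphatically NOT how
Webmind or Novamente approached it:

import random

N_STATES = 6                     # states 0..5; reaching state 5 pays reward 1
ALPHA, GAMMA, LAMBDA = 0.1, 0.95, 0.8

values = [0.0] * N_STATES        # V(s): predicted discounted future reward

for episode in range(500):
    traces = [0.0] * N_STATES    # eligibility: how recently each state was visited
    state = 0
    while state != N_STATES - 1:
        # biased random walk: usually step right, sometimes slip left
        if random.random() < 0.7:
            next_state = min(state + 1, N_STATES - 1)
        else:
            next_state = max(state - 1, 0)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # TD error: how much better or worse this step went than predicted
        delta = reward + GAMMA * values[next_state] - values[state]

        # mark the current state eligible, then share the error backward:
        # every recently visited state absorbs a decaying slice of credit
        traces[state] += 1.0
        for s in range(N_STATES):
            values[s] += ALPHA * delta * traces[s]
            traces[s] *= GAMMA * LAMBDA

        state = next_state

print([round(v, 2) for v in values])   # values rise toward the rewarded end

The decaying trace is exactly what breaks down when delays get long and
irregular -- credit fades out before the reward arrives -- which is where
Bill's point about world models comes in.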

It's a problem that human minds solve only by using some really poor
heuristics, and current narrow-AI programs don't solve at all. We didn't
really solve it in Webmind, though we were starting to make some interesting
headway on small problems when the project died for financial reasons in
March 2001. At our current rate of progress we're probably a year from
approaching this problem in Novamente (though a couple more kick-ass
volunteer coders with a lot of time on their hands could speed things up a
little... ;).
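
To make Bill's closing claim concrete -- that every known effective
solution for long, unpredictable delays requires a simulation model of
the world -- here is a hedged Dyna-style sketch: the agent records a
crude model of the transitions it has experienced and rehearses imagined
transitions from it, so reward can propagate back to actions taken long
before it arrived. Again purely illustrative, with an invented corridor
environment; nothing here reflects the Webmind or Novamente designs:

import random

ALPHA, GAMMA = 0.1, 0.95
PLANNING_STEPS = 30              # imagined transitions replayed per real step
N_STATES, ACTIONS = 8, (-1, +1)  # a corridor; reward only at the far end

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                       # learned world model: (s, a) -> (reward, next)

def world(state, action):
    """The real environment: a slow corridor with one distant reward."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return (1.0 if nxt == N_STATES - 1 else 0.0), nxt

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy choice between stepping left and stepping right
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        reward, nxt = world(state, action)

        # learn directly from the real transition
        best = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best - q[(state, action)])

        # record the transition, then rehearse from the internal model;
        # the rehearsal is what carries credit back across long delays
        model[(state, action)] = (reward, nxt)
        for _ in range(PLANNING_STEPS):
            (s, a), (r, n) = random.choice(list(model.items()))
            best_n = max(q[(n, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_n - q[(s, a)])

        state = nxt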

-- Ben G


