Re: [sl4] Long-term goals

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Fri Jul 04 2008 - 16:07:43 MDT


On Friday 04 July 2008 01:59:54 pm Lee Corbin wrote:
> Charles writes
>
> > [Lee wrote]
> >
> >> The thing hardly qualifies as an AI if it doesn't have the ontology
> >> of a three year old. And if an AI can understand what trees, cars,
> >> tablespoons, and about twenty thousand other items are---that is,
> >> can reliably classify them from either sight (pictures) or feel---then
> >> it's going to know what a human being is, though (just as with
> >> any of us) it will be undecided about some borderline cases.
> >
> > One doesn't start out knowing about the world. One learns about it. A
> > three year old may have ontologies and know about object persistence, but
> > a new born doesn't, and presumably you aren't doing basic restructuring
> > after it becomes intelligent (i.e., when you're through writing it), only
> > before hand.
>
> Babies are really complicated. We have to be careful not to
> extrapolate too much. But the baby brain is still growing
> very quickly, and for a long time. It could be that it acquires
> its ontological beliefs *as* it's getting smarter.
>...
> Lee

Yes, babies are really complicated. But object permanence is one of the
things that they learn after they're born. (I *suspect* that it's via
Bayesian reasoning, and doesn't require any built-in presumption about which
nerve signal means what.) OTOH, I'm certain that there are lots of complexities that I haven't
yet considered that will turn out to be essential. I'm not sure how to
disentangle those which are essential from those that appear to be optional
extras, but it sure would be nice to be able to do so before investing years
in trying to teach it. OTOH, one doesn't want to build in anything excessive
for two reasons:
1) It's a lot of work on a project that's probably already more than I can
handle, and
2) Anything you build in is a point of rigidity, which will render the system
less flexible. This is true even if the system has the ability to rewrite
the original code (which I think should only be true to a limited degree).
This is because in Bayesian logic your current beliefs bias the conclusions
that you will draw from any information that you later encounter. And
Bayesian probabilities seem the best way to draw conclusions when there is
insufficient data. OTOH, it's a computationally expensive process, and
where good alternatives exist they should be employed. E.g., once you know
that you're dealing with an arithmetic series, guessing the next number via
Bayesian prediction is foolish. But to originally decide that it *is* an
arithmetic series, something like Bayesian inference is exactly what you
need. So you build in efficient ways to do things, but you are very careful
about how insistent you are that any particular way be used at any
particular time.
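
As a rough illustration of that trade-off (not something from the discussion
above -- the two hypotheses, the noise levels, and names like next_term are
purely illustrative assumptions), here is a minimal Python sketch: a cheap
likelihood comparison decides *whether* a sequence looks like an arithmetic
series, and only once that is settled is the next term extrapolated directly
rather than predicted probabilistically.

  # Minimal sketch: compare two hypotheses about a number sequence --
  # "arithmetic series plus Gaussian noise" vs. "i.i.d. noise around a
  # constant" -- then extrapolate cheaply once the first hypothesis wins.
  import math

  def gaussian_log_pdf(x, mean, sigma):
      return -0.5 * ((x - mean) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

  def log_lik_arithmetic(seq, sigma=0.5):
      """Log-likelihood under: x_i = a + d*i + noise, with d fit crudely."""
      diffs = [b - a for a, b in zip(seq, seq[1:])]
      d = sum(diffs) / len(diffs)          # estimated common difference
      a = seq[0]
      return sum(gaussian_log_pdf(x, a + d * i, sigma) for i, x in enumerate(seq))

  def log_lik_constant(seq, sigma=5.0):
      """Log-likelihood under: x_i = mu + noise, i.e. no trend at all."""
      mu = sum(seq) / len(seq)
      return sum(gaussian_log_pdf(x, mu, sigma) for x in seq)

  def next_term(seq):
      """Pick the better-supported hypothesis (equal prior odds assumed),
      then extrapolate cheaply under that hypothesis."""
      if log_lik_arithmetic(seq) > log_lik_constant(seq):
          diffs = [b - a for a, b in zip(seq, seq[1:])]
          d = sum(diffs) / len(diffs)
          return seq[-1] + d               # cheap deterministic extrapolation
      return sum(seq) / len(seq)           # best guess if there is no trend

  print(next_term([2, 5, 8, 11, 14]))      # -> 17.0

The point is only the division of labor: the probabilistic machinery is used
for the model-selection step, while the step-by-step prediction falls back on
the cheap built-in method once the model has been settled.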


