RE: One road or many to AI? (was: brainstorm)

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Aug 19 2003 - 21:59:54 MDT


Well, consider a related question: Is there a single, clearly optimal human
brain/mind, given a fixed number of neurons, a fixed fund of
neurotransmitters, and a moderately specific set of "goals for human life"?

I doubt it. I reckon that given any reasonably subtle set of goals for
human life, there are a lot of qualitatively different brain/mind structures
that are near-optimal, rather than one that is clearly superior to all
others except its near neighbors.

Anecdotal evidence for this comes from looking at sets like "the set of great
theoretical physicists" or "the set of great programmers." The set of great
theoretical physicists does not cluster around one archetypal "ideal
theoretical physicist"; rather, it contains a heterogeneous bunch of
qualitatively rather different brain/minds. It looks to me like the fitness
landscape of "greatness at theoretical physics" is strongly multimodal,
rather than unimodal-after-smoothing.

I'm guessing that "intelligence given a fixed fund of computational
resources" is even more strongly multimodal than "greatness at theoretical
physics given a human brain."

But as Rafal pointed out in his post --- for sure, all we can do is
speculate... none of us has anywhere near the knowledge needed to make a
definitive statement on this issue.
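
As an aside on Eliezer's "many processes with Bayes-structure, one Bayes'
Theorem" point below, here's a minimal illustration (toy numbers of my own
invention: a diagnostic test with 99% sensitivity, 95% specificity, and a
1% base rate). The probability form and the odds form of the theorem look
like different procedures, but they're alternate implementations of the
same unalterable function and agree exactly:

# Two apparently different procedures, one Bayes' Theorem.
p_h = 0.01              # prior P(H): base rate of the condition
p_e_given_h = 0.99      # likelihood P(E|H): test sensitivity
p_e_given_not_h = 0.05  # false-positive rate, i.e. 1 - specificity

# Implementation 1: probability form, P(H|E) = P(E|H) P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior_prob_form = p_e_given_h * p_h / p_e

# Implementation 2: odds form, posterior odds = prior odds * likelihood ratio
prior_odds = p_h / (1 - p_h)
likelihood_ratio = p_e_given_h / p_e_given_not_h
posterior_odds = prior_odds * likelihood_ratio
posterior_odds_form = posterior_odds / (1 + posterior_odds)

print(posterior_prob_form)   # 0.1666...
print(posterior_odds_form)   # 0.1666..., the same answer by another route
assert abs(posterior_prob_form - posterior_odds_form) < 1e-12

The surface operations differ, but there's only one thing both
computations are doing.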

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of
> Eliezer S. Yudkowsky
> Sent: Tuesday, August 19, 2003 1:35 PM
> To: sl4@sl4.org
> Subject: One road or many to AI? (was: brainstorm)
>
>
> I am in agreement with James Rogers, due to this generalization from
> personal experience: When you know what you are doing, there is only
> ever one thing *to* do, even if there is more than one way to do it;
> the options you have are not nervously ambiguous; they are not chosen
> in uncertainty as to the function being fulfilled. There may be more
> than one way to build an AI if you do *not* really understand what
> you are doing; evolution's construction of an evolution-unfriendly
> humanity comes under this header. But if you know what you are doing,
> then on the most important level of description, your work consists
> of choosing implementations for required goals that have only one
> obvious correct description. There are many kinds of functional
> processes with Bayes-structure; there is only one Bayes' Theorem,
> there is only one thing that those processes are doing. That is how
> you know you are starting to understand something - when your
> apparent options vanish, merging into alternate implementations of a
> function that is not alterable. A high school math student who is
> following memorized rules of algebra to solve simultaneous equations
> might imagine that the operations, by being applied in a different
> order, might yield different answers. He might take a stab here, take
> a stab there, manipulate the equation this way and that - look at how
> many different things there are to do! Maybe if you find a special
> order of operations, you can make the answers come out differently?
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>


