From: Michael Vassar (firstname.lastname@example.org)
Date: Thu Mar 23 2006 - 05:23:41 MST
>What exactly would be the result of teaching him to do applied Bayesian
>reasoning? That he would become a Superhuman intelligence, or that he
>would help to develop an AGI?
Well, "super" is one of those adjectives that describes a trait that recedes
as approached. I wouldn't bother talking to anyone who wasn't running mental
software that makes them "super" relative to a "natural" hairless ape who
was not running such software. From the perspective of contemporary human
rationalists with human values the point of almost any rationally applied
intelligence is to develop FAI.
>Would you not have to decide on a set of goals to give him as he became a
>BDT system, to guide the reasoning processes?
We don't have technology to install goals into a human, but humans come
equipped with goals other than paperclip maximization. A human who
implemented BDT rapidly and accurately would still be a far cry from a
simple BDT system. They would basically be a less defective new kind of human.
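For readers unfamiliar with what "implementing BDT" amounts to, here is a minimal sketch of the two steps Bayesian decision theory prescribes: update beliefs with Bayes' rule, then pick the action that maximizes expected utility. The hypotheses, likelihoods, and utilities below are illustrative numbers of my own, not anything from the discussion.

```python
# Minimal Bayesian-decision-theory sketch: Bayes update, then
# expected-utility maximization. All numbers are illustrative.

def bayes_update(prior, likelihood):
    """Posterior over hypotheses given the likelihood of observed evidence."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def best_action(posterior, utility):
    """Choose the action with the highest expected utility."""
    def expected_utility(action):
        return sum(posterior[h] * utility[action][h] for h in posterior)
    return max(utility, key=expected_utility)

# Two hypotheses; the evidence is twice as likely under h1 as under h2.
prior = {"h1": 0.5, "h2": 0.5}
likelihood = {"h1": 0.8, "h2": 0.4}
posterior = bayes_update(prior, likelihood)  # h1 -> 2/3, h2 -> 1/3

# Payoffs for each action under each hypothesis (hypothetical).
utility = {
    "act_a": {"h1": 10, "h2": 0},   # EU = 2/3 * 10       ≈ 6.67
    "act_b": {"h1": 2,  "h2": 6},   # EU = 2/3 * 2 + 1/3 * 6 ≈ 3.33
}
print(best_action(posterior, utility))  # prints "act_a"
```

A human approximating this would of course do it with rough, implicit probabilities rather than explicit tables, which is part of the "far cry" above.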
>Does his ability to handle some types of math extend to an ability to keep
>a goal stack in his head?
No idea. I think most humans can do that reasonably well with a little practice.
>And do we know anything about/have any control over the semantics and the
>grounding mechanisms that he would use to connect his facts to the real world?
Is this any different from any other human? The "grounding methods" are
already in place.
>If it were not for the fact that we are talking about a human being here,
>and not a computer, I would be interested to see the outcome of such an experiment.
Do you usually consider teaching people things to be human experimentation,
and suspect it, a priori, of being inhumane?
>It feels a little uncomfortable to think of anyone seriously picking up the
>phone and trying to involve him in such a project unless he understood
>exactly what it was that he was getting into.
By definition, without BDT he can't know what he's getting into, but neither
did any of us before acquiring most of the mental software we are running.
Insofar as he is intentional he is already running approximations to BDT,
and at least some of those approximations should hold preferences for their own refinement.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT