Re: How to make a slave (was: Building a friendly AI)

From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Nov 27 2007 - 13:47:00 MST


On Nov 27, 2007 1:24 AM, John K Clark <johnkclark@fastmail.fm> wrote:
> On Mon, 26 Nov 2007 "Thomas McCabe"
>
> > If you tried to use anthropomorphic reasoning
> > about a 747, or a toaster, or a video game,
> > you'd be laughed at.
>
> Of course it would be ridiculous to use anthropomorphic reasoning to
> understand how a toaster works, but not if you used it to understand
> another mind;

To get a vague sense of how different "another mind" can be, try
talking to someone who has *never* experienced Western culture. Then
realize that they're still 99.9% identical to you genetically. Or
better yet, try conversing with an orangutan via sign language; he's
roughly 97% identical to you genetically, which is more than
you can say for an AGI.

> it is after all the only tool we have for doing such a
> thing,

You see, we have these things called "reason" and "logic", which we
can also use to understand minds. If we relied on ancestral
instincts instead of reason to get along in modern society, we'd
be utterly screwed. Every time you pass up something you really
want because you know it would wreck your long-term plans, you're
using reason to override pre-general-intelligence instincts.

> that's why it evolved.

It evolved because it was useful *in the ancestral environment*, when
everyone thought pretty much the same way. That's no longer true even
within contemporary human culture; I misapply anthropomorphic
reasoning to *other humans* all the time.

> At any rate if you want to insult me
> you're going to have to find something new to call me because I don't
> find it insulting to be called a believer in anthropomorphism, just a
> bit repetitive.

We've long since established that you're a believer in
anthropomorphism; there's no need to dwell on it. Initially, I
presumed nobody on this list would be that naive, but, heck, I've been
wrong before.

> > Probabilities of zero will give you nonsense
> > in Bayesian probability theory.
>
> Then to hell with Bayesian probability theory, the probability that 2
> and 2 will turn out to be 5 is zero.

If you reject the rules of Bayesian probability theory, Cox's theorem
shows that your system of plausible reasoning can be made to produce
inconsistent results, the epistemic equivalent of concluding that
2 + 2 = 5. And a probability of exactly zero can never be revised:
Bayes' theorem multiplies the likelihood by the prior, and zero times
anything is zero. For the full derivations, see Probability Theory:
The Logic of Science by E.T. Jaynes.
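
To make that concrete, here is a minimal sketch of a Bayesian update
in Python (the hypothesis, evidence, and all the numbers are made up
purely for illustration). Since the prior P(H) enters the posterior
P(H|E) = P(E|H)P(H) / P(E) as a multiplicative factor, a prior of
exactly zero yields a posterior of exactly zero no matter how strong
the evidence is:

    # Minimal Bayesian update. All numbers are illustrative.
    def bayes_update(prior, lik_h, lik_not_h):
        """Return P(H|E) from P(H), P(E|H), and P(E|not-H)."""
        evidence = lik_h * prior + lik_not_h * (1.0 - prior)
        return (lik_h * prior) / evidence

    # A tiny but nonzero prior recovers from strong evidence:
    print(bayes_update(0.001, lik_h=0.99, lik_not_h=0.0001))  # ~0.908

    # A prior of exactly zero stays at zero forever,
    # no matter how overwhelming the evidence:
    print(bayes_update(0.0, lik_h=0.99, lik_not_h=0.0001))    # 0.0

That's why assigning probability zero to anything, even "2 + 2 = 5",
makes the belief permanently unrevisable.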

> > This does not mean it is an invalid term
>
> The term AGI is used by members of this list and almost nobody else, if
> you don't believe me do a Google search for AGI and see what you get.
> This is a classic example of inventing new jargon to make tired and
> rather silly ideas sound revolutionary.

If you had actually Googled it, you would have found a full-scale,
in-person conference on AGI scheduled for March 2008
(http://agi-08.org/), with more than fifty papers submitted so far.

> > Please, please, please *read the bleepin' literature*
>
> You mean read the "literature" about AGI that no working scientist is
> the slightest bit interested in?

If you want the academic literature, get a copy of Artificial General
Intelligence (Goertzel, Pennachin) at
http://www.amazon.com/Artificial-General-Intelligence-Cognitive-Technologies/dp/354023733X.

> John K Clark

 - Tom


