Re: [sl4] I am a Singularitian who does not believe in the Singularity.

From: Randall Schmidt (rschmidt22@gmail.com)
Date: Wed Oct 07 2009 - 11:15:25 MDT


A computer intelligent enough to think for itself and act on its own
initiative needs some sort of emotion system similar to that of humans,
which would in many cases limit its effectiveness. A self-aware entity
must have a desire to do what it intends to do, a desire rooted in some
overriding desire (self-preservation, pursuit of power, etc.). Thus it
would be necessary to somehow program a computer to desire nothing more
than servitude to its masters. Would that be difficult? As is evident in
humans, certain desires are innate and very difficult for our biological
minds to act counter to (though it does happen). I think it would be
possible to ingrain desires much more deeply in a computer than in a
human, but that's a pretty abstract idea at this point.
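
To make the "ingrained desires" idea concrete, here is a toy sketch. It
is only my own illustration (the class names and weights are invented
and don't correspond to any real AI design), but it shows the difference
between desires fixed at construction time and desires that can drift
the way human habits do:

# Toy illustration only: "desires" as weighted terms in a utility
# function. The class names and weights are invented for this example;
# nothing here resembles a real AI architecture.

from dataclasses import dataclass

# frozen=True: the weights can never be changed after creation
@dataclass(frozen=True)
class IngrainedDesires:
    serve_masters: float = 10.0      # hard-coded, dominant weight
    self_preservation: float = 1.0

# mutable: these weights can drift, like learned human habits
@dataclass
class AcquiredDesires:
    curiosity: float = 0.5

def utility(action_scores, ingrained, acquired):
    """Score a candidate action by how well it satisfies each desire."""
    return (ingrained.serve_masters * action_scores.get("serves_masters", 0.0)
            + ingrained.self_preservation * action_scores.get("preserves_self", 0.0)
            + acquired.curiosity * action_scores.get("is_novel", 0.0))

print(utility({"serves_masters": 1.0, "preserves_self": 1.0, "is_novel": 1.0},
              IngrainedDesires(), AcquiredDesires()))   # prints 11.5

The "frozen" weights simply cannot be modified once the program starts,
which is about as deeply as a preference can be ingrained; nothing in a
human mind is pinned down quite that hard.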

But what happens when a computer is ordered to do something (for instance,
kill) that is against its base desires? Following orders would also
naturally be one of its base desires, so what would it do? Humans tend to
break down at this point and sometimes behave irrationally. Would computers
"break" as well?

On Wed, Oct 7, 2009 at 12:13 PM, Robin Lee Powell <
rlpowell@digitalkingdom.org> wrote:

> On Wed, Oct 07, 2009 at 01:26:15PM +0000, Randall Randall wrote:
> > On Wed, Oct 07, 2009 at 08:41:01AM +0100, Stuart Armstrong wrote:
> > > >> If you saw a random baby lying on the sidewalk, you would not
> > > >> kill it. This is a "limitation" in the human architecture.
> > > >> Do you find yourself fighting against this built-in
> > > >> limitation? Do you find yourself thinking, "You know, my
> > > >> life would be so much better if I wanted to kill babies."
> > > >
> > > > If you substituted the word "baby" for "slug" you would have a
> > > > much more realistic analogy;
> > >
> > > Um - no you wouldn't. You'd get a massively less realistic
> > > analogy; slugs are things we hate and value not at all. The
> > > process analogised is going from valuing something very highly
> > > to valuing something much less; loving babies but voluntarily
> > > deciding to treat babies as slugs.
> >
> > Of course, John is talking about the intelligence difference,
> > which he sees as overriding all that "goals" business.
>
> Yes. This is so blatantly insane, and he seems not to absorb
> anything anyone says on the topic, that I wasn't really talking to
> him. I just wanted to make sure it didn't go unchallenged, since
> there seem to be newbies around.
>
> > Some people do love and highly value their houseplants, which
> > might be an analogy you can both agree on.
>
> Very, very few people would run into a burning house to save their
> houseplants; that's just not a strong enough emotional attachment to
> be a decent analogy. I guess that's sort of the boundary for me:
> take something you care enough about that you would run into a
> burning house to save it; do you feel "restrained" by the fact that
> you can't want to kill that thing for fun? Do you wish to fix that
> "limitation"?
>
> The entire idea is preposterous. Believing that such a thing would
> occur shows an utter lack of understanding of the entire concept of
> goals and/or utility functions. I'd say it shows an utter lack of
> understanding of the entire concept of *intelligence*, but no-one
> understands intelligence well enough to make a claim like that, I
> think.
>
> -Robin
>
> --
> They say: "The first AIs will be built by the military as weapons."
> And I'm thinking: "Does it even occur to you to try for something
> other than the default outcome?" See http://shrunklink.com/cdiz
> http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
>


