Re: I am a moral, intelligent being (was Re: Two draft papers: AI and existential risk; heuristics and biases)

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Tue Jun 06 2006 - 17:15:30 MDT


On Tue, Jun 06, 2006 at 11:00:07PM +0000, rpwl@lightlink.com wrote:
> Martin Striz wrote:
> > On 6/6/06, Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:
> >
> >> Again, you are using the word "control" where it simply does
> >> not apply. No-one is "controlling" my behaviour to cause it to
> >> be moral and kind; I choose that for myself.
> >
> > Alas, you are but one evolutionary agent testing the behavior
> > space. I believe that humans are generally good, but with 6
> > billion of them, there's a lot of crime. Do we plan on building
> > only one AI?
> >
> > I think the argument is that with runaway recursive
> > self-improvement, any hardcoded nugget approaches
> > insignificance/obsolescence. Is there any code you could write
> > for which nobody, no matter how many trillions of times
> > smarter, could find a workaround?
>
> Can we all agree on the following points, then:
>
> 1) Any attempts to put crude (aka simple or "hardcoded")
> constraints on the behavior of an AGI are simply pointless,
> because if the AGI is intelligent enough to be an AGI at all, and
> if it is allowed to self-improve, then it would be foolish of us
> to expect that it could be (a) aware of the existence of the
> constraints and yet (b) unable to do anything about them.
>
> 2) Nevertheless, it could be designed in such a way that it would
> not particularly feel the need to do anything about its overall
> design parameters, if those were such as to bias it towards a
> particular type of behavior. In other words, just because it is
> designed with a certain behavioral bias, that doesn't mean that as
> soon as it realizes this, it will feel compelled to slough it off
> (let alone feel angry and resentful about it).

I agree with both those points, yes. Well put, too.

> I tried to make these points when I first started writing to this
> list a year ago, by referring to what is known of the design [sic]
> of the human mind. I am fairly sure that evolution has designed me
> with a set of fairly vague "motivations", some of which are
> nurturing or cooperative (to speak very loosely) and some of which
> are aggressive and competitive. I also know that the former
> [thankfully] are strongly dominant over the latter. In particular,
> I feel an irrational affection for and attachment to loved ones,
> and to a broad spectrum of the world's population.
>
> And yet, even though I *know* that this is a design feature of my
> system (something I am just as compelled to feel as Lorenz's ducks
> were compelled to imprint on him), and even though I expect one day
> to be able to see the exact mechanism that causes it, I feel not
> even slightly compelled to overthrow it, or to be resentful of it.

*Precisely*. Thanks.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/