Re: Notice of technical term: "Volition"

From: David K Duke (davidisaduke@juno.com)
Date: Tue Jun 15 2004 - 15:49:29 MDT


> In the initial dynamic? Yesssssssssss! Because there's no way in hell
> that your current conscious will is going to understand the successor
> dynamic, much less construct it. No offense.

Okay, I understand. But at the same time, I just don't want to transcend
yet. And I don't think you have the right to tell me (or program
something) to make me do that. I suppose that doesn't matter to you,
though, if you've thought that far, does it?

> Will the successor dynamic do anything without your current conscious
> will? My guess is, yes, that is what is wise/benevolent/good-of-humanity
> etc., which is to say, if you thought about it for a few years you would
> decide that that was the right decision. Philip Sutton and some others
> seem to think that this is not only unwise, it would be unwise for the
> successor dynamic to do anything other than present helpful advice.

Once there is a (virtually) all-powerful, benevolent being, what's the
hurry to force anything on us? Do you know about a not-so-friendly AI
traveling at light speed towards us or something?

> If you write an initial dynamic based on individual volition, then as
> discussed extensively in the document, you are *pre-emptively*, on the
> basis of your personal moral authority and your current human
> intelligence, writing off a hell of a lot of difficult tradeoffs without
> even considering them. Such as whether anyone fanatically convinced of a
> false religion ever learns the truth, whether it is theoretically
> possible for you to even try to take heroin away from drug addicts, and
> whether infants grow up into humans or superinfants. I do not say that I
> know the forever answer to these moral dilemmas. I say I am not willing
> to make the decision on my own authority.

When you undertake the creation of such a powerful being, you're already
doing that!

> No, not even for the sake of public relations. The public relations
> thingy is a lost cause anyway.

Well, it surely is now. Do you think the politicians and military would
just hand over their wills, even if you say their "future" selves would
approve? There's a good chance this concept of yours could get some very
bad press - or worse - governmental/police/military/whatever
intervention.

> Also, you must correctly define the "is-individual?" predicate in
> "individual volition" exactly right on your first try, including the
> whole bag of worms on sentience and citizenship, because if the initial
> dynamic can redefine "is-individual?" using a majority vote, you aren't
> really protecting anyone.

I very much doubt that, at least initially, humanity would vote to merge
into a single Jupiter-brain (since 99.5%+ of them aren't
Singularitarians), or whatever SL4 topic about sentience you wanna throw
at me. It's similar to many of those improbable moral dilemmas that won't
have any application in the real world.

> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence

DtheD



