From: David K Duke (firstname.lastname@example.org)
Date: Tue Jun 15 2004 - 14:01:44 MDT
> This is *not* what I want. See my answer to Samantha about the hard
> part of the problem as I see it. I want a transparent optimization
> process to return a decision that you can think of as satisficing
> the superposition of probable future humanities, but that actually
> satisfices the superposition of extrapolated upgraded
> superposed-spread present-day humankind.
So will the individual have a conscious, present choice in whether to
take part in this "optimization"?
Let's say SingInst becomes hugely popular, and a reporter asks you about
the AI: "So what's this big computer do anyway?"
You: "Well, it's going to [insert esoteric choice of words here]."
Reporter: "Doesn't that mean it's gonna do stuff without our presently
consenting?"
Reporter: "Run for your lives and insurance companies!"
Basically, what I'm asking is: will it do this without my current
conscious will? Yes or no?
> Eliezer S. Yudkowsky http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence