From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Thu Aug 10 2006 - 12:31:55 MDT
Jef Allbright wrote:
> Your statement "some we might want a superintelligence to maximize..."
> obscures the problem of promoting human values with the presumption
> that "a superintelligence" is a necessary part of the solution. It
> would be clearer and more conducive to an accurate description of the
> problem to say "some values we may wish to maximize, others to ..."

Anissimov's phrasing is correct. The set of values we wish an external
superintelligence to satisfice or maximize is presumably a subset of the
values we wish to satisfice or maximize through our own efforts. What
you would-want your extrapolated volition to do *for* you, if you knew
more and thought faster, is not necessarily what you would-want to do
for yourself if you knew more and thought faster.
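For concreteness, one standard decision-theoretic gloss on the
satisfice/maximize distinction (the notation here is illustrative, not
anything from the thread):

    maximize:   choose a* = argmax_{a in A} u(a)
    satisfice:  choose any a in A with u(a) >= theta,
                for some aspiration threshold theta

On that reading, the subset claim above says that V_SI, the value set
one delegates to an external superintelligence, is contained in (and
may be strictly smaller than) V_self, the value set one would pursue
through one's own efforts.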
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence