From: Alex F. Bokov (firstname.lastname@example.org)
Date: Fri Oct 24 2008 - 10:26:42 MDT
Actually, I'm not talking about individual volition. I'm talking about
self-organizing clusters of coherent extrapolated volition. At the
moment, it seems that Eliezer is willing to concede failure of CEV
altogether if it fails to converge on one consensus (his Last Judge
comments and other parts of CEV.html). I'm just asking: why not
empirically determine the optimal number of CEVs and follow them all?
If anything, it would be safer to pursue 10 well-fitted CEVs than to
shoehorn humanity into 1 kinda-fitting CEV. The objection seems to be
lack of lateral mobility and conflict between factions. Well, having
just one CEV means 0 lateral mobility to any other model. As for
conflict, we seem to be assuming some scarce quantity to have conflicts
*about*. Why would this be a valid assumption in the long term? Is there
any reason the FAI can't ultimately build planets/Dyson
spheres/simulation spaces to accommodate as many diverging humanities as
needed?

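If one did want to "empirically determine the optimal number of CEVs,"
it would look like an ordinary model-selection problem over clusters of
extrapolated preferences. Here is a minimal illustrative sketch, with
toy 1-D "preference" coordinates, plain k-means, and within-cluster
variance as the fit score; all of the data and names are hypothetical
and none of this comes from CEV.html:

```python
def kmeans_1d(points, k, iters=50):
    """Plain 1-D k-means; centers start at evenly spaced order statistics."""
    pts = sorted(points)
    centers = [pts[int(i * (len(pts) - 1) / max(k - 1, 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

def within_cluster_variance(centers, clusters):
    """Total squared distance of each point to its cluster center."""
    return sum((p - centers[i]) ** 2
               for i, c in enumerate(clusters) for p in c)

# Toy "extrapolated preference" coordinates with two visible factions.
prefs = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
scores = {}
for k in (1, 2, 3):
    centers, clusters = kmeans_1d(prefs, k)
    scores[k] = within_cluster_variance(centers, clusters)
# scores[2] is far below scores[1]: two clusters fit this data much better.
```

Note that the variance score always shrinks as k grows, so a real
procedure would have to penalize extra clusters (elbow heuristic,
silhouette score, or a Bayesian criterion); choosing that penalty is the
actual open question the "optimal number of CEVs" framing hides.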
I suspect that, like most of us Western intellectuals, Eliezer has a
'Universalist' bias. That is, diversity and "getting along with each
other" take on an axiomatic value regardless of the actual impact this
has on
the species (which, I might add, has spent most of its history living in
relatively small and relatively interrelated groups). For this reason, I
expect the initial implementation of CEV to be wrong, and hope that at
least the self-correcting functionality works properly. Hopefully the
Universalist bias will not infect the CEV to the point where it
propagates through the self-correction cycle itself.
It would be useful to find a way to replace 'hope', in the places it
currently occupies in this equation, with 'likelihood'.
Kaj Sotala wrote:
> On Fri, Oct 24, 2008 at 5:10 PM, Alex Bokov <email@example.com> wrote:
>> I've been lurking on this list for a long time, here's my first post.
>> It seems from reading EY's essays that there is the assumption that
>> CEV is calculated over the entire human race. Why is this constraint
>> necessary? Why foreclose on the possibility of self-organizing
>> subgroups of the human race having their own CEV's, with the
>> stipulation that killing off rival subgroups, etc., is off the table?
> Isn't this basically answered by the response to the "Why not base the
> Friendly AI on individual volition instead of coherence in humankind's
> extrapolated volition?" question in the original CEV article? (
> http://intelligence.org/upload/CEV.html )
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT