From: Eliezer S. Yudkowsky (email@example.com)
Date: Sat Jul 23 2005 - 15:27:28 MDT
Russell Wallace wrote:
> On 7/23/05, Eliezer S. Yudkowsky <firstname.lastname@example.org> wrote:
>>Read some real evolutionary psychology. You might want to delete the words
>>"good" and "evil" from your vocabulary while you do that, then restore them
>>afterward, once your empirical picture of the world is correct.
> *rolls eyes heavenward* Eliezer, if I wanted to write a textbook on
> evolutionary psychology, I would do so. That's not what I'm trying to
> do here. If you don't get the line of argument, then you don't.
If you're not trying to write a textbook on evolutionary psychology, then
state the exact condition you believe the CEV to simulate, its effect on human
psychology, and the distorting effect you believe it has on the output of CEV.
Do so without tossing in random terms from evolutionary biology that I've
never heard anyone even try to relate to ev-psych before; if there's a
relevant paper, feel free to cite it.
>>The problem here is the ratio of cubic expansion through galaxies to
>>exponential reproduction. CEV doesn't solve this problem of itself, though it
>>might search out a solution. Neither does CEV create the problem or make it
>>any worse. Why do you suppose that we want to lock ourselves into a little
>>box rather than expanding cubically?
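[The quoted ratio argument can be made concrete. Reachable volume grows at most cubically in time (light-speed expansion), while unchecked reproduction grows exponentially, and any exponential eventually overtakes any polynomial. A minimal sketch, with an arbitrary doubling rate and unit coefficient chosen purely for illustration:]

```python
def crossover():
    """First t >= 2 at which exponential reproduction (2**t)
    permanently exceeds cubic volume growth (t**3).

    The base 2 and the bare t**3 are illustrative stand-ins,
    not figures from the discussion above."""
    t = 2  # skip the trivial t = 1 case, where 2**1 > 1**3 already
    while 2 ** t <= t ** 3:
        t += 1
    return t

print(crossover())  # -> 10: from t = 10 onward, 2**t > t**3 for good
```

Whatever the constants, the crossover always arrives; only its timing changes.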
> By "locked into a box" I don't mean a bounded volume of physical space
> - that's not the issue.
Well, it bloody is in evolutionary biology! K-selection and r-selection are
not metaphors! They refer to bounded resources for growth, THAT'S ALL.
> I mean that everyone will be forced at all
> times to follow the will of the Collective in every deed, word and
> thought, with no escape, no refuge and no privacy; it doesn't matter
> whether their material substrate is a meter or a gigameter away from
> the center.
Which has absolutely NOTHING repeat NOTHING to do with
dN/dt = rN(1 - N/K)
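[For the record, the logistic equation in its standard form, dN/dt = rN(1 - N/K), is exactly the "bounded resources" claim: near-exponential growth while N is far below the carrying capacity K, flattening as N approaches K. A minimal Euler-step sketch; r, K, N0, and the step size are arbitrary illustrative choices:]

```python
def logistic_trajectory(n0, r, K, dt=0.01, steps=2000):
    """Euler integration of logistic growth, dN/dt = r*N*(1 - N/K).

    All parameters are illustrative, not drawn from the discussion."""
    n = n0
    traj = [n]
    for _ in range(steps):
        n += r * n * (1 - n / K) * dt
        traj.append(n)
    return traj

traj = logistic_trajectory(n0=10.0, r=0.5, K=1000.0)
# Growth looks exponential while N << K, then saturates near K --
# the bounded-resource regime that K-selection refers to.
print(round(traj[-1]))
```

Note that nothing in this dynamic says anything about social coercion; K is a resource bound, nothing more.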
>>Okay. I think I may write you off as simply having failed entirely to
>>understand the concept of _extrapolated_ volition writing an AI, as opposed to
>>putting an AI in the hands of an immediate majority vote, a fairly common
>>misunderstanding. I honestly don't know how I can make the distinction any
>>clearer short of grabbing people by the shirt and screaming at them.
> Grabbing by the shirt and screaming is exactly what I feel like doing
> right now! Let me try once more:
> *It doesn't matter whether it's extrapolated or not!!!*  Someone
> who wants to trample a human face when his IQ is 100 is still going to
> want it when his IQ is 200, he'll just be more effective at finding
> ways to get what he wants.
Either CEV(2), or: how do you know this?
How do you know this holds true for every one of the billion different
extrapolation dynamics I might eventually decide to implement?
If I observed an attempted CEV returning this result, I would conclude that
the extrapolation dynamic was broken or failing to take something into account
required to return the desired sense of "want", most likely the part I
described as "More the people we wished we were". If no extrapolation dynamic
can get past that, the CEV concept fails - silently, I would expect, though
I'm afraid I'd presently have to rely on the Last Judge to implement the
silent failure part.
I do not believe it to be true, as an empirical fact, that people with IQ 100
and IQ 150 display the same distribution of personal moralities. For a start,
they don't display the same distribution of theologies.
> *This is the bit you keep refusing to understand.* I know you're smart
> enough to understand it if you tried, but you keep half-believing in
> the ghost of the old "intelligence => morality" idea.
*In humans*. Which makes all the difference in the world. Also it's not just
increased intelligence that the CEV extrapolates.
> Let me put it another way:
> Your hyperlibertarian Sysop scenario was effectively domain protection
> with a domain size of 1 person, with the obvious problems previously discussed.
> With CEV, all of humanity is forced into the same domain.
CEV(4) CEV(5) CEV(6)
> What objection do you have to allowing the intermediate position of
> multiple domains?
I never proposed to directly build a Sysop in the CFAI days, much less the
Objective Morality days. I would have strongly objected to directly building
a Sysop of any kind. Likewise domain protection. Such power should not exist
without a superhumane veto. Multiple domains are all well and good, provided
that some kind of superhumane veto has the chance to say "No". I would expect
that such decisions are best made by a superhumane entity directly, but
perhaps I am mistaken. I wouldn't want to exert that kind of optimizing power
without a superhumane veto on the decision to exert optimizing power.
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence