Re: Collective Volition: Wanting vs Doing.

From: Jef Allbright (jef@jefallbright.net)
Date: Mon Jun 14 2004 - 19:01:56 MDT


Eliezer Yudkowsky wrote:

> Keith Henson wrote:
>
>>>
>>>
>
>>> Also, where do I get the information? Like, the judgment criterion
>>> for "wise decisions" or "good of humanity". Please note that I mean
>>> that as a serious question, not a rhetorical one. You're getting
>>> the information from somewhere, and it exists in your brain; there
>>> must be a way for me to suck it out of your skull.
>>
>>
>> Not when it isn't there.
>
>
> If the algorithm isn't there, or the map to an algorithm, then where
> is it?
>
>> Further, the question is poorly framed. "good of humanity" for
>> example. What is the more important aspect of humanity? Genes?
>> Memes? Individuals built by genes who are running memes? I have
>> been thinking around the edges of these problems for close to two
>> decades and I can assure you that I don't have *the* answer, or even
>> *an* answer that satisfies me. (Right now, of course, they are all
>> important.)
>
>
> I see no reason why I should care about genes or memes except insofar
> as they play a role in individuals built by genes who are running
> memes. What exerts the largest causal influence is not necessarily
> relevant to deciding what is the *important* aspect of humanity; that
> is a moral decision. I do not need to make that moral decision
> directly. I do not even need to directly specify an algorithm for
> making moral decisions. I do need to tell an FAI, in a well-specified
> way, where to look for an algorithm and how to extract it; and I am
> saying that the FAI should look inside humans. There is much
> objection to this, for it seems that humans are foolish. Well, hence
> that whole "knew more, thought faster etc." business. Is there
> somewhere else I should look, or some other transformation I should
> specify?
>
You're asking good questions, and the process of asking increasingly
accurate questions will lead to increasingly accurate solutions.

The vector sum of current human volition does not represent wisdom, let
alone embody it; it is not even an early approximation of wisdom.
Acting as if it did would invite disaster, given the current state of
human development. The answers you seek do not exist yet, no matter how
deeply and widely one might probe the collective human psyche. There
are no pointers, maps, or transformations of this collective data that
could be applied directly to the solution you (we all) seek. The
current data set is strongly skewed toward short-term, local-scope
thinking. The answers are not in that data set; we can expect them to
emerge only as part of, and as a result of, the process itself.
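
To make the "vector sum" picture concrete, here is a rough toy sketch
in Python (the two-axis framing, the numbers, and the names are
illustrative assumptions on my part, not data about actual volition):
if individual volitions load mostly on a short-term, local axis, their
naive aggregate simply inherits that skew rather than correcting for it.

    import numpy as np

    # Toy illustration only: represent each person's volition as a 2-D vector
    # weighting short-term/local concerns against long-term/global ones.
    rng = np.random.default_rng(0)
    n_people = 10_000

    # Assumption: most individual volitions load heavily on the short-term,
    # local axis, with only a small long-term, global component.
    short_term_local = rng.normal(loc=1.0, scale=0.3, size=n_people)
    long_term_global = rng.normal(loc=0.1, scale=0.3, size=n_people)
    volitions = np.stack([short_term_local, long_term_global], axis=1)

    # The naive "vector sum" of current volition (normalised here to a mean).
    collective = volitions.mean(axis=0)
    print("collective direction:", collective)
    # Prints roughly [1.0, 0.1]: the aggregate points almost entirely along
    # the short-term, local axis, i.e. it inherits the skew of its inputs.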

Yes, wisdom is present within the collective landscape, but most of
humanity perceives and considers only a small portion of the whole, and
the answers you seek within it require a broader scope of human
intelligence. The seeds exist, but they have not yet grown, and it is
impossible to see the tree without planting the seeds, nurturing them,
and waiting for them to grow.

To model the collective volition of humanity is a worthy goal, not in
order to extract from it the ideal human volition, or even a starting
approximation of the ideal, but for the purpose of better understanding
and contributing to the process that will get us wherever we will be in
the future. There is much work that can be done to improve the process
by which humanity moves closer to its evolving goals, but progress will
be made by building upon the foundations of morality, rather than by
futilely preparing to prune a unique and as yet unknown tree while it
is still a seed.

What is moral, in the minds of people of disparate backgrounds, tends
to converge as their understanding and interests broaden. As the scope
expands to include a broader space of interaction, a broader range of
interacting parties, and a longer period of time under consideration,
"what is moral" tends to converge toward an ever clearer sense of
shared direction. You can only get there by performing the
interactions -- a model of sufficient accuracy would take just as long
to run as the reality itself -- but you can extract principles of
successful interactions along the way and apply those principles toward
"promoting the good."

- Jef
http://www.jefallbright.net


