RE: QUES: CFAI

From: Smigrodzki, Rafal (SmigrodzkiR@msx.upmc.edu)
Date: Sun Jun 16 2002 - 18:14:34 MDT


Michael Roy Ames [mailto:michaelroyames@hotmail.com] wrote:

experiment, something which might facilitate discussion. When I read the
suggestion that there would be a strong superposed build-up around
'altruism', I found the idea intriguing... but hardly convincing, as it was
just an idea within a thought experiment. If indeed there is actual
"evidence" to support this idea, I too want to see it. Thinking further...

### Eliezer has a point with this hypothesis - I find it quite feasible that
a superposition of the volitional input of large numbers of humans would
cancel out certain types of self-directed actions/concepts while leaving
other concepts unaffected: whenever the wills of two persons clash (produce
opposite or incompatible plans for action), as is usually the case in
actions directly benefitting one of them while harming the other, there is
a cancelling effect, but motivations aimed at benefitting both of them at
the same time do not cancel. Averaged over large numbers of persons there
should be a buildup of enlightened selfishness - concepts that lead to
actions benefitting most while harming nobody. This is in good agreement
with my intuitive survival-oriented ethics (which I described some time ago
on the Exilist).
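
To make the cancellation intuition concrete, here is a toy numerical sketch
(Python; the people, "actions" and payoff numbers are invented purely for
illustration, not a claim about how real volition should be modelled):

import numpy as np

# Toy model: 1000 people, 3 candidate concepts/actions.
# Column 0: "take something from my neighbour" - zero-sum, so the signs
#           of the individual wills clash and average towards zero.
# Column 1: "build a public well" - benefits most, harms nobody, so
#           every will points the same way and survives the averaging.
# Column 2: idiosyncratic tastes - uncorrelated, so they also average out.
rng = np.random.default_rng(0)
n_people = 1000

selfish = rng.choice([-1.0, 1.0], size=n_people)   # clashing wills
shared  = rng.uniform(0.5, 1.0, size=n_people)     # common benefit
noise   = rng.normal(0.0, 1.0, size=n_people)      # personal quirks

wills = np.stack([selfish, shared, noise], axis=1)  # one row per person
superposition = wills.mean(axis=0)
print(superposition)   # roughly [0, 0.75, 0]: only the mutually
                       # beneficial concept builds up.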

On the other hand, I could also envision a situation where the superposition
of individual wills would produce a very complicated shape, whose center
(the normative ethics) would not satisfy any individual (for more details
see my discussion with Eliezer on the Exilist earlier this year).
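
The same kind of toy arithmetic also shows the failure mode I have in mind
(again with made-up payoffs): if half the population loves policy A and
hates B, the other half loves B and hates A, and both camps are merely
lukewarm about C, the superposed "normative" choice is C - an outcome that
satisfies nobody.

import numpy as np

# Hypothetical payoffs over three policies A, B, C for two equal camps.
camp_1 = np.array([10.0, -10.0, 1.0])   # loves A, hates B, lukewarm on C
camp_2 = np.array([-10.0, 10.0, 1.0])   # loves B, hates A, lukewarm on C

average = (camp_1 + camp_2) / 2          # the centre of the superposition
print(average)                           # [0, 0, 1] -> C wins the average,
                                         # yet neither camp gets anything
                                         # close to what it actually wants.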

------

How would one obtain such evidence? Through analysis of what people *do*
perhaps? Wouldn't you also need to know the reasoning behind their actions?
That would involve asking them questions. How would we judge who was lying,
or even that they understood the questions?

IMO, there is unlikely to be any conclusive evidence (of convergence to a
central definition of 'altruism') until after uploading, and a direct,
impartial analysis can be made against a population of humans whose wetware
brains have been accurately simulated in software. It will be a great time
to be a cognitive psychologist :)

### Very good point, although it might only be necessary to model large
numbers of humans on a simpler level - not a full personality emulation but
a better understanding of the interactions between the human cognitive
subsystems (the analytical versus experience-based, left vs. right
prefrontal cortex, how they interact to produce volitional output, how
discrepancies can be identified and avoided). This would still mean
understanding humans better than they understand themselves, but then the
FAI is supposed to be really smart, or at least smart enough to hold off on
actions as long as ve does not have such an understanding.

Interesting that you mention uploading in this context - maybe the future
must contain both Eliezer's and Eugen's visions to be complete. First the
FAI develops uploading, then ve invites representative humans to upload,
and is finally able to understand the Right Thing To Do.

------

But by all means, if someone has real-world evidence of this convergence
now... bring it forward.

### The whole history of political progress has been a distillation of the
general from the particular - starting with cliques explicitly aimed at
eating others, then grudgingly allowing others some freedom in order to be
free yourself, up to the idea of being nice to all nice people, so that
nice people would be nice to you.

Rafal


