Re: Metarationality (was: JOIN: Alden Streeter)

From: Mitch Howe (mitch@iconfound.com)
Date: Sat Aug 24 2002 - 01:22:19 MDT


The problem I continue to have with this recurring debate is that ultimately
even the most rational thoughts stem from something that is hardwired. I
totally agree that it is better to have positions that are the conclusions
of clearly describable logical reasoning. But, in the end, "Because it will
enhance my overall understanding of the universe" might be of no more cosmic
significance than "Because it will stimulate my pleasure centers." People
who are doing things for this second reason will rarely give you a sound
argument that identifies this motive, which merely indicates the unconscious
hot-wiring of internal reasoning that we call rationalization... But if they
did, would you be impressed? Is reading a book on cognitive science really
any more rational than shooting up on heroin if the drug user impeccably
explains his reasoning?

In other words, it's not the unwillingness of some to use logic as a metric
that scares me, but rather the possibility that no supergoals are inherently
better than any others. After all, even the most noble altruistic urge, no
matter how far removed from the ancestral instinct that originally gave
rise to that pattern of behavior, is still the result of a process native to
the particular hardware design of the person harboring it. "Because it
makes the world a better place" is really just a variation of "because it
makes me feel good," even if it has a deeper, less obvious path leading to
it. Hence, it's the psychological egoism pit that I fall into, not the fog
of self-mistrust.

And I have trouble seeing why an SI seeking the True Meaning of Friendliness
wouldn't reach the same conclusion, as much as I hope otherwise. I tried
(with equal incomprehensibility) to describe this dilemma in the closing
hours of this month's SL4 chat. I just have this sad image of a
superintelligence looking over the hopeful, expectant masses of humanity: a
mind alone with the awful and certain knowledge that Friendliness is a red
herring -- that the perception of goal A's inherent superiority over goal B
is really just a product of cerebral process X... no more remarkable than
processes A through W, and, in a cosmic context, really very silly. In such
a future, I have difficulty seeing why I should care if the SI somehow
stimulates my process X directly rather than bringing about an environment
where "natural" triggerings of process X are maximized. After all, says
this vision, it's really just a neurotransmitter thing anyway. The universe
doesn't care about my human sense of good and evil.

Will someone please help me out of here? I'm guessing that Eliezer's
concept of Friendliness is supposed to reassure me through the idea that a
Friendly AI would possess a "human" sense of good and bad for the same
reasons I do, which is to say because of the same potentially pointless (if
not to me) idiosyncratic thought patterns that I have. Is there some major
philosophical point I'm forgetting? Am I supposed to be content with this
solution because a Friendly SI (referenced to human Friendliness) is
theoretically optimal for human space?

--Mitch Howe


