RE: The dangers of genuine ignorance (was: Volitional Morality and Action Judgement)

From: Ben Goertzel (ben@goertzel.org)
Date: Wed May 26 2004 - 18:14:35 MDT


Eliezer,

My main point in this dialogue was: I don't believe I'm so ignorant or
such a dipshit or so hide-bound by my preconceptions that, if you
articulated your current theories on FAI and related subjects, I'd be
unable to understand them. I've understood a lot of opaque and subtle
things from a lot of scientific disciplines. I don't believe that your
insights are an order of magnitude more difficult to grok than
everything else in science, math, philosophy, etc.

Next, to respond briefly to a few other peripheral points from your
message...

1)
I'm quite knowledgeable about probability theory, including Bayes' rule
and its accompanying apparatus, so if I make errors in judgment about FAI
or related topics, it's not because of ignorance of probabilistic
mathematics. I used to teach that sorta math at the university, back in
the olden days. And I've done a lot of work with probabilistic
inference lately, in the Novamente context.
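
For concreteness, by "Bayes' rule and its accompanying apparatus" I just
mean the ordinary machinery of updating a prior with evidence. A minimal
sketch in Python, with every number invented purely for illustration:

# Bayes' rule: P(H|E) = P(E|H)P(H) / P(E), with P(E) expanded over H and not-H.
# All numbers below are made up for illustration only.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / evidence

# A hypothesis held at 30% prior credence, with evidence three times as
# likely if the hypothesis is true as if it is false:
print(bayes_update(prior=0.30, p_e_given_h=0.60, p_e_given_not_h=0.20))
# prints roughly 0.56 -- the evidence shifts belief; it doesn't settle it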

2)
It really doesn't seem to me that YOU are more rational than most
scientists I know, although you talk about rationality more, and are
more interested in your own rationality. I think you're a highly
rational person, relative to most humans; but from my point of view, it
seems that your emotions affect your judgment sometimes, just like with
everyone else. You may argue this is not the case -- but then, I
suppose the scientists who you call less-rational-than-Eliezer would
argue with you and defend their own rationality as well....

I have known others who seemed to me more consistently rational than
either you OR me; but all these folks were also significantly less
creative than either of us.

3)
About recognizing, in hindsight, the stupidity of alchemy: yes, of
course, it's relatively easy to avoid making mistakes of the same type
that were made in the past (though humans as a whole are not so good at
even this!). What's much harder is to avoid making *new* types of
mistakes. The universe is remarkably good at generating new kinds of
mistakes to make fools out of us ;-)

-- Ben

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Eliezer Yudkowsky
> Sent: Wednesday, May 26, 2004 7:46 PM
> To: sl4@sl4.org
> Subject: The dangers of genuine ignorance (was: Volitional Morality and Action Judgement)
>
>
> Ben Goertzel wrote:
>
> >> Feel free to explain how a realistic and frightened medieval
> >> alchemist can convince a hopeful, cheerful, vaguely mystical
> >> medieval alchemist that there is no way to concoct an immortality
> >> serum by mixing random chemicals together. Bearing in mind that the
> >> first alchemist has to drink whatever the second alchemist comes up
> >> with. Bearing in mind that there is in fact no way to do it, and
> >> that being ignorant of this does not change Nature's law in the
> >> slightest.
> >
> > The problem is that the realistic and frightened medieval alchemist
> > actually has no way of knowing that concocting an immortality serum
> > is impossible. We know that NOW because of the science we've
> > accumulated, but no one could know that in medieval times. Alchemy
> > was not as stupid then as it is now. Which is why there were more
> > alchemists then than now.... Alchemy is idiotic only in hindsight,
> > which is why someone as brilliant as Isaac Newton was an avid
> > alchemist.
>
> So the second alchemist, having triumphantly proved that he is
> genuinely ignorant of the difficulty, drinks his immortality potion
> and dies. What's wrong with this picture?
>
> This is why it is important never to place yourself in a situation
> where you have something to gain by proving your ignorance.
>
> If the realistic and frightened alchemist happens to be a Bayesian
> rationalist with a history of science to study, it's straightforward
> enough in hindsight (once you realize the danger exists) to notice
> that nobody has any legitimate reason to expect an immortality serum
> to pop out of randomly mixing chemicals. Or think that the difficulty
> of mixing an immortality serum might be comparable to the difficulty
> of building an airplane. Or compare heartwarming mystical thinking
> about immortality serums to heartwarming mystical thinking about
> vitalism. Or to observe that present-day alchemy is a field in chaos,
> with many facts known, but the generalizations mystical, heartwarming,
> and unpredictive. The mistake is obvious in retrospect and once I
> caught myself at it - my previous self's triumphant ignorance of the
> Singularity or the nature of consciousness - the mistake also becomes
> obvious looking forward.
>
> No, Isaac Newton didn't see the stupidity of alchemy. Isaac Newton had
> a much shorter scientific history to study. Newton did not read about
> evolutionary psychology, vitalism, Bayes (Laplace, actually), or
> Thomas Kuhn. I am sick and tired of hearing about allegedly smart
> people who made mistake XYZ. Isaac Newton was irrational? Fine. Do
> you suppose that no one is ever allowed to do better, lest it diminish
> the luster of sacred Newton's name? Humanity has moved on, and today's
> geniuses may aspire to higher standards. It's not like Isaac Newton
> applied his genius to the mathematics of rationality. I have more
> shoulders to stand on, and I can do better than that.
>
> Rationality is not an innate talent, though it depends strongly on
> innate talent. Training the innate talent requires knowledge humanity
> has only recently discovered, not known in Newton's day. And learning
> the art of rationality requires changing yourself as a person, an
> inconvenience to which few scientists are willing to subject
> themselves. It is so much more fun to leave the map blank, for then
> you can draw in the heartwarming lands you would like to see.
>
> If alchemy was less visibly stupid in medieval times, it is because
> there was no history of science to tell worried alchemists the folly
> of heartwarming ignorance, or the improbability of specific complex
> miracles, or that Nature does not need to convince you of the danger
> before She is allowed to kill you. Today that is not a valid excuse.
>
> >> There is too much science. Funny, how the people asserting the
> >> ignorance of science on some subject are so rarely specialists in
> >> that *particular* field...
> >
> > I am a specialist in the field of dynamical systems (at least, I was
> > a few years back; I published in the field, knew all the literature,
> > etc.). So I think I know basically what is known about the
> > attractors, terminal attractors, invariant measures, etc. of complex
> > systems. And it ain't nearly enough to tell us anything about the
> > attractors etc. that complex self-modifying AIs will get into.
>
> Interestingly enough, my "too much science" quote was cribbed from an
> unpublished work which reads:
>
> "How could anyone possibly state with confidence that science
> did not know
> a thing, if he were not a specialist in that field? And even
> then, outside
> specialists might know detailed technical answers to
> questions that you
> fancied untouched mysteries of your own field. I had lost
> track of the
> number of papers I had read, offering up as mysteries
> questions another
> field had solved. And when someone gave their favorite
> mystery as proof of
> humanity's ignorance, more commonly than not there would be
> entire journals
> devoted to the answer, a depth of technical knowledge that
> stretched back
> for decades, international conferences and research
> institutes. There
> might be multiple fields of science devoted to separate parts of the
> question. Science was a huge edifice, already vastly more
> than any lone
> human could absorb, and it kept on accumulating. No one
> could say what
> science did not know. There was too much science. One
> needed to train
> oneself to always ask if a mystery might already have a known answer,
> rather than incredulously saying, 'But how could anyone know
> that?' For
> seven-eighths of the time, someone would. That was what it
> meant to live
> in a world of billions of individuals, rather than a
> hunter-gatherer tribe
> of two hundred."
>
> In this case the math of dynamical systems (of which I do know a
> little) is simply inappropriate. The appropriate math is Bayesian
> probability theory, and expansions of expected utility theory.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
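
P.S. To make the "expected utility theory" Eliezer mentions at the end of
his message concrete: at its core it just means weighting each outcome's
utility by its probability and comparing totals across actions. A minimal
sketch in Python, with every probability and utility invented purely for
illustration:

# Expected utility of an action = sum over outcomes of P(outcome) * U(outcome).
# Every number below is made up for illustration only.
def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# The alchemist's choice, in toy numbers: drink the untested serum or abstain.
drink = [(0.001, 1000.0),    # slim chance the serum works as hoped
         (0.999, -1000.0)]   # overwhelming chance it is lethal
abstain = [(1.0, 0.0)]       # status quo

print(expected_utility(drink))    # roughly -998: a terrible bet
print(expected_utility(abstain))  # 0.0

Under these made-up numbers abstaining obviously dominates; the real
argument, of course, is about where the probabilities come from.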


