Fascinating, fascinating

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Thu May 20 2004 - 04:59:42 MDT


 --- "Eliezer S. Yudkowsky" <sentience@pobox.com>
wrote:
>
> This was my ancient argument, and it turned out to
> be a flawed metaphor -
> the rule simply doesn't carry over. If you have no
> understanding of the
> psychology of a being with the brain the size of a
> planet, how do you know
> that no human can understand its psychology? This
> sounds like a flip
> question, but it's not; it's the source of my
> original mistake - I tried
> to reason about the incomprehensibility of
> superintelligence without
> understanding where the incomprehensibility came
> from, or why. Think of
> all the analogies from the history of science; if
> something is a mystery
> to you, you do not know enough to claim that science
> will never comprehend
> it. I was foolish to make statements about the
> incomprehensibility of
> intelligence before I understood intelligence.

I never made your mistake.

>
> Now I understand intelligence better, which is why I talk about
> "optimization processes" rather than "intelligence".

What do you mean by 'optimization processes'? That sounds like a
major change in your views.

>
> The human ability to employ abstract reasoning is a threshold
> effect that *potentially* enables a human to fully understand
> some optimization processes, including, I think, optimization
> processes with arbitrarily large amounts of computing power.
> That is only *some* optimization processes, processes that flow
> within persistent, humanly understandable invariants; others
> will be as unpredictable as coinflips.
>
> Imagine a computer program that outputs the prime factorization
> of large numbers. For large enough numbers, the actual execution
> of the program flow is not humanly visualizable, even in
> principle. But we can still understand an abstract property of
> the program, which is that it outputs a set of primes that
> multiply together to yield the input number.
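
A minimal sketch of that invariant (trial division in Python,
purely illustrative): however opaque the execution trace gets for
large inputs, the one property we rely on stays checkable - the
outputs multiply back to the input.

import math

def factorize(n: int) -> list[int]:
    """Return the prime factors of n (n >= 2), smallest first."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# The humanly understandable abstract property: the factors
# multiply back to the input, whatever the internal program flow.
n = 1234567890
assert math.prod(factorize(n)) == n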

> Now imagine a program that writes a program that outputs the
> prime factorization of large numbers. This is a more subtle
> problem, because there's a more complex definition of utility
> involved - we are looking for a fast program, and a program that
> doesn't crash or cause other negative side effects, such as
> overwriting other programs' memory. But I think it's possible
> to build out an FAI dynamic that reads out the complete set of
> side effects you care about. More simply, you could use
> deductive reasoning processes that guarantee no side effects.
> (Sandboxing a Java program generated by directed evolution is
> bad, because you're directing enormous search power toward
> finding a flaw in the sandboxing!) Again, the exact form of the
> generated program would be unpredictable to humans, but its
> effect would be predictable from understanding the optimization
> criteria of the generator: a fast, reliable factorizer with no
> side effects.
>
> A program that writes a program that outputs the prime
> factorization of large numbers is still understandable, and
> still not visualizable.
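
A toy sketch of that two-layer pattern (hypothetical names
throughout, and a deliberately trivial 'generator' in place of a
real optimization process): the emitted source varies, yet we
never read it - we verify only the abstract criterion.

import math
import random

# Hypothetical generator: emits factorizer source. A real
# optimization process would search over programs; this toy merely
# varies an implementation detail, so the emitted form is arbitrary.
def generate_factorizer_source() -> str:
    step = random.choice([1, 2])  # unpredictable detail of form
    return f"""
def factorize(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else {step}
    if n > 1:
        factors.append(n)
    return factors
"""

def is_prime(p: int) -> bool:
    return p >= 2 and all(p % q for q in range(2, int(p ** 0.5) + 1))

# We never inspect the generated source; we check only the abstract
# criterion: a list of primes multiplying back to the input. (A toy,
# of course - exec'ing generated code is exactly the unsandboxed
# case the quoted paragraph warns against for evolved programs.)
namespace: dict = {}
exec(generate_factorizer_source(), namespace)
factorize = namespace["factorize"]

for n in [97, 1001, 2004, 10**9 + 7]:
    primes = factorize(n)
    assert math.prod(primes) == n and all(is_prime(p) for p in primes)

The form of factorize is unpredictable in advance; its effect is
predictable from the criterion the generator was built to meet.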

Excellent, excellent. However: the level of understanding
required to grasp the abstract properties fully might still be
beyond the IQ of most people... if not all people.

Also... you may be technically correct that there's an
understandable abstract invariant, but the abstract property
might be *so* abstract that *for all practical purposes* specific
results are unpredictable.

>
> The essential law of Friendly AI is that you cannot build an AI
> to accomplish any end for which you do not possess a
> well-specified *abstract* description. If you want moral
> reasoning, or (my current model) a dynamic that extrapolates
> human volitions including the extrapolation of moral reasoning,
> then you need a well-specified abstract description of what that
> looks like.

What if a 'well-specified abstract description' is beyond your
IQ? What's wrong with a partial description that has a
probabilistic chance of working? Nothing is ever specified with
100% rigour in the real world; there are simply 'degrees' of
rigour, and one's IQ will probably place a 'ceiling' on the
degree of rigour that can be reached. We see this in
mathematics: nothing is really proved with 100% absolute
certainty - just a shading off in degree of rigour as the proofs
get more and more complex. Case in point: Wiles' 'proof' of
Fermat's Last Theorem - some 100 pages of dense mathematics, with
'total rigour' only an ideal.

>
> In summary: You may not need to know the exact answer, but you
> need to know an exact question. The question may generate
> another question, but you still need an exact original question.
> And you need to understand everything you build well enough to
> know that it answers that question.
>

Keep plugging away. You're moving in the right direction but
have some way to go yet. I must stop dropping hints ;)

> --
> Eliezer S. Yudkowsky
> http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence

=====
"Live Free or Die, Death is not the Worst of Evils."
                                      - Gen. John Stark

"The Universe...or nothing!"
                                      - H.G. Wells

Please visit my web-sites.

Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I., Maths: http://www.riemannai.org



