From: Richard Loosemore (firstname.lastname@example.org)
Date: Sun Sep 18 2005 - 16:13:40 MDT
Chris Capel wrote:
> Eliezer has mentioned that some of his current work involves
> mathematical correctness verification. Having just finished "The
> Singularity Is Near", with its strong emphasis on the use of
> self-organizing, complex systems based on understanding the human
> mind, I wonder whether it's possible to avoid these systems in an AI
> design, or whether verification of these systems, of the kind we might
> need to guarantee some mathematical correlate of Friendliness, is
> possible despite their complexity.
I have just spent considerable energy on this list trying to get some
discussion going on these issues, with no success whatsoever.
The answers that others offer to your questions are, pretty much: no,
you cannot really avoid complex systems, and mathematical verification
of their Friendliness is the very last thing you would be able to do.
The main defining characteristic of complex systems is that such
mathematical verification is out of reach.
> The thing is, from what I understand from visual processing, (and this
> may apply on many other levels too,) neural nets are pretty much the
> only way we know how to create flexible and reliable pattern/feature
> detection.
The message from several levels of the cognitive science view of
intelligence (not just visual processing) is that something like neural
nets (although a good deal more sophisticated than the NNs available
now) are indeed implicated in pattern/feature detection and, higher up,
in the general extraction of concepts from world data.
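[As an aside, a toy sketch of the basic idea, not from the original post: a single artificial neuron acting as a pattern/feature detector. The weights here are hand-chosen to respond to a vertical bar in a 3x3 binary image; real systems learn such weights, and the NNs implicated in cognition are, as noted, a good deal more sophisticated.]

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs exceeds threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Weights favour the middle column and penalise the outer columns,
# so the neuron acts as a crude vertical-bar detector.
vertical_bar_weights = [-1, 1, -1,
                        -1, 1, -1,
                        -1, 1, -1]

vertical_bar = [0, 1, 0,
                0, 1, 0,
                0, 1, 0]

horizontal_bar = [0, 0, 0,
                  1, 1, 1,
                  0, 0, 0]

print(neuron(vertical_bar, vertical_bar_weights, 2))    # fires: 1
print(neuron(horizontal_bar, vertical_bar_weights, 2))  # silent: 0
```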
That's more or less the summary version of the lengthy debate I tried to
get going here.
I'm interested that Kurzweil says this. Haven't seen the book yet.
> Chris Capel
> "What is it like to be a bat? What is it like to bat a bee? What is it
> like to be a bee being batted? What is it like to be a batted bee?"
> -- The Mind's I (Hofstadter, Dennett)
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:00:58 MDT