Re: [sl4] Unlikely singularity?

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sat Aug 09 2008 - 22:19:32 MDT


This is an interesting discussion. I don't believe there will be a singularity of the form described by Good and Vinge, where if humans can create agents smarter than themselves, those agents can create still smarter agents in turn. I say this because there is currently no model of recursive self-improvement in which the parent chooses the fitness function. The problem is that an agent cannot recognize intelligence greater than its own in another agent. Humans can recognize a child with an IQ of 200, but not an adult: a child scores 200 by performing at twice its age level, which any adult can verify, whereas an adult with an IQ of 200 is solving problems the tester cannot solve or evaluate.

I would be persuaded if there were a software or mathematical model of RSI. An example of a model would be an agent that makes modified copies of itself and tests its children by giving them problems that are hard to solve but easy to verify. For example, the child could play the parent at chess to the death, or they could compete to factor products of large random primes or to solve NP-complete problems. The modifications could be made by flipping random bits in the parent's source code, or in a compressed representation of it, or by some other process. However, I know of no such model with either a proof or experimental verification. Certainly nothing like this exists in nature.
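To make this concrete, here is a minimal sketch in Python of the kind of model I mean. Everything in it is my own illustration, not an existing system: the agent is a bit string decoded into weights for a greedy subset-sum heuristic (subset sum being NP-complete to solve but trivial to verify), mutation is random bit flips, and a child replaces its parent only if it verifiably solves more test instances. The point is to show the shape of the loop, not to claim this toy actually self-improves without bound.

import random

GENOME_BITS = 64

def decode(genome):
    # Interpret each 8-bit chunk of the genome as a signed weight.
    return [int(genome[i:i+8], 2) - 128 for i in range(0, GENOME_BITS, 8)]

def solve(genome, numbers, target):
    # Weighted greedy heuristic for subset sum, parameterized by the genome.
    w = decode(genome)
    ordered = sorted(numbers, key=lambda n: w[n % len(w)] * n, reverse=True)
    total, subset = 0, []
    for n in ordered:
        if total + n <= target:
            total += n
            subset.append(n)
    return subset if total == target else None

def verify(subset, target):
    # The easy half of "hard to solve, easy to verify".
    return subset is not None and sum(subset) == target

def fitness(genome, instances):
    # The fitness function the parent imposes on its children.
    return sum(verify(solve(genome, nums, t), t) for nums, t in instances)

def mutate(genome, rate=0.05):
    # Flip random bits in the parent's "source code".
    return ''.join(str(1 - int(b)) if random.random() < rate else b
                   for b in genome)

def random_instance(size=20):
    nums = [random.randint(1, 100) for _ in range(size)]
    target = sum(random.sample(nums, size // 2))  # a solution is known to exist
    return nums, target

parent = ''.join(random.choice('01') for _ in range(GENOME_BITS))
for generation in range(200):
    tests = [random_instance() for _ in range(30)]
    child = mutate(parent)
    if fitness(child, tests) > fitness(parent, tests):
        parent = child  # the child has demonstrably out-solved the parent

Note what this makes visible: the fitness function and the verifier are fixed by the original parent, so nothing in the loop lets a child adopt a harder notion of "better" than its ancestor could already check.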

I do believe that humans can create an environment with greatly accelerated evolution where agents do not get to choose what "improve" means. This type of singularity would probably be bad for the human race, according to our current definition of "bad".
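For contrast, here is an equally hedged sketch of the kind of accelerated evolution I mean, with the fitness function imposed by the environment rather than chosen by any agent. The particular criterion here is an arbitrary stand-in for competition over real resources:

import random

def environment_fitness(genome):
    # Imposed from outside; the agents never see or choose this criterion.
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
for generation in range(1000):
    # The environment culls the bottom half, whatever the agents "want".
    population.sort(key=environment_fitness, reverse=True)
    survivors = population[:50]
    # Survivors reproduce with mutation to refill the population.
    children = [[bit ^ (random.random() < 0.01) for bit in p] for p in survivors]
    population = survivors + children

Whether the outcome is "good" by human standards is simply not represented anywhere in that loop.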

-- Matt Mahoney, matmahoney@yahoo.com


