From: Stathis Papaioannou (firstname.lastname@example.org)
Date: Mon Jun 16 2008 - 03:48:02 MDT
2008/6/16 Matt Mahoney <email@example.com>:
> If RSI is possible, then we should be able to model simple environments with agents (with less than human intelligence) that could self improve (up to the computational limits of the model) without relying on an external intelligence test or fitness function. The agents must figure out for themselves how to improve their intelligence. How could this be done? We already have genetic algorithms in simulated environments that are much simpler than biology. Perhaps agents could modify their own code in some simplified or abstract language of the designer's choosing. If no such model exists, then why should we believe that humans are on the threshold of RSI?
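[A minimal sketch of the kind of existing model the quoted paragraph mentions — a genetic algorithm in a simulated environment. Note that it relies on an explicit, externally supplied fitness function, which is exactly what the proposed RSI model would have to do without. All names and parameters here are illustrative, not drawn from the message.]

```python
import random

def fitness(agent):
    """External fitness test: count of 1 bits. The agents do not
    define or modify this measure themselves - the designer does."""
    return sum(agent)

def evolve(pop_size=20, length=16, generations=50, seed=0):
    """Evolve bit-string agents toward higher external fitness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (elitism preserves the best).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Variation: each survivor yields one point-mutated child.
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(length)
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The contrast with the proposal above is the `fitness` function: here improvement is driven entirely by a test the designer wrote, whereas the quoted challenge asks for agents that decide for themselves what counts as improvement.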
Might it not be that RSI is impossible below a certain threshold of
intelligence, as seems to be the case for many human accomplishments?
-- Stathis Papaioannou
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT