From: Stuart Armstrong (email@example.com)
Date: Wed Jun 18 2008 - 06:44:48 MDT
> My question is not whether such a thing is possible (I think it is), but whether a *non-evolutionary* RSI is possible.
Let's try and build a simple model. A lone agent in some simplified
grid with simple "move, eat, look" commands. Assume for the moment (I
know this isn't what you are looking for, but bear with me) that there
is a unique ideal behaviour for the agent. The agent's programming
consists of:
1) The ideal behaviour algorithm (just a list of the right commands).
2) Lots of complicated subroutines that always produce the same
answer, whatever the possible input.
3) A small program that looks for, and deletes, any subroutine that
always produces the same answer, replacing it with just the answer. It
does this to one subroutine per trial.
Now the agent is sent repeatedly through exactly the same situation.
Its reward is zero if it fails to find the ideal behaviour; if it
finds the ideal behaviour, its reward is inversely proportional to the
computation it used.
Then this very simple model will display RSI, in that it will get
faster and faster, hence better and better, at its job (admittedly this
"job" is the reverse of what we mean by intelligence, but it is
improving in this narrow sense). So non-evolutionary RSI is definitely
possible, in some situations.
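The toy model above can be sketched in code. This is a minimal illustration under assumptions of my own, not from the post: subroutines are represented as Python callables, "always produces the same answer" is detected by probing every possible input (feasible only in a tiny grid world), and the reward is taken as inversely proportional to the computation spent.

```python
# Minimal sketch of the self-simplifying agent described above.
# Assumptions (mine, not from the post): subroutines are Python
# callables, constancy is detected by probing every possible input,
# and reward is inversely proportional to computation used.

POSSIBLE_INPUTS = range(4)  # the simplified grid offers few inputs

def make_wasteful_subroutine(answer, work):
    """A complicated-looking subroutine: burns `work` extra steps,
    then returns the same `answer` whatever the input."""
    def sub(x, cost):
        cost[0] += work + 1        # tally the computation used
        return answer
    return sub

def simplify_one(subroutines):
    """The small self-modification program: find one subroutine that
    is constant over all inputs and replace it with just the answer.
    Does this to at most one subroutine per trial."""
    for name, sub in subroutines.items():
        if getattr(sub, "simplified", False):
            continue               # already replaced, skip it
        cost = [0]
        answers = {sub(x, cost) for x in POSSIBLE_INPUTS}
        if len(answers) == 1:      # always the same answer
            replacement = make_wasteful_subroutine(answers.pop(), 0)
            replacement.simplified = True
            subroutines[name] = replacement
            return True
    return False

def run_trial(subroutines):
    """Send the agent through exactly the same situation; it always
    finds the ideal behaviour, so reward ~ 1 / computation."""
    cost = [0]
    for sub in subroutines.values():
        sub(0, cost)
    return 1.0 / cost[0]

subroutines = {f"sub{i}": make_wasteful_subroutine(i, 99)
               for i in range(3)}
rewards = []
for trial in range(4):
    rewards.append(run_trial(subroutines))
    simplify_one(subroutines)      # self-improvement between trials

print(rewards)  # strictly increasing: the agent gets faster each trial
```

Each trial the agent prunes one constant subroutine, its computation falls, and its reward rises, which is all the "improvement" this narrow model needs.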
So let's look at the other part of your question:
> The crucial difference is whether the parent chooses the fitness function (e.g. intelligence), or the environment chooses.
> How do you distinguish an IQ of 1000 from an IQ of 2000?
The problem doesn't seem insoluble in principle - I can feel "she's
smarter than me" or "she's waaaay smarter than me", and rank the two,
even if I'm dumber than both. I'm sure the people who designed the IQ
test didn't have the highest possible IQs. A problem with IQ tests is
that they are limited, short, and have correct answers. IQ 1000 and IQ
2000 would both ace the test.
But I can design "IQ" tests that can differentiate even advanced AIs.
Something along the lines of "run a command economy" to maximise some
result. Run and win an election campaign (against each other). Build a
living copy of a certain human being from scratch, for the least cost.
Find a shorter proof of the Taniyama-Shimura conjecture, using only
basic symbols. Create a blockbuster movie, more successful than your
competitor's (and any other). Be the first to build a reproduction of
Manhattan on one of Uranus's moons. Select, from among all living
humans, the six hundred who would self-organise into the most
successful mini-economy if they were all abandoned on a desert island.
We don't know exactly what intelligence is, or what higher
intelligence is. That doesn't mean we have no idea.
Hope this is useful!