Re: [sl4] A model of RSI

From: Mikael Hall (mikael.hall@gmail.com)
Date: Tue Sep 16 2008 - 18:14:45 MDT


I think you have many valid points, but I can't bring myself to agree when it
comes to evolution. In fact, I believe dissolving your confusion there is
interesting and illuminating.

Ok. To me, "competition" is to "evolution" what "ability to solve
problems" is to "intelligence".
And, to me, you are using 'evolution' where 'competition' should be used.
(As a note on terminology:
let 'outin' be defined/exemplified by "competition" and "ability", and
let 'inout' be defined/exemplified by "evolution" and "intelligence".)

Computers are good at fulfilling goals but not at choosing goals; therefore
computers are stupid. So ability on its own is not enough to make
intelligence. Most people seem to believe this. I believe, likewise, that
competition is "stupid" and not enough to make evolution. Also, if one were
to put the concept of "goals" into that picture, you would put
"goal choosing" in the inout box and "goal fulfilment" in the outin box.
Discussing self-improving systems in terms of goal choosing/inout versus
goal fulfilment/outin is crucial; a minimal sketch of a pure goal fulfiller
follows below.
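
(A minimal sketch of what I mean by a pure goal fulfiller, in Python and
entirely my own illustration, not from Matt's paper: the optimiser below is
good at fulfilling whatever objective it is handed, but has no machinery at
all for choosing that objective.)

import random

def goal_fulfiller(objective, candidates, steps=1000):
    # Random search over candidates, keeping the best found so far:
    # pure goal fulfilment (outin). The objective itself is chosen
    # outside this function (inout).
    best = random.choice(candidates)
    for _ in range(steps):
        challenger = random.choice(candidates)
        if objective(challenger) > objective(best):
            best = challenger
    return best

# The goal choosing happens here, by us, not by the program:
chosen_goal = lambda x: -(x - 3) ** 2   # "be as close to 3 as possible"
print(goal_fulfiller(chosen_goal, candidates=list(range(-100, 100))))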

In fact, the almost complete separation of 'outin' (no danger around you - do
what you please) and 'inout' (something is happening - look around to find
out what it is) that we accomplish by letting programmed computers do things
is closely related to the basic concern of sl4. This is a very basic fact: we
don't first and foremost need computers with human-like behaviour. The basic
drive behind the development of computing is goal fulfilment, not goal
choosing. It is the freedom of goal choosing that man strives to attain, and
the ease with which man can choose goals for goal-fulfilling entities has
long been a subject of war, thought and lambda calculus. Political ideologies
give views on what "freedom of goal choosing" should mean, in just the same
way that we might try to control possible goal fulfillers for which we aren't
able to choose goals. In the late 1800s Russia started to lose control over
its goal fulfillers; now we fear losing control over ours.

I don't regard the development towards transhuman intelligence as the primary
concern. I regard the increasing "stupidification" of goal fulfilment (inout
marginalisation), together with an accelerating rate of goal fulfilment, as
the most pressing concern. All super-effective goal fulfillers are
potentially very dangerous, especially when we do not choose the goals.
Secondly, what happens when humans can no longer aspire to be
goal fulfillers - ownership wars? extinction? I do not dare to simply hope
for rapid pro-human superintelligence - the relationship between
goal fulfilment, goal choosing and mankind must be discussed.

Kind Regards
Mikael Hall

2008/9/16 Matt Mahoney <matmahoney@yahoo.com>

> I think you are right that we can't distinguish between code and data. If
> we use the definitions in my paper then there is no difference between RSI
> (the program rewriting itself) and a simple program with a goal (meaning
> that the utility of the output would increase if its run time limit were
> relaxed).
>
> So how could RSI be defined in a meaningful way?
>
> Some examples I might consider to be self improvement:
>
> - Accumulating knowledge on how to better accumulate and use knowledge.
> - Books on how to write books.
> - Computer aided software engineering.
> - Computer aided hardware design of faster computers.
> - Genetically engineering humans for larger brains.
>
> Some examples I would NOT consider RSI:
>
> - Machine learning.
> - Human education.
> - Evolution.
> - Development of language and culture.
> - Economic development.
>
> The distinction I want to make is that RSI does not make use of external
> information not available at the start. Specifically, the agents who execute
> the improvement algorithm must know what the goal is, how to compute it, and
> how to test themselves and/or their offspring as to whether they are making
> progress toward this goal. In my examples of non-RSI, the agents and the
> systems have different goals. The teacher has different goals than the
> student. Evolution has the "goal" of increasing competitive fitness, which
> is at odds with the goals of agents that want to eat, have sex, and not die.
> The economy has a "goal" of producing a complex organization that can
> support a large population efficiently, as opposed to the goals of
> individuals to acquire money.
>
> So how can this idea be expressed formally as a property of Turing
> machines?
>
>
> -- Matt Mahoney, matmahoney@yahoo.com
>
>
> --- On Mon, 9/15/08, Denis <dnsflex@yahoo.com> wrote:
>
> > From: Denis <dnsflex@yahoo.com>
> > Subject: Re: [sl4] A model of RSI
> > To: sl4@sl4.org
> > Date: Monday, September 15, 2008, 4:09 PM
> > I think that if "RSI" means a program searching to
> > improve its behaviour without using other data, it can be a
> > good idea, but it is very different from a "rewriting
> > itself" program.
> > "Rewriting itself" is an ill-defined notion, and the
> > only thing it is possible to achieve this way is a reduction
> > by a constant C.
> > For example, given a universal Turing machine accepting as
> > input a program (a program without parameters), this Turing
> > machine, while executing the program, can use new empty cells or
> > rewrite part or all of the cells of the starting program.
> > If this program rewrites itself, partially or totally, over C
> > cells, the only advantage you can gain is to also use these C
> > cells in the computation.
> > There is no substantial difference between program and
> > data.
> > The trick is that you can move the program into the constant C,
> > and this disappears asymptotically.
> > "Rewriting itself" is only an illusion.
> > A nice example is the Tower of Hanoi. In the recursive
> > program solving this problem, you can look at the stack and
> > think of it as a program with the instructions to
> > move the disks, and this program changes! The trick is that
> > you are watching the wrong program!
> >
> > Denis.
> >
> > --- On Sun, 9/14/08, Matt Mahoney
> > <matmahoney@yahoo.com> wrote:
> >
> > > From: Matt Mahoney <matmahoney@yahoo.com>
> > > Subject: [sl4] A model of RSI
> > > To: "sl4" <sl4@sl4.org>
> > > Date: Sunday, September 14, 2008, 7:16 PM
> > > I have written a (rather trivial) recursively self
> > improving
> > > program, along with a draft of a paper that tries to
> > give a
> > > reasonable but rigorous definition of RSI. Any
> > comments are
> > > appreciated.
> > >
> > > http://www.mattmahoney.net/rsi.pdf
> > >
> > > -- Matt Mahoney, matmahoney@yahoo.com
>
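
(A concrete illustration of Denis's Tower of Hanoi point above - my own
sketch in Python, not anything from his mail: the program text below never
changes while it runs; what grows and shrinks is the stack of pending calls,
i.e. data that merely looks like a changing program.)

def hanoi(n, source, spare, target, moves):
    # Append the moves that transfer n disks from source to target.
    if n == 0:
        return
    hanoi(n - 1, source, target, spare, moves)  # pending work lives on the call stack
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, source, target, moves)

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(moves)  # 2**3 - 1 = 7 moves; the solver's own code never changed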

-- 
"No, no, you're not thinking; you're just being logical." — Niels Bohr
"There are two kinds of people, those who finish what they start and so on."
— Robert Byrne

