Re: Definition of strong recursive self-improvement

From: Russell Wallace (russell.wallace@gmail.com)
Date: Mon Jan 03 2005 - 10:06:50 MST


On Mon, 03 Jan 2005 00:26:42 -0600, Eliezer S. Yudkowsky
<sentience@pobox.com> wrote:
> When a car has three flat tires and a broken windshield and is leaking
> oil all over the pavement, you don't need to see a new car to know this
> one is broken.

Mind you, a car in that condition wouldn't go, and humans do. From my
perspective you're saying "cars are crap compared to teleportation"
and I'm replying "maybe, but cars work and teleportation doesn't".

> But since you ask: A Bayesian standard, of course. Why
> do you think cognitive psychologists talk about Bayes? It's so that
> they have a standard by which to say humans perform poorly.

Okay, that's clear: the standard you have in mind is Bayesian optimality.
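
(To pin that down: the sort of calculation involved fits in a few lines
of Python. The test numbers below are my own illustrative assumptions,
nothing from the literature; the point is just the shape of the classic
base-rate problem, where the normative answer is ~17% and most people
guess 95%+.)

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), straight from Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1% base rate, 99% sensitivity, 5% false-positive rate -- illustrative only.
print(posterior(0.01, 0.99, 0.05))  # ~0.167: the Bayesian standard humans miss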

> This sounds like the old fallacy of a modular system not being able to
> copy a module into a static form, refine it, and execute a controlled swap.

The hard part is knowing when to swap in the new version of the module
and when to discard it. Okay, so you figure some variant of Bayesian
reasoning can do this with adequate reliability... go ahead and prove
me wrong! ^.^
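
To be concrete about where I think the difficulty lives, here's a toy
sketch of the copy/refine/swap loop in Python. The module interface,
the benchmark, and the acceptance margin are all hypothetical stand-ins
of my own invention; a real system would need a far better-grounded
acceptance test, and that test is exactly the part I'm skeptical about.

import random

def make_module(error_rate):
    """A 'module' is just a callable that sometimes gets the answer wrong."""
    def module(x):
        return x * 2 if random.random() > error_rate else x * 2 + 1
    return module

def score(module, trials=10_000):
    """Estimate the module's accuracy on a reference task."""
    hits = sum(module(x) == x * 2 for x in
               (random.randint(0, 99) for _ in range(trials)))
    return hits / trials

def controlled_swap(current, candidate, margin=0.01, trials=10_000):
    """Swap in the candidate only if it measurably outperforms the
    current module; otherwise discard it. The fixed margin is a crude
    stand-in for the 'adequate reliability' judgment being argued about."""
    if score(candidate, trials) > score(current, trials) + margin:
        return candidate  # accept the refined copy
    return current        # keep the old module

if __name__ == "__main__":
    old = make_module(error_rate=0.10)
    new = make_module(error_rate=0.02)  # the refined copy
    active = controlled_swap(old, new)
    print("swapped in candidate:", active is new)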

- Russell
