From: Jef Allbright (firstname.lastname@example.org)
Date: Fri Jun 09 2006 - 14:11:40 MDT
On 6/9/06, Martin Striz <email@example.com> wrote:
> On 6/9/06, Eliezer S. Yudkowsky <firstname.lastname@example.org> wrote:
> > Martin Striz wrote:
> > An AI with random-access memory *is* a complete internal model of
> > itself. Why should an AI bother to quine itself into its scarce RAM,
> > when two copies contain exactly the same information as one? What good
> > does it do to model yourself perfectly? What more does it tell you than
> > just being yourself?
> Wouldn't it be smart to test designs in a model before you commit
> them to your source code, rather than rewriting the stuff willy-nilly
> without knowing empirically what the changes would do? That seems
> even more dangerous.
> Either way, my point stands: you can't guarantee that AIs won't make mistakes.
Consider that you're not so interested in modeling the agent itself, but
rather its interactions with its environment.
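[Editor's note: the "quine" remark above can be made concrete. A quine is a program whose output is its own source code; by construction, the copy it produces contains exactly the same information as the original, which is the point being made about self-models. A minimal Python sketch, not from the original thread:

```python
# A classic Python quine: the string q is a template for the whole
# program, and q.format(q) substitutes repr(q) into that template,
# reproducing the source exactly. The output adds no new information
# beyond the program itself -- just as a perfect self-model adds
# nothing beyond the agent it models.
q = 'q = {!r}\nprint(q.format(q))'
print(q.format(q))
```

Running this prints the two lines of its own source verbatim; a second copy in memory tells you nothing the first did not.]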
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT