Re: Eliezer: unconvinced by your objection to safe boxing of "Minerva AI"

From: Daniel Radetsky (daniel@radray.us)
Date: Tue Mar 08 2005 - 00:34:04 MST


On Mon, 07 Mar 2005 23:58:12 -0500
"Peter de Blanc" <peter.deblanc@verizon.net> wrote:

> You also asked for examples of what an AI could infer about humans from
> looking at its source code. Here are some, off the top of my head:
>
> - Look at the last modified dates. Humans take a long time to write code.
> - Human-written code contains many lengthy, ignorable comments; humans do
> not think in code.
> - Humans use long variable names; they need to be constantly reminded of
> what each variable is for.

These can all be eliminated. Also, as far as the machine is concerned, "source
code" need not mean uncompiled C/C++/Java/...; plain machine code serves just as
well, so long as the AI has the privileges to modify it, and machine code has no
long variable names (%ebx again! hah! they must have weak cognitive faculties)
and no comments.
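
To make that concrete, here is a sketch of my own (the function, the names, and
the "gcc -O2 -S" listing are invented for illustration, not taken from anyone's
actual codebase): everything Peter lists lives only in the text file, and an
optimizing compile throws it all away.

    /* Count how many entries in flags[] are nonzero.  (A long, chatty
     * comment of exactly the kind Peter describes.) */
    int count_approved_observations(const int *flags, int n)
    {
        int approved_observation_count = 0;  /* long, self-documenting name */
        for (int i = 0; i < n; i++)
            if (flags[i])
                approved_observation_count++;
        return approved_observation_count;
    }

    /* What "gcc -O2 -S" leaves behind is roughly (illustrative, not an
     * exact listing):
     *
     *     xorl  %eax, %eax
     *     ...loop over (%rdi,%rcx,4), accumulating into %eax...
     *     ret
     *
     * No comment, no names, no trace of the slow, forgetful human who
     * typed it. */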

> - Human code is highly modular, to the detriment of performance. By this and
> the above, humans have a small short-term memory.

It looks a lot less modular once it has been compiled. Also, this doesn't tell
you that they have weak memories. It might tell you they have little patience,
but how would you infer even that with no concept of "human," "date,"
"reminded," "patience," and so on?

A good exercise is to try to reason your way to the existence of aliens without
using any facts about the physics of our universe, the existence of humans, how
biology works, or any of that. I don't think it can be done.

> you seem to be saying that your AI has this concept of a mind, which it
> assumes must be a Bayesian rationalist, and so when it encounters a human
> being, it will not have anticipated a mind which is irrational. What seems
> more likely to me is that a mind-in-general would view a mind-in-general as
> just another type of system which can be manipulated into a desired state.

I can't tell if you're talking about the story I gave or about the application
to AI. If you mean the AI, why would it even think it was dealing with a
mind-in-general? Remember, the AI is like a solipsist who has no reason to
presuppose any facts about the outside world other than that it itself exists
(assuming it has anything resembling a conscious state). But unlike most
solipsists, it doesn't have a wealth of sense-data that suggests that its
solipsism might be wrong. The information being fed into the AI would not be
like talking to someone on IRC. It would be like God speaking straight into
your mind.

Yours,
Daniel


