Re: Eliezer: unconvinced by your objection to safe boxing of "Minerva AI"

From: Peter de Blanc (peter.deblanc@verizon.net)
Date: Mon Mar 07 2005 - 21:58:12 MST


Hi,

You seem to be saying that your AI has a concept of a mind which it
assumes must be a Bayesian rationalist, so when it encounters a human
being it will not have anticipated an irrational mind. What seems more
likely to me is that a mind-in-general would view another mind-in-general
as just another type of system which can be manipulated into a desired state.

You also asked for examples of what an AI could infer about humans from
looking at its own source code. Here are some, off the top of my head:

- Look at the last modified dates. Humans take a long time to write code.
- Human-written code contains many lengthy, ignorable comments; humans do
not think in code.
- Humans use long variable names; they need to be constantly reminded of
what each variable is for.
- Human code is highly modular, to the detriment of performance. Taken
together with the previous point, this suggests that humans have a small
short-term memory.
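To make the flavor of those cues concrete, here is a rough sketch of the
sort of surface statistics one could pull out of a Python source file. It
is purely illustrative: the particular heuristics (file modification time,
comment count, average identifier length, function count as a crude
modularity proxy) are my own stand-ins for the bullets above, not anything
a real AI would be limited to.

    # Toy sketch: surface features of human-written Python code that hint
    # at the cues listed above. The thresholds and heuristics are arbitrary
    # stand-ins, chosen only for illustration.
    import ast
    import keyword
    import os
    import sys
    import time
    import tokenize

    def inspect_source(path):
        """Report a few human-telltale statistics for one Python file."""
        # 1. Last-modified time: human authorship unfolds over days or years.
        mtime = time.ctime(os.path.getmtime(path))

        with open(path, "rb") as f:
            tokens = list(tokenize.tokenize(f.readline))

        # 2. Comment density: comments are ignorable by the interpreter,
        #    so their presence suggests the author does not think in code.
        comment_lines = sum(1 for t in tokens if t.type == tokenize.COMMENT)

        # 3. Identifier length: long names act as external memory aids.
        names = [t.string for t in tokens
                 if t.type == tokenize.NAME and not keyword.iskeyword(t.string)]
        avg_name_len = sum(map(len, names)) / len(names) if names else 0

        # 4. Modularity: count of function definitions as a crude proxy for
        #    code broken into chunks that fit a small working memory.
        with open(path, "r", encoding="utf-8") as f:
            tree = ast.parse(f.read())
        functions = [n for n in ast.walk(tree)
                     if isinstance(n, ast.FunctionDef)]

        return {
            "last_modified": mtime,
            "comment_lines": comment_lines,
            "avg_identifier_length": round(avg_name_len, 1),
            "function_count": len(functions),
        }

    if __name__ == "__main__":
        print(inspect_source(sys.argv[1]))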


