Re: [sl4] An attempt at empathic AI

From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Mon Feb 23 2009 - 08:30:11 MST


On Mon, Feb 23, 2009 at 9:53 AM, Krekoski Ross <rosskrekoski@gmail.com> wrote:
> On Mon, Feb 23, 2009 at 10:47 PM, Johnicholas Hines
> <johnicholas.hines@gmail.com> wrote:
>> A compiler would not be valuable if we could easily predict everything
>> about its output.
>
> But we can predict everything about its output if we have the input. A
> human-level AI on the other hand....

If I understand correctly, you say "we can predict a compiler, given
its input", and also "we can't predict a human-level AI, given its
input". I think you must be confused about what "predict" means.

If I understand how we're using the word "predict" in this
conversation, it means something like "compute using a short fast
algorithm". There's another sense that you might be imagining,
something like "compute its behavior by simulating it".

Though it's possible that modern compilers could be predicted in the
first sense (they're certainly not optimal, so a shorter, faster
program might compute the same outputs), they're not EASY to predict,
the way a harmonic oscillator is easy to predict.

Conversely, we certainly can simulate an AI if we have the source
code, the computing power, and the input.
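As a toy illustration of that second sense (the stand-in program and
names here are my own invention), "prediction" by simulation is just
running the code on the given input:

    def predict_by_simulation(program, given_input):
        # Simulation: run the program on the input and observe the result.
        return program(given_input)

    # toy_program stands in for any deterministic program (compiler, AI, ...).
    toy_program = lambda n: sum(i * i for i in range(n))

    # Same deterministic program plus same input yields the same output.
    assert predict_by_simulation(toy_program, 1000) == predict_by_simulation(toy_program, 1000)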

So there isn't a sense of "predict" under which compilers are
predictable but AIs are not.

I hope this is clarifying.

Johnicholas
