RE: [SL4] brainstorm: a new vision for uploading

From: James Rogers (jamesr@best.com)
Date: Mon Aug 18 2003 - 17:58:01 MDT


Ben Goertzel wrote:
> Yes, I understood what you were saying before!

Just making sure, since it wasn't clear. :-)

 
> But, you did not give any argument in favor of your
> hypothesis, and I don't understand why you feel this
> statement to be true.
>
> Do you have any communicable justification for your belief in
> this regard??

Let me put it this way: can you name two or more formal universal definitions
of intelligence that are fundamentally orthogonal to each other? I can't think
of a single good example. And if one can reasonably derive all specializations
from one particular universal solution, it would seem VERY odd to me if you
could derive those same specializations from a completely different theoretical
position unless the differences between the universal positions were
superficial -- that is, an artifact of shallow analysis. My intuition is that
this isn't even possible.
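
For concreteness (this is my own illustration, not anything from your message),
the sort of thing I mean by a "formal universal definition" is along the lines
of Hutter's AIXI model, which selects each action by an expectimax over all
environment programs consistent with the agent's history, weighted by program
length. In LaTeX notation:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m)
         \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over candidate environment
programs, and \ell(q) is the length of q, so simpler environments carry
exponentially more weight. Any rival "universal" definition has to disagree
with something at this level, and I can't see where the non-superficial room
to disagree is.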

Therefore, all reasonable implementations will be approximations of the one
solution and will necessarily be similar in every aspect that matters. The
efficacy of an algorithm in the universal case generally improves the closer
the approximation is to the ideal, and I don't know why that would be different
here. In fact, the empirical evidence seems to suggest that this is
*definitely* the case for AGI. What makes many broken AGI implementations
"broken" isn't that they fail to be approximations of the One True Solution;
it's that their efficacy is so poor that, in the real world on real machines,
they may as well not be approximations at all.

IMPORTANT POINT: I think it would be trivial to show that *most* AGI designs
are theoretically valid approximations of the One True Solution (e.g., by
asking whether the design is an FTM). The real distinguishing factor is the
efficacy of the design in an algorithm space that punishes poor efficiency so
severely that many designs are rendered intractable in the general case on
real machines.
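
To make that concrete, here is a toy sketch (mine, in Python; the fixed
two-environment mixture is purely an assumed stand-in for the Solomonoff
prior) of the expectimax planning loop at the core of an AIXI-style agent.
Even with everything else idealized away, the naive search tree grows as
(|actions| * |percepts|)^horizon:

    # Toy AIXI-style expectimax (illustration only). True AIXI mixes
    # over *all* programs weighted 2^-length; here a fixed 50/50 mix of
    # two deterministic environments stands in for that prior.

    ACTIONS = (0, 1)
    PERCEPTS = ((0, 0.0), (1, 1.0))   # (observation, reward) pairs

    def mixture_prob(action, percept):
        # Env A rewards action 0; env B rewards action 1. Each env emits
        # percept (1, 1.0) when it rewards the action, else (0, 0.0).
        _, reward = percept
        p_a = 1.0 if (reward > 0) == (action == 0) else 0.0
        p_b = 1.0 if (reward > 0) == (action == 1) else 0.0
        return 0.5 * p_a + 0.5 * p_b

    def expectimax(horizon):
        # Best achievable expected total reward over the given horizon.
        # Node count grows as (len(ACTIONS) * len(PERCEPTS)) ** horizon.
        if horizon == 0:
            return 0.0
        best = float("-inf")
        for action in ACTIONS:
            value = 0.0
            for percept in PERCEPTS:
                p = mixture_prob(action, percept)
                if p:
                    value += p * (percept[1] + expectimax(horizon - 1))
            best = max(best, value)
        return best

    print(expectimax(10))   # tractable toy; at horizon 100 it is hopeless

A design can be a perfectly valid approximation at this level of description
and still be useless, because nothing in the "theoretically valid" part
collapses that exponent.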

I'll explicitly state the caveat that my hypothesis is strictly about AGI, not
about specialized sub-fields of AI -- it seems obvious to me that it wouldn't
apply to specialized AI. It is less that I have proof of my hypothesis than
that I've never seen evidence to the contrary. Most of the "contrary" examples
I can think of were either the product of shallow analysis or didn't really
address the underlying issue.

Perhaps part of the problem is that it isn't clear to me what your
counter-hypothesis is.

Cheers,

-James Rogers
 jamesr@best.com


