From: Eliezer S. Yudkowsky (email@example.com)
Date: Wed May 10 2006 - 18:53:48 MDT
Ben Goertzel wrote:
>> We know what the Godelian restrictions are - but there's a
>> difference between knowing that, and being able to say that Godelian
>> restrictions imply limitations for AIs.
> They do imply limitations for AIs (or any other finite systems), but
> the question is how relevant these limitations are to the issues that
> interest us (e.g. ongoing Friendliness through iterated radical
> self-modification). I don't know...
What I'm saying is: tell me something I can't *do* because of Godel -
not something I can't *believe*, or can't *assert*, but something I
can't *do*. Show me a real-world optimization problem I'll have trouble
solving - say, a difficult challenge that requires me to rewrite my own
source code, which I can't do because of Godelian consideration XYZ...
This is the sense in which I know the Godelian restrictions, but I
can't yet say that they imply limitations for AIs.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence