(no subject)

From: Chris Healey (chealey@unicom-inc.com)
Date: Thu Feb 26 2004 - 10:39:53 MST


Hey Ben :)

Yes. It is a huge problem.

When I mentioned "primitives", my meaning was only that once a complex
concept is represented, it can be used as a logical primitive for more
complex ideas.

I'd expect that maintaining a model of human-equivalent understanding
WOULD be performed in some fashion, though perhaps subsumed as a
special case of a larger modeling system supporting representations of
the AGI's past cognitive capabilities and how they have changed over
time. I find it highly likely that an AGI diligently pursuing
self-modification as an expected utility of its supergoal would
implement satisfactory controls for arbitrary reversion to previous
content and structural states, should that become necessary. This
becomes less important as the AGI's predictive horizon increases, but
when the interval between an error and its correction exceeds the
predictive window that was in place at the time of the error, a
mechanism of this type would seem to be required in at least SOME of
the important cases.
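To make that concrete, here's a rough Python sketch of what I have in
mind: a checkpoint registry that records each self-modification along
with the predictive window in effect at the time, and recommends a
reversion when an error surfaces outside that window. This is purely
illustrative; the names (Checkpoint, ReversionController) and the
structure are mine, not drawn from any actual AGI design.

    import copy
    from dataclasses import dataclass

    @dataclass
    class Checkpoint:
        # Snapshot taken just before a self-modification was applied.
        step: int                # time step of the modification
        predictive_window: int   # prediction horizon in place at that time
        content: dict            # conceptual content (deep copy)
        structure: dict          # structural configuration (deep copy)

    class ReversionController:
        # Keeps per-modification checkpoints and decides whether an
        # error detected later requires reverting to a previous state.

        def __init__(self):
            self.checkpoints = []

        def record(self, step, predictive_window, content, structure):
            self.checkpoints.append(Checkpoint(
                step, predictive_window,
                copy.deepcopy(content), copy.deepcopy(structure)))

        def handle_error(self, error_step, origin_step):
            # origin_step: step of the modification now believed wrong.
            # Returns the checkpoint to restore, or None if the error
            # still falls inside the predictive window that was in place
            # back then, in which case forward correction should suffice.
            cp = next((c for c in reversed(self.checkpoints)
                       if c.step <= origin_step), None)
            if cp is None:
                return None
            if error_step - origin_step > cp.predictive_window:
                return cp       # error outran the window: revert
            return None         # within the window: correct in place

The point of keying the decision on the window recorded at the time of
the modification, rather than the current one, is exactly the case
above: the error/correction interval has to be compared against what
the system could actually foresee when it made the change.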

I will agree with your position, from a few threads ago, that this
rapidly iterating self-modification should NOT occur before the AGI is
mature enough to near-perfectly predict programmer-generated
modifications, and to consistently suggest better ones (with full
programmer review). Once it has reached this level, however, the
conceptual network represented in its mind should be interlinked
enough to absorb serious errors and self-correct, or at least to
correct them better than we could. Tack on a healthy safety margin,
double it, and I am starting to think the AGI might remain stable.
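For what it's worth, that gating criterion could be stated as a toy
check like the one below. Again, just an illustrative sketch; the
thresholds and parameter names are mine, and "near-perfectly" isn't
formalized anywhere in this thread.

    def self_modification_unlocked(prediction_accuracy,
                                   proposals_reviewed,
                                   proposals_accepted,
                                   accuracy_threshold=0.999,
                                   acceptance_threshold=0.999,
                                   min_reviewed=1000):
        # prediction_accuracy: fraction of programmer-generated
        #   modifications the system predicted before seeing them.
        # proposals_reviewed / proposals_accepted: counter-proposals the
        #   system offered, and how many survived full programmer review
        #   as genuine improvements.
        if proposals_reviewed < min_reviewed:
            return False
        acceptance_rate = proposals_accepted / proposals_reviewed
        return (prediction_accuracy >= accuracy_threshold
                and acceptance_rate >= acceptance_threshold)

The placeholder thresholds are doing all the work, of course; deciding
what counts as "near-perfect" and how large the review sample has to be
is the real (unsolved) problem.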

But it would not be a purely structural solution; it would depend on
both structural and conceptual aspects. Since it is looking like we
might have AGI before we have a complete structural theory, my
interest is turning toward how structural and content components could
interact to produce self-modifications that progressively resolve
structural ambiguities.

Of course, the standard disclaimer applies: I'd say we have more gaps
than understanding right now, so we should assume we ARE missing
something important that we simply don't even recognize yet.

-Chris H

-----------------------
On Thursday 2/26 Ben G wrote:

Hi,
 
I understand that your "singly-rooted goal hierarchy" refers to the
goal system only.
 
But a big problem is that the primitives in terms of which the goal
system is defined are not really likely to be "primitive" -- they're
more likely to be complex human-culture concepts formed from complex
amalgams of other complex human-culture concepts ... not the sort of
thing that's likely to remain stable in a self-modifying mind....
 
Of course, you can give the AI the goal of maintaining a simulacrum of
the human understanding of these primitives, even as it transcends
human understanding and sees the limitations and absurdities of human
"primitive" concepts ... but I'm doubting that *this* is a stably
achievable goal... for similar reasons...
 
-- Ben


