RE: [agi] Future AGI's based on theorem-proving

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 23 2005 - 13:16:46 MST


> When a proposed system design turns out to require fancy emergency
> patches and somewhat arbitrary set points to achieve part of its
> function, then perhaps that's a hint that it's time to widen-back and
> re-evaluate the concept at a higher level.
>
> - Jef

I dunno, Jef ;-) ...

The human brain seems to have a lot of emergency patches and arbitrary set points in it, and so does every *practical* AI design I've ever seen.

The problem may be that "safety" is not a simple concept...

As Moshe pointed out, if one wants to trust in the power of elegant generalization, then one can simply rely on the rule I called R_2 in my second paper.

This has no emergency patches and no arbitrary set points... it's beautiful and pretty...

Does this mean it's better than the patched-up version? I'm not so sure.
Elegance is a virtue, but not the only one...

-- Ben
