Re: [sl4] An attempt at empathic AI

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sun Feb 22 2009 - 15:09:22 MST


--- On Sun, 2/22/09, Johnicholas Hines <johnicholas.hines@gmail.com> wrote:

> On Sun, Feb 22, 2009 at 3:42 PM, Matt Mahoney
> <matmahoney@yahoo.com> wrote:
> > Unfortunately it is a necessary property of any system
> > that has greater algorithmic complexity than you do (beyond
> > a small language-dependent constant, for those who want to
> > nitpick about the math). You can't simulate (and
> > therefore can't predict) what a system will do without
> > knowing everything it knows.
>
> I think you're thinking about the undecidability of practically all
> predicates about general computer programs. However, it's entirely
> possible to solve the halting problem for particular computer
> programs. We just can't write an algorithm that does it for all
> computer programs.
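
To make that point concrete: for a restricted class of programs, a halting decider is easy even though none exists for programs in general. The toy mini-language below is invented purely for illustration (its instruction names and structure are my assumptions, not anything from the thread): programs are lists of 'add' steps and loops with fixed integer bounds. Every such program halts, so the halting decider for this class is the constant function True, and the exact step count is computable besides.

```python
# Toy bounded-loop language (hypothetical, for illustration only).
# A program is a list of instructions:
#   ('add', n)              -- one primitive step
#   ('loop', count, body)   -- run `body` exactly `count` times
# Because every loop bound is a fixed constant, every program halts.

def steps(program):
    """Total primitive steps executed by a bounded-loop program."""
    total = 0
    for instr in program:
        if instr[0] == 'add':
            total += 1
        elif instr[0] == 'loop':
            _, count, body = instr
            total += count * steps(body)
    return total

def halts(program):
    """Halting decider for this restricted class: trivially True."""
    return True

prog = [('add', 1), ('loop', 3, [('add', 5), ('loop', 2, [('add', 1)])])]
print(halts(prog))   # True
print(steps(prog))   # 1 + 3 * (1 + 2 * 1) = 10
```

The decider is trivial precisely because the class was chosen to make it so; the undecidability result only says no single algorithm works for all programs.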

No, my argument is not based on the halting problem or Rice's theorem. It is based on information theory (with knowledge as a surrogate for intelligence).
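
One way to write the argument down (a sketch, with K denoting Kolmogorov complexity relative to a fixed reference machine):

```latex
% If A can exactly simulate B, then a shortest description of A,
% plus a constant-size simulation harness, also describes B's
% behavior, so
K(B) \;\le\; K(A) + c,
% where c is the small language-dependent constant mentioned above.
% Contrapositive: if K(B) > K(A) + c, then A cannot simulate B,
% and therefore cannot predict everything B will do.
```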

> We can make strong arguments (maybe not proofs, probabilistic and/or
> informal arguments) about how a modular system will behave, by
> inspecting its structure. Not all systems have the necessary structure
> to make arguments about them, of course.
>
> We should strive to make the AI, or the AI seed, at least
> somewhat analyzable, rather than holographic.

"Analyzable" and "modular" imply low algorithmic complexity. Do you want a system that is predictable, or one that is smarter than you? You can't have it both ways.

-- Matt Mahoney, matmahoney@yahoo.com



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT