Re: [sl4] Re: More silly but friendly ideas

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Sat Jun 28 2008 - 17:42:57 MDT


On Saturday 28 June 2008 03:54:52 am Vladimir Nesov wrote:
> On Sat, Jun 28, 2008 at 2:30 PM, Stuart Armstrong
>
> <dragondreaming@googlemail.com> wrote:
> >>> I don't see at all analogy between goals and axioms.
> >>
> >> Not at all? I don't believe you are being entirely candid with me, I
> >> think you do see that analogy.
> >
> > I don't actually. I thought I did initially, but then when I analysed
> > it, the whole thing fell apart. Goals seem to be the opposite of
> axioms; they are the end point, not the beginning, of the process. An
> > AI with a goal X will be building a sequence of logical steps that end
> > up with X, then compare this with other sequences with similar
> > consequences; this is the reverse construction to an axiom.
>
> You are discussing specific search algorithms now, not a problem
> statement that needs to be addressed by whatever algorithm is best to
> do that. Both axioms and goals specify preference on the set of all
> possibilities, axioms specify a clear-cut language, and goals specify
> a preference distribution. If your axiom is having the goal
> accomplished, you want to build a proof that you have an
> action-sequence leading to the goal. If the goal is reached last
> temporally, that doesn't mean the search algorithm also needs to
> place the goal at the end of its runtime.

Well, I would assert that you need both goals and axioms AND rules of
inference AND a current state.
1) Goals tell you where you want to get.
2) Axioms tell you what you could try to do.
3) Rules of inference tell you which state transitions are legal.
4) Current state tells you where you are right now.
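As a minimal sketch (my own illustration, not anything from NARS or from the thread), those four ingredients map directly onto an ordinary state-space search. Here the "rules of inference" are a hypothetical `transitions` function yielding legal successor states, with the axioms baked into what it permits:

```python
from collections import deque

def search(current_state, goal_test, transitions):
    """Breadth-first search from the current state (4) toward a goal (1).

    `transitions` implements the rules of inference (3): given a state,
    it yields the legal successor states.  Which moves exist at all --
    the axioms (2) -- is baked into `transitions`.
    """
    frontier = deque([(current_state, [])])   # (state, path so far)
    seen = {current_state}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):                  # goal reached (1)
            return path + [state]
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [state]))
    return None                               # goal unreachable

# Toy example: states are integers; the legal moves are "add 1" and
# "double"; the goal is to reach 10 starting from 1.
plan = search(1, lambda s: s == 10, lambda s: (s + 1, s * 2))
# plan is a shortest legal path, e.g. [1, 2, 4, 5, 10]
```

Note the goal sits at the start of the problem statement but at the end of the discovered path, which is the point Vladimir makes above about where the goal lands in the algorithm's runtime.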

OTOH, NARS (the Non-Axiomatic Reasoning System) appears to explicitly rule
out the need for axioms. (I don't currently understand the code. I'm going
from the name.)
Also, that description seems to leave out (or probably just hides) a recursive
procedure that lets one look at the current state and compute using a
simpler model.

As such, it's plausible that (at least using my definitions) goals and axioms
are only very loosely analogous. That doesn't mean it couldn't be done, but
it doesn't look trivially obvious.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT