Re: [sl4] Re: More silly but friendly ideas

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Sun Jun 29 2008 - 11:38:02 MDT


On Sunday 29 June 2008 07:52:08 am John K Clark wrote:
> On Sat, 28 Jun 2008 "Charles Hixson"
>
> <charleshixsn@earthlink.net> said:
> > Goals tell you where you want to get
>
> Goals tell you (or better said I tell the goals) where I want to be
> today, they say nothing about where I will want to be tomorrow. That is
> anybody’s guess.
Goals come in many varieties: short term, long term, in between; high
priority, low priority, in between; high importance, low importance, in
between; and probably a few dimensions I've left out. (Priority says how
quickly a goal needs to be achieved; importance says how much value is placed
on achieving it.)
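
Something like the following sketch, in Python. The names, fields, and the
0..1 scales are placeholders of mine, not anything canonical:

    from dataclasses import dataclass

    @dataclass
    class Goal:
        # All names and value ranges here are illustrative assumptions.
        description: str
        term_seconds: float   # time horizon: short term vs. long term
        priority: float       # how quickly it needs to be achieved (0..1)
        importance: float     # value placed on achieving it (0..1)

    # A short-term, high-priority, medium-importance goal:
    g = Goal("reply to this thread", term_seconds=600.0,
             priority=0.8, importance=0.4)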

Naturally, these dimensions aren't usually pure. Eating breakfast is a
short-term, medium-low-importance, high-priority goal when you're normally
well fed and late for work. On this list we more commonly talk about primary
(top-level) goals and derived goals, but I tend to think of that as an
oversimplification. Basic goals won't have meaningful symbolic names; they'll
be things like "achieve this or that internal state", where the state is a
complex transform of external sense images and the world model, and where
achieving the goal by modifying the state map directly will be "felt" to be
cheating. That means the planning for how to achieve the goal can't plan on
directly modifying the state map. (This is difficult, because you still need
to allow a model of the state map to be modified directly, to permit modeling
the future, the past, or the actions of other entities. I.e., the model needs
to be usable analogously to the way mammals use mirror neurons, to construct
a "theory of mind".)
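
Here's a minimal sketch of that separation, assuming a dict-of-variables
state map (all the names are mine): the planner can freely mutate a copy to
model futures or other minds, but nothing in it can touch the real map.

    import copy

    class StateMap:
        # The agent's real internal state: variable name -> value.
        def __init__(self, values):
            self._values = dict(values)

        def read(self):
            return dict(self._values)   # anything may *read* the real map

    class Planner:
        # Plans only against modifiable *copies* of the state map, so
        # "cheating" (editing the real map to satisfy a goal) can't even
        # be expressed here.
        def __init__(self, state_map):
            self._state_map = state_map

        def simulate(self, actions):
            # The copy is the "model of the state map" that may be
            # modified directly, e.g. to model the future or another
            # entity's state.
            model = copy.deepcopy(self._state_map.read())
            for act in actions:
                act(model)      # each action edits only the model
            return model        # predicted state; real map untouched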

N.B.: I think a large part of the oversimplification in how we think about
goals comes from the serial nature of language. Internally, goals aren't
serial but simultaneous, with values and priorities that shift with both
internal and external state. A stack won't work, though in a
processor-deficient system (i.e., any likely actual implementation) a queue
might be used, with lots of time-slicing. ("Priority queue" generally doesn't
quite capture the meaning of priority that I'm using here.)
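
A rough sketch of the kind of time-slicing I mean, with made-up score and
slice functions standing in for the shifting values:

    def run_goals(goals, score, run_slice, state, ticks):
        # Time-slice among simultaneous goals whose priorities shift
        # with state. score(goal, state) -> current priority;
        # run_slice(goal, state) -> new state after one short slice.
        for _ in range(ticks):
            # Re-score every goal against the *current* state each tick,
            # instead of trusting a priority fixed at insertion time,
            # as an ordinary priority queue would.
            goal = max(goals, key=lambda g: score(g, state))
            state = run_slice(goal, state)
        return state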

>
> > Axioms tell you what you could try to do
>
> Axioms determine what is possible but they don’t tell you anything, you
> must laboriously work out what they mean. The process is called
> “thought”.
They aren't supposed to. The "axioms" are directly understood internally;
they're like a list of operations. Different operations require different
input values and yield different transforms of the data. What they "mean" is
something like apply("+", 2, 2) yields 4, but with the 2s and the 4 replaced
by variables, and with the "+" being the "axiom". They're the things that you
could try to do. Note that some axioms won't be applicable in some
situations, because they require inputs that aren't available.
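
A minimal sketch of axioms-as-operations, including the applicability check;
the representation is just one I made up for illustration:

    class Axiom:
        # An operation the agent could try: a name, the input types it
        # requires, and the transform it applies.
        def __init__(self, name, arg_types, fn):
            self.name = name
            self.arg_types = arg_types
            self.fn = fn

        def applicable(self, args):
            # Some axioms can't be applied: the required inputs aren't
            # available, or are the wrong kind of thing.
            return (len(args) == len(self.arg_types) and
                    all(isinstance(a, t)
                        for a, t in zip(args, self.arg_types)))

    def apply_axiom(axiom, *args):
        if not axiom.applicable(args):
            raise ValueError(axiom.name + ": required inputs unavailable")
        return axiom.fn(*args)

    plus = Axiom("+", (int, int), lambda a, b: a + b)
    assert apply_axiom(plus, 2, 2) == 4   # the example above, generalized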

>
> John K Clark
>
>
> --
> John K Clark
> johnkclark@fastmail.fm


