Re: Pattern recognition

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Fri Jan 09 2004 - 19:34:20 MST


Robin Lee Powell wrote:

>On Fri, Jan 09, 2004 at 01:57:14PM -0800, Charles Hixson wrote:
>
>
>>Robin Lee Powell wrote:
>>
>>
>>>http://www.lojban.org/
>>>
>>></off-topic plug>
>>>
>>>
>>Did that used to be called Loglan?
>>
>>
>
>That's a bit of a loaded question[1], but the short answer is
>"Lojban is a descendant of Loglan; the original Loglan still exists,
>but is quite dead by comparison".
>
>-Robin
>
>[1]: The people who started Lojban actually went to court for the
>right to call themselves Loglan if they wished; the fact that the
>founder actually took them to court to protect his stranglehold on
>the language is an example of why the split occurred in the first
>place.
>
>
>
At one time there was talk of creating a computer-based language parser
(or does one already exist?). One gradual step toward computer-understood
language might be to have a parser for the full language, and a
compiler/interpreter for the parts of the language that the computer
knows how to deal with. Thus "See Spot run. Run, Spot, run!" could be
parsed, but to expect the computer to know what Spot was would be ...
not the right place to start. But if you said "Jane, Jane! What is one
plus two?", the computer should be able to determine the answer to the
question, and also that somebody named Jane was being asked to give the
answer. It might not know what kind of entity a Jane was, but it should
be able to determine that a Jane was expected to understand verbal
interactions. Possibly also that Jane was an instance rather than a
class, i.e., that some particular entity with the name Jane attached was
being addressed. Etc.
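To sketch what I have in mind (just a toy illustration in Python; the
names, the tiny grammar, and the regular expressions are all made up for
the example, not any existing system), the interpreter would act only on
the fragments it actually understands, while the rest merely parses:

    import re

    # Toy sketch only: split an utterance like "Jane, Jane! What is one
    # plus two?" into (addressee, body), and evaluate just the questions
    # the machine knows how to handle. Everything here is hypothetical.

    NUMBER_WORDS = {
        "zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
        "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
    }

    def parse_utterance(text):
        """Peel off a leading vocative ("Jane, Jane!") from the sentence."""
        m = re.match(r"^\s*((?:\w+[,!]\s*)+)(.*)$", text)
        if m:
            # The addressee is an instance, not a class: some particular
            # named entity is being spoken to, even though the program has
            # no idea what kind of entity a "Jane" is.
            addressee = re.findall(r"\w+", m.group(1))[0]
            return addressee, m.group(2).strip()
        return None, text.strip()

    def interpret(body):
        """Handle only the fragment we understand: 'what is X plus Y?'"""
        m = re.match(r"what is (\w+) plus (\w+)", body, re.IGNORECASE)
        if m and all(w.lower() in NUMBER_WORDS for w in m.groups()):
            a, b = (NUMBER_WORDS[w.lower()] for w in m.groups())
            return a + b
        return None  # parsed, perhaps, but outside what the machine can act on

    if __name__ == "__main__":
        for sentence in ["Jane, Jane! What is one plus two?", "See Spot run."]:
            who, body = parse_utterance(sentence)
            answer = interpret(body)
            print(who, "->", "don't know" if answer is None else answer)

A real attempt would of course want a grammar for the whole language
(a machine-parsable one like Lojban's is the obvious candidate), with the
arithmetic fragment as just one of many pluggable interpreters.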

I've often thought that this might be a good starting point, but the
computer languages that I've examined have all been too inflexible to be
extended in this way. It's an interesting question just how much
knowledge of the world is inherent in the language that is used to
describe it. For most of the world the answer is probably "it's at a
pretty high level of abstraction", but when we get right down to math and
engineering, then the levels of abstraction melt away, and a logic
engine should be able to derive a lot about the nature of the world
merely from the language usage. I *think* the reason so much of it is
at a high level of abstraction is that there's a usually valid
assumption of a great deal of common experience and of a basic
similarity of the underlying mental landscape. But this won't be true
of an AI of whatever kind. And neither will the environment that it
develops in be all that similar. Mutual understanding is necessarily
going to be quite difficult. A shared language might help. (And once
one language was handled, it might be possible to learn to translate
others.)


