Self-awareness (Re: AI debate at San Jose State U.)

From: Woody Long (ironanchorpress@earthlink.net)
Date: Tue Oct 25 2005 - 15:02:57 MDT


> [Original Message]
> From: Olie Lamb <olie@vaporate.com>
>
> Woody Long wrote:
>
> >Example of a current, friendly intelligent system exhibiting
> >self-awareness, self-interest, and self-destruction --
> >The Sony Aibo dog
> >
> Dude, you have a VERY different interpretation of the term "self-aware"
> going on there.
> I can deal with people using the term "intelligent" to cover
> non-conscious entities, even though I find this a dumbing-down of the term.
> But self-aware???
> Surely, a basic requirement of an entity being "self-aware" is that it has
> Consciousness. I know the c-word makes a few people round here squirm a
> little, but sometimes you have to tackle the hard problems. (Hee)
>
> Reaction to entities is not awareness of the entities. Bugs react to
> damage to their bodies (~ their selves), but, as far as anyone knows,
> they don't operate on the basis of "My body is in some way different
> from stuff that is not my body". They don't have concepts, let alone
> concepts of the self.
----------------------

lol, of course I should have said precisely "rudimentary, pseudo
self-awareness." We do need more settled definitions in this field. For
example, I also generally accept Honda's and Sony's marketing materials,
which say they are "intelligent systems," though I understand this to mean
rudimentary, pseudo-intelligence.

narrow AI - rudimentary intelligent robot systems such as the above, expert
systems, partial systems, etc.
strong AI - human-like intelligent systems with a conscious self-awareness
of a unique self and unique personal experiences, as you mention.
android - human-shaped SAI robot.

In robotics we need a new term, like "droids" for SAI robots, so that we
get androids, which are human-shaped SAI robots, and paradroids, which are
other-shaped SAI robots (flying paradroids, talking dog paradroids, etc.).

I understand via prior posts that there are at least two species of SAI,
humanoid-intelligence SAI and human-equivalent-intelligence SAI. I won't
disagree as long as they meet the definitional requirements of SAI.

I should tell you where my interest comes from in all this. My business
project is to enter an android in the historic Roboprize.com Prize Fight.
To be entered, the Rules Committee must certify that the entry is an
authentic android. They have asked me what parameters I would use, and I
said it must be driven by an SAI system and pass the Searle Chinese room
test. If not, it should not be considered a consciously self-aware,
thinking, autonomous android, and should be turned down.

What would you accept as an SAI android if you were on the Rules Committee?
Roboprize.com wants the AI community to agree with them that the android
they choose is in fact an android. Could the AI community come to an
agreement that it is or is not an authentic android?

> >However, Sony built it so that when it meets a certain amount of light
> >resistance, it self-destroys this behavior and its jaw goes slack. If a
> >hobbyist tries to get inside and tinker with this behavior, it
> >self-destroys the system, and it is rendered inoperable. It implements
> >what they call in Japan the principle of harmony. It could also be
> >considered an implementation of Asimov's First Law of Robotics.
> >
> >
> Does the entire system destroy itself if there is any tinkering, or do
> just certain types of tinkering result in certain failures?
>
> Either way, the comparison to Asimov belies the flaw in the
> methodology: Rather than structuring the system for safety with a
> ground-up approach, you are describing a system with a safety catch
> "plastered on".
------------------
In fact, that actuator mechanism is not something plastered on, but a
structural, ground-level component. Just because it implements Asimov's
First Law doesn't mean it's plastered on. And there are many ways to thread
the ground structures with safety checks so that the robot never destroys a
person, property, or pet. The robot is driven by goals. If these behaviors
are excluded from the goal list, both as ends and as means, so that no
self-modification, tinkering, or user suggestion can cause them to occur,
then FAI exists. In other words, I predict that we will never read in the
news that a Honda household SAI android has become defective and willed the
destruction of a person, pet, or property.
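
For what it's worth, here is a minimal Python sketch of the goal-exclusion
idea as I read it: forbidden effects can never enter the goal list, whether
as ends or as means, no matter where the proposal comes from. The names used
(Goal, GoalDriver, EXCLUDED_EFFECTS) are hypothetical illustrations only,
not any real robot API.

    # Minimal sketch of "excluded from the goal list, both as ends and as
    # means". All names here are hypothetical, not a real robot interface.
    from dataclasses import dataclass, field
    from typing import List

    # Effects that may never appear in a goal, as the end or as a means.
    EXCLUDED_EFFECTS = {"harm_person", "harm_pet", "destroy_property"}

    @dataclass
    class Goal:
        end: str                                        # intended outcome, e.g. "fetch_ball"
        means: List[str] = field(default_factory=list)  # sub-steps planned to reach the end

    class GoalDriver:
        def __init__(self):
            self.goal_list = []

        def propose(self, goal: Goal, source: str) -> bool:
            # Check the end and every means against the excluded set, regardless
            # of the source (user suggestion, self-modification, tinkering).
            if ({goal.end} | set(goal.means)) & EXCLUDED_EFFECTS:
                return False                            # rejected at the ground level
            self.goal_list.append((source, goal))
            return True

    driver = GoalDriver()
    print(driver.propose(Goal("fetch_ball", ["walk_to_ball"]), "user"))      # True
    print(driver.propose(Goal("guard_house", ["harm_person"]), "self_mod"))  # False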

> >Ken Woody Long
> >www.artificial-lifeforms-lab.blogspot.com


