AI in <what?>

From: Justin Corwin (thesweetestdream@hotmail.com)
Date: Sun May 19 2002 - 06:07:23 MDT


Environment.

This is the question that has been bugging me for the past few days. I've
been focusing my middling-to-high brainpower on writing my response
to several AI approaches, and one recurring issue has been my various
problems with the proposed environment for the AI to live in.

Is an explicit environment for a developing AImind desirable, and what kind
of environment is best?

Now, bear with me, these ideas of mine are new-formed, and thus contain a
fair bit of anthropomorphism, simplistic modelling, and unattributed
assumptions. I'm presenting this in an informal email, so the focus is on
the concepts, rather than my poor formalization skills. Once the concepts
are in a more final form, I'll worry about precise representation. If you
have comments, please post to SL4, so a record is kept, and more comments
can be generated.

As I see it, there are four reasons an AI needs an environment:

1. For training the AImind to accept input.
2. For allowing the AImind to develop mental skills in an interactive
setting (action/response kind of stuff; see the sketch after this list).
3. Possibly for keeping the AImind close to us in its mental landscape.
While it may be possible to make a mind with an entirely disembodied
intelligence, just I/O ports and internet access, such a mind may have
problems relating to us, since so many of our language-objects are
physically oriented.
4. To allow the AImind to be more effective when it begins acting in the
real world. If it has to extrapolate 'everything', it'll take longer and be
more error-prone.
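
To make points 1 and 2 concrete, here is a minimal sketch in Python of the
kind of sense/act loop an explicit environment gives the AImind to train
against. Every name in it is invented purely for illustration; nothing here
is a proposal for an actual design.

class Environment:
    """A toy one-dimensional world: the mind senses its position and can move."""

    def __init__(self, size=10):
        self.size = size
        self.position = 0

    def sense(self):
        # Reason 1: structured input the AImind must learn to accept.
        return {"position": self.position, "size": self.size}

    def act(self, action):
        # Reason 2: actions produce responses the AImind can learn from.
        if action == "right":
            self.position = min(self.position + 1, self.size - 1)
        elif action == "left":
            self.position = max(self.position - 1, 0)
        return self.sense()

class AIMind:
    """Stand-in for the actual mind design; it just wanders to the right."""

    def decide(self, percept):
        if percept["position"] < percept["size"] - 1:
            return "right"
        return "left"

env = Environment()
mind = AIMind()
percept = env.sense()
for step in range(5):
    percept = env.act(mind.decide(percept))   # the action/response cycle
    print(step, percept)

The point is only that reasons 1 and 2 are really demands that this loop
exist at all, in some form.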

While it's certainly not a closed book, I would like to believe that an
environment's importance is accepted and accounted for by most parties
reading this.

There are, of course, downsides. Providing an environment for an AImind ups
complexity. Such an AImind requires modalities for relating to the input it
receives, and possibly specialized mental structuring to interpret the
significance of what it sees/feels/hears/smells. But, as we see with Homo
Sapiens Sapiens, such modalities come in handy in surprisingly disparate
situations.

ex: Visual-Spatial orientation of memory: Memory Palaces, The Amphitheatre
of Knowledge, Cicero's Room, etc. Such entanglement of visual processing in
memory can lead to great gains in accessibility and reliability of memory
data.

ex: State-Associative Skills. Many humans report skills that are associated
with a kinesthetic environment, but have little to do with kinesthetics. An
example would be many military personnel's inability to think strategically
while sitting down (Patton, Napoleon, etc.), or a mathematician's need to use
his hands while exploring a multidimensional problem (personal example: my
Applied Bio-mathematics professor at UofUtah).

These examples make a case that mental organization often proceeds along
whatever modalities are present and dominant. This implies two things: that
modalities may improve cognition through their detail and organization, and
that minds may be significantly different, given different modalities.

This leads me to conclude that richer modalities may lead to richer mental
organization.

But do richer environments really bring a quantifiable advantage?
Unfortunately, there is little experimental data one can use to resolve such
a question, so I beg the reader's indulgence, and apply the following thought
experiment:
          Suppose a human was born, whose eyes were twice as acute.
        He might have a problem initially, as our eyes are a result of
        environmental balance, and would already be more acute, if this
        presented an evolutionary advantage. However, living a life of
        a higher resolution, as it were, does seem to imply some
        advantages that may not be obvious to an evolutionary process.
           For one, his visual cortex would be working harder all the
        time (given twice the input), and such training may make for a
        faster bootstrap for certain processes (recognizing objects,
        visual categorization, etc.). Also, such increased data implies
        that such a human may be able to apply categorizations of
        visual data that would be non-obvious to normal humans (by
        half-shade, by smaller differences in size, shape, etc.). Such
        increased categorization in visual matters could in theory mean
        that increased categorization would occur in many different
        areas, given that visual-like categorization in other mental
        areas has already been shown to exist.
            This implies that such an improvement may lead to an
        improvement in mental state, in both complexity and
        precision. This may lead to problems later (the human brain can
        presumably only take so much complexity) but for the purposes of
        our discussion may be taken as a proto-proof of concept.
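
A toy way to see the acuity point in code (purely illustrative numbers, not a
claim about real vision): quantize the same four stimuli at two resolutions,
and the sharper observer separates categories that the normal observer lumps
together.

def categorize(value, levels):
    """Quantize a brightness in [0, 1) into one of `levels` bins."""
    return int(value * levels)

scene = [0.13, 0.20, 0.63, 0.70]             # four subtly different stimuli

normal = [categorize(v, 8) for v in scene]   # ordinary acuity: 8 shades
acute = [categorize(v, 16) for v in scene]   # doubled acuity: 16 shades

print(len(set(normal)), "categories at normal acuity")    # prints 2
print(len(set(acute)), "categories at doubled acuity")    # prints 4

More distinguishable categories in the raw input is exactly the sort of thing
that could propagate into categorization elsewhere in the mind.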

So richer environments may in fact lead to richer mental structure. However,
this doesn't immediately answer the question of whether an explicit
environment is desirable, and if so, what kind of environment is best.

So tradeoffs must be examined.

1. High mental complexity post-design is certainly desirable, so a process
that leads to faster training is very valuable. However, such environmental
complexity also adds to the complexity of design, and may contribute to
design failure.

--
I believe that in this case, complexity is too important to let go, and the
design hit should be taken. We don't want an AImind we have to relate to
using 786432-pixel 2D metaphors. That would be annoying, and may represent a
difficulty the AI would have trouble fixing when in the Self-Modification
stage.
2. Design failure. It's possible that a given design for an AI may fail (whee,
I'm a geeeeenius...), and it's important to evaluate what may cause this. A
high-complexity module relating to an environment may very well be the death
knell for such a project, given how high the complexity is anyway. However,
the question is, do you want your AIproject to succeed as a software
project, or as an AI? Because insufficient mental complexity may cause your
AI to fail not because of coding failure, but just because of nothing
happening. Thus, environmental richness may be a crucial factor in
allowing emergent mindstructures to emerge at all.
3. Relationship to humans. Humans have a pretty rich environment: 6 senses,
good recall. An AI with a less rich environment may have difficulty relating
to us. By contrast, one with a freakish 12D environment would probably find
us funny-looking. On the upside, really complexish environments are probably
beyond us anyway.
Hm. Since it's late, and I don't want to muddle my thinking by touching it
up, I'm sending this in now.
My basic conclusion is that the optimal tradeoff seems to be a concrete
instantiation of the AI in a virtual or sandboxed environment slightly lower
in detail than our own. It seems to offer the best of all the options, while
raising the complexity to a reasonable (if still ridiculous) level. I would
like an AI that has a concrete concept of itself in space, and learns in an
environment similar to my own. It seems that such an AI would be the most
useful, relatable, and intelligent, given the other tradeoffs.
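For what it's worth, here is that tradeoff as a (completely made-up)
configuration sketch: the sandbox's level of detail is a knob we set
somewhere below reality but well above a flat grid of pixels. The field
names and numbers are invented, not a design.

from dataclasses import dataclass

@dataclass
class SandboxConfig:
    spatial_resolution_m: float  # metres per grid cell; smaller = more detail
    senses: tuple                # modalities the environment feeds the AImind
    physics_fidelity: float      # 0.0 = none, 1.0 = full simulation

REALITY = SandboxConfig(0.001, ("sight", "sound", "touch", "smell",
                                "taste", "proprioception"), 1.0)
FLAT_GRID = SandboxConfig(1.0, ("sight",), 0.0)   # the 786432-pixel metaphor
PROPOSED_SANDBOX = SandboxConfig(0.01, ("sight", "sound", "touch",
                                        "proprioception"), 0.8)
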
Again, I apologise for informalism, and for the bad spelling, grammar, and 
thought.
hatemail to:
Justin Corwin
outlawpoet@****.com
"the stars are: hell"

