Re: Cognitive priming

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Sun Nov 03 2002 - 11:47:31 MST


On Saturday 02 November 2002 21:15, Emil Gilliam wrote:
> Quoting LOGI:
>
> [Matching imagery against stored memories is going to be very
> computationally expensive,
> but...] One hopeful sign is the phenomenon of cognitive priming on
> related concepts
> [Meyer71], which suggests that humans, despite their parallelism,
> are not using pure brute
> force.
>...
>
> - Emil

Yes, it will be expensive. But one can limit the amount of data that
needs to be processed in various ways. One way is to convert pixelated
images into vector images; this has the additional advantage of
rendering the representation independent of scale. Another way is to
keep a chronfile index, which lets you filter out anything that
happened at an inappropriate time. Another is to index the imagery
with various feature codes; again, this allows a lot of material to be
filtered out. And finally, composite objects (which is almost
everything) should have some kind of fancy hashing algorithm, to allow
filtering out of anything that lacks the appropriate features. This
would let one retrieve, e.g., only the images containing a red circle
and a blue square, relatively cheaply.
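To make the feature-code filtering concrete, here is a rough sketch in
Python (the FeatureIndex class and the particular codes are purely
illustrative, not a proposal for the actual encoding):

    from collections import defaultdict

    class FeatureIndex:
        """Inverted index: coarse feature code -> set of image ids."""
        def __init__(self):
            self._index = defaultdict(set)

        def add(self, image_id, feature_codes):
            # Register an image under each of its coarse feature codes.
            for code in feature_codes:
                self._index[code].add(image_id)

        def candidates(self, required_codes):
            # Intersect the postings sets: only images bearing ALL the
            # required codes survive, so any expensive image comparison
            # runs against a small candidate list.
            sets = [self._index[c] for c in required_codes]
            return set.intersection(*sets) if sets else set()

    idx = FeatureIndex()
    idx.add("img-001", {("circle", "red"), ("square", "blue")})
    idx.add("img-002", {("circle", "red")})
    print(idx.candidates([("circle", "red"), ("square", "blue")]))
    # -> {'img-001'}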

This entire image would then be fed into a model that captures the 3-D
relationships between the parts, and it is this model that would be
stored. Otherwise perspective can seriously damage the ability to
associate similar images. Note, though, that the model has lost a lot
of detail along the way. It is instances of this model that are
indexed. And the model allows additional data to be added as time goes
by. E.g., when you first see someone's back, that information is used
to update the model you already had of their front. Ditto for side
views. Not a simple process, but a lot less computationally expensive
than, say, ray-tracing.
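A rough sketch of the "update the model with each new view" idea (the
ObjectModel class and its fields are hypothetical, just to show the
shape of the thing):

    class ObjectModel:
        """Persistent model; raw imagery is discarded after merging."""
        def __init__(self):
            self.parts = {}   # part name -> dict of observed attributes

        def integrate_view(self, observations):
            # Merge one view's observations into the stored model.
            # Parts seen for the first time (e.g. the back) are added;
            # known parts just gain whatever attributes were missing.
            for part, attrs in observations.items():
                self.parts.setdefault(part, {}).update(attrs)

    person = ObjectModel()
    person.integrate_view({"front": {"shirt_color": "blue"}})
    person.integrate_view({"back": {"hair_length": "short"}})
    # Both views now live in one model; no pixel data was retained.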

Also, the pieces of this model will somehow need to be extractable and
nameable separately. E.g., a huge plaster hand is seen to be the same
(in some sense) as the thing at the end of your wrist. This is because
the gross features of the two can be put into a "sort of" one-to-one
correspondence. That it has fingers matters more than, e.g., how many
fingers it has. Again, within limits. And we clearly understand
(model) the deep connection between a left hand and a right hand, so
most of their model is probably shared.
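As a sketch of that "sort of" one-to-one matching (again, the
representation is made up): compare the set of gross feature types,
ignoring counts, sizes, and handedness:

    def gross_features(model):
        # Reduce a part model to the set of feature types it exhibits,
        # discarding counts, scale, and left/right orientation.
        return {f["type"] for f in model["features"]}

    def roughly_same(a, b):
        # A six-fingered plaster hand still reads as a hand, because
        # "has fingers" matters more than "has five fingers".
        return gross_features(a) == gross_features(b)

    plaster = {"features": [{"type": "finger"}] * 6 + [{"type": "palm"}]}
    my_hand = {"features": [{"type": "finger"}] * 5 + [{"type": "palm"}]}
    print(roughly_same(plaster, my_hand))   # -> True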

Perhaps what I'm saying is that imagery is rarely compared directly
with other imagery. What get compared are the models via which the
images are understood.

N.B.: A lot of work would need to be done during the original
conversion of pixelated images to vectors. You probably need not just
basic shape codes but also surface-texture codes. And you probably
shouldn't expect to get enough information to code things properly out
of just one still image. You are likely to need to know how things
normally move in relation to each other, so that you can determine
natural boundaries (see the sketch below). Stereoscopic images are
also quite useful in determining how the bumps and hollows of a
surface work. (Camouflage, after all, is used precisely to render
simple color-based assumptions dubious.)
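As an illustration of using common motion to find natural boundaries
(the patch representation is assumed; real optical flow would be far
messier):

    def segment_by_motion(patches, tolerance=1.0):
        # patches: list of (patch_id, (dx, dy)) frame-to-frame motion.
        # Patches whose displacement agrees within the tolerance are
        # grouped together and presumed to belong to one object.
        groups = []   # list of (reference motion, member ids)
        for pid, (dx, dy) in patches:
            for (gx, gy), members in groups:
                if abs(dx - gx) <= tolerance and abs(dy - gy) <= tolerance:
                    members.append(pid)
                    break
            else:
                groups.append(((dx, dy), [pid]))
        return [members for _, members in groups]

    # A ball moving against a static background separates cleanly,
    # even where camouflage defeats purely color-based segmentation:
    print(segment_by_motion([("bg1", (0, 0)), ("bg2", (0.2, 0)),
                             ("ball", (5, 1))]))
    # -> [['bg1', 'bg2'], ['ball']]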


