From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun May 19 2002 - 18:00:11 MDT
> >I don't understand your point here. What is wrong with 2D metaphors? If
> >you can get an AI that far, then that's freakin' great! Tackle 1D first,
> >then 2D, then 3 and 4...
> I agree that getting to this point would be great, and
> encouraging. However,
> you don't want an AI that is 'conceptually limited' to representations
> containing 786432 pixels and 2 dimensions.
> See Eliezer's example of tying an AI's time perception directly to the
> system clock, the poor thing is thus unable to think about units of time
> smaller than the smallest system-time unit.
> that would be bad...
If one explicitly engineers an AGI around these limited perception-action
domains, one is fucking up.
If one builds a generalized cognitive engine and initially trains it on
these limited perception-action domains, then things will be OK in my view.
I believe a good cognitive engine will be able to generalize a lot of useful
knowledge from a 2D domain like Novamente ShapeWorld or A2I2's
perception-action environment, to broader and richer environments.