Thoughts on AI testing

From: Gordon Worley (redbird@rbisland.cx)
Date: Wed Oct 24 2001 - 17:37:04 MDT


A while back there was a short thread about how to test how
smart/intelligent AIs are. A number of interesting ideas came out
of it, and about an hour ago, while sitting in my Statistics II
class, I realized a good way to test whether an AI has human-level
intelligence: see if it can choose the correct statistical method
with which to analyze a set of data (computers are already
exceedingly good at performing the analysis itself, and they make
certain analyses possible that humans couldn't reasonably do on
their own). To make that choice, one must understand the content
of the problem (what the numbers mean), the nature of the data
(qualitative or quantitative, parametric or nonparametric, etc.),
and what kind of results are expected (there is more going on, but
the other considerations can be treated as special cases of those
three). This means the AI must be able to read some kind of
language, understand what it means, and have the heuristics to
take the information ve is given and make the right choice of
test.
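
As a rough illustration (a minimal sketch, assuming a simplified
textbook breakdown of data types and tests; the function and field
names here are hypothetical, not anything from the thread), the
final choice looks something like a decision table:

    from dataclasses import dataclass

    @dataclass
    class DataDescription:
        outcome: str          # "quantitative" or "qualitative" (categorical)
        groups: int           # number of groups being compared
        paired: bool          # paired/repeated measures, or independent samples?
        parametric_ok: bool   # do normality/equal-variance assumptions hold?

    def choose_test(d: DataDescription) -> str:
        """Return the name of a plausible statistical test for the described data."""
        if d.outcome == "qualitative":
            return "McNemar's test" if d.paired else "chi-square test of independence"
        if d.groups == 2:
            if d.parametric_ok:
                return "paired t-test" if d.paired else "two-sample t-test"
            return "Wilcoxon signed-rank test" if d.paired else "Mann-Whitney U test"
        if d.groups > 2:
            if d.parametric_ok:
                return "repeated-measures ANOVA" if d.paired else "one-way ANOVA"
            return "Friedman test" if d.paired else "Kruskal-Wallis test"
        return "one-sample t-test" if d.parametric_ok else "Wilcoxon signed-rank test"

    # Two independent groups, quantitative outcome, assumptions violated:
    print(choose_test(DataDescription("quantitative", 2, False, False)))
    # -> Mann-Whitney U test

Of course, the hard part for the AI is everything upstream of this
table: reading a problem statement in some language and filling in
a description like the one above correctly.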

Personally, I've never been a huge fan of the Turing Test, but this
seems like a good alternative that requires the same kinds of skills,
just in a different domain.

-- 
Gordon Worley                     `When I use a word,' Humpty Dumpty
http://www.rbisland.cx/            said, `it means just what I choose
redbird@rbisland.cx                it to mean--neither more nor less.'
PGP:  0xBBD3B003                                  --Lewis Carroll

