Re: [sl4] Re: goals of AI

From: Stuart Armstrong
Date: Tue Nov 24 2009 - 10:21:14 MST

> -- you say obviously, but it seems much less than obvious to me. There are
> aspects of our "intelligence" that I think may not truly be physical, not a
> neurochemical process, not a logic system that can be mathematically
> represented.

Let's say that you're asking whether there's a "ghost in the machine",
something beyond what a simple algorithm can capture. It's a very
natural question to ask, especially given our genetic and cultural
baggage.

But it's also the wrong question to ask, along the lines of "is there
an invisible dragon that no one can detect in my garage?".

What you're asking is essentially: "There is something, which I can't
really define, that prevents intelligent AIs from being built. Prove
to me that this something doesn't exist."

The onus is unfortunately on you to define this thing, and to say why
it precludes AIs. No-go theorems (statements that something can't be
done) normally require very strong justifications to be credible,
because they're not saying "this happens if we do this" or "this
approach won't work", but "of all the possible ways of doing this,
most of which are unimaginable to me or to anyone alive, NONE of them
will work".

Compare what you need to justify "the moon moves according to Newton's
laws, with great probability" (a series of simple observations) with
what you need to justify "heavier-than-air flight can never happen,
with great probability".

> What about motivations such as compassion, comfort, hate, love, fear of
> death,
> desire to defeat an enemy, etc.
> What about them
> These motivations are what drive us to "advance" and they color the
> solutions we create. I think they are worthy of examination to see how
> integral they might be.

Some version of these will no doubt be useful in an AI - you wouldn't
want to build an intelligent machine with no motivations at all, would
you? Most probably, you couldn't.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT