From: keith (firstname.lastname@example.org)
Date: Thu Dec 01 2005 - 06:07:57 MST
It might not be so easy to control a machine whose
intelligence was based on language. As in the novel 1984,
the meaning of words can change. Just look at the US.
Perhaps language referring to objects and behaviors could be
tied down. But all the really important language humans use
refers to subjective feeling states, for which the machine
might have no computational referents.
If it has no aesthetics or goals of its 'own' other than
the admonition 'be nice', it could only know it was being
'nice' by interrogating individual humans who have the
subjective capacity - the 'feelings' - to know what 'nice' is.
Surely, no amount of 'cleverness' at knowing how bits and
pieces can fit together - either matter or information -
could really change that.
If we actually discovered what 'pleasure' was, and
programmed the machine to take 'pleasure' in making humans
'feel nice', a friendly machine might engineer for us all a
kind of heaven, I guess.
But perhaps a better goal, rather than friendliness, might
be to strive to make humans wise - even if most people
'prefer' their delusions, 'prefer' their gods and their
prejudices, even if they don't actually 'like' the truth or
wisdom very much.
Perhaps growing up is more important - a wiser choice -
than being nice and happy and friendly.
Perhaps then, within a common shared wisdom, there would be
less antagonism between the goals of the machine and those
of humans. There might then be less need to constrain the
goal system - the aesthetics - toward 'friendliness'.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT