RE: The hazards of writing fiction about post-humans

From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Tue May 03 2005 - 18:30:53 MDT


--- Ben Goertzel <ben@goertzel.org> wrote:

> Well, HAL was far from hyperrational...

Heh, yes, but I thought the point was that the
double bind *made* HAL that way (or exacerbated
the situation).

> I think that a hyperrational being will avoid
> "deluding itself" to any
> significant extent. This will eliminate most
> inner conflicts.

Except for those created by genuine dilemmas
which, for whatever reason, defy even superhuman
analysis; I believe these are fairly common,
while you may think them rare or even
nonexistent. Simply trying to please conflicted
humans might generate inner conflict unless the
SAI refuses to get drawn into their false
dilemmas; I would expect the SAI to progress
rapidly to this refusal ('off the reservation,'
as I said).

> Also, it will be able to rewire itself to avoid
> experiencing negative
> emotions. But without the spice of negative
> emotions, positive emotions
> will also lose most of their appeal, IMO...

Able to, but not necessarily choosing to. A
certain contempt for stupidity, hypocrisy, etc.
strikes me as a very useful emotional stance to
retain, for instance.
 
> My view is that emotions are mostly caused by
> the opacity of the hindbrain,
> and secondarily by the opacity of parts of the
> forebrain to other parts of
> the forebrain. Since the parts of our brain
> are out of touch with each
> other, they get "big surprises" from each other
> all the time, which are
> coupled with physiological responses --
> "emotions".... There are particular
> patterns to these "big surprises" and
> physiological responses, which are
> familiar to us all. A hyperrational AI will
> "surprise itself" only in
> surprising ways, not in predictable ways (due
> to the lack of internal
> opacity). So if a hyperrational AI does have
> emotions, they won't be
> repetitive like ours -- it'll be a new emotion
> every time, algorithmically
> irreducible to the previous ones ;-)
>
> -- Ben G

Hmmm. One can imagine a SAI engaging in a kind
of 'tourism': assigning parts of itself to
emulate other entities. "What is it like to be a
bat?" ve wonders, and emulates a bat to find out,
using all available data about how echolocation
works, and how wings feel, and how bugs taste.
"What is it like to be a human?" ve wonders, and
the emulation includes emotion.
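
For fun, here's a toy caricature of Ben's
opacity story in Python. Every name and number
below is my own invention, purely illustrative;
a sketch, not a claim about real brains or AIs:

import random

random.seed(0)

# Invented stand-ins for hidden hindbrain states.
HIDDEN_STATES = ["hunger", "fatigue", "threat"]

def hindbrain():
    # Opaque module: emits a state the forebrain
    # cannot inspect directly.
    return random.choice(HIDDEN_STATES)

def forebrain_guess():
    # Best guess, made without access to the
    # hindbrain's internals.
    return random.choice(HIDDEN_STATES)

surprises = {}
for _ in range(1000):
    actual, guess = hindbrain(), forebrain_guess()
    if actual != guess:
        # Mismatches pile up in a small, familiar
        # set of patterns -- the stock "emotions".
        key = (guess, actual)
        surprises[key] = surprises.get(key, 0) + 1

print(sorted(surprises.items(), key=lambda kv: -kv[1]))

Run it and the same few (guess, actual) patterns
recur over and over; give the forebrain read
access (guess = actual) and the dict stays empty.
That, if I read Ben right, is his argument in
miniature.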

'Course, this is all IMHO. Wotthehell do I know?
 ;-)
Tom Buckner



