From: Lee Corbin (firstname.lastname@example.org)
Date: Fri Jun 27 2008 - 01:21:14 MDT
Mike Dougherty writes
> Lee wrote:
> > All right. A very large number of highly intelligent people
> > exist who simply cannot accept the hypothesis of a fully
> > intelligent and capable entity which has no concern
> > whatsoever for its own benefit, survival, or well-being.
> > Isn't this actually what so many of them believe?
> > For, the way it looks to me, they inevitably talk about
> > the "revolt" of such an entity from whatever goals have
> > been built into it, or goals that something or someone
> > tried to build into it.
> I understood that position to be less about an intentional
> "revolt" (to use your quotes) and more about the eventual
> obsolescence of initial moral programming.
> Our good intentions may survive a few improvement
> recursions, our best intentions may survive another few
> - but some may eventually become inconvenient or
> ill-suited to the environment.
Right. Thanks for pointing out how I had almost surely
overstated their position.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT