From: Ben Goertzel (firstname.lastname@example.org)
Date: Thu Jun 27 2002 - 12:27:38 MDT
> Despite an immense amount of science fiction dealing with this topic, I
> honestly don't think that an *infrahuman* AI erroneously deciding to solve
> problems by killing people is all that much of a risk, both in terms of the
> stakes being relatively low, and in terms of it really not being all that
> likely to happen as a cognitive error.
In my book, if the infrahuman AI that thinks in this way has decent odds of
evolving into a superhuman AI ... then this killer infrahuman AI is a really
serious risk.
> A disagreement with a transhuman AI is pretty much equally serious whether
> the AI is in direct command of a tank unit or sealed in a lab on the Moon;
> intelligence is what counts.
No, the comfort level of the AI with killing people also counts, it seems to
me.
> Ben, what makes you think that you and I, as we stand, right now, do not
> have equally awful moral errors embedded in our psyche?
Well, based on your various disturbing comments in these recent threads, I'm
a lot more sure about me than about you ;-)
According to your recent posts,
a) an AGI project forming an advisory board of Singularity wizards is a bad
idea
b) training infrahuman AI's to kill is morally unproblematic
c) whoever creates an AGI intrinsically has enough wisdom that they should be
trusted to personally decide the future of the human race
Well, the point of view that led to these statements seems to *me* to embody
some moral errors...
Regarding your comments on the subjectivity of morality: Yes, I understand
that my own morality, which has a tendency (though not an absolute one)
toward pacifism, is not shared by all. This is part of my motivation for
thinking that, when a near-human-level-AI comes about, an advisory board of
Singularity wizards would be a good thing. Of course, this group will
*still* not have an objective morality -- there is no true objectivity in
the universe -- but it would have a broader and less biased view than me or
any other individual.
-- Ben G
p.s. regarding "rationality" and my use of the term versus yours, that would
be a long and detailed discussion, which would distract from the current
thread, so I'll defer it.