From: Ben Goertzel (firstname.lastname@example.org)
Date: Mon May 09 2005 - 07:52:00 MDT
> So you're saying we really do need to know the
> human brain much better, *even though* FAI design
> will not mimic it, simply that the FAI will
> understand what you're trying to protect. Isn't
> it enough to give it a supergoal of "Keep the
> humans alive and comfortable and don't mess with
> the functioning of human brains until we know
> what makes them tick"?
> Tom Buckner
Sure, this makes some sense.
But there is certainly an issue here: how do you define "the humans"?
--Currently existing humans only?
--Currently existing humans and their offspring?
--Currently existing humans and their genetically modified offspring? How
much genemod before it's not considered human anymore?
--Uploaded, modified/enhanced humans?
--Uploaded humans artificially modified to be willing slaves?
Do you want the sysop to forcibly prevent us from creating new forms of
humans and near-humans until it has solved the puzzle of human sentience?
Or (my guess) do you want it merely to protect currently existing humans and
their natural-born offspring, until it figures things out? Clearly this
leaves the door open to massive ethical atrocities. But we may say that
it's OK to accept these atrocities for a while, because the alternatives
might be even more atrocious.
This archive was generated by hypermail 2.1.5 : Wed Jun 19 2013 - 04:01:09 MDT