From: Jeff Herrlich (firstname.lastname@example.org)
Date: Tue Apr 15 2008 - 21:41:45 MDT
"Exactly which animals and machines are sentient/conscious?"
Well, no human knows for certain, of course. But we don't need to know, only the AGI needs to be able to make that determination - which at least doesn't seem prohibitively complex.
"Are we asking "what should we do?" or "what will we do?"?"
Often, what we do is what we firmly conclude we should do. Even if our decisions are deterministic, they are still decisions that follow from our deliberations; a causal process is still a process. The future still depends on our actions - e.g., if we all decided not to bring about the Singularity, then it wouldn't happen.
Matt Mahoney <email@example.com> wrote:
--- Jeff Herrlich wrote:
> Why not make the beneficiaries all sentient/conscious beings?
Exactly which animals and machines are sentient/conscious?
Are we asking "what should we do?" or "what will we do?"? In the former case,
we are just making statements about our evolved ethical models. In the
latter, the answer is to let the various AI, human, and hybrid groups fight it
out and adopt the rules of the surviving groups.
-- Matt Mahoney, firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT