Re: [sl4] JOIN: Hello, etc.

From: Philip Hunt (cabalamat@googlemail.com)
Date: Sun Dec 07 2008 - 07:09:08 MST


2008/12/7 Stuart Armstrong <dragondreaming@googlemail.com>:
> Though I think that "with the added incentive of a small chance of
> saving the world" drives Eliezer to despair (I might be spectacularly
> wrong); I think he feels that saving the world is much more important
> than any considerations, and this is at risk if it's anything but a
> primary consideration in our thoughts

Indeed. Either AGI / the singularity happens, or it doesn't. If it
doesn't, then that's one existential risk humanity won't have to face
(though there are still others). If it does, it may well lead to one
entity controlling everything, and if that entity is hostile or
indifferent to humans, we're done for. If that entity is interested in
humans but has ideas for us we might not want, this is a "shriek" in
Nick Bostrom's terminology.

So getting AGI right, so that the future is something bearable, would
seem to be rather important. It also seems like AI research is a good
thing, as it increases the chance that the first AGI will be Friendly
(http://www.intelligence.org/upload/CFAI//policy.html#comparative_computing).

-- 
Philip Hunt, <cabalamat@googlemail.com>
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
