Date: Thu Feb 28 2008 - 20:30:23 MST
Not to be too flippant, but isn't ignorance of risk or a careless nature responsible for some of our greatest moments as a species? At least in some part?
Sent via BlackBerry from T-Mobile
From: Gordon Worley <firstname.lastname@example.org>
Date: Thu, 28 Feb 2008 22:02:39
Subject: OpenCog Concerns
I just learned about OpenCog, and I'm concerned about the safety of it.
Safety is addressed only insofar as to say that risks will be minimal
in the near future because they are only attempting weak AI for now
and will worry more about risks and ethics when they need to. In and
of itself this attitude worries me, since it seems to assume that
there's essentially no possibility of even accidentally creating
something slightly smarter than EURISKO that goes FOOM! Further, even
if in the near term OpenCog doesn't pose a threat, the product of its
research may create something that, in 10 to 20 years' time, could
serve as a jumping-off point for someone who wants to throw a lot of
computing power at the problem and create strong AI by brute-forcing
it. However, since this is coming from SIAI, and given Eliezer's
conservative stance toward AI development, I can't help but wonder if
the risks aren't as large as I suspect they are.
If this has been discussed publicly elsewhere, I'd appreciate a link to
those discussions, but if not, I think we need to have one here.
How risky is OpenCog? Is the risk a worthwhile tradeoff, in that it
will lead to the safe development of Friendly AI sooner?
-- -- -- -- -- -- -- -- -- -- -- -- -- --
e-mail: email@example.com PGP: 0xBBD3B003
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT