From: Ben Goertzel (firstname.lastname@example.org)
Date: Fri Feb 29 2008 - 15:02:05 MST
The risks and benefits of open-source AGI development are subtle, and I won't try to do justice to the issue in a brief email. I'll just mention some obvious factors:
* The Proactionary Principle: the need to balance the risks of action against the risks of inaction.

* The added oversight provided by having a large group of smart people studying an AGI system as it develops.

* On the other hand, the obvious danger that some bad guys will take the open-source code and do something naughty with it.

* The fact that AGI ethics is incredibly badly understood right now, and the only clear route to understanding it better is to make more empirical progress toward AGI. I find it unlikely that dramatic advances in AGI ethical theory will be made in a vacuum, separate from coupled advances in AGI practice. I know some others disagree on this point.
I don't think it's obvious how the risks and benefits of doing open-source AGI come out, all things considered... And I admit that when the proper course of action is unclear, I have a general bias toward learning more rather than remaining ignorant... and developing OpenCog will allow us to do that...
-- Ben G
On Thu, Feb 28, 2008 at 10:02 PM, Gordon Worley <email@example.com> wrote:
> I just learned about OpenCog, and I'm concerned about the safety of it.
> Safety is addressed only insofar as to say that risks will be minimal
> in the near future because they are only attempting weak AI for now
> and will worry more about risks and ethics when they need to. In and
> of itself this attitude worries me, since it seems to assume that
> there's essentially no possibility of even accidentally creating
> something slightly smarter than EURISKO that goes FOOM! Further, even
> if in the near term OpenCog doesn't pose a threat, the product of its
> research may create something that, in 10 to 20 years time, could
> serve as a jumping off point for someone who wants to throw a lot of
> computing power at the problem and create strong AI by brute forcing
> it. However, since this is coming from SIAI and given Eliezer's
> conservative stance toward AI development, I can't help but wonder if
> the risks aren't as large as I suspect they are.
> If this has been discussed publicly elsewhere I'd appreciate a link to
> those discussions, but if not I think we need to have one here.
> How risky is OpenCog? Is this risk a good tradeoff in that it will
> lead to the safe development of Friendly AI sooner?
> -- -- -- -- -- -- -- -- -- -- -- -- -- --
> Gordon Worley
> e-mail: firstname.lastname@example.org PGP: 0xBBD3B003
> Web: http://homepage.mac.com/redbird/
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
email@example.com

"If men cease to believe that they will one day become gods then they will surely become worms." -- Henry Miller
This archive was generated by hypermail 2.1.5 : Wed Jun 19 2013 - 04:01:26 MDT