Re: OpenCog Concerns

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Feb 29 2008 - 15:02:05 MST


The risks and benefits of open-source AGI development are subtle, and
I won't try to do justice to the issue in a brief email.

I'll just mention some obvious factors:

1) Proactionary Principle: the need to balance the risks of action
with the risks of inaction

2) The added oversight provided by having a large group of smart
people studying an AGI system as it develops

3) On the other hand, the obvious danger that some bad guys will take
the open-source code and do something naughty with it

4) The fact that AGI ethics is incredibly badly understood right now,
and the only clear route to understanding it better is to make more
empirical progress toward AGI. I find it unlikely that dramatic
advances in AGI ethical theory will be made in a vacuum, separate from
coupled advances in AGI practice. I know some others disagree on
this.

I don't think it's obvious how the risks and benefits of open-source
AGI development come out, all things considered... And I admit that
when the proper course of action is unclear, I have a general bias
toward learning more rather than remaining ignorant... and developing
OpenCog will allow us to do that.

-- Ben G

On Thu, Feb 28, 2008 at 10:02 PM, Gordon Worley <redbird@mac.com> wrote:
> I just learned about OpenCog, and I'm concerned about the safety of it.
>
> http://www.opencog.org/
>
> Safety is addressed only insofar as saying that risks will be minimal
> in the near future, because they are only attempting weak AI for now
> and will worry more about risks and ethics when they need to. In and
> of itself this attitude worries me, since it seems to assume that
> there's essentially no possibility of even accidentally creating
> something slightly smarter than EURISKO that goes FOOM! Further, even
> if OpenCog doesn't pose a threat in the near term, the product of its
> research may, in 10 to 20 years' time, serve as a jumping-off point
> for someone who wants to throw a lot of computing power at the problem
> and create strong AI by brute-forcing it. However, since this is
> coming from SIAI, and given Eliezer's conservative stance toward AI
> development, I can't help but wonder whether the risks are really as
> large as I suspect they are.
>
> If this has been discussed publicly elsewhere, I'd appreciate a link
> to those discussions; if not, I think we need to have that discussion
> here.
>
> How risky is OpenCog? Is this risk a good tradeoff in that it will
> lead to the safe development of Friendly AI sooner?
>
> -- -- -- -- -- -- -- -- -- -- -- -- -- --
> Gordon Worley
> e-mail: redbird@mac.com PGP: 0xBBD3B003
> Web: http://homepage.mac.com/redbird/

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
ben@goertzel.org
"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

