Re: OpenCog Concerns

From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Sun Mar 02 2008 - 21:51:50 MST


I also agree that there are significant risks with the open-source approach.

I think that some of those risks can be partially reduced by having a well-resourced Safe-AI team building a closed-source AGI alongside improvements to the OpenCog model (e.g., Novamente). IOW, keep the good guys "on top of the code".

Here are some other positive/neutral considerations:

- If done well, the OpenCog project can also help to spread the concept of Safe-AI among AGI researchers.

- The most gifted and prolific contributors could later be incorporated into the parallel, closed-source Safe-AGI team (e.g., Novamente).

- Even if we didn't launch OpenCog, some other team could launch their own open-source project in the future. Perhaps it's better to capture interest early with a project that's specifically focused on safety (i.e., corner the "market").

- Potential Safe-AI donors/investors can see some tangible evidence (actual code) that AGI may be coming soon.

- Going open-source would somewhat reduce the probability of enforced relinquishment. The cat would be out of the bag, and we would have to actually deal with the issues effectively instead of just (dangerously) sweeping them under the rug.

- It might somewhat raise public awareness of the issues; hopefully that can be molded into a good thing.

Overall, I think it's a close call. The potential Risk:Benefit ratio leans ever so slightly in favor of Benefit, and careful execution can improve the odds. One thing that might help would be to include with OpenCog a folder that lays out the basic science and philosophy of Safe/Friendly AI: why the AGI must be designed a certain way, and why a generic AGI will not automatically be safe at all.

Jeff Herrlich

Gordon Worley <redbird@mac.com> wrote:

I just learned about OpenCog, and I'm concerned about the safety of it.

http://www.opencog.org/

Safety is addressed only insofar as to say that risks will be minimal
in the near future because they are only attempting weak AI for now
and will worry more about risks and ethics when they need to. In and
of itself this attitude worries me, since it seems to assume that
there's essentially no possibility of even accidentally creating
something slightly smarter than EURISKO that goes FOOM! Further, even
if OpenCog doesn't pose a threat in the near term, the product of its
research may create something that, in 10 to 20 years' time, could
serve as a jumping-off point for someone who wants to throw a lot of
computing power at the problem and create strong AI by brute-forcing
it. However, since this is coming from SIAI and given Eliezer's
conservative stance toward AI development, I can't help but wonder if
the risks aren't as large as I suspect they are.

If this has been discussed publicly elsewhere I'd appreciate a link to
those discussions, but if not I think we need to have one here.

How risky is OpenCog? Is this risk a good tradeoff in that it will
lead to the safe development of Friendly AI sooner?

-- -- -- -- -- -- -- -- -- -- -- -- -- --
                Gordon Worley
e-mail: redbird@mac.com PGP: 0xBBD3B003
   Web: http://homepage.mac.com/redbird/

       


