Re: OpenCog Concerns

From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Wed Mar 05 2008 - 17:45:57 MST


The peer-AGI idea is interesting and not impossible. In light of your view, do you believe that we should move forward with OpenCog, or not? (I presume that you believe we should move forward.)

As Eliezer has said before, if our AGI turns out to be immoral, it will be because *we* failed it.

I have a fondness for CEV in principle, but I'm concerned about its practicality and outcome. As an alternative, I personally would be inclined to assign the AGI a super-goal representing something such as: "Be compassionate toward other sentient beings." [I intentionally leave this open to debate.] The developing AGI could be enmeshed within our social and cultural environment (e.g., virtual-world embodiment), and it would naturally seek to clarify (i.e., learn a great deal about) its particular super-goal. In principle, by the time it grows up, it could easily have a broader and more complete internal definition of "compassion" than any human moral philosopher who has ever lived, and it could use that superior "definition" to make sequential decisions on our behalf.
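
To make that architecture concrete, here is a minimal sketch of what I have in mind: the super-goal statement itself stays fixed, while its operational definition is refined by accumulated experience. (This is purely my own illustration; every name in it is hypothetical, and a real system would generalize rather than match examples literally.)

    # Purely illustrative: the super-goal statement is fixed, but its
    # operational definition grows as the AGI accumulates experience.
    class SuperGoal:
        def __init__(self, statement):
            self.statement = statement   # the immutable top-level goal
            self.examples = []           # observed cases that clarify it

        def learn(self, situation, judgment):
            """Record an observed moral judgment about a situation."""
            self.examples.append((situation, judgment))

        def evaluate(self, action):
            """Score an action against the learned definition. This stub
            just counts matching positive examples; a real system would
            generalize far beyond literal matching."""
            return sum(1 for s, j in self.examples
                       if j == "good" and s in action)

    goal = SuperGoal("Be compassionate toward other sentient beings.")
    goal.learn("comforting someone in distress", "good")
    goal.learn("inflicting needless suffering", "bad")
    print(goal.evaluate("comforting someone in distress after an accident"))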

Jeffrey Herrlich

Matt Mahoney <matmahoney@yahoo.com> wrote:
--- Jeff Herrlich wrote:

> "I don't buy it.
>
> Friendliness has nothing to do with keeping AI out of the hands of "bad
> guys".
> Nobody, good or bad, has yet solved the friendliness problem."
>
> Right, I meant "good guys" as a very general term that refers to programmers
> who both understand the safety issues and are committed to building a safe,
> universally beneficial AGI.
>
> There is a danger from programmers/teams who aren't even aware of the safety
> issues, and another (possibly smaller) danger from programmers/teams who
> understand the safety issues but might seek to use the AGI for selfish
> benefit instead of universal benefit (e.g., rogue governments). It's easy to
> label this as science fiction, but it's also not an impossibility.
>
> I think that as proto-AGIs develop we will gain a better practical
> understanding of AGI safety.

My question is about the safety of distributed AI that emerges from a network
of narrowly specialized experts talking to each other. You can consider my
proposal at http://www.mattmahoney.net/agi.html, or, more generally, any
environment where peers compete for resources and reputation with no
centralized control.
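
As a toy illustration of the kind of network I mean (not the actual protocol described at the link above; all names here are made up), each peer knows one narrow topic and only its own neighbors, so a question propagates with no central directory:

    import random

    class Peer:
        def __init__(self, name, specialty, answers):
            self.name = name
            self.specialty = specialty   # the one topic this peer handles
            self.answers = answers       # its narrow knowledge base
            self.neighbors = []          # peers it can forward to

        def ask(self, topic, question, hops=3):
            # Answer if the question falls within this peer's specialty...
            if topic == self.specialty and question in self.answers:
                return self.name, self.answers[question]
            # ...otherwise forward to a neighbor until hops run out.
            if hops == 0 or not self.neighbors:
                return None
            # A real system would route by learned reputation, not randomly.
            return random.choice(self.neighbors).ask(topic, question, hops - 1)

    weather = Peer("weather-bot", "weather", {"rain tomorrow?": "60% chance"})
    chess = Peer("chess-bot", "chess", {"best first move?": "e4 or d4"})
    weather.neighbors, chess.neighbors = [chess], [weather]
    print(weather.ask("chess", "best first move?"))  # routed to chess-bot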

I think I am at least aware of some of the risks of runaway recursive
self-improvement. I believe that a distributed AI controlled by billions of
humans, and whose primary source of knowledge is also human, will at least
reflect a consensus view of ethics and friendliness. Peers will compete for
reputation and audience in a hostile environment, so we should expect them to
respond to questions with useful and correct answers, including questions
about human goals and about the right thing to do in a wide variety of
circumstances.
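
To illustrate how reputation competition could produce such a consensus (again a made-up toy, not part of my proposal): answers are weighted by each peer's reputation, and feedback moves reputations up or down, so unhelpful peers lose influence over time.

    from collections import defaultdict

    # Made-up starting reputations and answers to one yes/no question.
    reputation = {"peer-a": 5.0, "peer-b": 1.0, "peer-c": 3.0}
    answers = {"peer-a": "yes", "peer-b": "no", "peer-c": "yes"}

    def consensus(answers, reputation):
        """Return the reputation-weighted majority answer."""
        weight = defaultdict(float)
        for peer, answer in answers.items():
            weight[answer] += reputation[peer]
        return max(weight, key=weight.get)

    def give_feedback(peer, useful, rate=0.1):
        """Useful answers raise a peer's reputation; useless ones erode it."""
        reputation[peer] *= (1 + rate) if useful else (1 - rate)

    print(consensus(answers, reputation))  # "yes" (weight 8.0 vs. 1.0)
    give_feedback("peer-b", useful=False)  # peer-b's influence shrinks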

Distributed AI has special risks. As computing power gets cheaper and peers
become more intelligent, humans will no longer be the primary source of
knowledge and will become less relevant. The language between peers could
evolve from natural language into something too complex for humans to
understand. Shortly afterwards there would be a singularity.

Another risk for distributed AI is that when intelligence develops to the
point where the system can rewrite its own software, it will also become
possible to develop intelligent worms that discover and exploit new security
holes faster than humans can patch them. Conventional security measures such
as virus scanners, firewalls, and intrusion detection systems would offer no
protection, because the attacks would be unknown to them. It is quite possible
that peers will expend the majority of their CPU cycles fending off attacks
and filtering spam, while at the same time trying to defeat the defenses of
other peers.

Of course there are risks of AI in general that depend on philosophical
questions that can't be answered. Is the AI friendly if we ask to be put in
an eternal state of bliss and it obeys? Is it a good outcome if humans are
extinct but our memories are preserved by a superhuman AI? Such questions are
fun to discuss, but they seem only to waste our time without leading to any
progress.

-- Matt Mahoney, matmahoney@yahoo.com

 
       