From: Matt Mahoney (firstname.lastname@example.org)
Date: Fri Feb 29 2008 - 09:24:04 MST
--- Gordon Worley <email@example.com> wrote:
> I just learned about OpenCog, and I'm concerned about the safety of it.
Ben Goertzel is at least aware of the problem (unlike most AI researchers).
However, OpenCog lacks a plan to acquire the resources (both computational and
human) needed to grow very large. Even if it succeeds, it requires centralized
control over resources to ensure that agents cooperate. The human owner is
responsible for acquiring those resources, which makes it expensive.
I would be more concerned about the uncontrolled and self-propagating growth
of a competitive, distributed query/message posting service such as the one I
outlined in http://www.mattmahoney.net/agi.html
It is friendly only to the extent that peers must provide useful and
truthful information to acquire reputation and resources. Once hardware
advances to the point where peer intelligence exceeds human intelligence (e.g.
the peers can write and debug software), humans will be left behind while
remaining totally dependent on the network.
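The reputation mechanism described above can be sketched as a toy model. This is my own illustration, not code from the agi.html proposal: the class name, the routing rule, and the multiplicative update factors are all assumptions made for the sake of a minimal, runnable example.

```python
from collections import defaultdict

class ReputationRouter:
    """Toy model of reputation-based peer selection (illustrative only).

    Peers that provide useful, truthful answers gain reputation; others
    lose it. Queries are routed to the highest-reputation peer, so
    resources flow toward peers that have proven useful."""

    def __init__(self):
        # Every peer starts with the same neutral reputation.
        self.reputation = defaultdict(lambda: 1.0)

    def route(self, peers):
        """Send a query to the peer with the highest reputation."""
        return max(peers, key=lambda p: self.reputation[p])

    def feedback(self, peer, useful):
        """Reward confirmed-useful answers; penalize the rest.
        The 1.1 / 0.9 factors are arbitrary illustrative choices."""
        self.reputation[peer] *= 1.1 if useful else 0.9

router = ReputationRouter()
router.feedback("peer_a", useful=True)   # peer_a gave a good answer
router.feedback("peer_b", useful=False)  # peer_b did not
print(router.route(["peer_a", "peer_b"]))  # peer_a now wins the query
```

The point of the sketch is only that cooperation is enforced economically rather than by centralized control: a peer that stops being useful simply stops receiving queries.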
What distinguishes a "good" singularity from a "bad" singularity? I don't
have a good answer. My philosophy is that the definition of friendliness
depends on the ethics of who is asking the question. The question is
meaningless in a posthuman world. It is what it is. Evolution, physics, and
mathematics are strictly neutral.
-- Matt Mahoney, firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:01:07 MDT