From: Ben Goertzel (email@example.com)
Date: Fri Feb 29 2008 - 14:56:18 MST
On Fri, Feb 29, 2008 at 11:24 AM, Matt Mahoney <firstname.lastname@example.org> wrote:
> --- Gordon Worley <email@example.com> wrote:
> > I just learned about OpenCog, and I'm concerned about the safety of it.
> > http://www.opencog.org/
> Ben Goertzel is at least aware of the problem (unlike most AI researchers).
> However, OpenCog lacks a plan to acquire the resources (both computer and
> human) to grow very big.
The plan to acquire human resources for OpenCog is simple: think Linux...
1) An army of volunteers with various sorts of expertise
2) Once the utility of the OpenCog system for various practical purposes is
demonstrated, large companies may devote resources to it, just as IBM and
many others have done for Linux.
Regarding compute resources, one approach with some potential is
OpenCog@Home ... massive P2P distribution can take care of some, but not
all, essential cognitive operations. Beyond that, I do have faith
that once sufficiently impressive AI capability is demonstrated, resources can
be found to rent computer time from available compute clouds.
Also, subject to licensing terms,
OpenCog could be used directly by commercial or government entities,
which may have their own funding for hardware.
I really think hardware is not the problem.
> Even if it is successful, it requires centralized
> control over resources to ensure that agents cooperate. The human owner is
> responsible for acquiring these resources, which makes it expensive.
I don't understand what you mean by the above. If you mean that there is some
centralized control of cognition (as in the human brain), that is certainly true.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT