Re: Spontaneously emerging intelligence

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Tue Nov 01 2005 - 09:07:09 MST


Greg Yardley wrote:
> I'm much more concerned with Google's ability and capacity to work
> on AI in secret, while failing to sufficiently value the idea of
> friendly AI and rushing things in order to gain a crushing first-mover
> advantage in the marketplace. They have been on a hiring binge of
> late, have an extensive R&D budget, are notoriously private, and
> already use special-purpose machine learning for some of their
> projects.

Yes, Google could conceivably have some secret project that will
produce a UFAI. So could tens of government organisations and hundreds
(if not thousands) of companies. Unless you're prepared to spend your
time lobbying Google researchers (or any of these other organisations)
to try to make them aware of the dangers, something that strikes me
as having a very low chance of success, there's nothing to be gained
by worrying about it in public.

Generally, unless you're actively involved in an AGI project or have
some other means of directly influencing one, debating FAI morality,
seed AI technology, and Singularity strategy in general is a strictly
recreational activity. Speculation about secret projects that may or
may not be in progress isn't normally very satisfying in that regard.

As for 'emergence' from the Internet, speculation at the level of
fluffy generalities is absolutely useless. If you think there's a
specific mechanism that could conceivably produce an AGI distributed
over the Internet, describe it, and we might have something to
discuss.

 * Michael Wilson

                


