Re: Singularity Institute: Likely to win the race to build GAI?

From: Joshua Fox (joshua@joshuafox.com)
Date: Tue Feb 14 2006 - 06:23:14 MST


Yes, I know that they are working on _Friendly_ GAI. But my question is:
What reason is there to think that the Institute has any real chance of
winning the race to General Artificial Intelligence of any sort, beating
out those thousands of very smart GAI researchers?

Though it might be a very bad thing for non-Friendly GAI to emerge first,
it seems to me far more likely that someone else -- there are a lot of
smart people out there -- will beat the Institute to the goal of GAI. If
so, perhaps the Institute should put all its resources into researching
and evangelizing Friendliness, and then team up with the world's leading
GAI researchers -- whether at MIT, Stanford, or wherever they are -- to
add Friendliness to their development programs.

Joshua

Thomas Buckner wrote:
> --- Joshua Fox <joshua@joshuafox.com> wrote:
>
>
>> The writings at intelligence.org have made quite an
>> impression on me.
>>
>> Though I am no expert, it appears to me that
>> the Institute is a thought
>> leader in the definition and the creation of
>> FAI.
>>
>> But let me ask: Why does the Institute believe
>> that it has a reasonable
>> chance of leading the world in the construction
>> of a GAI?
>>
>
> What SIAI in general and Eliezer in particular
> are focused on is not merely making a GAI but
> making a Friendly one -- one that won't drive us
> extinct, enslave, stultify, or otherwise ruin us. It's that
> simple. By analogy, there are fifty groups trying
> to build a car, but who else is trying to develop
> brakes, belts, and airbags?
>
> Tom Buckner
>


