From: Matt Mahoney (email@example.com)
Date: Mon Feb 23 2009 - 07:37:15 MST
--- On Sun, 2/22/09, Roko Mijic <firstname.lastname@example.org> wrote:
> One way to hasten the development of FAI is for me to seek to do
> research within academia. A disadvantage of this strategy is that
> academia is an open community, and anyone can potentially look at the
> results that the field is producing and use them to create uFAI.
Unlikely. Nobody can build AI, much less FAI or uFAI, working alone. The top people in the field, like Yudkowsky, Minsky, and Kurzweil, have realized the problem is too hard to solve by themselves, so they are not actually writing any software. It has to be a global effort. I did a cost estimate in section 2 of http://www.mattmahoney.net/agi2.html
I don't have a solution to the friendliness problem. I believe AI will become uncontrollable once the majority of intelligence is in silicon. I think it will take about 30 years to extract that much knowledge from human brains, even under the optimistic assumptions of cheap, powerful computers and pervasive public surveillance. The world will be a very different place when you never know whether you are talking to a human or a computer, and you can't trust your own computer.
-- Matt Mahoney, email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT