Re: Floppy take-off

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Aug 01 2001 - 19:08:09 MDT


James Higgins wrote:
>
> I don't think this should worry you. Think of it as the emergency backup
> plan. If all else fails, run the AI researcher software on a billion
> computers and see what it comes up with. Given enough time and resources I
> believe even a million or fewer human-equivalent scientists could solve
> virtually any scientific problem. If this were our worst-case scenario
> (and I don't think it is) I'd be having a celebration right now!

Intelligence can almost certainly be brute-forced. Friendliness rather
less so. Although a grand challenge when considered in isolation,
Friendliness should involve less effort than general intelligence *if* you
understand the principles involved in both.

Brute-force AI is a way of solving the problem without understanding it,
and without much programmer effort. Friendliness, by contrast, requires a
certain amount of human input to provide the raw material, and that input
cannot be easily compressed. On a nanocomputer, a non-Friendliness-aware
AI project run by
uncomprehending researchers can beat a Friendly AI project even if the
latter has a vastly better understanding of the nature of intelligence.

Any method that allows for a successfully self-improving AI where the
researchers don't really understand what's going on is a very serious
danger. In fact, given CFAI theory, I've made a total turnaround from my
position of a few years back and now regard non-Friendliness-aware AI as a
disaster scenario. Not just an uncertain win, but a disaster scenario.

A basically Friendly seed - if previously developed over the course of
years - may be something you can drop into an ultra-nanocomputer and watch
a Friendly SI explode outward a few minutes later, although the lack of
continuing human input through the initial takeoff is a significant added
risk. But without that basically Friendly seed, brute-force AI on a
nanocomputer is a total disaster scenario. And probably a relatively easy
one to implement (relative to grey goo, military goo, or even biotech
plagues).

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


