Re: FAI prioritization

From: William Pearson (wil.pearson@gmail.com)
Date: Thu Apr 03 2008 - 02:12:04 MDT


On 03/04/2008, Rolf Nelson <rolf.h.d.nelson@gmail.com> wrote:
> For the duration of this thread, assume that FAI is the best use of
> time and resources for a rational altruist. How should resources be
> prioritized, in terms of marginal utility? Here are my current
> thoughts.
>
> My prioritization:
>
> 1. Outreach to mathematically-talented students and recent graduates.
> We know that some tiny minority of people exist who will self-motivate
> to help with FAI after a quick exposure to these talking points:
>
> * AGI may be possible within the next few decades
>
> * AGI can be dangerous. Suppose you initially give an AI the goal of
> making paperclips, with the plan that you will shut the AI down or
> modify its goals once you decide you have "enough paperclips."
> However, the AI would rather replace the Earth with paperclips
> according to its current goal, so it will spontaneously form a subgoal
> of preventing you from being willing and able to shut it off or
> rewrite its goals.

It is this argument that mathematically talented students are likely to
find silly, in the sense described here:

http://www.overcomingbias.com/2008/04/arbitrary-silli.html

Unless they have read lots of sci-fi, that is. Note that it is not the
claim that AI may go wrong that they will find silly, just the claim
that it will be effective enough to paperclip the world.

Will



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT