Date: Sat Feb 09 2008 - 12:29:44 MST
2008/2/9, Peter C. McCluskey <email@example.com>:
> firstname.lastname@example.org (Rolf Nelson) writes:
> >Peter, overconfidence is indeed an ongoing risk with this venture (as,
> >indeed, it is with any venture, especially one that is attempting to build
> >new technology). In general, all things equal, simple solutions should be
> >preferred to complex solutions.
> >However, the ratio between AGI existential risk and killer-asteroid risk in
> >this century has got to be on the order of one to a million!* Despite this,
> >I would estimate asteroid-impact overall commands more resources than FAI
> >does.** I don't know how much you propose Bayesian shifting for
> >overconfidence, but surely it's not a shift of that magnitude.
> After reflecting on this for a while, I'm a good deal more uncertain
> than I was in my last email, but I still think it's at least a reasonable
> guess that the probability of a moderately smart person identifying a way
> to advance FAI is more than a million times smaller than the probability
> of identifying a way to advance asteroid detection. Your use of the word
> "surely" suggests that
> rather than just adjusting for overconfidence, you should rethink your
> reasoning more thoroughly.
> I'd say the number of smart people who have mistakenly thought they
> could create an important AI breakthrough suggests we should assume
> any one AGI effort has a success probability somewhere around
> 0.01 to 0.0001. Constraining the goal to be friendly and to be complete
> before an unfriendly AGI could easily reduce the probability by an order
> of magnitude or more. If many of the people offering resources to the
> project don't understand the design, then there is an incentive for people
> without serious designs to imitate serious researchers. How much you should
> adjust your estimates for this risk seems fairly sensitive to how well you
> think you understand what the project is doing and why it ought to work.
> I'd guess the typical member of this list ought to use somewhere between
> a factor of 2 and 10. So the most optimistic estimate I'm willing to take
> seriously is that a moderately smart person would do several hundred times
> better giving to FAI research than to asteroid detection, and I think it's
> more likely that giving to FAI research is 2 or 3 orders of magnitude less
> effective.
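The Fermi arithmetic in the paragraph above can be sketched as follows. Every number is taken from the ranges Peter gives (at their optimistic ends) or from Rolf's claimed million-to-one risk ratio; none of it is measured data:

```python
# Sketch of the optimistic end of the estimate in the email above.
# All inputs are the email's own guesses, not data.

risk_ratio = 1e6            # Rolf's claimed AGI-risk : asteroid-risk ratio
p_agi_success = 1e-2        # optimistic end of the 0.01..0.0001 range for one AGI effort
friendliness_penalty = 10   # Friendly-AI constraint: at least one order of magnitude
overconfidence_factor = 2   # optimistic end of the suggested 2..10 adjustment

relative_value = risk_ratio * (p_agi_success / friendliness_penalty) / overconfidence_factor
print(relative_value)  # -> 500.0, i.e. "several hundred times better"
```

Plugging in the pessimistic ends of the same ranges instead (0.0001 success probability, a factor-of-10 adjustment) drives the ratio down toward parity or below, which is the direction of Peter's "2 or 3 orders of magnitude less effective" estimate once his further doubts are included.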
> I suspect it's a good idea to make some adjustment for overconfidence
> at this point, but I'm having trouble thinking quantitatively about that.
> I'm tempted to add in some uncertainty about whether the AI designer(s)
> will be friendly to humanity or whether they'll make the AI friendly to
> themselves only. But that probably doesn't qualify as an existential risk,
> so it mainly reflects my selfish interests.
> Note that none of this addresses the question of how much effort one
> should spend trying to convince existing AI researchers to avoid creating
> an AGI that might be unfriendly.
> As for which task currently gets more resources, I find the two hard to
> compare. It appears that more money is usefully spent on asteroid detection,
> and that money is the primary resource controlling asteroid detection
> results. It isn't clear whether money is being usefully spent on FAI or
> whether additional money would have any effect on it. I would not be
> surprised if something changes my opinion about that in the next few
> years.
> >Perhaps my own conclusions differ from yours as follows: first of all, I
> >have confidence in the abilities of the current FAI community; and second
> Can you describe reasons for that confidence?
> >of all, if I didn't have confidence, I would try to bring about the creation
> >of a new community, or bring about improvements to the existing community,
> Does that follow from a belief about how your skills differ from those
> of a more typical person, or are you advocating that people accept this
> as a default approach?
> There are a number of tasks for which the average member of this list is
> likely to be aware that he would have negligible influence, such as unifying
> relativity with quantum mechanics or inventing time travel. I suggest that
> FAI presents similar difficulties.
> I apologize for my delay in responding.
> Peter McCluskey | The road to hell is paved with overconfidence
> www.bayesianinvestor.com | in your good intentions. - Stuart Armstrong
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT