From: Peter C. McCluskey (firstname.lastname@example.org)
Date: Fri Jan 25 2008 - 19:59:38 MST
email@example.com (Rolf Nelson) writes:
>* If the resource is donated time and insights, then Peter expressed concern
>that you need to consciously pace yourself to avoid "burning yourself out".
>I find such burnout a dubious and vague concept, but maybe someone can point
>me to literature that shows this is a real concern.
I don't know much about literature on burnout, but there's a good deal
of literature on "learned helplessness" which partly describes what I
have in mind. "Consciously pace yourself" isn't quite the prescription
I had in mind. Most efforts at doing something constructive about AGI have
so far been a waste of time. Most people should realize that they don't
have the skills to improve on past AGI efforts, and should devote their
resources to existential risks which are easier to understand, such as
killer asteroids. My estimate of my own ability suggests that if I spend a
few years obsessed with trying to design an FAI, and/or reading all the AI-
related literature that has even a tiny chance of providing FAI-related
insights, I will almost certainly conclude that the effort was wasted, and
become too frustrated to put much further effort into watching for people
who have the ability to build an FAI.
>* If the FAI community can use the investment to raise its profile and
>attract more resources, this could produce a ROI which would have to be
>weighed against the ROI of financial investments.
Is there an FAI community in the sense that would be relevant to producing
a useful ROI? There's certainly a group of people who want to create an FAI.
But I don't see a community that has the skills to make progress toward FAI.
>* Crowding: A $1 investment, in 2008 dollars, made in 2018 will likely have
>a lower marginal utility than a $1 investment made in 2008, because in 2018
This depends on assumptions about the marginal utility of investing $1
in 2008 which I suspect reflect overconfidence. Most money invested in AGI
in 2008 will be wasted due to lack of knowledge about how to usefully
spend it.
>to, might "pop up" while you're waiting. For example, someone else (call him
>Bob) might publish an important and unexpected insight next year, which will
>enable Peter to figure out how to build an FAI, but Peter can't help out
>much this year since Bob hasn't published yet. I agree that this is a valid
>scenario, but it also has to be weighed against the possibility that Peter
>will gain some important insight this year that Bob can use to build an FAI,
I think it's much more likely that I will someday recognize and provide
resources to a person who will do something constructive about FAI with
those resources, than that I will provide anyone with important insights
about how to build an FAI or that I will figure out how to build an FAI.
I haven't yet recognized such a person.
You shouldn't assume that the same analysis applies to you. Most people
ought to stick to easier things like helping to detect killer asteroids.
But if you're on this mailing list, you're probably either too overconfident
in your abilities to heed that advice, or you have some skill which is
sufficiently unusual that I can't easily give you advice on how to use it.
--
------------------------------------------------------------------------------
Peter McCluskey         | The road to hell is paved with overconfidence
www.bayesianinvestor.com| in your good intentions. - Stuart Armstrong
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:01:07 MDT