From: Rolf Nelson (firstname.lastname@example.org)
Date: Thu Jan 24 2008 - 21:06:58 MST
Peter McCluskey brought up the idea of
saving resources to invest later in Friendly AI, rather than investing
resources now. Here are my thoughts on some factors that should go into any
such decision.
* If the resource is money (rather than time), you can invest it in
financial instruments (tax-free, if you use a donor-directed fund) and, even
after adjusting for inflation and risk, can end up substantially increasing
the amount you have to spend.
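As a rough illustration of the compounding point, with purely assumed figures (a 5% nominal annual return and 3% inflation, both illustrative rather than claims about real markets):

```python
# Sketch with assumed figures: 5% nominal annual return, 3% inflation,
# held for 10 years in a tax-free vehicle such as a donor-directed fund.

def real_growth(principal, nominal_rate, inflation_rate, years):
    """Inflation-adjusted value of an investment after `years` years."""
    real_rate = (1 + nominal_rate) / (1 + inflation_rate) - 1
    return principal * (1 + real_rate) ** years

# $1000 invested now, in constant 2008 dollars:
value = real_growth(1000.0, 0.05, 0.03, 10)
print(round(value, 2))  # roughly $1212 of 2008 purchasing power
```

Under these assumptions, waiting ten years turns $1000 into roughly $1212 of real purchasing power, before any adjustment for risk.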
* If the resource is donated time and insights, then Peter expressed concern
that you need to consciously pace yourself to avoid "burning yourself out".
I find such burnout a dubious and vague concept, but maybe someone can point
me to literature that shows this is a real concern.
* If the FAI community can use the investment to raise its profile and
attract more resources, this could produce an ROI that would have to be
weighed against the ROI of financial investments.
* Crowding: A $1 investment, in 2008 dollars, made in 2018 will likely have
a lower marginal utility than a $1 investment made in 2008, because by 2018
more money will likely be spent both on FAI and on irresponsible UFAI
projects. This follows from economic growth, and also because I believe FAI
and UFAI are likely to command an increasing share of GDP in investment
from year to year. Also, if you condition your probabilities on the
assumption that investing in FAI is the rational thing for an altruist to do
in the first place, then you should expect that more people may realize this
in the future, and that FAI investment may therefore be more crowded in the
future (and thus have lower marginal utility).
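One way to make the crowding point concrete, under the purely illustrative assumption that the benefit of total FAI funding F is logarithmic (so each additional dollar matters less as the field grows), with made-up funding levels:

```python
# Illustrative only: assume the benefit of total FAI funding F is log(F),
# so the marginal utility of one extra dollar is 1/F. The funding levels
# ($1M in 2008, $10M in 2018) are made-up figures for the sketch.

def marginal_utility(total_funding):
    """Derivative of log(F): the value of one extra dollar at level F."""
    return 1.0 / total_funding

mu_2008 = marginal_utility(1e6)   # assumed $1M total FAI funding in 2008
mu_2018 = marginal_utility(1e7)   # assumed $10M total FAI funding in 2018
print(mu_2008 / mu_2018)  # a 2008 dollar has ~10x the marginal impact
```

Under these assumptions, crowding alone would outweigh the ~21% real gain from the investment sketch above, though the real question is how fast funding actually grows and what the true utility curve looks like.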
* At an extreme, if you leave your money in a will to be bequeathed some time
after your death, you will have no ability to oversee how
the money is spent; the people spending your money will not be accountable
to you to spend it effectively and altruistically, and may be corrupted by
this lack of oversight.
* There is some unknown timeframe, beyond which investing resources will be
less useful because it is "too late"; as an extreme case, once the first
strong AI is built, any money saved up and unspent would be useless in
determining the outcome. Your estimate of the probability distribution of
this timeframe should weigh heavily in your decisions.
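The "too late" consideration can be folded into the comparison as a toy expected-value calculation, assuming a made-up probability distribution over the year the first strong AI is built (every number here is hypothetical):

```python
# Toy expected-value weighting. A donation made in year t only matters
# if the first strong AI has not yet been built by then.

# Hypothetical distribution: P(first strong AI is built in that year).
arrival_prob = {2020: 0.1, 2030: 0.3, 2040: 0.4, 2050: 0.2}

def prob_not_too_late(donation_year):
    """Probability the first strong AI arrives after donation_year."""
    return sum(p for year, p in arrival_prob.items() if year > donation_year)

# Compare donating $1000 now vs. the grown sum in 2035, using an
# assumed 5% real annual return over the intervening 27 years.
now = 1000.0 * prob_not_too_late(2008)
later = 1000.0 * 1.05 ** 27 * prob_not_too_late(2035)
print(now, later)
```

With this particular made-up distribution, waiting still wins despite a 40% chance of being too late by 2035; a distribution concentrated on earlier years would flip the conclusion, which is why the probability estimate should weigh heavily.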
* Some set of events must take place before someone can build the first FAI.
The whole reason for investing your own resources is that you will be
contributing to one of the events in this causal chain, thus making FAI more
likely to happen. If I understand him right, Peter brought up the
possibility that, if you wait, one of the events in the chain (a
precondition for the contribution that you're qualified to make) might "pop
up" while you're waiting. For example, someone else (call him Bob) might
publish an important and unexpected insight next year that enables Peter to
figure out how to build an FAI, but Peter can't help out much this year
because Bob hasn't published yet. I agree that this is a valid scenario, but
it also has to be weighed against the possibility that Peter will gain some
important insight this year that Bob can use to build an FAI, or that Peter
will talk Alice into working on FAI this year, and Alice will gain an
important insight that Bob will need. So I don't see this particular concern
as significantly affecting the calculus one way or the other, unless you
have some particular reason to believe that your personal role comes later
(or earlier) in the causal chain than other people's roles.
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:24 MDT