From: Thomas McCabe (email@example.com)
Date: Tue Jan 29 2008 - 13:32:59 MST
SIAI: General objections
* The government would never let private citizens build an AGI,
out of fear/security concerns.
o Rebuttal synopsis: The government has shown almost no
interest in the transhumanist movement. We aren't a large voting bloc,
and politicians won't take our claims seriously. Governments have
enough trouble keeping up with present-day technology (remember Y2K?);
they aren't likely to be worrying about technology that may be
developed twenty or thirty years from now.
+ That may change as awareness of us increases, though
- one might use the very "politicians won't take our claims seriously"
response as an argument that SIAI shouldn't do any PR work that would
endanger itself. - Kaj
* The government/Google/etc. will start their own project and beat
us to AI anyway.
o Rebuttal synopsis: Narrow-AI projects, with little or no
emphasis on general intelligence, make up the vast majority of current
government and corporate projects. And if someone else is working on
it, that just means we have to work faster: a successful AI project
without Friendly AI could be catastrophic.
* SIAI will just putz around and never actually finish the
project, like all the other wild-eyed dreamers.
o This, I think, is a real, serious risk. - Tom
* SIAI is just another crazy "doomsday cult" making fantastic
claims about the end of the world.
o Rebuttal synopsis: SIAI has few cultish characteristics.
We don't believe, as a movement, in any aliens, supernatural beings,
mystical forces, or anything else outside the normal world of atoms
bumping together (individuals, of course, have a wide range of
religious beliefs). The Singularity is not inevitable, or predestined,
or immune from human error. If we want a positive Singularity, we need
to get the funding and do the research like any other engineering project.
* Eventually, SIAI will catch the attention of governments and set
off an international military AI arms race.
o Possible rebuttal: at least it's better for the arms race
to be triggered by an organization that's researching Friendliness, so
that there exist at least some theories about how to achieve it. The
alternative would be that the arms race was triggered by corporations
or the military.
+ Email to Tim Kyger (Pentagon employee, OSD): Do you
know of anyone in the government/military who has shown interest or
would be interested in transhumanism? We're compiling a list of
objections (http://www.acceleratingfuture.com/tom/?p=83), and several
of them revolve around government intervention. - Tom
# Response: I don't know a *soul* in DoD or any
of the services off the top of my head that has any *inkling* of the
very existence of trans-H or of the various technical/scientific lanes
of approach that are leading to a trans/post-human future of some
sort. Zip. Zero. Nada.
    * There's no sense in taking seriously an institution whose
leader and only full-time researcher is a middle-school drop-out
without a single peer-reviewed publication.
o Incidentally, this might be one of the rare objections
that it's better to just quietly ignore than try to answer... I don't
know if any answer will satisfy those who only look for formal
credentials before respecting someone, and there's no point in
highlighting the issue. - Kaj
+ Eli wrote LOGI and those two chapters for Global
Catastrophic Risks, which were technically peer-reviewed, and were
certainly published (by Springer-Verlag and Oxford University Press,
respectively). - Tom
                    # True. There are some people who don't accept
those as genuine peer review, though. See Miai's comment mentioning
Springer here, for instance. - Kaj
* SIAI has set as their goal the creation of an FAI, which if
successful gives them immense control over humanity's destiny and what
the AI actually does. That is something all of humanity should have
input on, not just a select few.
o Rebuttal synopsis: It would be wonderful if we could
educate the entire world's population to the point where they could
make effective decisions about AI programming. However, nobody - not
the government, not the corporations, and certainly not us - has the
resources required for such a massive project. Therefore, we must
program the AI to do it for us, through CEV or a similar technique.
* SIAI is advocating very specific approaches to Friendliness,
like CEV and the impossibility of "AI boxing". Isn't this premature
until we've done more research and have a more concrete theory of AI?
o Rebuttal synopsis: You have to start somewhere. SIAI's
current recommendations seem plausible in the light of current
knowledge. If pursuing those leads turns out to be fruitless, the
suggestions will be revised accordingly.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT