From: Richard Loosemore (email@example.com)
Date: Sun Aug 06 2006 - 09:18:35 MDT
Joshua Fox wrote:
> I note that the great majority of SL4 members have computer software as
> their primary profession. Why is this?
> After all, comparing other philosophies which seek to improve the world
> through some specific technical efforts, the professions of interested
> people and supporters do not show such homogeneity or connection to the
> implementation. For example, most AIDS activists, advocates for feeding
> the world's hungry, people interested in stopping global warming,
> supporters of third-world debt relief, and fans of space exploration do
> not specialize in medicine, agriculture, meteorology, finance, or
> spacecraft engineering respectively.
> While one might argue that familiarity with software gives one an
> understanding of AGI, in fact most software development is worlds away
> from true Singularity expertise.
> Anyone care to venture a guess as to why the great majority of
> Singularitarians (or at least of the SL4 members) work in a certain
> field? Why do we not see among Singularitarians more psychiatrists,
> psychologists, anthropologists, historians, or biologists, all fields
> which are to some extent relevant to the Singularity? For that matter,
> why do we not see more doctors, non-high-tech business people,
> musicians, or practitioners of other unrelated professions, as we do for
> other movements?
> This situation seems less than ideal. Though technical work must be done
> by experts, is not diversity of backgrounds valuable in non-specialist
> discussions of any intellectual pursuit as broad as this?
It's a good question, but I think the answer is relatively straightforward.
Computer science folks *know* what it means to program computers, and
they know something about the open ended potential of programmed computers.
Among folks who have never written a piece of software, I think there is
really quite a large understanding gap: more than with any other
science, there is widespread difficulty even understanding what is
involved and what its potential is.
To contrast with the other examples you give: AIDS activists can
extrapolate from their understanding of a hideous disease; advocates
for feeding the world's hungry can imagine hunger magnified to the point
of death; people interested in stopping global warming can imagine
greenhouses and heat that melts glaciers, etc.; supporters of
third-world debt relief can imagine what it is like to owe so much money
that all your salary goes to repayments; and fans of space exploration
just need to imagine romantic tales of ship exploration, extrapolated to
space.
The three main differences, I think, are these:
1) The problem with AI is that people can imagine computers being
intelligent, but they have no idea how difficult or easy that might be,
so they don't know if it will happen tomorrow or in 2106 AD.
2) When asked to imagine superintelligent computers that work at 1000
times human thought speed, their brains recoil from the sheer craziness
of the idea (not just ships extrapolated to space-ships, more like ships
extrapolated to wormhole/timemachine/universe surfing machines .... and
that is no longer mere extrapolation).
3) When asked to imagine AI, they think of every stupid horror movie and
horror story ever written, and since they have nothing but these
fictional examples, they cannot imagine AI dispassionately or
disconnect it from the raving fantasies of science fiction writers.
I think computer science people are better on points 1 and 2, although
they tend to panic about 3 as much as the rest of the population (hence
this list, which is dominated by considerations of Friendliness that are
not always very rational).
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:01:05 MDT