From: Yan King Yin (firstname.lastname@example.org)
Date: Sun Feb 29 2004 - 00:27:39 MST
>"Dramatic increases in collective human-machine intelligence are
>possible within 25 years. It is also possible that within the
>next 25 years single individuals acting alone might use advanced
>science and technology (S&T) to create and use weapons of mass
>destruction."
This is an important issue, and I think WMD is only one of
the problems. I'm also very concerned about an economic
problem: how do we cope with a jobless economy where most
jobs could be replaced by AIs?
There are jobs that are low-skilled from the common-sense
point of view but which are actually high-skilled in the
computational sense. An example would be nursing patients
who are unable to take care of themselves. Conversely,
there are jobs that are high-skilled from the common-sense
point of view but are actually low-skilled computationally.
An example is humans acting like low-complexity expert
systems.
My point is, *IF* the Singularity does not happen, then
a very natural scenario would be for AIs to replace human
jobs, starting from the computationally easy ones and
gradually progressing from there. For example, we'll see
fast-food preparation done by robots, which is actually
feasible with 2004 technology. It would also be the
reasonable thing to do because of increased efficiency,
cost-effectiveness, hygiene, etc. There is no more reason
why it shouldn't be done than why clothes should still be
sewn by hand. *Except* that a hard-takeoff Singularity
would make this point moot.
Continuing on the assumption that there won't be a hard
takeoff: jobs will become very scarce, and laissez-faire
capitalism may not create new jobs fast enough for the
population. Then we'll get into a situation where AI
cannot be developed because of social problems. So the
soft takeoff is also problematic....
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:00:35 MDT