From: Samantha Atkins (firstname.lastname@example.org)
Date: Sun Apr 06 2008 - 05:07:40 MDT
Rolf Nelson wrote:
>> This argument seems problematic to me. First, note that AI has a huge
>> credibility problem. People have been crying wolf about AI for decades, and
>> the media laps it up. But I still don't have a robot butler. Even things
>> like face recognition are still quite difficult.
> Correct, many people believe that AGI is impossible within the next 50
> years, or believe that because it's not certain to happen, it
> shouldn't be planned for.
If enough of the capable researchers are busy enough noodling over what
Friendliness is and how it may be absolutely guaranteed, then it will
take 50 years before we have AGI of sufficient power that anyone
actually has a reason to care whether it is "friendly" or not. Sometimes
I am almost paranoid enough to believe that "Friendly AI" was thought up
to throw a monkey wrench into AGI R&D. Not that it was progressing
with great alacrity in any case.
> My belief is that people who think this way
> are not "low-hanging fruit", most of them will always find an excuse
> to personally ignore the problem, and so casting the net wider (that
is, to people who have not yet heard the talking points) should be a
> higher priority than casting the net deeper (debating people at length
> who have already demonstrated an initial inability or an unwillingness
> to confront the problem.)
Generally, attempting to persuade a lot of people of something in order
to have your needs met or your ideal obtained is a fool's errand. The
most likely result of a really good and successful campaign in this
direction is to scare people enough that they march on Frankenstein's
Monster and shut down all progress toward AGI, and much advanced
computing for good measure. No worries, though, because this is such an
insular little side issue. Personally I am more concerned that we go
too long without the intelligence increase that only AGI or significant
IA can bring. Our world is much too complicated for merely human minds
to keep it more or less on the rails. I see the biggest danger as simply
not being smart enough on this rock to continue to deal successfully
with current and upcoming situations.
Some of us are neither unable nor unwilling to confront the problem. We
simply think we have much more immediate problems that require a great
deal more intelligence to be successfully created and brought to bear ASAP.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT