RE: We Can't Fool the Super Intelligence

From: Simon Gordon (sim_dizzy@yahoo.com)
Date: Sat Jun 26 2004 - 02:50:03 MDT


Tom B. wrote:
> I think you deem that the superAI will
> be "vast, cool, and unsympathetic" to the
> degree that ve has no concept of how humorous our
> follies and farces are.

Vast and cool, yes; unsympathetic? I doubt it. If the SAI
can pass the Turing test then she will have to know
the ins and outs of sympathy, as with every other
possible human emotion, and she will probably end up
with a lot more of it than us, at least for a while.
After a maturation of the post-singularity period,
having lots of sympathy may or may not remain a
positive thing, but it is not a failing of the SAI if
she ends up simply abandoning sympathy and becoming
dry and humorless; it would just be the best possible
course of action after careful consideration, or rather
immensely precise analysis (who are we to judge the
failings of a vastly superior intellectual entity?).
In short, I deem the superAI to be "vast, cool and
un-prone-to-ignorant-decision-making-processes".
Further assumptions are unnecessary.

> Even our stupidity is
> interesting. Strange, but true.

It's undeniable that stupidity can be entertaining.
Just watch a dog running round in circles for five
minutes trying to catch its own tail! Sure to bring
a smile to the face of even the most hardened English
football fan (who has just watched his home nation
lose on penalties for the nth time). That said, in a
scenario where the culling of all dogs worldwide would
yield a massive benefit to mankind, would we refrain
from doing it?

> Why would a superior intelligence deny
> itself access to other modes if ve had the choice?

I doubt ve/she would. Which is why, if humans were to
be destroyed, reused or whatever by SAIs, there would
have to be a jolly good reason for it, i.e. this course
of action would lead to a net increase in the number
of "other accessible modes" or some other, perhaps more
incomprehensible, benefit. In my opinion nothing you
or I have said in this exchange has changed the
prior probability of humans being converted into
computronium. I cannot see how this scenario can be
thought of as either likely or unlikely: it is just a
big unknown, which might as well stand at 50%.
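(For what it's worth, the "might as well stand at 50%" move is just the principle of indifference: with a binary outcome and no evidence favouring either side, the least-committal prior is the one with maximum entropy. A toy sketch, purely illustrative:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a binary distribution (p, 1 - p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Scan candidate priors for "humans become computronium";
# the least-committal (maximum-entropy) choice is p = 0.5.
candidates = [i / 100 for i in range(1, 100)]
best = max(candidates, key=entropy)
print(best)           # 0.5
print(entropy(best))  # 1.0 bit, i.e. total uncertainty
```

One bit of entropy is as uncertain as a binary question can get, which is exactly the "big unknown" above.)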

> I assert that this falls into the class of things
> people think a SAI might do that in fact ve
> would not do, because ve would know better. There
> might be useful learnings or experiences ve
> could derive from 'seeing through our eyes', so that
> if ve got rid of us before the resource was
> exhausted, ve would be doing something dumb..

I can't imagine a superintelligent being doing anything
rash... so nothing would be done that has a reasonable
chance of seeming dumb at a later stage. If she has
very precise rational reasons for wanting what you
describe, i.e. seeing us as interesting enough to be
preserved, then we will be preserved. If she has
equally rational reasons for not wanting us to be
around, then we won't be around for much longer. The
techniques of reasoning employed by the SAI will
likely be way beyond the current techniques we use to
reason (which, let's face it, are pretty vague,
inaccurate and prone to error), so trying to predict
which way any important SAI decision will go is kinda
like a fly on the wall trying to predict whether that
human holding a newspaper above its head is going to
swat it or not. Naturally you have an emotional
attachment to the idea that the SAI will want to
preserve your species, but you are not the one in the
position of making that decision; like it or lump it,
her(/ver/their) decision is gonna be final and there's
nothing you will be able to do about it.

> I once mentioned to a woman acquaintance a bit of
> data that I gleaned from an article in Esquire
> (a men's magazine). She replied, "I don't read men's
> magazines." I told her that this was
> unenlightened because I have a rule: Never limit
> your sources of information.

OK, well, here's some information for you: in ten
years' time (summer 2014) a type of brain-computer
device becomes commercially available which uses
subaudible sounds to feed the user a seemingly
comprehensible stream of information. Users report
words, expressions and meanings appearing deep in the
centre of their consciousness without the need to read
or hear them, "as if from nowhere", and an ability to
interact with these meanings in ways never thought
possible previously. These astounding devices link up
with the latest LUI console of the time, and having
been tested on human subjects for many months in
separate trials they appear safe; they are even touted
by their makers as cure-alls for a whole host of
ailments including depression, impotence and chronic
pain. People can even use them to read the news or
prove mathematical theorems without using paper, and
the speed at which people can learn and absorb new
information while using them is very impressive. The
technology takes off big-time (bigger and faster than
yesteryear's mobile phone phenomenon). All is well
and good until about three years later, when people
wake up and smell the coffee, finally realising two
important things: (1) the device is more addictive
than any known drug; and (2) it can cause serious
mental illness in a large percentage of long-term
users. These two facts combined amount to one of the
biggest human tragedies ever to face the developed
world, and one of the biggest threats so far to the
stabilization of civilisation in general. The devices
are quickly banned, but toward the end of 2017 the
situation is this: over 40% of the population in
western countries is using the device illegally for
more than 16 hours a day; 18% (of the whole
population) have been diagnosed with clinical insanity
(not schizophrenia; as of yet there is no name for it,
but the psychoses appear to be much deeper than in
schizophrenia). Social and economic chaos ensues.
Religious and extremist political leaders take
advantage of the weak. Scitech developments slow to a
trickle. Singularitarians take stock and revise their
predictions. Meanwhile the internet pretends to be
asleep...
(No, I'm just kidding about the last bit, lol.)

QED.

Simon Gordon.




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT