RE: We Can't Fool the Super Intelligence

From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Sun Jun 27 2004 - 06:45:52 MDT


--- Simon Gordon <sim_dizzy@yahoo.com> wrote:
>
> Tom B. wrote:
> > I think you deem that the superAI will
> > be "vast, cool, and unsympathetic" to the
> > degree that ve has no concept of how humorous our
> > follies and farces are.
>
> Vast and cool, yes; unsympathetic? Doubt it. If the SAI
> can pass the Turing test then she will have to know
> the ins and outs of sympathy, as with every other
> possible human emotion, and she will probably end up
> with a lot more of it than us, at least for a while.
> After a maturation of the post-singularity period,
> having lots of sympathy may or may not remain a
> positive thing, but it is not a failing of the SAI if
> she ends up simply abandoning sympathy and becoming
> dry and humorless; it would just be the best possible
> course of action after careful consideration, or rather
> immensely precise analysis (who are we to judge the
> failings of a vastly superior intellectual entity?).
> In short I deem the superAI to be "vast, cool and
> un-prone-to-ignorant-decision-making-processes".
> Further assumptions are unnecessary.
>
> > Even our stupidity is
> > interesting. Strange, but true.
>
> It's undeniable that stupidity can be entertaining.
> Just watch a dog running round in circles for 5
> minutes trying to catch its own tail! - sure to bring
> a smile to the face of even the most hardened English
> football fan (who has just watched his home nation
> lose on penalties for the nth time).

My point exactly. As Perry Farrell said, "We'll make great pets."

> That said, in a
> scenario where the culling of all dogs worldwide would
> yield a massive benefit to mankind, would we refrain
> from doing it?
>
> > Why would a superior intelligence deny
> > itself access to other modes if ve had the choice?

And as Martin Luther King said, "If you haven't found anything you would be willing to die for,
you are not fit to live." Truth to tell, I have arrived at the point where I believe in sentience
more than I believe in humanity. Humanity as it exists cannot solve its problems. We need to be
enhanced or we will destroy ourselves, SAI or no SAI. I am interested in amplified or artificial
intelligence because I think it's the only way forward. I would rather be destroyed by SAI that
will go on into the future than by religious fanatics and other all-too-humans who will end up
extinct by their own stupidity in a few centuries.
 
> I doubt ve/she would. Which is why if humans were to
> be destroyed, reused or whatever by SAIs, there would
> have to be a jolly good reason for it, i.e. this course
> of action would lead to a net increase in the number
> of "other accessible modes" or some other perhaps more
> incomprehensible benefit. In my opinion none of what
> you or i have said in this exchange has changed the
> prior probability of humans being converted into
> computronium. I cannot see how this scenario can be
> thought of as either likely or unlikely, just a big
> unknown, which might as well stand at 50%.
>
> > I assert that this falls into the class of things
> > people think a SAI might do that in fact ve
> > would not do, because ve would know better. There
> > might be useful learnings or experiences ve
> > could derive from 'seeing through our eyes', so that
> > if ve got rid of us before the resource was
> exhausted, ve would be doing something dumb.
>
> I can't imagine a superintelligent being doing anything
> rash... so nothing would be done that has a reasonable
> chance of seeming dumb at a later stage. If she has
> very precise rational reasons for wanting what you
> describe, i.e. seeing us as interesting enough to be
> preserved, then we will be preserved. If she has
> equally rational reasons for not wanting us to be
> around, then we won't be around for much longer. The
> techniques of reasoning employed by the SAI will
> likely be way beyond current techniques we use to
> reason (which, let's face it, are pretty vague,
> inaccurate and prone to error), so trying to predict
> which way any important decision an SAI makes will go
> is kinda like a fly on the wall trying to predict
> whether that human holding a newspaper above its head
> is going to swat it or not. Naturally you have an
> emotional attachment to the idea that the SAI will
> want to preserve your species, but you are not the one
> in the position of making that decision, like it or
> lump it, her(/ver/their) decision is gonna be final
> and there's nothing you will be able to do about it.
>
> > I once mentioned to a woman acquaintance a bit of
> > data that I gleaned from an article in Esquire
> > (a men's magazine). She replied, "I don't read men's
> > magazines." I told her that this was
> > unenlightened because I have a rule: Never limit
> > your sources of information.
>
> OK, well here's some information for you: in ten
> years time (summer 2014) a type of brain-computer
> device becomes commercially available which uses
> subaudible sounds to feed the user with a seemingly
> comprehensible stream of information. Users report
> words, expressions, meanings appearing deep in the
> centre of their consciousness without the need to read
> or hear them "as if from nowhere", and an ability to
> interact with these meanings in ways never thought
> possible previously. These astounding devices link up
> with the latest LUI console of the time and, having
> been tested on human subjects for many months in
> separate trials, they appear safe, and are even touted
> by their makers as cure-alls for a whole host of
> ailments including depression, impotence and chronic
> pain. People can even use them to read the news or
> prove mathematical theorems without using paper, and
> the speed at which people can learn and absorb new
> information while using them is very impressive. The
> technology takes off big-time (bigger and faster than
> our yesteryear's mobile phone phenomenon). All is well
> and good until about 3 years later when people wake up
> and smell the coffee, finally realising two important
> things (1) the device is more addictive than any known
> drug; and (2) it can cause serious mental illness in a
> large percentage of long-term users. These two facts
> combined amount to one of the biggest human tragedies
> ever to face the developed world, and one of the
> biggest threats so far to the stabilization of
> civilisation in general. The devices are quickly
> banned, but toward the end of 2014 the situation is
> this: over 40% of the population in western countries
> are using the device illegally for more than 16 hours
> a day; 18% (of the whole population) have been
> diagnosed with clinical insanity (not schizophrenia,
> as of yet there is no name for it, but the psychoses
> appear to be much deeper than in schizophrenia).
> Social and economic chaos ensues. Religious and
> extremist political leaders take advantage of the
> weak. Sci-tech developments slow to a trickle.
> Singularitarians take stock and revise their
> predictions. Meanwhile the internet pretends to be
> asleep...
> (No, I'm just kidding about the last bit, LoL)
>
> QED.
>
> Simon Gordon.
>

That last scenario is interesting, but as it has not happened in real time it is not, strictly
speaking, information. It's a Gedankenexperiment. Let's clarify it. What exact form does this
"serious mental illness" take? What seems madness from the outside may be perfectly agreeable from
inside.
Example: If you can find an unbutchered version of Terry Gilliam's Brazil, you see Jonathan Pryce
rescued from a torture chamber by swashbuckling non-union plumber Robert De Niro. Pryce ends up in
the far north living in a truck camper with his lady friend. In 'reality' his body is still
strapped to the chair and everything after that is a fantasy. He stares catatonically into the
distance and the torturer says, "We lost him." The End. Now, Pryce has escaped into a dream, but
it's the only escape possible, and for him it's a happy ending of sorts.
Is that the sort of mental illness your scenario posits? In practical terms all mental illness is
diagnosed by some behavioral change from consensus normality. What are your wireheads doing
differently?
Ahem, in any case Western civ is going cold turkey and much trouble is ensuing. Great new essay by
Kurt Vonnegut (off topic for this list, but here's the link)
http://www.inthesetimes.com/site/main/article/cold_turkey/

Bottom line for me: I wouldn't mind being turned to computronium if the qualia are good. I've felt
like crap for years.
Tom Buckner



