From: Christian Szegedy (firstname.lastname@example.org)
Date: Fri Feb 01 2002 - 02:45:21 MST
>In a message dated 1/31/2002 3:58:35 AM Pacific Standard Time,
><<Perhaps it is unethical to convince a human to upload and waste valuable
>computational resources, instead of letting them be used by
>some well-tuned AI much, much more effectively.
>I may put it this way: if you save one human life by uploading, you kill a
>hyperintelligent/hypersensitive AI in the same turn.>>
>The consumption of power in convincing a human being to upload would be
>negligible to an SI, therefore it would be easier to build new SI's or SI
>components out of simply transcend-ifying human minds which are *already
>there*, and more ethical too. So both the SI and we win.
As Alan has already pointed out, you misunderstood my point: I meant the
resources used for simulating the human being. Good-quality simulation is a
costly thing... Even if you have a large amount of resources, you (the AI)
will try to use them as efficiently as you can.
>people let themselves be convinced by people on the same level rather than
>by more intelligent ones, let alone by machines. And even if a
>superintelligent AI would find very effective (but semantically wrong)
>arguments for uploading by analysing the human memetic flora and the flaws of
>human thinking, would it be "ethical" to convince them that way?>>
>~shock level deficiency detected~
>This SI "machine", as you put it, would be very, very far from our current
>conception from what a machine is. This "machine" would be much more like
>"God" (not in the Christian sense) than any "machine" seen up to this point.
>For example, it could take on the appearance of your long lost father or
>lover, or give you a feeling of complete bliss when in its "presence". And
>those are very anthropocentric examples. A true SI would be "more human than
>human", in the godliest sense we can comprehend. Super empathic, super
>ethical, super nice, just an all around great guy to be around! =D Is it
>"ethical" to bring in a starving homeless child from off the street and
>clothe them and educate them, even if they are afraid of social interaction
>or family love, at first? This is analogous to the humanity+the Sysop
>situation (In case you couldn't guess. It seems someone missed my earlier
>analogy about human love being a fleck of gold and the Singularity being a
>block of gold the size of a house, amazingly.)
>If you wish to continue arguing this point further then please mail me
>directly, and spare the list. Thank you kindly.
Whether it was a "shock level deficiency" or the opposite of it will turn
out in this century.

I find the human-centric viewpoint most of you propose basically flawed.
The remark "let alone by machines" was not intended to express my own
viewpoint or opinion, but that of the people who are to be convinced
(probably at a very low shock level, whatever SL means. :) ). In any case,
the mail was meant to be quite sarcastic; unfortunately, you have taken it
too literally...
To make my point clearer: would you care about convincing your pets to
upload? First of all, it does not make much sense to upload a cat;
secondly, if you wanted to upload it, you would simply do it rather than
convince it. But let's go further: if you uploaded your cat to carry out
an interesting experiment, would you then start uploading all cats?

I don't want these questions to be answered, because the real questions
will be much more complicated, and far less binary, than they seem today...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT