From: John McNamara (email@example.com)
Date: Tue Dec 01 2009 - 17:21:32 MST
On Tue, Dec 1, 2009 at 17:41, Matt Mahoney <firstname.lastname@example.org> wrote:
> John McNamara wrote:
> > What is the maximum tolerable error that will not result in the failure of your engineering project (ie upload of a live human with no apparent deviations from expected normal thinking patterns (including fuzzy things like emotions/inspiration etc) for at least 1000 years with 99.9999 confidence level etc etc).
> Suppose there was a program that simulated you so well that nobody could tell the difference between you and the program in a Turing test environment. What is the probability that the program will be you after you shoot yourself?
> -- Matt Mahoney, email@example.com
Exactly zero, in my personal opinion.
For me, the term "be you" puts the question in a philosophical frame
as opposed to, say, an engineering one.
As such, I can give you my personal philosophical reasoning on it, but
I cannot offer a math-based answer.
If there were a credible science of the "Math of Philosophy" and I had
a PhD in it, it might be a different matter.
The example engineering project I mentioned was intended as a
throwaway vague example. There is little or no technical detail in it.
I think the point I was making would apply to any engineering project
attempting an error-controlled simulation of a highly complex physical
system.
Engineers today have imperfect information on the systems they build
and have to use their profession's best techniques to determine whether
the thing they build will be OK. This involves saying things like "this
wall will not develop more than x micro-fractures per sq m when
exposed to cyclical winds between y & z m/s for s mins every day for
at least 5 years with a probability of .99999" after doing lots of
testing and maths and (hopefully) very little wild guessing.
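As a toy illustration of how such a compounded reliability target works (all
the numbers here are hypothetical, not taken from any real standard): if the
wall must survive 5 years of daily wind exposures with overall probability
.99999, and the exposures are treated as independent and identical, the
required per-day survival probability is much stricter than the overall one.

```python
# Illustrative sketch only: how an overall survival target compounds
# over a multi-year service life. All numbers are hypothetical.

def required_daily_reliability(target: float, days: int) -> float:
    """Per-day survival probability needed so that surviving every one
    of `days` independent, identical daily exposures meets `target`."""
    return target ** (1.0 / days)

days = 5 * 365          # hypothetical 5-year service life
target = 0.99999        # hypothetical overall survival probability
p_daily = required_daily_reliability(target, days)

print(f"per-day reliability needed: {p_daily:.10f}")
# The per-day requirement is far closer to 1 than the overall target,
# because even tiny daily failure odds compound over ~1825 exposures.
```

The same compounding logic is presumably what an "upload engineer" would face:
a per-step error tolerance has to be derived backwards from the end-to-end
reliability the client demands.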
I see future "upload engineers" as working in a similar way. It may
turn out that they have no practical use for the Turing test as we
know it now.
That said I'm happy to discuss your specific scenario further and it
deserves a more detailed answer than '0'.
You did after all create a new thread.
The main benefit of uploading is obviously moving from our current
situation where the mind 'runs' on the familiar human organic hardware
to hardware that has superior features, the most important of which is
to cleanly separate 'data' from 'hardware', with the many attendant
engineering benefits that brings.
I realise an upload is effectively "data" but the objective is to be
able to choose better hardware to 'run' that data on.
There are theoretically 2 ways to do an upload that I know of.
1 : build 1 (or more) copies on new hardware (optionally dispose of
the original or let entropy take its course)
2 : Ship of Theseus style migration of live 'running' mind from the
familiar human organic hardware to new hardware.
I'll assume they're equally practical and reliable from an engineering POV.
I personally would only tolerate option 2 (assuming only these 2
alternatives) for myself.
No disrespect to the scientific value (whatever that may be) of the
Turing test, but I couldn't care less about the "nobody could tell the
difference between you and the program in a Turing test environment"
part.
Others' observations on my status are theirs and are irrelevant to my
philosophical opinion on this matter.
They can think I'm Santa Claus for all I care.
I would envision a successful type 2 upload occurring only over a long
time scale, on the order of years.
When completed, I would think of the final disposal of the last part of
my original "meat" host the way the removal of an appendix is considered
now. I might keep the parts for sentimental value or to build a robot.
What do I think of option 1 ?
I think the upload is just a copy (regardless of their personal
philosophical opinions). Should such copies have the full suite of
legal etc. rights given to sentients in whatever society they are part
of?
I think the original is an entirely separate person, logically and legally.
Them arranging their own death (by whatever means) would be suicide.
Forcibly arranging their death against their will (their will, that
is, in the last moment of their lucid, informed consciousness before
their death) would be some variation on murder/execution.
I don't think the time gap between the upload's creation and the
original's destruction has any bearing on this.
Make it zero, one Planck time (~5.391×10^-44 sec), or a year; it
doesn't matter.
It is easy to imagine a scenario (under the above assumptions) where
the upload could be legally charged with the murder of their
"original". There are lots of possible macabre and entertaining
scenarios but I consider them "domestic stories" and probably not
relevant to your question.
Were a type 1 upload made of me involuntarily (I would not permit it)
and I survived, I would consider them to be a sort of weird offspring
or distant relation. I would not consider them as having any property
rights over any of my physical or informational assets.
I would be of the opinion that there should be legal societal rules to
handle such an awkward situation. The onus would be on them to change
their life as much as required to minimise harm to both parties. I
don't think they should be mind wiped or anything like that. It would
be a horrendously complex and challenging personal, ethical and legal
problem for both parties and society to manage.
If a friend had an upload made and they survived, I would consider
their upload a different, new person. I would likely avoid them.
If a friend had an upload made and they didn't survive, I would
consider my friend dead, mourn them and probably avoid the upload to
preserve my own mental health.
The really interesting thing is that if the upload were as good as
stipulated it would have all the same opinions as me on this matter.
Obviously if both upload types were available and widely used there
would be political conflict between the proponents of each.
VOTE THESEUS PARTY !
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT