From: Lee Corbin (firstname.lastname@example.org)
Date: Sat Mar 15 2008 - 13:13:42 MDT
> Lee Corbin wrote:
> > Ah. Here you mean not only the computer science "shared memory"
> > but real human type shared memory.
> Yes. Though it's strained, I don't know what other analogy to use because
> I am assuming the merging of states isn't feasible for discussion unless
> we're talking about uploaded brains. I guess you could plan for some
> convoluted chemistry and physical manipulation of meat, but that seems
> too icky for discussion. :)
Yeah, it's a hell of a lot messier, that's for sure. But seriously, it's the
*principle* we are talking about, as you know, not the practical
difficulties of implementation.
>>> I expect that the uploaded person will be software running on
>>> top of some general virtual person hardware. If so, there will
>>> be no direct way to experience whether your memories are
>>> retrieved through a GLUT or somehow recreated on-the-fly
>>> from templates (something like compression/decompression
>>> of a world of context to a few relevant bits that can be used
>>> to reconstruct a most-likely scenario that we believe to be a
>>> real memory)
Nice point. Does the act, in the latter case, of generating the
memories on-the-fly as you suggest contribute anything to
consciousness? At least to me, that's a nice question.
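The GLUT-versus-reconstruction distinction can be sketched in code. This is a toy illustration, not a claim about how uploads would actually work; all class and variable names here are hypothetical:

```python
# Two interchangeable memory backends for an upload: a giant lookup
# table (GLUT) that stores every memory verbatim, and a generative
# store that keeps only a few relevant bits and reconstructs a
# most-likely scenario on demand from a shared template.

class GLUTMemory:
    def __init__(self):
        self._table = {}            # key -> full recorded memory

    def store(self, key, memory):
        self._table[key] = memory

    def recall(self, key):
        return self._table[key]     # retrieved verbatim

class GenerativeMemory:
    def __init__(self, template):
        self._template = template   # shared "world of context"
        self._cues = {}             # key -> the few bits that differ

    def store(self, key, memory):
        # compress: keep only the parts that differ from the template
        self._cues[key] = {k: v for k, v in memory.items()
                           if self._template.get(k) != v}

    def recall(self, key):
        # decompress: rebuild the most-likely scenario on-the-fly
        return {**self._template, **self._cues[key]}

template = {"place": "kitchen", "weather": "sunny"}
event = {"place": "kitchen", "weather": "rainy"}

glut = GLUTMemory()
glut.store("tuesday", event)

gen = GenerativeMemory(template)
gen.store("tuesday", event)

# From the inside, the two recalls are indistinguishable:
assert glut.recall("tuesday") == gen.recall("tuesday")
```

The point of the sketch is only that both backends return identical memories, so nothing available to introspection could tell them apart.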
>>> and a context switch back will appear to those inhabiting the
>>> suspended environment that the results of those independent
>>> threads have been computed instantly.
Honestly, I have no idea what you're getting at with that! As I said
>> That's not very clear, IMO. With ordinary raw threads or processes
>> running on a computer, sure, one moment the process has access to
>> data structures X and Y, and the next, equal access to Z. But that
>> entirely ignores the knotty problem of how memories are added to
>> people, as you say "instantly".
> what makes "ordinary raw thread" different inside your PC, in a
> computronium Jupiter Brain or the entire detectable universe?
> I don't mean this to be a rhetorical question. Given my previous
> paragraph (this post), I would like you to describe what distinguishes
> PC threads, their equivalent analogue for the software mind
> running on virtual human hardware, and the real-world mechanism
> (whatever it might be).
I don't know that they're really any different in principle. Normally
the computer analogies work splendidly. But here, again, I just
note that suddenly being given the complete works of Shakespeare
by your spouse in no way equates to your having
carefully read them and integrated them line-by-line into your
memories. As I said
>> Normally each new experience you have is immediately compared
>> on some sort of salience measure to everything else that has ever
>> happened to you, i.e., to all your other memories. That's why you
>> are "reminded" of things, some of which happened a long time ago.
>> Now if you get enough new experiences, the new memories that
>> are generated are slowly integrated into all your existing ones.
>> The computer analogy seems a little strained here, at least with
>> the kinds of algorithms we have today running on our machines.
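The salience comparison described above can be sketched as a toy loop, with a crude set-overlap standing in for whatever measure real brains use. The names and the similarity metric are purely illustrative:

```python
# A toy salience measure: each new experience is compared against
# every stored memory, and sufficiently similar ones surface as
# "remindings" before the new experience is integrated.

def similarity(a, b):
    # crude feature-overlap measure between two experiences
    return len(a & b) / len(a | b)

def experience(new, memories, threshold=0.5):
    remindings = [m for m in memories if similarity(new, m) >= threshold]
    memories.append(new)        # slowly integrated with the rest
    return remindings

memories = [{"rain", "trash", "monday"}, {"sun", "beach"}]
reminded = experience({"rain", "trash", "friday"}, memories)
# the rainy Monday trash run comes back; the beach does not
assert reminded == [{"rain", "trash", "monday"}]
```

Note that the comparison runs over *all* existing memories, which is exactly the part that makes "instant" bulk memory-grafting seem implausible: the integration work scales with everything you already know.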
> Now I have a greater appreciation for the trouble you see with
> close copies not being close enough to merge.
Well, could you have snipped a lot of the foregoing, I wonder?
In addition to your ghastly HTML, I'm having some trouble here
knowing what we've agreed to and what we haven't, and, alas,
have not been snipping very conscientiously myself.
> If the salience measure for two different copies were sufficiently
> far apart that one would find a new fact compatible with prior
> experience enough to learn it while the other was unable to
> accept the new information because it was incompatible with
> prior experience.
Right. One of them might have just finished a class in algebra,
but not the other, and the fact that "2abc + 3a = 2a(bc + 1.5)"
might be totally incomprehensible to the latter.
> In an extreme case we could construct a scenario where the
> two copies were led to believe completely incompatible beliefs
> (e.g.: religious conditioning)
> We may need to evolve some method of dealing with this. My
> guess would be that our normal memory pruning mechanism
> could be employed to simply erase/suppress any incompatibilities.
At first glance, that sounds awful. I myself definitely want to retain
good arguments for each side of a dilemma, for example.
> There is evidence (of varying strength) to suggest that sleep
> facilitates mental housekeeping. There is also evidence of
> psychological defense mechanisms that will artificially create
> memories to block recall of traumatic events.
Yes, at the sacrifice of true knowledge on the subject's part.
> Perhaps the reintegration process will involve vetting what experience
> to keep from the copy? If you spawn a LeeCorbin_EmptyTrash
> process, it might not require the vast knowledgebase of your entire
> history (possibly only the history of events since the last time it was
> invoked). Now this task/process believes itself to be LeeCorbin
> (insofar as you would grant such a process the implicit trust
> you extend to yourself).
Now how can that be? A LeeCorbin_EmptyTrash process wouldn't
have any access (nor need any access) to almost all of my history.
I already have reflexes that kick my leg when the doctor strikes my
knee. They're not really me, not in the slightest.
> After this sub-self has fulfilled its reason for existing and you have
> verified success, you may choose to reintegrate the complete
> experiential record of that process. In that case, you should have
> just done the task directly.
I should have? If I had done so, then as I made my weary way out
to the trash receptacle in the pouring rain, various melancholy
thoughts might intrude of one kind or another. I prefer your plan:
I spawn a body that knows nothing but mindlessly taking out the
trash, and then reintegrate that memory, just to make sure the job
got done.
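The spawn-then-optionally-reintegrate idea can be sketched as a few lines of code. Everything here (the `Upload` class, the history slicing, the `keep` flag) is a hypothetical illustration of the scheme being discussed, not a proposal:

```python
# Sketch: spawn a sub-process that carries only the recent,
# task-relevant slice of history, run the task, then choose
# whether to reintegrate its experiential record.

class Upload:
    def __init__(self, history):
        self.history = list(history)

    def spawn(self, since):
        # the clone gets only the history since the given point
        return Upload(self.history[since:])

    def reintegrate(self, clone, keep=True):
        if keep:                  # vet which experience to keep
            self.history.extend(clone.history)

lee = Upload(["childhood", "college", "breakfast"])
empty_trash = lee.spawn(since=-1)      # knows only "breakfast"
empty_trash.history.append("took out the trash in the rain")

# confident there is minimal novel experience: discard it all
lee.reintegrate(empty_trash, keep=False)
assert "took out the trash in the rain" not in lee.history
```

The interesting dial is `keep`: reintegrate everything and you might as well have done the chore yourself; discard everything and the melancholy walk in the rain never happened to you.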
> At the opposite extreme, you don't subsume any of the experience
> because you are confident there is minimal novel experience
> associated with that task. The degree to which you care about
> the experience is probably related to how much of your Self you
> originally invested in the creation of the clone/sub-process.
> I realize this isn't exactly a copy or close duplicate (per the subject line)
> - Would you call a clone that has _only_ the last 2 minutes of task-
> specific knowledge to be a copy?
> Would you call it a completely different identity?
It hardly sounds like it's even a person at all, unless I've misread you.
> I think this question comes out of the discussion about what makes
> an identity: the model predicting their behavior, or the memory of
> prior situations? (tough call because past events are often the raw
> data upon which the model is based)
What makes an identity? Tough call all right! I'm not even sure
that a model predicting my behavior makes it me, seeing as how
we all act predictably from time to time. Moreover, a vast
intelligence could probably predict my behavior at the 98%
accuracy level, just the way that I might predict an ant's. But
it seems weird to say that I am the ant, or that that vast
intelligence is me.
> Is it possible to observe that I choose blue rather than red in
> 100 instances, so you remember only that I prefer blue - then
> delete your memory of the 100 instances and retain only the
> knowledge that I prefer blue?
Certainly. I think that that happens *all* the time. I may not
remember Joe's reasons, but I know that he's a Bush supporter.
> Upon my next choice will you be able to assess that I made a
> characteristic choice of blue?
Sure. It will accord well with my knowledge of your behavior,
or, to use trickier and more dangerous, but perhaps more accurate
language, "my model" of your behavior.
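That compression of instances into a retained preference can be shown in a few lines. This is only a toy, with hypothetical names, of the trade being described: keep the model, delete the raw data it was built from:

```python
# Compress 100 observed choices into a single retained preference,
# then predict the next choice from the summary alone.
from collections import Counter

choices = ["blue"] * 100          # the 100 observed instances

# build the model, then discard the detailed episodic record
preference = Counter(choices).most_common(1)[0][0]
del choices

def predict():
    return preference             # the model, not memory, answers

assert predict() == "blue"
```

Prediction still works after the deletion, which is the point of the question: what is lost is not predictive power but the raw data, and with it any subtler patterns a later, closer examination might have found.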
> Is there any value to incur the storage overhead of recording the
> details of every one of those 100 prior instances?
In many situations, yes. On closer examination later, more subtle
patterns may emerge. If I remember Joe's expressed reasons for
supporting Bush, certainly our future conversations will be more
productive.
> How much memory optimization do we already perform, that we
> will need to be able to do in an uploaded state?
How should I know? :-)
> Again, I apologize for the strained computer analogy
Oh, not at all.
> - but I continue to assume the most logical way any of these copies
> exist is after uploading.
I don't see them as more logical. The good old teleporter/scanner
device is just fine for TEs, no? But for ease of implementation,
nothing beats making a copy of an upload! :-)
P.S. Sorry, as I explained, for not snipping more. It will be a miracle
if anyone bothers reading this whole thing except you and me.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT