From: Cliff Stabbert (firstname.lastname@example.org)
Date: Thu Jul 04 2002 - 19:17:34 MDT
Thursday, July 4, 2002, 8:40:36 PM, James Rogers wrote:
JR> I don't think it really matters; I was discussing it more as a consequence
JR> of certain premises, not evaluating the consequences. With respect to
JR> possible consequences, it defines our relationship to an SI. If we actually
JR> had free will, a consequence of that is that humans would be inscrutable to
JR> an SI.
I'm not sure this follows. Predictability != lack of free will.
Even if we presume a dog has free will (I assume we're not
exclusively talking about higher-level, intelligent/analytical
decision making, but also simple "shall I lick my balls or chase
the cat" choices), we might still be able to predict its behaviour.
Conversely, presuming a 'photon' has no free will doesn't give
us the ability to predict its behaviour in a two-slit experiment.
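To make the two-slit point concrete: a toy sketch (idealized far-field cos² fringe, made-up slit/screen parameters, not real single-photon physics) in which each individual hit position is drawn at random, yet the aggregate pattern is completely predictable. Determinism-at-the-ensemble-level and unpredictability-at-the-event-level coexist:

```python
import math
import random

def slit_intensity(x, wavelength=1.0, slit_sep=5.0, screen_dist=100.0):
    """Idealized two-slit intensity (equal slits, far field): cos^2 fringe.
    Parameters are illustrative assumptions, not measured values."""
    phase = math.pi * slit_sep * x / (wavelength * screen_dist)
    return math.cos(phase) ** 2

def sample_photon(rng, x_max=40.0):
    """Rejection-sample one photon hit position from the fringe pattern.
    Any single hit is irreducibly random; only the distribution is fixed."""
    while True:
        x = rng.uniform(-x_max, x_max)
        if rng.random() < slit_intensity(x):
            return x

rng = random.Random(0)
hits = [sample_photon(rng) for _ in range(10000)]

# No rule lets us predict where the *next* photon lands...
print(hits[:3])

# ...but the ensemble is lawful: hits pile up at the intensity maxima
# (near x = 0, +/-20, +/-40), so most land where the fringe is bright.
bright_fraction = sum(1 for x in hits if slit_intensity(x) > 0.5) / len(hits)
print(bright_fraction)
```

Statistically the fringes always come out the same (the bright fraction is about 0.82 for this pattern), which is exactly the sense in which "no free will" for the photon buys us no event-by-event predictability.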
JR> However, there are other mathematical consequences to this that do
JR> not map to the reality of human minds as we actually know them.
JR> I behave as though I have free will, but I also realize that this probably
JR> can't be the case in an omniscient objective sense.
I think this still presumes predictability (in that sense) = lack of free
will, which I don't see as necessarily the case.
The "free will" concept is pretty slippery in any case; one common
formulation is that "we could have chosen differently". But in the
end, of course, we couldn't -- all probability curves end up
collapsing one way or the other, and in retrospect, everything that
has occurred was "fated" to occur exactly the way it did, including
our "choices". In a sense, the whole concept of hypotheticals is
bogus, from the standpoint of hard science.
At least, until we build that lateral time machine.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT