From: Norm Wilson (firstname.lastname@example.org)
Date: Wed Jun 23 2004 - 06:27:35 MDT
Paul Fidika wrote:
> That's the problem with Jackson's so-called "knowledge
> argument"; he postulates a being whose power and
> knowledge vastly exceeds our own, and then supposes that
> he can intuit what that being does or does not know.
> Well how about a new argument (which, as far as I am
> aware, is original):
I believe you've successfully refuted the knowledge argument, as presented by Jackson. However, the question that the argument addressed remains: what is the relationship between physics and qualia? Of course, we find a *strong* correlation between physical states and the associated experience of qualia. In my previous post, I argued that using physics to deny the subjective experience of qualia is begging the question, so I don't believe that answers such as "qualia are an illusion" are correct. Perhaps within a closed physical description of the universe the experience of qualia is irrelevant (perhaps not), but that does not mean qualia aren't *real*; it just means they are outside the physical model. I know this smells like dualism, but in my mind it remains an open question. I believe that claims like "only things which can be explained by our current understanding of physics are real" are unfounded, unless you weaken the meaning of "real" to "things that can be described by physics", which makes it a tautology.

The honest answer is that we do not know. I think it would be constructive for the group to honestly admit what we don't know, since a lot of debate goes on over claims that are presented as stronger than they really deserve to be. While it's fun to make a strong claim and defend it vigorously, in the end you're right back where you started, with two deeply entrenched camps and a bunch of unsettled questions.

In a practical sense, I think we should leave unanswered questions "open" when programming the seed AI; i.e., we shouldn't tell it that something is true when we really don't know the answer. The AI should reason with healthy skepticism and an appreciation for the effects of subjectivity on information, and it should always view its own understanding of things as incomplete.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT