From: Aaron McBride (firstname.lastname@example.org)
Date: Sat May 12 2001 - 12:37:04 MDT
(Let me clarify one thing. I was hinting that we would need quantum
computers for self-awareness, but I'm not ruling out conventional analog
systems either. It's just that quantum is so much faster and smaller, and
uses so much less energy, that if we do need more than digital, it's
probably the way to go.)
Looks like there are several issues here.
1) What can analog do that digital can't?
Analog can store infinitely complex values in a single unit. The value e
could (from what I've read) be stored in full precision in a quantum
computer. Digital systems are always limited by their resources (number of
flops, bits of storage, etc.). Sure, a digital computer can get very close
to e, but it can never touch the value (yes, it could store it represented
as an infinite series or as an expression, or whatever, but the value
itself can't be touched). This is just one example, and probably not the
best, but if it gets across the point that there is a set of capabilities
analog possesses that digital doesn't, then it serves its purpose. (Yes, I
know that all chips are analog at heart, but they have been designed to
toss out the analog fuzziness to make nice and shiny 0's and 1's as far as
the logic goes.)
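To make the e example concrete, here's a minimal Python sketch (my own
illustration, not tied to any particular machine) showing that a 64-bit
float only ever holds an exact rational number standing in for e:

```python
import math
from fractions import Fraction

# math.e is the nearest 64-bit float to e: an exact rational number,
# not e itself (e is irrational, so no finite bit pattern can equal it).
stored = Fraction(math.e)      # the exact rational value the machine holds
print(stored)                  # a finite ratio of two integers

# A sharper rational approximation of e from the series sum(1/n!):
partial = sum(Fraction(1, math.factorial(n)) for n in range(30))

# The stored float and the series disagree; the gap is tiny
# (around the limit of double precision) but never zero.
print(float(abs(partial - stored)))
```

Adding more bits or more series terms shrinks the gap, but any digital
representation stops after finitely many bits, so the gap never closes.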
2) If the answer to 1 is not "nothing," then: do we need what analog can do
to build a truly self-aware AI (not just one that models self-awareness)?
This is the part where gut feeling comes in (it is not meant to be a
rigorous logical proof, just something to show that I have good reasons to
act on my feelings).
Probably the reasoning behind the leap in logic is: I am analog. I am
self-aware (you don't need to believe this). I suspect that others out
there are self-aware because they behave similarly to me, and they all
claim to be self-aware. So, inductively, I can be pretty sure (99.999?%)
that other 'Analogs' are self-aware. Of course, this in itself doesn't rule
out digitals being self-aware, but the test is different for them. I would
NOT expect a digital system that is self-aware to behave similarly to
me. I do not accept the Turing test (I've seen too many people fooled by
programs like Eliza).
In short: I know (personally) that analog works. I also believe that
analog can do things digital can't. I guess my leap is that I suspect the
things analog can do are partly responsible for self-awareness.
3) Would we know that it's self-aware?
I doubt it. At least not until we've been uploaded, or come into intimate
contact with it. Then again, it may be plainly obvious, but as far as
coming up with an external test for true self-awareness (not modeled
self-awareness) goes, I'm pretty skeptical.
Note: there are programs out there (some of them viruses) that can modify
their own code; I don't see this as sufficient for self-awareness.
4) (Practicality) Do we really need true self-awareness in the AI to end
the human era?
If an analog system could store a statement about the world (a bit of
knowledge) while using fewer resources than a digital system would need to
model it, then why not use the analog system? Analog is faster, and more
'true' (accurate). Wouldn't we want to take the shortest path to
self-awareness?
Out of curiosity, what percentage of the people in this group can read
code? (In case I want to use code as an example.)
PS: if this is one of those recurring topics that mailing lists are
always dealing with, then it's probably worth talking about. At least from
the memes' perspective.
At 09:23 AM 5/12/2001 -0400, you wrote:
> > I have a feeling that in order to create an AI that is truly self aware,
> > it's going to need some hardware components that work at a fundamentally
> > different level than current CPUs. A quantum leap you might say. ;) This
> > would allow the richness of thought that I find impossible in a purely
> > digital system.
>
> How do you justify this statement?
> I would say that the burden is on you to demonstrate, or at least,
> what exactly isn't rich about digital logic, and it might even be useful
> to help us understand how an alternative system would work in a way that
> can't be. Even to ask these questions I'm stretching, as there are
> already at least several programs which are self aware.
> > That's all for now.
> > -Aaron
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT