From: Josh Cryer (firstname.lastname@example.org)
Date: Tue Jul 06 2004 - 05:44:53 MDT
On Mon, 05 Jul 2004 09:57:34 -0700, fudley <email@example.com> wrote:
> On Jul 1, 2004 Josh Cryer wrote:
> > I do not believe that there exist useful things
> > that an AI could understand
> > which a sufficiently learned human could not.
> The corollary to that is that humans have reached the absolute pinnacle
> of possible perfection, a rather depressing thought when you think about
> it. But I don't believe it's true. Already computers understand Chess
> better than the best human players and people are not getting any
> smarter, in a few years when machines are a couple of trillion times
> faster..., well, I'll leave it to your imagination.
> John K Clark
I do not believe that is a corollary at all. Mine is a general
statement about usefulness and understanding. I don't think of
understanding as the ability to know every part of a given field,
merely the abstractions which fit together to make the useful whole.
We don't need to recalculate pi every time we use it in a
transformation, because we have already calculated it. This is why I
say "useful": I don't think it is "useful" to calculate pi to some
absurd length every time you use it.
A super AI may, just for the heck of it, but even then I find that
unlikely (it would be a waste of resources, and even the dumbest AI
knows what a cache is).
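The cache point above can be made concrete with a small sketch. This is just an illustration, not anything from the original thread: the function and its precomputed digit string are hypothetical stand-ins for an expensive computation, memoized with Python's standard `functools.lru_cache` so the work happens once and later uses are free.

```python
from functools import lru_cache

calls = 0  # counts how many times the "expensive" work actually runs

@lru_cache(maxsize=None)
def digits_of_pi(n):
    """Stand-in for an expensive computation: the first n digits of pi.

    Here we merely slice a precomputed string; a real implementation
    might run a spigot algorithm. The point is the cache, not the math.
    """
    global calls
    calls += 1
    return "3.14159265358979323846"[: n + 2]

digits_of_pi(10)  # computed once
digits_of_pi(10)  # served from the cache; the expensive work is not repeated
```

After both calls, `calls` is still 1: the second lookup never re-runs the body, which is the sense in which "even the dumbest AI knows what a cache is."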
Abstractions are what have brought us this far, and they can take us
much further. Even if a super AI has created some extremely intricate
thing, I believe that abstractions can make that thing tangible. If it
cannot be made tangible, I believe it probably wouldn't be useful (say
it creates some absurdly complex function which just... spits out
output nobody can interpret).
A super AI might say "here are the algorithms we use to manipulate
matter at the nanoscale, now go off and build a matter manipulator"
and I believe a human could understand it... eventually. Even if that
human has to break the process down into lots of little abstractions:
"here's a computer which runs a program which drives various chemical
reactions in a certain environment to manipulate matter for me."
I'd hate to say it, and I've been trying not to respond this way (I
personally don't have the time or energy for a big debate about this,
so I probably won't be responding further, though I'll be reading, of
course), but I don't think there's much of a difference between a
super AI unconsciously doing some task and a human using an extension
of self, say a computer, to achieve a similar task. This is partly why
I lean more toward Kurzweil on how the Singularity is to come about,
though the Singularity Institute does offer an interesting approach.
Just my very humble opinion, though.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT