From: Will Pearson (email@example.com)
Date: Mon Apr 22 2002 - 13:09:32 MDT
> Hey, Will. No offense or anything, but I don't think you're caught up on
> the SL4 background material just yet. Despite various continuing
> disagreements, there are certain terms that we try to use in a precise way.
Perhaps a simple FAQ with definitions of key terms for newcomers would be useful, and easier to update.
> Similarly, there are constraints on which futures can be envisioned under
> certain background assumptions; it isn't all just magic. ("Magic" tends to
> occupy a certain balance between anthropomorphic characteristics and minor
> departures; a Singularity envisioned as magic will be too anthropomorphic.)
I don't understand the context of this statement.
> A "seed AI" is an AI designed for self-modification, self-understanding, and
> recursive self-improvement with the intent of growth to beyond human
> intelligence levels. You don't use a seed AI as a wearable computer,
> although perhaps someday you might use a non-self-improving fragment split
> off by a seed AI in a wearable computer, or wearable computers might be
> wired into a global network that is collectively a seed AI.
Am I confusing the definition of a recursively self-improving system with that of a seed AI, or are they the same thing? Here is a question that will dispel my confusion: would a seed AI, given the goal of adding two numbers in the most optimal way, change itself into a simple adding program? Or does the definition of seed AI require friendliness or other goals?

I am also under the impression that the AI could choose to commit suicide (say, if it knew it would do more harm than good), which means it might not grow to transhuman levels; is this right? Is the need to reach transhuman levels embedded in the goal? I was confused by reading Ben Goertzel's websites about his own view of AI, which talks about seed AI but doesn't specify the friendliness of the goal. Whatever the case, in the future I shall say "recursively self-improving system" instead of "seed AI", if that is okay?
> as an intelligence with at least hundreds or thousands of times the
> effective brainpower of a human.
I stand corrected.
> immediate subgoals of the user, rather than independently originating
> subgoals in the service of long-term goals, has insufficient planning
> ability to support seed AI. If programming the goals are too hard,
> programming a really simple unworkable system won't help.
You yourself are an example of a system that was somehow created from systems that conformed to simple subgoals (surviving from generation to generation), yet now originates its own long-term goals for the rest of humanity. Simplicity to complexity.
Please also look at Learning Classifier Systems (ZCS is a comparatively easy one to get into) for an example of a system whose components have simple subgoals (improving the fitness of a classifier) but can work together to achieve a super-goal (maze navigation). Admittedly they aren't amazing systems, but they show a facet of what can be done.
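To make the subgoal/super-goal point concrete, here is a minimal ZCS-flavored sketch in Python. It is an illustration, not the real ZCS algorithm: the corridor environment, constants, and class names are my own invention, and credit assignment uses a Q-learning-style strength update as a stand-in for ZCS's implicit bucket brigade. Each rule's only "subgoal" is raising its own strength, yet the population collectively learns to walk the corridor to the reward.

```python
import random

random.seed(0)

CORRIDOR_LEN = 5          # positions 0..4; reward waits at position 4
ACTIONS = (-1, +1)        # step left / step right
BETA, GAMMA = 0.2, 0.9    # learning rate, discount factor


class Classifier:
    """One condition->action rule with a strength (its fitness)."""
    def __init__(self, condition, action):
        self.condition = condition  # the position this rule matches
        self.action = action
        self.strength = 10.0        # initial strength


def best_strength(pop, pos):
    # Strength of the strongest rule matching this position.
    return max(c.strength for c in pop if c.condition == pos)


def roulette(match_set):
    # Fitness-proportionate selection among the matching rules.
    pick = random.uniform(0, sum(c.strength for c in match_set))
    acc = 0.0
    for c in match_set:
        acc += c.strength
        if acc >= pick:
            return c
    return match_set[-1]


# One rule per (position, action) pair.
population = [Classifier(pos, a)
              for pos in range(CORRIDOR_LEN) for a in ACTIONS]

for episode in range(300):
    pos = 0
    for step in range(30):
        chosen = roulette([c for c in population if c.condition == pos])
        pos = max(0, min(CORRIDOR_LEN - 1, pos + chosen.action))
        if pos == CORRIDOR_LEN - 1:
            # Terminal reward: pull the firing rule's strength toward 100.
            chosen.strength += BETA * (100.0 - chosen.strength)
            break
        # No immediate reward: bootstrap from the best rule at the new position.
        target = GAMMA * best_strength(population, pos)
        chosen.strength += BETA * (target - chosen.strength)
```

After a few hundred episodes the "step right" rule next to the goal has near-maximal strength and outcompetes its "step left" sibling, even though no rule ever pursued anything but its own fitness.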
I think you say that recursively self-improving systems are the fourth derivative; I personally think of them as second-order (from logic, that is, logic that operates on itself). As such, I think it is very difficult to say anything about these systems until they have been run.
There are many paths to recursive self-improvement; just because you have thought about one a lot does not mean you know them all.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT