From: Ben Goertzel (email@example.com)
Date: Sun Mar 03 2002 - 10:45:03 MST
> I have thought on this problem a little - the problem of how to
> monitor/track/debug a process distributed across multiple independent
> machines. Most of my 'implementable' ideas (as opposed to 'crazy' ideas)
> have been based on the 'transaction' paradigm. Processes churn away in
> multiple threads, multiple machines, at different speeds, and
> when they have
> a result of some kind, they 'commit' that result to a central
> location. At
> this point the result becomes visible to any other process that cares to
> look, and also to the operating console - or debug module - whatever it is
> you want to call it.
> Your reply didn't say how you solved this problem... did you? Or haven't
> you got there yet?
We did *not* solve the problem, though we had lots of interesting ideas about
it. We decided, for now, to *ignore* the problem by making a system
sufficiently space- and time-efficient that we could run all the components
together on *one* machine. However, barring some radical near-term hardware
innovation, this only defers the problem, it doesn't resolve it.
The transaction paradigm was our most recent and probably best approach. The
design we had for Webmind AI Engine 1.0 (a system never to be built now)
included:
--> A centralized repository called the MindDB
--> A collection of MindServers, carrying out particular mixes of AI
processes, periodically committing data to (and gathering data from) the
MindDB
Two complications, though:
a) it turns out some MindServers really need to be distributed themselves
b) committing *every* interim result achieved in a MindServer isn't
necessarily realistic performance-wise, so one needs to get artful in
figuring out how often each MindServer should commit
The MindDB had many uses besides monitoring and debugging in this
architecture, of course.
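To make the pattern concrete, here is a toy sketch of the commit idea, not
actual Webmind code: the names MindDB and MindServer come from the description
above, but everything else (the lock-protected dictionary, the dummy
accumulator standing in for AI work, the `commit_every` knob addressing point
b) is invented for illustration:

```python
import threading

class MindDB:
    """Toy stand-in for the centralized repository: a thread-safe
    key/value store that any other process -- or a debug console --
    can inspect at any time."""
    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}

    def commit(self, key, value):
        with self._lock:
            self._store[key] = value

    def snapshot(self):
        with self._lock:
            return dict(self._store)

class MindServer(threading.Thread):
    """Toy worker: churns away at its own speed and commits interim
    results only every `commit_every` steps, since committing every
    result would be unrealistic performance-wise."""
    def __init__(self, label, db, steps, commit_every):
        super().__init__()
        self.label, self.db = label, db
        self.steps, self.commit_every = steps, commit_every
        self.result = 0

    def run(self):
        for step in range(1, self.steps + 1):
            self.result += step  # dummy accumulator standing in for AI work
            if step % self.commit_every == 0:
                self.db.commit(self.label, (step, self.result))
        self.db.commit(self.label, (self.steps, self.result))  # final commit

db = MindDB()
servers = [MindServer("reasoner", db, steps=100, commit_every=10),
           MindServer("perceiver", db, steps=50, commit_every=5)]
for s in servers:
    s.start()
for s in servers:
    s.join()
print(db.snapshot())
```

A real debug module would poll `snapshot()` while the servers run; here we
just read it after they finish.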
We will likely do something like this with Novamente when the time comes (a
year from now?).
> This jibes with my own experience building
> boring-old-business-applications, with my study of others' software
> solutions to difficult problems, and with personal experimentation.
> My current guesstimate as to how CPU cycles will be allocated to run a
> General Intelligence is:
> 40% - What ve's thinking about
>       - Keeping track of Data - Basically: Database services
> 40% - How ve's thinking about it
>       - Keeping track of Process - Messaging/IPC, O/S, Load Balancing
> 20% - Actual thinking
>       - "AI" algorithms - Sensory Modalities, Concept Manipulation,
>         Goals, Predictions, Decisions, Actions.
> It might seem strange/inefficient/just-plain-nuts to expend 5 units of
> 'effort' in order to make 1 unit of 'progress'... but I cannot see any way
> around it. <---(Super Geniuses please insert 'Way Around It' here)
> Ben, how closely does this correspond with your experience so far?
Well, you left out one interesting category, which is *adaptively optimizing
the parameters of AI algorithms* ...
I suppose my guess is a little less pessimistic than yours, more like
40% - actual thinking
10% - optimizing parameters of AI algorithms
20% - keeping track of data
30% - managing processing
But this presumes a very specialized, well-tuned system for doing the data
and processing management....
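The arithmetic behind the two guesses is easy to check. The category splits
below are copied straight from the estimates above; "thinking" excludes the
parameter-optimization slice in my estimate:

```python
# Effort-per-progress implied by each CPU-budget guess above.
estimates = {
    "correspondent": {"thinking": 20, "data": 40, "process": 40, "tuning": 0},
    "Goertzel":      {"thinking": 40, "data": 20, "process": 30, "tuning": 10},
}

for who, e in estimates.items():
    assert sum(e.values()) == 100          # sanity check: budgets total 100%
    ratio = 100 / e["thinking"]            # total effort per unit of thinking
    print(f"{who}: {ratio:.1f} units of effort per unit of progress")
```

The first budget gives the "5 units of effort for 1 unit of progress" figure
quoted above; mine works out to 2.5.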
Anyway, I think your basic intuition is realistic. It's depressing at first
but in the end, maybe that's just what mind is like. A lot of what goes on
in the brain is not directly "thinking" either.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT