RE: early AGI apps

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Nov 09 2002 - 09:46:04 MST


Cliff,

Regarding your idea for an AGI application...

I think it's an interesting one, but a very hard one, suitable only for
AGIs at the near-human stage of general intelligence. Understanding the
pragmatics of what humans are trying to do at their PCs is a tough
problem, requiring sophisticated modeling of human psychology....

There's only one way I can envision your product being doable in the short
term, and that's if it were installed on a HUGE number of computers all at
once, so that there was a vast amount of statistical data from which it
could make inferences about what a given user might be trying to do. In
other words, maybe it could work in the short term if M$ were to integrate
it into Windows ;-)

I think there's a limited subset of your proposal that is more feasible
in the very short term, though, and that's AI-driven system administration.
Stephanie Forrest and her students at UNM have done some interesting work on
"computer immune systems", which are simple statistical narrow-AI-ish
systems for intrusion detection. But there's a huge amount more that can be
done along these lines. The initial market would be companies owning server
farms, which are very complex to administer, rather than individual PC
owners....
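
To give a concrete flavor of the Forrest-style approach: the idea is
to build a database of short system-call sequences observed during
normal operation, then flag traces containing unseen sequences. A toy
sketch in Python (purely illustrative -- not their actual code):

  def ngrams(trace, k=6):
      # all length-k windows of a system-call trace
      return {tuple(trace[i:i+k]) for i in range(len(trace) - k + 1)}

  # "self": sequences seen while the machine behaves normally
  normal = ["open", "read", "mmap", "mmap", "open", "read", "close"] * 50
  self_db = ngrams(normal)

  def anomaly_score(trace, k=6):
      seqs = ngrams(trace, k)
      return len(seqs - self_db) / max(len(seqs), 1)

  # a trace with calls never seen in that order scores near 1.0
  suspect = ["open", "read", "exec", "socket", "write", "read", "close"]
  print(anomaly_score(suspect))

A real system-administration AI would go well beyond this, of course --
diagnosing failures and suggesting fixes, not just raising alarms.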

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of
> Cliff Stabbert
> Sent: Friday, November 08, 2002 11:53 PM
> To: sl4@sl4.org
> Subject: re: early AGI apps
>
>
> I wrote:
> CS> I think the Novamente approach -- more narrowly focused commercial
> CS> efforts -- is currently a very good approach to funding. I do have
> CS> a long-simmering but vague-in-details idea for some AI tech that
> CS> Novamente at its current stage may or may not be suited to, which,
> CS> if implemented as a software package, could be quite popular.
>
> Ben Goertzel:
> BG> Well, feel free to voice the details, either on the list or via
> BG> private e-mail, ben@goertzel.org ;)
>
> Peter Voss:
> > I'm interested in any and all potential early applications for
> > AGI - both to evaluate the performance of our a2i2 system, and
> > for possible implementation.
>
> I don't have much time at the moment to sketch this in greater detail,
> and my own limited notes on it are currently unavailable. So what
> follows is *very* vague, brief, and blue-sky...but it may spark some
> ideas.
>
> The main thrust is the use of AI in GUIs. There are two main aspects
> to that -- modeling the computer and modeling the user -- as well as
> some other features. As a product, I envision something that sits "on
> top" of the OS, or encapsulates it, and is always the "outermost"
> control the user can speak to -- the ultimate arbiter, the
> presidential hotline, and where the user communicates *about* what the
> computer does. (Another, somewhat less ambitious, implementation
> would be specific to and sit on top of a software package such as
> Microsoft Office).
>
> -------------
>
> I'm sure all of us have run into problems like the following: I was
> installing a piece of software under Windows. I have my PC set up
> with all apps on E:, so I chose E: as the install drive. It wouldn't
> let me proceed because, it said, the E: drive was full. I checked
> under Explorer -- and as I thought, I had plenty of space.
>
> So what was going on? I'm not certain -- it may have been that the
> installation program was looking at total minus used space and going
> by the 2GB value for total space (this was under Win95); it may have
> been using variables that were too small and getting overflow...it may
> have been any number of glitches.
>
> But it got me thinking -- we have two different programs (the
> installer and Explorer) telling me two different things. We have an
> "out of space" dialog box that some programmer went to the trouble of
> building, and the out-of-space check they coded for it...
>
> When you work with a modern GUI, it's really, in a sense, like
> having a dialogue. You tell the computer to do things, and it comes
> back with "dialog boxes" ("disk too full", etc.).
>
> The problem is that you are having a dialogue with a schizophrenic
> amnesiac, and what is more, one who communicates in a very inflexible,
> rigid, repetitive way.
>
> As a user, I want a single, coherent, consistent conversation that
> builds -- from question to question and day to day. A conversation
> that makes sense.
>
> -------------
>
> In essence what I want as a user is that the computer "understands"
> what it is telling me. Well, what does that mean? That it must have
> some internal representation of itself, that symbols such as "hard
> drive" and "free space" must be not just strings of characters, but
> have some semantic *value* -- some *referent*.
>
> I.e., the software should have an internal model of the computer in
> which terms such as the above are meaningfully linked to actual values
> from that model.
>
> Such a model, with some associated pseudo-natural-language processing,
> may be the "simpler" aspect of the software -- perhaps amenable to
> established technologies (DBs, natural query languages, expert
> systems, etc.).
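>
> To make "referent" concrete: a toy sketch (Python, illustrative
> only) where a term like "free space" is bound to a live probe of
> the machine rather than to a string in some dialog's source code:
>
>   import shutil
>
>   # one model of the machine, which every dialog must consult
>   MODEL = {
>       "free space":  lambda path: shutil.disk_usage(path).free,
>       "total space": lambda path: shutil.disk_usage(path).total,
>   }
>
>   def ask(term, path="/"):
>       return MODEL[term](path)
>
>   print(ask("free space"))   # the installer and Explorer now agree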
>
> -------------
>
> When I tell the computer to save my file and the disk *is* full, it
> should come up with a more meaningful set of choices -- it should have
> some basic understanding of why I want to save a file and, e.g., offer
> me: do you want to clear some space, or save to D: instead?
>
> Now, I have seen the above -- a dialog box asking me whether I want to
> clear space -- but it, again, was something some programmer separately
> designed and built code for.
>
> As a user, I want the computer to "know" that I want to save files,
> that if space is unavailable in one place I may want to save it
> elsewhere, or free space up.
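>
> A sketch of what that could look like (Python; "ask_user" is a
> made-up stand-in for a real dialog):
>
>   import errno
>
>   def ask_user(prompt, options):
>       # hypothetical UI hook -- here just a console menu
>       print(prompt)
>       for i, opt in enumerate(options):
>           print(" %d) %s" % (i, opt))
>       return options[int(input("choice: "))]
>
>   def save(data, path, fallback="D:\\copy.doc"):
>       try:
>           with open(path, "wb") as f:
>               f.write(data)
>       except OSError as e:
>           if e.errno != errno.ENOSPC:
>               raise
>           # "disk full" is understood, so alternatives can be offered
>           choice = ask_user("No room on %s. What shall I do?" % path,
>                             ["free up some space", "save to " + fallback])
>           if choice.startswith("save to"):
>               save(data, fallback)
>           else:
>               pass  # hand off to a space-clearing assistant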
>
> More fundamentally, I want to be able to teach it such things.
>
> -------------
>
> This implies the software needs to have an internal evolvable model of
> me, the user, as well.
>
> In the context of Microsoft Office, it would have some ideas of "what
> I do" (create new documents and edit existing ones, type and format
> stuff, print them, save them, file them, search them). It should
> learn and suggest things on its own (create "wizards" on the fly --
> I type "Dear George" and it pops up with "another letter to George,
> 'ey? Usual headings? File under 'Presidential correspondence'?")
>
> The model should evolve both through its own (user-confirmed) guesses
> and through explicit user direction. The software should always have available
> -- no matter what the user is doing -- a way for the user to interrupt
> and tell it to "go meta". By this I mean a way for the user to tell
> the software "Now watch what I'm doing...whenever I do X, I want you
> to do Y and Z." or "See that dialog that popped up? Always click "No"
> on that one."
>
> Here's where the model gets more complex, because it needs to deal
> with analogies. "See how I'm taking each of the sentences from this
> paragraph and making them bullet points? Do it for the rest of the
> paragraph." is relatively simple. "Do the same thing to this line
> chart." is more complex (what is "the same thing"? Does each series
> (line) get split into its own chart? Each year?). This is also the
> point in the model where an ongoing dialogue becomes most important --
> being able to correct the assumptions and guesses the software makes,
> such corrections getting folded back into the model.
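>
> The easy end of that spectrum is almost programming-by-example. A toy
> sketch (Python, illustrative): watch one demonstrated edit, induce a
> per-item transform, and apply it to the rest.
>
>   def learn_transform(before, after):
>       # crude induction: what was added around the original item?
>       i = after.index(before)
>       prefix, suffix = after[:i], after[i + len(before):]
>       return lambda item: prefix + item + suffix
>
>   t = learn_transform("First sentence.", "* First sentence.")
>   rest = ["Second sentence.", "Third sentence."]
>   print([t(s) for s in rest])   # ['* Second sentence.', ...]
>
> The line-chart case resists this kind of trick precisely because
> "the same thing" is no longer a string edit but a structural analogy.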
>
> -------------
>
> The language used to describe the internal structures of the computer
> and its software, actions associated with them, etc. should allow for
> sharing between users. In other words, once one user has "told" the
> system about what PhotoShop does (via interactive querying by the
> software, details too long to go into), others wouldn't need to.
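>
> That is, the knowledge ends up as plain data. A sketch of what one
> shareable entry might look like (schema invented for illustration):
>
>   import json
>
>   photoshop = {
>       "app": "PhotoShop",
>       "actions": {
>           "resize image": {"menu": ["Image", "Image Size..."],
>                            "params": ["width", "height"]},
>       },
>   }
>   print(json.dumps(photoshop))   # ship this to the next user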
>
> -------------
>
> So, vague and scatter-shot. There's a lot left out of the above and
> when I can dig up all my notes and have the time, I might put together a
> more coherent and detailed presentation of this; the AGI list is
> probably a more suitable venue.
>
> The reason I see this as a viable application for (early) AGI is
> that on the one hand it requires limited, quantifiable, shareable
> knowledge (domain expertise about software combined with the ability
> to intercept both user events and OS API calls), and on the other,
> it provides an "evolution-driving" environment: the user's requirements.
>
> There is IMO a widespread need for such tech in user interfaces, and
> thus plenty of people who would want to use it. And possibly, if this
> or some variant on it were structured in the right way (dealing with
> the browsing experience, say), a huge number of users could be
> leveraged to drastically accelerate the evolution of the software.
>
>
> --
> Cliff
>


