From: Ben Goertzel (email@example.com)
Date: Sun Jun 30 2002 - 11:30:46 MDT
> > However, by intelligently restricting the grammar of outgoing
> > requests, one can go very far in this direction.
> I do not see how it helps. You can block known exploit patterns in real
> time, but you can't meaningfully restrict the requests that happen to
> remotely trigger a hitherto unknown vulnerability (though you can probably
> detect a blatant brute-force search for buffer overruns -- stealth scans,
> especially distributed ones, fall completely under your radar). As soon as
> a single vulnerability is found, the entire class of systems (even static
> diversity is pretty much nonexistent in the current landscape and software
> model) is a few steps away from being under your control. With the current
> state of the art, even a moderately smart but distributed attacker can
> take over >90% of all online nodes without trying hard.
To me, your arguments are reasons why no security is perfect, but not
reasons why measures that "help" are impossible.
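The "restricted grammar of outgoing requests" idea mentioned above can be sketched as an allowlist filter on the sandbox's outbound traffic. This is a purely illustrative sketch; the function name, the request format, and the particular pattern are my assumptions, not anything from Novamente's actual design.

```python
import re

# Illustrative allowlist grammar: only GET requests of a narrow, fixed
# shape may leave the sandbox. Anything outside this grammar -- including
# oversized or strangely encoded parameters that might probe for buffer
# overruns -- is simply dropped. (Hypothetical pattern for illustration.)
ALLOWED_REQUEST = re.compile(r"GET /(search|page)\?q=[a-z0-9+\-]{1,64}")

def outbound_filter(request_line: str) -> bool:
    """Permit only requests that match the restricted grammar exactly."""
    return ALLOWED_REQUEST.fullmatch(request_line) is not None
```

As the reply above notes, this cannot stop a request that happens to trigger an unknown vulnerability while staying inside the grammar; it only shrinks the attack surface the AI can express.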
> > The Net is a tremendous information resource for a growing AGI
> > baby.... Not using it at all, is simply not an option.
> I notice you dismissed most basic security measures I mentioned as way
> premature. While I agree that they're currently ridiculous, clearly there
> is a threshold where they need to be engaged. Do you have a guard in
> place that triggers at a threshold on some sum of behaviour observables,
> apart from what your intuition tells you?
Of course, it is our intuitions that define the sum of behavior observables.
As discussed extensively in my e-mails to Brian, we have such a "guard"
system designed and will implement and test it prior to running the system
as an autonomous, goal-directed AI [though we may never get there if I
continue to spend so much time sending e-mails on this list!! aha -- now I
see your strategy!!]
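A guard that "triggers on the threshold of some sum of behaviour observables," as asked about above, could look something like the following minimal sketch. The class name, the rolling-window design, and the threshold value are illustrative assumptions; the actual guard system discussed with Brian is not described in this message.

```python
from collections import deque

class BehaviorGuard:
    """Toy guard: trip when the sum of recent observable scores
    exceeds a fixed threshold. (Hypothetical design for illustration.)"""

    def __init__(self, threshold: float, window: int = 100):
        self.threshold = threshold
        # Rolling window so old behavior ages out of the sum.
        self.scores = deque(maxlen=window)

    def observe(self, score: float) -> bool:
        """Record one behavior-observable score; return True if tripped."""
        self.scores.append(score)
        return sum(self.scores) > self.threshold
```

The point of making the trigger explicit, rather than intuitive, is that the threshold and the set of observables are then auditable and testable before the system runs autonomously.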
> All current search engines maintain a large fraction of the web in their
> cache. I think it should be easy to arrange to have an air-gapped AI
> reading a large fraction of that. Google has been known to try strange
> things in R&D. Clearly there's a tremendous market to have e.g. a natural
> language interface finding facts in iterative user sessions.
> Have you tried talking to them? This, of course, also/especially applies
> to Cyc.
In fact, I have talked to people at several major search engines recently,
though not about this issue -- rather, about the possibility of improving
their search performance with some relatively simple AI tech.
At the present time, none of them have any serious interest in AI
technology, nor would they agree to have an experimental AI system interface
with their DB's.
Also, the search engines don't always retain the full text of pages they
index; in many cases they just retain an 'index' of the page...
This kind of partnership is of great interest to me, but we'll have to
demonstrate that Novamente can somehow be useful to *them* before they'll be
interested in such a partnership. This means we'll have to have a Novamente
that can do something useful with human natural language. Based on our
teaching programme, this means it will occur *after* the system has a modest
level of infrahuman AGI -- because we don't plan to force-feed it English
language knowledge, but rather to *teach* it English...
> > If a diverse committee of transhumanist-minded individuals agreed that
> > going ahead with a Novamente-launched singularity was a very bad idea,
> > then I would not do it.
> Fair enough. Notice that transhumanists are self-selected. If you would
> consult a committee of Singularitarians who believe that the Singularity
> is inherently good, whether we people make it or not, then your answer is
> entirely predictable. It is very easy to engineer the outcome towards
> what you want to hear by jiggling the composition and applying criteria
> for what constitutes an acceptable and unacceptable member.
I actually don't know anyone who thinks the Singularity should be pushed
ahead at full speed even if it seems likely to extinguish humanity. I think
this hypothesized guy may be one of those "straw men" you read about
sometimes ;)
> A simplistic
> picture: longterm evolution is intrinsically unknowable. Cumulating
> probabilities is an extremely naive model.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT