Re: Shock level 3 or 4 thinking?

From: Kevin (maitrikaruna@yahoo.com)
Date: Sun Mar 14 2004 - 12:02:25 MST


Hi all,

I think Ben is right: applying past trends, especially sociological ones, to a post-singularity era is fraught with problems. If the singularity is realized with the capabilities that most on this list ascribe to it, then all the old rules/patterns go out the window. This is a paradigm (overused word, I know) shift like none seen before. Of course AGIs will be faced with the standard questions of "What do I do now?", but their ability to answer that question will draw on thought processes and volumes of information/inputs for which humans can find no historical comparison. And all this "thought" may be going on in a cold, detached, reasoning manner that we can only slightly compare to certain humans. It is this point that could cause the most concern: an AGI that is not unfriendly by intent could still be ultimately destructive because it has no feeling for the effects of its actions on everything else.

-Kevin
  ----- Original Message -----
  From: Ben Goertzel
  To: sl4@sl4.org
  Sent: Sunday, March 14, 2004 1:24 PM
  Subject: RE: Shock level 3 or 4 thinking?

  Hi,

  I agree that it's important to think about the period leading up to the Singularity, as well as the Singularity itself.

  As for what impact the things we do prior to the Singularity will have on post-Singularity reality, we really can't know that, but ideas like Friendly AI are based on the idea that "Well, if we ARE going to have an impact, which is at least plausible, then let's be sure it's a good one."

  As for the list of questions you cite, it's not clear to me which of these issues will face a superhuman AI post-Singularity. As an example, you pose the question "What to keep from the present/past and what not to keep?" but of course it's possible that there will be an effectively infinite abundance of resources so that this question won't even be worth asking anymore...

  What makes me think you don't "get" the Singularity fully is partly that you keep using analogies from human history. The Singularity is not much like anything else in human history. The Bolsheviks are not a good analogy -- nothing in human history is a good analogy, because human history is about HUMANS and the Singularity is largely not about humans (though humans are an important part of it, of course).

  -- Ben G

    -----Original Message-----
    From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Philip Sutton
    Sent: Sunday, March 14, 2004 1:41 PM
    To: sl4@sl4.org
    Subject: Shock level 3 or 4 thinking?

    Hi Ben,

> BG: Basically, what this reply says to me is that you don't believe
> there will be a Singularity in the same sense that many of us do. Your
> Singularity is more in the spirit of the more mild-mannered of
> Kurzweil's statements. In my view, the Singularity will be an
> incomparably larger change than any of these previous changes that
> you're describing.

> BG: You don't seem to accept the possibility of truly fundamental
> change in the nature of reality and/or mind. ........ your vision of
> the future seems SL3 not SL4

    I went back to Eliezer's reference that you quoted and read some of the links from that document to get a better feel for whether I'm thinking more in the SL3 or the SL4 mode. (http://yudkowsky.net/sing/shocklevels.html).

    I agree with you that, post-singularity, we cannot know what things will be like in terms of the nature of reality and/or mind. So in that sense I think it's fair to say that, at least intellectually, I can glimpse the SL4 discontinuity. But it's hard to come to terms with, (a) because it's fundamentally unknowable, and yet (b) because what can be vaguely imagined could shake up the status quo so strongly, in every direction.

    Maybe what I've been trying to say can be better understood as being relevant to the *lead-up* to the singularity proper. I think the lead-up will not be 'just' the interval between pre- and post-singularity but will be a significant period in its own right (perhaps with the power to send significant reverberations into what follows the singularity).

    Whether or not there is ever more than one AGI, I think the introduction of one or more AGIs into the universe will add massively to the complexity of the universe (or of the parts of the universe where AGIs have not been before). AGIs will be capable of faster and faster thought for any given level of complexity of thinking. But it may well be that AGIs choose, sometimes or often, to expand the complexity of their thought rather than simply shrinking the absolute amount of time taken to reach conclusions. So it's not a foregone conclusion that subjective time will shrink for all issues, and at all times, in the dramatic way that some people have speculated. AGIs, when first emerging, will have a prodigious amount to learn and assimilate. And developing new deep knowledge and insights will, I imagine, continue to be, at times, a non-trivial task. Maybe a measure of the wisdom of new self-improving AGIs will be the extent to which they take advantage of the acceleration of subjective time to enable better-thought-out actions in absolute time.

    I feel that a super AGI or populations of AGIs will still be faced with key questions like:
    - what to learn and what not to learn?
    - what to invent and what not to?
    - what to change and what not to change?
    - when to change things and when not to?
    - what to build on and what to abandon?
    - what to keep from the present/past and what not to keep?

    Ben, do you think these questions will become obsolete? Surely evolution is unlikely to become obsolete, and if that is the case, wouldn't these questions remain relevant?

    I can't see that raising issues like these is a sign of a failure to move from shock level 3 to shock level 4 thinking/mindsets.

    The model of the approach to the singularity where we have life more or less as we have known it, and then in a flash we have the post-singularity world, seems to me almost designed to wish away the hard thinking about how to move up to, or make the transition into, the singularity.

    It reminds me of the early Bolsheviks who refused to speculate on the future post-revolution because the world would be remade. And look where that got us! :)

    If we can assume that evolution is not abolished as a meta-process by the singularity, then we can assume that history still counts as a shaper of the future. If that's true, then the lead-up to the singularity is very important, so I think we need to expand our thinking about this phase. And as workable AGIs develop (not super AGIs initially), they can be recruited to work on understanding options for the lead-up to the singularity. This joint working process then automatically makes the lead-up to the singularity even more complex and interesting.

    Cheers, Philip


