Re: SIAI: Why We Exist and Our Short-Term Research Program

From: Alexey Turchin (avturchin@mail.ru)
Date: Wed Aug 01 2007 - 07:28:41 MDT


I am really inspired by the work of SIAI.
I would like to suggest that I represent your institute in the Russian Federation. I have already translated (with Eliezer's permission) several of his works into Russian and have published them widely on the Net; more than 1,000 people have read them. We have several AGI projects here, but unfortunately they do not understand the importance of Friendliness. I have contacted some of these projects and raised the issue with them.

Here are the Russian translations:

E. Yudkowsky,
"Cognitive Biases Potentially Affecting Judgment of Global Risks."
http://www.proza.ru/texts/2007/03/08-62.html

E. Yudkowsky,
"Artificial Intelligence as a Positive and Negative Factor in Global Risk."
http://www.proza.ru/texts/2007/03/22-285.html

Eliezer Yudkowsky,
"Staring into the Singularity."
http://www.proza.ru/texts/2007/07/08-42.html

E. Yudkowsky,
"A Table of Critical Errors of Friendly AI."
http://www.proza.ru/texts/2007/07/09-228.html

"SIAI Recommendations on Creating Friendly AI."
http://www.proza.ru/texts/2007/07/13-272.html

-----Original Message-----
From: "Tyler Emerson" <emerson@intelligence.org>
To: sl4@sl4.org, wta-talk@transhumanism.org, extropy-chat@lists.extropy.org
Date: Tue, 31 Jul 2007 17:21:28 -0700
Subject: SIAI: Why We Exist and Our Short-Term Research Program

>
> Dear all:
>
> Here is a new overview of SIAI, focusing on why we think our mission
> is an important one, and where we're looking to focus research efforts
> in the short-term.
>
> http://www.intelligence.org/blog/2007/07/31/siai-why-we-exist-and-our-short-term-research-program/
>
> Let me know what you think: emerson@intelligence.org. I look forward to
> any thoughts you have.
>
> I hope you enjoy it!
>
> Best regards,
>
> --
> Tyler Emerson
> Executive Director
> Singularity Institute for Artificial Intelligence
> P.O. Box 50182, Palo Alto, CA 94303 USA
> 650-353-6063 | emerson@intelligence.org | singinst.org
>
> ***
>
> SIAI: Why We Exist and Our Short-Term Research Program
>
> Why SIAI Exists
>
> As the 21st century progresses, an increasing number of
> forward-thinking scientists and technologists are coming to the
> conclusion that this will be the century of AI: the century when human
> inventions exceed human beings in general intelligence. When exactly
> this will happen, no one knows for sure; Ray Kurzweil, for example,
> has estimated 2029.
>
> Of course, where the future is concerned, nothing is certain except
> surprise; but the mere fact that so many knowledgeable people (such as
> Stephen Hawking, Douglas Hofstadter, Bill Joy, and Martin Rees) take
> the near advent of advanced AI as a plausible possibility should
> serve as a "wake-up call" to anyone seriously concerned about the
> future of humanity.
>
> The potential of advanced AI, for good or evil, has been amply
> explored in science fiction literature and cinema. In the early 1990s,
> Vernor Vinge coined the term "technological singularity" to refer to
> the difficulty of predicting or understanding what will happen after
> the point at which humans are no longer the most intelligent and
> capable minds on Earth.
>
> It's easy to be passive about this issue. Technology is advancing, and
> none of us have the power to stop it. There are also plenty of more
> pressing issues around us, so there may seem no clear need to worry
> about something that may happen in 2029, or 2020, or 2050.
>
> Everyone involved with SIAI, however, believes that this kind of
> passivity is both shortsighted and dangerous. As a starting point,
> futuristic predictions are not always overoptimistic; sometimes they
> wind up overpessimistic instead. Jetsons-style spacecraft aren't here
> yet, but the Internet is, and hardly anyone foresaw that until it came
> about. It's also important to note that the 22 years until Kurzweil's
> 2029 prediction are not very long at all. Advanced AI is a big thing to
> understand, and it's also something that can be done either safely or
> unsafely. The time to start thinking very, very hard about how to do
> it safely is this year, not next year, or five years from now. The
> potential dangers of creating advanced AI the wrong way are very
> severe; and the potential rewards of creating it the right way are at
> least equally tremendous.
>
> Our core, long-term mission at the Singularity Institute is to figure
> out how to develop advanced AI safely to help bring about a world in
> which the vast potential benefits of this technology can be enjoyed by
> all of humanity. We want to create a rigorous scientific,
> mathematical, and engineering framework to guide the development of
> safe advanced AI.
>
> In our view, this is the most critical issue facing humanity. We are
> on the verge of creating minds exceeding our own. Unfortunately, the
> amount of societal resources presently going into figuring out how to
> do this right is absurdly tiny. SIAI is the only organization on the
> planet right now that's squarely focused on this incredibly important
> problem. By reading this, you are among the .01% who have even heard
> about this issue; and that estimate may be high.
>
> The Most Important Question Facing Humanity
>
> There are many ways to work toward figuring out how to develop
> advanced AI. Engineering specific AI systems is valuable, as it helps
> us gain experimental knowledge of semi-advanced AI systems, while
> they're still at an infra-human level. Studying the human brain and
> cognition is valuable, since, after all, at the present time the human
> mind is the only highly generally intelligent system we have at our
> disposal to study. Other disciplines, like ethical philosophy and
> mathematical decision theory, also have a lot to contribute.
>
> However, there is one question we feel is absolutely critical to the
> goal of figuring out how to develop advanced AI the right way, which
> remains essentially unexplored within academia and industry. SIAI's
> short-term research mission is to resolve this one question as
> thoroughly as possible. Compactly stated, the question is this:
>
> How can one make an AI system that modifies and improves itself, yet
> does not lose track of the top-level goals with which it was
> originally supplied?
>
> This question is simple to state but devilishly difficult to resolve;
> it's not even an easy thing to formalize in the language of modern
> mathematics and AI.
>
> To understand the significance of this question, think about this:
> What is the most likely way for humans to create an AI system that's a
> lot smarter than humans? The answer is: To create an AI system that's
> a little smarter than humans and ask it to figure out how to make
> itself a little bit smarter; and so on, and so on.
>
> This is not an original idea; it has been around, in various forms,
> since at least the 1930s. However, we are approaching a time when it
> can actually happen. The pressing question, then, is: If we endow the
> initial "a little smarter than humans" AI system with some benevolent
> goals (including helping humans rather than harming them), how do we
> know that the subsequent systems it creates, and the ones its creations
> create, and so on, will still embody these goals?
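>
> To make the shape of the problem concrete, here is a deliberately toy
> sketch in Python (the class, the function names, and the "goal" are
> illustrative assumptions of this overview, not a proposed solution; the
> interesting part is precisely that the behavioural spot-check below is
> nowhere near a real guarantee of goal preservation):
>
>     import random
>
>     def top_level_goal(x):
>         return -(x - 42.0) ** 2              # the fixed goal: get as close to 42 as possible
>
>     class ToyAI:
>         """A stand-in 'AI': a hill-climber whose step size is a crude proxy for capability."""
>         def __init__(self, step_size):
>             self.step_size = step_size
>
>         def act(self, start=0.0, steps=200):
>             x = start
>             for _ in range(steps):
>                 candidate = x + random.uniform(-self.step_size, self.step_size)
>                 if top_level_goal(candidate) > top_level_goal(x):
>                     x = candidate
>             return x
>
>         def design_successor(self):
>             """Propose a modified copy of itself (here, just a tweak to its own step size)."""
>             return ToyAI(self.step_size * random.uniform(0.5, 2.0))
>
>     def still_pursues_goal(ai):
>         """Behavioural spot-check; a real answer would need something far stronger."""
>         return top_level_goal(ai.act()) > -1.0
>
>     ai = ToyAI(step_size=5.0)
>     for generation in range(20):
>         successor = ai.design_successor()
>         if still_pursues_goal(successor):    # adopt a successor only if the check passes
>             ai = successor
>
> In this toy, "self-modification" touches only a single parameter and the
> check is a finite behavioural test; the real question is what replaces
> that check when the successor can rewrite its own goal representation.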
>
> The current focus of SIAI's Research Program is to move toward a
> rigorous understanding, and hopefully a clear resolution, of this
> question.
>
> SIAI's Short-Term Research Program
>
> We aim to resolve this crucial question by simultaneously proceeding
> on two fronts:
>
> 1. Experimentation with practical, contemporary AI systems that modify
> and improve their own source code.
> 2. Extension and refinement of mathematical tools to enable rigorous
> formal analysis of advanced self-improving AIs.
>
> These directions are not disjoint; they have great potential to
> cross-pollinate each other, just as theoretical and empirical science
> have done throughout the ages. On a technical level, part of the
> cross-pollination will occur because both our experimental and our
> theoretical work is grounded in probability theory: probabilistic AI
> and probabilistic mathematics.
>
> A Practical Project in Self-Modifying AI
>
> For the practical aspect of the SIAI Research Program, we intend to
> take the MOSES probabilistic evolutionary learning system, which
> exists in the public domain and was developed by Dr. Moshe Looks in
> his PhD work at Washington University in 2006, and deploy it
> self-referentially, in a manner that allows MOSES to improve its own
> learning methodology.
>
> MOSES is currently implemented in C++, and is configured to learn
> software programs that are expressed in a simple language called
> Combo. Deploying MOSES self-referentially will require the
> re-implementation of MOSES in Combo, and then the improvement of
> several aspects of MOSES's internal learning algorithms.
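>
> The specifics of MOSES and Combo are beyond the scope of this overview,
> but the underlying pattern (an optimizer whose own learning parameters
> sit inside the space it searches) can be conveyed by a deliberately
> simplified Python sketch. Everything below (the bit-string task, the
> names, the single evolving parameter) is an illustrative assumption of
> this overview, not MOSES's actual code or interface:
>
>     import random
>
>     def fitness(bits):
>         return sum(bits)                         # toy task: maximize the number of 1-bits
>
>     def evolve(pop_size=50, genome_len=32, generations=100):
>         # each individual carries (a candidate solution, its own mutation rate)
>         pop = [([random.randint(0, 1) for _ in range(genome_len)],
>                 random.uniform(0.01, 0.5)) for _ in range(pop_size)]
>         for _ in range(generations):
>             pop.sort(key=lambda ind: fitness(ind[0]), reverse=True)
>             survivors = pop[: pop_size // 2]
>             children = []
>             for bits, rate in survivors:
>                 new_rate = min(0.5, max(0.001, rate * random.uniform(0.8, 1.25)))
>                 new_bits = [b ^ (random.random() < new_rate) for b in bits]
>                 children.append((new_bits, new_rate))   # the search parameter evolves too
>             pop = survivors + children
>         return max(pop, key=lambda ind: fitness(ind[0]))
>
>     best_bits, best_rate = evolve()
>
> Self-referential MOSES is of course far more ambitious than this: what
> gets improved is the learning algorithm itself, expressed in Combo, not
> a single scalar search parameter.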
>
> Hitherto MOSES has proved useful for data mining, biological data
> analysis, and the control of simple embodied agents in virtual worlds.
> In a current project, Novamente LLC and Electric Sheep Company are
> using it to control a simple virtual agent acting in Second Life.
> Learning to improve MOSES will be the most difficult task yet posed to
> MOSES, but also the most interesting.
>
> Applying MOSES self-referentially will give us a fascinating concrete
> example of self-modifying AI software: far short of human-level
> general intelligence initially, but nevertheless with many lessons to
> teach us about the more ambitious self-modifying AIs that may be
> possible.
>
> Toward a Rigorous Theory of Self-Modifying AI
>
> Studying self-modification in the context of a particular contemporary
> AI algorithm such as MOSES is important, but ultimately it only takes
> you so far. One of the values of mathematics is that it lets you
> explore important issues in advance of actually observing them
> empirically. For instance, using mathematics, Einstein understood the
> nature of black holes long before they were ever empirically observed.
> Similarly, we may use mathematics to understand things about advanced
> self-modifying probabilistic AI systems, even before we have worked
> out the details of how to create them (and before we have sufficient
> hardware to run them).
>
> Theoretical computer scientists such as Marcus Hutter and Juergen
> Schmidhuber, in recent years, have developed a rigorous mathematical
> theory of artificial general intelligence (AGI). While this work is
> revolutionary, it has its limitations. Most of its conclusions apply
> only to AI systems that use a truly massive amount of computational
> resources, far more than we could ever assemble in physical reality.
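>
> To give a flavor of what that theory looks like (quoted here only as
> background, in roughly the notation of Hutter's published work), the
> AIXI model defines the agent's action at cycle k as
>
>     a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
>            \bigl[ r_k + \cdots + r_m \bigr]
>            \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
>
> where U is a universal Turing machine, q ranges over programs treated as
> candidate environments, \ell(q) is the length of q, the a's are actions,
> and the o's and r's are observations and rewards. The 2^{-\ell(q)}
> weights implement a Solomonoff-style prior over environments; evaluating
> this expression exactly is incomputable, which is the sense in which the
> theory demands more computation than physical reality can supply.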
>
> What needs to be done, in order to create a mathematical theory that
> is useful for studying the self-modifying AI systems we will build in
> the future, is to scale Hutter and Schmidhuber's theory down to deal
> with AI systems involving more plausible amounts of computational
> resources. This is far from an easy task, but it is a concrete
> mathematical task, and we have specific conjectures regarding how to
> approach it. The self-referential MOSES implementation, mentioned
> above, may serve as an important test case here: if a scaled-down
> mathematical theory of AGI is any good, it should be able to tell us
> something about self-referential MOSES.
>
> This sort of work is difficult, and the time required for success is
> hard to predict. However, we feel very strongly that this sort of
> foundational work, inspired by close collaboration with computational
> experiment, is the most likely route to achieving true understanding
> of the fundamental question posed above: How can one make an AI system
> that modifies and improves itself, yet does not lose track of the
> top-level goals with which it was originally supplied?
>
> Hiring Plan
>
> SIAI is currently a small organization, with one full-time Research
> Fellow (Eliezer Yudkowsky) and part-time involvement by a number of AI
> researchers, including Director of Research Dr. Ben Goertzel. We are
> seeking additional funding so as to enable, initially, the hiring of
> two doctoral or post-doctoral Research Fellows to focus on the above
> two areas (practical and theoretical exploration of self-modifying
> AI).
>
> These two Fellows would work under the supervision of Dr. Ben
> Goertzel, and in collaboration with Eliezer Yudkowsky as well. They
> would also benefit from interaction with the group of AI luminaries
> who are involved with SIAI, including SIAI Director Ray Kurzweil and
> SIAI Advisors Neil Jacobstein and Dr. Stephen Omohundro.
>
> Two Research Fellows, of course, represent a rather small allocation
> of society's overall resources; one could argue that, in fact, a
> substantial percentage of our collective resources should be allocated
> to exploring issues such as those that concern SIAI, given their
> potentially extreme importance to the future of humankind. But many
> great things start from small initiatives, and we believe that the
> right two researchers, focused squarely on these issues, can make a
> huge difference in advancing knowledge and directing AI R&D in the
> right direction.
>
> Part of our goal is to make progress on these issues ourselves,
> in-house within SIAI; and part of our goal is to, by demonstrating
> this progress, interest the wider AI R&D community in these
> foundational issues. Either way: the goal is to move toward a deeper
> understanding of these incredibly important issues.
>
> Toward a Positive Singularity
>
> Advanced self-modifying AI is almost sure to happen in this century,
> as Ray Kurzweil, Bill Joy, and others have foreseen. The big question
> is whether we succeed in creating it with rigor, care, and foresight.
>
> SIAI doesn't claim to have the answers; not yet, anyway. What we do
> have is a systematic, well-defined research program, aimed at focusing
> on the most essential questions. With sustained effort, maybe a little
> brilliance and luck, and a lot of help, we may well create an
> understanding that will help the human race navigate its way in the
> coming decades to a positive Singularity. If you are aligned with this
> vision, we hope you will help us.
>
> Why is it advantageous to invest in SIAI now rather than later? There's a
> clear, rational answer to this question: If you invest now, you will
> increase the probability that we can scale SIAI and its community of
> friends and supporters to a level where there's a sufficiently-sized
> body of capable researchers who can work full-time on these critical
> issues. SIAI is the only organization focused on these problems right
> now; thus we are a nucleus around which a certain amount of talent has
> already accrued, and around which additional talent can be accrued
> over time. If you invest later, you will likely have reduced the
> probability that SIAI will be able to reach a sufficient critical mass
> to effectively confront these issues before it's way too late. SIAI
> must bootstrap into existence a scientific field and research
> community for the study of safe, recursively self-improving systems;
> this field and community do not yet exist. This is going to be hard
> and it's going to take time, but the sooner SIAI can grow, the greater
> the chance we'll have of catalyzing a critical mass in time to deal
> with these problems before we're in a nose-dive situation that we
> can't reverse.
>
> One of the best ways to support SIAI is by contributing to the
> Singularity Challenge, which will allow us to grow the organization.
> If you donate or email us a pledge by August 6th, we can ensure your
> gift is matched. We hope many of you reading this will do so, and
> thank you!
>
> http://www.intelligence.org/challenge/
>
> If you want to get involved with SIAI, or if you have resources to
> share (such as expertise, talent, promotion, or contacts), then please
> email us: institute@intelligence.org.
>

Visit my LiveJournal at www.livejournal.com/users/turchin to find out what I am thinking right now, and also what I wanted to tell you but did not have time to :)


