RE: Reminder: SIAI Challenge expires this Sunday | SIAI February news

From: Christopher Healey (CHealey@unicom-inc.com)
Date: Sat Feb 18 2006 - 14:33:05 MST


All,

I was one of the potential donors who contacted Tyler about making up any shortfall.

After sending my inquiry, I realized that taking this position had more to do with my perception (read: emotional reward) of doing the most good than with doing the most good *in reality*. The latter is my ultimate goal.

I have just placed my donation, in the maximum amount I would have been able to match had a shortfall been in the cards. In other words, if I can donate it under a Challenge shortfall, I can donate it under any condition, and so I have.

If you're one of the others who contacted Tyler, or you've been thinking of donating but *really* want to feel you're achieving maximum impact, I'd ask you to reassess your position in this light.

Thank You,

-Chris Healey

-----Original Message-----
From: owner-sl4@sl4.org on behalf of Tyler Emerson
Sent: Sat 2/18/2006 3:35 PM
To: sl4@sl4.org; wta-talk@transhumanism.org
Subject: RE: Reminder: SIAI Challenge expires this Sunday | SIAI February news
 
I've had some potential donors say they're waiting until the end to help
cover any remaining amount. If you *can* contribute, please don't wait.
Please recognize the Prisoner's Dilemma effect here: if everyone waits until
Sunday to match what they expect will be a small remainder, a large amount is
left instead, which discourages the waiters from giving at all and leaves the
Institute with a large amount of unmatched funds.

~~
Tyler Emerson | Executive Director
Singularity Institute for Artificial Intelligence
P.O. Box 50182 | Palo Alto, CA 94303 U.S.
T-F: 866-667-2524 | emerson@intelligence.org
www.intelligence.org | www.singularitychallenge.com

> -----Original Message-----
> From: Tyler Emerson [mailto:emerson@intelligence.org]
> Sent: Friday, February 17, 2006 4:33 PM
> To: 'sl4@sl4.org'
> Subject: Reminder: SIAI Challenge expires this Sunday | SIAI February news
>
> The Singularity Institute's 2006 $100,000 Challenge with Peter Thiel, the
> former CEO of PayPal, will expire this Sunday, February 19. Any donation you
> make will be matched dollar-for-dollar. So far, SIAI has matched $93,486.
>
> You can donate and track our progress here:
> http://www.singularitychallenge.com
>
> Personal checks postmarked by Sunday will be matched. If you send a donation
> by check, please let us know so we can keep the donation total accurate.
>
> Matching the Challenge is really crucial for our growth. Below is a summary
> of our present projects, which I hope gives you a sense of our dedication.
>
> Your tax-deductible Challenge gift will support:
>
> * The production and promotion of the Stanford Singularity Summit, a
> conference to educate up to 1700 people in Silicon Valley about the
> singularity hypothesis - representing an unprecedented chance to promote and
> further the nascent fields of singularity and global risk studies. A
> well-executed conference will increase the interest of gifted students,
> raise the legitimacy of the research, and expand the range of investors.
>
> Summit homepage teaser:
> http://www.intelligence.org/summit/
>
> * Our second full-time Research Fellow. We are looking for someone
> exceptional to collaborate with Yudkowsky on the challenge of a workable
> theory for self-improving, motivationally stable Artificial Intelligence.
>
> http://www.intelligence.org/employment/researchfellow.html
>
> * A remarkable Development Director and Communications Director. I'm now
> looking for skilled and dedicated collaborators to scale the Institute.
>
> http://www.intelligence.org/employment/development.html
> http://www.intelligence.org/employment/communications.html
>
> * Our forthcoming monthly speaker series, the Future of Humanity Forum, at
> Stanford Hewlett Teaching Center (its main lecture hall seats 500). The
> Forum will be an ongoing series to complement and expand on the Singularity
> Summit. Date and time for the inaugural event will be announced shortly.
>
> * Yudkowsky's Friendly AI theory and design work, conference presentations,
> and published writing. He recently completed his two chapter drafts for Nick
> Bostrom and Milan Cirkovic's Global Catastrophic Risks (forthcoming): the
> first on cognitive biases potentially affecting judgment of global risks,
> the second on the unique global risks of Artificial Intelligence.
>
> * The Singularity Institute Partner Network. Later this year, we'll begin
> approaching potential inaugural partners to be our cornerstone for building
> a network of companies, foundations, individuals, and organizations
> committed to advancing beneficial AI, singularity, and global risk studies.
>
> * Medina's academic presentations. In January, Medina was awarded full
> financial support to attend a workshop on Bayesian inference, nonparametric
> statistics, and machine learning at the Statistical and Applied Mathematical
> Sciences Institute in North Carolina. One of the most popular academic
> conferences on the interdisciplinary study of the mind, Tucson VII - Toward
> a Science of Consciousness, has accepted his proposal for a talk on the
> ethics of recursive self-improvement in April. He will also speak on
> Artificial General Intelligence ethics at AGIRI's first workshop on moving
> from narrow AI to AGI, and at Stanford Law School on a new problem for
> personhood ethics in light of human enhancement technologies, both in May.
>
> * Our organizational identity and website overhaul, after the Summit.
>
> Further details:
> http://www.intelligence.org/challenge/index.html#our_work_details
>
> ~~
>
> If you aren't familiar with our work, please see:
>
> What Is the Singularity?
> http://www.intelligence.org/what-singularity.html
>
> Why Work Toward the Singularity?
> http://www.intelligence.org/why-singularity.html
>
> ~~
>
> Additional news:
>
> * Yudkowsky will give a talk on February 24 at the Bay Area Future Salon at
> SAP Labs (attendance ranges from 70 to 100), to discuss the implications of
> recursive self-improvement for Friendly AI implementation, and the unique
> theoretical challenge that recursive self-improvement poses.
>
> Time:
> 6:00-7:00PM (networking, refreshments), 7:00-9:00PM (talk, discussion)
>
> Location:
> SAP Labs, Building D
> 3410 Hillview Avenue
> Palo Alto, CA 94304
>
> Details:
> http://www.futuresalon.org/2006/02/hard_ai_future_.html
>
> * Peter Thiel has joined The Singularity Institute's Board of Advisors:
>
> http://www.intelligence.org/advisoryboard.html
>
> ~~
>
> Comments are welcome on the Summit teaser. Note that "What Others Have
> Said" has some known headshot-display and text issues that will be fixed.
>
> Feedback:
> emerson@intelligence.org
>
> ~~
>
> Thank you to everyone helping with the Challenge!
>
> Sincerely,
>
> ~~
> Tyler Emerson | Executive Director
> Singularity Institute for Artificial Intelligence
> P.O. Box 50182 | Palo Alto, CA 94303 U.S.
> T-F: 866-667-2524 | emerson@intelligence.org
> www.intelligence.org | www.singularitychallenge.com


