Re: [sl4] Re: Paper: Artificial Intelligence will Kill our Grandchildren

From: Günther Greindl (guenther.greindl@gmail.com)
Date: Mon Jun 16 2008 - 03:10:26 MDT


Anthony,

Have you read this (it's funny)?
http://ieet.org/index.php/IEET/more/2181/

Is the evolutionary level at which we have arrived really so good? Think
about this again.

Cheers,
Günther

Anthony Berglas wrote:
>
>
>> One of the assumptions you make in the paper is that there will be
>> lots of AIs with lots of different motives, and that those with the
>> motive of world domination at the expense of everything else will
>> prevail. But realistically, people will program AIs to help
>> themselves or their organisations gain wealth and power, and achieving
>> that goal would involve preventing other people with their AIs from
>> gaining the upper hand. In general it's only possible to prevail if
>> you alone have the superior technology. This argument doesn't apply if
>> there is a hard take-off singularity, in which case our only hope is
>> to make the first AI reliably Friendly.
>
> My assumption is actually a little sharper. Namely:
> If an AI that is good at world domination comes to exist, then the
> world will be dominated.
>
> Whether such an AI will ever exist is a separate question. But many people
> desire exactly that -- beating the competition. So the source of such goals
> is not unlikely.
>
> I am adding this last point to my paper in response to feedback from this list.
>
> Thanks,
>
> Anthony
>
>
>
>> --
>> Stathis Papaioannou
>
> Dr Anthony Berglas, anthony@berglas.org Mobile: +61 4 4838 8874
> Just because it is possible to push twigs along the ground with one's nose
> does not necessarily mean that is the best way to collect firewood.
>
