From: Robin Hanson (firstname.lastname@example.org)
Date: Sat Nov 10 2007 - 14:23:44 MST
After ten years as an AI researcher, my inclination has been to see progress toward an explicitly-coded AI as very slow, and so to guess that the whole brain emulation approach would succeed first if, as seems likely, that approach becomes feasible within the next century.
All the replies on SL4 as of 10:40AM Pacific seem pretty good to me. Why are you asking after "rapid" progress? It doesn't seem to be the key question.

.... An analysis of dollar responses to public issues" makes the point that in many cases people have no anchors, no starting points, for questions like "How much should this company be penalized for crime X?"

.... On one memorable occasion, an AI researcher said to me that he thought it would take 500 years before AGI.

... I suspect that, especially among AI researchers, the question "How long will it be before we get AGI?" is more of an attitude expression than a historical estimate.

... Naturally, building AGI will seem *very* hard if you can't imagine any way to do it (the imaginability heuristic), and so they'll give a response near the upper end of their scale.

... The key realization here is that building a flying machine would also *feel* very hard if you did not know how to do it.

... As for knowledge itself, that is a matter of pure basic research, and if we knew the outcome we wouldn't need to do the research. How can you put a time estimate on blue-sky fundamental research delivering a brilliant new insight?

... We have no reason to believe that timing is predictable even in principle.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT