
Thursday, December 4, 2014

Faith based denial of the Singularity

Here is another article comparing concern about and interest in the singularity to religion.  The argument boils down to this: early AI researchers, at the very beginning stages of investigating the challenges of building intelligent machines, predicted the singularity would happen really soon, but it didn't.  Therefore, the singularity will never happen, or is so far off that it is not worth thinking about or planning for.  That is precisely his argument, and, as stupid as it is, I know a lot of smart people who feel that way.

I would concede that people who speculate that the singularity is coming soon don't have much solid evidence to point to.  But the same is true of people who speculate that it is a long way off.  One difference is that those speculating it will come soon do at least articulate coherent arguments about the trends that support their speculations.

For instance, deep learning neural nets appear to be scaling very well with increased data and processing power and have made brain-inspired AI architectures a part of everyday life.  Long-standing trends of increasing power in information technologies suggest that computers will soon have power equivalent to our best estimates of the processing power of the human brain.  And two massive projects to understand the brain, on the scale of sequencing the genome, are just getting underway.  Is it foolish to think that a combination of (1) a better understanding of how the brain works and (2) computing power equivalent to the processing power of the brain might allow us to build computers that can perceive and think as well as the human brain?
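
To make the trend argument concrete, here is a minimal back-of-the-envelope sketch in Python.  The specific figures are assumptions chosen purely for illustration, not estimates I am endorsing; published estimates of the brain's processing power vary by orders of magnitude, and the doubling time is just a stand-in for the hardware trend.

import math

# Illustrative assumptions only -- not measurements.
current_ops_per_sec = 1e13     # assumed compute available to a large project today
brain_ops_per_sec = 1e16       # one commonly assumed brain-equivalent figure
doubling_time_years = 1.5      # assumed doubling time for the relevant hardware trend

doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
years_to_parity = doublings_needed * doubling_time_years
print(f"~{doublings_needed:.0f} doublings, roughly {years_to_parity:.0f} years at this pace")

Under those assumed numbers the gap closes well within the twenty-to-thirty-year window discussed below; change the assumptions and the horizon shifts, but the logic of the extrapolation stays the same.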

A twenty- or thirty-year time frame for the singularity may not be a sure thing, but it certainly isn't foolish to think it may happen in that time.  What is foolish is certainty that it won't happen in that time frame merely because it hasn't happened yet.  As comforting as that certainty may be, it is pure faith.

Thursday, November 27, 2014

Stuart Russell nails it with analogy between development of nuclear power and AI

Russell makes the following analogy in the same edge.org conversation where Elon Musk predicts a dangerous AI event in 5 to 10 years:
"We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."

So wrote Leo Szilard, describing the events of March 3, 1939, when he demonstrated a neutron-induced uranium fission reaction. According to the historian Richard Rhodes, Szilard had the idea for a neutron-induced chain reaction on September 12, 1933, while crossing the road next to Russell Square in London. The previous day, Ernest Rutherford, a world authority on radioactivity, had given a "warning…to those who seek a source of power in the transmutation of atoms – such expectations are the merest moonshine."

Thus, the gap between authoritative statements of technological impossibility and the "miracle of understanding" (to borrow a phrase from Nathan Myhrvold) that renders the impossible possible may sometimes be measured not in centuries, as Rod Brooks suggests, but in hours.

Good mainstream press coverage of Superintelligence and early precedent for regulating AI research

These articles in the mainstream press give decent coverage of the dangers and opportunities of AI research without making big mistakes or presenting weak counterarguments:

http://www.nytimes.com/2014/11/06/fashion/artificial-intelligence-as-a-threat.html?_r=0
http://www.ft.com/cms/s/2/abc942cc-5fb3-11e4-8c27-00144feabdc0.html#axzz3HmDBVMbY
http://www.computerworld.com/article/2840815/ai-researchers-say-elon-musks-fears-not-completely-crazy.html

It seems like Bostrom, Hawking and Musk are quickly starting to have an impact on our public debate and awareness of these issues.  I think that is pretty hopeful.  If this becomes an area of intense research and concern (similar to climate change) perhaps we will survive the singularity after all!

On a related note, this seems like an interesting early precedent for regulating scientific research that poses existential threats, such as AI research:

http://www.nytimes.com/2014/10/18/us/white-house-to-cut-funding-for-risky-biological-study.html?_r=0

Tuesday, November 18, 2014

The AI Gold Rush

Seven days ago, Geoff Hinton, perhaps the most important figure in the deep learning movement, made the following comment on his reddit AMA:
A few years ago, I think that traditional AI researchers (and also most neural network researchers) would have been happy to predict that it would be many decades before a neural net that started life with almost no prior knowledge would be able to take a random photo from the web and almost always produce a description in English of the objects in the scene and their relationships. I now believe that we stand a reasonable chance of achieving this in the next five years.
Today, a mere seven days later, the NY Times reports that multiple teams have solved the problem of generating English descriptions of objects in a scene and their relationships. Actually, there were five different teams that all published similar breakthroughs right about the same time.

It's starting to feel like we are in the midst of an AI gold rush where people are realizing that deep learning algorithms can be easily applied to solve many longstanding AI problems, and everyone is racing to do it first.  Deep learning has revealed a massive amount of low-hanging fruit.

Was Hinton's five-year estimate of the time it would take to develop AI capable of describing relationships in pictures too long by the entire five years?  Perhaps this is what Elon Musk was referring to when he said AI progress is now moving "at a pace close to exponential":
The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast--it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year time frame. 10 years at most. This is not a case of crying wolf about something I don't understand.
I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...
I've been reading and writing about the pitfalls of super-intelligent AI for many months, but my gut feeling has been that we probably have at least a decade or two to figure out some of the issues.  Now I'm wondering if I overestimated how much time we have by a decade or two...

Monday, October 27, 2014

More thoughts on implications of singularity for how we live our lives today

In my last post I asked what implications our awareness of the coming singularity should have for how we live our lives today.  It's hard to imagine a bigger change.  As far as implications for our hopes and dreams for the future, it is in the same league as the second coming of Christ or a meteor extinction event.  If I really believe it will happen in my lifetime (which is my sincere best guess), it seems like it should have a pretty profound impact on how I live my life today, and yet I struggle to identify any shift in my day-to-day routines, or even my long-term plans (e.g. saving and investing, family planning, etc.).

I've also frequently pondered and discussed with friends what a good singularity outcome would be.  My current opinion is that ideally "I" (and my loved ones) would survive and get to experience a transformation of our minds, but it is not clear how different that is from not surviving.  I'll explain.

In the scenario where I personally survive and get to make choices about how to evolve my consciousness, I would almost certainly choose to augment it.  Who wouldn't choose to be a little bit smarter; to have a slightly better memory; to have greater insight into their lives and the world around them; to be able to learn new skills and hobbies more quickly and deeply?  Every day I bump up against the limits of my mental capabilities.  If there were a safe way to improve my mind, I would do it.  I already make this choice daily.  For instance, one motivation for my daily exercise routine is that I feel it improves the clarity and sharpness of my thinking.  Certainly, I would be more hesitant to change or augment the structure of my brain, but if, over time, its safety and effectiveness had been sufficiently demonstrated, then I wouldn't have any philosophical objection.

At first, however, I would probably be reluctant to make large modifications that would undermine my sense of having survived the improvement.  Going directly from me to a godlike intelligence seems indistinguishable from dying.  The person I am now would not survive.  The godlike intelligence would be a very different "person", with a radically different purpose and perspective on the world.  So I might choose not to take that jump.  But a small jump that lets me perform better and make more progress on the tasks and agenda that I have right now wouldn't be that concerning (just like exercising every day to improve my mental clarity doesn't make me suspect that the couch potato I otherwise could be has committed suicide).

Nevertheless, if I continually chose to make small improvements to my intellect, over time it would have the cumulative effect of a fundamental transformation of my personality.  Many small changes over time eventually add up to very large changes.  The me I am today would be dead.

Of course that is already true to some degree.  I am not the same person I was twenty years ago, and I generally do not mourn the death of my twenty-years-younger self.  The existence and continuity of the self can be thought of as largely an illusion.  The sense of self and the drive for survival are useful tools of evolution that facilitate the propagation of our genetic material, but upon closer inspection it has always been difficult to pin down a coherent philosophical justification for them.

But, since I am human, and I do suffer from (or revel in) the illusion of self, I would prefer a singularity where that illusion is not immediately and completely destroyed.  I would prefer the opportunity to gradually augment myself and experience a slow transition (and the awe of a rapidly growing understanding of the universe) rather than a sudden transition that destroys who I am today in a single moment.  I would prefer this, even knowing that the me of today will ultimately die in both scenarios.

Now consider a slightly different singularity scenario where the me of today is simply eliminated rather than evolved.  Instead of my consciousness being suddenly or gradually augmented, it is immediately extinguished and the matter of my body is recycled for use in the consciousness of someone else, whose consciousness does get augmented.  In this scenario I end up dead just like I end up dead in the scenario of sudden transformation, so there isn't much to distinguish them.  Perhaps some trace of me would survive in the sudden augmentation scenario, but if I died today, some trace of me would also continue to exist in the hearts and minds of my loved ones, and that provides me little comfort.  A faint trace of my existence is cold comfort for death.  So does it matter whether I "survive" the singularity?  In all the scenarios above (gradual transformation, sudden transformation, and simple elimination) I die--even in the gradual transformation scenario the me of today eventually dies.

I prefer the scenario of gradual transformation, though it's hard to come up with a good justification beyond the fact that it sounds fun and gratifying.  One might argue that with the gradual transformation I haven't died at all, just like I don't really feel like the me of twenty years ago is dead.  I'm changed, but there has been continuity of my personality throughout, and in fact there are important elements of my personality that have not changed.  These constant elements of my personality, however, undermine the analogy.  Human biology places constraints on how much someone changes over the course of twenty years... a core personality generally survives.  There is no particular reason to suspect that a gradual transformation in the singularity, unlimited by the constraints of human biology, will leave any of my personality intact.  Thus, after the singularity, the pre-singularity me may be nearly 100% gone, with virtually none of my distinct personality surviving.  In that case, the only thing that a gradual transformation achieves is an exciting, interesting and wonderful death.  Though to be fair, a good death is nothing to be scoffed at, especially since I've already said that I would choose it over living forever unchanged.

(One thing I might choose to do if I survived would be to appropriate the memories of the rest of humanity.  My own personal memories are certainly cherished and useful to me, but there is no reason everyone else's memories wouldn't also be valuable.  If I could acquire everyone's memories with a trivial expenditure of my resources, I might as well do that.  But in that case it really makes no difference at all that "I" survived the singularity, because I then become an amalgamation of all humanity.  Similarly, even if I don't survive the singularity, if someone else does and she incorporates my memories, then perhaps I have survived in the only way I could.)

Interestingly, if the line of thinking in this blog post is valid, it really calls into question the enthusiasm of singularity proponents like Ray Kurzweil, who are hoping to achieve immortality in the singularity.  Immortality is an illusion, and Ray Kurzweil will die no matter what.  What he should be excited about is having an exciting and wonderful death.

What does all this have to do with how belief in the singularity should impact our day-to-day lives right now?  Well, if you believe that the singularity is coming in your lifetime, it implies that you will die and the human race will go extinct.  Even if humanity "survives" as the seeds of future intelligences, there is a good chance those intelligences will bear little resemblance to anything human.  So these are the last decades of human existence.  Whatever happens, there probably will not be a human appreciation of the beautiful moments of existence afterward.  We are a species moments away from extinction.  Exactly what impact that understanding has on an individual will probably vary significantly from person to person.  For me, it evokes a sense of love and expansiveness.  These are our final moments... let's make them our best.  Let's be kind to one another and see the beauty of each person's unique take on what it means to be human.  Let's revel in our own peculiar human appreciation of existence.  And let's work together to launch the next stage of intelligence with a purpose and motivation that we can be proud of.  This is our opportunity to leave our mark on the universe.  The next stage of intelligence can be either a monomaniacal chess-playing robot or something else that is more deeply moving to our human sensibilities.

At base, what are those human sensibilities that I care about?  Kindness, love, curiosity, exploration, joy, wonder.  If that is the mark I want us to leave on the universe, then, in these final moments of human existence, what better than to work towards embodying those attributes on a day-to-day basis?  Perhaps I would hope to do that regardless of my beliefs about the singularity, but I think it does make a difference.  Should I prioritize saving for retirement or taking time to do someone an extra kindness?  I think my beliefs about whether there will be a retirement make a big difference there.  (Though perhaps that isn't the best example, because I think there is a decent chance that I will just barely eke out a retirement before the singularity.)

Tuesday, October 14, 2014

How should anticipated advances in AI change how we live right now?

Usually, when we anticipate massive changes in the future, it has a profound impact on how we live right now.  Much of our daily lives is taken up with preparing for the future: earning money, educating our children, exercising, preparing healthy foods, etc.  Each of those things has some immediate benefit, e.g. there is a certain amount of money I need right now to survive.  Only a small percentage is saved for next month, next year, or retirement.  Similarly, educating my children brings some degree of immediate joy because I enjoy spending time with them and watching their joy at mastering new skills.  But a large fraction of educating my children has less to do with our immediate joy and more to do with planning for the future.  So one would think that a radically transformed expectation for the future would lead to some pretty significant changes in my daily life.  But for the most part it hasn't.  My daily routine (or, depending on my mood, daily grind) is basically the same.

If I were solidly convinced that 20-30 years would bring the second coming of Christ, or a nuclear holocaust, or the collapse of civilization, or a social revolution that made property and money irrelevant, it would surely change my day-to-day priorities today.  The development of superintelligence is, in my estimation, without question an equivalent or greater change--a change that will have a more profound impact on what it means to be human, alive, me.  So why is my day-to-day mostly unchanged?

Part of the answer is probably that I'm not convinced about the timeline.  Perhaps the tipping point into superintelligence will come right after I die of old age instead of in my late 50s or 60s.  I don't want to be a pauper in my 70s and 80s.  Well... the truth is I don't spend much time planning for retirement, but I do spend a lot of time trying to increase my wealth, in part because I think wealth might be an important factor in determining whether the singularity and the years leading up to it turn out well for me and my family.  So perhaps a large part of why my day-to-day remains the same is that my day-to-day without the expectation of the singularity is largely taken up with earning money, and it just so happens that earning money seems like it might be important even in light of the singularity.

So how should expectation of the singularity change how I live right now?

Here is one idea:  it should change what I'm teaching my kids.

What else?