Review: Kurzweil's The Singularity is Near

Although Bayle and I are always surprised when we see how many people are actually reading Neurodudes every day (“you really like us! you really do!”), I think we realized we had hit a new milestone when Ray Kurzweil’s book agent called to give us an advance copy of his new book. Let me be clear here: We will gladly review any AI-/neuro-related books you send us. Free books are great! (Heck, we’ll even do an occasional historical biography, if you send us one.)

There’s a lot to say about Kurzweil’s new book, The Singularity is Near (book website; book on Amazon). This book is similar to his previous books (Age of Intelligent Machines, Age of Spiritual Machines) in style and research, but the thesis here is that we are on the precipice of a major change in human civilization: We are soon going to create entities whose intelligence is superior to our own in every respect. This is the Singularity.

Full book review after the jump

TSIN is based on the same fundamental idea that his previous books emphasize: the accelerating rate of innovation. Ray contends that the time between major milestones, specifically with respect to human-created computational ability, is shrinking, and that most people don’t realize this means the “future” will be here faster than one would predict.

Unlike the previous books, Ray talks a lot more about biology in this work and, in particular, neuroscience. He sprinkles the book with quotes from well-known researchers talking about, well, stuff they normally don’t talk about, like predictions about the future of neuroscience. (Early on, Kurzweil makes a smart remark about how scientists are trained to be very skeptical and pride themselves on underestimating the impact of new technologies.)

Most of the neuroscience research is not terribly novel for those who regularly read journals or attend conferences, but that is not what you should be reading this book for. Ray, in the best traditions of the multidisciplinary “renaissance scientist” (perhaps an almost extinct species in these times of ultra-over-specialization), excels at assembling disparate ideas from many different disciplines. That alone can be a recipe for disaster, but Ray does a nice job of combining ideas and technologies with his constant back-of-the-envelope calculations to show the multiplicity of routes to his central thesis.

There are a few chapters specifically on neuroscience, and there are some very nice insights in them. A commendable discussion of levels of analysis in neural systems elaborates on the difficulty of making simple estimates, based on numbers of neurons or synapses, of the computational power of the brain. Sure, the connections between cortical regions might be well understood, but what about the local connections within a few hundred microns? Similarly, we might model a set of neurons and their connections, but how about the extracellular diffusion of neuromodulators near those synapses, or local electric fields, or countless other influences? It was a nice surprise to see a relatively accessible book bring up these issues, even if only briefly.

Ray also tackles the important divide between analytic and neuromorphic methods in computational neuroscience, a question that I suspect few computational neuroscientists have given careful thought. He sides with the neuromorphic approach and seems to suggest that studying the genetic basis of the brain might be more fruitful than studying the brain itself, since the brain’s design and many of its essential features are captured by this compact representation.

This is not to say that the book is without flaws. There are many contentious ideas that the ever-optimistic Ray (I think that’s a good thing, by the way) presents as fact: Reversible computing leads him to believe that eventually all computation will require no energy. Memory might be more than connection patterns and neurotransmitter concentrations, and I mean a lot more. And, as we’ve discussed here before, we are far away from any kind of neuromorphic hippocampus, despite what some may claim. Also, it’s hard to judge how seriously we’re supposed to take some of the time estimates, especially when there’s little justification for the particular date — sending nanobots through the bloodstream to monitor every neuron’s activity noninvasively by 2020? Maybe. (As Ray points out, “there are more than 50,000 neuroscientists in the world, writing articles for more than 300 journals.” Who knows…) Of course, the biggest estimate is the Singularity itself, which he pins at 2045 by extrapolating computation-per-dollar trends. Maybe.

The book also includes several sections on computation and related application-oriented fields (nanotech, robots) that I’ll skip over, but the best part of the book might be Ray’s answers to his critics. From the wacky (Penrose’s quantum mechanics in the neural cytoskeleton) to the deeply philosophical (Searle’s Chinese Room argument against strong AI), it is clear that he has thought about the viability of his ideas and is prepared to take on the obvious criticisms that others might lob at him.

I don’t think I would have gone to graduate school in neuroscience if I didn’t believe, like Ray, that the Singularity is near. Just how near, unfortunately, I’m not sure.

17 thoughts on “Review: Kurzweil's The Singularity is Near”

  1. >This book is similar to his previous books (Age of Intelligent Machines, Age of Spiritual Machines)
    >in style and research but the thesis here is that we are on the precipice of a major change in human
    >civilization: We are soon going to create entities of superior intelligence in all aspects to our own
    >selves. This is the Singularity.

    Doesn’t the singularity also include some degree of the unification of human intelligence with machine intelligence, quasi-Matrix style, or am I getting Kurzweil confused with sci-fi dystopia descriptions?

    –Stephen


  2. personally, i believe the singularity is not due for 100 years or more. Why?

    * superhuman ai? after 50 years, i still don’t even have a robot doing
    my dishes. despite the combined might of capitalism and academia
    focusing for almost a decade on the problem of making search engines
    “understand” text even a little bit, slightly modified keyword search
    is our best way to query the web. despite years of specialized
    research into machine vision, only now are machines beginning to be
    able to recognize the simplest objects in normal surroundings with
    reasonable accuracy. oh yeah, and almost no one is even working on
    the problem of human-level A.I. anymore.

    * intelligence-augmenting neuroscience? after hundreds of years, we
    still don’t even know the level of description on which neurons code
    data; for example, we don’t know for most parts of the brain if spike
    timing is meaningful or if the rate suffices. And most
    neuroscientists are still doing experiments poking one cell at a
    time — and they aren’t even bothered by that.

    * Besides, even if it seemed imminent, these things take longer
    than expected; examples: although researched for years, no “real”
    household robots yet; no moon hotel yet; no flying cars yet

    So I think the singularity is not near. But it may (or may not) happen eventually, and it might be a good idea to start thinking about and planning for it now in case it does.

    one good argument that the “singularity is near” camp has, though, is that if there are 500 crazy things being developed, and the chance of any one of them coming through within our lifetime is low, it’s still possible that the chance of AT LEAST ONE of them coming through soon is high.
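    That “at least one” point is just the complement rule; a quick sketch (the 500 projects and the 1% per-project chance below are made-up numbers, purely for illustration):

```python
# Chance that at least one of n independent long-shot bets pays off,
# given each has a small individual probability p of success.
def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# Even if each of 500 "crazy things" has only a 1% chance of arriving
# in our lifetime, the odds that at least one arrives are excellent.
print(round(p_at_least_one(0.01, 500), 3))  # → 0.993
```

    The caveat is independence: if all 500 bets share a common bottleneck (say, they all require human-level AI), the combined odds are much worse than this formula suggests.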


  3. Stephen: I think there are multiple types of “singularities”. I think that either A.I. or augmented human intelligence could lead to a situation that counts as a singularity. Basically, when you create a positive feedback loop in intelligence augmentation, that’s a singularity. So you have a smart computer that can program smarter computers, or people who can make themselves smarter (and then, since they’re smarter, they can figure out how to make themselves even smarter…). The singularity folks consider the rise of civilization as itself a singularity seen on a longer timescale. Another way of looking at it: AI or neuro-based intelligence augmentation will be the tail end of the “civilization” singularity.

    BTW, here’s the introduction of the singularity concept (by Vernor Vinge, who is also btw an awesome scifi author): http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html


  4. Bayle – Your speculations as to why the singularity won’t happen for 100+ years are based on assumptions that are quite eloquently rebutted in Kurzweil’s book, which I assume you haven’t read. Exponential growth appears linear in its early stages. Not that I agree with everything in the book, but the “I don’t have a smart robot yet” argument is actually a pretty weak one. As Kurzweil points out, true speech recognition will be one of the last AI problems we solve, because as a class of problem it requires near-human intelligence to replicate. But that says nothing about timing; it speaks only to the order that must be followed to solve the problem.


  5. Chris, you’re right, I haven’t read the book — I should have pointed that out. The comment I posted came out of an email discussion between Neville and me when we were asked to review the book. Neville’s the one who read it, not I — I posted the comment hoping to get feedback (like yours) about what the book says about these issues.


  6. So if we see a linear trend, what should cause us to assume that this is the “linear beginning” of an exponential (or at least S-shaped) curve? Wouldn’t it be simpler to extrapolate based on the assumption of a linear trend?

    But in my mind, my argument against it is mostly heuristic. Basically, people have predicted before that “everything would change” in short order because of technology. But what happened before is that the “tone” of human life and society didn’t change, or rather changed only over a period of multiple generations. My intuition is therefore that the “ordinariness” or “banality” of the world (one manifestation of this is “the complexity and difficulty of making any really neat invention actually work”) is a strong force. Basically, I think that people often postulate, on the basis of deductive-style thinking starting from an “axiom” of the possibility of a new technology, that everything will change. But the world’s banality/muddiness makes any single possibility less important than it seems from the perspective of deductive reasoning.

    Therefore, my heuristic is: if you’re predicting that the basic “tone” of life will radically change in a short period of time, you’re probably wrong. Also, if you’re predicting that some technology will change everything, you’re probably wrong — often the order of magnitude of the number of issues needed to work out the tech is drastically underestimated (consider that a few decades ago, “machine vision” was assigned as a summer project to a single undergraduate by one of the fathers of A.I. — sadly I forgot the details of this story though), and even after the technology “comes to fruition”, there are tons of kinks (technological, economic, and social) that have to be worked out before it can be used the way it was intended (why don’t we have ubiquitous computing yet? why don’t i have a PC in my pocket, sunglasses-screens, and finger position-sensors, despite prototypes of all of these being around for awhile? “kinks”).


  7. >So if we see a linear trend, what should cause us to assume that this is the “linear beginning”
    >of an exponential (or at least S-shaped) curve? Wouldn’t it be simpler to extrapolate based on
    >the assumption of a linear trend?

    The central thesis of the book is summed up in this Kurzweil essay (which appears as a chapter in the book):

    http://www.kurzweilai.net/articles/art0134.html?printable=1

    RK answers your question in the very first part of the article. Relevant quote (just the starting place for his argument):

    “Most long range forecasts of technical feasibility in future time periods dramatically underestimate the power of future technology because they are based on what I call the “intuitive linear” view of technological progress rather than the “historical exponential view.” To express this another way, it is not the case that we will experience a hundred years of progress in the twenty-first century; rather we will witness on the order of twenty thousand years of progress (at today’s rate of progress, that is).”

    -K-
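    Kurzweil’s “twenty thousand years” figure is straightforward compounding arithmetic; here is a sketch (the fixed 10-year doubling time is an illustrative assumption, not his exact parameterization; in his model the doubling time itself shrinks, which pushes the number even higher):

```python
# "Years of progress at today's rate" accumulated over a span of
# calendar years, if the rate of progress doubles every doubling_time
# years. A constant rate corresponds to doubling_time = infinity.
def years_of_progress(calendar_years: int, doubling_time: float) -> float:
    return sum(2.0 ** (year / doubling_time) for year in range(calendar_years))

print(years_of_progress(100, float("inf")))  # intuitive linear view: 100.0
print(round(years_of_progress(100, 10.0)))   # exponential view: ~14,000
```

    Even with this simplification, the two views diverge by two orders of magnitude over a century, which is the crux of the disagreement in this thread.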


  8. Hmm, I guess I just feel that if you were able to plot “the tenor of life” or “how fast people’s lives seem to be changing” or “how much technology is altering the way people actually live” the same way Kurzweil plots raw computing power, you’d find that the curve wouldn’t predict dramatic changes for over a century. I think that “the tenor of life” would be some formula dependent upon linear social and economic paradigm factors (by “economic paradigm” I mean not “rate of GDP increase” but rather things like “the speed at which people become comfortable with new economic ideas, such as the idea of being paid wages to do a job, or the idea of insurance”) as well as exponential technological factors, and that the linear factors are much stronger.

    I guess I do think that perhaps eventually the technological factors will be so high that they will dominate even the “stronger” linear factors and cause a singularity, but I just don’t see it happening so soon — this is all intuitive, of course. But my intuition about this is pretty strong, and since it gets “commonsense” points I weight it more than Kurzweil’s purely intellectual argument. His argument is, at root, based on looking at the shapes of graphs of the advancement of various technologies and assuming that other graphs will have that same shape.

    (other graphs? besides the computing power graph? yes, most notably, Kurzweil must assume that our ability to write intelligent algorithms that make use of the computing power grows quickly; in the sections of the essay you posted called “The Software of Intelligence”, “Reverse Engineering the Human Brain”, “How to Use Your Brain Scan”, and “Downloading the Human Brain”, he presents examples of projects along these lines that cause his personal intuition to conclude that the software is within reach in our lifetimes; my intuition disagrees, as I think those same projects are much more preliminary and farther from the goal than he seems to think; but again, notice that this is just my intuition against his)

    More on the “tenor of life” argument: what does that have to do with the rate of technological advancement? Well, to be honest, my intuition is simply that “the tenor of life wants to change only slowly” — I am taking that as an axiom, finding that a singularity contradicts it, and then concluding that a singularity is not possible. However, I can manufacture a connection: for technological advance to cause the rate of technological advance to itself increase, I postulate that the technological advance has to cause society to change somewhat to become more efficient. But the speed of that feedback loop is limited by the rate of social and economic paradigmatic change (for example, what if the internet enables a new economic organization in which “virtual corporations”, social networking, and consulting are the norm, rather than large corporations and conventional long-term employment, and the new form turns out to be drastically more profitable? What would eventually happen is that society would switch to this new form, and the greater profitability would enable more research, which would raise the rate of tech advancement. But this switch is likely to be slow because of the rate-limiting effect of social and “economic paradigm” change).

    I’m not totally ruling out the chance that I’m wrong. I think there’s probably a 10-20% chance that I’m wrong about everything and that there will be a singularity in our lifetimes. It would be neat, if so.


  9. Looked at Solow’s growth model, Bayle? Technology is considered an exogenous variable not explained by the model, but it is the most important variable explaining growth besides capital intensity. Solow’s conclusion (in the model) is that we can see distinct periods of rapid growth but, due to diminishing returns, growth (GDP) will eventually reach a steady state (a higher level of GDP/capita).

    Kurzweil claims a ‘Law of accelerating returns’ due to technological innovation; it’s interesting.
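    For contrast, the Solow mechanism can be sketched in a few lines: with diminishing returns to capital (alpha < 1) and exogenous technology, capital per effective worker converges to a steady state rather than accelerating (all parameter values below are illustrative, not calibrated):

```python
# Minimal Solow model in per-effective-worker terms:
#   k_{t+1} = k_t + s * k_t**alpha - (n + g + delta) * k_t
# Diminishing returns (alpha < 1) force convergence to a steady state.
def solow_path(k0, s=0.3, alpha=0.33, n=0.01, g=0.02, delta=0.05, steps=500):
    k = k0
    for _ in range(steps):
        k += s * k ** alpha - (n + g + delta) * k
    return k

# Closed-form steady state: k* = (s / (n + g + delta)) ** (1 / (1 - alpha))
k_star = (0.3 / 0.08) ** (1 / (1 - 0.33))
print(round(solow_path(1.0), 3), round(k_star, 3))
```

    Kurzweil’s ‘Law of accelerating returns’ is, in effect, the claim that technology is not exogenous but feeds back on itself, which is exactly the assumption this textbook model rules out.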


  10. Get it, folks. I was a competitive debater in both high school and college, and I have been a software engineer for the past 27 years. One, his arguments and proof of the “singularity” and the accelerating pace of technological change are rock solid. Two, as an engineer who has been at the center of technological innovation for all of my adult life (grudgingly admitting I am 46), his technology forecasts have a serious ring of truth to them. Last, he was able to answer my ultimate counter to the singularity, that being Gödel’s Incompleteness Theorem as an ultimate block to strong AI. READ THE BOOK… Get ready for the “singularity”… IT IS COMING WHETHER YOU BELIEVE IT OR NOT!!!!


  11. Lauren, i believe you in the sense that we might come up with something new, tech not known today, but i doubt Kurzweil’s law of accelerating returns. Somewhere in the near future Moore’s law will hit a ceiling; the exponential technological growth (which R. Kurzweil log-plots in his book as a straight line over the past 100 years) will not be what we have seen for the last, say, 20-30 years; there will be a slowdown. The straight line in his logarithmic plots will start to bend downward.


  12. Well, I’ve not read the book yet (arriving tomorrow, I think), but even if he is off with his predictions, at least he has got people thinking more about it… that’s always a good thing. One of the arguments against seems to be that there is so much left to be done in order to have the singularity, but Kurzweil, I think in his previous book or an article he wrote, said that the majority of the tech will happen at the last minute; it’s just the nature of the exponential.


  13. I don’t personally believe that the conservatism of human societies is going to be able to hold back the Singularity. For one thing, people’s ability to be conservative about new technologies is limited to those technologies with which they consciously interact. Which is to say, it’s really only a limitation on interfaces, not the underlying technologies.

    Take “telephones,” for instance. They’ve had a remarkably stable interface since their introduction, but they are now an entirely different technology in almost every other respect. Cellular phones are light years ahead of land lines, but on the other hand, they still have the arrangement 123, 456, 789, *0#. The VoIP revolution is taking hold now precisely because it can change the underlying technology with minimal disruption to the interface.

    Note, however, that the telephone interface has in fact changed. Rotary phones became touch tone. Caller ID was added. The dial tone began to signal the presence of voice mail. There is a huge element of conservatism, true– new interface ideas tend to prove acceptable only to the extent that they fail to interfere with the existing interface– but since there are powerful incentives to incorporate new & more powerful technologies into the system, this amounts merely to a requirement of backwards compatibility. You can put in a phone system that does any old crazy thing, and as long as you have to press “star” first, it’s no trouble to anyone who wants to pretend like nothing’s changing.

    The next few generations of interface will change this equation even further in the direction of fluidity. The main reason interfaces with technology have been as conservative as they have is that they are essentially unintuitive. If you learn by rote a particular arrangement of buttons, you want to stick with it. As interfaces become more intelligent, perceptive, and interactive, they will become vastly more intuitive, allowing people to interface effectively with a larger amount of computation.

    Taking the phone system again as an example, there are already visionaries working on creating the technologies that will finally replace the dial tone. The most likely interface is a voice which says something like, “Hello Bob, You Have Twelve Messages! What Can I Do For You Today?” That may not sound that magnificent– though you must admit it’s preferable to a mildly annoying tone with only historical meaning– but it will improve rapidly as the underlying system becomes more intelligent.

    Voice recognition will slowly start to replace dialing. Address books and automated lookups will slowly start to replace memorizing or writing down phone numbers. The conservatism will be chipped away very quickly as things change in ways that make them undeniably easier. The so-called technical aspects of the technology– the medium-tech aspects, the hump of interface complexity– will start to fade away.

    And human will begin to join with machine.
