Books about science, and especially computer science, often suffer from one of two failure modes. Treatises by scientists sometimes fail to communicate their insights clearly. Conversely, the work of journalists and other professional writers may betray a weak grasp of the science in the first place.
Luke Dormehl is the rare lay person — a journalist and filmmaker — who actually understands the science (and even the math) and is able to parse it in an edifying and exciting way. He is also a gifted storyteller who interweaves personal stories with the broad history of artificial intelligence. I found myself turning the pages of “Thinking Machines” to find out what happens, even though I was there for much of it, and often in the very room.
Dormehl starts with the 1964 World’s Fair — held only miles from where I lived as a high school student in Queens — evoking the anticipation of a nation working on sending a man to the moon. He identifies the early examples of artificial intelligence that captured my own excitement at the time, like IBM’s demonstrations of automated handwriting recognition and language translation. He writes as if he had been there.
Dormehl describes the early bifurcation of the field into the Symbolic and Connectionist schools, and he captures key points that many historians miss, such as the uncanny confidence of Frank Rosenblatt, the Cornell professor who pioneered the first popular neural network (which he called the “perceptron”). I visited Rosenblatt in 1962 when I was 14, and he was indeed making fantastic claims for this technology, saying it would eventually perform a very wide range of tasks at human levels, including speech recognition, translation and even language comprehension. As Dormehl recounts, these claims were ridiculed at the time, and indeed the machine Rosenblatt showed me in 1962 could perform none of these things. In 1969, funding for the neural net field was obliterated for about two decades when Marvin Minsky and his M.I.T. colleague Seymour Papert published the book “Perceptrons,” which proved that perceptrons cannot distinguish a connected figure (one in which all parts are connected to each other) from a disconnected figure, something a human can do easily.
What Rosenblatt told me in 1962 was that the key to the perceptron achieving human levels of intelligence in many areas of learning was to stack the perceptrons in layers, with the output of one layer forming the input to the next. As it turns out, the Minsky-Papert perceptron theorem applies only to single-layer perceptrons. As Dormehl recounts, Rosenblatt died in 1971 without having had the chance to respond to Minsky and Papert’s book. It would be decades before multi-layer neural nets proved Rosenblatt’s prescience. Minsky was my mentor for 54 years until his death a year ago, and in recent years he lamented the “success” of his book and came to respect the recent gains in neural net technology. As Rosenblatt had predicted, neural nets were indeed providing near-human (and in some cases superhuman) levels of performance on a wide range of intelligent tasks, from translating languages to driving cars to playing Go.
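The Minsky-Papert connectedness proof is too involved to reproduce here, but the simplest illustration of the same single-layer limitation is the XOR function, whose two output classes no single linear threshold unit can separate — while one hidden layer suffices. A minimal sketch (the weights are chosen by hand for illustration, not produced by Rosenblatt's learning rule):

```python
from itertools import product

# XOR: the classic function no single-layer perceptron can compute,
# because its two output classes are not linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

def accuracy(w1, w2, b):
    """Fraction of XOR points a single linear threshold unit gets right."""
    return sum(int((w1 * a + w2 * c + b > 0) == bool(t))
               for (a, c), t in zip(X, y)) / 4

# Search a grid of weights: no single-layer unit ever exceeds 3 of 4 correct.
grid = [i / 2 for i in range(-4, 5)]  # -2.0 .. 2.0 in steps of 0.5
best = max(accuracy(w1, w2, b) for w1, w2, b in product(grid, repeat=3))

# A second layer removes the limitation: two hidden threshold units
# (computing OR and AND) feed one output unit that computes XOR exactly.
def xor_two_layer(a, c):
    h_or = int(a + c > 0.5)    # hidden unit 1: OR of the inputs
    h_and = int(a + c > 1.5)   # hidden unit 2: AND of the inputs
    return int(h_or - h_and > 0.5)  # output unit: OR and not AND
```

The grid search tops out at 0.75 (one of the four points is always misclassified), while the two-layer network gets all four right. Stacking layers so that one layer's outputs feed the next is precisely what Rosenblatt proposed.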
Dormehl examines the pending social and economic impact of artificial intelligence, for example on employment. He recounts the positive history of automation. In 1900, about 40 percent of American workers were employed on farms and over 20 percent in factories. By 2015, these figures had fallen to 2 percent on farms and 8.7 percent in factories. Yet for every job that was eliminated, we invented several new ones, with the work force growing from 24 million people (31 percent of the population in 1900) to 142 million (44 percent of the population in 2015). The average job today pays 11 times as much per hour in constant dollars as it did a century ago. Many economists argue that while this may all be true, the future will be different because of the unprecedented acceleration of progress. Although he expresses some caution, Dormehl shares my optimism that we will be able to deploy artificial intelligence in the role of brain extenders to keep ahead of this economic curve. As he writes, “Barring some catastrophic risk, A.I. will represent an overall net positive for humanity when it comes to employment.”
Many observers of A.I. and the other 21st-century exponential technologies, like biotechnology and nanotechnology, attempt to peer ahead at the continually accelerating gains and fall off the horse. Dormehl ends his book still in the saddle, discussing the prospect of conscious A.I.s that will demand and/or deserve rights, and the possibility of “uploading” our brains to the cloud. I recommend this book to anyone with a lay scientific background who wants to understand what I would argue is today’s most important revolution, where it came from, how it works and what is on the horizon.
“Heart of the Machine,” the futurist Richard Yonck’s new book, carries its central insight in its title. People often think of feelings as secondary, a sideshow to intellect, as if the essence of human intelligence were the ability to think logically. If that were true, machines would already be ahead of us. The superiority of human thinking lies in our ability to express a loving sentiment, to create and appreciate music, to get a joke. These are all examples of emotional intelligence, and emotion sits at both the bottom and the top of our thinking. We still have that old reptilian brain that provides our basic motivations for meeting our physical needs, and to which we can trace feelings like anger and jealousy. The neocortex, a layer covering the brain, emerged in mammals two hundred million years ago and is organized as a hierarchy of modules. Two million years ago we acquired our big foreheads, which house the frontal cortex and enable us to process language and music.
Yonck provides a compelling and thorough history of the interaction between our emotional lives and our technology. He starts with the ability of the early hominids to fashion stone tools, perhaps the earliest example of technology. Remarkably, the complex skills required were passed down from one generation to the next for over three million years, even though for most of that period language had not yet been invented. Yonck makes a strong case that it was our early ability to communicate through pre-language emotional expressions that enabled the remarkable survival of this skill, and enabled technology to take root.
Yonck describes today’s emerging technologies for reading our emotions from images of facial expressions, intonation patterns, respiration, galvanic skin response and other signals — and how these instruments might be adopted by the military and built into interactive augmented reality experiences. And he recounts how all communication technologies, from the first books to today’s virtual reality, have had significant sexual applications and will enhance sensual experiences in the future.
Yonck is a sure-footed guide and is not without a sense of humor. He imagines, for example, a scenario a few decades from now with a spirited exchange at the dinner table. “No daughter of mine is marrying a robot and that’s final!” a father exclaims.
His daughter angrily replies: “Michael is a cybernetic person with the same rights you and I have! We’re getting married and there’s nothing you can do to change that!” She storms out of the room.
Yonck concludes that we will merge with our technology — a position I agree with — and that we have been doing so for a long time. He argues, as have I, that merging with future superintelligent A.I.s is our best strategy for ensuring a beneficial outcome. Achieving this requires creating technology that can understand and master human emotion. To those who would argue that such a quest is arrogantly playing God, he says simply: “This is what we do.”