The AI Revolution: Why Deep Learning is Suddenly Changing Your Life

Image: A conception of how deep learning might be used to identify a face.

Decades-old discoveries are now electrifying the computing industry and will soon transform corporate America.

Over the past four years, readers have doubtless noticed quantum leaps in the quality of a wide range of everyday technologies.

Most obviously, the speech-recognition functions on our smartphones work much better than they used to. When we use a voice command to call our spouses, we reach them now. We aren’t connected to Amtrak or an angry ex.

In fact, we are increasingly interacting with our computers by just talking to them, whether it’s Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google. Chinese search giant Baidu says customers have tripled their use of its speech interfaces in the past 18 months.

Machine translation and other forms of language processing have also become far more convincing, with Google, Microsoft, Facebook, and Baidu unveiling new tricks every month. Google Translate now renders spoken sentences in one language into spoken sentences in another for 32 pairs of languages, while offering text translations for 103 tongues, including Cebuano, Igbo, and Zulu. Google’s Inbox app offers three ready-made replies for many incoming emails.

Then there are the advances in image recognition. The same four companies all have features that let you search or automatically organize collections of photos with no identifying tags. You can ask to be shown, say, all the ones that have dogs in them, or snow, or even something fairly abstract like hugs. The companies all have prototypes in the works that generate sentence-long descriptions for the photos in seconds.

Think about that. To gather up dog pictures, the app must identify anything from a Chihuahua to a German shepherd and not be tripped up if the pup is upside down or partially obscured, at the right of the frame or the left, in fog or snow, sun or shade. At the same time it needs to exclude wolves and cats. Using pixels alone. How is that possible?

The advances in image recognition extend far beyond cool social apps. Medical startups claim they’ll soon be able to use computers to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists, to diagnose cancer earlier and less invasively, and to accelerate the search for life-saving pharmaceuticals. Better image recognition is crucial to unleashing improvements in robotics, autonomous drones, and, of course, self-driving cars—a development so momentous that we made it a cover story in June. Ford, Tesla, Uber, Baidu, and Google parent Alphabet are all testing prototypes of self-piloting vehicles on public roads today.

But what most people don’t realize is that all these breakthroughs are, in essence, the same breakthrough. They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.

The most remarkable thing about neural nets is that no human being has programmed a computer to perform any of the stunts described above. In fact, no human could. Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences.
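
To make that concrete, here is a minimal sketch, in Python with NumPy, of the train-by-example idea described above: the programmer writes the learning procedure and supplies labeled data, and the network adjusts its own weights to discover the mapping on its own. The toy task and network sizes are invented for illustration; real systems train far larger networks on far more data.

```python
# A minimal sketch (not any company's production code) of the "learn from
# examples" idea: the programmer supplies a learning algorithm and labeled
# data; the network adjusts its own weights to find the mapping.
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: 2-D points, labeled 1 if the point lies inside a circle.
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float).reshape(-1, 1)

# A tiny two-layer neural network: 2 inputs -> 16 hidden units -> 1 output.
W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: compute the network's current guesses.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error on the examples.
    grad_out = (p - y) / len(X)                 # gradient of the loss at the output
    dW2 = h.T @ grad_out; db2 = grad_out.sum(0)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    dW1 = X.T @ grad_h;  db1 = grad_h.sum(0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after learning from examples: {accuracy:.2f}")
```

Everything the finished network "knows" lives in its learned weight values; no rule about circles was ever written down by a human.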

In short, such computers can now teach themselves. “You essentially have software writing software,” says Jen-Hsun Huang, CEO of graphics processing leader Nvidia, which began placing a massive bet on deep learning about five years ago. (For more, read Fortune’s interview with Nvidia CEO Jen-Hsun Huang.)

Neural nets aren’t new. The concept dates back to the 1950s, and many of the key algorithmic breakthroughs occurred in the 1980s and 1990s. What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well. “This is deep learning’s Cambrian explosion,” says Frank Chen, a partner at the Andreessen Horowitz venture capital firm, alluding to the geological era when most higher animal species suddenly burst onto the scene.

That dramatic progress has sparked a burst of activity. Equity funding of AI-focused startups reached an all-time high last quarter of more than $1 billion, according to the CB Insights research firm. There were 121 funding rounds for such startups in the second quarter of 2016, compared with 21 in the equivalent quarter of 2011, that group says. More than $7.5 billion in total investments have been made during that stretch—with more than $6 billion of that coming since 2014. (In late September, five corporate AI leaders—Amazon, Facebook, Google, IBM, and Microsoft—formed the nonprofit Partnership on AI to advance public understanding of the subject and conduct research on ethics and best practices.)

Google had two deep-learning projects underway in 2012. Today it is pursuing more than 1,000, according to a spokesperson, in all its major product sectors, including search, Android, Gmail, translation, maps, YouTube, and self-driving cars. IBM’s Watson system used AI, but not deep learning, when it beat two Jeopardy champions in 2011. Now, though, almost all of Watson’s 30 component services have been augmented by deep learning, according to Watson CTO Rob High.

Venture capitalists, who didn’t even know what deep learning was five years ago, today are wary of startups that don’t have it. “We’re now living in an age,” Chen observes, “where it’s going to be mandatory for people building sophisticated software applications.” People will soon demand, he says, “ ‘Where’s your natural-language processing version?’ ‘How do I talk to your app? Because I don’t want to have to click through menus.’ ”

Some companies are already integrating deep learning into their own day-to-day processes. Says Peter Lee, cohead of Microsoft Research: “Our sales teams are using neural nets to recommend which prospects to contact next or what kinds of product offerings to recommend.”

The hardware world is feeling the tremors. The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations. This past August, Nvidia announced that quarterly revenue for its data center segment had more than doubled year over year, to $151 million. Its chief financial officer told investors that “the vast majority of the growth comes from deep learning by far.” The term “deep learning” came up 81 times during the 83-minute earnings call.

Chip giant Intel isn’t standing still. In the past two months it has purchased Nervana Systems (for more than $400 million) and Movidius (price undisclosed), two startups that make technology tailored for different phases of deep-learning computations.

For its part, Google revealed in May that for over a year it had been secretly using its own tailor-made chips, called tensor processing units, or TPUs, to implement applications trained by deep learning. (Tensors are arrays of numbers, like matrices, which are often multiplied against one another in deep-learning computations.)
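
As a rough illustration of that parenthetical (with sizes chosen arbitrarily for the example), a single neural-network layer boils down to multiplying an input tensor by a weight tensor and passing the result through a simple nonlinearity:

```python
# Illustration of the tensor arithmetic mentioned above: one neural-net
# layer is essentially a matrix multiply plus a simple nonlinearity.
# (The sizes here are arbitrary, chosen only for the example.)
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=(64, 128))      # a batch of 64 inputs, 128 features each
W = rng.normal(size=(128, 256))     # the layer's learned weights
b = np.zeros(256)                   # the layer's learned biases

h = np.maximum(0, x @ W + b)        # multiply tensors, add bias, apply ReLU
print(h.shape)                      # (64, 256)
```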

Indeed, corporations just may have reached another inflection point. “In the past,” says Andrew Ng, chief scientist at Baidu Research, “a lot of S&P 500 CEOs wished they had started thinking sooner than they did about their Internet strategy. I think five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy.”

Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

Think of deep learning as a subset of a subset. “Artificial intelligence” encompasses a vast range of technologies—like traditional logic and rules-based systems—that enable computers and robots to solve problems in ways that at least superficially resemble thinking. Within that realm is a smaller category called machine learning, which is the name for a whole toolbox of arcane but important mathematical techniques that enable computers to improve at performing tasks with experience. Finally, within machine learning is the smaller subcategory called deep learning.

One way to think of what deep learning does is as “A to B mappings,” says Baidu’s Ng. “You can input an audio clip and output the transcript. That’s speech recognition.” As long as you have data to train the software, the possibilities are endless, he maintains. “You can input email, and the output could be: Is this spam or not?” Input loan applications, he says, and the output might be the likelihood a customer will repay it. Input usage patterns on a fleet of cars, and the output could advise where to send a car next.
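
Here is a toy sketch of one such A-to-B mapping, the email-to-spam example, using scikit-learn's small neural-network classifier. The handful of emails and labels below are invented purely for illustration; a production system would learn from millions of labeled messages.

```python
# A toy sketch of one of Ng's "A to B" mappings: input an email (A), output
# spam-or-not (B). The tiny dataset is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

emails = [
    "win a free prize now, click here",
    "limited offer, claim your free money today",
    "meeting moved to 3pm, see agenda attached",
    "can you review the quarterly report draft?",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# A: turn each email into a vector of word counts.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails).toarray()

# B: a small neural network learns the mapping from word counts to spam/not.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, labels)

test = vectorizer.transform(["claim your free prize today"]).toarray()
print(model.predict(test))  # expected: [1], i.e. flagged as spam
```

Swap in different inputs and outputs, loan applications and repayment odds, say, and the same recipe applies; only the data changes.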

Deep learning, in that vision, could transform almost any industry. “There are fundamental changes that will happen now that computer vision really works,” says Jeff Dean, who leads the Google Brain project. Or, as he unsettlingly rephrases his own sentence, “now that computers have opened their eyes.”

Does that mean it’s time to brace for “the singularity”—the hypothesized moment when superintelligent machines start improving themselves without human involvement, triggering a runaway cycle that leaves lowly humans ever further in the dust, with terrifying consequences?

Not just yet. Neural nets are good at recognizing patterns—sometimes as good as or better than we are at it. But they can’t reason.

The first sparks of the impending revolution began flickering in 2009. That summer Microsoft’s Lee invited neural nets pioneer Geoffrey Hinton, of the University of Toronto, to visit. Impressed with his research, Lee’s group experimented with neural nets for speech recognition. “We were shocked by the results,” Lee says. “We were achieving more than 30% improvements in accuracy with the very first prototypes.”

In 2011, Microsoft introduced deep-learning technology into its commercial speech-recognition products, according to Lee. Google followed suit in August 2012.

But the real turning point came in October 2012. At a workshop in Florence, Italy, Fei-Fei Li, the head of the Stanford AI Lab and the founder of the prominent annual ImageNet computer-vision contest, announced that two of Hinton’s students had invented software that identified objects with almost twice the accuracy of the nearest competitor. “It was a spectacular result,” recounts Hinton, “and convinced lots and lots of people who had been very skeptical before.” (In last year’s contest a deep-learning entrant surpassed human performance.)

Cracking image recognition was the starting gun, and it kicked off a hiring race. Google landed Hinton and the two students who had won that contest. Facebook signed up French deep learning innovator Yann LeCun, who, in the 1980s and 1990s, had pioneered the type of algorithm that won the ImageNet contest. And Baidu snatched up Ng, a former head of the Stanford AI Lab, who had helped launch and lead the deep-learning-focused Google Brain project in 2010.

The hiring binge has only intensified since then. Today, says Microsoft’s Lee, there’s a “bloody war for talent in this space.” He says top-flight minds command offers “along the lines of NFL football players.”

Geoffrey Hinton, 68, first heard of neural networks in 1972 when he started his graduate work in artificial intelligence at the University of Edinburgh. Having studied experimental psychology as an undergraduate at Cambridge, Hinton was enthusiastic about neural nets, which were software constructs that took their inspiration from the way networks of neurons in the brain were thought to work. At the time, neural nets were out of favor. “Everybody thought they were crazy,” he recounts. But Hinton soldiered on.

Neural nets offered the prospect of computers’ learning the way children do—from experience—rather than through laborious instruction by programs tailor-made by humans. “Most of AI was inspired by logic back then,” he recalls. “But logic is something people do very late in life. Kids of 2 and 3 aren’t doing logic. So it seemed to me that neural nets were a much better paradigm for how intelligence would work than logic was.” (Logic, as it happens, is one of the Hinton family trades. He comes from a long line of eminent scientists and is the great-great-grandson of 19th-century mathematician George Boole, after whom Boolean searches, logic, and algebra are named.)

During the 1950s and ’60s, neural networks were in vogue among computer scientists. In 1958, Cornell research psychologist Frank Rosenblatt, in a Navy-backed project, built a prototype neural net, which he called the Perceptron, at a lab in Buffalo. It used a punch-card computer that filled an entire room. After 50 trials it learned to distinguish between cards marked on the left and cards marked on the right. Reporting on the event, the New York Times wrote, “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

The Perceptron, whose software had only one layer of neuron-like nodes, proved limited. But researchers believed that more could be accomplished with multilayer—or deep—neural networks.
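
For the curious, here is a minimal sketch of a single-layer perceptron trained with Rosenblatt's learning rule on a task loosely analogous to the left-versus-right card experiment. The four-"pixel" card encoding is invented for illustration, not a reconstruction of the Navy setup. A single layer like this can only draw a straight dividing line through its inputs, which is why it cannot learn patterns such as XOR, and why researchers pushed toward multilayer networks.

```python
# A minimal sketch of a single-layer perceptron, loosely analogous to
# Rosenblatt's left-vs-right card experiment. The 4-"pixel" card encoding
# here is invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

def make_card(marked_left):
    """A 'card' is 4 pixels; the mark lands on the left or right half."""
    card = np.zeros(4)
    card[rng.integers(0, 2) if marked_left else rng.integers(2, 4)] = 1.0
    return card

# 50 trials, echoing the Perceptron's training run.
X, y = [], []
for _ in range(50):
    marked_left = rng.random() < 0.5
    X.append(make_card(marked_left))
    y.append(1 if marked_left else -1)
X, y = np.array(X), np.array(y)

# Rosenblatt's rule: nudge the weights whenever a card is misclassified.
w = np.zeros(4)
b = 0.0
for _ in range(10):                      # a few passes over the trials
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:       # wrong (or undecided) answer
            w += yi * xi
            b += yi

predictions = np.sign(X @ w + b)
print("cards classified correctly:", int((predictions == y).sum()), "of", len(y))
```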
