The deep-learning software driving the modern artificial intelligence revolution has mostly run on fairly standard computer hardware. Tech giants such as Google and Intel have devoted some of their considerable resources to creating more specialized chips designed for deep learning. But IBM has taken a more unusual approach: It is testing its brain-inspired TrueNorth computer chip as a hardware platform for deep learning.
Deep learning’s powerful capabilities rely on algorithms called convolutional neural networks that consist of layers of nodes (also known as neurons). Such neural networks can filter huge amounts of data through their “deep” layers to become better at, say, automatically recognizing individual human faces or understanding different languages. These are the types of capabilities that already empower online services offered by the likes of Google, Facebook, Amazon, and Microsoft.
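The filtering a convolutional layer performs can be illustrated with a minimal sketch: a small kernel slides across an image and produces a feature map, which a nonlinearity then passes to the next layer. This is a generic textbook example, not IBM's code; the edge-detecting kernel is hand-picked here, whereas a real network learns its kernel weights from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over an image, producing a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity applied between layers."""
    return np.maximum(x, 0)

# A tiny 5 x 5 "image" with a vertical edge down the middle.
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)

# A 3 x 3 vertical-edge detector (hand-picked for illustration).
kernel = np.array([[-1, 0, 1]] * 3, dtype=float)

feature_map = relu(conv2d(image, kernel))
print(feature_map)  # responds strongly where the edge is
```

Stacking many such layers, each feeding its feature maps into the next, is what makes the network "deep."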
In recent research, IBM has shown that such deep-learning algorithms could run on brain-inspired hardware that typically supports a very different type of neural network.
IBM published a paper on its work in the 9 September 2016 issue of the journal Proceedings of the National Academy of Sciences. The research was funded with just under $1 million from the U.S. Defense Advanced Research Projects Agency (DARPA). Such funding formed part of DARPA’s Cortical Processor program aimed at brain-inspired AI that can recognize complex patterns and adapt to changing environments.
“The new milestone provides a palpable proof of concept that the efficiency of brain-inspired computing can be merged with the effectiveness of deep learning, paving the path towards a new generation of chips and algorithms with even greater efficiency and effectiveness,” says Dharmendra Modha, chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.
IBM first laid down the specifications for TrueNorth and a prototype chip in 2011. So, TrueNorth predated—and was therefore never specifically designed to harness—the deep-learning revolution based on convolutional neural networks that took off starting in 2012. Instead, TrueNorth typically supports spiking neural networks that more closely mimic the way real neurons work in biological brains.
Instead of firing every cycle, the neurons in spiking neural networks gradually build up electrical potential and fire only when that potential crosses a threshold. To achieve precision on deep-learning tasks, spiking neural networks typically have to run for multiple cycles and average the results. That effectively slows down the overall computation on tasks such as image recognition or language processing.
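The accumulate-and-fire behavior described above can be sketched with a simple integrate-and-fire model. This is an illustrative toy, not IBM's algorithm: input builds up membrane potential each cycle, the neuron spikes when the potential crosses a threshold, and the value it encodes is read out as a spike rate averaged over many cycles, which is exactly why spiking networks need multiple passes where a conventional network needs one.

```python
def run_neuron(weighted_input, threshold=1.0, cycles=100):
    """Toy integrate-and-fire neuron; returns its average firing rate."""
    potential = 0.0
    spikes = 0
    for _ in range(cycles):
        potential += weighted_input   # build up potential each cycle
        if potential >= threshold:    # fire once threshold is crossed
            spikes += 1
            potential -= threshold    # reset after firing
    return spikes / cycles            # firing rate encodes the value

print(run_neuron(0.3))   # rate close to 0.3 after 100 cycles
print(run_neuron(0.05))  # weaker input yields a lower rate
```

Run for only a handful of cycles, the rate estimate is coarse; the precision the article mentions comes from averaging over many cycles, at the cost of latency.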
Deep-learning experts have generally viewed spiking neural networks as inefficient—at least, compared with convolutional neural networks—for the purposes of deep learning. Yann LeCun, director of AI research at Facebook and a pioneer in deep learning, previously critiqued IBM’s TrueNorth chip because it primarily supports spiking neural networks. (See IEEE Spectrum’s previous interview with LeCun on deep learning.)
The IBM TrueNorth design may better support the goals of neuromorphic computing that focus on closely mimicking and understanding biological brains, says Zachary Chase Lipton, a deep-learning researcher in the Artificial Intelligence Group at the University of California, San Diego. By comparison, deep-learning researchers are more interested in getting practical results for AI-powered services and products. He explains the difference as follows:
To evoke the cliche metaphor about birds and airplanes, you might say the computational neuroscience/neuromorphic community is more concerned with studying birds, and the machine learning community more interested in understanding aerodynamics, with or without the help of biology. The deep learning community is generally bullish on the benefits of specialized hardware. [Therefore,] the neuromorphic chips don't inspire as much excitement because the spiking neural networks they focus on are not so popular in deep learning.
To make the TrueNorth chip a good fit for deep learning, IBM had to develop a new algorithm that could enable convolutional neural networks to run well on its neuromorphic computing hardware. This combined approach achieved what IBM describes as “near state-of-the-art” classification accuracy on eight data sets involving vision and speech challenges. In the best configurations, accuracy ranged from 65 to 97 percent.
When just one TrueNorth chip was being used, it surpassed state-of-the-art accuracy on just one out of eight data sets. But IBM researchers were able to boost the hardware’s accuracy on the deep-learning challenges by using up to eight chips. That enabled TrueNorth to match or surpass state-of-the-art accuracy on three of the data sets.
The TrueNorth testing also managed to process between 1,200 and 2,600 video frames per second. That means a single TrueNorth chip could detect patterns in real time from as many as 100 cameras at once, Modha says. This assumes each camera uses 1,024 color pixels (32 x 32) and streams information at a standard TV rate of 24 frames per second.
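The camera figure follows directly from the article's numbers, and the back-of-the-envelope arithmetic is easy to check: divide the chip's frame throughput by the per-camera frame rate.

```python
# Checking the quoted camera figure against the article's numbers:
# TrueNorth processed 1,200-2,600 frames per second on 32 x 32 inputs,
# and a standard TV camera streams 24 frames per second.

throughput_low = 1_200    # frames per second, low end
throughput_high = 2_600   # frames per second, high end
camera_rate = 24          # frames per second per camera

cameras_low = throughput_low // camera_rate
cameras_high = throughput_high // camera_rate

print(cameras_low, cameras_high)  # -> 50 108
```

At the high end the chip keeps pace with roughly 108 cameras, consistent with the "as many as 100" figure quoted above.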
Such results may be impressive for TrueNorth’s first major foray into deep-learning testing, but they should be taken with a grain of salt, Lipton says. He points out that the vision data sets involved fairly small-scale problems based on tiny 32 x 32 pixel images.
Still, IBM’s Modha seems enthusiastic about continuing to test TrueNorth for deep learning. He and his colleagues hope to test the chip on so-called unconstrained deep learning, which involves gradually introducing hardware constraints during the training of neural networks instead of constraining them from the very beginning.
Modha also points to TrueNorth’s general-purpose design as an advantage over more specialized deep-learning hardware built to run only convolutional neural networks; that flexibility should allow multiple types of AI networks to run on the same chip.
“Not only is TrueNorth capable of implementing these convolutional networks, which it was not originally designed for, but it also supports a variety of connectivity patterns (feedback and lateral, as well as feed forward) and can simultaneously implement a wide range of other algorithms,” Modha says.
Such biologically inspired chips would probably become popular only if they can be shown to outperform other hardware approaches for deep learning, Lipton says. But he suggests that IBM could leverage its hardware expertise to join Google and Intel in creating new specialized chips designed specifically for deep learning.