July 16, 2004.
That's the day everything changed for Dr. Dharmendra Modha.
Most people don't remember the exact day they realized what they wanted to do with the rest of their lives. Maybe it was a crisp fall day halfway through high school, or in college, or even middle school.
But that’s not the case for Modha. His “day” was July 16, 2004—and he remembers it vividly.
By 2004, Modha was already well on his way to being considered a computing pioneer. He joined IBM after receiving his bachelor’s in computer science from the Indian Institute of Technology and his Ph.D. in electrical and computer engineering from the University of California, San Diego.
Once at IBM, Modha completed a series of extremely successful projects. He invented a code that went into every IBM disk drive; he invented algorithms to visualize data in tens of thousands of dimensions, which eventually became part of Watson; and he invented caching algorithms for large storage systems, which have generated billions of dollars for IBM over the years.
“But then, I became acutely aware of the finiteness of life,” Modha recalled to R&D Magazine. “I wanted to do something that could have a paradigm-shifting effect on the field of computing. Something that would make the world better in a deep sense. But it had to have maybe just a sliver of chance of working. A very high-risk, high-leverage project.”
After meditating for a year on what to do next, Modha came up with just what he wanted—the crazy, almost impossible idea to build a brain-inspired computer.
But can someone really build a computer inspired by the brain? After all, the human brain boasts about 100 trillion (10¹⁴) synapses and 100 billion (10¹¹) neurons firing anywhere from five to 50 times per second.
The point was never to compete with existing computers, Modha explains. “It was always, how can we complement today’s computers?”
Cognitive computing, or brain-inspired computing, aims to emulate the human brain’s abilities for perception, action and cognition. Traditional computers are symbolic, fast and sequential with a focus on language and analytical thinking—much like the left brain.
The neurosynaptic chips Modha and his team design are much more like the right brain—slow, synthetic, capable of addressing the five senses as well as pattern recognition.
Today’s chip—called TrueNorth—features 1 million neurons, 256 million synapses, consumes 17 milliwatts of power and is about 4 square centimeters in size.
Based on an innovative algorithm just published in September, TrueNorth can efficiently implement inference with deep networks to classify image data at 1,200 to 2,600 frames per second while consuming a mere 25 to 275 milliwatts. This means the chip can detect patterns in real-time from 50 to 100 cameras at once—each with 32x32 color pixels and streaming information at the standard TV rate of 24 fps—while running on a smartphone battery for days without recharging.
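The camera claim follows directly from arithmetic on the quoted frame rates; here is a quick sanity check (the helper function name is ours, for illustration only):

```python
# Back-of-the-envelope check of the camera claim, using the figures quoted
# above: 1,200-2,600 classified frames/s, cameras streaming at 24 fps each.
TV_FPS = 24  # standard TV frame rate cited in the article

def cameras_supported(classification_fps: float, camera_fps: float = TV_FPS) -> int:
    """How many camera streams a given classification rate can keep up with."""
    return int(classification_fps // camera_fps)

low = cameras_supported(1200)   # 1200 / 24 = 50 cameras
high = cameras_supported(2600)  # 2600 / 24 = 108 cameras, roughly the 100 quoted
```

At the low end the chip keeps pace with exactly 50 streams; at the high end, just over 100, matching the "50 to 100 cameras" figure.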
“The new milestone provides a palpable proof-of-concept that the efficiency of brain-inspired computing can be merged with the effectiveness of deep learning, paving the path towards a new generation of cognitive computing spanning mobile, cloud and supercomputers,” Modha explained.
The novel algorithm builds off the scaled-up platform IBM was able to deliver to Lawrence Livermore National Laboratory in March 2016. Called NS16e, the configuration consists of a 16-chip array of TrueNorth processors designed to run large-scale networks that do not fit on a single chip. The NS16e System interconnects TrueNorth chips via a built-in chip-to-chip message-passing interface that does not require additional circuitry or firmware.
Both the algorithm and the scaled-up version of TrueNorth are the culmination of 12 ½ years of research and development, dating all the way back to that July day in 2004.
The beginning and the middle
Once the project received a green light and funding from IBM in 2006, Modha quickly identified three elements that were crucial to the success of his computer: neuroscience, supercomputing and architecture.
After all, to build a brain-inspired computer, one must first understand how the brain works. Modha and his team consumed every bit of published information available about the brain, including 30 years of research on neurons. They ended up mapping the largest long-distance wiring diagram of the brain: 383 regions of the macaque monkey brain, linked by 6,602 connections.
Besides being “the most beautiful illustration” Modha has ever seen, the map successfully provided the researchers with a platform to study the brain as a network.
The team turned to supercomputing simulations next. Luckily, they didn’t have to go far, as IBM is behind some of the most important milestones in supercomputing history, including the development of the Blue Gene/L, Blue Gene/P and Blue Gene/Q.
Modha carried out a series of increasingly large and complex simulations on the biggest Blue Gene supercomputers IBM had to offer. The largest, run on the Blue Gene/Q, simulated a brain-like graph at a scale of 100 trillion (10¹⁴) synapses.
While that’s the same scale as the number of synapses in the human brain, there was a catch: the simulation ran 1,500x slower than real time, even when using much simpler connectivity and computation than the brain.
“We figured a hypothetical computer designed to run the brain’s 100 trillion synapses in real-time would require 12 gigawatts of power,” Modha said, explaining what he learned from the supercomputer simulations. “That’s enough to power NYC and LA. In contrast, the human brain consumes just 20 watts. So, there’s a billion-fold disparity between modern computers and what the brain can do. And that’s really what led us to the third element.”
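Modha’s billion-fold figure holds up as rough arithmetic on the numbers in the quote above:

```python
# Rough check of the disparity Modha describes: a hypothetical real-time,
# brain-scale machine at 12 gigawatts versus the brain's ~20-watt budget.
hypothetical_watts = 12e9   # 12 GW, from the Blue Gene/Q extrapolation
brain_watts = 20.0          # approximate human brain power draw

disparity = hypothetical_watts / brain_watts  # 6e8 -- roughly a billion-fold
```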
The third element was perhaps the riskiest, and thereby potentially the most rewarding. Modha wanted to turn 70+ years of computing on its head by designing a brand-new architecture completely different from the traditional von Neumann design.
Described in 1945 and prevalent in most of today’s computers, von Neumann architecture refers to an electronic digital computer that shares a bus between program memory and data memory. This shared bus leads to a limited throughput (data transfer rate) between the CPU and memory compared with the amount of memory. This means power must increase as the communication rate (clock frequency) increases.
Of course, Modha turned to the brain for inspiration on how to design a new architecture. His research turned up a neuroscience hypothesis that the brain is composed of canonical, cortical microcircuits, or tiny circuits that compose the fabric of the cerebral cortex. Applying this to computing, Modha sought to design an architecture based on tiny modules that could be tiled to create an overall system—which is precisely what TrueNorth is.
“To prove the hypothesis, in 2011, we demonstrated a tiny little module, a neurosynaptic core with 256 neurons, the scale of a worm brain,” Modha explained. “This tiny little module formed the foundation. Then we shrank this core in area by an order of magnitude, in power by two orders of magnitude, then tiled 4,096 of these tiny cores to create the chip that is now called TrueNorth.”
TrueNorth’s brain-inspired architecture consists of a network of neurosynaptic cores that are distributed and operated in parallel. Unlike von Neumann architecture, TrueNorth’s computation, memory, and communication are integrated, which results in a cool operating environment (allowing the chips to be stacked) and low power operation. Individual cores can fail and yet, like the brain, the architecture can still function. Cores on the same chip communicate with one another via an on-chip event-driven network. Chips communicate via an inter-chip interface leading to seamless scalability.
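The event-driven, integrate-and-fire behavior described above can be sketched as a toy model. This is purely illustrative, assuming a simple leaky integrate-and-fire neuron; it is not TrueNorth’s actual circuit design:

```python
# Toy sketch of an event-driven neurosynaptic core: leaky integrate-and-fire
# neurons exchanging binary spike events through a synapse matrix.
# Illustrative only -- NOT TrueNorth's real design, just the idea of
# "compute where the memory is, communicate with spike events".
import random

class ToyCore:
    def __init__(self, n_neurons, threshold=1.0, leak=0.1, seed=0):
        rng = random.Random(seed)
        # synapses[i][j] == 1 means axon j feeds neuron i
        self.synapses = [[rng.randint(0, 1) for _ in range(n_neurons)]
                         for _ in range(n_neurons)]
        self.potential = [0.0] * n_neurons  # membrane potentials
        self.threshold = threshold
        self.leak = leak

    def tick(self, input_spikes):
        """One time step: integrate incoming spikes, apply leak, fire, reset."""
        out = []
        for i, row in enumerate(self.synapses):
            v = self.potential[i]
            v += sum(w for w, s in zip(row, input_spikes) if s)
            v = max(v - self.leak, 0.0)   # leak, floored at zero
            if v >= self.threshold:
                out.append(1)             # spike event routed onward
                v = 0.0                   # reset after firing
            else:
                out.append(0)
            self.potential[i] = v
        return out

core = ToyCore(n_neurons=16)
spikes = [1, 1, 1] + [0] * 13             # inject three spike events
for _ in range(5):
    spikes = core.tick(spikes)            # spikes feed back in, event-driven
```

Because the cores only exchange spike events, tiling many of them (as TrueNorth does with 4,096) requires nothing more than routing those events between cores.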
This version of TrueNorth—literally a supercomputer the size of a postage stamp with the power of a hearing aid battery—debuted in 2014.
Success through collaboration
Collaboration—both internally and externally—was absolutely vital to the success of TrueNorth.
“If it takes a village to raise a child, it takes a community to bring something like this from a doodle on the back of a napkin to reality,” said Modha.
Externally, IBM and Modha collaborated with more than 200 universities, government labs, companies and non-profits—Lawrence Livermore National Laboratory, Samsung, Stanford University, Cornell University, Columbia University—the list goes on and on.
Internally, Modha’s group worked with multiple labs within IBM, including the semiconductor lab, which played a vital role in designing emerging material for TrueNorth.
Additionally, Modha says his immediate team was deeply collaborative and very flat. There was no established hierarchy, in order to foster an environment in which every member’s creativity was heard. No matter how young, how inexperienced, how new to the project, or how different, all perspectives were considered before the team collectively settled on a unified direction.
In August 2015, Modha held a three-week “TrueNorth Bootcamp,” where he began to unveil and teach the ecosystem he developed. Representatives from more than 40 universities, government labs and non-profits across five continents were in attendance.
“This was key because what we developed is not a point technology,” Modha said. “We developed a substrate, a platform that is going to revolutionize computing from IoT, smartphones, mobile tech, embedded computing, robotics, cars, cameras, imaging machines to cloud and supercomputing. This isn’t about one application or one algorithm or one architecture, it’s really about a pervasive platform that could truly touch on all aspects of computing. It's bringing to bear the creativity of the community here to truly push the boundaries of innovation and possibility.”
Another aspect that was vital to the project—and will remain so as research continues—is long-term motivation. This research has gone on for more than a decade, and endured in an R&D climate that expects short-term gains in a long-term field.
From an organizational perspective, the environment at IBM is designed to “enable, nurture and protect long-term efforts” to make the world a better place, according to Modha.
From a team perspective, Modha said he has come to the conclusion that the second law of thermodynamics (the entropy of the universe tends to a maximum) is evil.
“The key to managing a project over the long-term is two-fold—know where we are headed so one can continue to construct complexity in the desired direction, and know where we are not headed so as to prevent the second law of thermodynamics from creeping in and creating heat and entropy that does not further purposeful motion.”
The third perspective is personal motivation. And for Modha, it goes all the way back to that decision he made on July 16, 2004—create something that endures through time and uncovers fundamental principles of computing at the forefront of knowledge. And make it available in the service of making the world work better.
“That’s what motivates me personally every day.”
Along with TrueNorth, IBM developed an end-to-end ecosystem for developing applications on these brain-inspired chips that includes a simulator, a programming language, sample algorithms/applications, a library and a teaching curriculum.
It currently sits in the hands of 430 researchers at more than 40 institutions worldwide, but—ever the collaborator—Modha is looking to expand the user base even further in the coming year.
Chip-wise, Modha says the next steps are very clear. In the next five to 10 years, he and his team want to create a brain-in-a-box—a supercomputer the size of a shoebox with 10 billion neurons and one hundred trillion synapses that consumes less than 1 kilowatt of power.
“I have a palpable sense that we are at a turning point in the history of computing,” Modha said. “The technological and practical possibilities are immense and could touch every sphere of science, technology, business, government and society. I am optimistic that the enduring value of our work will be the inspiration of a completely different way of thinking about computing. We are not there yet. TrueNorth is a direction and not a destination. The end goal is building intelligent business machines that enable a cognitive planet, while transforming industries.”