Neuromorphic computing: The long path from roots to real life

This article is part of the Technology Insight series, made possible with funding from Intel.

Ten years ago, the question was whether software and hardware could be made to work more like a biological brain, including the brain's remarkable power efficiency. Today, that question has been answered with a resounding “yes.” The challenge now is for the industry to capitalize on its decades of neuromorphic technology development and answer tomorrow's pressing, even life-or-death, computing challenges.

KEY POINTS

  • Industry partnerships and proto-benchmarks are helping advance decades of research toward practical applications in real-time computer vision, speech recognition, IoT, autonomous vehicles, and robotics.
  • Neuromorphic computing will likely complement CPU, GPU, and FPGA technologies for certain tasks — such as learning, searching and sensing — with extremely low power and high efficiency.
  • Forecasts for commercial sales vary widely, with projected CAGRs ranging from 12% to 50% through 2028.

From potential to practical 

In July, the Department of Energy’s Oak Ridge National Laboratory hosted its third annual International Conference on Neuromorphic Systems (ICONS). The three-day virtual event offered sessions from researchers around the world. All told, the conference had 234 attendees, nearly double the previous year. The final paper, “Modeling Epidemic Spread with Spike-based Models,” explored using neuromorphic computing to slow infection in vulnerable populations. At a time when better, more accurate models could guide national policies and save untold thousands of lives, such work could be crucial.

Above: Virtual attendees at the 2020 ICONS neuromorphic conference, hosted by the U.S. Department of Energy’s Oak Ridge National Laboratory.

ICONS represents a technology and surrounding ecosystem still in its infancy. Researchers laud neuromorphic computing's potential, but most advances to date have occurred in academic, government, and private R&D laboratories. That appears ready to change.

Sheer Analytics & Insights estimates that the worldwide market for neuromorphic computing in 2020 will be a modest $29.9 million, growing at a 50.3% CAGR to $780 million over the next eight years. (A 2018 KBV Research report forecast an 18.3% CAGR to $3.7 billion in 2023, while Mordor Intelligence aimed lower, at $111 million in 2019 and a 12% CAGR to reach $366 million by 2025.) Clearly, forecasts vary, but big growth seems likely. Major players include Intel, IBM, Samsung, and Qualcomm.

Researchers are still working out where practical neuromorphic computing should go first. Vision and speech recognition are likely candidates. Autonomous vehicles could also benefit from human-like learning without human-like distraction or cognitive errors. Internet of Things (IoT) opportunities range from the factory floor to the battlefield. To be sure, neuromorphic computing will not replace modern CPUs and GPUs. Rather, the two types of computing approaches will be complementary, each suited for its own sorts of algorithms and applications.

Familiarity with neuromorphic computing’s roots, and where it’s headed, is useful for understanding next-generation computing challenges and opportunities. Here’s the brief version.

Inspiration: Spiking and synapses

Neuromorphic computing began as the pursuit of using analog circuits to mimic the synaptic structures found in brains. The brain excels at picking out patterns from noise and at learning; a conventional CPU excels at processing discrete, clear data.

For that reason, many believe neuromorphic computing can unlock applications and solve large-scale problems that have stymied conventional computing systems for decades. One big issue is that von Neumann architecture-based processors must wait for data to move in and out of system memory. Cache structures help mitigate some of this delay, but the data bottleneck grows more pronounced as chips get faster. Neuromorphic processors, on the other hand, aim to provide vastly more power-efficient operation by modeling the core workings of the brain.

Neurons send information to one another in patterns of pulses called spikes. The timing of these spikes is critical; their amplitude is not. The timing itself conveys the information. Digitally, a spike can be represented as a single bit, which can be far more efficient and far less power-intensive than conventional data communication methods. Understanding and modeling of this spiking neural activity arose in the 1950s, but hardware-based application to computing didn't start to take off for another five decades.
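For readers who want to see the spiking idea in code, below is a minimal sketch of a leaky integrate-and-fire neuron in plain Python/NumPy. It is illustrative only, not tied to any particular neuromorphic chip or toolkit, and every constant in it is an arbitrary assumption; the point is that the output is a train of binary spike events whose timing, not amplitude, carries the information.

import numpy as np

# Illustrative leaky integrate-and-fire (LIF) neuron.
# All constants are arbitrary choices for demonstration purposes.
dt = 1e-3          # simulation step: 1 ms
tau = 20e-3        # membrane time constant: 20 ms
v_thresh = 1.0     # firing threshold
v_reset = 0.0      # potential after a spike

rng = np.random.default_rng(0)
input_current = 1.2 + 0.5 * rng.standard_normal(1000)  # noisy drive for 1 second

v = 0.0
spike_times = []
for step, i_in in enumerate(input_current):
    # Leaky integration: the membrane potential decays toward zero
    # and is pushed up by the input current.
    v += dt / tau * (i_in - v)
    if v >= v_thresh:
        # A spike is a single binary event; only its timing matters.
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 1 s; first few at {spike_times[:5]}")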

DARPA kicks off a productive decade

In 2008, the U.S. Defense Advanced Research Projects Agency (DARPA) launched a program called Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, “to develop low-power electronic neuromorphic computers that scale to biological levels.” The project's first phase was to develop nanometer-scale synapses that mimicked synapse activity in the brain but would function in a microcircuit-based architecture. Two competing private organizations, each backed by its own collection of academic partners, won the SyNAPSE contract in 2009: IBM Research and HRL Laboratories, which is owned by GM and Boeing.

In 2014, IBM revealed the fruits of its labors in Science, stating, “We built a 5.4-billion-transistor chip [called TrueNorth] with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. … With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.”

Above: A 4×4 array of neuromorphic chips, released as part of DARPA’s SyNAPSE project. Designed by IBM, the chip has over 5 billion transistors and more than 250 million “synapses.”

By 2011, HRL had demonstrated its first “memristor” array, a form of non-volatile memory that could be applied to neuromorphic computing. Two years later, HRL had its first neuromorphic chip, “Surfrider.” As reported by MIT Technology Review, Surfrider featured 576 neurons and ran on just 50 mW of power. Researchers built the chip into a sub-100-gram drone aircraft equipped with optical, infrared, and ultrasound sensors and flew the drone into three rooms. The drone “learned” the layout and objects of the first room through sensory input. From there, it could recognize on the fly whether it was in a new room or one it had seen before.

Above: HRL’s 2014 neuromorphic-driven quadcopter drone.

Other notable research included Stanford University's 2009 analog, synaptic approach called NeuroGrid. Until 2015, the EU funded the BrainScaleS project, which yielded a 200,000-neuron system built from 20 subsystems. The University of Manchester worked to tackle neural algorithms on low-power hardware with its Spiking Neural Network Architecture (SpiNNaker) supercomputer, built from 57,600 processing nodes, each containing eighteen 200 MHz ARM9 processors. The SpiNNaker project spotlights a particularly critical problem in this space: despite using ARM processors, the solution still spans 10 rack-mounted blade enclosures and requires roughly 100 kW to operate. Learning systems in edge-based applications don't have the liberty of such power budgets.

Intel’s wide influence

Intel Labs set to work on its own lines of neuromorphic inquiry in 2011. While working through a series of acquisitions around AI processing, Intel made a critical talent hire in Narayan Srinivasa, who came aboard in early 2016 as Intel Labs' chief scientist and senior principal engineer for neuromorphic computing. Srinivasa had spent 17 years at HRL Laboratories, where (among many other roles and efforts) he served as principal scientist and director of the SyNAPSE project. Highlights of Intel's dive into neuromorphic computing included the evolution of Intel's Loihi, a neuromorphic manycore processor with on-chip learning, and follow-on platform iterations such as Pohoiki Beach.

Above: A close-up of Intel’s Nahuku board, which contains 8 to 32 Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019.

The company also formed the Intel Neuromorphic Research Community (INRC), a global effort to accelerate development and adoption that includes Accenture, Airbus, GE, and Hitachi among its more than 100 global members. Intel says it's important to focus on creating new neuromorphic chip technologies and a broad public/private ecosystem. The latter is backed by programming tools and best practices aimed at getting neuromorphic technologies adopted and into mainstream use.

Srinivasa is also CTO at Eta Compute, a Los Angeles-based company that specializes in helping intelligent edge devices proliferate. Eta showcases how neuro-centric computing is beginning to penetrate the market. While not based on Loihi or another neuromorphic chip, Eta's current system-on-chip (SoC) targets vision and AI applications in edge devices, with operating frequencies up to 100 MHz, a sub-1 μA sleep mode, and active operation below 5 μA per MHz. In practice, Eta's solution can perform all the computation necessary to count people in a video feed on a power budget of just 5 mW. The other side of Eta's business works to enable machine learning software for this breed of ultra-low-power IoT and edge device, a space where neuromorphic chips will soon thrive.

In a similar vein, Canadian firm Applied Brain Research (ABR) also creates software tools for building neural systems. The company, which has roots in the University of Waterloo as well as INRC collaborations, also offers its Nengo Brain Board, billed as the industry’s first commercially available neuromorphic board platform. According to ABR, “To make neuromorphics easy and fast to deploy at scale, beyond our currently available Nengo Brain Boards, we’re developing larger and more capable versions with researchers at the University of Waterloo, which target larger off-the-shelf Intel and Xilinx FPGAs. This will provide a quick route to get the benefits of neuromorphic computing sooner rather than later.” Developing the software tools for easy, flexible neuromorphic applications now will make it much easier to incorporate neuromorphic processors when they become broadly available in the near to intermediate future.
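As a rough illustration of the kind of tooling ABR provides, the sketch below uses the company's open-source Nengo Python library to define and simulate a small spiking network on an ordinary CPU. The model itself (a 100-neuron ensemble tracking a sine wave) is an arbitrary example, and deploying to a Nengo Brain Board or other hardware backend involves board-specific steps not shown here.

import numpy as np
import nengo

# A small spiking network built with ABR's open-source Nengo library.
# The same model description can, in principle, be retargeted at
# different backends (the default CPU reference simulator is used here).
with nengo.Network(label="sine follower") as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # 1 Hz input signal
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)     # 100 spiking neurons
    nengo.Connection(stim, ens)                           # feed the input into the ensemble
    probe = nengo.Probe(ens, synapse=0.01)                # filtered, decoded output

with nengo.Simulator(model) as sim:   # CPU reference simulator
    sim.run(1.0)                      # simulate one second

print("decoded output shape:", sim.data[probe].shape)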

These ABR efforts in 2020 exist, in part, because of prior work the company did with Intel. As one of the earliest INRC members, ABR presented work in late 2018 using Loihi to perform audio keyword spotting. ABR revealed that “for real-time streaming data inference applications, Loihi may provide better energy efficiency than conventional architectures by a factor of 2 times to over 50 times, depending on the architecture.” Those conventional architectures included a CPU, a GPU, NVIDIA's Jetson TX1, and the Movidius Neural Compute Stick, with the Loihi solution “[outperforming] all of these alternatives on an energy cost per inference basis while maintaining equivalent inference accuracy.” Two years later, this work continues to bear fruit in ABR's current offerings and future plans.
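The “energy cost per inference” figure in that comparison is, at its core, a simple ratio of average power draw to inference throughput. Here is a hedged sketch of that bookkeeping in Python; the device names and numbers are placeholders for illustration, not measurements from ABR's study.

# Energy per inference = average power (W) / throughput (inferences per second).
# All values below are placeholders, not measured results.

def energy_per_inference_mj(avg_power_watts: float, inferences_per_sec: float) -> float:
    """Return energy per inference in millijoules."""
    return avg_power_watts / inferences_per_sec * 1e3

# Hypothetical devices running the same real-time keyword-spotting workload.
devices = {
    "generic CPU":          (15.0, 300.0),   # (watts, inferences/s) -- placeholders
    "embedded accelerator": (1.5, 300.0),
    "neuromorphic chip":    (0.1, 300.0),
}

for name, (watts, rate) in devices.items():
    print(f"{name:>22}: {energy_per_inference_mj(watts, rate):7.2f} mJ per inference")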

Benchmarks and neuromorphic’s future

Today, most neuromorphic-style workloads are handled by deep learning systems running on CPUs, GPUs, and FPGAs. None of these is optimized for neuromorphic processing, however. Chips such as Intel's Loihi were designed from the ground up for exactly these tasks. This is why, as ABR showed, Loihi could achieve the same results on a far smaller energy budget. That efficiency will prove critical in the coming generation of small devices that need AI capabilities.

Many experts believe commercial applications will arrive in earnest within the next three to five years, but that will only be the beginning. This is why, for example, Samsung announced in 2019 that it would expand its neural processing unit (NPU) division tenfold, growing from 200 employees to 2,000 by 2030. Samsung said at the time that it expects the neuromorphic chip market to grow by 52 percent annually through 2023.

One of the next challenges in the neuromorphic space will be defining standard workloads and methodologies for benchmarking. Benchmarking applications such as 3DMark and SPECint have played a critical role in helping technology adopters match products to their needs. Unfortunately, as discussed in the September 2019 issue of Nature Machine Intelligence, there are no such benchmarks in the neuromorphic space, although author Mike Davies of Intel Labs suggests a benchmark suite for spiking neuromorphic systems called SpikeMark. In a technical paper titled “Benchmarking Physical Performance of Neural Inference Circuits,” Intel researchers Dmitri Nikonov and Ian Young lay out a set of principles and a methodology for neuromorphic benchmarking.

To date, no convenient testing tool has come to market, although Intel Labs Day 2020 in early December took some big steps in this direction. Intel compared, for example, Loihi against its Core i7-9300K in solving Sudoku puzzles, with Loihi achieving up to 100x faster searching.

Researchers saw a similar 100x gain in solving Latin squares, with solutions reached at remarkably lower power consumption. Perhaps the most important result was how different types of processors and network architectures performed against Loihi on particular workloads.
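To make the workload concrete: a Latin square asks for an n×n grid in which each symbol appears exactly once per row and per column, a classic constraint-satisfaction search (Sudoku adds further constraints on top). The sketch below is an ordinary backtracking solver in Python, included only to show what such a problem looks like; it is not how Loihi's spiking search operates.

def solve_latin_square(n: int):
    """Fill an n x n grid so each value 0..n-1 appears once per row and column.

    Plain backtracking search, shown only to illustrate the workload;
    neuromorphic solvers approach this very differently (parallel spiking
    dynamics rather than sequential search).
    """
    grid = [[-1] * n for _ in range(n)]

    def ok(r, c, v):
        # The value must not already appear in row r or column c.
        return all(grid[r][j] != v for j in range(n)) and \
               all(grid[i][c] != v for i in range(n))

    def fill(cell=0):
        if cell == n * n:
            return True
        r, c = divmod(cell, n)
        for v in range(n):
            if ok(r, c, v):
                grid[r][c] = v
                if fill(cell + 1):
                    return True
                grid[r][c] = -1   # backtrack
        return False

    return grid if fill() else None

for row in solve_latin_square(5):
    print(row)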

Loihi was pitted not only against conventional processors but also against IBM's TrueNorth neuromorphic chip. Deep learning feedforward neural networks (DNNs) decidedly underperform on neuromorphic solutions like Loihi; DNNs are linear, with data moving from input to output in a straight line. Recurrent neural networks (RNNs), by contrast, work more like the brain, using feedback loops and exhibiting more dynamic behavior. RNN workloads are where Loihi shines. As Intel noted: “The more bio-inspired properties we find in these networks, typically, the better the results are.”
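The structural difference is easy to see in code. A minimal NumPy sketch follows, with arbitrary random weights: the feedforward layer treats every time step independently, while the recurrent layer feeds its previous hidden state back into each update, the feedback behavior that maps more naturally onto spiking hardware.

import numpy as np

rng = np.random.default_rng(1)
x_seq = rng.standard_normal((20, 4))          # 20 time steps, 4 input features

# Feedforward layer: each time step is processed independently,
# data flows input -> output in a straight line.
W_ff = rng.standard_normal((4, 3)) * 0.5
ff_out = np.tanh(x_seq @ W_ff)                # shape (20, 3)

# Recurrent layer: the previous hidden state feeds back into the update,
# so the output at each step depends on the whole history so far.
W_in = rng.standard_normal((4, 3)) * 0.5
W_rec = rng.standard_normal((3, 3)) * 0.5
h = np.zeros(3)
rnn_out = []
for x_t in x_seq:
    h = np.tanh(x_t @ W_in + h @ W_rec)       # feedback loop
    rnn_out.append(h)
rnn_out = np.array(rnn_out)                   # shape (20, 3)

print(ff_out.shape, rnn_out.shape)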

Comparisons like these can be thought of as proto-benchmarks. They are a necessary early step toward a universally accepted tool running industry-standard workloads. Testing gaps will eventually be filled, and new applications and use cases will arrive. Developers will continue working to deploy these benchmarks and applications against critical needs, such as modeling the spread of COVID-19.

Neuromorphic computing remains deep in the R&D stage. Today, there are virtually no commercial offerings in the field. Still, it’s becoming clear that certain applications are well suited to neuromorphic computing. Neuromorphic processors will be far faster and more power-efficient for these workloads than any modern, conventional alternatives. CPU and GPU computing isn’t disappearing; neuromorphic computing will merely slot in beside them to handle roles better, faster, and more efficiently than anything we’ve seen before.

By VentureBeat
