
IBM's SyNAPSE carves out neural network simulation the company insists FPGAs can't follow

September 03, 2013

Neural network processing is a specialized massively parallel VLSI world that only occasionally involves FPGA architectures. But when Dharmendra Modha, principal SyNAPSE investigator at IBM Research, began dismissing the potential of FPGAs in August interviews on synaptic chips, some clearing of the neural-network air seemed necessary.

IBM has turned up the heat on the DARPA-funded SyNAPSE project, unveiling a "corelet" programming model at a neural-network conference in Dallas in August. The SyNAPSE hardware appears to be optimal for the parallel tasks and sparse distributed memory seen in neural networks, but it is useful to remember that the renaissance in neural-network studies has been going on for nearly 25 years. Most work involves software simulation, but processors designed as network nodes have been promoted every few years or so since the early 1990s.
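Sparse distributed memory, as the term is usually used, refers to Kanerva's model: a pattern is written into every "hard" storage location whose address falls within a Hamming radius of the target address, and read back by majority vote over the same neighborhood, which makes recall robust to noisy addresses. The Python sketch below is a generic textbook construction, not anything IBM has published; the word width, location count, and radius are illustrative:

import numpy as np

rng = np.random.default_rng(1)
N, M, RADIUS = 256, 1000, 111        # word width, hard locations, Hamming radius
hard_addrs = rng.integers(0, 2, size=(M, N))
counters = np.zeros((M, N), dtype=int)

def activated(addr):
    # Hard locations within RADIUS Hamming bits of the query address
    return (hard_addrs != addr).sum(axis=1) <= RADIUS

def write(addr, data):
    # Each activated location's counters move toward the stored bits
    counters[activated(addr)] += 2 * data - 1

def read(addr):
    # Majority vote across the activated locations' counters
    return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)              # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1   # corrupt 20 address bits
print((read(noisy) == pattern).mean())    # recall accuracy, typically near 1.0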

Carver Mead, the grandfather of EDA software and mixed-signal chips, has been touting analog VLSI for 20 years as the best means of simulating the long-term potentiation utilized by synaptic connections in the development of human memory. Analog VLSI better represents the spiking of electrical current in a human neuron, Mead says.
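The spiking behavior Mead has in mind is often modeled as a leaky integrate-and-fire neuron: the membrane voltage integrates input current, leaks back toward rest, and emits a spike when it crosses a threshold. A digital simulator must step through this in discrete time, which is exactly the contrast Mead draws with analog circuits that compute it natively. Here is a minimal Python sketch, with all membrane parameters chosen purely for illustration:

# Discrete-time leaky integrate-and-fire (LIF) neuron -- a digital
# approximation of dynamics that analog VLSI computes in continuous time.
def simulate_lif(input_current, dt=1e-4, tau_m=0.02, r_m=1e7,
                 v_rest=-0.070, v_thresh=-0.054, v_reset=-0.080):
    """Return spike times (s) for a list of input currents (A), one per step."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (v_rest - v + R*I) / tau_m
        v += dt * (v_rest - v + r_m * i_in) / tau_m
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset the membrane after firing
    return spikes

# 200 ms of constant 2 nA input yields a regular spike train
print(simulate_lif([2e-9] * 2000))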

Since many neural-network researchers seem frightened of mixed-signal VLSI, there have been attempts at applying fuzzy logic and quantum computing methods to the synaptic nodes that constitute the core processing elements in neural computing. Others are more traditional, utilizing small, low-power cores like ARM, or power-efficient FPGAs like Spartan or Igloo, as the network nodes. In fact, promoters of FPGAs for neural networks have published a book to match Mead's own.

Let's remember, even the massively parallel RISC devices of yesteryear, like the Inmos Transputer, served as synaptic processors. IBM is rolling SyNAPSE over well-trod territory.

Originally, it was assumed that Exclusive-OR operations and the other hard problems tackled with neural-network back-propagation (the error-correcting training procedure required any time a neural network has "hidden layers") would demand a powerful FPGA like Virtex or Stratix. More recently, the low-power advantages of mid-range FPGAs like Spartan or Cyclone, often hosting soft RISC cores, seem to be favored.
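The Exclusive-OR example is the classic one: a single-layer perceptron cannot separate XOR, but one hidden layer trained by back-propagation learns it in a few thousand gradient steps. A minimal NumPy sketch follows; the hidden-layer width, learning rate, and iteration count are arbitrary choices for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)       # XOR truth table

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)        # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)        # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass through hidden and output layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error through the hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())   # approaches [0, 1, 1, 0]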

When European and North American neural network research is taken as a whole, it seems the standalone ARM chip is still the favored network node, just as ARM is favored in many embedded and wireless worlds. This may be due to the ease of programming the ARM RISC core, and the wealth of software available for it.

This is not to dismiss SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics). IBM has worked with HRL (the former Hughes Research Laboratories) and Hewlett-Packard on a new neural processor architecture since 2008. The "corelet" element in a SyNAPSE design is composed of decidedly non-von Neumann computing elements such as a "liquid state machine" and a "stackable classifier." Since IBM is also working on analog I/O for SyNAPSE that uses the best of Carver Mead's concepts, it's certain the architecture will play a key role in neural networks of the future.
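A liquid state machine belongs to the reservoir-computing family: a fixed, random recurrent network (the "liquid") maps an input stream into a high-dimensional state, and only a simple linear readout is trained. The Python sketch below illustrates that general idea with a rate-based echo-state reservoir rather than spiking neurons, and it implies nothing about IBM's corelet internals; the reservoir size, weight scales, and task are invented for illustration:

import numpy as np

rng = np.random.default_rng(2)
T, N = 500, 100                               # time steps, reservoir size
u = np.sin(np.arange(T) * 0.2)                # input signal
target = np.roll(u, 5)                        # task: recall input 5 steps back

W_in = rng.normal(scale=0.5, size=N)
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))   # fixed random recurrence

x = np.zeros(N); states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])          # the untrained "liquid"
    states[t] = x

# Train only the linear readout with least squares (skip the warm-up steps)
W_out, *_ = np.linalg.lstsq(states[50:], target[50:], rcond=None)
pred = states[50:] @ W_out
print(np.sqrt(np.mean((pred - target[50:]) ** 2)))    # small RMS error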

But let's remember, FPGAs like the Tabula ABAX utilize unusual time-multiplexed "Spacetime" logic that might be best in neuron simulation. Similarly, tiny ARM Cortex cores in massively parallel boards or hybrid chips might make great neural-network simulators.

It's no surprise that Modha wants to claim some bragging rights for the new chip in town, and SyNAPSE deserves applause. But it's very likely that FPGAs, as well as standalone embedded RISC processors, will continue to play a role in brain simulations of the future.
