
Neuromorphic Computing: Building Chips That Think in Spikes

The human brain runs on roughly 20 watts - less than a dim light bulb - yet it outperforms the most powerful supercomputers at tasks like vision, language, and motor control. Meanwhile, training a single large language model can consume megawatts of electricity over weeks. The disparity is not just a matter of scale; it is a matter of architecture. Conventional computers shuttle data endlessly between memory and processor. Brains do not. They compute with spikes - brief electrical pulses that carry information in their timing as much as their presence. Neuromorphic computing asks a radical question: what if we built hardware that works the same way?

How Biological Neurons Compute

In 1952, Hodgkin and Huxley published their Nobel Prize-winning model of the action potential - the brief, all-or-nothing electrical pulse that neurons use to communicate [1]. A neuron sits at a resting membrane potential. Incoming signals from other neurons incrementally raise or lower this voltage. Only when the accumulated input crosses a specific threshold does the neuron "fire" - generating a spike that propagates down its axon to the next set of neurons.

Two properties of this mechanism are crucial. First, it is event-driven: a neuron that receives no input consumes almost no energy. There is no clock ticking, no idle polling loop. Second, it is temporal: information is encoded not only in which neurons fire but in when they fire relative to each other. The precise timing of spikes carries meaning that a simple firing-rate average would lose.
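The threshold-and-fire behaviour described above is commonly abstracted as a leaky integrate-and-fire (LIF) neuron. The sketch below is a minimal illustration, not from the article; the parameter values (time constant, threshold, reset level) are illustrative assumptions:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters (tau, v_thresh, v_reset) are illustrative, not biological constants.

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the neuron fires."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Membrane voltage leaks toward rest and integrates the input.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_thresh:           # threshold crossed: fire a spike
            spike_times.append(t)
            v = v_reset             # reset after the spike
    return spike_times

# With no input the neuron stays silent (event-driven: no input, no activity).
print(simulate_lif([0.0] * 20))     # → []
# A steady sub-threshold input accumulates until the threshold is crossed.
print(simulate_lif([0.15] * 20))    # → [10]
```

Note that the silent case costs nothing in a real neuromorphic circuit: with no incoming spikes there is simply no work to do, which is the event-driven property in miniature.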

From Biology to Silicon: The Neuromorphic Vision

Carver Mead coined the term "neuromorphic" in 1990 to describe electronic systems that mimic neuro-biological architectures [2]. His insight was that analogue transistor physics - the very behaviour that digital designers spend billions suppressing - naturally resembles the ion-channel dynamics of real neurons. Instead of fighting physics to build perfect digital switches, neuromorphic engineering exploits physics to build efficient, brain-like computation.

Wolfgang Maass formalised the computational theory behind this approach, defining Spiking Neural Networks (SNNs) as the "third generation" of neural network models [3]. While first-generation networks (perceptrons) process binary values and second-generation networks (backpropagation-trained ANNs) process continuous values, SNNs process discrete pulses over time. Maass showed that SNNs are at least as computationally powerful as second-generation networks and can, in certain temporal tasks, be strictly more powerful.

The Von Neumann Bottleneck

To understand why neuromorphic hardware matters, consider what it replaces. In a conventional von Neumann architecture, a single bus connects memory and processor. Every instruction and every data value must travel through this bottleneck. GPUs mitigate the problem with massive parallelism, but they still rely on the same fundamental memory-processor separation and consume hundreds of watts doing so.

Brains have no such bottleneck. Memory and computation are co-located at every synapse. Processing happens wherever a spike arrives - there is no central bus, no global clock, and no idle power draw. This is the architecture that neuromorphic chips aim to replicate in silicon.

Hardware: TrueNorth and Loihi

IBM's TrueNorth chip, published in Science in 2014, was a milestone. It packs one million programmable spiking neurons and 256 million synapses onto a single chip, while consuming just 70 milliwatts during real-time operation - orders of magnitude less than a conventional processor performing comparable pattern-recognition tasks [4]. TrueNorth demonstrated that large-scale spiking networks could be physically realised with practical power budgets.

Intel's Loihi, introduced in 2018, pushed the concept further by adding on-chip learning. Loihi's 128 neuromorphic cores support programmable synaptic plasticity rules, allowing the chip to learn in real time without off-chip training [5]. For constrained optimisation problems, Loihi has demonstrated energy-delay improvements of over three orders of magnitude compared to conventional CPU solutions. This makes it particularly promising for edge computing and robotics, where real-time processing must happen on a tight power budget.

The Temporal Dimension

Perhaps the deepest difference between SNNs and traditional Artificial Neural Networks is the role of time. In a standard ANN, an input is presented, activations propagate forward, and an output is produced - time plays no role beyond sequential batch processing. In an SNN, the exact moment a spike arrives matters. Two spikes arriving at the same synapse one millisecond apart carry different information than two spikes arriving simultaneously.
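One simple form of temporal coding is a latency code, where a stronger stimulus fires earlier. The toy encoder below is an illustrative sketch (the value range and time window are assumptions, not from the article):

```python
# Toy latency code: each input value in [0, 1] is mapped to a spike time,
# with stronger inputs firing earlier. The 10 ms window is an arbitrary choice.

def latency_encode(values, t_max=10.0):
    """Map each value in [0, 1] to a spike time in [0, t_max] ms."""
    return [round(t_max * (1.0 - v), 2) for v in values]

# The *order* of the spikes now carries the ranking of the inputs:
print(latency_encode([0.9, 0.2, 0.5]))   # → [1.0, 8.0, 5.0]
```

A downstream spiking neuron can read off "which input was largest" simply from which spike arrives first, with no explicit comparison operation.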

This temporal coding is ideal for sensory data that is inherently time-varying: audio streams, radar signals, tactile feedback, and video. A neuromorphic vision sensor, for instance, does not capture frames at a fixed rate like a conventional camera. Instead, each pixel independently emits an event only when it detects a brightness change - producing a sparse, asynchronous stream that naturally encodes motion and edges with minimal data and minimal power.
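The per-pixel behaviour of such a sensor can be sketched in a few lines. This is a simplified model in the spirit of a dynamic vision sensor, where a pixel emits an event only when its log-brightness changes by more than a contrast threshold; the threshold value and function names here are illustrative assumptions:

```python
import math

# Simplified event-camera pixel: emit an event only when log-brightness
# moves more than `threshold` away from the level at the last event.
# The 0.2 contrast threshold is an illustrative value.

def pixel_events(brightness, threshold=0.2):
    """Return (time, polarity) events for one pixel's brightness trace."""
    events = []
    ref = math.log(brightness[0])        # reference level at the last event
    for t, b in enumerate(brightness[1:], start=1):
        delta = math.log(b) - ref
        if abs(delta) >= threshold:
            polarity = 1 if delta > 0 else -1   # brighter (+1) or darker (-1)
            events.append((t, polarity))
            ref = math.log(b)            # re-anchor the reference level
    return events

# A static scene produces no events at all; only change is transmitted.
print(pixel_events([1.0, 1.0, 1.0, 1.0]))   # → []
# A brightening then dimming edge yields one ON and one OFF event.
print(pixel_events([1.0, 1.3, 1.3, 1.0]))   # → [(1, 1), (3, -1)]
```

Across a full sensor array this is what produces the sparse, asynchronous stream described above: unchanging pixels contribute no data and no power.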

Why It Matters

The convergence of spiking neural networks and dedicated neuromorphic hardware opens a path toward computing systems that are orders of magnitude more energy-efficient for the kinds of perception, learning, and control tasks that biological brains excel at. As autonomous vehicles, wearable health monitors, and industrial robots demand real-time intelligence at the edge - far from a data centre's power supply - the brain's architecture stops being a metaphor and becomes an engineering specification.

References

  1. Hodgkin, A. L. & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4), 500–544. doi:10.1113/jphysiol.1952.sp004764
  2. Mead, C. (1990). Neuromorphic electronic systems. Proceedings of the IEEE, 78(10), 1629–1636. doi:10.1109/5.58356
  3. Maass, W. (1997). Networks of spiking neurons: The third generation of neural network models. Neural Networks, 10(9), 1659–1671. doi:10.1016/S0893-6080(97)00011-7
  4. Merolla, P. A. et al. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668–673. doi:10.1126/science.1254642
  5. Davies, M. et al. (2018). Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1), 82–99. doi:10.1109/MM.2018.112130359