In 1985, a small team at Acorn Computers in Cambridge, England, powered on a prototype processor that drew less power than a single LED indicator light. The chip consumed just 0.1 watts — so little that the engineers initially believed it was not working at all. It was working. That chip was the first ARM processor, and its principal hardware designer was Steve Furber, a mathematician-turned-computer architect whose work would go on to power virtually every smartphone, tablet, and embedded device on the planet. By 2026, over 280 billion ARM-based chips have been manufactured, making the ARM architecture the most widely deployed processor design in human history. More than 99% of the world’s smartphones use ARM processors. Yet Furber, a soft-spoken academic who spent most of his career at the University of Manchester, remains far less known than the technology he helped create. His story spans the birth of the personal computer in Britain, the invention of the world’s most successful chip architecture, and a second act in neuromorphic computing that may prove equally transformative.
Early Life and Education
Stephen Byram Furber was born on March 21, 1953, in Manchester, England — a city with deep roots in computing history, home to the university where Alan Turing had worked on the Manchester Mark 1 computer in the late 1940s. Furber showed early aptitude in mathematics and science, and his academic path led him to the University of Cambridge, where he studied mathematics at St John’s College. He earned his Bachelor of Arts degree with distinction, then continued at Cambridge to complete a PhD in aerodynamics, focusing on computational fluid dynamics. His doctoral work involved writing code for numerical simulations, an experience that gave him practical programming skills and an appreciation for the relationship between hardware efficiency and computational throughput.
Furber’s path into computer architecture was not planned. After completing his PhD in 1980, he could have pursued an academic career in aerodynamics. Instead, he was drawn into the emerging personal computer revolution by an unexpected opportunity. A small Cambridge startup called Acorn Computers was hiring engineers to build a new machine commissioned by the British Broadcasting Corporation — the BBC Micro. Furber joined Acorn, and the trajectory of his career, and of computing itself, shifted permanently.
At Acorn, Furber worked alongside Sophie Wilson, a brilliant computer scientist who had designed Acorn’s earlier machines. Together, they formed a partnership that would produce two of the most consequential computing products of the 1980s: the BBC Micro and the ARM processor. Furber brought his mathematical rigor and hardware engineering intuition; Wilson brought her deep understanding of instruction set design and compiler architecture. Their collaboration was a textbook example of complementary expertise producing results that neither could have achieved alone.
The BBC Micro and Acorn’s Rise
The BBC Computer Literacy Project, launched in 1981, aimed to introduce British households and schools to computing. The BBC selected Acorn Computers to build the official microcomputer for the program after a competitive evaluation process. Furber was the primary hardware designer of the BBC Micro, responsible for the circuit board layout, the memory architecture, and the overall system design. The machine launched in December 1981 at a price of 235 pounds for the Model A and 335 pounds for the Model B.
The BBC Micro was an immediate success. Over 1.5 million units were sold, and the machine became a fixture in British schools throughout the 1980s. It ran on a 2 MHz MOS Technology 6502 processor — the same chip family that powered the Apple II and the Commodore 64 — but Furber’s hardware design maximized what could be extracted from that modest silicon. The BBC Micro offered superior expandability through its Tube interface, which allowed external second processors to be attached, and its operating system was considered unusually well-designed for its era.
More importantly, the BBC Micro established Acorn as a serious computer company and gave the team — Furber, Wilson, and their colleagues — the confidence and institutional backing to attempt something far more ambitious: designing their own processor from scratch.
The ARM Processor Breakthrough
Technical Innovation
By 1983, Acorn needed a more powerful processor for its next generation of computers, but the available options were disappointing. The Motorola 68000 and Intel 80286 were complex, power-hungry, and expensive. Furber and Wilson looked at the research coming out of the University of California, Berkeley, where David Patterson was developing the RISC (Reduced Instruction Set Computer) concept, and from Stanford, where John Hennessy was working on the MIPS architecture. The RISC philosophy was compelling: instead of building processors with hundreds of complex instructions that each took many clock cycles, you could build processors with a small number of simple instructions that each executed in a single clock cycle. The result would be a simpler, faster, more efficient chip.
Furber and Wilson decided to design their own RISC processor — the Acorn RISC Machine, or ARM. The project was remarkably lean by the standards of chip design. The entire design team numbered roughly a dozen people. Furber led the hardware implementation while Wilson designed the instruction set architecture. They completed the design in about 18 months, and VLSI Technology fabricated the first silicon in April 1985.
The ARM1 had approximately 25,000 transistors — a fraction of the transistor count of competing processors. The Intel 80386, released the same year, had 275,000 transistors. This simplicity was not a limitation but a deliberate architectural choice. Fewer transistors meant less power consumption, less heat, smaller die area, and lower manufacturing cost. The ARM1’s register-heavy architecture featured 16 general-purpose 32-bit registers, allowing the processor to keep data close to the computation and reduce expensive memory accesses.
The ARM instruction set embodied several innovations that would prove prescient. Every instruction included a conditional execution field, meaning any instruction could be conditionally executed based on processor flags without requiring a separate branch instruction. This reduced the need for short conditional branches and improved pipeline efficiency. The barrel shifter, integrated into the data path, allowed one operand of any arithmetic or logical operation to be shifted or rotated at zero additional cost. Consider how this makes ARM assembly remarkably expressive:
; ARM assembly: multiply R0 by 10 using shifts and adds
; R0 = R0 * 8 + R0 * 2 = R0 * 10
; No multiply instruction needed — just shift and add
MOV R1, R0, LSL #3 ; R1 = R0 shifted left by 3 (R0 * 8)
ADD R0, R1, R0, LSL #1 ; R0 = R1 + (R0 shifted left by 1) = R0*8 + R0*2
; Conditional execution eliminates branches entirely
; Equivalent to: if (R0 > R1) R2 = R0; else R2 = R1;
CMP R0, R1 ; compare R0 and R1, set flags
MOVGT R2, R0 ; if greater: R2 = R0
MOVLE R2, R1 ; if less or equal: R2 = R1
; No branch, no pipeline flush — three instructions, always executed
; Load/store architecture: only LDR/STR access memory
; All arithmetic operates on registers exclusively
LDR R3, [R4, #8] ; load word from memory at address R4+8
ADD R3, R3, R5 ; add R5 to R3 (register-only operation)
STR R3, [R4, #8] ; store result back to memory
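The arithmetic behind the shift-and-add sequence above is easy to check outside the processor. The following Python sketch (purely illustrative, not part of any ARM toolchain) mirrors the two instructions:

```python
# Mirror of the ARM shift-and-add multiply above:
#   MOV R1, R0, LSL #3      ->  r1 = r0 * 8
#   ADD R0, R1, R0, LSL #1  ->  r0 = r1 + r0 * 2 = r0 * 10
def multiply_by_10(r0: int) -> int:
    r1 = r0 << 3            # shift left by 3: r0 * 8
    return r1 + (r0 << 1)   # add r0 shifted left by 1: r0 * 2

# The decomposition holds for any integer, negative values included
assert all(multiply_by_10(n) == n * 10 for n in range(-1000, 1001))
print(multiply_by_10(7))    # -> 70
```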
This design elegance — doing more with less — became the defining characteristic of the ARM architecture. The load-store architecture meant that only dedicated load and store instructions accessed memory; all arithmetic operations worked exclusively on registers. This clean separation simplified the pipeline and made performance predictable. The uniform 32-bit fixed-width instruction encoding simplified the decode logic, contributing to the processor’s remarkably low power consumption.
The ARM2, which followed in 1986 with roughly 30,000 transistors, became the processor used in Acorn’s Archimedes computer. It delivered performance competitive with the Intel 80386 while consuming a fraction of the power. The ARM3 added a cache, and by the early 1990s the architecture had matured into a commercially viable platform that attracted attention far beyond Acorn’s own product line.
Why It Mattered
The timing of ARM’s low-power design philosophy proved extraordinarily fortunate. In the late 1980s and early 1990s, the computing industry was about to undergo a massive shift toward mobile and embedded devices — a shift that would reward exactly the kind of power-efficient processing that ARM provided. When Apple, Acorn, and VLSI Technology formed the joint venture Advanced RISC Machines Ltd (ARM Ltd) in November 1990, the company adopted a business model that would amplify its impact enormously: instead of manufacturing chips, ARM would license its processor designs to other companies, who would then fabricate and sell their own ARM-based chips.
This licensing model meant that ARM’s architecture could proliferate across the entire electronics industry. Texas Instruments, Samsung, Qualcomm, Apple, Huawei, MediaTek, and dozens of other companies all became ARM licensees. Each brought ARM into different markets: mobile phones, tablets, automotive systems, IoT sensors, network equipment, and eventually even servers and supercomputers. By the time Apple released the iPhone in 2007 with an ARM-based processor, the architecture was already dominant in mobile computing. Today, Apple’s own processors, from the A-series chips in every iPhone to the M-series chips in Macs and iPads, are ARM-based designs that trace their lineage directly back to the architecture Furber and Wilson created in that small Cambridge office in 1985.
The RISC principles that Furber helped bring from academic research into commercial reality also influenced the broader processor industry. Even Intel’s x86 processors, nominally CISC (Complex Instruction Set Computer) designs, internally translate their complex instructions into simpler RISC-like micro-operations for execution. The ideas that Patterson, Hennessy, Furber, and Wilson championed in the 1980s became the universal foundation of modern processor design. In recognition of this impact, Patterson and Hennessy received the ACM Turing Award in 2017 — and both have acknowledged the ARM project as one of the most important commercial validations of RISC principles in practice.
Other Major Contributions
After leaving Acorn in 1990, Furber joined the University of Manchester as the ICL Professor of Computer Engineering, a position he would hold for over three decades. At Manchester, he shifted his focus from commercial processor design to research that pushed the boundaries of what computing could become.
His most ambitious project at Manchester was SpiNNaker — Spiking Neural Network Architecture. Launched in 2006 and reaching its full million-core configuration in 2018, SpiNNaker is a massively parallel computer designed to simulate the behavior of biological neural networks in real time. The machine contains over one million ARM processor cores — chosen precisely because ARM cores are small, power-efficient, and can be packed densely — organized into a custom interconnect network that mimics the communication patterns of neurons in the brain.
SpiNNaker does not work like a conventional supercomputer. Traditional computers process data in large, synchronous batches. Biological brains, by contrast, process information through sparse, asynchronous electrical impulses called spikes. SpiNNaker was designed to model this spiking behavior efficiently, with each processor core simulating approximately 1,000 neurons and their synaptic connections. The full million-core machine can simulate the activity of roughly one billion neurons in real time, about 1% of the number of neurons in the human brain.
The architecture required solving communication challenges that had no precedent. Biological neural networks have astronomically complex connectivity patterns, and modeling these patterns on a digital machine requires sending billions of small packets between processors with minimal latency. Furber’s team designed a custom asynchronous interconnect fabric that routes packets without a global clock, using techniques from asynchronous circuit design that Furber had been studying since the mid-1990s. The concept can be understood through a simplified model of how spike events propagate:
# Simplified model of SpiNNaker spiking neural network simulation
# Each core simulates approximately 1000 neurons using
# the leaky integrate-and-fire (LIF) model
class LIFNeuron:
    """Leaky Integrate-and-Fire neuron model used in SpiNNaker"""

    def __init__(self, threshold=-55.0, rest=-70.0, reset=-70.0, leak=0.95):
        self.membrane_potential = rest  # millivolts (start at rest)
        self.rest = rest                # resting potential
        self.threshold = threshold      # spike threshold
        self.reset = reset              # post-spike reset value
        self.leak = leak                # decay factor per timestep
        self.spike_output = False

    def step(self, input_current):
        """Advance one millisecond timestep"""
        # Leak: membrane potential decays toward the resting potential
        self.membrane_potential = self.rest + \
            (self.membrane_potential - self.rest) * self.leak
        # Integrate: accumulate input from synaptic connections
        self.membrane_potential += input_current
        # Fire: if threshold exceeded, emit spike and reset
        if self.membrane_potential >= self.threshold:
            self.spike_output = True
            self.membrane_potential = self.reset
        else:
            self.spike_output = False
        return self.spike_output
# SpiNNaker routes spike event packets asynchronously
# between its one million ARM cores using a custom
# multicast fabric — no global clock, no synchronization
# Each packet: source neuron ID + routing key
# Hardware delivers spikes to all connected targets
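In software terms, the router’s job can be sketched as a ternary table lookup: SpiNNaker routers compare a packet’s routing key against stored key/mask entries. The Python below is a simplified illustration; the table layout and function names are assumptions for this sketch, not the real hardware interface:

```python
# Simplified sketch of SpiNNaker-style multicast routing.
# A spike packet carries only a routing key; each router entry
# is (key, mask, links), and the packet matches an entry when
# its key ANDed with the mask equals the entry key.
def route(packet_key, table, default_links):
    """Return the set of output links a spike packet fans out to."""
    for entry_key, mask, links in table:
        if (packet_key & mask) == entry_key:
            return links          # multicast to every matching link
    return default_links          # unmatched packets take default routes

# Illustrative table: keys beginning 0x1000 fan out to links 0, 2, 5
table = [
    (0x10000000, 0xFFFF0000, {0, 2, 5}),
    (0x20000000, 0xFFFF0000, {1}),
]

print(route(0x10000ABC, table, {3}))   # matches first entry
print(route(0x30000000, table, {3}))   # no match: default links
```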
SpiNNaker became a key platform in the European Union’s Human Brain Project, a billion-euro research initiative to understand the human brain through simulation and modeling. Furber’s machine provided the neuromorphic computing component of the project, enabling neuroscientists to test hypotheses about brain function by running simulations of neural circuits at biologically realistic timescales. The project has also contributed to the development of brain-inspired artificial intelligence algorithms and to our understanding of how neural computation differs fundamentally from conventional digital computation.
Beyond SpiNNaker, Furber made significant contributions to the field of asynchronous processor design. Most digital circuits rely on a global clock signal that synchronizes all operations. Asynchronous circuits instead use handshaking protocols between components, operating without a clock. Furber’s research group at Manchester developed the AMULET series of processors — asynchronous implementations of the ARM architecture — that demonstrated the viability of clockless processor design. AMULET1 (1993), AMULET2e (1997), and AMULET3 showed that asynchronous design could reduce power consumption and electromagnetic interference, advantages particularly valuable in embedded and mobile applications.
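The handshake idea can be made concrete with a small single-threaded model of a four-phase (return-to-zero) protocol, one common clockless signalling scheme. Real asynchronous hardware performs these wire transitions concurrently; the names below are illustrative only:

```python
# One data transfer over a clockless request/acknowledge channel.
# No clock edge drives the steps: each side acts only when it
# observes the other side's wire change.
def four_phase_transfer(value, trace):
    """Model a single four-phase handshake, recording wire transitions."""
    wires = {"req": 0, "ack": 0, "data": None}
    wires["data"] = value              # sender drives the data bundle
    wires["req"] = 1                   # 1. sender raises request
    trace.append(("req", 1))
    latched = wires["data"]            # 2. receiver latches data...
    wires["ack"] = 1                   #    ...and raises acknowledge
    trace.append(("ack", 1))
    wires["req"] = 0                   # 3. sender withdraws request
    trace.append(("req", 0))
    wires["ack"] = 0                   # 4. receiver withdraws acknowledge
    trace.append(("ack", 0))
    return latched                     # channel is idle again

trace = []
print(four_phase_transfer(42, trace))  # -> 42
print(trace)  # [('req', 1), ('ack', 1), ('req', 0), ('ack', 0)]
```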
Furber also played a pivotal role in computing education policy. He co-authored the influential 2012 report “Shut Down or Restart” for the Royal Society, which argued that computing education in British schools had degenerated into teaching students to use office software rather than to understand how computers actually work. The report contributed directly to reforms in the UK national curriculum that reintroduced programming and computational thinking into schools — changes that affected millions of students across England.
Philosophy and Approach
Key Principles
Furber’s engineering philosophy centers on a conviction that simplicity and elegance are not just aesthetic preferences but practical engineering requirements. The ARM processor succeeded not because it was the most powerful chip of its era — it was not — but because it was the most efficient. Fewer transistors meant less power, less heat, less cost, and higher reliability. This philosophy of achieving more with less ran counter to the prevailing industry trend of the 1980s and 1990s, which equated processor quality with transistor count and instruction set complexity.
His approach to the SpiNNaker project reveals another principle: that the most interesting computing problems require architectures specifically tailored to the problem domain. General-purpose processors are extraordinarily versatile, but simulating a billion neurons in real time is not a general-purpose task. It requires custom communication fabrics, event-driven processing models, and hardware designed around the physics of neural computation rather than the conventions of von Neumann architecture. This willingness to question fundamental architectural assumptions — including the assumption that all computation should be synchronous and clock-driven — has been a hallmark of Furber’s career.
Furber has consistently advocated for the importance of understanding computer architecture at a fundamental level. In an era where most software developers work at high levels of abstraction, far removed from the hardware, Furber argues that understanding how processors actually execute instructions — how caches work, why branch prediction matters, what happens during a context switch — makes developers fundamentally better at their craft. This belief connects his work on ARM, where every transistor mattered, to his advocacy for computing education reform, where he pushed for curricula that taught computational thinking rather than mere software literacy.
His career also demonstrates the value of moving between industry and academia. The ARM processor was born in a commercial setting at Acorn, driven by market pressures and product deadlines. SpiNNaker was born in an academic setting at Manchester, driven by scientific curiosity and long-term research goals. Furber has argued that both environments are essential for technological progress — industry provides urgency and real-world validation, while academia provides the freedom to pursue ideas that may take decades to bear fruit.
Legacy and Impact
Steve Furber’s contributions to computing operate at two very different scales, but both share a common thread: rethinking what a processor should be. With ARM, he challenged the assumption that processors needed to be complex to be powerful. With SpiNNaker, he challenged the assumption that processors needed to be clocked to be useful. In both cases, the results have been transformative.
The ARM architecture’s impact is quantifiable in a way that few technological achievements can match. Over 280 billion ARM-based chips have been shipped since 1990, with over 30 billion produced annually in recent years. ARM processors power virtually every smartphone on Earth, the majority of tablets, most embedded controllers, an increasing number of laptops (including every Mac Apple has shipped since beginning its transition to Apple silicon in 2020), and a growing share of cloud servers (Amazon’s Graviton and Ampere’s Altra chips). The architecture that Furber and Sophie Wilson designed on a shoestring budget at Acorn has become the computational substrate of the mobile era.
SpiNNaker’s impact, while less commercially visible, may prove equally significant in the long term. As artificial intelligence research increasingly looks toward brain-inspired computing models for efficiency gains — particularly in an era when the energy costs of training large AI models have become a major concern — neuromorphic architectures like SpiNNaker point toward a fundamentally different approach to computation. The SpiNNaker2 project, currently in development at the Technical University of Dresden in collaboration with Manchester, aims to deliver order-of-magnitude improvements in scale and efficiency, bringing brain-scale simulation closer to reality. Researchers like Geoffrey Hinton, who has argued that current deep learning approaches may need to incorporate more biologically plausible mechanisms, see neuromorphic hardware as a crucial piece of the puzzle.
Furber was appointed Commander of the Order of the British Empire (CBE) in 2008 for his services to computer science. He was elected a Fellow of the Royal Society in 2002, a Fellow of the Royal Academy of Engineering in 2005, and a Fellow of the IEEE and the British Computer Society. He received the Royal Academy of Engineering Silver Medal in 2003, the IET Faraday Medal in 2007, and the IEEE Computer Society Charles Babbage Award. In 2010, ARM’s creation was recognized with an IEEE Milestone in Electrical Engineering and Computing — one of the highest honors the IEEE bestows on technological achievements.
What makes Furber’s legacy distinctive is its dual nature. He is one of the very few computer architects who has made defining contributions in two completely different domains of processor design — one that became the most commercially successful architecture in history, and another that may help unlock the secrets of biological intelligence. From the BBC Micro to SpiNNaker, from 25,000 transistors to a million cores, Steve Furber’s career is a testament to what happens when engineering elegance meets scientific ambition.
Key Facts
- Born: March 21, 1953, Manchester, England
- Education: BA in Mathematics and PhD in Aerodynamics, University of Cambridge
- Known for: Co-designing the ARM processor architecture, leading the SpiNNaker neuromorphic computing project
- Key projects: BBC Micro (hardware design), ARM1/ARM2/ARM3 processors, AMULET asynchronous processors, SpiNNaker (1M cores)
- Awards: Fellow of the Royal Society (2002), Fellow of the Royal Academy of Engineering (2005), CBE (2008), IEEE Milestone for ARM (2010), IET Faraday Medal (2007), Royal Academy of Engineering Silver Medal (2003)
- Career: Acorn Computers (1980–1990), University of Manchester ICL Professor of Computer Engineering (1990–present)
- Impact: ARM architecture deployed in 280+ billion chips worldwide; SpiNNaker simulates approximately 1 billion neurons in real time
Frequently Asked Questions
What is Steve Furber’s role in the creation of the ARM processor?
Steve Furber was the principal hardware designer of the original ARM (Acorn RISC Machine) processor, developed at Acorn Computers in Cambridge between 1983 and 1985. While his colleague Sophie Wilson designed the instruction set architecture, Furber was responsible for the hardware implementation — the physical circuit design that translated Wilson’s instruction set into working silicon. Together, they led a small team that produced the ARM1, a processor with just 25,000 transistors that delivered remarkable performance per watt. This design philosophy of power efficiency over raw performance became the foundation of the ARM architecture that now powers virtually all smartphones and most embedded devices worldwide.
What is SpiNNaker and why is it important?
SpiNNaker (Spiking Neural Network Architecture) is a massively parallel neuromorphic computer designed and built by Steve Furber’s research group at the University of Manchester. Completed in its million-core configuration in 2018, SpiNNaker contains over one million ARM processor cores connected by a custom asynchronous communication fabric. It is designed to simulate biological neural networks in real time by modeling the spiking behavior of neurons — the electrical impulses that form the basis of brain computation. SpiNNaker can simulate approximately one billion neurons simultaneously, making it one of the most powerful neuromorphic computing platforms in existence. It has been a central component of the European Union’s Human Brain Project and has applications in neuroscience research, brain-inspired AI, and robotics.
How did the RISC philosophy influence the ARM design?
The RISC (Reduced Instruction Set Computer) philosophy, developed by researchers including David Patterson at Berkeley and John Hennessy at Stanford, proposed that processors with fewer, simpler instructions could outperform processors with larger, more complex instruction sets. Furber and Wilson studied this research and applied RISC principles to create ARM. The ARM instruction set used fixed-width 32-bit instructions, a load-store architecture where only load and store instructions access memory, and a large register file of 16 general-purpose 32-bit registers — all hallmarks of RISC design. ARM also added its own innovations beyond the original RISC research, including conditional execution of every instruction and an integrated barrel shifter, which gave the architecture unique advantages in code density and power efficiency.
What did Steve Furber contribute to computing education in the UK?
In 2012, Steve Furber co-authored a landmark report for the Royal Society titled “Shut Down or Restart,” which examined the state of computing education in British schools. The report concluded that the curriculum had become excessively focused on teaching students to use software applications — word processors, spreadsheets, and presentation tools — rather than understanding the fundamental principles of how computers work. Furber and his co-authors argued for a return to teaching programming, algorithms, and computational thinking. The report had a direct impact on UK education policy, contributing to reforms in the national curriculum that reintroduced computer science as a core subject in English schools, affecting millions of students and helping to address the country’s growing shortage of qualified computing professionals.