In 1980, a professor at the University of California, Berkeley gave his graduate students an assignment that most of the computer industry considered pointless: design a processor with fewer instructions, not more. At the time, the dominant philosophy in chip design was to pack as many complex operations as possible into hardware — string copies, polynomial evaluations, entire procedure calls executed in a single instruction. The more a processor could do in one step, the thinking went, the better. David Patterson looked at the empirical data and reached the opposite conclusion. Most of those elaborate instructions were almost never used by real programs. The silicon devoted to implementing them was not just wasted — it was actively slowing down the simple operations that programs depended on 99% of the time. From that insight, Patterson and his students built the Berkeley RISC project, which — together with John Hennessy’s parallel MIPS effort at Stanford — launched a revolution that now powers virtually every smartphone, tablet, and embedded device on Earth. But RISC was only the beginning. Patterson went on to co-invent RAID storage, co-author the most influential textbook in computer architecture, lead the creation of RISC-V (an open instruction set now challenging ARM and x86), and win the Turing Award. His career is a masterclass in how rigorous measurement and a willingness to challenge orthodoxy can reshape an entire industry.
Early Life and Education
David Andrew Patterson was born on November 16, 1947, in Evergreen Park, Illinois, a working-class suburb on the south side of Chicago. His father was an electrician, and Patterson grew up in a household where practical problem-solving was a daily activity. He was the first person in his family to attend college — a fact that shaped his lifelong commitment to making education accessible and his belief that talent is distributed far more widely than opportunity.
Patterson attended the University of California, Los Angeles (UCLA), where he earned his bachelor’s degree in mathematics in 1969. The choice of mathematics rather than engineering was characteristic: Patterson has always been drawn to the underlying principles of systems rather than to specific implementations. He stayed at UCLA for graduate work, earning his master’s degree in computer science in 1970 and his Ph.D. in 1976. His doctoral research focused on program verification and programming languages — topics that seem distant from processor architecture but that gave him a deep understanding of how software actually behaves on hardware, knowledge that would prove essential to the RISC revolution.
In 1977, Patterson joined the faculty of the University of California, Berkeley, as an assistant professor of computer science. Berkeley in the late 1970s was already a powerhouse of systems research. Bill Joy was building BSD Unix down the hall. The department was steeped in a culture of open systems and practical, measurement-driven engineering. It was exactly the right environment for what Patterson was about to do.
The RISC and RAID Breakthroughs
The Technical Innovation
By the late 1970s, mainstream processor design followed the Complex Instruction Set Computing (CISC) philosophy. Processors like the VAX-11/780 had over 300 instructions, some requiring dozens of clock cycles and implemented in layers of microcode. The logic was straightforward: high-level languages had complex operations, so hardware should support those operations directly. Every new chip generation added more instructions to close the “semantic gap” between programming languages and machine code.
Patterson, like John Hennessy at Stanford, questioned this assumption, but from a different angle. While Hennessy came from a compiler background, Patterson approached the problem as a systems architect with a deep interest in quantitative measurement. In 1980, he and his Berkeley colleague Carlo Séquin, together with a group of graduate students, began the Berkeley RISC project by doing something remarkably simple: they measured what real programs actually did. That same year, Patterson and David Ditzel published "The Case for the Reduced Instruction Set Computer," the paper that gave the approach its name.
The measurements were damning for the CISC philosophy. Real programs used a tiny fraction of available instructions. On the VAX, Patterson found that roughly 80% of executed instructions came from about 20% of the instruction set, and the simplest 20% at that. Complex instructions like POLY (polynomial evaluation) or CALLS (procedure call with automatic register saving) were rarely generated by compilers because their rigid semantics seldom matched what the program actually needed. Meanwhile, all that microcode consumed die area and, critically, added pipeline stages and latency to the simple instructions that dominated real workloads.
; Berkeley RISC I — Patterson's original design (1982)
; 31 instructions total. Fixed 32-bit format.
; 78 registers organized in overlapping windows
; (its successor, RISC II, grew the file to 138).
;
; The register window concept was Patterson's key innovation:
; each procedure call gets a fresh set of registers,
; with overlap zones for passing arguments — eliminating
; most memory accesses for procedure calls.
; RISC I procedure call using register windows
; (illustrative pseudo-assembly, not exact RISC I syntax):
; Window N (caller): r[24]-r[31] = output registers
; Window N+1 (callee): r[16]-r[23] = input registers (same physical regs)
;                      r[24]-r[31] = its own outputs for deeper calls
add r1, r0, #42 ; load immediate 42 into r1 (1 cycle)
add r24, r0, r1 ; pass argument via output reg (1 cycle)
call my_function ; switch register window + jump (1 cycle)
; callee sees r16 = 42 — zero-cost argument passing
; no memory access needed, no stack frame setup
; Compare to CISC: the VAX CALLS instruction does register
; saving, stack frame construction, and argument passing
; in one instruction — but takes 10-20 cycles of microcode.
; RISC I does the same work in 3 cycles with full pipelining.
The Berkeley RISC I processor, completed in 1982, implemented just 31 instructions — compared to the VAX’s 303. Every instruction was 32 bits wide and executed in a single clock cycle (with pipelining). The key architectural innovation that distinguished Berkeley RISC from Hennessy’s MIPS was the register window system. Patterson observed that procedure calls and returns were among the most expensive operations in real programs, primarily because of the need to save and restore registers to memory. His solution was to give each procedure its own set of registers, organized in overlapping windows so that argument passing between caller and callee required no memory access at all — just a pointer shift.
RISC I was fabricated on a 2-micron process by MOSIS and contained roughly 44,000 transistors — compared to the VAX’s hundreds of thousands. Despite having a fraction of the transistor count and instruction set, RISC I matched or exceeded the VAX-11/780 in performance on real workloads. The message was clear: architectural simplicity, combined with smart compiler technology, beat brute-force hardware complexity.
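The overlap mechanic can be sketched in a few lines of C. This is a toy model: the window size, offsets, and helper names below are invented for illustration and do not match RISC I's actual register map.

```c
/* Toy model of overlapping register windows. Each window holds
 * 8 "in", 8 "local", and 8 "out" registers; consecutive windows
 * overlap by 8, so the caller's outs ARE the callee's ins. */
#include <assert.h>

#define PHYS 64              /* physical register file size */
#define IN0  0               /* logical offset of in0 within a window */
#define LOC0 8               /* logical offset of local0 */
#define OUT0 16              /* logical offset of out0 */

static int regfile[PHYS];
static int cwp = 0;          /* current window pointer */

/* Each call advances the window base by 16 physical registers,
 * so this window's outs (base+16..23) alias the next window's
 * ins (next base+0..7). */
static int phys(int logical) { return (cwp * 16 + logical) % PHYS; }

static void call_proc(void) { cwp++; }  /* window shift, no memory traffic */
static void ret_proc(void)  { cwp--; }
```

Writing a value into the caller's out0 and bumping the window pointer makes the same physical register visible as the callee's in0: argument passing with no loads or stores, which is the effect the assembly sketch above describes.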
But Patterson was not finished. In 1987, he turned his attention to storage, where mainframes relied on single large, expensive disk drives. With Garth Gibson and Randy Katz, he published the landmark 1988 paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)." The idea was to replace a single expensive disk with an array of cheap commodity drives, using redundancy (parity or mirroring) to achieve reliability equal to or better than the expensive drive. They defined five RAID levels, each trading off capacity, performance, and reliability differently.
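The core reliability trick is simple enough to sketch in C. The following is a minimal illustration of dedicated-parity RAID in the spirit of levels 3 and 4; the block size, disk count, and function names are invented for the example.

```c
/* Minimal sketch of dedicated-parity RAID: the parity block is
 * the XOR of all data blocks, so any single failed disk can be
 * rebuilt from the survivors. Sizes here are tiny on purpose. */
#include <assert.h>
#include <string.h>

#define BLOCK 8   /* bytes per block (illustrative) */
#define DISKS 4   /* data disks; parity lives on a fifth drive */

/* parity[b] = data[0][b] ^ data[1][b] ^ ... ^ data[DISKS-1][b] */
void compute_parity(unsigned char data[DISKS][BLOCK],
                    unsigned char parity[BLOCK]) {
    memset(parity, 0, BLOCK);
    for (int d = 0; d < DISKS; d++)
        for (int b = 0; b < BLOCK; b++)
            parity[b] ^= data[d][b];
}

/* Rebuild the failed disk: XOR parity with every surviving disk. */
void rebuild_disk(unsigned char data[DISKS][BLOCK],
                  unsigned char parity[BLOCK],
                  int failed, unsigned char out[BLOCK]) {
    memcpy(out, parity, BLOCK);
    for (int d = 0; d < DISKS; d++)
        if (d != failed)
            for (int b = 0; b < BLOCK; b++)
                out[b] ^= data[d][b];
}
```

Any single lost data block can be regenerated by XOR-ing the parity block with the surviving disks; mirroring (RAID 1) makes the opposite trade, spending more capacity to get simpler recovery.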
Why It Mattered
The RISC revolution, driven independently by Patterson at Berkeley and Hennessy at Stanford, permanently changed how processors are designed. Before RISC, the industry assumed that more complex instructions meant better performance. After RISC, the industry understood that instruction set simplicity enables microarchitectural sophistication — deeper pipelines, superscalar execution, out-of-order processing — and that this tradeoff overwhelmingly favors simplicity.
The numbers told the story. RISC processors achieved two to five times the performance of CISC processors at comparable clock speeds and die sizes. The ARM architecture, designed by Sophie Wilson and Steve Furber at Acorn on the same RISC principles, became the foundation of mobile computing. Sun's SPARC, IBM's POWER, and DEC's Alpha were all RISC designs. Even Intel's x86, the most commercially successful CISC architecture, has since the Pentium Pro in 1995 internally decoded CISC instructions into RISC-like micro-operations. The instruction set is CISC for backward compatibility; the engine underneath is RISC because Patterson and Hennessy proved it works better.
RAID had an equally transformative impact on storage. Before the 1988 paper, reliable storage meant expensive, proprietary hardware. After RAID, reliable storage meant cheap commodity drives plus clever software. Every modern data center, cloud platform, and network-attached storage device uses RAID or its descendants (including erasure coding, which extends RAID principles to distributed systems). The idea that redundancy plus commodity hardware could replace specialized expensive equipment was not just a storage insight — it was a philosophical template that later inspired Linux clusters, Google’s MapReduce, and the entire commodity cloud computing revolution.
Beyond RISC and RAID: Other Contributions
Patterson’s contributions extend far beyond the two innovations that bear his name most prominently. In the mid-1990s, he led the Network of Workstations (NOW) project at Berkeley, which demonstrated that clusters of commodity workstations connected by fast networks could replace expensive supercomputers for many parallel computing tasks. The NOW project was a direct intellectual ancestor of Google’s original server architecture and, by extension, of the entire modern cloud infrastructure. The same quantitative philosophy that drove RISC — measure the cost-performance ratio, then favor commodity components and clever architecture over expensive specialized hardware — drove NOW.
Perhaps his most enduring intellectual contribution is the textbook he co-authored with John Hennessy: “Computer Architecture: A Quantitative Approach,” first published in 1990. The book transformed how computer architecture is taught worldwide. Before Patterson and Hennessy, architecture courses described specific machines. Their textbook introduced a quantitative methodology: measure real workloads, analyze bottlenecks with data, make design decisions based on evidence. Now in its sixth edition, the book has trained generations of chip designers at companies from Intel to Apple to NVIDIA. A companion undergraduate text, “Computer Organization and Design,” is equally influential. Together, these books are among the most cited works in all of computer science.
Starting in 2010, Patterson became one of the driving forces behind RISC-V, an open-source instruction set architecture developed at Berkeley. Where the original Berkeley RISC and Stanford MIPS architectures were commercialized as proprietary products, RISC-V was released under a permissive open license, free for anyone to implement without royalties. Patterson saw RISC-V as the natural culmination of the RISC philosophy: if simplicity and openness win, then the instruction set itself should be open. RISC-V has since been adopted by hundreds of companies worldwide, including Western Digital, SiFive, and Alibaba, and is increasingly used in embedded systems, IoT devices, and even data center chips. It represents a direct challenge to both ARM and x86 — and Patterson’s fingerprints are all over it.
Patterson also served as director of the Parallel Computing Laboratory (Par Lab) at Berkeley, which researched how to effectively program multicore processors — a challenge that became critical after Moore’s Law stopped delivering single-thread performance gains around 2005. His work on auto-tuning frameworks and domain-specific languages for parallel hardware influenced the design of modern GPU programming models and machine learning frameworks.
For engineering teams today, Patterson's quantitative approach to system design remains directly applicable: measuring actual bottlenecks rather than optimizing on assumptions is the foundation of modern performance engineering, whether the system under study is a chip, a storage array, or a web application.
Philosophy and Engineering Approach
Key Principles
Patterson’s engineering philosophy is built on a set of principles that he has articulated consistently throughout his career, and that apply far beyond chip design.
The first and most fundamental is the primacy of measurement. The RISC revolution did not begin with a clever idea — it began with data. Patterson measured what instructions real programs used and discovered that the conventional wisdom was wrong. This measurement-first approach — now the standard methodology in computer architecture, thanks largely to his textbook — was radical in an era when many design decisions were driven by intuition, marketing, or the assumption that the previous generation’s choices were correct.
The second principle is Amdahl’s Law applied to design: focus optimization effort on the common case. In RISC, this meant making simple instructions execute as fast as possible, even if complex instructions required more software instructions to accomplish. In RAID, this meant optimizing for typical I/O patterns rather than worst-case scenarios. Patterson has called this “making the common case fast,” and it is arguably the single most important principle in computer system design.
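"Making the common case fast" falls straight out of Amdahl's Law, which can be written as a one-line function. The formula is standard; the function name and the example numbers are ours.

```c
/* Amdahl's Law as a design tool: speeding up a fraction f of
 * execution time by a factor s yields an overall speedup of
 * 1 / ((1 - f) + f / s). Example inputs are illustrative. */
#include <assert.h>
#include <math.h>

double amdahl_speedup(double f, double s) {
    /* (1 - f) of the time is unchanged; f is cut by factor s. */
    return 1.0 / ((1.0 - f) + f / s);
}
```

Speeding up an 80% common case by 4x yields a 2.5x overall speedup, while speeding up a 5% corner case by 100x yields barely 1.05x. That asymmetry is why the common case deserves nearly all of the optimization effort.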
Third, Patterson is a fierce advocate for openness. From BSD Unix (which shaped Berkeley’s culture during his early career) to RISC-V (which he championed decades later), he has consistently argued that open standards and open-source implementations lead to faster innovation than proprietary alternatives. His argument is quantitative, not ideological: open systems attract more developers, more testing, and more innovation because the barrier to entry is lower.
/*
* Patterson's design philosophy, expressed as the CPU
* performance equation from "Computer Architecture:
* A Quantitative Approach" (Patterson & Hennessy):
*
* CPU Time = Instructions × CPI × Clock Cycle Time
*
* Instructions = number of instructions executed (IC)
* CPI = average clock cycles per instruction
* Clock Cycle = 1 / clock frequency
*
* CISC optimizes for fewer Instructions (IC) — complex
* instructions do more work per instruction.
*
* RISC optimizes for lower CPI and shorter Clock Cycle —
* simple instructions pipeline better and allow higher
* clock frequencies.
*
* Patterson's key insight: reducing CPI from 5 to 1
* and increasing clock speed by 2x more than compensates
* for executing 2-3x more instructions.
*/
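The arithmetic behind that insight can be checked directly. This is a hypothetical sketch: the instruction counts, CPIs, and cycle times below are round illustrative numbers, not measurements of any real machine.

```c
/* The CPU performance equation, evaluated numerically:
 * CPU Time = Instructions x CPI x Clock Cycle Time. */
#include <assert.h>
#include <math.h>

double cpu_time_ns(double instructions, double cpi, double cycle_ns) {
    return instructions * cpi * cycle_ns;
}
```

With 1 million instructions at CPI 5 and a 10 ns cycle, the hypothetical CISC machine takes 50 ms; the RISC machine executing 2.5 million instructions at CPI 1 and a 5 ns cycle takes 12.5 ms, a 4x speedup despite running 2.5x more instructions.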
/* Example: Measuring the common case — Patterson's #1 rule */
#include <stdio.h>
#include <time.h>
/*
* Profile before you optimize.
* The bottleneck is never where you think it is.
*/
typedef struct {
    const char *operation;
    unsigned long count;
    double cycles_per_op;    /* average cycles per execution */
} InstructionProfile;

void analyze_workload(InstructionProfile *profile, int n) {
    double total = 0;
    for (int i = 0; i < n; i++)
        total += profile[i].count * profile[i].cycles_per_op;
    printf("%-21s | %10s | %6s\n", "Operation", "Count", "% Time");
    printf("----------------------|------------|--------\n");
    for (int i = 0; i < n; i++) {
        double pct = (profile[i].count * profile[i].cycles_per_op)
                     / total * 100.0;
        printf("%-21s | %10lu | %5.1f%%\n",
               profile[i].operation, profile[i].count, pct);
    }
    /* Patterson's lesson: the top 2-3 rows dominate.
       Optimize those. Ignore the rest. */
}
/*
* This is why RISC won: simple loads, stores, adds, and
* branches account for 80%+ of execution time.
* Making those fast — even at the cost of everything
* else — is the mathematically optimal strategy.
*/
Fourth, Patterson believes deeply in the power of collaboration between academia and industry. The Berkeley RISC project succeeded in part because its results were published openly, tested against real workloads from commercial systems, and eventually commercialized by Sun Microsystems (as SPARC). The RAID paper succeeded because it addressed a real industry pain point with a practical solution. Patterson has never been an ivory-tower theorist; his research has always been aimed at solving problems that matter to practitioners.
Finally, Patterson is a gifted educator who believes that how you explain something is as important as what you discover. His textbooks did not just convey information — they established a methodology. The quantitative approach to architecture that he and Hennessy championed has become the default framework for thinking about computer system design, influencing not just chip architects but also software engineers, system administrators, and anyone who needs to reason about performance.
Legacy and Modern Relevance
In 2017, David Patterson and John Hennessy were jointly awarded the ACM A.M. Turing Award, the highest honor in computer science, for their pioneering work on RISC architecture. The award citation recognized their "systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry." Joining the roll of Turing laureates placed them alongside the field's foundational figures, a recognition of the depth and breadth of their influence.
Patterson’s legacy is visible in every computing device manufactured today. The ARM processors that power over 95% of the world’s smartphones are RISC designs. Apple’s M-series chips, which have transformed laptop and desktop computing, implement a RISC architecture. The RISC-V ecosystem, which Patterson championed, is growing explosively — with implementations ranging from tiny microcontrollers to data center processors. Even Intel and AMD’s x86 processors, as noted earlier, use RISC-like micro-operations internally.
RAID is equally ubiquitous. Every enterprise storage system, every NAS device, every cloud storage platform uses RAID or its intellectual descendants. When you store a file on Google Drive, Amazon S3, or any cloud service, it is protected by redundancy schemes that trace their lineage directly to Patterson, Gibson, and Katz’s 1988 paper. The economic model of RAID — commodity hardware plus software intelligence beats expensive specialized hardware — became the blueprint for cloud computing itself.
RISC-V represents Patterson’s most forward-looking contribution. In a world where processor architectures are controlled by a handful of companies (ARM by Arm Holdings, x86 by Intel and AMD), RISC-V offers an open alternative that anyone can implement, extend, and manufacture without licensing fees. Patterson has compared RISC-V to Linux’s impact on operating systems — an open platform that enables innovation by lowering barriers to entry. Countries including China, India, and members of the European Union have invested heavily in RISC-V as a path to semiconductor sovereignty.
Now in his late seventies, Patterson remains active. He spent several years at Google working on domain-specific architectures, custom chips designed for particular workloads such as machine learning (Google's TPUs embody this philosophy). His recent advocacy for domain-specific computing reflects the same pattern that has defined his career: measure the workload, identify the common case, and build hardware optimized for what programs actually do rather than what architects imagine they might do.
Patterson’s influence on computer science education is difficult to overstate. His textbooks are used in virtually every computer architecture course worldwide. The quantitative methodology he and Hennessy established — benchmark-driven design, empirical comparison, Amdahl’s Law as a design tool rather than just a theoretical result — has become the standard way the field thinks about system design. Modern developers who use code editors and performance profilers are applying Patterson’s core philosophy every time they measure before they optimize.
Key Facts
- Born: November 16, 1947, Evergreen Park, Illinois, United States
- Known for: Co-inventing RISC architecture, co-inventing RAID, co-leading RISC-V, co-authoring “Computer Architecture: A Quantitative Approach”
- Key projects: Berkeley RISC I/II (1982-1984), RAID paper (1988), NOW clusters (1990s), RISC-V (2010-present), Google TPU research
- Awards: ACM A.M. Turing Award (2017, with John Hennessy), IEEE John von Neumann Medal (2000), NAS Member, NAE Member, ACM Fellow, IEEE Fellow
- Education: B.A. Mathematics from UCLA (1969), M.S. Computer Science from UCLA (1970), Ph.D. Computer Science from UCLA (1976)
- Career: UC Berkeley professor (1977-present), Distinguished Engineer at Google (2016-2018)
- Publications: 6 editions of “Computer Architecture: A Quantitative Approach,” 5 editions of “Computer Organization and Design,” 200+ research papers
Frequently Asked Questions
Who is David Patterson and what is RISC?
David Patterson is an American computer scientist and UC Berkeley professor who co-invented the RISC (Reduced Instruction Set Computing) architecture in the early 1980s. RISC is a processor design philosophy that uses a small set of simple, fast instructions instead of the complex, multi-cycle instructions used in older CISC (Complex Instruction Set Computing) designs. Patterson’s Berkeley RISC project demonstrated that processors with fewer, simpler instructions could significantly outperform complex processors by enabling efficient pipelining — executing multiple instructions simultaneously in overlapping stages. Today, RISC principles power virtually all smartphone processors (ARM), Apple’s M-series chips, and the growing RISC-V ecosystem. Patterson received the 2017 Turing Award (with John Hennessy) for this work.
What is RAID and why did Patterson invent it?
RAID (Redundant Array of Inexpensive Disks) is a storage technology that Patterson co-invented with Garth Gibson and Randy Katz in their landmark 1988 paper. Before RAID, reliable data storage required expensive, room-sized proprietary disk drives. Patterson’s insight was that an array of cheap commodity drives, combined with data redundancy (mirroring or parity), could match or exceed the reliability and performance of expensive single drives at a fraction of the cost. They defined five RAID levels, each balancing capacity, speed, and fault tolerance differently. RAID revolutionized the storage industry and became the foundation of every modern data center, NAS device, and cloud storage platform. The underlying philosophy — commodity hardware plus software intelligence beats specialized expensive hardware — later inspired the design of cloud computing infrastructure itself.
What is RISC-V and how does Patterson’s work relate to it?
RISC-V is an open-source instruction set architecture (ISA) developed at UC Berkeley starting in 2010, with Patterson as one of its key champions. Unlike proprietary ISAs such as ARM (which requires licensing fees) or x86 (restricted to Intel and AMD), RISC-V is freely available for anyone to implement, modify, and manufacture without royalties. Patterson saw RISC-V as the natural evolution of the original RISC philosophy: if simpler, open designs produce better results, then the instruction set itself should be open. RISC-V has been adopted by hundreds of companies worldwide, including Western Digital, SiFive, Alibaba, and numerous startups. It is used in applications ranging from tiny embedded microcontrollers to server processors, and several countries have invested in RISC-V as a path to semiconductor independence. Patterson considers RISC-V comparable in potential impact to what Linux did for operating systems.