SUMMARY
Biggest Breakthroughs in Computer Science,
2025
Source: Quanta Magazine  |  Format: Video transcript  |  Year: 2025
Featuring researchers from Rutgers, Google Quantum AI, MIT, and collaborators
Hash Tables: Disproving Yao's Conjecture
Hash tables store and retrieve data by using a hash function to assign items to memory
locations. The standard technique — uniform probing — tries random slots until finding an
empty one. As a table fills, finding empty slots takes progressively longer. In 1985, Andrew
Yao conjectured that uniform probing was optimal, and most of the field accepted this as a hard limit.
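The slowdown near full capacity is easy to see in a toy implementation. A minimal sketch of uniform probing (illustrative only; the table size and key choice are arbitrary):

```python
import random

# Toy uniform probing: each key probes its own pseudorandom sequence
# of slots until it finds an empty one. The returned probe count is
# the cost of that insertion.
class UniformProbingTable:
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size

    def insert(self, key):
        rng = random.Random(hash(key))   # key-specific probe sequence
        probes = 0
        while True:
            probes += 1
            slot = rng.randrange(self.size)
            if self.slots[slot] is None:
                self.slots[slot] = key
                return probes

table = UniformProbingTable(1024)
costs = [table.insert(i) for i in range(1000)]
# Early insertions take about one probe; insertions into a nearly
# full table take many more.
```

Averaging `costs` over the last stretch of insertions, when the table is above 90% full, typically shows probe counts an order of magnitude above the early ones; this is the degradation that was long assumed unavoidable.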
Andrew Krapivin, an undergraduate at Rutgers, began working on a data structure called "tiny
pointers" developed by his mentor. He ended up constructing a new hash table that serves as a
counterexample to Yao's conjecture. Rather than inserting into the first available slot, the new
approach looks ahead and selectively fills strategically desirable slots. This produces near-
constant average query time regardless of table fullness — even at 100% capacity. The
published result: there is no fundamental tension between space and time in hash table design.
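The construction itself is intricate, but the flavor of "choose your slot strategically instead of greedily" already appears in a classical warm-up, the power of two choices (a loose analogy only, not the published algorithm):

```python
import random

# Power-of-two-choices load balancing: each item inspects two random
# bins and goes to the emptier one. That single extra look ahead
# collapses the worst-case pileup from about log n / log log n down
# to about log log n, a taste of how strategic placement beats
# committing to the first slot found.
def max_load(balls, bins, choices, seed=1):
    rng = random.Random(seed)
    load = [0] * bins
    for _ in range(balls):
        candidates = [rng.randrange(bins) for _ in range(choices)]
        target = min(candidates, key=lambda b: load[b])
        load[target] += 1
    return max(load)

one_choice = max_load(10_000, 10_000, choices=1)
two_choices = max_load(10_000, 10_000, choices=2)
# two_choices comes out markedly smaller than one_choice.
```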
Quantum Error Correction
Quantum computers use qubits that exist in superposition states, making them susceptible to a
continuous spectrum of errors. Any single error can ruin an entire calculation. Peter Shor's
1994 factoring algorithm demonstrated quantum computing's potential but required so many
steps that running it seemed impossible without error control.
Shor and others showed errors could in principle be corrected by adding redundant qubits.
Alexei Kitaev proposed the surface code in 1998: qubits arranged on a square lattice with
interspersed measurement qubits, enabling local error detection. Multiple physical qubits
thereby function as a single, more reliable logical qubit.
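A loose classical analogy for how redundancy buys reliability is a repetition code with majority voting (not a surface code, and the error rate below is illustrative):

```python
import random

# Classical repetition code: store each bit in n copies and decode by
# majority vote. A physical error rate p becomes a logical error rate
# whose leading term is C(n, n//2 + 1) * p**(n//2 + 1): below
# threshold, more copies mean exponentially fewer logical errors.
def logical_error_rate(p, copies, trials=100_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(copies))
        if flips > copies // 2:   # majority decodes to the wrong bit
            failures += 1
    return failures / trials

p = 0.05
r3 = logical_error_rate(p, copies=3)   # about 3 * p**2, well below p
r5 = logical_error_rate(p, copies=5)   # smaller still
```

Quantum codes face the harder problem of correcting errors without directly measuring the data qubits, which is what the surface code's interspersed measurement qubits are for; the suppression mechanism, however, is the same.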
Google's Quantum AI team needed physical qubits good enough that adding more of them
would suppress errors exponentially. Early attempts failed — scaling up only made errors
worse. After years of redesign, their 105-qubit Willow processor crossed the threshold, showing
40–50% error suppression on the first run and error rates halving with each increase in code
size. This is the first experimental confirmation of the exponential error suppression that
theorists predicted decades ago.
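The practical meaning of "error rates halving with each increase in code size" is exponential suppression. A back-of-envelope illustration (the starting rate here is made up, not Google's measured figure):

```python
# If each step up in code size cuts the logical error rate in half,
# then k steps cut it by a factor of 2**k: suppression exponential
# in the code size.
def suppressed_rate(base_rate, steps):
    return base_rate * 0.5 ** steps

rates = [suppressed_rate(1e-3, k) for k in range(11)]
# Ten steps buy a roughly 1000x (2**10 = 1024) reduction.
```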
Time Versus Space
In complexity theory, the two primary computational resources are time (number of steps) and
space (memory used). For many algorithms, space scales roughly proportionally with time. In
1975, a universal simulation technique was devised that could convert any algorithm to use
slightly less space, on the order of T/log T for a time-T computation. No further improvement
was made for 50 years.
Ryan Williams at MIT encountered the work of James Cook and Ian Mertz on "tree
evaluation," which reused memory by layering new information on top of existing data such
that the additions cancel out — extracting more computation from already-allocated space.
Williams recognized this technique could generalize beyond its original context. He proved
that any computation taking time T can be simulated in approximately √T space — a quadratic
improvement and an enormous advance after decades of stagnation.
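To get a feel for the quadratic improvement, compare the two space bounds at a few scales (a rough sketch: Williams's precise bound is on the order of √(T log T), and plain √T is used below for readability):

```python
import math

# Space available to a time-T computation: linear in T (the old
# rough relationship for many algorithms) versus the square root of
# T under the new simulation.
for T in (10**6, 10**9, 10**12):
    print(f"T = {T:>14,}  linear space: {T:>14,}  "
          f"sqrt space: ~{round(math.sqrt(T)):>10,}")
```

At a trillion steps, the memory requirement drops from a trillion cells to about a million, which is why the result reads as a fundamentally new trade-off rather than an incremental one.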