Music aficionados might be familiar with the pop-rock duo The Finn Brothers. Their career spans nearly five decades, and to make a long story short, you probably know them better from the 1990s Australian band Crowded House. But Tim and Neil Finn originally hail from New Zealand, where I spent a year as an exchange student in 1982-1983. That was roughly the time of peak popularity for their original band, Split Enz—arguably one of New Zealand’s best-known gifts to the world, along with wool, wine and the scenic settings for the Lord of the Rings movie franchise.[1]
Thinking recently about quantum computing and how it differs from its predecessors, I borrowed this post’s title from one of Split Enz’ hits dating back to 1981, History Never Repeats. The added question mark is mine, because I was recently pointed to some historical roots in computer science that might be relevant to our understanding of quantum computing, how it works, and where it will be useful. Maybe history does repeat, but if it does, it isn’t in exactly the same way as before.
Learning, off and on
I first encountered computer science in the early 1980s, in secondary school. Besides programming in BASIC on a Digital PDP-8 minicomputer with 48K of RAM, the curriculum also introduced some novel mathematical concepts. My classmates and I quickly learned that our decimal (base 10) notation for numbers is a purely arbitrary construct, and we became comfortable doing math in octal (base 8) and hexadecimal (base 16) notation. These were useful because 8 and 16 are powers of two, so they map very neatly onto binary numbers: each octal digit corresponds to three bits and each hexadecimal digit to four. Those sequences of ones and zeros (bits) are the fundamental units of all classical computing, the foundation of everything from your smart watch and cell phone to your laptop and the largest high-performance supercomputer. From there it wasn't hard to learn assembler and machine language programming, and later in university, to understand how compilers and interpreters convert higher-level programming languages into binary instructions. I think we also took comfort in binary digital computing's determinism. Once we had written and debugged a program, we knew with absolute certainty what results it would generate for any given input data.
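As a quick refresher, here is a minimal Python sketch (my own illustration, not part of that old curriculum) showing how the same number lines up across the three notations:

```python
# Because 8 = 2**3 and 16 = 2**4, every octal digit packs exactly three bits
# and every hexadecimal digit packs exactly four.
n = 202  # an arbitrary example value

print(bin(n))  # 0b11001010
print(oct(n))  # 0o312  (the binary regrouped in threes: 011 001 010)
print(hex(n))  # 0xca   (the binary regrouped in fours: 1100 1010)
```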
When I first discovered quantum computing, I grew very excited about it. The source of my excitement was the whole idea of a qubit and how it differs from a classical bit, or binary digit. Using the notion of superposition from quantum mechanics, and doing computation on subatomic particles that can occupy a virtual infinitude of states rather than just two, to call that a paradigm shift seems an understatement. So I would usually prefix any conversation or presentation about quantum computing with the statement that this is the first time in the history of computer science that we have done anything different from the binary fundamentals I learned in the 1980s.
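For readers who want to see the idea in code, here is a minimal sketch using IBM's Qiskit library (my own illustration, assuming the qiskit and qiskit-aer packages are installed): a single qubit is put into an equal superposition and then measured, so each run comes back 0 or 1 with roughly equal probability.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# One qubit, one classical bit: a Hadamard gate creates the superposition
# (|0> + |1>)/sqrt(2), and the measurement collapses it to 0 or 1.
qc = QuantumCircuit(1, 1)
qc.h(0)
qc.measure(0, 0)

# On a local simulator, roughly half of the 1000 shots read 0 and half read 1.
counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)  # e.g. {'0': 507, '1': 493}
```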
I mentioned my view of this paradigm shift to a new colleague with whom I recently recorded a podcast for IBM. This gentleman, being somewhat older and considerably wiser than me, gently suggested that I look at the history of analog computing, which predates binary digital computing by many decades or even centuries, depending on how loosely you define what constitutes a computer, and which offers an entirely different model of computation, one not based on ones and zeros.
So, was I wrong about the novelty of quantum computing? Let’s see.
It wasn’t just black and white
Analog computing, as distinct from digital, works on continuous functions rather than discrete values like ones and zeros. It therefore appears to resemble natural phenomena more closely. Waves on the ocean are not just up or down; they rise and fall continuously. Temperatures may be reported in whole degrees, but a mercury thermometer shows the smooth, continuous process of warming or cooling. Isaac Newton, I would argue, had to invent calculus because everything he observed in physics was governed by continuity: the movements of heavenly bodies, for example, or the acceleration and deceleration of objects in motion.
The earliest calculating devices were all, in fact, analog. A mechanical wind-up watch, which uses a spring to drive gears that move the hour, minute and second hands, is a simple analog device. For centuries, navigators crossed the oceans using the astrolabe, an analog calculator for modelling the positions and motions of stars and planets, and later its descendant, the sextant. The most elaborate surviving ancient device of this kind is the Antikythera mechanism, discovered in a shipwreck off the Greek coast and dating back to around the second century BC; it is often described as the world's oldest known analog computer.
OK, you might say, those are fairly simple examples, although I would argue they had a big influence on the course of human history. But analog computing continued well into modern times. In the 1800s, analog devices built from pulleys, wires and cylinders could produce perpetual calendars and predict tidal levels. A profusion of analog computers followed, using springs, wheels, discs and eventually electrical currents, and they were applied to more complicated mathematical problems such as solving differential equations or finding the roots of polynomial functions. By the early to mid-20th century, electronic analog computers were pervasive, especially in military applications such as rocket trajectories and gunfire control systems.
In 1949, New Zealand bestowed one of its most useful, if lesser-known, gifts to the world of computer science. Economist Bill Phillips, born in a small town in the country’s North Island and educated at the London School of Economics, built a hydraulic machine called the Monetary National Income Analog Computer, or MONIAC. It used tanks of coloured water and a system of pipes, faucets and pumps to simulate the flow of money through an economy including taxation, public spending, savings and investments. The MONIAC was surprisingly accurate, and some 12 to 14 units were successfully built and sold to governments, universities and corporations around the world[2].
The ups and downs of computing
Analog computing persisted until roughly the 1980s. It had the advantage of being fast, and it was well suited to modelling physical phenomena such as the action of springs or the trajectories of projectiles. However, it had limited precision and was eventually superseded by digital computers, although the two coexisted for well over a century.
But digital computing wasn’t always binary. In 1822, Charles Babbage designed the Difference Engine, one of the earliest known digital computers. It used 10-toothed gears to represent the digits 0 through 9 and was operated by turning a handle. In theory the Difference Engine could produce all kinds of mathematical tables, calculate exponents and even find the roots of quadratic equations, although the limits of manufacturing precision at the time meant that nothing more than a small prototype was ever built. Nevertheless, Babbage and his collaborator Ada Lovelace went on to design ever more complicated machines over the following decades, including the Analytical Engine, which would have had memory storage as well as a rudimentary central processing unit.
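The Difference Engine’s underlying trick, as I understand it, was the method of finite differences: once the starting values are set, every further entry in a polynomial table needs nothing but repeated addition, which gears do very well. A rough Python sketch of the idea (my own illustration, using an arbitrary quadratic):

```python
# Tabulate f(x) = x**2 + x + 41 using only additions, the way a difference
# engine would: for any quadratic, the second difference is a constant.
def f(x):
    return x**2 + x + 41

value = f(0)                                  # 41
first_diff = f(1) - f(0)                      # 2
second_diff = (f(2) - f(1)) - (f(1) - f(0))   # 2, and it never changes

for x in range(10):
    print(x, value)          # matches f(x) exactly, yet no multiplication is used
    value += first_diff
    first_diff += second_diff
```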
In 1936, Alan Turing proposed a general-purpose model of computation: a machine that reads and writes symbols from a finite alphabet on a tape, guided by a finite set of states and instructions. This “Turing Machine” was conceived as a mathematical abstraction rather than a practical device, yet it can, in principle, carry out any algorithm a modern computer can run. It was still being taught as the fundamental model of computing when I was an undergraduate in the late 1980s.
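To make the model concrete, here is a toy Turing machine simulator in Python (my own sketch, not Turing’s notation). A transition table maps each (state, symbol) pair to a symbol to write, a head movement and a next state; this particular machine simply flips every bit on its tape and halts at the first blank.

```python
# Transition table: (state, symbol) -> (symbol to write, head move, next state)
rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),   # "_" is the blank symbol
}

tape = list("101101") + ["_"]           # a finite portion of an unbounded tape
state, head = "flip", 0

while state != "halt":
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape))                    # prints 010010_
```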
Despite the mostly theoretical nature of their work, Babbage, Lovelace and Turing laid the foundations for modern digital computing[3]. Moving from relays to vacuum tubes to transistors and semiconductors, computers quickly settled on using the binary values of ones and zeros. The reason, simply, is that it wasn’t feasible to accurately and consistently measure multiple levels of electrical current. It was much easier and faster to just define that a current below a certain threshold would be considered zero, and above another higher threshold would be considered one. From this beginning, binary digital computing took off and became the de facto standard for all computing—so pervasive, it seems to me, that the terms binary and digital have pretty much merged to mean the same thing.
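Just to illustrate that thresholding idea, here is a tiny sketch of my own, with made-up voltage levels rather than any real logic standard:

```python
# Read noisy voltages as bits: anything below LOW counts as 0, anything above
# HIGH counts as 1, and the band in between is treated as indeterminate.
LOW, HIGH = 0.8, 2.0   # illustrative thresholds, in volts

def to_bit(voltage):
    if voltage < LOW:
        return 0
    if voltage > HIGH:
        return 1
    return None        # forbidden zone: neither a clean 0 nor a clean 1

readings = [0.1, 3.2, 0.4, 2.9, 1.5]
print([to_bit(v) for v in readings])   # [0, 1, 0, 1, None]
```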
Interestingly, there was a movement in the mid-20th century to develop a hybrid computing model, leveraging the best features of both analog and digital. The idea was that the digital computer would act as the controller, dividing up a program into smaller tasks and passing some of those tasks to analog sub-processors as required. The digital controller would then collect and consolidate the results. According to Wikipedia, such hybrid analog-digital computers were used by NASA for the Apollo and Space Shuttle programs.
Back to the future
As I consider analog computing, its similarities with quantum computing are striking, although there are notable differences. Analog computing models certain natural phenomena very well (trajectories, orbits and oscillations, for example), but these are still at the macro level. Physicist and Nobel laureate Richard Feynman famously insisted that “nature isn’t classical, dammit,” and that if you want to simulate nature, “you’d better make it quantum mechanical.” Quantum computing, therefore, reflects nature at its most fundamental level, and that is an advantage over both analog and digital computing. The very term quantum denotes a discrete packet of energy or matter, with photons and electrons as prime examples, yet those same particles also exhibit wave-like behaviour; so, in a way, quantum computing brings together the best of both analog and digital.
In a previous post, I described the terms Quantum Advantage and Quantum Supremacy. I would like now to revisit those, and state for the record that—to be clear and avoid hype—we should stop referring to Quantum Supremacy altogether. I think the term implies a superiority that supersedes and displaces all that came before—and this is not justified. Yes, quantum computing is and will be faster than classical computing, both analog and digital, but only at certain mathematical tasks—not for everything.
Quantum will not replace classical. Rather, much like the 20th-century hybrid analog-digital model I mentioned above, the best architectural work being done today is hybrid. Quantum Advantage is best defined as quantum computing lending an advantage to digital computing, with the workload distributed optimally between the different models. On a visit to the TJ Watson Research Center in Yorktown Heights, New York earlier this year, I saw IBM’s vision of bringing together the best of AI, high-performance computing (HPC) and quantum. Large clusters of CPUs, GPUs and QPUs, that is, classical, graphics (the workhorses of AI) and quantum processing units, will come together and interoperate to solve the big problems that none of them could solve alone.
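To give a flavour of what that division of labour looks like in practice, here is a minimal sketch of a hybrid loop, assuming Qiskit 1.x with its StatevectorEstimator primitive and SciPy installed. A classical optimizer repeatedly proposes parameters, the quantum side evaluates an expectation value, and the two iterate toward an answer; the one-qubit circuit and observable here are placeholders of my own, not any particular IBM workload.

```python
from scipy.optimize import minimize
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.quantum_info import SparsePauliOp
from qiskit.primitives import StatevectorEstimator

# Quantum side: a one-parameter circuit and an observable to evaluate.
theta = Parameter("theta")
qc = QuantumCircuit(1)
qc.ry(theta, 0)                      # a single tunable rotation
observable = SparsePauliOp("Z")      # estimate the expectation value <Z>
estimator = StatevectorEstimator()

def cost(params):
    # Each call hands a circuit off to the (simulated) quantum processor.
    job = estimator.run([(qc, observable, params)])
    return float(job.result()[0].data.evs)

# Classical side: an off-the-shelf optimizer steers <Z> toward its minimum
# of -1, reached at theta = pi.
best = minimize(cost, x0=[0.1], method="COBYLA")
print(best.x, best.fun)
```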
Perhaps, in an appropriately quantum sense, I was both wrong and right. Wrong about never having done anything other than binary digital computing before, but right that quantum is still an unprecedented paradigm shift. Maybe history repeats, but it’s not circular. It’s more like an ascending spiral, where the old comes back in new and better ways.
If, like me, you look back with amazement at what technology has done in the last few decades, and you look forward with a bit of bewilderment at the surreal and dreamlike future of quantum computing, let’s give the Finn brothers—as Split Enz—the last word:
Deep in the night it's all so clear
I lie awake with great ideas
Lurking about in no-man's land
I think at last I understand[4]
[1] Not to mention, of course, director Peter Jackson. Also bonus points if, like me, you’re old enough to remember the opera soprano Kiri Te Kanawa.
[2] That might look like a small number, but bear in mind that IBM’s famous CEO, Thomas J. Watson, is reputed to have said in 1943: “I think there is a world market for maybe five computers.” And in 1981, it is claimed that Bill Gates said: “640K ought to be enough for anybody.” Both quotes are very likely apocryphal.
[3] I like to think of Charles Babbage as the godfather of computer hardware, and Ada Lovelace as the godmother of computer software.
[4] From the song “History Never Repeats” on the album “Waiata”, A&M Records, 1981