These days, the vogue in science and technology is all things quantum, especially in the continuously advancing frontiers of computation and data analysis. Applications such as AI and crunching through Big Data require ever-faster processing of ever-larger datasets. The amount of data generated by companies such as Google and Facebook is already mind-boggling, and it is only set to increase at an exponential rate. However, it is clear that in many contexts, simply adding datacenters and processors is not going to be enough: a complete paradigm shift is required if these kinds of technologies are to become effective in the lives of billions of people around the world.
This is where the quantum realm comes in. At the smallest scales of our universe, the behaviour of the “classical” mechanics governing our everyday lives is replaced by something entirely different; the laws of quantum mechanics are unintuitive and confusing, hiding layers of complexity in ways that simply cannot exist at our macroscopic scales. Cutting-edge scientific advances help us to systematically codify and understand these phenomena; most interestingly, they also open up the possibility of manipulating quantum characteristics in a precise way.
Could such processes be used to our advantage as a computational tool? The answer is yes! In this short series, I aim to carry out a (relatively brief, but representative) traversal of the broad subject that is quantum computing. You can learn about what it is, why it is important, and how it is (and could be) implemented in the real world. This is the first post, and I will be updating links to the following posts in the series as they appear.


In this post, I will introduce the basic motivation behind quantum computing (why it is a good idea, and arguably a necessity), and give a brief overview of its applications.

Hardware: Transistors and the Limits of Moore’s Law

At the heart of electronic computation lies the invention of the transistor: a fundamental circuit element which allows an output current to be controlled by an input current. The transistor’s importance comes from the fact that it is the key to creating “memory”: the state of the current (whether it is “on” or “off”) can be stored at one point in time, and at some later point be read off or changed.
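
To make this a little more concrete, here is a toy sketch in Python (purely illustrative, and not a model of real transistor physics or of any particular circuit): treat each transistor as an ideal switch, build a NAND gate from two of them, and cross-couple two NAND gates to get a 1-bit memory cell that remembers its state until we deliberately change it.

```python
# Toy model: a transistor as an ideal switch, and memory built from switches.
# This is a logical sketch, not a simulation of real transistor physics.

def transistor(gate: bool, source: bool) -> bool:
    """Current flows from source to drain only while the gate input is on."""
    return source and gate

def nand(a: bool, b: bool) -> bool:
    """A NAND gate: the output goes low only when both switches conduct."""
    return not (transistor(a, True) and transistor(b, True))

class SRLatch:
    """Two cross-coupled NAND gates: a 1-bit memory cell."""
    def __init__(self):
        self.q = False
    def write(self, set_: bool, reset: bool):
        # Let the feedback loop between the two gates settle.
        q, q_bar = self.q, not self.q
        for _ in range(4):
            q, q_bar = nand(not set_, q_bar), nand(not reset, q)
        self.q = q
    def read(self) -> bool:
        return self.q

cell = SRLatch()
cell.write(set_=True, reset=False)   # store a 1
print(cell.read())                   # -> True: the bit is remembered
cell.write(set_=False, reset=False)  # leave the inputs idle
print(cell.read())                   # -> True: still remembered
cell.write(set_=False, reset=True)   # clear back to 0
print(cell.read())                   # -> False
```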

From a computational perspective, this is good news: it means that we can use electric currents as our units of arithmetic, and then have a means to store the information encoded in them for later. The binary number system (base 2) is used, basically because we don’t want to be fussy about how strong or weak the current is: just whether it’s there or not. In come the pure mathematicians, who can tell us what is possible with binary arithmetic, what the most efficient computational algorithms are, and so on; the engineers implement this, and everything is fine and dandy for the next few decades.
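
As a tiny illustration (a Python sketch of my own, not anything taken from a real processor design): if all we care about is “current present” versus “current absent”, then each wire carries one bit, and numbers become lists of bits that we can add with the usual carry rules.

```python
# Represent numbers as lists of on/off states (bits) and add them with carries,
# i.e. the base-2 arithmetic that a processor's adder circuits implement.

def to_bits(n: int, width: int = 8) -> list[int]:
    """Encode a non-negative integer as `width` bits, least significant first."""
    return [(n >> i) & 1 for i in range(width)]

def add_bits(a: list[int], b: list[int]) -> list[int]:
    """Ripple-carry addition: every intermediate value is just on or off."""
    result, carry = [], 0
    for x, y in zip(a, b):
        total = x + y + carry
        result.append(total % 2)   # the bit that stays in this position
        carry = total // 2         # the bit that moves to the next position
    return result

a, b = to_bits(22), to_bits(19)
s = add_bits(a, b)
print(sum(bit << i for i, bit in enumerate(s)))  # -> 41
```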

Fast forward to 2017: individual transistors now live on integrated circuits and are tiny, with over 2 billion of them on a single processor. The number of transistors on a chip is a measure of its computational power, so this increase in transistor density over time (which can be seen in the graph below) corresponds to a direct increase in the capabilities of computing units. As we’ve gone from adding two numbers together to calculating the orbits of spacecraft to watching videos of cats falling into boxes, our computers have needed to become more and more powerful; transistor densities have doubled roughly every two years, a phenomenon known as Moore’s Law. We expect this trend to continue in the future, what with the enormous capabilities that the analysis of Big Data will provide…

Moore’s Law: the increase in transistor density on computer chips over time. (From Wikipedia’s article on transistors.)
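
To get a feel for what “doubling every two years” implies, here is a rough back-of-the-envelope projection in Python (illustrative only; the roughly-2-billion starting figure for 2017 comes from the paragraph above, and the rest is just naive exponential growth).

```python
# Naive Moore's Law extrapolation: transistor count doubles every two years.
# Starting figure (~2 billion transistors in 2017) is taken from the text above.

start_year, start_count = 2017, 2_000_000_000
doubling_period = 2  # years

for year in range(start_year, start_year + 21, 4):
    count = start_count * 2 ** ((year - start_year) / doubling_period)
    print(f"{year}: ~{count:.2e} transistors per chip")
```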

Except that this can’t happen indefinitely. At some point, Moore’s Law must break down, simply because silicon transistors deal with current, which is a macroscopic quantity. Eventually, the computing industry will reach a stage where it simply cannot make components any smaller, thanks to the atomic limit and the constraints of Heisenberg’s Uncertainty Principle. In fact, the industry is already experiencing these issues, in the form of subthreshold leakage (transistors conducting when they should be “off”) and quantum tunnelling (electrons crossing barriers into places where they shouldn’t be).
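
A crude way to see why the shrinking must end is the following sketch (the numbers here are my own illustrative assumptions, not figures from this post: a ~14 nm feature size around 2017 and a silicon lattice spacing of roughly 0.5 nm). Doubling the transistor density every two years means shrinking linear feature sizes by a factor of √2 every two years, which hits atom-sized features within a couple of decades.

```python
import math

# Rough scaling argument, not a prediction: doubling density every 2 years
# means linear dimensions shrink by sqrt(2) every 2 years.
# Assumed numbers: ~14 nm features in 2017, silicon lattice spacing ~0.5 nm.

feature_nm, year = 14.0, 2017
atomic_scale_nm = 0.5

while feature_nm > atomic_scale_nm:
    feature_nm /= math.sqrt(2)
    year += 2

print(f"Features reach the atomic scale around {year} "
      f"(~{feature_nm:.2f} nm) under this naive scaling.")
```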

So why can quantum computing solve these problems? In a way, it can’t: the fundamental units of quantum computing (“qubits”) are still subject to the limits imposed by Heisenberg. Quantum computing changes the game in different ways (which will be discussed below), but it is still important to remember the hardware motivation for needing a new model of computation in the first place.

Software: The Quantum Speedup

So we’ve examined why hardware construction is a problem for classical computation, but we’ve also admitted that quantum computation doesn’t actually help us address these difficulties. So where does quantum computing help? Clearly it must be in the actual computational operations that can be carried out…

The characterization of classical computation boils down to (roughly speaking) the idea that all operations:

  1. are carried out on “definite” states, such as 0 or 1, and
  2. are deterministic, that is, they always give the same result given the same input conditions.

It’s somewhat difficult to conceptualize anything that breaks free of the first constraint, although you may correctly suspect that quantum physics provides the answer. The second constraint is seen as a Good Thing: after all, if we construct a computation that is “correct”, we want to be able to keep getting the “correct” answer no matter how many times we run the computation.

And here’s where the balance is struck: by using quantum physics, we sacrifice a bit of constraint 2, but we more than make up for it in the gains we receive from breaking free of constraint 1. Most importantly, this means that quantum computing is not just an extra-fast version of silicon chips: it changes the entire paradigm of algorithms and computation as a whole! We won’t go into too many details right now, as describing exactly how this is possible will be the focus of most of the rest of this series. The takeaway is that we no longer need to worry about the hardware scaling limits discussed above in the same way, because the relationship between computational power and the amount of hardware is completely different for quantum systems; again, more on this later.
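
As a first taste of what “giving up a little of constraint 2 in exchange for escaping constraint 1” looks like, here is a minimal toy in Python (a deliberately simplified sketch, not a real simulation of the machinery we’ll meet later in the series): a classical bit always reads back the same definite value, while a qubit prepared “between” 0 and 1 only gives a definite answer probabilistically when it is measured.

```python
import math
import random

# A classical bit: a definite state, read back deterministically every time.
classical_bit = 1
print([classical_bit for _ in range(5)])    # -> [1, 1, 1, 1, 1]

# A toy qubit: a superposition a|0> + b|1> with a^2 + b^2 = 1 (real amplitudes).
# Measurement gives 0 with probability a^2 and 1 with probability b^2, so
# repeated runs of the *same* preparation can give different answers.
a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # an equal superposition

def measure(a: float, b: float) -> int:
    return 0 if random.random() < a ** 2 else 1

print([measure(a, b) for _ in range(10)])   # e.g. [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```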

One last point to mention: so far, it may seem like I’ve painted an absolutely amazing picture in which, once the “secret to quantum computing” is cracked, all of our computational problems will be solved. This is not strictly true… Quantum computing isn’t a one-size-fits-all kind of thing: the computational problems which allow for a quantum speedup are very specific, and form a relatively tiny subset of all computational problems. We will discuss this in more depth when talking specifically about computational complexity theory.

This seems bad. Why bother spending so much time and effort on something that can only solve a few very specific things? As it happens, however, just a handful of problems, namely

  • Fourier transform
  • Discrete logarithm
  • List search
  • and some others…

underpin a huge proportion of the most demanding computations carried out in the world today (and in the world of the near future, too). Conveniently, these are also exactly the problems for which quantum models of computation offer effective speedups!
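
As one concrete example from that list (a classical Python sketch; the quantum counterpart, Grover’s algorithm, is something we’ll meet later in the series): searching an unstructured list for a marked item classically takes up to N checks in the worst case, whereas Grover’s algorithm needs only on the order of √N queries to the same “oracle”.

```python
import random

# Classical unstructured search: in the worst case we inspect every entry.
N = 1_000_000
marked = random.randrange(N)

def is_marked(i: int) -> bool:   # the "oracle" we are allowed to query
    return i == marked

queries = 0
for i in range(N):
    queries += 1
    if is_marked(i):
        break

print(f"Classical search used {queries} queries (worst case {N}).")
print(f"Grover's algorithm would need on the order of {int(N ** 0.5)} queries.")
```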

So our excitement is saved, and we can safely delve into the inner workings of the quantum computational world…


When the next post in the series goes up, I will place a link to it here. Stay tuned!
