The ultimate goal of quantum computing is to build devices that dramatically outperform existing classical computers, ushering in a new era of computing. However, quantum computers are extremely difficult to build, and it will be some time before we can manufacture devices that achieve the full predicted power of quantum computation. Despite this, recent results from complexity theory suggest that quantum supremacy over classical devices might be achieved even with devices of intermediate capability, somewhere between classical computers and fully scalable quantum computers. Research teams around the world are now racing to be the first to unambiguously demonstrate quantum supremacy. At the QSI we are focussed on identifying tasks that demonstrate quantum supremacy in the easiest and cheapest way possible. This involves theoretically identifying experimental benchmarks, designing architectures capable of achieving them, and working with experimental teams to bring them to reality.
Key Members: Prof Zhengfeng Ji, Dr Peter Rohde, Dr Chris Ferrie, and Dr Nengkun Yu
Benchmarks, applications, and architectures for intermediate quantum computers
This research creates a bridge between the mathematically rigorous study of algorithms and software and the challenges faced by experimentalists and engineers in the lab. Our goal is to develop interesting computational tasks that are experimentally viable and that lie clearly on the pathway to scaling up to more ambitious quantum computing architectures. We are working with experimental teams to optimize their architectures, informed by the latest developments in the theory of quantum algorithms, complexity, and error correction. Over the next few years our team will be focussed on developing applications built on these results. Key to such questions is classifying the computational complexity of quantum simulation and sampling protocols, an area in which our team has extensive experience.
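To give a flavour of the sampling protocols studied in this line of work, the sketch below brute-forces the output distribution of a small IQP-style circuit of the form H^⊗n · D · H^⊗n |0…0⟩, where D is diagonal, the structure underlying the commuting-circuit hardness results cited below. This is an illustrative toy, not any group's benchmark: the arbitrary random phases stand in for a proper commuting gate layer, and the point is that the brute-force classical approach must track all 2^n amplitudes.

```python
import numpy as np

def walsh_hadamard(v):
    """Unnormalized fast Walsh-Hadamard transform (iterative butterfly)."""
    v = v.copy()
    h, n = 1, len(v)
    while h < n:
        for i in range(0, n, 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v

def iqp_output_distribution(n, rng):
    """Brute-force output distribution of a toy IQP-style circuit.

    The classical cost is dominated by storing and transforming all
    2^n amplitudes -- the exponential overhead that the hardness-of-
    sampling arguments exploit.
    """
    dim = 2 ** n
    # First Hadamard layer: |0...0> -> uniform superposition.
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
    # Random diagonal phases as a stand-in for the commuting gate layer.
    state *= np.exp(1j * rng.uniform(0, 2 * np.pi, dim))
    # Second Hadamard layer, normalized.
    state = walsh_hadamard(state) / np.sqrt(dim)
    return np.abs(state) ** 2

rng = np.random.default_rng(7)
p = iqp_output_distribution(8, rng)  # 2^8 = 256 outcome probabilities
```

Sampling bitstrings from `p` is then trivial classically for 8 qubits, but the 2^n memory footprint of this approach becomes prohibitive at a few dozen qubits, which is why approximate classical simulation of such circuits is the interesting question.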
Verification and validation
As quantum computing experiments become increasingly sophisticated, the verification and validation of prototype devices poses a major challenge for experimental teams. The number of degrees of freedom required to fully describe a quantum computation grows exponentially with the size of a quantum circuit. This exponential growth is one of the key sources of the power of quantum computers, but it has the downside of making device characterization extremely difficult. Fully developing quantum machine learning and compressed sensing techniques will be the key to overcoming this roadblock to developing scalable quantum computers.
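The exponential growth mentioned above can be made concrete with standard parameter counts: a pure n-qubit state has 2^n complex amplitudes (2^(n+1) − 2 real parameters after normalization and global phase), while a general mixed state (density matrix) needs 4^n − 1 real parameters. The helper names below are illustrative, not from any particular library.

```python
def pure_state_params(n):
    """Real parameters for a pure n-qubit state: 2^n complex amplitudes,
    minus one for normalization and one for global phase."""
    return 2 ** (n + 1) - 2

def density_matrix_params(n):
    """Real parameters for an n-qubit density matrix: a Hermitian,
    unit-trace 2^n x 2^n matrix has 4^n - 1 free real parameters."""
    return 4 ** n - 1

for n in (1, 2, 10, 50):
    print(n, pure_state_params(n), density_matrix_params(n))
```

Even at 50 qubits the density-matrix count is ~10^30 parameters, which is why full tomography is hopeless at scale and why sample-efficient methods (such as the sample-optimal tomography result listed below) matter.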
- Sergio Boixo, Sergei Isakov, Vadim Smelyanskiy, Ryan Babbush, Nan Ding, Zhang Jiang, Michael Bremner, John Martinis, and Hartmut Neven. Characterizing Quantum Supremacy in Near-Term Devices. QIP 2017.
- Michael Bremner, Ashley Montanaro, and Dan Shepherd. Average-case complexity versus approximate simulation of commuting quantum computations. Phys. Rev. Lett. (2016) (Earlier version: QIP 2016).
- Jeongwan Haah, Aram Harrow, Zhengfeng Ji, Xiaodi Wu, and Nengkun Yu. Sample-optimal tomography of quantum states. STOC 2016 (Earlier version: QIP 2016).
- Keith Motes, Jonathan Olson, Evan Rabeaux, Jonathan Dowling, S Olson, and Peter Rohde. Linear optical quantum metrology with single photons: exploiting spontaneously generated entanglement to beat the shot-noise limit. Phys. Rev. Lett. (2015).
- Andreas Schreiber, Aurél Gábris, Peter Rohde, Kaisa Laiho, Martin Štefaňák, Václav Potoček, Craig Hamilton, Igor Jex, and Christine Silberhorn. A 2D quantum walk simulation of two-particle dynamics. Science (2012).
- Michael Bremner, Richard Jozsa, and Dan Shepherd. Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy. Proc. R. Soc. A (2010).