An interview on the state of quantum machine learning with Manuel Rudolph
In this interview we discuss supervised and unsupervised quantum machine learning with Manuel Rudolph. He shares with us his passion for the topic, his caution toward overly optimistic takes, and his point of view on current trends.
Fred from Alqor
Hi Manuel, we recently put together a series of tutorials on supervised machine learning with quantum computers. To round it off, we now wanted to get the opinion of someone who has worked on quantum machine learning for a while, and we are excited to have you around for it. So to start, could you please tell us a little bit about yourself?
Manuel Rudolph
Hi Fred, thanks for having me on! It feels great to be at a point where you interview me, seeing as we started exploring quantum computing together about 3.5 years ago. At the time, I was a student of yours, and we were just playing around with Qiskit and learning about simple algorithms. I then had the opportunity to do my Master’s thesis with you in collaboration with the Honda Research Institute Europe. The idea was that I should go ahead and explore a range of potential near-term applications of quantum computers. After a few months, we landed on quantum machine learning as one of the more promising avenues for noisy devices. Mostly, we thought about generative machine learning algorithms, for two main reasons: Not only are quantum systems naturally capable of encoding complicated measurement distributions and sampling from them, but this is also a domain where classical algorithms have more difficulties than, for example, in data classification. At the time, I read these arguments in publications of Alejandro Perdomo-Ortiz and his fellows at NASA and UCL. This is why, after I finished my thesis, I reached out to Alejandro, who was working at Zapata Computing. I eventually got an internship position in the Quantum AI Team of Zapata, where I worked on topics very similar to my Master’s work: quantum-assisted generative machine learning models combining a quantum circuit component with classical artificial neural networks. After six months at Zapata, I was offered a full-time Quantum Application Scientist position, where I’m still working on quantum circuit generative models. I’m mainly concerned with the trainability aspect of quantum circuits, and with how you would design your quantum model to tackle a given classical dataset. In these (almost) two years at Zapata, I have continued learning so much about different branches of quantum computing, current quantum hardware, software development, and more.
I have even supervised a few interns, which I really enjoy. And my colleagues are amazing! I’m very grateful for how everything has worked out, and I love what I do.
Fred
Indeed, we both have learned a lot over the last few years. How would you summarize what you have learned about supervised machine learning with quantum computers to this day? When we started, the discussion was about advantages in training, but I understand that the field is thinking about very different things nowadays.
Manuel Rudolph
I agree that the focus of the field has shifted. First of all, I want to comment on the “quantum hype”. Many people note that there are inflated expectations in the community and overall a bunch of “bullshit” – excuse the phrase. I personally only see this hype from people who don’t do research themselves, or who do not talk to scientists. Inside the scientific community, I feel that scientists are sometimes inflating the value of their own work, but not the overall expectation of quantum computers and quantum algorithms.
Having said that, this is how the field has changed, in my opinion, with regard to supervised machine learning. Publications and research have shifted from demonstrating slightly improved algorithms on slightly different applications at slightly bigger scales, to really understanding the fundamental properties and implications of the algorithms. On the theoretical side, work by Maria Schuld and colleagues has helped us understand that supervised quantum machine learning models, as we know them now, are effectively kernel methods. Other folks have shown that these kinds of algorithms do not have rigorous guarantees for improved performance on classical data. Then came no-go issues such as barren plateaus. They were first identified by Jarrod McClean’s group at Google Quantum AI, and have since been shown by scientists at Los Alamos National Lab (Marco Cerezo and Zoe Holmes, for example, amongst them) to appear in a wide range of circumstances. This, at least for me personally, made clear that we need paradigm shifts in how we design, train, and apply quantum circuits. Now, we have an additional perspective on QML, namely quantum algorithms for “quantum data”. While this domain might be further from practical application, it is much more likely to provide actual benefits compared to conventional techniques. In any case, there is a lot of work to be done to really understand the potential of these algorithms.
Fred
Let us get back to the hype later, as this is certainly a major point in quantum tech these days. But your description of supervised QML seems to align with some rather pessimistic voices on immediate applications. Is this right? And what do you mean, on the other hand, by “quantum data”? Do you have any examples, and what makes them interesting from your point of view?
Manuel Rudolph
I’ve been told before that I sound pessimistic, but I don’t think that’s true. The thing is that classical techniques are decent at the very least, and usually very good. If nothing else, they are fast and cheap. When you say “immediate” application, what time frame does that refer to? If it refers to this year or next year, then yes, I’m pessimistic. If we allow ourselves some time to learn more and develop smarter techniques for specific problems, I’m very optimistic about the capabilities of quantum computers. Not to mention that the quantum devices need to improve as well. QML for quantum data would be one problem domain where there are barely any other techniques available. Say you have a quantum state prepared in a lab experiment, or you were somehow able to coherently transfer the state from the lab to a quantum computer; how would you attempt to learn the most from it? You can always measure it, that’s true. But you can also process it before measurement, and that is not something you could do classically with a quantum state. So if you are given two quantum states and want to classify whether they are approximately the same state, you could either take a lot of measurements until you are confident enough in your prediction, or you could perform coherent techniques such as SWAP tests or, if you have several copies of the states, Bell measurements. I’m not an expert in this field, but I very much recommend reading anything that Hsin-Yuan (Robert) Huang does at Caltech. He is a PhD student of John Preskill and also collaborates with Google Quantum AI. They are pioneering early applications of quantum devices (note, not necessarily universal quantum computers) for quantum systems. Some people feel like this takes away from quantum computers because it is not an industrial application. But it may become one, and realistically, if we can’t use quantum computers to tackle quantum systems, we shouldn’t expect much for classical problems.
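To make the SWAP test mentioned above a bit more concrete, here is a minimal numerical sketch in plain Python. It does not simulate the circuit itself; it only computes the known statistic the test estimates: the ancilla qubit of a SWAP test reads 0 with probability (1 + |⟨ψ|φ⟩|²)/2, so identical states always give 0 and orthogonal states give 0 half the time. The state vectors and function names are illustrative choices, not from any particular library.

```python
import math

def inner(psi, phi):
    """Complex inner product <psi|phi> for states given as amplitude lists."""
    return sum(a.conjugate() * b for a, b in zip(psi, phi))

def swap_test_p0(psi, phi):
    """Probability that the SWAP-test ancilla is measured in |0>."""
    overlap_sq = abs(inner(psi, phi)) ** 2
    return (1 + overlap_sq) / 2

# Identical states: overlap 1, so the ancilla (essentially) always reads 0.
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]
print(swap_test_p0(plus, plus))

# Orthogonal states: overlap 0, so the ancilla reads 0 half the time.
zero, one = [1.0, 0.0], [0.0, 1.0]
print(swap_test_p0(zero, one))  # 0.5
```

In practice one would run the test many times on hardware and estimate this probability from the measurement statistics, which is where the “take a lot of measurements until you are confident enough” trade-off reappears.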
I’m personally very excited about their research, and I hope that we can transfer to other QML domains what we learn in this domain. For unsupervised learning, and specifically generative modeling, I see things a little differently. You could still go through quantum data first, but even for classical data, I can see a conceptual advantage for quantum computers which we can aim to manifest.
Fred
This point brings us nicely to unsupervised learning. But before we dive into any kind of advantage, I would love a quick description from you of a typical unsupervised learning task and how it differs from supervised learning. Could you provide one?
Manuel Rudolph
In the context of discriminative tasks, such as classification, a supervised algorithm requires data with associated labels. A label for an image could be a person saying: “This image represents a cat”, or “This other image represents a dog”. In such a case, the supervised algorithm learns the mapping from input images to the labels “cat” and “dog”. If the algorithm were unsupervised, it would have to first uncover that there are two major families to which the images belong, and then group the images correctly. This case is a lot harder, to my understanding, but unlabeled data is much more abundant.
In the context of generative algorithms, this is a little different. The goal of a generative algorithm is to take in data and produce new data that has similar properties to the data you gave it. This could mean generating new images of cats and dogs. If the data comes with labels, it can make the learning easier, as the algorithm is being supported by information that we know about the data. Additionally, we can then ask a trained model to produce images of only cats, or only dogs. But in my understanding, the conceptual difference between supervised and unsupervised algorithms is more significant in the context of discriminative algorithms such as classifiers.
Fred
In simplified words, your research focus is then how quantum technologies could help generate new cats and dogs? How would this work, and what is the field mostly working on?
Manuel Rudolph
That’s right, I focus on a type of algorithm that could eventually be used to generate images of cats and dogs, but also very different kinds of data. There are endless possibilities of what you could do with a reliable algorithm that you provide with data you care about, and that keeps producing new data with similar properties. The algorithm that I mostly study is the Quantum Circuit Born Machine, QCBM in short. This is a rather involved name, but in essence, you use a quantum circuit to prepare a complicated quantum state, and then you measure this state. The measurements are the generated data, and the probabilities of the measurement outcomes are encoded in the quantum state. This means that you need to figure out which quantum circuit to run to prepare a quantum state with the desired measurement distribution. The machine learning approach is that you parametrize gates in the quantum circuit, and train the parameters to minimize a loss function. This is essentially how quantum circuit classifiers are trained as well; mainly the loss function differs. Now, why would you use quantum circuits for this generative modeling task? Intuitively, quantum systems are the prototypical generative models. They are very good at encoding complicated distributions, and you can get random samples from those distributions by measuring the quantum system. On a technical level, researchers are working on a range of different aspects. Some people, for example Jens Eisert’s group in Berlin, work on theoretical proofs that there exist probability distributions which a quantum model can learn, but a classical model provably cannot. This would clearly be a reason to use quantum generative models instead of classical ones. I personally work more on the practical aspects of the QCBM algorithm: for example, how to reliably and efficiently train it, and how to structure the quantum circuit for a specific generative modeling task.
Additionally, we at Zapata want to pin down which kinds of real world datasets quantum algorithms could be useful for.
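The QCBM recipe Manuel describes, parametrize circuit gates, measure, and train the parameters so the measurement distribution matches the data, can be sketched in a few lines. The toy below is an illustrative sketch, not Zapata's implementation: it classically simulates a two-qubit circuit (an RY rotation on each qubit followed by a CNOT), treats the output probabilities as the model distribution, and trains the two angles by finite-difference gradient descent on a KL-divergence loss toward a Bell-like target over the outcomes 00 and 11. The circuit layout, loss, and optimizer are all assumptions made for the sake of a small runnable example.

```python
import math

def born_probs(a, b):
    """Measurement probabilities of CNOT . (RY(a) x RY(b)) |00>."""
    ca, sa = math.cos(a / 2), math.sin(a / 2)
    cb, sb = math.cos(b / 2), math.sin(b / 2)
    # Amplitudes over |00>, |01>, |10>, |11>; the CNOT (control = qubit 0)
    # swaps the |10> and |11> amplitudes of the product state.
    amps = [ca * cb, ca * sb, sa * sb, sa * cb]
    return [x * x for x in amps]

def kl_loss(params, target):
    """KL divergence from the target to the model distribution."""
    probs = born_probs(*params)
    return sum(t * math.log(t / (p + 1e-12))
               for t, p in zip(target, probs) if t > 0)

def train(target, steps=300, lr=0.2, eps=1e-4):
    """Finite-difference gradient descent on the circuit angles."""
    params = [0.3, 0.2]  # arbitrary starting angles
    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            shifted = params.copy()
            shifted[i] += eps
            grad.append((kl_loss(shifted, target) - kl_loss(params, target)) / eps)
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

target = [0.5, 0.0, 0.0, 0.5]  # aim for an even mixture of 00 and 11
angles = train(target)
print([round(p, 3) for p in born_probs(*angles)])
```

After training, the model distribution concentrates on 00 and 11, close to fifty-fifty. On real hardware the probabilities would of course not be available exactly; they would be estimated from measurement shots, which is one of the trainability difficulties mentioned above.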
Fred
I am really looking forward to seeing future developments in this direction. As a last question, I wanted to get back to your comments on the hype in the field. Everyone involved knows that the promises being made are often super optimistic, to say the least. Experts might cringe but just ignore it. But how should newcomers or non-technical people navigate the field, in your opinion?
Manuel Rudolph
In my opinion, the key is to listen to talks by knowledgeable people who are also great educators. They will tell you about the frontiers they are working on, and likely embed their work into a broader context. It is important that they are trying to reach a general audience and don’t speak too technically, but also don’t just wave their hands. I personally really enjoy speakers who help me embrace the context of their work, and even if I don’t understand everything, I go out feeling good about it. The difficulty comes in when you, as a newcomer, have to decide who the authentic experts are. As a rule of thumb, you could look up invited speakers at quantum computing / quantum machine learning / quantum information conferences on YouTube, and see if you feel like you can learn with them. Scientists are rather allergic to hype and will likely invite other scientists whom they respect. To go a step further, most scientists are very happy to talk at your group or organization if you reach out to them. One thing that Zapata does, and that I really respect, is that there is a team that focuses on projects with other organizations that are “quantum curious”. Together, they work out what a future use-case of quantum computers might be, and what it would take to get there. I think the outcomes of sincere projects like these are just as insightful for the organization as for the scientific community.
I’m very excited about the future of quantum computing, and I’m looking forward to contributing to it. Let’s all be “quantum curious”, and embrace that there is much to learn.
Fred
Thanks a lot, Manu, for sharing your opinion and your curiosity with us. It was, as always, a pleasure, and I am hopeful that our readers got some valuable information.