Podcast | Using Copilot for Coding and the Path to a Quantum Supercomputer — with Microsoft Azure Quantum

Microsoft Azure Quantum has provided access to quantum computers in the cloud for about four years. A lot has changed in that time, including the generative AI revolution. It’s now possible to create quantum circuits with the help of Copilot, and users can work on advanced scientific problems in Azure Quantum Elements by combining quantum computing, high-performance computing (HPC) clusters, and AI. Microsoft is also making advances in logical qubits in partnership with Quantinuum as well as with its own topological qubits. Join host Konstantinos Karagiannis for a wide-ranging chat with Dr. Krysta Svore, who helped build Microsoft Azure Quantum.

Guest: Dr. Krysta Svore from Microsoft

The Post-Quantum World on Apple Podcasts

Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organizations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.

Transcript

Krysta Svore: We have this unique capability of bringing HPC, AI and quantum computing all together. We also have the most reliable logical qubits on record coming from the quantum computing side. You have very advanced capabilities coming across all these technologies, integrated together. On top, you can drive and discover with Copilot.

Konstantinos Karagiannis: Microsoft has provided access to quantum computers in the cloud for about four years. A lot has changed in that time, including the generative AI revolution. It’s now possible to create quantum circuits with the help of Copilot. Users can also work on advanced scientific problems in Azure Quantum Elements. We cover all that, along with the latest logical and topological qubits, in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.

Our guest today is the distinguished engineer and VP of quantum software at Microsoft, Dr. Krysta Svore. Welcome to the show.

Krysta Svore: Thank you so much for having me.

Konstantinos Karagiannis: Can you start by giving a general introduction to Azure Quantum? I believe you had a thing or 200 to do with how it took shape.

Krysta Svore: Absolutely. Why do we care about something like Azure Quantum? Why do we care about quantum computers? It’s that we want to accelerate scientific discovery. These machines, at scale, along with HPC and AI, promise to help us solve problems we can’t otherwise solve with our classical digital technology alone or even with AI on top.

It’s important that we have an integrated set of capabilities, an integrated platform that covers HPC — your cloud compute integrated alongside AI and quantum computing. You want quantum machines in the mix to help you solve some of the most challenging pieces of these workloads, where you’re looking at chemistry and materials science. At the core, we’re trying to understand quantum mechanics. What are the electrons doing? How are they dancing? How are they interacting? We want to do that very accurately. That’s where the quantum computer comes in. With Azure Quantum, it’s all about making it easier for scientists, researchers, people all around the world to perform scientific discovery. We want a platform that enables people globally to take AI alongside HPC and quantum and solve new things with it that we can’t do otherwise.

Konstantinos Karagiannis: It’s a great answer, because when I think about cloud access to quantum computers, I always think of it as building what people are already used to but just adding in quantum. It’ll be one more thing you pull down: “Do I need six GPUs?” “Will I also need a QPU?” — that kind of thing. That’s why I always love that approach. By now, speaking of AI, most people have heard of Copilot and how it’s changing how we interact with our computers personally. How does it work with Azure Quantum?

Krysta Svore: When we think about these new types of workloads we want to be able to do, where we’re taking the HPC, AI and quantum together, ultimately, you want a capability that helps you put that workload together. Let’s say you’re a chemist and you have no idea how to program a quantum computer, or you’re an expert in quantum computing or quantum information science, and you have no idea the problems chemists might be facing. The idea is that we want to make it easy. No matter where you sit in this diversity of fields, we want to make it possible for you to bring that QPU into the fold to accelerate the right problems with quantum computing.

Part of that is learning: Where should you use a quantum computer? How should you program it? What functions or subroutines, what actions, should you ask the quantum computer to go do? Copilot in Azure Quantum helps ground you in the capabilities of quantum computers: How do you program them? It can help you write your first programs in Q#, for example.
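
For readers curious what such a first program looks like, here is a minimal Q# sketch of the kind of circuit Copilot can help you write: preparing and measuring a Bell pair. The operation name is just for illustration.

```qsharp
// A minimal first Q# program of the sort Copilot can help generate:
// prepare two entangled qubits (a Bell pair) and measure them.
operation MeasureBellPair() : (Result, Result) {
    use (control, target) = (Qubit(), Qubit());
    H(control);              // put the first qubit into superposition
    CNOT(control, target);   // entangle the second qubit with the first
    let results = (M(control), M(target));  // the two outcomes always agree
    Reset(control);          // return qubits to |0⟩ before releasing them
    Reset(target);
    return results;
}
```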

At the same time, maybe you’re more of a quantum computing expert: You have this great new quantum algorithm or quantum idea, and you want to see how it could help in the space of chemistry or materials science. Copilot gives you a more natural-language-based interface to ask the right questions about how those fit together: I want to understand more about the chemical nature of something, the ground states, the energy spectrum, etc.

It’s about opening up that communication, those capabilities, across disciplines, but doing that from a natural-language format. Copilot in Azure Quantum today enables you to write programs across these different workloads to better understand quantum computing on the one hand and, say, chemistry and materials science on the other. It’s specially trained further on this set of information.

Konstantinos Karagiannis: That’s great. I’m glad you keep bringing up the science aspect, because let’s all remember where these machines came from. They were originally dreamed up by Feynman to simulate the quantum universe we live in. I see so few people actually using them to do anything even remotely in the scientific realm like that. Everyone’s, of course, interested in real-world use cases — optimization things — but I do want to lift the veil and peek a little deeper into the universe. That’s pretty exciting.

On that note, how does Azure Quantum Elements expand what use cases are possible?

Krysta Svore: Let’s first dig into the point you just made. These machines are quantum at their core. It’s called a quantum computer because inside, it’s taking advantage of the principles of quantum mechanics, depending on exactly how you’re building it. In all cases, though, it’s taking advantage of quantum mechanics to do that compute, to store information, and then ultimately to output a solution, an answer, as a set of classical bits.

We do need to understand, how are they different than our digital classical machines, and what are they going to be good at? We don’t want to ask a quantum computer to solve just anything. We want to ask it to solve the types of problems that are hard for our classical computers today, for AI models today and so on. It’s important to understand what they’re good at.

When you look under the hood, just as you said, and as Feynman originally laid out years ago, they can be good at understanding quantum mechanics. It’s a quantum mechanical system at its core that can understand quantum mechanics. This means we can understand and evolve things governed by Hamiltonians and Schrödinger’s equation. These are the equations that govern much of nature. The quantum mechanical world is spelled out and specified by quantum mechanics, by a Hamiltonian, by Schrödinger’s equation. We want to understand what happens over time in a material system or to a molecule in a chemical reaction. At the core, we need quantum mechanics underneath that.
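
As a quick reference for the equations Krysta mentions: the time-dependent Schrödinger equation, with Hamiltonian H describing the system, and the time evolution it implies.

```latex
i\hbar \,\frac{\partial}{\partial t}\,\lvert \psi(t) \rangle = H \,\lvert \psi(t) \rangle
\qquad\Longrightarrow\qquad
\lvert \psi(t) \rangle = e^{-iHt/\hbar}\,\lvert \psi(0) \rangle
```

Approximating that time evolution for a chemically interesting Hamiltonian is the kind of workload a quantum computer handles natively and a classical computer generally cannot do efficiently.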

Then, when we think about the fact that these machines are well-suited to understanding quantum mechanical problems, we can look and see, where are the hard problems here for classical computers? It turns out, especially when electrons are correlated in a system, this is hard for classical computers for the most part. We want to then run that part of the solution on a quantum machine. It turns out this is in the space of chemistry, for example, and materials science. When we think about catalytic chemical reactions, understanding what happens in that reaction to very high accuracy, for a number of cases, we need the quantum machine.

Similarly, in the space of materials, we want to better understand exotic properties of materials. We want to know if a material could superconduct at higher temperature. We need to understand these materials models and what’s happening at the electron level in these systems. This is what we want to use a quantum computer for.

I raise that in the context of your question around Azure Quantum Elements, because what is Elements? Even the name Elements speaks to something about the periodic table and the atoms involved and, potentially, the electrons involved. It’s about bringing HPC, AI and quantum computing together and enabling scientists globally to make more accelerated discoveries in the space of chemistry, materials science and, ultimately, science more broadly.

Azure Quantum Elements is about bringing these different technological capabilities together: cloud compute, advanced AI models, the ability to simulate different aspects of physics more quickly by using cloud compute, and then a quantum computing capability integrated alongside for the pieces of the problem that are hard for the classical compute and AI models. I think of it as layering — Azure Quantum Elements is a stack of different solutions, different capabilities, that you string together, that you bring together, to solve a problem. Ultimately, it’s a platform to accelerate this discovery.

A great example is what we did with Pacific Northwest National Laboratory here, where we sought to find a better makeup of a battery. You want to replace, ultimately, the lithium with something that is a less scarce resource, that has better properties in terms of, say, compostability or otherwise. With PNNL, we used this Azure Quantum Elements capability, where we were able to generate 32 million candidates for this potential new battery material.

Then, through a series of computations — in this case, we did not use the quantum computer for this problem, but we used a series of AI models and accelerated simulations that were ported to the cloud environment to make them very fast at understanding the physics in a given material. Through a series of these, in 80 hours, we were able to take that 32 million–candidate set and drive it down to 20 candidates. We then chose one with Pacific Northwest National Laboratory to synthesize in the lab. It works as a battery, and it has 70% less lithium than batteries today. This is a great discovery through the Azure Quantum Elements pipeline.

Konstantinos Karagiannis: That’s well-described. That’s obviously what Feynman was thinking of — simulate reality — and he just probably couldn’t dream of the AI assistance he’d get along the way. I know we’re going to have all sorts of issues as we use green energy: How do we store this? We do need to look for other materials. That’s going to be hugely important going forward.

Are there other ways Azure Quantum differs from other cloud platforms? So far, you’ve already given examples that show that workload: a little bit of AI here, HPC here, quantum here. Do you have any other examples of how it’s different from other cloud platforms?

Krysta Svore: We want to make sure that it is a platform, bringing the different tools, the different models, the different pieces together. We want the best-in-class capabilities across the platform for our scientists and researchers and for people worldwide. It’s about bringing these things together and making it much easier, bringing that acceleration forward. I mentioned going from 32 million candidates down to 20 in the course of 80 hours in the battery example with PNNL. That is what we want to bring forward in this platform. The platform needs to be efficient. It needs to bring the right models forward. It needs to be able to operate over many types of problems you might want to solve.

We have a number of pieces there that come together: We have a generative AI capability. We have an accelerated DFT (density functional theory) model, a method often used in chemistry and physics to understand the underlying electronic structure. Then, of course, we also have quantum computers you can plug into the mix. On the quantum computing side, no matter what quantum machine you’re looking at in the world, these machines can’t yet give you something you can’t do classically. But we are very much progressing through the levels of capability of quantum machines, getting to the point where you will see scientific advantage.

That means seeing advantage from getting a solution from the quantum machine for scientific problems. We are fast approaching the point where we will see that advantage coming from the quantum machine being in the mix. That will give you the ability to do something you can’t otherwise do classically. This platform — Azure Quantum plus Azure Quantum Elements — is unique among platforms in bringing HPC, AI and quantum computing all together. We also have the most reliable logical qubits on record coming from the quantum computing side. You have very advanced capabilities coming across all these technologies, integrated together. Then, on top, you can drive and discover with Copilot.

Konstantinos Karagiannis: Great. We’re going to talk about a couple of these newer announcements and newer achievements. We’re going to get to logical in a little bit. Can we talk first about the recent work with Photonic?

Krysta Svore: Absolutely. When we think about our approach and what we’re doing in the quantum ecosystem, you need different pieces. Before jumping into what we did there, it’s good to understand there are different pieces, across different technologies, needed in the quantum ecosystem. You have quantum computation, which we’ve been focused on in our initial discussion here, where you’re wanting to compute something. You need to be able to operate that, much like we do a classical computer in many ways: You need to be able to program it. You need to be able to ask for a solution. That’s quantum computing — we’re going to ask it to store and process information just like we do classically. But when we look at our classical compute ecosystem, we also have networking.

You and I are discussing right now, via the internet, a very important network that all of us use all the time, every day. It’s a classical network — a classical communication mechanism. In the quantum ecosystem, you also want to be able to explore, how would you share quantum data, quantum information? Not just classical information, but quantum information. Interestingly, there are sensors in the world that collect quantum information directly — not just classical bits, but quantum bits of information. We need to be able to transport this information. We need to be able to share this information, distribute this information. We call this quantum networking: the ability to communicate over a quantum network, a quantum channel, share quantum information instead of just classical information.

This won’t replace the classical internet. This is not something that gets rid of the classical internet, just like quantum computers will not replace your classical computers. It will be an additional capability alongside classical compute and integrated with classical compute. Something similar is true of the quantum internet and quantum networking. We still need our classical internet. We still need classical communication and classical networking. But this would integrate alongside and offer new and different capabilities we’re still discovering — exactly what we could do if we’re able to share in what’s called entanglement. If we can share quantum information and quantum entanglement across a network, it could open up a huge number of possibilities, including what you might think of as a quantum internet.

With Photonic, we are looking at how to share this entanglement: How would you distribute what we call entanglement, this quantum information? How would you set up an early version of that network and then grow that network to the point where that’s as big as the quantum internet? First, you want to show that from point to point, you can establish that quantum connection and share quantum information from point A to point B. Then you want to be able to enable a more complex network, for example, where you share from, say, a hub to many endpoints.

Then, from there, you want to build out and connect many of these stars together into a more complex network, much like our classical internet. That would be a quantum internet. With Photonic, we’re working on these steps — going from this point-to-point capability to a more complex hub-and-spoke capability to, ultimately, a quantum internet.

What’s neat is, this technology can also be used to connect quantum computers. We think about cloud computing, data centers, distributed classical computing. This is all enabled by networks. Data centers are full of a lot of networking technology. With a quantum computer, if we want to connect it to another quantum computer, we’re also going to need a quantum network there. What we’ve done with Photonic most recently is that, in collaboration, we’ve been able to establish that point-to-point capability.

Photonic has a system at each point, separated by roughly 40 meters of telecom fiber — an everyday operating environment, the type of fiber we already use in data centers today. Across those 40 meters, they were able to establish quantum communication between the two points, share entanglement between them, and then, ultimately, process and perform an operation between those two points over that network. It’s a great step forward, and we’re excited about that capability Photonic recently demonstrated.
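
To make "share entanglement between two points, then perform an operation over it" concrete, here is a textbook quantum-teleportation sketch in Q#. It is not Photonic's actual protocol, just the standard primitive showing how a shared entangled pair plus two classical bits can move a quantum state from one endpoint to the other.

```qsharp
// Toy illustration of using shared entanglement between two endpoints.
// `left` and `right` stand in for qubits at the two ends of a fiber link.
operation TeleportMessage(msg : Qubit, left : Qubit, right : Qubit) : Unit {
    // Step 1: establish shared entanglement between the endpoints.
    H(left);
    CNOT(left, right);

    // Step 2: at the sending end, measure the message qubit against
    // the local half of the entangled pair (a Bell measurement).
    CNOT(msg, left);
    H(msg);
    let m1 = M(msg);
    let m2 = M(left);

    // Step 3: send the two classical bits to the far end and apply
    // corrections there; `right` now holds the original message state.
    if m2 == One { X(right); }
    if m1 == One { Z(right); }
}
```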

Konstantinos Karagiannis: You visualize that as, of course, enabling something like connecting quantum computers to operate as one.

Krysta Svore: Yeah. For example, it can be used to realize what we would call distributed quantum computing, which, in that scenario, would be exactly as you’re saying: You could have a quantum computer that is housed in a single unit. That’s much like the approach we also have at Microsoft called topological quantum computing, where all your qubits would sit on a single chip. But another way to realize quantum computing at scale, when you need many physical qubits — upward, say, of a million — is to have many chips, or smaller units, and connect them with a network. This is another way to realize distributed quantum computing with this quantum network.

Konstantinos Karagiannis: You tease the topological word, and we’ll get to that in a bit. Everyone just has to wait for the juicy stuff there. What business-use-case impact do you foresee with this distributed entanglement and the long-distance communication realm? There’s a potential, like you said, to interact with those physical layers. Are there any use cases in which maybe an end user won’t even know they’re working with a quantum computer or something because it’s involved in the mix, when it comes to communication or anything like that?

Krysta Svore: At some point in time, especially with the ability to have something like Copilot set up workflows for you, over time, you can look into the future and imagine that you’re only communicating with your natural language. Maybe behind the scenes, you don’t need to know what processor is running what part of your workload. This would be true even of just classical processors, but no doubt also maybe with a quantum processor in the mix. It may be the case in the future — and I definitely think that’s true — that over time, you’ll just ask the platform to put together the solution it needs to go do, and it’ll do it for you.

Right now, in terms of the business cases, it’s building on what we’ve already talked about. Ultimately, we want to be able to solve problems that we can’t solve classically, getting solutions that offer advantage over what we can do with classical capabilities. You could divide those maybe into networking and computing. But at some level, it’s, what can we do within this quantum ecosystem that we can’t do classically? Indeed, this network, as we just discussed, could enable distributed quantum computing — in other words, another path to build up a larger quantum computing capability.

Ultimately, we need quantum computing at scale. That means we need a lot of qubits — a very large quantum computer, ultimately, to perform, for example, high-accuracy calculations for things like catalysis. Take nitrogen fixation, the process behind artificial-fertilizer production: If we want to understand it in enough detail to replicate what is happening with microbes in the soil, so we can produce fertilizer industrially, locally, in different regions of the world, we need high-accuracy solutions, and that needs a lot of qubits — upward of a million physical qubits.

This network could enable a path to scale up to a million physical qubits by enabling a multimodule architecture to get there. At the same time, as I mentioned, it can unlock something like a quantum internet — a way to share quantum information in new and different ways. There are quantum sensors in the world that are extracting quantum information. This is a way to now take that information, learn from it.

We don’t know. We’re just scratching the surface on what distributing entanglement, sharing this quantum information, means. It’s a totally new resource that we know can help with quantum computing and building a quantum internet. But there’s a lot of opportunity there to figure out how sharing this entanglement can help others drive forward new and different ideas we haven’t thought of because we haven’t had it right at our fingertips. But ultimately, you can use it, potentially, for different security capabilities, different information-sharing capabilities. You can potentially use it so you have better signatures or understanding of what’s happened to that information as it’s been shared. But ultimately, also, you can use it for quantum computing.

Konstantinos Karagiannis: On that note, when I think of the march toward fault-tolerant quantum computing, I still hold a place in my nerdy heart for topological. I’ve been talking about topological in public as long as your team has been working on it. I always bring it up. It has such promise. Can you discuss the status of that effort along with Microsoft’s roadmap for achieving quantum at scale?

Krysta Svore: I’m glad to hear you have a love for this area. It’s an absolutely fascinating and beautiful theory. We’ve engineered beautiful devices to match this beautiful theory. Why do I say it’s so beautiful? It’s this approach to quantum at scale. Why are we doing it? Why do we care about topological quantum computing? We want to get to a million physical qubits and beyond because that is critical for being able to unlock solutions across, as I mentioned, catalysis, materials design, materials prediction. We want to understand what’s happening at that subatomic level in detail so we can, quite literally, feed our planet and save our planet.

We’re talking about solving problems we all should care about deeply. We need to understand, how can we help extract carbon dioxide from our atmosphere? How can we do that efficiently? We need to understand the reactions at hand there, and we need them in detail to mimic industrially. Getting it in detail to high accuracy needs a quantum computer. But that quantum computer needs to be large enough to hold those calculations and output the solution we need to high accuracy. We need upward of a million physical qubits. When we look at architectures for enabling that, we just talked about quantum networking being one path to enable a more modular design. But another path is to consider a single module and to put everything on a single chip for that million-qubit scale.

The topological qubit promises that type of path. We can fit millions of them on something the size of the chip on your credit card. They are small enough to fit there, but they’re not so small that you can’t wire them up and control them, which is super important. We have to be able to program this device. We have to be able to get, literally, a computer program onto it, operate it and extract a solution. It can’t be so small that you can’t fit wiring there.

There are some qubit designs that are quite small. They’re very hard to wire and thus very hard to control. These topological qubits sit in the sweet spot of being the right size. They’re controllable digitally, making it an easier control stack. They’re fast. They’re fast enough to do a computation such that we can get a solution in a month or less. We don’t want to be waiting millions of years for the quantum computer to produce a solution. We need these solutions in a matter of weeks or days, but not years or even longer.

Konstantinos Karagiannis: This sounds like the opening to a disaster movie: “We need the solution now.”

Krysta Svore: That’s right: “We need it now to save our planet and save humanity!” It has the right speed, controllability and size. In addition, it has this intrinsic ability to handle noise. By design, topological qubits can combat much of the noise that qubits face, and they can do that in the physical device. This is different than other systems. Other qubit designs require many extra qubits to help protect them against the noise of the environment. Qubits like to interact with their environment, and that means they become too noisy to trust the solution. We need reliable qubits. We need them to be protected from the environment enough that they don’t collapse or give us a wrong answer.

The topological qubit not only has this right speed, size and controllability, but additionally, it has this intrinsic property that it can handle quite a bit of noise just by its physical design. It is a great thing because it means you have fewer physical qubits required for a given solution ultimately. You can create what we call better logical qubits here with fewer resources required to do so.

Our topological quantum computing work is focused on enabling a quantum machine at scale, doing that efficiently and enabling that to be in a small form factor. You don’t want the thing to be the size of a football field. That’s very hard to control. It is, as I said, about the size of the chip on your credit card. We have made a series of advances here, where we have devices that show the properties required. This is amazing. Part of the beauty here is that it’s a new phase of matter.

These qubits require understanding exotic physics and showing some of this exotic physics for the first time. We did that in the last year, where we were able to show that we can quite literally engineer, drive, this device into this new phase of matter called a topological phase of matter. Then, from there, we need to be able to control that so we can store information in the device and operate the device, and that’s what we’ve been able to do more recently: read out these devices. Now it’s about bringing these various components together to be able to operate it fully as a qubit. That’s what we’re working on.

Konstantinos Karagiannis: Then we’ll need multiples of them, of course.

Krysta Svore: Yes. Then you indeed start to stamp it out, tile it out, to build up to a larger and larger quantum machine there.

Konstantinos Karagiannis: Quantum breadboarding. You’re plugging it all together. It is a bizarre state of matter.

Do you have any sense of timeline for this march to scale? How long before you expect to see it? I realize you can’t say, “Ten qubits this week,” but just a general idea.

Krysta Svore: As you probably know, putting innovation on a timeline is tricky. At the same time, we are looking at years instead of decades to get to the point where we have a scaled machine here.

Konstantinos Karagiannis: The beauty is, if I’m understanding correctly, that the number of qubits you get is pretty much going to be the number of qubits you can use, as opposed to the thousand-to-one ratios or so that would be required for transmons.

Krysta Svore: Indeed, the overhead to reach that scale of quantum machine is less for topological qubits. That is why it’s a very important and promising approach. We love the topological qubit because it promises less overhead needed to correct these errors, as I mentioned, intrinsically in the physical design. This is the whole idea of topology — that you could stretch and perturb the system and it won’t incur an error. Here, using it as a quantum machine, we do have an advantage, because the physical device itself promises to protect some of the information we want to compute and store in it. That’s important. It results, as you said, in less overhead for that ultimate scaled architecture and scaled machine.

Konstantinos Karagiannis: That’s great.

We could switch gears quickly to a qubit that is running right now. There have been a few logical-qubit announcements this year, including the Quantinuum achievement with Microsoft. That was pretty amazing. It was a 7.5-to-1 physical-to-logical ratio. Can you talk briefly about that and how it might have changed your assessment of a timeline for error-corrected systems appearing in general?

Krysta Svore: Absolutely. We can look at the capabilities of a quantum computer in rather simple terms. At first, you have qubits available in a quantum computer, and you operate them in that form. You directly store information in the physical qubits themselves. You compute on them. We call this noisy intermediate-scale quantum computing, or NISQ. This is level one, where it’s foundational. You’re operating on those qubits in their physical form — not doing anything extra on top to get better results, maybe a little classical processing or something like this, but ultimately, you’re operating on the physical qubits themselves, and that’s level one.

The thing is, in doing that, these qubits are noisy. Physical qubits are noisy. You’re limited in how much compute you can do with them while still getting a reliable answer. Ultimately, we need reliable solutions from our quantum computers. We can’t get deeper, more complex computation and a reliable solution from NISQ machines. From level one quantum computers, we have to advance.

At level two, the next level is to use these physical qubits more as a unit. I like to think of it as what I call qubit virtualization — taking a pool of physical qubits, some number of physical qubits, and treating them like a single virtual qubit. That virtual qubit is then what you use in your algorithm, what you use to get your solution: You take a pool of physical qubits, you treat them as a unit, you operate them as a unit, and this virtual qubit is what’s used at the algorithm or program level.

You might also hear of these as logical qubits. Level two is about logical qubits. It’s about reliable quantum computing, where now, by using this virtual, or logical, qubit, I can get more reliable solutions out. Maybe in this level, I imagine driving to, say, 100 logical qubits or more. At 100 logical qubits, when those qubits are good enough, you can see advantage for scientific problems by using this quantum machine.
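
As a toy picture of that qubit virtualization, here is a Q# sketch of the simplest possible encoding: a three-qubit bit-flip repetition code decoded by majority vote. Real logical qubits, including the ones Microsoft and Quantinuum demonstrated, use far more capable error-correcting codes; this only shows the idea of operating a pool of physical qubits as one unit.

```qsharp
// Encode one logical bit value into three physical qubits, then decode
// by majority vote. Any single bit flip on a physical qubit is outvoted.
operation RoundTripLogicalBit(encodeOne : Bool) : Bool {
    use block = Qubit[3];
    if encodeOne {
        X(block[0]);
    }
    // Spread the value across the block: |0⟩ ↦ |000⟩, |1⟩ ↦ |111⟩.
    CNOT(block[0], block[1]);
    CNOT(block[0], block[2]);

    // (On real hardware, noise would act on the physical qubits here.)

    // Measure each physical qubit and take the majority vote.
    mutable ones = 0;
    for q in block {
        if M(q) == One {
            set ones += 1;
        }
        Reset(q);
    }
    return ones >= 2;
}
```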

Then you graduate to level three. Level three is all about scale. There, you’re looking at thousands of logical qubits. That’s what you need for problems, for example, in catalysis. For more of your commercial, industrial-style problems at level three, you’re going to need upward of thousands of logical qubits that are very good and enable very deep computation. That’s scale. That’s level three.

Today, all quantum computers are at level one, except now with Quantinuum, as you mentioned, in April, we shared that we were able to show the most reliable logical qubits on record. We showed four logical qubits that are 800 times better than their physical counterparts. We have graduated from level one to level two. We are in the era of reliable quantum computing now. Now, with that said, we still want to advance as rapidly as we can toward scale because at scale, we unlock these commercial, industrial problems that can help us, as we talked about, feed and save our planet.

But with that said, at level two, with 100 logical qubits — even fewer than that, at 50 to 70 logical qubits — we can show things we can’t show classically. We can, at 100 logical qubits, have solutions coming from that quantum machine for scientific problems we can’t get classically. This is a very exciting period in the next couple of years, where we look at advancing to 100 logical qubits and beyond and showing things with those logical qubits that we cannot achieve with classical machines alone.

Konstantinos Karagiannis: It’s exciting because it allows us to make a pretty strong prediction, even if we don’t know exactly which paper is going to appear in the future that shows advantage: Once we have a machine that can’t be simulated by an emulator, you have to have advantage.

Krysta Svore: It’s why we’re building these. We want quantum computers because we want them to be advantageous for a set of important, useful problems.

Konstantinos Karagiannis: It’s, like, if you have a million-horsepower engine, I can guarantee it’s going to break some record. I don’t know, but it will be something. Are you exploring any use cases where, even though we still have more unreliable qubits, you believe you might see some early quantum advantage anyway?

Krysta Svore: Absolutely. In that 50- to 100-logical qubit range, we will be able to show advantage for different materials — modeling problems, for example — that are of strong scientific interest. You can explore these models with these quantum machines when you have enough logical qubits.

Those logical qubits, of course, have to be good enough. Just as with physical qubits, not all logical qubits are created equal. We need to drive down what we call the error rate on those logical qubits. As that error rate goes down and the logical qubits get better, we can do more complex calculations, and that determines, when we think about use cases, what we can show with these logical qubits. As that error rate gets better and better, we can do a more complex computation, which gives us access to more problems where we’ll be able to show that advantage. On paper, we work out where that line sits — what problems, and how big of a quantum machine you will need.

There’s another big piece of Azure Quantum, for example: It’s not just about running on the quantum machines but also being able to predict how big of a quantum machine we’ll need for a given problem. We want to understand how many resources will be required to provide this solution. We have what’s called the Azure Quantum Resource Estimator, and it’s a tool we use internally as well to explore, where are those use cases? When do they light up in terms of the size of quantum computer we need to do it? How many logical qubits will we need? How good will they have to be? We want to understand in advance the workloads we’re going to run on the machine.

We’ve spent well over a decade looking at those types of problems: the size of quantum computer needed and how you should design that quantum computer. That points, again, to the topological qubits. We have identified the workloads at scale. What are the scaled workloads we want to run, for example, in chemistry with, say, catalysis problems? We know what those problems look like. We’ve written them down. We’ve done the resource estimation in a lot of detail, where we have the whole algorithm written out and ready. That enables us to design the quantum computer such that it can light up and successfully run that solution. We’ve done that as we’ve designed our topological quantum computer and that architecture. We do it with the workloads in mind. That’s the same at level two. When we have logical qubits, we are designing that logical qubit system with the types of solutions we want to run or yield in mind.

Konstantinos Karagiannis: That was a great answer. They’ve all been great answers. I want to thank you for your generosity of time and the care you gave to these answers.

Before I let you go, I’m going to be including in the show notes a link to how people can sign up and try this amazing platform out for themselves. Is there a tip you want to give before we close the show — something they should try first?

Krysta Svore: If you’re just starting in quantum computing in the scientific space — this crossover with chemistry and quantum computing as well — Copilot in Azure Quantum is a great starting place. You can have an interactive discussion with Copilot and identify directions for your further discovery. That’s a great starting place for the more general audience.

Konstantinos Karagiannis: That’s great. Thank you. With that, I’ll let you get back to achieving utility at scale.

Krysta Svore: Terrific. Thank you so much for having me.

Konstantinos Karagiannis: Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts discussed today in case things got too nerdy at times. Let’s recap.

Microsoft is interested in accelerating scientific discovery with its Azure Quantum platform. It has built a way for quantum computers to interact with HPC and AI to solve challenging problems in materials science, chemistry, biology and more. A great example is a battery that uses 70% less lithium.

These projects are done in Azure Quantum Elements, and the offering has drastically reduced experimental time frames — for example, from six months to one week. Copilot AI makes the process even more powerful, letting researchers without quantum-coding experience get a head start on the best circuit approach to take or get help along the way. Copilot is available in other areas of Azure Quantum too.

To advance quantum computing in general, Microsoft has been working with partners on a few exciting experiments. To transmit quantum information from sensors or between quantum computers, it partnered with Photonic on quantum networking for a future quantum internet. This distributed entanglement started at 40 meters, but the goal is to have distributed quantum computing one day.

With partner Quantinuum, Microsoft helped create four logical qubits from 30 physical ones. These logical qubits have 800 times the fidelity of physical ones without error correction.

Let’s not forget that Microsoft is working on its own topological qubits, the so-called Majorana approach you might have heard of. The timeline for a working system is unclear, but as Krysta puts it, we may have these qubits and systems in years rather than decades. They’ll require very minor error correction, so they could hit the quantum world with an atomic impact.

While we wait for these fault-tolerant machines to come to life, you could try Microsoft Azure Quantum today using the link in the show notes.

That does it for this episode. Thanks to Krysta Svore for joining to discuss all things Microsoft Azure Quantum, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @Konstanthacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.
