Transcript | A Million Photonic Qubits Nearly US$1 billion in funding poured into the quantum computing industry last year. One company, PsiQuantum, received about half of that! What are they building? Nothing less than a photonic quantum computer with a million qubits. How does this machine compare to trapped-ion and transmon approaches from the competition? And, more importantly, how soon could this quantum computing behemoth be available to help change the world? Join host Konstantinos Karagiannis for a chat with Terry Rudolph from PsiQuantum. Guest Speaker: Terry Rudolph – Co-Founder, PsiQuantum Konstantinos Almost a billion dollars in funding poured into the quantum computing industry last year. About half of that money went to one company, PsiQuantum. What are they building? Nothing less than a photonic quantum computer with a million qubits. Find out how soon this machine could change the world in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era. Our guest today is the cofounder and chief architect of PsiQuantum. I’d like to welcome Terry Rudolph to the show. Thanks for coming. Terry Hi, Konstantinos. Thanks for having me. Konstantinos You guys have been making a lot of buzz in the industry for a while, so I’m super excited to finally have you on. One thing just happened in the industry recently, so I hope we can touch on it. It might color some of where we go today. Last week in Nature, there were three papers published on super-high-fidelity qubits in silicon, 99.87% fidelity, which is pretty close to error-proof, and I know that’s the approach of your company, so hopefully we can touch on that.
While other companies have started with smaller NISQ machines, you’re going for the gold here. You’re going for the grail — a million qubits, boom, straight out of the gate. Do you have a rough timeline or road map of when that kind of horsepower is going to appear? Terry We have taken a bit of a contrary stance. Five or six years ago, when we started the company, maybe it was possible to justify targeting NISQ machines, the noisy intermediate-scale quantum machines, which is a bit like building a calculator instead of a universal programmable computer. We decided we really wanted to build something that could have a high societal impact, so we wanted a fully programmable machine. Yes, that’s shooting for the Moon. When you’re trying to get to the Moon, you have two choices: You can start building things up incrementally — go off and invent concrete, build a building, then a skyscraper, and try to get a bit closer to the Moon. Our approach is very different. It was just to say, “Well, we’ve got to go off and work on how to build the rocket engine.” That’s what we’ve been doing. Our rocket engine is the quantum equivalent of what a GPU would be for a classical supercomputer — a GPU with some optical fibers attached to it, this kind of thing. In order to build that GPU, we, right from the start of the company, said, “Since we’re going to need a million qubits, we are only going to build everything in a 300mm top-tier semiconductor foundry.” This is the kind of place that builds chips for laptops and cell phones, so that we can stare down those numbers of “Yeah, we’ve got to build a million-qubit machine.” Right now, we’re at the point where, by the middle of this decade, we will have that core module. Then, at that point, we’re in a similar situation to building a data center or a classical supercomputer: you then have to assemble those modules, millions of them, to make the final machine.
So, we’re talking middle of the decade to have that core module fully manufactured in the semiconductor foundry. The other thing to say about that is, a million qubits sounds scary, but if you work with these top-tier foundries, they’re building machines with billions of components — trillions, in fact — all the time, so it becomes much less scary. Konstantinos So, you mean within two years, we might have a million qubits from your machine? Terry No, within a couple of years, we will have the modules, which then, essentially, have to be wired together, a bit like a data center — a thing from which we can essentially scale arbitrarily far in terms of how many qubits we build. Konstantinos Will this be a modular approach? The University of Maryland, seven or eight years ago, was kicking around the idea of making these little five-qubit modules and daisy-chaining them. Are you envisioning something like that? Terry It’s similar, yes, because photons can be very easily daisy-chained — we have this amazing technology called optical fiber, which is the way I’m speaking to you right now. We don’t have a lot of problems getting the quantum states out of the atomic qubits into the photons and vice versa, so, essentially, that’s what we’re doing. Konstantinos Like interconnect within one big machine is the idea — interconnect on steroids in one machine? Terry It’s a massive networked machine. Konstantinos Do you know, roughly, how many qubits would be in each of these proposed modules? Terry Yes, I do, but I don’t think I should be telling you. It’s not like we only need a handful of those modules — we still need a large number of them. We’re still building a very big machine, and I think any realistic technology has to stare down the fact that they’re building something that is going to be a large machine that consumes a lot of power — data power and cooling power and so on. Konstantinos So, that gets the module, let’s say, by 2025 or something?
Then, how many years, do you think, before they’re all married into this monster? And do you consider, at any point, rolling out an interim machine that’s half a million qubits or something? Anything like that? Terry We certainly have engineering proof points internally — we know we need to build systems with larger amounts of integration. Those are primarily hard engineering, not the quantumness of the devices. The assembly of the final machine — well, to some extent, we’ve started now. It’s not like we’re going to wait until we have the module and only then get going. Now, we’ve got to pour some concrete. We’re getting ready for the assembly now. It’s a long and expensive process to do this for classical supercomputers, and that’s where you can go off and hire an engineering team, which I’ve already done. There’s always a bit more uncertainty when you’re doing something which no one has ever done before. At the same time, the components — everything we’re doing — are standard semiconductor components. So, other than the cooling that we have to do — we have liquid helium being pumped around this thing — I think we have a pretty fair idea of what we’re facing. Konstantinos Yes, and Microsoft seems to believe in it too. That generated quite some buzz. What kind of error-correction ratios are you looking at? The first idea of a million qubits was tossed around because of some pretty heavy error-correction ratios. IBM was kicking around numbers like “You might need a thousand physical qubits to generate one logical one.” Was there a number your team was shooting for, or started to see evidence of maybe being the path forward? Terry Yes, we always targeted a million qubits right from the start. That was the number that we felt was the right ballpark to have in mind.
IBM will say, “We need about a thousand qubits per logical qubit,” and then it can be very confusing, because other companies will come out — you will hear trapped-ion approaches saying, “Well, we only need four qubits for three logical.” Konstantinos To give you like three logical — yes, that’s such a big difference. Terry So, the problem with this is that there’s just a whole bunch of apples-to-oranges comparisons being done, along with a whole bunch of obfuscating hype, to put it bluntly, and there are many details that get swept under the rug when you just use a single number, like the error-correction ratio. What we really need to know is, how many gate operations can you execute with your logical qubit? How many gates can you run? How fast are those gates? How many of those logical qubits are accessible to the user, as opposed to being helper qubits that are required to run the computation? The more meaningful way to think about the error-correction ratios is to say, “How many physical qubits are we going to need to be able to finish a quantum computation — a few billion operations on a few hundred logical qubits — within a few hours?” You don’t want to be running for more than a few hours. A few billion operations, a few hundred logical qubits. And that’s the smallest size of quantum computation we need in order to do stuff that’s genuinely useful — stuff we know for sure can change the world. It’s thinking about numbers like that. That’s where IBM’s number of 1,000:1 comes from. It’s a very similar ratio on our machine, and so, if you put that together, on the order of a thousand logical qubits, you’re going to need a million physical qubits. This 4:3 ratio, on the other hand, is not going to provide logical qubits that are anywhere close to being able to do a billion gate operations. In fact, I think we need tens of billions of gate operations.
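Terry’s back-of-envelope sizing can be written out as plain arithmetic. The numbers below are the ballpark figures from this conversation, not PsiQuantum specifications:

```python
# Ballpark figures quoted in the conversation (illustrative only).
logical_qubits = 1_000            # "on the order of a thousand logical qubits"
physical_per_logical = 1_000      # ~1,000:1 error-correction overhead
logical_ops = 1_000_000_000       # "a few billion operations"
budget_seconds = 3 * 3600         # "within a few hours"

# Physical qubits follow directly from the overhead ratio.
physical_qubits = logical_qubits * physical_per_logical

# Finishing on time also fixes the logical gate rate the machine must sustain.
required_gate_rate = logical_ops / budget_seconds

print(f"{physical_qubits:,} physical qubits")            # 1,000,000
print(f"~{required_gate_rate:,.0f} logical gates/sec")   # roughly 10^5 per second
```

The second number is the part the 4:3 claim glosses over: an error-correction ratio alone says nothing about whether the machine can sustain billions of logical gate operations inside a few hours.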
So, for trapped ions, the true-usefulness ratio could be much worse than 1,000:1, because their gate operations are very slow — maybe a hundred to a thousand times slower, just in the time it takes to even do one operation on a physical qubit. Their equivalent of our 1 million–qubit machine could be 100 million trapped-ion qubits. This is why, of course, you don’t hear them saying very much about that kind of thing. Konstantinos Yes, that’s a scary number considering what they have now. When you describe it this way, it makes me think of what Super.tech’s doing right now with the SupermarQ benchmark — this ability to run different quantum computers through actual applications to see how they perform and compare. They even consider things like reset times — how long it takes to get up and running for another shot or whatever. That is all important. It sounds like this machine would easily achieve quantum advantage. If you just do the math, we’d be in a territory where we can’t simulate it, but it would still be short of the 2,500 you’d need for attacking, let’s say, Bitcoin, or 4,000 for the start of RSA, and then up, depending on the size — somewhere in that sweet spot. Of course, if you built a few of them and connected them, maybe overnight we do have a juggernaut. It is possible to chain these together. Terry Yes. And those applications that you bring up — attacking Bitcoin, which I think means attacking the communication protocols in Bitcoin, or attacking RSA — I’m not an expert on security, but I’m not keen that the first quantum computer gets used for that kind of thing, for things that are of dubious societal value. And the things that we think the several-hundred-logical-qubit quantum computer will be able to solve are, fortunately, not like that — they’re things that are important for health and climate change and material science and so on.
Fortunately, those will be the things that we use the first quantum computers on. Also, there are options for the classical security world to start using technology which is immune to quantum computing attacks — in fact, they’re ramping up to doing that now. Hopefully, by the time quantum computers are big enough — when they do have several thousand logical qubits — we won’t need to worry about them. Konstantinos Yes, a lot of customers do come worried about that exact thing: When these machines are a little more mature, are we immediately in danger? And I tell them the benefits will come way before. Optimisation will be revolutionised before cryptography is attacked. Machine learning will get better, and that’s the space where we can’t even predict how much of an impact it’s going to have. And security uses machine learning too, so it might be better for smart networks and things like that. When you see research like those Nature papers and everyone’s talking about low error rates, do you think a machine built on one of those technologies might be similar to the quest for a topological quantum computer? Do you think there are still other, lower-error-rate routes to pursue? Terry Those numbers look very good, but the reality is, those numbers are the numbers you need to hit in order to build the million-qubit machine. So, if you have a gate fidelity of 99.99%, you still have a failure in one in ten thousand operations, which means you can’t do 10 billion gates — you just get out noise. So, we need to hit gate fidelities — and with photons, we can — of 99.99% just to be able to build that million-qubit machine with several hundred logical qubits. Don’t get confused here: Unless they hit 99.999999999%, they’re going to have to use error correction, and they’re going to end up using roughly that 1,000:1 ratio of physical qubits per logical qubit.
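The fidelity arithmetic here can be checked directly. Assuming independent errors per gate (a standard simplification, not a statement about any particular hardware), the chance a whole computation survives is the fidelity raised to the number of gates:

```python
import math

def success_probability(fidelity: float, n_gates: int) -> float:
    """Probability that every one of n_gates operations succeeds,
    assuming independent errors at rate (1 - fidelity) per gate."""
    # Work in log space so enormous gate counts underflow gracefully.
    return math.exp(n_gates * math.log(fidelity))

# A 99.99% gate fidelity survives about 10^4 gates with reasonable odds...
print(success_probability(0.9999, 10_000))           # ≈ 0.37
# ...but the ~10 billion gates of a useful algorithm are hopeless without
# error correction: the survival probability underflows to zero.
print(success_probability(0.9999, 10_000_000_000))
```

This is why even very good physical fidelities still lead back to the roughly 1,000:1 error-correction overhead Terry describes.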
Konstantinos The founders of your company have all published a lot of papers — it’s pretty impressive — and you’ve been building toward this odyssey for a while. Do you anticipate doing something soon just to show where you are in the process, like how your qubits are faring? Terry There’s debate as to what extent we should or shouldn’t get into that game at some level. Once you start that, it can become a big distraction, so we’re very focused on not deviating off the critical path. Along the way, if it’s a minor deviation to just build some large thing that looks impressive, we will do that. The thing that we’re building really comes from going off and making everything manufacturable — it’s boring engineering if you’re a physicist, at some level, but we’re already making thousands of wafers of components — hundreds of millions of optical components, much more than every physics lab has ever put together — and we’ve got it right there on a wafer. That kind of stuff is very cool if you’re an engineer or a physicist who’s into having to manufacture this stuff, but it would be a bit more difficult to convince the editors of Nature that “this is something worth making a song-and-dance about.” Konstantinos You have to also consider IP at some point. At some point, it’s not just science anymore. You’re not just publishing papers for papers’ sake — you want to have an edge. How much do you want to give away? I definitely can respect that. You’ve hinted at this, but how would you say your approach compares to the other big two — trapped ion, transmon? What kind of early indicators do you have that make you go, “This is why it’s better, clearly”? Terry In our machine, we use photons, which are particles of light, as our basic qubit. The nice thing about photons is, they don’t feel heat. So, in principle, this is the only way we know that you can one day get to a room-temperature quantum computer, because the photons don’t feel heat.
Something which is a bit more subtle but important physics — and isn’t a priori obvious if you’re not a scientist — is that photons are the leading way to do very high-speed, very clean measurements. Photons, obviously, move fast — they move at the speed of light — but it’s also that you want to be able to do a measurement on them extremely quickly, and you can do your measurement on photons much faster than with any other technology. The thing about measurements, and why this is important — well, you probably know that measurements in quantum theory are infamous for being random, and they cause this fundamentally uncontrollable collapse of the wave function. They feel like a very problematic piece of quantum theory. But every quantum-computing approach has to use measurements, because despite the fact that they’re random and uncontrollable, they’re the only good way we have of removing entropy from the system. Entropy is noise, and noise is the thing that’s going to cause our quantum computer to start spitting out nonsense. If you look at what, say, IBM or Google is doing, they take four or five qubits that are close to each other and do a four- or five-qubit measurement on them to remove entropy, and they repeatedly do that, and the qubits that they have are not destroyed by the measurement. The whole computation is really just repeated measurements on ensembles of four or five qubits. But with photons, measurements are destructive. You can measure them really quickly and get the answer really quickly, but unfortunately the process of doing it actually destroys the photon, so that kind of architecture just doesn’t work for us. There’s a different approach, a different architecture, that we follow, which allows us to use two-qubit, or two-photon, destructive measurements instead of the many-photon nondestructive measurements.
We have a proprietary architecture which allows us to use those very fast two-qubit measurements as a way of encoding and driving a fault-tolerant quantum computation. It’s a very different approach at a fundamental, physical level. Konstantinos Are we drifting here into the fusion-based quantum-computation approach? Terry Yes. The fusion I’m talking about here is not fusion as in energy production. Konstantinos Yes, of course. Very different headlines to be generated. Terry Very different. I don’t want to be associated with those. The word fusion for us has to do with the process which merges, or fuses, quantum states, and it’s actually something which is very easy to get photons to do. I hope you’re in a room with windows: if you look at a window and you see a partial reflection of yourself in the window, what you’re seeing there are photons that have gone from you to the window, and then some of them transmit through the window and some reflect back. Now, imagine there’s a photon coming from the other side of the window that happens to be identical to the one that’s reflected off you — it’s got the same color and the same shape and polarisation and so on. The two interfere together in the window, and that process is the key part of what we call fusion of the two photons. Our whole architecture is based on that kind of interference — we build it onto semiconductors, but it’s the same thing as the window: the photons can come along and then see a window and go off.
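The window picture Terry paints is two-photon interference, known in quantum optics as the Hong–Ou–Mandel effect. The sketch below (plain Python, and only the textbook effect, nothing about PsiQuantum’s proprietary fusion architecture) expands what happens when two identical photons hit a 50:50 beamsplitter from opposite sides: the amplitude for them to exit in different ports cancels, so they always leave together.

```python
import math

# 50:50 beamsplitter convention: input modes a, b map to output modes c, d as
#   a† -> (c† + d†)/√2,   b† -> (c† - d†)/√2.
# Represent each operator polynomial as {(power of c†, power of d†): amplitude}.
s = 1 / math.sqrt(2)
a_out = {(1, 0): s, (0, 1): s}
b_out = {(1, 0): s, (0, 1): -s}

# One photon in each input: expand a† b† |0,0> as a product of polynomials.
state = {}
for (m1, n1), amp1 in a_out.items():
    for (m2, n2), amp2 in b_out.items():
        key = (m1 + m2, n1 + n2)
        state[key] = state.get(key, 0.0) + amp1 * amp2

# Convert operator powers to Fock amplitudes: c†^m d†^n |0,0> = √(m! n!) |m,n>.
amplitudes = {k: v * math.sqrt(math.factorial(k[0]) * math.factorial(k[1]))
              for k, v in state.items()}

for fock, amp in sorted(amplitudes.items()):
    print(f"|{fock[0]},{fock[1]}>  amplitude {amp:+.3f}")
# The |1,1> amplitude is zero: identical photons never exit in different ports.
```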
Then, much like when your eye absorbs a photon and you see whatever it is — the photon gets destroyed in the process of being absorbed in your eye — in our fusion-based approach, that’s what happens as well. That’s why we had to design an architecture which doesn’t rely on keeping all the qubits sitting there in a nice, static array in some kind of cryostat — an architecture that can deal with the fact that the photons are moving really fast and get destroyed. But the nice thing is that the gates that you do with them are as simple as building a piece of glass. It’s a simple technology in terms of getting the photons to interfere with each other. Konstantinos Would this have any applications in quantum networks also? Terry You were asking earlier, “What’s different about our approach?” One thing is the manufacturability and things like that. Another is, we get that networking for free, because optical fiber is this amazing quantum memory, if you want to think of it that way, where you can put a photon in and send it over very long distances, or for very long times from the perspective of a photon. Our architecture is already essentially a networked machine, and we can imagine extending that to bigger and bigger networks. If, on the other hand, you were building your qubits out of ions or superconducting qubits, your best bet is probably to try to take that state, convert it into photons and then put it into an optical fiber or something similar — which is what we call transduction, and it’s a pretty tricky technology. There are other big differences at an engineering level. Because the photons don’t feel heat, we can put electronics right on top of the photonic chip, and we don’t have to keep the photonic chips super isolated, so we can have a billion transistors sitting right next to our qubits, and things like this. There are other things that we do already which are very large-scale from the perspective of quantum computing. Konstantinos That becomes a bit of a wiring nightmare otherwise.
If you can’t have anything right on the qubit — I don’t want to imagine a million wires plugged into this refrigerator or something. Yes, that was a really good explanation. So, it’s still early, and you’re building this to interface. Are you already giving thought to the other layers of the stack for interfacing with this — what kind of SDKs, maybe even a simulator for now, just so you can start to see what it might be like to address these qubits one day? Terry There are software tools and books and things out there which can really help anyone learn quantum computing, and I would encourage people to do it. I have a book myself: Q Is for Quantum. The first part of it is free — it’s at QIsforQuantum.org. That teaches up to quantum algorithms. From my book or from other books, you can get the abstract mathematical understanding of how a quantum algorithm works and where this power comes from. What’s not so often talked about is that when it comes to a real-world quantum computer, the compiler depends very much on the hardware. This is true in classical computing as well — I don’t know if you remember when Apple rolled out the M1 chip, which is built on an ARM architecture instead of an Intel architecture, and everyone suddenly had to go and reprogramme stuff and recompile and all this kind of stuff. Konstantinos Or run within compatibility layers — yes. And that introduces hits to performance. Terry Exactly. So, you can go off and learn the very high-level stuff from my book or some other book, but when it comes down to working out, “How will I compile the problem I want to solve for a particular quantum computer?” it really depends on the hardware, and because our fusion-based approach is really a different instruction set, very different from the other approaches, we can’t just rely on some other quantum algorithm company doing that.
And so we have an internal team that basically works at that compiler level and then works with potential end users — with customers and industries that we’re interested in having use the quantum computer — to basically work out, “You want to solve a problem of this size? Here’s the size of photonic quantum computer that you will need. Here’s how long it will take,” and all of this stuff. So, yes, it’s, in some sense, not possible to be just a 100% pure hardware company, because no one else can actually compile for your machine. That’s why we have a bunch of people coming to work with us on those kinds of problems. Konstantinos Do you envision then building it all the way up the stack, or at some point handing it off? Like ColdQuanta, for example — I think they got to the point where they said, “We can do this much, but when you want to access our machine, you’re going to use Qiskit, because at the end, you’re still going to interface with that.” Terry No, we’re going the whole way up, because it turns out that there are a lot of optimisations that you can take all the way up to the algorithmic level. To some extent, people live in this fantasy world where they think that quantum computing will be like classical computing, where the software companies ate the hardware companies’ lunch. It’s just not going to happen when you have a scarce resource, like the hardware of a quantum computer. Why would I let some other software company go off and get the IP that I can get by solving these problems myself? Konstantinos So, how do you envision access to it, then? Right now, including one of your big investors, Microsoft, they have their cloud environments where you can go and access other hardware. Do you envision being available on Azure Quantum or Amazon Braket or anything like that, or just a direct-to-you kind of interface?
Terry Until we have many quantum computers and an overproduction of hardware, which will be a long way down the track, it’s not going to be something in which you just dial in and get access to a computer. That’s very unlikely to be the case. Konstantinos So, you envision some kind of usage model for customers, like a subscription or something like that? Terry They will need to work with us — or with any hardware company, really — to get that compilation right, because the machines will be small, and one extra logical qubit can make a huge difference in what you can run. Konstantinos Double. It’s double the power. Terry Exactly. So, it’s just not going to be the thing where you can hand a grad student a login key and they’ll go off and solve this, for a long time. There’s not going to be a glut of quantum computing power. Konstantinos Yes. I’m already calling it the bottleneck. I do see someone’s going to say, “We came up with this perfect way to do fraud detection.” It’s, like, “Great. Now, do it in real time.” You can access a machine once a week for two seconds — good luck with that. We have to work out those availability issues. I was going to ask you if you had any set use cases you were hoping to work on, but with the range of what you’re going to launch — it sounds like you can work on anything, really. Terry As a company policy, we’re most interested in applications that we think we can tackle with a small-scale quantum computer but that have the maximum possible positive societal impact. When you look at that intersection, you’re talking about things to do with healthcare, climate change, energy, that kind of thing. Then you can broaden out from there. That’s why we have the applications team, which works with customers on looking at different quantum algorithms, and we target end users who have potential applications that we think could be really important, where we can get results that shift the needle in certain areas in which society will really benefit.
And we’re less interested in the security things you talked about. Once you’ve got a lot of computing power, there are plenty of other things you can do to monetise it. But I’m a scientist, not a businessman, and I’m not personally motivated to build a large company for the sake of it. Konstantinos Changing the world is a good goal, so we’ll allow you to have that one. With that said, I appreciate your coming on and sharing all this. I’m excited to see this machine come to fruition. I’ve been thinking about it a lot lately, and it’s fun to finally get to talk to you. I’ll be sure to put info on your book and everything in the show notes too. Terry, thank you. I appreciate it. Terry Thanks, Konstantinos. Konstantinos Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap. PsiQuantum is working on a photonic quantum computer that will launch with a million physical qubits right out of the gate for quantum gate operations. Even with error correction, this could yield a thousand logical qubits that would prove advantageous to almost any type of use case. PsiQuantum is hoping to apply the machine to some noble, impressive uses too, including climate science. The photonic approach they’re taking may be very low in error when put into practice. The founders have been working on optical quantum computing basics since around 2003 and are now taking lessons learned to a new level. By focusing on a modular approach, PsiQuantum will be able to build multiqubit modules that can be optically connected inside the box to make a machine with a large qubit count. This may be extendable on a larger scale too. Unlike transmon qubits, photonic qubits are already optical. It should be possible to connect them over fiber without any loss in fidelity. Imagine the possibilities of connecting a few PsiQuantum machines together one day.
We’re still a couple of years away from the first modules being completed, but the company’s already planning how it will make maximum use of the technology by building the full software stack right up to the end-user level. We’ve already seen a lot of performance gains from proprietary stacks in the industry, such as Qiskit Runtime from IBM, so this is not surprising as an approach. That does it for this episode. Thanks to Terry Rudolph for joining to discuss PsiQuantum, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on Twitter and Instagram @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow ProtivitiTech on Twitter and LinkedIn. Until next time, be kind, and stay quantum curious.