Podcast | Quantum Computing in Particle Physics – with Dr. Harry Cliff

In 1981, Richard Feynman gave a keynote that proposed simulating physics with computers. We’ve come a long way with the resulting quantum computers, and you may have heard about business use cases for them. But how much progress has been made in using the machines to understand the universe? Who better to ask than Dr. Harry Cliff from the Large Hadron Collider? He discusses how quantum computers can simulate particle interactions or handle the mind-boggling amounts of data generated at CERN. We also dive into his new book, Space Oddities. Join Host Konstantinos Karagiannis for a chat with Harry Cliff from Cambridge and the LHCb experiment.

Guest: Dr. Harry Cliff from Cambridge and the LHCb experiment

The Post-Quantum World on Apple Podcasts

Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organisations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.

Subscribe

Read transcript


Harry Cliff: There’s a group of people now at CERN looking at how quantum computing could be applied to high-energy physics, and it’s quite a rich area already. A couple of big roadmap documents were published last year.

 

Konstantinos Karagiannis: In 1981, Richard Feynman gave a keynote that proposed simulating physics with computers. We’ve come a long way with the resulting quantum computers, and you might have heard about business use cases for them. But how much progress has been made in using the machines to understand the universe? Who better to ask than Dr. Harry Cliff from the LHCb experiment? He discusses how quantum computers can simulate particle interactions or handle the mind-boggling amounts of data generated at CERN. We also dive into his new book, Space Oddities, in this episode of The Post-Quantum World.

 

I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.

 

Our guest today is a particle physicist at the University of Cambridge working on the LHCb experiment. He’s the author of two wonderful science books, How to Make an Apple Pie From Scratch and the fresh-off-the-press Space Oddities. Harry Cliff, welcome to the show.

 

Harry Cliff: Great to be here. Nice to meet you, Konstantinos.

 

Konstantinos Karagiannis: Thanks. I have to confess, I’m a fan. I’m thrilled to have you on here. Your publisher sent me an early copy of Space Oddities, and it was just as readable and engaging as your first book. We’ll dive into ideas from those books in a moment, but I’d love to orient listeners at a high level and explain your work. Most people have heard of the LHC, but can you give a brief overview of what that machine is and explain why there’s a tiny little b appended to LHC?

 

Harry Cliff: The LHC is the biggest scientific instrument that’s ever been built. It’s just outside Geneva on the border between France and Switzerland, about 100 meters underground. It’s essentially a 27-kilometer-circumference ring, and its job is to study what the universe is made from. That’s ultimately the big question we’re all working on in one way or another at CERN. We’re trying to figure out what the physical world is made from, and we do pretty much the simplest and most brutal thing you can imagine, which is to take little bits of the world in the form of protons, which are subatomic particles, and you accelerate them round this ring until they’re going at 99.999999% of the speed of light, and then you crash them into each other, and you see what happens.

 

My experiment is one of four big detectors that are spaced around this ring. Essentially, it is a big, three-dimensional digital camera the size of an office building that records what happens in these collisions. In those collisions, lots of physicists are looking for different things. I am particularly interested in particles called beauty quarks, which is what the B in the name of my experiment stands for: the LHCb — Large Hadron Collider Beauty experiment.

 

These beauty quarks are very interesting because the way they behave, the way they decay, the way they interact with other particles can tell you about the fundamental forces and the fundamental interactions that exist at very short distances when you zoom right down smaller than an atom. That’s the very big picture of what we’re doing and what the LHC is. 

 

Konstantinos Karagiannis: It’s been generating news for a while — the LHCb, especially.

 

Before we step away from that, do you want to talk about the fifth force of nature and this potential mystery we might be solving?

 

Harry Cliff: The book, Space Oddities, that I’ve just finished working on is about anomalies. What an anomaly is in science is a result you didn’t expect. It’s some place where you observe something in the heavens or you see something in your lab that you can’t explain, that sticks out from your expectations. There are a bunch of anomalies in circulation, but there’s been a set of anomalies in the behavior of these beauty quarks emerging over the last decade or so — and not just at the LHC. Basically, we’ve seen these beauty particles behaving in ways you can’t explain with our prevailing theory of particle physics. We’ve seen some anomalies at LHCb.

 

There have also been hints of anomalies in other experiments. There’s an experiment in Japan called Belle, and there’s one in California — now defunct, but they’re still analysing the data — called BaBar. In essence, what has been seen is that these particles are decaying. When these beauty particles are created, they live for a very short period of time. They live about one and a half trillionths of a second, and then they decay, they turn into other particles, and there are lots of different ways they can decay.

 

Our theory, which is called the standard model, can be used to quite accurately predict how often different decays should happen or, in a certain decay, what angles the particles should go out at or how much energy they should have. Through a range of measurements, there are a whole set of these decays that don’t quite line up with what you’d expect from the theory. And what’s been quite exciting in the last decade or so is that theorists have found that you can explain all these anomalies at the same time if there is a new force of nature — a fifth fundamental force beyond the four we already know about — so it’s potentially a clue to what would be a big breakthrough in fundamental physics.

 

The problem with these anomalies is that the experiments are very complicated. There is always the possibility that some of them could be explained by an experimental glitch or a missed effect you didn’t account for. In fact, some of the anomalies have recently disappeared for that very reason. Also, it might sound strange, but sometimes, it’s quite difficult to calculate from your theory what you should expect because the theory is very complicated and the interactions are very complicated. It’s an unresolved story at the moment. We have these clues, and we don’t yet know whether they are evidence of something exciting and new or whether it’s something we’ve misunderstood in our experiment or our theory, but we’re working very hard to try and figure that out.

 

Konstantinos Karagiannis: You did a great job in both books talking about this idea of minimising the uncertainty or error. We’re not going to go deep into the statistics of it, but this idea of, you want to be able to get down to, let’s say, five sigma to show that it’s significant — a one-in-a-million chance or something that you’re wrong. Can you talk about the headaches involved with that? There’s a theme here — dealing with data, and then where we’re going with quantum computing and everything. Can you talk about how challenging that is?

 

Harry Cliff: I should explain the five-sigma thing for people who may not have come across this. In particle physics, this standard has emerged by which you can say you’ve made a discovery, and it’s called five sigma. If you have a set of data and you have a prediction from your standard theory, the data has to essentially move away from the theory by five standard deviations — five units of uncertainty.

 

Let’s say you’re measuring some property of a particle — mass or magnetism or something like that. That number would have to be five errors away from the prediction before you can say this is a bona fide discovery. The reason for that, essentially, is that in an experiment like the LHC, you have trillions and trillions of collisions that are analysed by thousands of people in different ways. In effect, you make thousands of measurements. Well over a thousand papers have been published using LHC data. That is a bit of a problem, because if you have a thousand experiments, just by statistical uncertainty, you will get some measurements that randomly will wobble away from your prediction by quite a large amount.

 

It’s a bit like if I gave you a coin and I said, “I want you to tell me if this coin is fair.” Let’s say you tossed that coin ten times, and you got ten heads in a row. Now, that’s a very unlikely thing to happen, and you might conclude that the coin is biased. But if I told you that I gave that same coin to a thousand people and you were the only one who got ten heads in a row, you would go, “OK, maybe this is just luck. I was the one-in-a-thousand lucky person who got this result.”

 

The reason you have five sigma is, at that point, when it’s five errors away, the chance of a measurement wobbling away from the theory by that amount is less than one in a million. We don’t do millions of experiments, so that means it’s very likely to be real. But that’s just statistical uncertainty. That’s this random wobble that you get in a coin toss or a dice roll.
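
To put numbers on the five-sigma standard, here is a minimal sketch using ordinary Gaussian statistics. The figure of roughly a thousand measurements is simply the ballpark from the discussion above, used for illustration.

```python
# Minimal sketch of why particle physics demands five sigma.
# Standard Gaussian statistics; the "1,000 measurements" figure is the
# ballpark from the conversation, used purely for illustration.
from scipy.stats import norm

# One-sided probability that a single result fluctuates >= 5 sigma above
# the prediction by chance alone (about 3e-7, i.e. well under one in a million).
p_5sigma = norm.sf(5)

# Chance that at least one of ~1,000 independent results shows a >= 3 sigma
# fluke by chance alone: the "one lucky coin-tosser in a thousand" effect.
p_3sigma = norm.sf(3)
p_any_3sigma = 1 - (1 - p_3sigma) ** 1000

print(f"P(single result >= 5 sigma by chance): {p_5sigma:.2e}")      # ~2.9e-07
print(f"P(some result >= 3 sigma in 1,000):    {p_any_3sigma:.2f}")  # ~0.74
```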

 

But there are also other kinds of problems — what we call systematic biases — that are effects in your experiment you haven’t properly accounted for. This is exactly what happened in an experiment I work on. We were comparing how often these beauty quarks decay into two types of particles. We were comparing how often they decay into particles called electrons, which are probably familiar to your listeners — they’re the things that go around atoms — and then another particle, called a muon, which is basically the same as an electron. It’s just a bit heavier.

 

These are two very similar particles, and you expect the decay to electrons and the decay to muons to happen at the same rate. The problem is that the detector we have is much better at seeing muons than electrons. Muons are easy — they go through the whole detector. They leave a big electrical signal — no problem. Electrons scatter, and they’re messy, and they’re easy to confuse with other particles.

 

When you make your measurement, you can’t just count how many electrons and how many muons you see, because you’ll see a difference, but the difference is due to your detector not being perfect, and you have to account for that. And if you get that wrong, then you can get a biased result.
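
To make the correction Harry is describing concrete, here is a toy sketch. The efficiencies and counts are invented for illustration; they are not LHCb values.

```python
# Toy illustration of an efficiency correction. All numbers are invented;
# they are not LHCb measurements.
true_decays = 10_000        # suppose equal numbers of decays to muons and to electrons
eff_muon = 0.90             # detector is good at seeing muons (assumed value)
eff_electron = 0.60         # ...and worse at seeing electrons (assumed value)

seen_muons = true_decays * eff_muon          # 9,000 observed
seen_electrons = true_decays * eff_electron  # 6,000 observed

raw_ratio = seen_muons / seen_electrons      # 1.5: looks like an anomaly
corrected_ratio = (seen_muons / eff_muon) / (seen_electrons / eff_electron)  # 1.0

print(f"Raw ratio (biased by the detector): {raw_ratio:.2f}")
print(f"Efficiency-corrected ratio:         {corrected_ratio:.2f}")
# Get the efficiencies slightly wrong and the corrected ratio is slightly
# wrong too: that is the kind of systematic bias being described.
```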

 

I suppose the biggest job of any experimental scientist is chasing down those systematic biases, because these detectors we work on have millions of sensors. They’re incredibly complex devices. The way particles interact with them is very complicated and not totally understood from a theoretical point of view — you can’t simulate it perfectly. You have to use a combination of simulation, data-driven techniques and physical intuition to try to unfold all these kinds of biases before you can then finally make your measurement. And that is a very long process.

 

People are often surprised how long it takes us to produce scientific papers. Papers can take years. The result I worked on most recently, I started working on in 2015, and it wasn’t published till 2021. It takes a very long time to develop these results because you have to take into account all these effects before you can say with confidence that you’ve measured something properly.

 

Konstantinos Karagiannis: That’s certainly longer than any quantum computing paper I’ve worked on, although I can relate to the errors. We see that — the machines are prone to errors, and we have to account for that, and we have to do multiple shots and then compare what the most likely answer was. All these worlds converge.

 

And you’ve been there since 2008, when the machine switched on. That’s basically when you got started, and you were involved in the early days writing some of the code, and ever since then working with data. You’re no stranger to massive amounts of data. I’d like to get your thoughts on what it’s like dealing with that much data — and we’re going to, of course, now turn our attention for a moment to quantum computing. It’s a lot. You couldn’t even capture everything that comes out of the LHC.

 

Harry Cliff: That is true. This is probably a bit out of date, but I remember doing some research for a talk I was giving at a data science conference. This was almost a decade ago now, but I worked out how much data the LHC produces each year — not what we record, but what comes out of the detectors every year. If you recorded all the data the LHC produced in a year, it would exceed by orders of magnitude all the data that’s ever been produced by the human race or ever recorded by the human race, including all internet traffic, all telephone conversations. It’s a vast flood of data. It’s a big challenge.

 

The first thing that has to happen is, you cannot record it — it’s just not possible. In real time on the experiments, there is a decision-making algorithm called a trigger that basically looks at each collision as it happens and goes, “Is this interesting? Is it likely that something interesting has happened in this collision?” And if it thinks it’s interesting, it will record it or pass it to the next level of decision-making. But most of the time, it’s just thrown away immediately. We keep only about one in 100,000 collisions or so. Just to give you a sense, you’ve got 40 million collisions, interactions, every second in all four detectors. And that happens 24 hours a day, seven days a week, about nine months of the year, with a few little breaks in between.

 

That gives you a sense of the vast amount of data. That’s one immediate challenge. I suppose one of the scary things about these triggers is, you design them to record things you hope or expect to see, but there’s always a chance you just code it wrong, and you lose the signal you’re after because it gets thrown in the bin and it’s lost forever, or there are things you didn’t expect. It could be that there’s some exciting new signature we just never thought of, and our triggers aren’t designed to recognise it. That’s one challenge.

 

But practically, from my point of view, as we’ve accumulated more and more data, one of the big challenges is the time it takes to process it. If you want to change the way you’re processing your data or analysing it, running back over the entire data set can take months now because the data sizes are so huge.
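
As a rough back-of-envelope check on the rates Harry quotes above (40 million collisions per second, roughly one in 100,000 kept, about nine months of running a year), here is a minimal sketch. The figures are the round numbers from the conversation, not official CERN statistics.

```python
# Back-of-envelope arithmetic for the trigger rates quoted above.
# These are the round numbers from the conversation, used only as illustration.
collisions_per_second = 40e6             # ~40 million collisions per second
keep_fraction = 1 / 100_000              # trigger keeps roughly 1 in 100,000
seconds_of_running = 9 * 30 * 24 * 3600  # ~9 months of running per year, very roughly

recorded_per_second = collisions_per_second * keep_fraction
recorded_per_year = recorded_per_second * seconds_of_running

print(f"Collisions kept per second: {recorded_per_second:,.0f}")  # ~400
print(f"Collisions kept per year:   {recorded_per_year:,.0f}")    # ~9 billion
```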

 

The other big challenge — and this is maybe where quantum computing will come in in the future — is in simulation. I talked about unfolding these biases from the data. One of the main tools we have for this is large quantities of simulated data where we simulate the entire experiment from the quantum mechanical interactions of the protons in the collision all the way through how the particles interact with each sensor in the detector.

 

At the end of it, what you get out is data that looks as close as we can make it to what came out of the real experiment. You simulate all the sensors in the detector, the electrical signals, how they’re processed, and the simulation is put through the same chain as the real data. It’s pretty good. It gets pretty close to what the real data looks like, although there are some areas where we know it doesn’t quite get things right. But as you have larger and larger data sets, you’ve got to deal with these systematic biases to a greater and greater level of precision, and that means more and more simulation.

 

Then it’s, how do you get the processing power to generate that simulation? It’s very CPU-intensive, and then storage is the other problem. This is increasingly the thing we run up against. It’s not so much experimental-hardware problems; it’s computing problems that are the biggest challenge, particularly as we’re now moving into an era where the LHC is going to get upgraded and will record data at orders-of-magnitude-faster rates than it has in the past. This is only going to get worse as we go forward.

 

Konstantinos Karagiannis: You have that whole balance of time where you have to spend some time actually getting results and some time improving the machine. It’s a trade-off that happens.

 

A few thoughts on what you said: This idea of triggers sounds a little bit like classifiers. If you’re looking for fraud or whatever, you would run classifiers, and there is always the risk that you don’t fine-tune them and you throw things out. I hope maybe one day quantum can help with that initial trigger classification and do a better job of finding where that gray area, that granular line, actually is, and then with dealing with all the information you actually get.

 

Feynman, when he came up with the idea for quantum computing, he was thinking along the lines of exactly what you said — simulating reality. It is a quantum reality, and, in some ways, you’re going to run out of resources the minute you try it. You mentioned how many particle collisions there are and where those particles can go. That’s a natural fit.

 

It’s been frustrating to me over the years that so few scientists even gave a thought to using a quantum computer one day to do that level of it, because it’s where it all began. Are there folks right now at CERN giving thought to following the path of quantum development and seeing how that simulation could go on?

 

Harry Cliff: There’s a group of people now at CERN looking at how quantum computing could be applied to high-energy physics, and it’s quite a rich area already. A couple of big roadmap documents were published last year. I was mugging up on them partly as preparation for this podcast.

 

The thing that seems interesting from my point of view is, one of the biggest challenges we face at the LHC has to do with the fact that we’re colliding protons. Protons are not fundamental particles. They’re made of particles called quarks and gluons. You can think of a proton as a messy bag of particles, and when you smash them together, those quarks and gluons interact in a complicated way, and then stuff comes out, and the particles that come out are also not fundamental. They’re usually other particles made of quarks and gluons. And the theory of quarks and gluons, which is called quantum chromodynamics, Feynman helped to develop back in the ’60s and ’70s.

 

The problem throughout its history is that it’s very difficult to do any calculations with, because in a lot of the regimes you want to apply it in, it’s a nonlinear theory. It’s what’s called a strongly interacting theory. You can’t use the traditional technique we use in quantum mechanics, called perturbation theory, where you essentially break your interaction up into a series of terms that get smaller and smaller, and then you can ignore the small ones and calculate the leading-order terms. That doesn’t work in quantum chromodynamics.
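
In schematic form, the perturbative expansion Harry is describing writes a quantity as a power series in the coupling strength; the sketch below is generic, not a specific QCD formula.

```latex
% Schematic perturbative expansion: an amplitude A written as a power series
% in the coupling strength \alpha.
\[
  A \;=\; A_0 \;+\; \alpha\,A_1 \;+\; \alpha^2 A_2 \;+\; \alpha^3 A_3 \;+\; \dots
\]
% If \alpha is much smaller than 1, each term is smaller than the last and the
% first few terms give a good approximation. In quantum chromodynamics at low
% energies the coupling is of order one, so the terms do not shrink and the
% series cannot simply be truncated, which is why lattice methods (and,
% potentially, quantum simulation) are needed instead.
```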

 

The solutions we have at the moment are incredibly CPU-intensive. They involve breaking spacetime into a lattice and solving it at particular points in spacetime. Then you have to put it through a massive supercomputer. Long story short, one of the biggest challenges we have in extracting physics information from our data is, we don’t understand how quarks and gluons behave. One of the interesting applications of quantum computing could be new ways of simulating the theory of quarks and gluons so you can get a much more accurate and much more efficient calculation of the properties of these particles.

 

If you could do that, that would be revolutionary, because what we’re interested in is not these messy things made of quarks and gluons. We’re interested in the fundamental interaction that happened right in the center of the collision, and this would allow us to unfold that from the data. That is very exciting — particularly for my own work on some of these anomalies. One of the problems with them is that we don’t understand the theoretical prediction, because it involves quarks and gluons, which you can’t calculate very easily. Anything that helps us with that will make a big difference.

 

Of course, on the experimental side — I’m not an expert, but people are looking into the use of quantum computing for classification, but also for reconstruction of the data. There’s a step before you get to analyse the data: What the actual data looks like is, you’ve got this detector with millions of sensors, and what you have is hits in the detector — hits here in these sensors with this much energy deposited or this much light in this detector. That unprocessed information has to be turned into a statement that this series of flashes of electrical energy is a particle going in this direction with this momentum. You fit a track through those dots in your detector.
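
As a purely illustrative toy version of that reconstruction step (not an LHCb algorithm), the sketch below fits a straight-line track through noisy hits left in a series of detector layers.

```python
# Toy track reconstruction: fit a straight line through noisy "hits" in
# successive detector layers. Real experiments use far more sophisticated
# pattern recognition in 3D with magnetic-field bending; this is only a sketch.
import numpy as np

rng = np.random.default_rng(42)

# Detector layers at known z positions; a particle crosses them on a roughly
# straight path x = x0 + slope * z, leaving one smeared hit per layer.
z_layers = np.linspace(0.0, 1.0, 10)       # layer positions (arbitrary units)
true_x0, true_slope = 0.1, 0.8             # hypothetical true track parameters
hits_x = true_x0 + true_slope * z_layers + rng.normal(0.0, 0.01, z_layers.size)

# Least-squares straight-line fit recovers the track parameters from the hits.
slope_fit, x0_fit = np.polyfit(z_layers, hits_x, deg=1)

print(f"Fitted track: x = {x0_fit:.3f} + {slope_fit:.3f} * z")
```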

 

There are also people looking into the use of quantum computing to speed up that process, where, rather than using classical algorithms to do this, you could potentially do this more quickly using quantum algorithms. But it’s still an early stage of development, this kind of work. But they are looking, as I understand it, at near-term noisy quantum computers of the type that are available or are going to be available in the next few years, rather than waiting for much-larger-scale quantum computers where you don’t have to worry about noise in the same way.

 

Konstantinos Karagiannis: You have to run it a lot of times, and you guys are used to that. You run things, as you said, all day. The reason the LHC is buried so deep is, can you imagine what the neighbors would be listening to if it wasn’t — a racket 24 hours a day? It’d be too much.

 

Harry Cliff: It’s very quiet because it’s in a vacuum. The reason it’s underground is quite interesting. People often assume it’s because it’s radioactive or dangerous, but it would have been incredibly expensive to buy 27 kilometers of land to do this. You’d have to knock down people’s houses and build over farms. That’s why, essentially, it’s underground. Underground, no one has anything going on once you’re below 100 meters.

 

Konstantinos Karagiannis: There’s a wonderful elegance — this idea that quantum computing, which we are able to do because of our understanding of quantum physics, might yield a new understanding of quantum physics in the end. I love that. There’s something wonderful and cyclical about that, like the LHC. And there’s other stuff going on that might feed into quantum computing. There’s cryogenic research, other things like that. You might find new ways to improve devices there that can feed out. There are papers being published, I’d imagine, also on those more technical aspects besides what particles are discovered and things like that.

 

Harry Cliff: I don’t know to what extent CERN is developing actual quantum hardware. It certainly has a lot of expertise in cryogenics. The LHC is the biggest cryogenic facility in the world, but it’s quite specifically for cooling superconducting magnets down. One adjacent area is using quantum sensors in cosmology experiments.

 

There’s a project a few of my colleagues are involved in at Cambridge. It’s a gravitational-wave detector that uses atom interferometry. This is essentially using entangled atoms that get sent down two paths of an interferometer. And you can use this to detect dark matter and gravitational waves from the early universe. And this is all based on quantum sensing.

 

There’s a collaboration going on between high-energy physics and very cold physics, very low-temperature physics, which traditionally have been quite distinct areas, but they’re coming together to build these new sorts of instruments. It’s quantum — not quite in the same sense as quantum computing, but quantum sensing is another big area that is going to have huge impacts in the future.

 

Konstantinos Karagiannis: We’ve covered it a few times because it does have real-world uses already. It’s amazing. One day, there’ll be a new detector for gravitational waves buried in yet another bayou somewhere, I’m sure, where they’ll be detecting these kinds of things. That’s a very specific reference — you have to read the books to get that.

 

Your writing is clear and remarkable. You have a way of explaining these complex topics clearly, and you bring readers places they couldn’t physically go. In Space Oddities, in just a few paragraphs, you bring us to the beginning, the big bang, and then inflation after. I thought it was well done. And then I remember in Apple Pie, you brought us into the pseudo-mind of the protons that are about to be smashed together and how they would have been immortal otherwise and all that. That was incredible. When you think of these visual ways of getting across information as a science communicator, what’s your process like? Is it something like exploring the daydreaming of thought experiments? Is it some similar place you go, like a liminal world where you conjure this?

 

Harry Cliff: Sometimes these ideas just come to you. That passage you’re describing from the first book is, for people who haven’t read it, where I described the first collision at the LHC from the point of view of the protons. And that came from a project I’d worked on many years before. I used to work at the U.K.’s national Science Museum, and we did an exhibition about the Large Hadron Collider. I was one of the curators of that show, and we spent a lot of time discussing with artists and designers how we should structure the exhibition, because we decided we wanted to take people on a trip to CERN.

 

One of the ideas that emerged was, as a visitor, you’re the proton. You follow the ring of the LHC from the proton’s point of view. In the end, we didn’t go down that route because we thought it’d be too weird, but that idea was lingering in the back of my head and somehow just came out when I was writing. But I have quite a visual imagination. When I’m solving problems in physics — particularly in my research, but also when I was an undergraduate solving algebraic problems — usually it was about coming up with a mental picture and an intuition for what was happening that allowed me to solve the problem. It starts that way, and then you have to codify it in mathematics.

 

That’s the way I’ve always thought, and that’s where some of the imagery in the books comes from. But another big aspect of it is, I try to get across not just the abstract physics but also, what is it like being in a scientific workplace? Some of the places people work are extraordinary — astronomical observatories on mountaintops. They’re — well, if you don’t have to be there 30 nights in a row on cold evenings — incredibly romantic places if you’re just visiting for a night to write a story down.

 

These are spaces most people don’t get to experience. Relatively few people get to go to CERN and go underground and see the LHC. I do try to get some of the magic and excitement of these places across in the writing as well. It’s a mixture of that — what it’s like as a scientist versus also trying to find ways of visualising what’s going on. Quite often, the physics itself is abstract, and it’s difficult to get your head around unless you can come up with a clear mental picture of what’s happening.

 

Konstantinos Karagiannis: I got to spend some time at Fermilab. I was there because of the qudit work they’re doing, which is like a qubit but with more than two states. And when I read your description, I was, like, “He totally nailed it. That’s exactly what it’s like there.” You definitely have that ability when it comes to that thought experiment idea. You also work in actual experimentation, so you bridge those worlds. It’s not just like dreaming up things, like Einstein riding on a beam of light or something. You’re actually down in the trenches, too, at the same time. Any thoughts on the importance of those two worlds coming together — that practical experimentation, and the theoretical?

 

Harry Cliff: Theory and experiment have to work together. They’re two sides of science. One can’t function without the other. There was a particular period in the 20th century when that kind of thinking worked: people like Einstein sitting in a patent office, imagining what it’s like to sit on a photon and coming up with deep truths about the universe. But now, the theoretical problems are much more difficult, and it’s very hard to make progress because so much of the low-hanging fruit has already been plucked.

 

A lot of theorists now work much more closely alongside experiments. They’re listening to what their experimental colleagues are telling them and then coming up with ideas that then feed back to us as experimentalists, saying, “Have you tried looking for this particular signature or in this particular place or building this kind of experiment?” The two are constantly interacting with each other.

 

It’s also interesting — in particle physics, in high-energy physics, where you’re working on these huge projects that have thousands of people on them, there’s a high degree of specialisation now. Back in the mid-20th century, if you were an experimental physicist, you would have designed and built your experiment and analysed its data, and you might have done it in collaboration with two or three other people. Now, because these experiments are so vast, it’s not possible for any one person to have a complete understanding of the whole thing.

 

You have a general overview, but my job, broadly speaking, is as a data analyst. I essentially work on the data after all the hard work of building the experiment, making it run, processing it, has been done, and people contribute in various ways. And then you have people who are real experts on electronics, how to read data out from these detectors, or real experts on how to build photon detectors. It’s now very highly specialised. You have this stratification. It’s not just theory and experiment. You’ve got hardware people, software people, data analysts.

 

Even between theory and experiment, you have this new discipline called phenomenology, which is taking theory and using it to make precise predictions about what happens in an experiment. It’s phenomenologists who have the most to do with experimentalists. Then, beyond that, you have the high-theory people who work on fundamental quantum field theory or string theory or whatever, who are several steps removed from the experimental world. The interaction of those two worlds is absolutely crucial.

 

There is a possibly dangerous idea that’s got around in the last few years, this idea of post-empirical science. It comes particularly from people working on quantum gravity, which is the ultimate goal of fundamental physics — getting close to a complete description of the fundamental workings of the universe. One of the problems with quantum gravity is that there is precious little data on it, because the regimes in which the effects of quantum gravity become manifest are so extreme that they can’t be created in the lab or any foreseeable lab we might be able to build. There’s been this suggestion that we can assess a theory’s validity based purely on principles of elegance or self-consistency or mathematical beauty. That’s fine to an extent.

 

Konstantinos Karagiannis: A slippery slope.

 

Harry Cliff: It’s a slippery slope. Ultimately, to be science, you have to be able to say, “Is this actually what the real world is like?” These two worlds have to work closely together.

 

Konstantinos Karagiannis: That brings me to my next question perfectly. You set me up for it. Many folks in quantum computing tend to be fans of the many-worlds interpretation. It’s not a surprise, because the practical father of quantum computing after Feynman’s idea would be David Deutsch. Let’s face it — he’s the one who first came up with a way to get this machine to possibly work. How do you feel about Everettian branching or other aspects of multiverses, including the idea that we’re in the right one for our fine-tuned numbers to be correct, and in other ones, they’re not? Do any of these ideas feel necessary to you, like they’re out there?

 

Harry Cliff: I’m agnostic about interpretations of quantum mechanics. I’m not a theorist, so these are not things I spend a lot of time worrying about or thinking about. I do think quantum mechanics — there are deep mysteries about it. But the point at which I become interested in an interpretation of quantum mechanics is the point at which it says something about experimental physics that is different from some other interpretation.

 

Whether you believe in many worlds or wave-function collapse, it makes no difference in any experiment we can conceive of, as far as I’m aware. From that point of view, it doesn’t matter very much. If you get to the point where one of these theories is going to tell us something different about the universe, then fine. But I don’t know. Many-worlds is a fun idea, but creating an infinite number of infinitely branching worlds just to understand why a wave function arrives at a particular point on a screen seems to me a little bit of overkill. But I don’t have a strong opinion about it.

 

I’ve usually heard the anthropic argument applied not to quantum many-worlds but to other sorts of multiverses. You have this thing called the inflationary multiverse in the early universe, which is where you have this exponential period of expansion before the big bang. The big bang is essentially where part of this rapidly expanding spacetime — the field that’s driving this inflation — decays and creates a spray of high-energy particles, a fireball, and the expansion slows down in this little bubble.

 

The idea of inflation is that if this process is happening all the time, you get bubbles forming throughout this expanding space, and you get multiple universes, essentially. In some versions of this, when a big bang happens, the laws of physics get set up differently. If you believe in string theory, it’s because extra dimensions get squashed up in a different way each time. You can then explain the fact that we live in a universe that seems very nice and conducive to life based on the principle that there are lots of uninhabitable universes, but we live in the one where the conditions are right.

 

Again, my general attitude to multiverses — and I go on a bit of a rant about this in the first book — is that the problem with the multiverse is, it’s a get-out-of-jail-free card for any problem that you come across — particularly, problems that are to do with fine-tuned fundamental constants or fine-tuned laws of nature that seem to be set up to create the universe we live in, because you can just say, “It was the multiverse.” And it’s not a very helpful response, because it shuts down the discussion. It says, “There’s not a problem to solve here. It just happened that way by dumb luck, and we’re here because that’s what happened.”

 

Maybe that’s true. Maybe there is a multiverse. Maybe it explains features of our universe. But we have to exhaust all the other explanations before we go to that point of view because the problem, ultimately, with the multiverse is, like many-worlds interpretations of quantum mechanics, it’s not testable. If you accept it, you’re taking something on belief or faith. And at that point, you may as well say, “God set it up that way” — or whoever your favorite deity is.

 

Konstantinos Karagiannis: Every once in a while, I do see someone come up with something that feels almost testable. Over this past year, I’ve seen a couple of approaches where they’re, like, “Maybe if we see this, it’ll prove some kind of multiverse.” I don’t know if we’re going to get to that. But you are potentially looking for a fifth force. You’re looking for other anomalies.

 

There’s weirdness going on with the Hubble tension. Do you want to talk about that, and this potential crisis and all that, before we close out, if you have anything to say there for listeners? It gets the idea across that maybe something is detectable with a more refined machine in the future that would let us get a hint, like gravitational waves or the cosmic microwave background. Maybe there’s some other thing we’ll pick up on to prove inflation or something.

 

Harry Cliff: There are anomalies in particle physics. But there’s one big anomaly in cosmology. There are several, but there’s one big one, which is, as you alluded to, the Hubble tension. In short, what the Hubble tension is, it’s a disagreement over how fast space is expanding.

 

There are two different ways of getting at this in cosmology: One is based on traditional methods where you look out with your telescopes at the universe. You look at distant galaxies. You measure the distances to them, which is very difficult and complicated because it’s very hard to tell whether a galaxy is big but far away or not so big and close. This problem of distances is a fundamental challenge in cosmology. But you measure the distances as best you can. You measure how fast they’re moving, and you do that using the Doppler shift — the same effect that makes a siren sound higher-pitched when it’s moving toward you or lower-pitched when it’s moving away.

 

Then you can measure the expansion of space as a function of distance, and that gives you this number called the Hubble constant, which is, essentially, how fast should a galaxy be moving away from me based on its distance? That’s one way of doing it. It’s called the direct, or local, method.

 

The other way you can do it is to look at the cosmic microwave background, which is the faded light of the big bang, to analyse the patterns that exist in that microwave radiation and use that to infer the properties of the very early universe, the universe as it was 400,000 years, roughly, after the big bang. Then, using that knowledge of the early universe, you can use general relativity and our theory of cosmology to run the clock forward to the present day and predict, effectively, what the expansion rate should be.

 

These two numbers — one based on the local current universe and one based on the early universe extrapolated forward — don’t agree. They now disagree by quite a big amount, by over five sigma. This tension has been simmering away for about a decade now, and every time people have challenged it and said, “There’s probably a mistake in how we measure distances, for example, in the local universe,” people have drilled into it, and the tension has just got bigger.
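
For readers who want the numbers, here is a minimal sketch of Hubble’s law and of how the tension is quantified in standard deviations. The two values and uncertainties below are approximate, commonly quoted figures used only for illustration; consult the literature for current measurements.

```python
# Minimal sketch: Hubble's law and the Hubble tension expressed in sigma.
# The H0 values below are approximate, commonly quoted figures (local
# distance-ladder vs. CMB-inferred), used purely for illustration.
import math

H0_local, err_local = 73.0, 1.0   # km/s/Mpc, local (distance-ladder) estimate
H0_cmb, err_cmb = 67.4, 0.5       # km/s/Mpc, inferred from the early universe

# Hubble's law: recession velocity grows linearly with distance, v = H0 * d.
distance_mpc = 100.0              # a galaxy 100 megaparsecs away
print(f"v at 100 Mpc (local H0): {H0_local * distance_mpc:,.0f} km/s")

# Tension in units of the combined uncertainty (standard deviations).
tension_sigma = abs(H0_local - H0_cmb) / math.sqrt(err_local**2 + err_cmb**2)
print(f"Tension: {tension_sigma:.1f} sigma")
```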

 

It’s a perplexing problem. There are many explanations on the market, but the reason people struggle so much with the Hubble tension is that none of the explanations are that compelling. One explanation is that there was a form of dark energy — some unknown, repulsive force that existed in the early universe but then disappeared, conveniently, in the late universe, in the universe we live in now. Other ideas, at the more radical end, are that our theory of gravity needs to be revised. Other ideas are that we happen to live in a void in space — an area in the universe that is underdense, where there’s less stuff than there is elsewhere, and that would explain why we see it apparently expanding faster than it ought to be, because we live in a weird place.

 

There are all these kinds of ideas around, and no one knows what the right explanation is yet. But of all the anomalies I talk about in Space Oddities, the Hubble tension is the one that, to me, looks the most compelling. Whether it’s going to lead to some fundamental shift in our understanding of the universe remains to be seen, but it seems unlikely now that it’s an experimental mistake. It’s been around for such a long time. I would be quite surprised now if it turns out there’s an error, because there are so many measurements and techniques, and they all pretty much agree. It’s either something fundamental and exciting, or maybe it’s something to do with the fact that we live in a weird bit of the universe and we’re just a bit unlucky.

 

An assumption in cosmology is this thing called the Copernican principle, which is the idea that we don’t live anywhere special, so our bit of the universe is representative of the universe as a whole, and the universe is isotropic and homogeneous — it looks the same in every direction — and it’s got the same density and properties in every direction. If that’s not true, then it screws you because it means it’s very hard to figure out the bulk properties of the universe as a whole. Maybe that’s the explanation. I’m not a cosmologist, so I’m not qualified to pass judgment, but definitely as someone who’s dug into it as an outsider, it’s interesting.

 

And it’s not the only anomaly. There’s this other weird one called the sigma-8 anomaly, which is the fact that the universe isn’t as clumpy as it should be. There’s less structure in the universe than there ought to be. Stuff has collapsed less under gravity than you’d expect based on our cosmological theory. It seems to suggest that there is something missing from our understanding of the cosmos as a whole. That’s an exciting place to be, because the last time we had something like this, it led to the discovery of dark energy. We could be in for a similar breakthrough in the next few years. But it’s going to require more data and more clever theoretical ideas to unravel exactly what’s going on.

 

Konstantinos Karagiannis: You heard it straight from Harry. If you’re into any big ideas like this in physics, which I imagine quite a lot of my listeners have to be, you have to check out Space Oddities. It was such a great book. I devoured it. I got it early, and I was, like, “Yay!” I couldn’t wait to rip it apart.

 

Harry Cliff: Thank you so much.

 

Konstantinos Karagiannis: Thanks again, Harry. I could talk to you probably for a few centuries, but we’ll have to limit it to this for now. Thank you. This was terrific.

 

Harry Cliff: It was a real pleasure. Thanks for having me on.

 

Konstantinos Karagiannis: Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap. The Large Hadron Collider, or LHC, is the biggest scientific instrument ever built. Harry Cliff analyses data for the LHCb experiment, which focuses on a specific type of particle called a beauty quark. The team is investigating beauty quarks and their decay to understand the asymmetry between matter and antimatter, potentially revealing new physics beyond the standard model.

 

When looking for an anomaly that rises to the level of a breakthrough in experiments, you need to detect data that moves away from theory by five standard deviations. This five-sigma standard requires a lot of refined work to ensure that no errors have crept in. Running quantum algorithms has many parallels, as we’re used to contending with noise or errors all the time. In short, you’re looking for million-to-one odds that you’re wrong to add weight to the claim that you’ve got the right answer.

 

The LHC generates more data in days than every hard drive on Earth could store, so a lot of complex computation is used. Triggers carefully select which data to record, and then massive horsepower analyses the results. This amount of data will only grow with upgrades to the machine. Quantum computing might help manage this in the future.

 

Harry points out some other use cases in physics. The most notable is simulation, which goes back to Feynman’s 1981 keynote. In the case of the LHC, simulating collisions can help better identify biases in handling the data from real experiments. The hope is that quantum computers will do a better job simulating and tracking what should happen after a collision. These simulations make it easier to analyse real collision results. There’s a group of scientists at CERN looking at other ways quantum computing could be applied to high-energy physics. See the show notes for a paper link.

 

Harry’s new book, Space Oddities, goes into his LHCb work and some of the other anomalies scientists are looking for. The Hubble tension is one example, where there’s a discrepancy in the rate of the universe’s expansion. If we extrapolate from the big bang and the cosmic microwave background radiation, we expect one rate of expansion, but direct observation of distant galaxies shows a much faster rate. You can learn about that, the alleged fifth force of nature, and many examples of how handling data and science leads to amazing discoveries. Space Oddities is available wherever books are sold starting March 26, 2024.

 

That does it for this episode. Thanks to Harry Cliff for joining to discuss his new book, Space Oddities, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.
