Transcript | Financial Index Tracking via Quantum Computing — with Multiverse

Using cardinality constraints for portfolio optimisation opens the doors to new applications for creating innovative portfolios and exchange-traded funds (ETFs), all while providing better returns with less market risk. Host Konstantinos Karagiannis recently co-authored a paper on portfolio optimisation with Sam Palmer from Multiverse Computing. Find out how the team was able to outperform classical financial index tracking using D-Wave's Hybrid Solver… and a little ingenuity. Also, learn about Multiverse's innovative Singularity software tool.

Guest: Samuel Palmer — Multiverse Computing

Konstantinos: My guest and I recently coauthored a paper on portfolio optimisation. Find out how we were able to outperform classical financial index tracking using D-Wave's Hybrid Solver and a little ingenuity. You may even hear Schrödinger's cat make a few background noises in this episode of The Post-Quantum World. I'm your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we're helping companies prepare for the benefits and threats of this exploding field. I hope you'll join each episode as we explore the technology and business impacts of this post-quantum era. Our guest today is the head of financial engineering and quantum developer at Multiverse Computing, Sam Palmer. Welcome to the show.

Sam: Hi, Konstantinos. Thanks for having me.

Konstantinos: Full disclosure: Sam and I have known each other for quite a while now, and we've been working on a project together. We have a paper out on arXiv that I'll link to in the show notes, but before we get to all that, you have a cool background, so tell us how you ended up in your current position.

Sam: I did my Ph.D.
in computer science, where I specialised in neural networks and customised hardware for derivatives-pricing problems. Then, after finishing my Ph.D., I was looking for the next big thing, and quantum was already becoming the next big thing, so I thought, "This would be a cool area to get into." I started teaching myself more about quantum computing and quantum physics, then combined that with my knowledge of financial engineering. This was when Multiverse was just starting to hire. I saw them coming out of their incubator in Toronto, and I was, like, "Let's take a chance and apply to these guys to get into this cool industry," and here we are today.

Konstantinos: It's a lot of fun hearing how people got into it. We just did an episode about becoming a quantum coder, because there were no quantum-coder tracks, so you have to assemble your own pathway. I always love to hear it. Thanks for sharing that. Of course, what you majored in is going to come in very handy with the types of things we're talking about today and the things you're working on. Let's talk a little bit about some basics here for listeners who don't know this. Let's talk a little bit about portfolio optimisation — just a quick explanation for people to understand and level-set before we dig in a little deeper.

Sam: The basic formulation of portfolio optimisation is, we start off with a set of assets. These are your stocks, and you have your returns on your stocks — how much money are they going to make in the future? — which you can get from forecasting or other estimations. Then, obviously, there's some risk involved with these assets. This is your covariance — how much the assets correlate with each other, and how much uncertainty there is in their price movements.
The idea of the traditional portfolio-optimisation problem is, we want to maximise returns while also minimising the amount of uncertainty in our portfolio and the amount of correlation between the assets in that portfolio. That's the plain and simple objective here. Then, obviously, you can start adding fancier constraints around these portfolios — about how you would like these assets to be constructed, and any other rules.

Konstantinos: Some of those rules could include things like how long you'd hold them, or what you expect to get back.

Sam: Yes. We have some previous work where we've looked into problems where we'll say, "You have to hold onto these assets for 30 days before you can rebalance your portfolio." That's more of a dynamic problem as well, with costs accruing as you go forward in time. These are complicated problems. Then, you also have other constraints where we might say, "We want to target the volatility of our portfolio." Maybe we say, "We don't want to maximise our returns as such, but we want to have 10% risk and then maximise returns with respect to that." This is some other work we've done with portfolio optimisation as well. Then, the other interesting set of constraints, which is very useful, is when you have different classes of assets. You may want to invest x amount in commodities, y amount in bonds. These are other constraints you can combine.

Konstantinos: When people hear about this, they have a hard time wrapping their head around what it means, because this has been handled with classical computers for a while, obviously, and it has its challenges and limitations. One of the limitations has been speed, if you're talking about a big data set. It could take dozens of hours to run one of these if the data set is large enough. Some of the first successes you had in the past were with speeding it up, correct? Getting it done quicker with an annealer?

Sam: Yes.
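The objective Sam describes, maximise expected return while penalising variance and correlation, is the classic mean-variance trade-off. A minimal sketch in plain Python with made-up numbers (none of these figures are from the paper):

```python
# Toy data: expected annual returns for three hypothetical assets
mu = [0.08, 0.12, 0.10]

# Covariance matrix of asset returns: diagonal entries are each asset's
# variance (uncertainty), off-diagonal entries capture co-movement
sigma = [
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.06],
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def objective(w, risk_aversion=3.0):
    """Markowitz-style score: reward expected return, penalise portfolio variance."""
    variance = sum(w[i] * sigma[i][j] * w[j] for i in range(3) for j in range(3))
    return dot(mu, w) - risk_aversion * variance

# Two candidate portfolios (weights sum to 1)
concentrated = [0.0, 1.0, 0.0]   # all-in on the highest-return asset
diversified = [0.4, 0.2, 0.4]    # spread across the three assets

# Diversification wins here: the variance penalty outweighs the
# small sacrifice in expected return
assert objective(diversified) > objective(concentrated)
```

A real optimiser would search over all weight vectors rather than compare two by hand; the point is simply that the covariance penalty can outweigh a higher headline return.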
When we come to these large dynamic problems, there are a lot of combinations. This is where it becomes very hard for classical solvers. The cool thing about quantum annealing is that when you find the ground state of these problems — when we turn them into a quantum optimisation problem — the runtime is effectively fixed: it doesn't grow with the size of your problem. That's the whole philosophy behind these quantum-annealing optimisation processes, so we don't encounter these scaling issues.

Konstantinos: That's important because, obviously, some data sets get so large that classical just can't do it, ever. Basically, it will take till the end of the universe to run it.

Sam: An important point is that we have very good classical solvers, but when you start adding in these discrete constraints, even the classical solvers become quite slow for what would be a reasonably sized problem. It's in these problems that the power of quantum comes in, because it's designed to handle these discrete-style problems.

Konstantinos: Yes. We brought all this basic knowledge to a recent project. It's a long-running project, so it took us about half a year to do. It's with the company that's named in the paper. If you read the paper, you'll see the company name. The paper is called "Financial Index Tracking via Quantum Computing With Cardinality Constraints." It rolls off the tongue like all paper titles — it's smooth, like butter. There's a lot to talk about here. When we started this project, the client wanted something a little different. They weren't concerned with speed, because they were doing things pretty quickly already. How did that hit you when we first began this? Right away, almost on day one, they're, like, "We do this in two minutes." Immediately, we had to start thinking outside the box. What's it like hearing that, and adapting and experimenting over time?
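The discrete problems Sam refers to are typically cast as a QUBO (quadratic unconstrained binary optimisation): each asset gets a yes/no variable, and the annealer searches for the lowest-energy configuration. A toy sketch with hypothetical coefficients, using brute-force enumeration in place of the annealer:

```python
from itertools import product

# QUBO coefficients for a toy 4-asset selection problem (hypothetical numbers):
# linear terms reward holding an asset (more negative = more attractive),
# quadratic terms penalise holding correlated pairs together.
linear = {0: -0.08, 1: -0.12, 2: -0.10, 3: -0.05}
quadratic = {(0, 1): 0.02, (1, 2): 0.09, (0, 2): 0.01, (2, 3): 0.04}

def qubo_energy(x):
    """Energy of a binary configuration x (1 = buy the asset, 0 = skip it)."""
    e = sum(c * x[i] for i, c in linear.items())
    e += sum(c * x[i] * x[j] for (i, j), c in quadratic.items())
    return e

# An annealer samples low-energy states; with only 4 binary variables we can
# simply enumerate all 2**4 configurations classically and take the minimum.
best = min(product([0, 1], repeat=4), key=qubo_energy)

# Asset 2 is dropped: its correlation penalties outweigh its return term.
assert best == (1, 1, 0, 1)
```

On real hardware, a sampler such as D-Wave's would return low-energy samples of this same objective; enumeration is only feasible here because the example has four variables.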
Sam: This whole project was a great story of how we started off with something simple and then progressed to this paper. Where we started off simple was, they obviously have this problem they could solve very fast, but they wanted to test: how good was quantum? Even if our results were of similar speed, or the speed-ups weren't significant compared with how fast they're already doing things, is quantum actually a viable solution? We were able to match their state-of-the-art convex solvers for solving these simple portfolio-optimisation problems, and the cool result for us was that, without any tuning of our optimisers, we were able to find better-quality results than they could with the convex solver, due to the way we were able to frame the problem in a discrete setting. We can attribute this to the convex solvers continuously following the gradient, so they keep making very small adjustments, whereas when the problem is set in a discrete manner, you're jumping around, so you're not getting caught up changing very small values all the time. You're almost always changing the things that have the most impact. This is where we saw a lot of value in demonstrating this discrete problem for the simple case. Then, after that, the story was, how can we push it further? This is where the interesting work began.

Konstantinos: Yes, exactly. As the title hints, we're going to get into cardinality constraints. Listeners should understand, too, that the applications here are for financial-index tracking. Did you want to talk about that for a moment before we get to cardinality constraints?

Sam: Index tracking is very similar to portfolio optimisation, except here the objective is to match the performance of an index. Typically, indexes are not directly traded.
You have to trade them through an ETF, where the idea is that these ETFs, or funds, buy all of the assets in the index, compile them, and sell you a small fraction of the product. You're getting the same exposure. The idea here is that we want to create a portfolio that uses these assets to track the same exposure as the index. This may be the popular Nasdaq 100, or the S&P 500, and then you've got the Russells, which go into the thousands of assets. This is also where, as the title suggests, we get into the idea of cardinality constraints, because if you want to replicate these indexes, you might not want to buy all 500 assets in the index.

Konstantinos: Historically, that's been difficult to do classically. Trying to match one of these has been a nightmare if you use a small subset. In the past, you had a speed-up. In this case, speed wasn't the issue — we wanted better performance. With this approach, in a nutshell, we were able to use a tiny percentage of the number of assets and get very similar performance. That's obviously very appealing. Did you want to talk about the numbers and the performance we achieved that way?

Sam: First, it's important to highlight that when we introduce the requirement of using only a small subset of assets, that takes the problem from what we would call convex — solvable with the very nice and efficient classical solvers we've just mentioned. The problem becomes nonconvex because we have to make these decisions: do I buy an asset or not? You can't decide this with a continuous variable. It has to be one or zero. This is where you then have to start relying on heuristic optimisation methods, which can be inexact, or slow, because you're having to evaluate your metric — your cost function — every time.
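The one-or-zero decision Sam describes is what makes cardinality-constrained tracking combinatorial. A toy sketch, with hypothetical returns, that picks exactly K of N assets so an equal-weighted basket tracks an index; brute force stands in for the quantum formulation used in the paper:

```python
from itertools import combinations

# Hypothetical weekly returns: 5 assets over 4 periods, plus the index itself
asset_returns = [
    [0.010, -0.005, 0.020, 0.003],   # asset 0 (moves much like the index)
    [0.012, -0.004, 0.018, 0.004],   # asset 1 (also index-like)
    [-0.030, 0.040, -0.010, 0.050],  # asset 2 (volatile, uncorrelated)
    [0.009, -0.007, 0.022, 0.001],   # asset 3
    [0.000, 0.001, -0.002, 0.000],   # asset 4 (barely moves)
]
index_returns = [0.011, -0.005, 0.019, 0.003]

def tracking_error(subset):
    """Squared error of an equal-weighted basket versus the index."""
    k = len(subset)
    err = 0.0
    for t in range(len(index_returns)):
        basket = sum(asset_returns[i][t] for i in subset) / k
        err += (basket - index_returns[t]) ** 2
    return err

# Cardinality constraint: hold exactly K = 2 of the 5 assets.
# Each inclusion is a yes/no decision, so the search space is combinatorial:
# C(5, 2) = 10 here, but C(500, 50) for an S&P 500 tracker.
K = 2
best = min(combinations(range(5), K), key=tracking_error)
assert best == (0, 1)  # the two index-like assets track it best
```

With 500 assets and a basket of 50, enumerating subsets is hopeless, which is exactly where the discrete quantum formulation earns its keep.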
The cool result in this paper is that we were able to use the power of quantum to formulate the problem in such a way that the quantum processor can solve this optimisation problem directly.

Konstantinos: Because we're doing this as an experiment that iterates over weeks as we refine it, there's a lot of hand-holding: we're there, guiding the customer through. What happens in the end? We end up building a tracking portfolio, and then, at some point, we have to enable the customer to do something with it. Do you want to talk about what happens at the end of such an experiment — what kinds of uses they could apply directly on their own, without us?

Sam: The results we saw, where we were able to, as you said, replicate these portfolios nearly exactly using just a small number of assets — we were able to replicate the S&P 500 very closely with only 50 assets, and the Nasdaq 100 with only 25 assets. This obviously has a great impact for teams who are building these ETFs. They're able to create these products with less management overhead, only having to buy and manage 50 stocks, manage 50 exposures. This makes the life of the fund a lot easier. Then, on top of this, we also looked at enhancing the portfolios. This means that we provide them the same exposure, or very similar exposure, to the underlying index, but we improve on its risk profile. In this way, we're saying you have the same exposure as the S&P 500, but you've got less risk in this portfolio of 50 assets than you would if you were to buy all 500. This has big implications for the customer. Now, they can offer this product with less risk to themselves, but still with similar exposure for the client.

Konstantinos: Yes, and some potential return. We outperformed the risk profile of the target index.

Sam: Yes. This is where we saw significant adaptability. We could customise a parameter that allowed us to change how closely we wanted to track the index.
If you wanted to track it very closely, we were still outperforming the risk profile by two times with nearly exact tracking. If you want to enhance the returns of the index as well, you'll say, "I want a better risk-return profile overall but still mimic some behaviour of the index." There, we saw we were able to get four times the risk-return profile of the index.

Konstantinos: Yes. That's obviously not insignificant. We went from a world where quantum portfolio optimisation was considered faster but not as accurate. Sharpe-ratio accuracy was down by 20% or so just a year ago, and now we're at a point where, who cares about speed? We're outperforming it. It's quite a different world. Where do you see this logically going as these machines get better? I know D-Wave's Hybrid Solver is going to get better soon. They're going to be upping its qubit count as always and improving its performance. What do you hope to see from the next iteration of the Hybrid Solver?

Sam: Sticking to portfolio-optimisation and financial problems, what I hope to see is that with more resources and better-quality annealing solutions, we can start to integrate more complicated constraints. This is what makes the problems interesting. At the moment, what we see with D-Wave is that if we have too many constraints, or we have to formulate the constraints using things called slack variables, we have to add extra variables that don't relate to the problem itself, only to the constraints. That makes finding the solution of the problem harder. What we hope to see is that if we can find ground states more reliably, we'll be able to satisfy the constraints more reliably and, with more resources, embed larger constraints in our optimisation problems.
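The slack variables Sam mentions are how an inequality constraint gets folded into the quadratic objective an annealer minimises. A minimal sketch (hypothetical sizes) encoding "hold at most K assets" as a penalty term:

```python
from itertools import product

# Constraint to encode: hold at most K of the N assets, i.e. sum(x) <= K.
# Annealers want an unconstrained quadratic objective, so the inequality is
# rewritten as an equality  sum(x) + s == K  with a slack value s >= 0,
# itself built from extra binary variables.
N, K = 3, 2
SLACK_COEFFS = [1, 1]  # two slack bits covering s in {0, 1, 2}

def penalty(x, slack, strength=10.0):
    """Quadratic penalty that is zero only when the constraint holds exactly."""
    s = sum(c * b for c, b in zip(SLACK_COEFFS, slack))
    return strength * (sum(x) + s - K) ** 2

for x in product([0, 1], repeat=N):
    # The solver is free to set the slack bits; the constraint is satisfiable
    # exactly when some slack setting drives the penalty to zero.
    best = min(penalty(x, slack) for slack in product([0, 1], repeat=2))
    feasible = sum(x) <= K
    assert (best == 0.0) == feasible
```

The slack bits enlarge the search space without describing the portfolio itself, which is exactly the overhead Sam is hoping better hardware will absorb.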
Konstantinos: A few times over the course of the project, I know the customer mentioned, "What's the actual cost of running something like this?" How would that compare in terms of everything — time, effort? Typically, what other benefits can someone expect from going with this approach instead of classical?

Sam: The first thing I'll say is a shameless plug: when we use the Multiverse Singularity optimisation packages, which provide a very simple Python interface for creating these optimisation problems, development time is minimal. If you have experience building an optimisation problem using any other coding package, you can use this, and it compiles the problem and sends it to a quantum machine for optimisation. With the tools we're developing in-house and providing to customers, the cost of development is minimal. Then, we're also seeing that, as you said, as the quality of the annealing solutions increases, you have to make fewer calls, or samples, to the annealer. We costed it out for the client: the total solution came to about $0.20 per run. Obviously, this is only going to improve. If they can get the result perfectly with one sample, it may only cost them $0.01. That's a real advantage.

Konstantinos: That is great. For the record, I was going to ask you about that software platform. We'll talk about it a little more. When someone engages with us to do something like this, they're getting a few things: they're getting this experimental approach where we work with them, we listen to their constraints, we build this, we show them numbers week by week. Then, eventually, we get to the point where we're delivering things. My team delivered a Power BI dashboard so they can observe all the results and analyse them, and slice and dice them as they want. Part of the deliverable is also a plug-in for interacting with this data.
Do you want to talk about what that all looks like — what you get in the end to play with after we leave the shop?

Sam: We provide multiple ways to use the tools. We have an interface we offer through an Excel plug-in. You can access the quantum optimisation just by opening up your spreadsheet, selecting your data with the plug-in, and then you can send it off without a care in the world, and you get back your results from the quantum optimisation. If you're more inclined toward the programmatic side, we offer programmatic access through our library, Singularity Portfolio Optimisation, where you can code up these problems in your own way if you want and then run them on the quantum hardware. We'll provide the base code for the solution we delivered. Then, you also have the power to modify these solutions, play around with them, and even play around with your own problems with the Singularity access that you get at the end of the project as well.

Konstantinos: If you had to rate the abstraction your tool provides, how would you rate it? Would you say it abstracts away a high, medium or low amount of the complexity? A lot of these tools and interfaces for coding in quantum either go down to something analogous to old-school machine language, or all the way up to something more human-readable. How would you describe it for those who haven't seen the tool?

Sam: The Excel plug-in is obviously at the high level. You don't even have to have programming experience to use it. You can just drag and drop, select your data, choose the product you want to run it through, and then it's sent off and you get your results back. On the programming side, it's still between medium and high. Even with minimal experience in programming, it's still very accessible.
We obviously provide basic examples to get you up and running, but the idea is that if you understand the basic concepts of programming and a little bit about your problem, you should be able to get up and going within a day or two. You can definitely get playing around with the problem straightaway, out of the box.

Konstantinos: What are your plans for the software tool? Do you envision offering access to it without what we did — the whole project and the experimental approach? Do you think it's going to stay, for the foreseeable future, as part of the deliverable?

Sam: This is definitely something we can deliver. We have one customer who's already accessing Singularity purely as a tool. It can be offered as a licensed tool for you to use in your company however you please, and you get access to the quantum machines through it. You have the programmatic interface. If you were to develop more specialised applications or plug-ins through Excel, that would be part of the experimental or development cost. But, yes, you can go straight for the licence to use the products. Another unique thing we offer, which you don't get with plain access to the quantum machines, is that alongside access to the quantum devices, we also offer tensor-network optimisation tools. These are quantum-inspired tools that run on classical machines, and we have some of the world's experts developing these algorithms. You have access to these through Singularity as well, so you can try out quantum versus quantum-inspired. Then, we also have plans to release quantum machine-learning libraries, so you'll be able to access all of the state-of-the-art quantum machine-learning models you see, but you don't necessarily have to be an expert in quantum, or even in quantum machine learning. This is our value proposition — you don't have to be an expert to use any of these tools.

Konstantinos: That's great.
I know they’re going to be available through cloud access, or would they have to be installed locally? Sam We can provide the solutions either way. The easiest package would be the cloud access, where all your data would be sent — all the data of your problem — but there’s no worries there. We use high standards of encryption and abstraction, so we never know anything about the problems which are being sent. Konstantinos It’s tokenised, in a way? Sam Yes. Then, any data risk, we shuffle it — we can encrypt it however the client needs. We can customise some of the security needs, if they were your worries. Then, yes, we can also do on-premises. We could build your version of the Singularity servers on your servers, where we would provide our code in Docker containers or however you would like to run it. Obviously, the code would be compiled — abstracted, in a way. Konstantinos Obviously, we work together — we’re going to work together on other projects too — but I want to make sure people had a good glimpse into some of these great things that your team is working on there. What are your feelings on quantum-inspired in general? This is something that not too many people talk about, but if you had a gut feeling, how is that going to go over in the next few years? Do you think quantum-inspired is going to take off and become this viable thing, or will it be eventually so overshadowed by exponential growth of quantum power? Sam I definitely think, over the next few years, quantum-inspired is going to continue to be part of the trend. We’re obviously seeing, even with big players like Microsoft, Microsoft is offering their quantum-inspired solvers where they’re using customised FGPA hardware to simulate quantum annealing. That’s similar with Toshiba as well. Konstantinos Toshiba, yes — the digital annealer. Sam Exactly. We have these big players already offering some of these classical solutions. 
The quantum-inspired algorithms are providing the gateway to quantum. I think you'll agree with me that, right now, gate-model quantum computers aren't quite there yet for getting real-world, valuable, reliable results. They're also very expensive to run if you were going to run them continuously on a problem with daily or minute-by-minute updates. The idea is that with quantum-inspired solutions, you can get the data processing that you might get from the quantum machine; maybe you're not getting that nanosecond speed, but you're getting a similar level of complexity in how the data is processed. Then, on top of that, you're going to see the trend towards a lot of these hybrid methodologies, where we'll use quantum-inspired to simulate part of what we want to do on the quantum processor, either because we can't quite do it yet, or because we want to reduce our errors by doing some of it classically with quantum-inspired and then putting it on the quantum machine to finish off the runs. This is the trend we're going to see: this shift between quantum-inspired and these hybrid methodologies.

Konstantinos: Thanks. I appreciate that input. It's a funny chicken-and-egg problem: you've got quantum-inspired trying to outperform quantum, and then it becomes this "Who gave birth to whom?" It's funny. I find that fascinating. Of course, hybrid is a big part of what we use. The Hybrid Solver is key to the work here. Part of it is still done classically, part of it's on quantum, and I anticipate that's going to be forever. These machines are always going to be paired up. If you had one hope for a dream future project, what would you say it is?

Sam: That's a tough one.

Konstantinos: Like, a customer asks a question in a meeting, and we're, like, "That's the one — that's the thing we want to work on." If you could add anything off the top of your head.

Sam: I'm trying to think about this.
We're getting quite close with some current projects we've had from clients, which look more into derivatives pricing. Obviously, that's a passion of mine from my background before I joined Multiverse. I like seeing these projects where we're trying to solve these hard partial differential equations using quantum algorithms and trying to innovate new ways to solve them. If a client came to us and said, "We want to solve this partial differential equation. We don't care how you do it — just do something crazy," that would be my dream project, where we could try some of these tensor-network methods, which may not have been tried before, or even try very experimental quantum Monte Carlo methods on all different types of platforms. That would be exciting.

Konstantinos: I was hoping for a good, nerdy answer. Thank you. Thanks so much for coming on. I encourage everyone to read our paper. It'll be linked in the show notes. It's been a lot of fun working with you, sir, and having you on here. I hope we get to do another project like this soon.

Sam: Likewise. Thank you so much for having me. Yes, I'm sure we'll see each other in the future. I'd love to come and actually meet you in person.

Konstantinos: Yes, I expect we're going to be hanging out before you know it.

Sam: Thank you so much.

Konstantinos: Now, it's time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let's recap. After completing a six-month project together, Sam and I coauthored a paper called "Financial Index Tracking via Quantum Computing With Cardinality Constraints." In the paper, we described a hybrid quantum-classical approach to building financial-index-tracking portfolios that maximises returns and minimises risk.
Running on D-Wave's Hybrid Solver, this approach builds investment portfolios that can generate the same financial returns as traditional portfolios with significantly smaller groups of stocks. Replicating financial indexes using a limited subset of assets, known as a cardinality constraint, has historically been an extremely difficult challenge for classical computers. How much smaller were the groups of stocks we used? Our Nasdaq 100 tracking portfolio used four times fewer stocks than the index, and our S&P 500 portfolio used ten times fewer. The quantum-built portfolio also significantly outperformed the risk profile of the target index, by up to 2x. This algorithm can be used for managing ETFs and reducing overhead costs for financial managers while helping keep fees low for customers. Multiverse has developed an Excel plug-in that makes it easy for users to run this algorithm without a programming interface. Multiverse also offers a version of its Singularity tool that allows for more coding flexibility. That does it for this episode. Thanks to Sam Palmer for joining to discuss this method of portfolio optimisation and Multiverse Computing. Thank you for listening. If you enjoyed the show, please subscribe to Protiviti's The Post-Quantum World and leave a review to help others find us. Be sure to follow me on Twitter and Instagram @KonstantHacker. You'll find links there to what we're doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you'd like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow @ProtivitiTech on Twitter and LinkedIn. Until next time, be kind, and stay quantum curious.