
    Hi, everyone. I'm Jay Gambetta, and I really want to welcome you to the 2022 IBM Quantum Summit. This is our signature event.

    We want to bring you the latest and greatest from IBM Quantum. We have a lot to share today. The team has worked really hard, and I'm very proud of what we've done and what we're going to show you.

    And of course, it's great to do this in person again. I like to start all my talks with the reason we're doing it. As you can see, this is the planet Earth.

    We have a lot of big problems. We have to solve climate change. We have to work out how we're going to continue to feed the growing population.

    We need to counter emergent diseases, and we need to work out how we're going to solve and manage volatile economies. These are big problems, but they're not impossible. For instance, if we can come up with better ways to do molecular simulations, we could discover new materials that could aid in carbon capture.

    Or we could come up with better ways to produce fertilizer. If we could work out how to do more with complex data, we could come up with better ways of managing financial problems and even work out how we could get drugs to people faster. We believe quantum computing will be useful in solving some of these problems.

    There's still a lot to do, but that is what our mission is. Our mission, as Dario said, is twofold. We want to make sure that we can bring useful quantum computing to the world.

    And we also, at the same time, want to make sure that we can make the world safe as we keep making advances in quantum computing. So where are we on that mission? Simply put, we're getting there. As Dario mentioned, last year, we said we'd break the 100-qubit barrier and launch the 127-qubit Eagle quantum processor, and we did that.

    We also said that in our software and services, we'd make it 100 times faster, and we did that with the release of the Qiskit Runtime. This year, we said we'd bring you the 433-qubit processor and dynamic circuits. And I'm proud to announce that we've accomplished both of these.

    We're going to take you through a walkthrough and explain these in much more detail today, along with ten more breakthroughs and announcements that I'm very excited to share with you. So to get started, I would like to invite Jerry Chow to the stage, who's going to take you through the announcements in performance. Thank you, Jay.

    Now, before I get to the big news about our Osprey processor, let's just take a step back and talk about performance again. Now, to remind us all, this is how we define performance, with three key metrics. First, we have scale, which we define by the number of qubits.

    Second, we have quality. We measure it using quantum volume, which is a widely adopted benchmark of circuit fidelity that we introduced to the world in 2017. And then third is speed.

    This is a measure of how fast our systems can actually solve a problem, and we use a metric that we define called CLOPS, or circuit layer operations per second. So first, let's start with scale and talk about Osprey. Now, we've been committed to delivering a new processor every year, and in 2019 we introduced the 27-qubit Falcon processor.

    In 2020, the 65-qubit Hummingbird; last year, over 100 qubits with Eagle. And now in 2022, we're introducing the world's largest and most advanced quantum processor yet again: the IBM Quantum Osprey, weighing in at a remarkable 433 qubits, over three times the qubit count of Eagle. And I'm going to invite my assistant Jay to actually show it to you all.

    So here it is. You can really see the size of Osprey and how far we've come from the days of two-qubit devices in the past. What you're looking at is actually the Osprey chip inside of the printed circuit board package.

    And it's really this large so that we can bring in all the signal wiring that we need to control 433 qubits, arranged in our heavy-hex lattice topology. We introduced this concept and technology of multilayer wiring last year with Eagle, and it performs the critical function of providing flexibility for the signal wiring as well as optimal device layout.

    And with Osprey, to push out to 400-plus qubits, we've had to make a lot of further advances there, as well as adding integrated filtering to reduce noise and improve stability. And yes, it's alive and being tested as we speak. Here's a connectivity map of this Osprey and a histogram of our coherence times.

    Now, much like many of our first generations of large birds, we find these coherence times at the moment between 70 and 80 microseconds median T1. Now, at 433 qubits, there's a lot to measure, so please stay tuned for further calibration updates in the months to come. So let's add that as our first breakthrough for today: 433 qubits with our latest processor, Osprey. Now I want to shift gears and talk about the second element of performance, quality. Our development roadmap and our new birds every single year drive the scale dimension.
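As a rough illustration of what a T1 time means in practice, a simple exponential-decay model relates circuit duration to how much excited-state signal survives. This is a toy sketch, not anything shown in the talk; the 10-microsecond circuit duration is an invented example value.

```python
import math

# Toy illustration of a T1 (energy relaxation) time.
# Assumption: simple exponential decay, P(t) = exp(-t / T1). The 70-80 us
# values are the median T1s quoted in the talk; the 10 us circuit duration
# is an invented example.

def survival_probability(duration_us: float, t1_us: float) -> float:
    """Fraction of excited-state population surviving after duration_us."""
    return math.exp(-duration_us / t1_us)

for t1 in (70.0, 80.0):
    p = survival_probability(10.0, t1)
    print(f"T1 = {t1:.0f} us -> {p:.1%} of the signal survives a 10 us circuit")
```

The takeaway is that longer T1 relative to circuit duration means exponentially less decay error to suppress or mitigate later.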

    But it's also critically important that we continue to drive quality improvements in the background. And we achieve that using our agile hardware development process, where we're effectively always moving: taking lessons learned from our highest-performing birds, carrying that through into our newest device revisions, and pushing them into larger and more advanced birds. For example, here's the state of the art that we had in 2021, with our Falcon and Eagle processors, with coherence times in the 100-microsecond range.

    We improved the coherence of Falcon when we went from our R5 revision to R8 by a factor of three. And we quickly were able to take that learning and feed it into a new revision of Eagle, engineering a new Eagle R3 with three times the T1s of our original first generation as well. And in fact, we work extremely fast.

    We already have a new revision of Osprey, just coming right off the experimental pipeline, where we're seeing significant coherence time improvements even on the subset of qubits that we've started to measure on this second-generation Osprey device. Now, this all comes together in that we actually need to have performance measured via quantum volume. And last year we introduced a new tunable coupling architecture, described in these research papers, that allows us to push our Falcon R10 devices to 10^-3 error rates for two-qubit gates.

    We typically refer to this as three nines in terms of the fidelity that we hit. Now, with Falcon R10, we were able to actually double our quantum volume not once but twice this past year, first at 256 and then again at 512 with our IBM Prague backend. So our innovations in tunable coupling architecture have allowed us to drive a 4x increase in quality, which is our second breakthrough for today.

    Now finally, let's talk about that third element, speed. And here the capacity to actually run a large number of circuits is absolutely critical for targeting quantum advantage as well as applications down the road. With the overhead of error mitigation added on top of that, and eventually error correction, speed is absolutely important looking into the future.

    Now, we set ourselves a really challenging goal this year: to go from 1,400 CLOPS to 10,000 CLOPS by the end of 2022. And we tackled this by increasing speed in three different ways.
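For context, CLOPS counts circuit layers executed per unit of wall-clock time. A hedged sketch of that arithmetic, assuming the (M × K × S × D) / time form from IBM's published benchmarking work; the elapsed time below is an invented example value.

```python
# Hedged sketch of the CLOPS benchmark arithmetic. Assumption: CLOPS is
# computed as (M * K * S * D) / elapsed_time, with M parameterized circuit
# templates, K parameter updates, S shots, and D layers (log2 of the quantum
# volume). The 70-second elapsed time below is an invented example value.

def clops(num_templates: int, num_updates: int, num_shots: int,
          num_layers: int, elapsed_seconds: float) -> float:
    """Circuit layer operations per second: total layers run / wall time."""
    return (num_templates * num_updates * num_shots * num_layers) / elapsed_seconds

# Benchmark defaults M=100, K=10, S=100; D=7 layers for a QV-128 system.
print(clops(100, 10, 100, 7, 70.0))
```

The point of the metric: holding the circuit workload fixed, halving the wall-clock time doubles CLOPS, so compiler, control-system, and runtime improvements all show up directly in this number.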

    First, by improvements to the Runtime compiler, the quantum engine, and our hardware control systems. In June we introduced code pipelining, followed by huge improvements in our control systems and quantum engine. And I'm proud to announce that not only have we hit our mark of 10,000 CLOPS, but we've surpassed it at 15,000 CLOPS.

    And so that's our third breakthrough for today: a 10x improvement over our fastest integrated system last year. And on top of all these breakthroughs, we can take our development roadmap and tick a major milestone off of it: Osprey at 433 qubits. Back to Jay.

    Thanks Jerry. So I hope you see that we've made a lot of improvements in performance. But now I want to start to talk about something that we call creating value.

    With all the progress we've made in hardware and software, we think it's important that we actually start to make our software easier to use and able to do more. We call our process towards getting quantum advantage the no-nonsense path to quantum advantage.

    And this path, we believe, is pretty simple, actually, although there's quite a lot to do.

    But fundamentally, there are two simple things that we want to focus on. One, we want to work out how we can run quantum circuits faster on quantum hardware and software. And two, we need to work out what interesting problems we can map to those circuits. We're going to cover point two

    later in the sessions today. And fundamentally, I do believe we have to do this together as a community, to work out what are the right circuits that we want to investigate for quantum advantage. But in the meantime, please welcome Blake Johnson to the stage, who's going to talk through our progress on point one.

    Thanks, Jay. Earlier this year, we told you that error mitigation offers the potential to deliver quantum advantage at lower total resource cost than a fully error-corrected solution. In particular, we know that many applications require accurate estimates of large quantum observables, and we now see a way to deliver unbiased estimates of such observables using error mitigation.

    So now I want to tell you about our work in this area. Building upon the performance enhancements that Jerry has just described, what we are after is delivering value to our users. We believe that value requires coupling great performance with advanced capabilities and delivering that in a frictionless experience.

    Today, I have three announcements that advance this goal. First, on performance: at a foundational level, our ability to deliver value in quantum computing rests upon faithful execution of quantum circuits. In practice, though, we have to contend with the presence of errors.

    Fortunately, we have powerful tools to deal with these errors. One category of tool is error suppression, which reduces errors by modifying the underlying circuits without changing their meaning. For instance, we can inject additional gates that echo away certain error sources.

    Another category of tool is error mitigation. Error mitigation can deliver accurate expectation values by executing collections or ensembles of related circuits and combining their outputs in postprocessing. Error mitigation is powerful.

    In fact, earlier this year, we showed how error mitigation can deliver unbiased estimates from noisy quantum computers, and we'll be talking about this more later today. But error mitigation also comes at a cost, and that cost is exponential, with the form you see on the screens here. Critically, the base of this exponent, gamma-bar, is a measure of the collective quantum noise of a quantum circuit.
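To see why that exponential matters, here is a toy model of the sampling overhead. The specific form below, `gamma_bar ** (2 * layers) / epsilon ** 2`, is an assumption standing in for the formula on the slides, but it captures the qualitative point: small improvements in per-layer noise change the cost by orders of magnitude.

```python
# Toy model of why error-mitigation cost is exponential. Assumed form (a
# stand-in for the formula on the slides): samples ~ gamma_bar**(2 * layers)
# / epsilon**2, where gamma_bar >= 1 measures collective noise per layer and
# epsilon is the target precision on the expectation value.

def mitigation_samples(gamma_bar: float, num_layers: int, epsilon: float) -> float:
    return gamma_bar ** (2 * num_layers) / epsilon ** 2

# A 10x reduction in per-layer noise changes the cost by orders of magnitude:
for g in (1.01, 1.001):
    print(f"gamma_bar = {g}: {mitigation_samples(g, 1000, 0.01):.3g} samples")
```

This is why the talk ties error mitigation back to holistic performance: better quality lowers gamma-bar, and better speed makes the remaining sample count affordable.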

    Critically, then, error mitigation depends on the performance metrics that Jerry has described before, tying together scale, quality, and speed. Consequently, error mitigation is not practical without holistic performance. So we know the problem in front of us, and we also know we have some powerful tools to help address it.

    Now the question becomes: how are we going to make those tools easy to use and accessible to everyone? In other words, how are we going to make it frictionless? Our answer builds upon the Qiskit Runtime primitives. We launched primitives earlier this year, and they elevate the fundamental abstraction for interfacing with quantum hardware to directly expose the kinds of queries that are relevant to quantum applications.

    These more abstract interfaces allow us to expose error suppression and error mitigation through simple-to-configure options. And when we do it right, it can have a major impact. For example, this work from our partners at Lawrence Berkeley National Laboratory studied a circuit that they called wormhole-inspired teleportation.

    And in this example, error suppression is sufficient to convert the response from something that looks just like noise, what you see in orange, to a response which closely tracks the noiseless model of the system, what you see in blue. So my first announcement is that as of today, we're launching beta support for error suppression in the Qiskit Runtime primitives, through a simple optimization level in the API. We can go further, though, and to do so we're going to introduce a new option that we're calling a resilience level.

    This is a simple-to-use control that allows the user to adjust the cost-accuracy tradeoff of a primitive query. At resilience level one, we're going to turn on methods that specifically address errors in the readout operations. We're going to adapt the choice of method to the specific context of sampling or estimating.

    These methods have fairly minimal overhead, so we're making this the default resilience level in primitive queries. We can go further by layering on other error mitigation approaches at higher resilience levels. Resilience level two will enable zero-noise extrapolation.

    This can reduce error in an estimator, but doesn't come with a guarantee that the answer is unbiased. Finally, at resilience level three, we turn on our most advanced error mitigation strategy, which is probabilistic error cancellation. This method incurs a substantial overhead, both in terms of noise-model learning and circuit sampling, but also comes with the most robust guarantees about the quality of the result.
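To make the zero-noise extrapolation idea (resilience level two) concrete, here is a minimal pure-Python sketch, independent of any Qiskit API: evaluate an expectation value at amplified noise-scale factors, fit a line, and read off the value at zero noise. The linear noise model below is an invented stand-in for real hardware runs.

```python
# Minimal zero-noise extrapolation sketch. Assumption: the noisy signal
# varies linearly with the noise scale factor, so a degree-1 fit evaluated
# at scale 0 recovers the ideal value. noisy_expectation is a toy stand-in
# for runs on real hardware at amplified noise levels.

def richardson_zero(scales, values):
    """Least-squares linear fit of values vs. scales, evaluated at scale 0."""
    n = len(scales)
    sx = sum(scales)
    sy = sum(values)
    sxx = sum(x * x for x in scales)
    sxy = sum(x * y for x, y in zip(scales, values))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - slope * sx) / n  # intercept = extrapolated zero-noise value

def noisy_expectation(scale, ideal=1.0, error_per_unit=0.08):
    """Toy noise model: signal decays linearly with the noise scale factor."""
    return ideal - error_per_unit * scale

scales = [1.0, 2.0, 3.0]
values = [noisy_expectation(s) for s in scales]
print(richardson_zero(scales, values))  # recovers the ideal value for linear noise
```

Real noise is rarely exactly linear, which is why, as the talk notes, ZNE reduces error but cannot guarantee an unbiased answer the way probabilistic error cancellation can.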

    For the developers in the audience, here's what that looks like in code, manipulating the resilience level through the new options interface. And my next announcement to add to our chart is that we're also releasing a beta launch of this resilience feature in the Qiskit Runtime primitives, which you can use today. Finally, I want to tell you about a powerful new capability and an important milestone achievement on our roadmap.
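The on-screen snippet is not reproduced in this transcript; the fragment below is a reconstruction of roughly what the qiskit-ibm-runtime options interface looked like in late 2022. Treat the option and class names as version-dependent, and note that actually running it requires an IBM Quantum account, so this is a configuration sketch rather than a runnable example.

```python
# Configuration sketch only -- requires qiskit-ibm-runtime and a saved IBM
# Quantum account; names reflect the interface as of late 2022 and may
# differ in your installed version.
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Estimator, Options

options = Options(
    optimization_level=3,  # error suppression (e.g. dynamical decoupling)
    resilience_level=2,    # error mitigation: 2 = zero-noise extrapolation
)

service = QiskitRuntimeService()  # uses a previously saved account
with Session(service=service, backend="ibmq_qasm_simulator") as session:
    estimator = Estimator(session=session, options=options)
    # job = estimator.run(circuits=my_circuit, observables=my_observable)
    # (my_circuit / my_observable are hypothetical placeholders)
```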

    What I'm talking about is the ability to execute dynamic circuits on IBM quantum systems. Dynamic circuits marry real time classical computation with quantum operations, allowing feedback and feed forward of quantum measurements to steer the course of a computation. What can you do with dynamic circuits? A lot of things.

    But just to give one concrete use case, we know that dynamic circuits offer new opportunities to reduce circuit depth. For instance, there's a surprising result from Jozsa that any Clifford-group operation can be implemented in constant depth at the cost of doubling the width of the circuit. You can see an example of this on the screens here, where a four-qubit Clifford-group operation is reduced from depth 18 to depth 6 using this concept.

    This is but one example, and we know that there are many more. Today, we enter a new phase of discovery for dynamic circuits by enabling their exploration on live quantum systems. Our team is just now rolling out support for dynamic circuits.

    In a matter of a few days, we'll have support on 18 IBM Quantum systems. These are the systems built for fast readout, which we first introduced with the Falcon R5 processor. As a result, projects like this research prototype from our team, which previously required working side by side with the hardware engineers, can now be written and executed with a few simple lines of Qiskit code.
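To illustrate the feed-forward idea without depending on any particular Qiskit version, here is a pure-Python toy: an "active reset" that measures a simulated qubit mid-circuit and conditionally applies an X gate based on the outcome. This is an invented classical simulation, not the Qiskit code referenced in the talk.

```python
import random

# Pure-Python toy of the feed-forward idea behind dynamic circuits:
# measure mid-circuit, then choose the next gate based on the outcome.
# Here: an "active reset" that returns any 1-qubit state to |0>.

def measure(state):
    """Collapse a 1-qubit state [a0, a1]; return (outcome, collapsed_state)."""
    p1 = abs(state[1]) ** 2
    if random.random() < p1:
        return 1, [0.0, 1.0]
    return 0, [1.0, 0.0]

def x_gate(state):
    """Bit flip: swap the |0> and |1> amplitudes."""
    return [state[1], state[0]]

def active_reset(state):
    outcome, state = measure(state)
    if outcome == 1:  # classical feed-forward on the measurement result
        state = x_gate(state)
    return state

# Regardless of the input superposition, the qubit ends in |0>:
final = active_reset([2 ** -0.5, 2 ** -0.5])
print(final)  # [1.0, 0.0]
```

On hardware, both branches of the conditional must execute within the qubit's coherence time, which is why fast readout is the prerequisite the talk highlights.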

    So the third announcement to add to the chart is that we're launching support for dynamic circuits on IBM Quantum systems. And I'm also excited to check off another major milestone along our quantum roadmap. All right.

    With that, I'm going to pass it back to Jay. Thanks, Blake. There's a lot to unpack here.

    I expect many questions and lots of user feedback as you start to work with these innovations. I fully expect with these abstraction levels of the stack, the quantum industry will go into overdrive. So that's the next thing I want to talk about.

    It may surprise you, but our objective is not just to build our quantum program. Our aim is to grow a quantum industry. Growing a quantum industry is the only way we're going to make quantum computing succeed.

    We need to nurture different kinds of industries. Some may even compete with us. We accept that, and actually we encourage it.

    What's good for everyone is good for all of us. Our industry-building efforts focus on five different areas. The first is that we want to keep advancing quantum, as you heard.

    From day one, we've always focused on doing open science, open source, as well as how do we work together to understand when quantum advantage can occur. The second is growing that workforce through the many quantum innovation centers that we partner with. How do we create a sustainable future? The third is we need to make sure we're solving relevant industry questions.

    So how do we actually understand what matters for industry, and how do we get industry adoption? The fourth, which I think is very exciting and is going to kick off very soon with all the progress that you just heard about from Jerry and Blake: how do we start to integrate application services into the services we provide, to make it easier to do more with quantum? And finally, we need to make sure that as we go forward, we are always thinking about how we keep clients' data secure with quantum-safe technology. So with this in mind, I want to welcome Scott Crowder to the stage to talk about how we're driving adoption and creating this industry. Thanks, Jay.

    So, as Jay mentioned, we're trying to foster a global quantum computing industry. That's why driving adoption of quantum computing is so fundamental to our mission. And adoption begins by laying the groundwork for tomorrow's quantum workforce through education initiatives.

    The energy we've seen from learners of all ages is absolutely amazing. Over 460,000 registered users have taken advantage of IBM Quantum's open access, and these emerging quantum developers have run more than 2.3 trillion circuits on real quantum computers.

    As we work to build the future quantum workforce, we want to ensure it reflects our full potential by focusing on building skills in populations that are currently underrepresented in STEM. At the high school level, we've been working with Qubit by Qubit to reach more than 12,000 students, of which about 70% come from underrepresented populations. At the university level, we're seeing rapid growth in interest in learning about and studying quantum computing.

    And Qiskit has been used in over 375 university classrooms, and we continue to capture the wider interest that's out there, with over 5 million learners accessing Qiskit digital learning content. So as a community, we've together provided access to real quantum computers, we've helped train a generation of quantum-native developers, and we've provided new capabilities.

    As a result, we've seen a rapid rise in the use of real quantum computers for research, with over 1,750 research papers published since 2016 using IBM Quantum and Qiskit technology alone. This growth in adoption would not be possible without access to our technology.

    Back in 2016, we started with one five-qubit system in a research lab at IBM Research in Yorktown Heights. We have now cumulatively deployed 60 systems via the cloud. That includes deployments in our Poughkeepsie, New York data center, plus global computation center deployments to support our Quantum Innovation Center partners Fraunhofer in Ehningen, Germany, and the University of Tokyo in Kawasaki, Japan, and, coming soon, to support our Quantum Innovation Center partners Cleveland Clinic in Ohio, PINQ² in Quebec, and Yonsei University in South Korea.

    In total, we now have more than 200 members in our IBM Quantum Network, all using these systems. These members range from quantum innovation centers and industry partners to startups building out the commercial ecosystem, to individual researchers advancing the quantum field. And we now have a global network of 34 quantum innovation centers that provide access to quantum computing, advance quantum research and development, support development of a quantum workforce, and drive the economic development of their regions.

    We're constantly welcoming new quantum innovation centers to our quantum network. Since the last summit, we have added institutions across the globe, such as Arizona State University, which is building a center of excellence focused on quantum computing education and research, and DESY, which is applying quantum computing to enhance its high-energy physics mission.

    IIT Madras, our first quantum innovation center in India, and uptownBasel, which aims to accelerate innovation in areas like life sciences, manufacturing, and sustainability by enhancing the Swiss ecosystem of quantum technologies. We're also seeing rapid growth in industry activity exploring the potential of quantum for real-world problems. Our partners have investigated over 45 different industry applications.

    These span topics from simulating nature to processing data with complex structure to search and optimization. And we're excited to announce a number of new industry partners today: industry leaders who are expanding our understanding of how quantum computing can bring value to business and society. Bosch, with whom we're jointly researching the use of quantum computing for materials science.

    Crédit Mutuel, to explore the applicability of quantum computing for finance. Erste Digital is working with IBM to become quantum-ready for applications such as risk management and fraud detection, and Vodafone is joining the IBM Quantum Network to explore use cases for telecommunications, as well as collaborating on quantum-safe cryptography. Welcome.

    We're also working with startups and software vendors to provide new capabilities to industry workflows by integrating Qiskit Runtime as a service into their application services. For example, as you'll hear in more detail later today, QunaSys used Qiskit Runtime primitives and error mitigation tools to easily port simulations of quantum circuits to quantum hardware for their work with JSR. As Jay mentioned, this is going to be really critically important to us working together to build a quantum industry.

    We've learned a lot over the last six years. We've learned through our deep collaborations with our partners that successful adoption requires three basic components. The first is access to real quantum computers.

    The second is access to a quantum runtime environment, someplace where you can run quantum programs, preferably based on an underlying open-source software stack. And the third is training and education to build skills. All of our offerings are based on these three aspects. We continue to provide open and free access to Qiskit Runtime as a service, as well as open learning material, all based on Qiskit, the leading open-source quantum software development kit and community.

    We also offer two paid plans for Qiskit Runtime as a service, which provide access to our more advanced quantum systems and capabilities. The first is a pay-as-you-go plan, available today from IBM Cloud, which is a standard serverless cloud model where you're charged by the amount of time you use. The second is the Premium Plan, a reserved-capacity model that includes access to Qiskit Runtime as a service and has been developed for longer-term, deeper partnerships.

    This includes access to our entire fleet of systems, including our exploratory systems, making the latest technology available to our partners. It also includes more advanced technical support and training, and membership in the IBM Quantum Network. And we offer the Quantum Accelerator to help our clients build their skills together, deeply investigate the use of quantum computing for relevant business problems, and understand the implications of quantum computing for their industry.

    It's supported by a mix of deep quantum and deep industry expertise and includes customized technical support and skill building. Of course, unlocking the potential of quantum comes with important considerations. A future quantum computer, much more powerful than today's systems, will be capable of cracking today's public key encryption and digital signature algorithms.

    This means we need new classical methods of encryption, and we need to find and migrate our current cryptography to those new methods. This is not going to be easy. It will be like Y2K, but in some ways much more complex.

    But there is good news. Since 2016, IBM has worked with standards bodies to define new quantum-safe cryptography to prepare for this new era. In fact, back in July, NIST selected four quantum-safe algorithms for standardization.

    Three of the four selected were proposed by IBM researchers and their collaborators. The U.S. government has also begun to establish timelines to transition to these new algorithms.

    And at IBM, we've already begun the transition, for example, building our newest-generation IBM z16 to be quantum safe from the firmware up. But we know we cannot execute a transformation of this complexity alone. It will take industry ecosystems, and it will take government, industry, and supply chain providers to cooperate.

    At Mobile World Congress, the GSMA Post-Quantum Telco Network Task Force was formed, and they tapped IBM and Vodafone as initial members to support the industry's transition to quantum-safe cryptography. We're also announcing IBM Quantum Safe services to support our clients' transformation to quantum-safe cryptography. The new Quantum Safe service offers IBM expertise to help you prepare and discover your cryptographic needs, then plan your transformation to quantum-safe cryptography, while building in the agility and observability to make future transformations simpler and more cost-efficient.

    So now I'm excited to add two more announcements to the list: our continuing growth of the Quantum Network to over 200 members, including the announcement of Bosch, uptownBasel, and Crédit Mutuel, and our new IBM Quantum Safe offering, in collaboration with Vodafone, to help the telecommunications industry's transformation to quantum-safe cryptography. And now back to Jay. Thanks, Scott.

    So that's our State of the Union as it stands today. I hope you've seen that there's tremendous progress in performance, in scale, quality, and speed, and that we've extended what we can do in our software by integrating error suppression, error mitigation, and dynamic circuits. We've seen the Quantum Network grow to over 200 members.

    And we've just announced the quantum safe offering to be added to our already simplified way of experiencing what IBM Quantum has. But I want to change gears a little and think about what is next. If we go back to the top line, we're hitting our goals and we're hitting that roadmap.

    The industry is growing, but what is the future going to bring? What is the next thing in quantum? And for that, I would like to invite Katie to the stage to talk through what we're seeing next. Go for it, Katie. Thank you, Jay.

    So now we get to talk about what's next, which is going to be quite fun. So, the first major goal for next year is Condor. Condor will be the first processor to break the thousand qubit mark.

    This is a huge feat and will push all the limits of scale like no other quantum chip has previously. We also see Condor as a test of the limits of a single chip technology, and it really will help us show the path forward. So, moving on from just talking about processors, we've made a lot of progress this year understanding how to push the limits of quantum using our software.

    But we're finding there's many ways that we can weave quantum and classical together to extend what we can achieve. And we call this our circuit knitting toolbox. Let's take a look.

    So, first we discovered we can embed quantum simulations inside larger classical problems. We use quantum to treat pieces of the problem and use classical to approximate the rest. Also, with things like entanglement forging, we can break the problem down into smaller circuits, run those smaller circuits on the quantum hardware, and then reconstruct them classically, which allows us to double the size of what we could do otherwise.

    And with circuit cutting, we cut the less-entangled connections to split the problem into subsystems. We compute the global energy by classically combining the results from each of the QPUs. And so I'm happy to announce today that we're also releasing the alpha version of the circuit knitting toolbox.

    So check it out in the cloud session later and you can start using it. So this brings us to the third thing we want to talk about today. All these tools have a very common approach.

    They decompose the problem, they run a lot of Qiskit Runtime jobs in parallel, and they reconstruct the outcomes into a single answer. And we hear from our users, like all of you, that you really want access to this, but you don't want to worry about the underlying infrastructure; you just want to run your code. So, to this end, I'm even happier to announce today that we're also releasing an alpha version of Quantum Serverless as well.

    And as an example, last year you heard us say that we were able to speed up a molecular simulation 120 times using Qiskit Runtime. Now, with Quantum Serverless, we can run the same problem three times faster than that. And so with three quantum systems, we could have a 360-times speedup.
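The decompose / run-in-parallel / reconstruct pattern described above can be sketched in plain Python. This is an invented illustration of the pattern, not the Quantum Serverless API: threads stand in for separate quantum systems, and `run_subexperiment` is a hypothetical stand-in for a Qiskit Runtime job.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the pattern Quantum Serverless automates: decompose a problem
# into independent sub-jobs, run them in parallel (threads standing in for
# separate quantum systems), and classically reconstruct a single answer.
# run_subexperiment is a hypothetical stand-in for a Qiskit Runtime job.

def run_subexperiment(fragment):
    """Pretend each circuit fragment yields a partial expectation value."""
    return sum(fragment) / len(fragment)

def knit(partials):
    """Classically combine partial results into one answer."""
    return sum(partials)

fragments = [[0.1, 0.2], [0.3, 0.5], [0.8, 1.0]]  # e.g. cut-circuit pieces
with ThreadPoolExecutor(max_workers=3) as pool:   # three "QPUs" in parallel
    partials = list(pool.map(run_subexperiment, fragments))
print(knit(partials))

# The arithmetic from the talk: a 120x runtime speedup per system, times
# 3 systems running fragments concurrently, gives 120 * 3 = 360x.
```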

    So, as I've mentioned, we're learning to make the most of this parallelization, but we need to build it into the systems and the primitives. So next year, we're doing this using multiple Heron processors and what we're calling the threaded runtime extensions. We're really excited about Heron.

    It's not only going to be the first processor to employ this multi-QPU model, but it's also going to be the first processor with more than 100 qubits to beat the three-nines fidelity threshold, with all the advances that Jerry talked about.

    So, looking forward, I'm happy to say that everything is on track for next year. Now, I didn't have a ton of time to talk about the applications, but Jamie is going to go into more detail in her session later on developing applications. So I get to add another breakthrough to our list: the alpha release of both the circuit knitting toolbox and Quantum Serverless.

    Back to you, Jay. Thanks, Katie. I hope you didn't mind that we added two more announcements: Quantum Serverless and the circuit knitting toolbox.

    Along with error mitigation built into the runtime, dynamic circuits, and the 433-qubit Osprey processor, we're really setting ourselves up for the future. But there's a catch.

    If you look at the roadmap, you can see it. Obviously, you can see that we need to keep expanding and doing and implementing the technology to make this roadmap happen. But it's more than that.

    2023 marks the point where everything changes. The future is no longer a continuation of all the great progress that you saw and we just announced. It's actually what we think of as the next wave in quantum computing.

    Hence the theme for this summit: the Next Wave. Today we build single processors, but we realize the path ahead is multiple processors. Today we build bespoke infrastructure solutions, which aren't fast enough, aren't scalable, and cost too much.

    In the future, we need scalable controls. Today, we're employing classical compute to enhance quantum hardware. But next we will develop what we're calling middleware for quantum that will enhance it further.

    This next wave is what we are calling quantum centric supercomputing. To me, a quantum centric supercomputer is a modular computing architecture which will enable scaling. It will use communication to increase the computational capacity, and it will use a hybrid cloud middleware to seamlessly integrate quantum and classical workflows.

    This is going to be a lot, but now I want to bring Jerry back to the stage to explain modularity for quantum in a lot more detail. All right. Thanks, Jay.

    Let's start with that first piece of quantum centric supercomputing: modularity for quantum. Now, this photo is striking, but it's really a relic of the past in some sense. I know it's right out there and you can take a look at our chandelier, but a lot of the wiring that you see within it is built by hand.

    It's handcrafted. It's bespoke. And at 100 qubits with a few hundred cables, I can convince our team to actually do that busy work.

    But when we push this to 400 or 1,000 qubits and need to hand-tighten all the different bolts, this becomes impractical. It's simply not cost effective and not nearly dense enough for the solutions that we need in the future. It has to change.

    And so we're really excited to show the next evolution of high density control signal delivery with cryogenic flex wiring. This is going to make it easier to wire hundreds to thousands of lines. And it's absolutely critical for the reliability of our deployed systems.

    Now, today, it's already 70% more dense and five times cheaper. And we have plans to make this even better. Besides scalable signal delivery, we also need to look forward with our cryogenic platforms.

    Last year, we introduced the world to KIDE, a modular cryogenic platform from our friends at Bluefors. Now here's a sneak peek into their manufacturing lab in Helsinki, where we can see their exciting progress. You can see it's real, it's big, it's literally a walk-in cryostat.

    And when I saw it, I thought it was a walk-in freezer for meat and ice cream. But it's actually for millikelvin temperatures and qubits, all with the potential for modularity and scalability for the future. Now, another challenge for scaling is control.

    Thinking back to the first cloud system that we put online with five qubits, it was quite amazing that we got it all together using zip ties and dental floss. But we didn't think too much about all the cost and space that it would take to make it.

    We just wanted to get it running. We used a full rack of electronics that were commercially available to control five qubits. Quickly, we realized that we had to replace it with our Gen One control systems.

    It was a big deal because we were able to make one rack that controlled 20 qubits and did everything that the commercial solutions did, but less expensively and in a smaller footprint. But then we realized that we had to add in new capabilities. With Generation Two in 2020, we focused on adding in dynamic circuits, like those capabilities that Blake had mentioned earlier.

    And we also continued to drive down cost and footprint. But now I'm excited to show you our Gen Three control systems. This year is yet another huge step forward.

    By working with experts in control, we really put this into hyperdrive. Our new rack controls 400 qubits at an even lower price point. So 400 qubits of control in just a single rack.

    And that's not all. We've been working to make these systems easier to use, flexible, reliable, and certainly more serviceable. With hundreds and thousands of qubits coming soon, the probability of something going wrong is really not negligible.

    And we need to be able to replace parts while other parts of the system remain live. And so you're looking at just that in the video: our engineers working on a hot swap of our Gen Three control system.

    That's impressive, but still not enough. We wanted to go even one step further, and here's what we're working on next. This is a CMOS qubit controller, and we designed it to control four qubits with a chip that's the size of my fingernail.

    We've already used it to control two qubits to produce high-fidelity gates. Now, it can also be placed inside the cryostat at a balmy four kelvin, which allows us to reduce the line density and latency even further.

    So there's still a lot of work to do here, but I certainly expect that there are going to be aspects of this type of CMOS technology that will make it into our scalable, fourth-generation control systems. Now, besides modularity, on the second front for quantum centric supercomputing, I want to talk a little bit about the communication aspects for computation. Now, as Katie had said with regards to the circuit knitting toolbox, it's going to be important to be able to squeeze our systems to the limits.

    But here the issue is going to be time. Considering that we want to use circuit cutting and cut a large circuit 14 times, and assuming our current runtimes and a repetition rate of around 4 kHz, this would actually take around 181 years. Not a time that I want to wait.

    But with the kind of parallelization that Katie was talking about, we can bring this down to 1.8 years. Add in classical communication, in the form of dynamic circuits between the processors, and now this becomes just 18 hours.
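As a sanity check on those quoted runtimes, a few lines of arithmetic recover the speedup factors they imply. Only the three runtimes (181 years, 1.8 years, 18 hours) come from the talk; everything else below is plain unit conversion.

```python
# Speedup factors implied by the three runtimes quoted in the talk.
# Nothing here comes from IBM beyond those three numbers.
HOURS_PER_YEAR = 365 * 24

serial_hours = 181 * HOURS_PER_YEAR    # ~181 years: one QPU, serial shots
parallel_hours = 1.8 * HOURS_PER_YEAR  # with parallel classical runtimes
dynamic_hours = 18                     # with classical links between QPUs

parallel_speedup = serial_hours / parallel_hours  # roughly 100x
dynamic_speedup = parallel_hours / dynamic_hours  # roughly 876x more
```

So parallelization buys about two orders of magnitude, and inter-processor classical communication buys nearly three more, which is why the jump to milliseconds then hinges on quantum interconnects.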

    And so this is why our Heron target for next year is so important: to bring in this classical parallelization. Looking even further, if we bring in quantum interconnects between the different processors and build in some of the one-meter coherent l-coupler links that we're planning to use with Flamingo in 2024, we can bring this down to just milliseconds. Now, assuming all this works, then in fact, the next bottleneck becomes the locked-in configurations of all the connected fridges.

    These connected configurations of processors would actually tie us down to specific topologies, and it'd be great if we could reconfigure them without having to physically move fridges around. And so if you look long term into the future, what we want is to use transduction together with optical connections to enable reconfigurable networks. So in terms of communication for quantum, there's a lot to look forward to next.

    As Jay said, the third part of quantum centric supercomputing is middleware for quantum. And I'm going to bring Katie back to the stage to tell you about it.

    Thanks, Jerry. Okay. Middleware for quantum is what will make quantum useful.

    And with the overhead Jerry spoke about, it's really time for us to define what quantum in the cloud means. It's definitely nothing like you see from us today or from our competitors. But simply put, we see the future really driven by quantum middleware that will bring the best solutions from any cloud provider together with our Qiskit Runtime as a service.

    So I'm going to show you a video that explains multicloud and quantum and how middleware will make life easier for users. There are three steps, as I explained earlier: decompose, run in parallel, and reconstruct. Each of these can be built on whatever cloud provides the best solution.

    So here we're considering a machine learning algorithm, which we call quantum kernels, and combining it with the circuit knitting toolbox I talked about earlier. First, we need to define the circuit and set up the multicloud environment to run on. Then the quantum serverless tools will handle all the orchestration for you.

    Next, we compile the higher-level circuits and map them to the physical circuits, and the circuit knitting I described earlier, using the circuit cutting method, decomposes this into four smaller circuits. Thanks to serverless and Qiskit Runtime, these subcircuits can be sent in parallel.

    They're executed using the primitives; error mitigation and suppression, as Blake talked about, are applied; and the results are sent back to Qiskit Runtime as a service. Then these reliable results can be combined in any other cloud for the final answer. And just like that, sent to the user.
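The workflow just described, with each stage running on whichever cloud suits it, can be sketched as a simple stage-to-cloud pipeline. Every name below (the stage functions, the cloud labels) is a hypothetical placeholder, not the real middleware interface.

```python
# Hypothetical sketch of the multicloud pipeline described above: each
# stage runs on whichever cloud offers the best solution for it.
def decompose(circuit):
    # Circuit cutting: split one large circuit into smaller subcircuits.
    return [f"{circuit}-part{i}" for i in range(4)]

def run(subcircuits):
    # Stand-in for parallel execution via Qiskit Runtime primitives,
    # with error mitigation and suppression applied to each subcircuit.
    return [f"result({s})" for s in subcircuits]

def reconstruct(results):
    # Recombine mitigated subcircuit results into one final answer.
    return " + ".join(results)

# Map each stage to the (placeholder) cloud that would handle it.
stages = [(decompose, "cloud-A"), (run, "cloud-B"), (reconstruct, "cloud-C")]

payload = "kernel-circuit"
for stage, cloud in stages:
    payload = stage(payload)  # in a real system, submitted to `cloud`
```

The design point is that the user sees one pipeline, while the middleware is free to place each stage on a different provider.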

    Here's the code, and you can see how it distributes work over three different clouds and how simple it is. So we have all these wonderful innovations in quantum middleware and a clear vision of what we believe is the next wave of quantum computing. But the question is, how do we make a quantum centric supercomputer? What system will we need to create to hold all these innovations? And so now we're going to talk about System Two, which we do believe is a building block for quantum centric supercomputing.

    This has been a huge challenge in industrial design, and I'm going to invite David Bryant on stage to talk about it. Hey, thanks. Good morning.

    Thanks, Katie. So if you've taken anything away from today, it would be that nothing in quantum computing is that easy, and that would extend to the industrial design of the system. The brief was challenging, to say the least.

    When you have a challenging brief like this, it's always a good idea to work with brilliant people. So I do want to give a shout-out to our design partners, Map and Universal, who have been working very closely with us on the project over the last year. The brief was to design a quantum computing system capable of housing a three-tiered chandelier holding three different quantum processors, all within a hexagonal cryostat.

    Weighing about nine tons, this cryostat maintains an almost perfect vacuum and temperatures colder than deep space, in fact, colder than anywhere in the known universe. So the design brief was really to design the coolest thing in the universe, so no pressure. The requirements were also to have the control systems be as physically close to the cryostat as possible to reduce signal latency.

    And on top of this, the control systems could only be a few feet away from the gas handling and the classical compute banks that handle the cryogenics and Qiskit Runtime, respectively. And on top of that, we needed this system to be extensible so that we could add more control systems as the qubit count of the processor increases. Quantum System Two is not just a standalone system.

    It is designed to be the building block of quantum centric supercomputing. So to this end, we needed the system to be modular. In other words, it'll be possible to connect the cryostats of multiple Quantum System Twos together with long-range couplers connecting the processors.

    By connecting two cryostats together, we can create a system of 8,316 qubits. By connecting three cryostats together, we can create a system of 16,632 qubits. This modularity also extends to the compute and gas handling bank.

    We designed it to be 100% customizable, so we can extend the computational capacity of the system by swapping classical racks for AI racks, and vice versa. There are also human factors to consider.

    Quantum System Two is not the kind of system you can just drop into a data center and forget about. The technology is nascent. It requires human interaction.

    So, inspired by the idea of modular furniture, we created a working environment that was considerate to engineers and to technicians. And on top of all of these requirements, quantum System Two, like Quantum System One, needed to look absolutely beautiful and iconic, driving an emotional connection through the power of design. In the words of TJ Watson, good design is good business.

    As in System One, the solid shapes that comprise Quantum System Two are actually very simple. The central cryostat is basically a hexagonal prism, and the rest of the systems are basically cuboids. So we clad these geometric shapes with anodized polished aluminum, or aluminium, if you'd like to pronounce it correctly, and a novel material.

    This material softly reflects the environment around it. In addition to this, we encased the system in 70/30 glass, which acts as both a mirror and a window to the system.

    And they both reflect off each other. This reflection creates a subtle hall of mirrors effect that we felt expressed the multidimensionality of the mathematics that we were trying to solve for. And here's the side view of the system.

    As you can see, the materials have a very beautiful reflective quality. Now, it is quite difficult to visualize these designs using still images alone, so we created a short film to give you a sense of the full system. This has already been shared with you, but we thought it bears repeating.

    Thank you. So we're excited to announce, as Dario mentioned, that we will have a live working system, Quantum System Two, to share with you at next year's Quantum Summit in 2023. So that's two more announcements to add to our slide.

    Quantum centric supercomputing is what we see as the next wave of quantum computing technologies. And IBM Quantum System Two is the building block for it. So the question now is, when we have a Quantum System Two next year, what are we going to do with it? And with that, I'm going to hand back to Katie.

    Thank you. Me again; I promise it's the last time. So, earlier, Blake showed you this plot, and he showed you how error mitigation can enable better results.

    And he also told you that we wanted to simulate these circuits at a lower cost than classical computing. And I told you how the middleware is going to orchestrate this, and classical compute will allow us to extend what we can do. So today, we're setting out to build a tool that can push us in this direction.

    But we're issuing a challenge to all of you. We're calling it the 100 by 100 Challenge, and we're pledging that in 2024, we'll offer our partners and clients and all of you a system that will generate reliable outcomes running circuits of 100 qubits and a gate depth of 100. We've said we have a twofold path to quantum computing.
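To get a feel for the scale of that pledge, here is a back-of-the-envelope sketch of what 100 qubits at depth 100 means in gate counts. The brickwork layout and the fidelity arithmetic are illustrative assumptions, not the official definition of the challenge.

```python
# Toy look at the 100 x 100 target: a brickwork circuit on 100 qubits
# with a two-qubit gate depth of 100. Layout and fidelity figures are
# assumptions for illustration, not IBM's challenge definition.
def brickwork_layers(n_qubits, depth):
    """Alternating even/odd layers of nearest-neighbour two-qubit gates."""
    layers = []
    for layer in range(depth):
        offset = layer % 2  # shift odd layers by one, brickwork style
        layers.append([(q, q + 1) for q in range(offset, n_qubits - 1, 2)])
    return layers

circuit = brickwork_layers(100, 100)
n_two_qubit_gates = sum(len(layer) for layer in circuit)  # ~5,000 gates

# At the three-nines (99.9%) gate fidelity mentioned in the talk, the
# raw circuit "survival" probability is tiny, which is why error
# mitigation is essential to extract reliable outcomes.
survival = 0.999 ** n_two_qubit_gates
```

Even with every gate at three-nines fidelity, an unmitigated 100-by-100 circuit would succeed well under 1% of the time, which is the motivation for pairing such hardware with error mitigation.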

    We still have to make better hardware, software, and infrastructure, and our users have to devise use cases. And we see plenty of avenues to explore use cases using these reliable results, like ground states, thermodynamic properties, quantum kernels, and more. But we need everyone here, and in our network and partnerships, to really think about what circuits they'd want to run on a processor like this.

    Why are we so confident that we can release hardware like this? I hope this morning helped illuminate our excitement about the direction we're going. We've shown you the power of our error mitigation techniques, and later today, Sarah and the team will also talk to you, in our No-Nonsense Path to Quantum Advantage session, about some really exciting demonstrations showing the power and scale of these techniques. Returning results for these circuits in less than a day's runtime means that we need a processor with 100 qubits that has error rates better than the three-nines threshold, and that's really within reach based on what Jerry showed us today.

    We also need the software infrastructure that can quickly process and read out the circuits in concert with the classical resources. And we're really feeling confident that in 2024, we're going to have what it takes with Heron to do just that. So that's one last announcement to add to our slide, and the first challenge to the audience: the 100 by 100 Challenge.

    Back to you, Jay, for the wrap-up. So we've shared a lot. The big news is we have a 433-qubit processor, which we'll be making available to our clients in a few months.

    We have dynamic circuits now integrated into our software, and we shared a vision for the next wave of quantum computing technologies that we're going to introduce in 2023. Today, we shared no less than twelve breakthroughs and announcements, so just to recap on them: we've made tremendous progress in performance with 433 qubits, pushing the quantum volume with the new architecture and driving CLOPS up by a factor of ten.

    We've announced how we're going to integrate powerful techniques such as error suppression, error mitigation, and dynamic circuits into our services. We've launched a new offering, Quantum Safe, and we're already working with our clients, and we've seen the Quantum Network grow to over 200 members, with new clients just announced today.

    Today we released the first tools in middleware for quantum, the quantum serverless package and the circuit knitting toolbox. There will be many more tools to come, but these tools will set us up for a future where multicloud and quantum will work together seamlessly. We shared the next wave for quantum computing, which we call Quantum Centric supercomputing.

    And we showed a system that we're building which will be the building block for this: IBM Quantum System Two. And finally, we announced the 100 by 100 Challenge. I've been doing quantum computing now for over 20 years, and it really feels different when your hardware developers, technicians, and software engineers really feel that they can achieve this.

    So creating this 100 by 100 device will really allow us to set up a path to understand how we can get quantum advantage in these systems and lay out a future going forward. So, as I said at the start of the session, we talked about big problems we want to solve. I think most of the people on the IBM Quantum team come to work every day because they want to serve this single mission.

    That is: how do we bring useful quantum computing to the world and, at the same time, make the world quantum safe? We have a lot of science to do, so we're going to take a coffee break outside. You will get to see many of the things that were talked about today, and we'll show you some of the software through a demonstration.

    There's a lot more sessions later on, so please join me in thanking everyone that talked. Thank you.
