Quantum Apocalypse Becomes Real

Bogeyman 

Experienced member
Professional
Messages
9,192
Reactions
67 31,256
Website
twitter.com
Nation of residence
Turkey
Nation of origin
Turkey

IBM Quantum breaks the 100-qubit processor barrier


Today, IBM Quantum unveiled Eagle, a 127-qubit quantum processor. Eagle is leading quantum computers into a new era — we’ve launched a quantum processor that has pushed us beyond the 100-qubit barrier. We anticipate that, with Eagle, our users will be able to explore uncharted computational territory — and experience a key milestone on the path towards practical quantum computation.


We view Eagle as a step in a technological revolution in the history of computation. As quantum processors scale up, each additional qubit doubles the amount of space complexity — the amount of memory space required to execute algorithms — for a classical computer to reliably simulate quantum circuits. We hope to see quantum computers bring real-world benefits across fields as this increase in space complexity moves us into a realm beyond the abilities of classical computers. While this revolution plays out, we hope to continue sharing our best quantum hardware with the community early and often. This approach allows IBM and our users to work together to understand how best to explore and develop on these systems to achieve quantum advantage as soon as possible.
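The doubling argument above can be made concrete with a back-of-the-envelope sketch (the 16-bytes-per-amplitude figure assumes double-precision complex numbers; this is a rough model, not IBM's analysis):

```python
# Memory needed to hold the full state vector of an n-qubit system on a
# classical computer: 2**n complex amplitudes at ~16 bytes each
# (two 64-bit floats per amplitude).
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

# Each additional qubit doubles the requirement.
assert statevector_bytes(28) == 2 * statevector_bytes(27)

for n in (27, 50, 127):
    print(f"{n:>3} qubits -> {statevector_bytes(n) / 2**30:.3e} GiB")
```

A 27-qubit Falcon state vector fits in about 2 GiB, while a 127-qubit state vector would demand astronomically more memory than any classical machine can offer.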

Constructing a processor that breaks the hundred-qubit barrier wasn’t something we could do overnight. For decades, scientists have theorized that a computer based on the same mathematics followed by subatomic particles — quantum mechanics — could outperform classical computers at simulating nature. However, constructing one of these devices is an enormous challenge. Qubits can decohere — or forget their quantum information — with even the slightest nudge from the outside world. Producing Eagle on our short timeline was possible in part thanks to IBM’s legacy of pioneering new science and investing in core hardware technology, including processes for reliable semiconductor manufacturing and packaging, and bringing nascent products to market.

IBM’s roadmap for scaling quantum technology


Our quantum roadmap is leading to increasingly larger and better chips, with a 1,000-qubit chip, IBM Quantum Condor, targeted for the end of 2023.

Back in 1969, humans overcame unprecedented technological hurdles to make history: we put two of our own on the Moon and returned them safely. Today’s computers are capable, but assuredly earthbound when it comes to accurately capturing the finest details of our universe. Building a device that truly captures the behavior of atoms—and can harness these behaviors to solve some of the most challenging problems of our time—might seem impossible if you limit your thinking to the computational world you know. But like the Moon landing, we have an ultimate objective to access a realm beyond what’s possible on classical computers: we want to build a large-scale quantum computer. The future’s quantum computer will pick up the slack where classical computers falter, controlling the behavior of atoms in order to run revolutionary applications across industries, generating world-changing materials or transforming the way we do business.

Today, we are releasing the roadmap that we think will take us from the noisy, small-scale devices of today to the million-plus qubit devices of the future. Our team is developing a suite of scalable, increasingly larger and better processors, with a 1,000-plus qubit device, called IBM Quantum Condor, targeted for the end of 2023. In order to house even more massive devices beyond Condor, we’re developing a dilution refrigerator larger than any currently available commercially. This roadmap puts us on a course toward the future’s million-plus qubit processors thanks to industry-leading knowledge, multidisciplinary teams, and agile methodology improving every element of these systems. All the while, our hardware roadmap sits at the heart of a larger mission: to design a full-stack quantum computer deployed via the cloud that anyone around the world can program.

Figure 1:
Members of the IBM Quantum team at work investigating how to control increasingly large systems of qubits for long enough, and with few enough errors, to run the complex calculations required by future quantum applications.

The IBM Quantum team builds quantum processors—computer processors that rely on the mathematics of elementary particles in order to expand our computational capabilities, running quantum circuits rather than the logic circuits of digital computers. We represent data using the electronic quantum states of artificial atoms known as superconducting transmon qubits, which are connected and manipulated by sequences of microwave pulses in order to run these circuits. But qubits quickly forget their quantum states due to interaction with the outside world. The biggest challenge facing our team today is figuring out how to control large systems of these qubits for long enough, and with few enough errors, to run the complex quantum circuits required by future quantum applications.

IBM has been exploring superconducting qubits since the mid-2000s, increasing coherence times and decreasing errors to enable multi-qubit devices in the early 2010s. Continued refinements and advances at every level of the system from the qubits to the compiler allowed us to put the first quantum computer in the cloud in 2016. We are proud of our work. Today, we maintain more than two dozen stable systems on the IBM Cloud for our clients and the general public to experiment on, including our 5-qubit IBM Quantum Canary processors and our 27-qubit IBM Quantum Falcon processors—on one of which we recently ran a long enough quantum circuit to declare a Quantum Volume of 64. This achievement wasn’t a matter of building more qubits; instead, we incorporated improvements to the compiler, refined the calibration of the two-qubit gates, and issued upgrades to the noise handling and readout based on tweaks to the microwave pulses. Underlying all of that is hardware with world-leading device metrics fabricated with unique processes to allow for reliable yield.
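For context, Quantum Volume is reported as a power of two: a Quantum Volume of 2**n means the system reliably runs "square" model circuits of n qubits and depth n. A small sketch of that bookkeeping:

```python
import math

# Quantum Volume is reported as 2**n, where n is the width and depth of the
# largest square random circuit the system passes the benchmark with.
def qv_circuit_size(quantum_volume: int) -> int:
    n = int(math.log2(quantum_volume))
    if 2 ** n != quantum_volume:
        raise ValueError("Quantum Volume must be a power of two")
    return n

print(qv_circuit_size(64))  # prints 6: QV 64 means 6-qubit, depth-6 circuits
```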

In parallel with our efforts to improve our smaller devices, we are also incorporating the many lessons learned into an aggressive roadmap for scaling to larger systems. In fact, this month we quietly released our 65-qubit IBM Quantum Hummingbird processor to our IBM Q Network members. This device features 8:1 readout multiplexing, meaning we combine readout signals from eight qubits into one, reducing the total amount of wiring and components required for readout and improving our ability to scale, while preserving all of the high performance features from the Falcon generation of processors. We have significantly reduced the signal processing latency time in the associated control system in preparation for upcoming feedback and feed-forward system capabilities, where we’ll be able to control qubits based on classical conditions while the quantum circuit runs.

Next year, we’ll debut our 127-qubit IBM Quantum Eagle processor. Eagle features several upgrades in order to surpass the 100-qubit milestone: crucially, through-silicon vias (TSVs) and multi-level wiring provide the ability to effectively fan-out a large density of classical control signals while protecting the qubits in a separated layer in order to maintain high coherence times. Meanwhile, we’ve struck a delicate balance of connectivity and reduction of crosstalk error with our fixed-frequency approach to two-qubit gates and hexagonal qubit arrangement introduced by Falcon. This qubit layout will allow us to implement the “heavy-hexagonal” error-correcting code that our team debuted last year, so as we scale up the number of physical qubits, we will also be able to explore how they’ll work together as error-corrected logical qubits—every processor we design has fault tolerance considerations taken into account.

With the Eagle processor, we will also introduce concurrent real-time classical compute capabilities that will allow for execution of a broader family of quantum circuits and codes.

The design principles established for our smaller processors will set us on a course to release a 433-qubit IBM Quantum Osprey system in 2022. More efficient and denser controls and cryogenic infrastructure will ensure that scaling up our processors doesn’t sacrifice the performance of our individual qubits, introduce further sources of noise, or take up too large a footprint.

In 2023, we will debut the 1,121-qubit IBM Quantum Condor processor, incorporating the lessons learned from previous processors while continuing to lower the critical two-qubit errors so that we can run longer quantum circuits. We think of Condor as an inflection point, a milestone that marks our ability to implement error correction and scale up our devices, while being complex enough to explore potential Quantum Advantages—problems that we can solve more efficiently on a quantum computer than on the world’s best supercomputers.


The development required to build Condor will have solved some of the most pressing challenges in the way of scaling up a quantum computer. However, as we explore realms even further beyond the thousand qubit mark, today’s commercial dilution refrigerators will no longer be capable of effectively cooling and isolating such potentially large, complex devices.

That’s why we’re also introducing a 10-foot-tall and 6-foot-wide “super-fridge,” internally codenamed “Goldeneye,” a dilution refrigerator larger than any commercially available today. Our team has designed this behemoth with a million-qubit system in mind—and has already begun fundamental feasibility tests. Ultimately, we envision a future where quantum interconnects link dilution refrigerators each holding a million qubits, much as an intranet links supercomputing processors, creating a massively parallel quantum computer capable of changing the world.

Knowing the way forward doesn’t remove the obstacles; we face some of the biggest challenges in the history of technological progress. But, with our clear vision, a fault-tolerant quantum computer now feels like an achievable goal within the coming decade.



 

Bogeyman 


Introducing Quantum Serverless, a new programming model for leveraging quantum and classical resources


To bring value to our users and clients with our systems, we need our programming model to fit seamlessly into their workflows, where they can focus on their code and not have to worry about deployment and infrastructure. In other words, we need a serverless architecture.


The rate of progress in any field is often dominated by iteration times, or how long it takes to try a new idea in order to discover whether it works. Long iteration times encourage careful behavior and incremental advances, because the cost of making a mistake is high. Fast iterations, meanwhile, unlock the ability to experiment with new ideas and break out of old ways of doing things. Accelerating progress therefore relies on increasing the speed at which we can iterate. It is time to bring a flexible platform that enables fast iteration to quantum computing.

At this year’s Quantum Summit, we debuted our 127-qubit Eagle quantum processor, and showed concepts for IBM Quantum System Two. Atop these crucial advances, we must prepare quantum computing to tackle some of the world’s most difficult challenges in energy, materials and chemistry, finance, and elsewhere — and perhaps areas that we haven’t considered yet. In order to apply quantum to real-world problems, though, we need to enter the realm of quantum advantage, where quantum computers are either cheaper, faster, or more accurate than classical computers at the same relevant task.

Integrating quantum into real-world workflows will take advancements across the stack. We need to think holistically about quantum performance, including the scale, quality, and speed of our processors. To bring about quantum advantage faster, we are working to introduce new capabilities that take advantage of today’s classical computing resources. Finally, we need to ensure that our users can take advantage of quantum resources at scale without having to worry about the intricacies of the hardware — we call this frictionless development — which we hope to achieve with a serverless execution model.

Turbocharging quantum performance

Earlier in 2021, we began efforts to remove bottlenecks in the execution of actual use cases of quantum hardware through Qiskit Runtime. Qiskit Runtime provides a containerized execution environment for classical code that has low-latency access to quantum hardware. This enables a wide variety of workloads that make iterative or repeated use of quantum hardware to execute dramatically faster than was previously possible. In fact, with Qiskit Runtime we were able to show a 120x speed-up on a variational quantum eigensolver algorithm compared to our previous circuit API model.
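The source of such speed-ups is easy to model: an iterative algorithm such as a variational quantum eigensolver pays a classical round-trip cost on every iteration, and Qiskit Runtime shrinks that round trip by co-locating the classical code with the hardware. The numbers below are illustrative assumptions, not IBM's measurements:

```python
# Toy loop-latency model: total time = iterations * (quantum execution time
# + classical round trip). Shrinking the round trip dominates the speed-up.
def total_runtime_s(iterations: int, quantum_s: float, roundtrip_s: float) -> float:
    return iterations * (quantum_s + roundtrip_s)

remote_api = total_runtime_s(1000, quantum_s=0.1, roundtrip_s=5.0)   # circuit API over the network
colocated  = total_runtime_s(1000, quantum_s=0.1, roundtrip_s=0.01)  # Runtime-style co-location
print(f"speed-up: {remote_api / colocated:.1f}x")
```

With these made-up latencies the model yields a few-dozen-fold speed-up; the measured 120x reflects the actual latencies of IBM's workloads.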

In May, we launched Qiskit Runtime on a single IBM Quantum system, and premium users within the IBM Quantum Network could execute programs pre-built by the IBM Developer team. Now, Qiskit Runtime is enabled on all IBM Quantum systems, and users can upload and execute custom programs.

We believe so strongly in the utility of this usage model that we are elevating Qiskit Runtime to be the standard interface for our quantum systems through two building block programs — the sampler and estimator — that will serve as primitives to connect quantum computation with classical computation via Quantum Serverless.
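Conceptually, the two primitives have simple contracts: a sampler returns the outcome distribution of a circuit, and an estimator returns the expectation value of an observable. The toy below simulates both classically for a single qubit; the function names and interfaces are illustrative stand-ins, not the actual Qiskit Runtime API:

```python
import math

# A single-qubit state parameterized by a rotation angle theta.
def amplitudes(theta: float):
    return math.cos(theta / 2), math.sin(theta / 2)

def sampler(theta: float) -> dict:
    """Return the measurement quasi-distribution, like a sampler primitive."""
    a0, a1 = amplitudes(theta)
    return {0: a0 ** 2, 1: a1 ** 2}

def estimator(theta: float) -> float:
    """Return the expectation value <Z>, like an estimator primitive."""
    dist = sampler(theta)
    return dist[0] - dist[1]

print(sampler(math.pi / 2))  # equal superposition: ~{0: 0.5, 1: 0.5}
print(estimator(0.0))        # the |0> state gives <Z> = 1.0
```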

Mixing quantum with classical to bring quantum advantages faster

Qiskit Runtime enables fast execution of classical code, near the quantum hardware, and in the case of Qiskit Runtime base programs, tightly coupled with quantum systems. However, its container execution environment is somewhat constrained by the computing resources that are co-located with IBM’s quantum systems.

What if developers could access and integrate with the broader world of classical computing and services? We’ve begun thinking of the classical resources that are coupled with quantum systems as coming in two flavors:

On the one hand, we employ a flavor of classical computing that is closely involved in the generation of quantum circuits with the goal of pushing speed and full utilization of the quantum processor (i.e. Qiskit Runtime). This flavor drives the performance of our systems.

On the other hand, we also employ classical computing power to enable new capabilities in quantum computation by incorporating classical operations as part of quantum programs at the application level (cloud/HPC/GPU).

Our team has been researching methods to employ this second flavor of classical resources to allow us to explore larger problems and find more-accurate solutions with our quantum computing systems. These methods include circuit knitting, quantum embedding, and error mitigation. Circuit knitting uses classical resources to find useful cuts in the problem to produce smaller quantum circuits to run on near-term quantum devices, and then uses classical processing to combine the pieces back together to simulate a larger problem.

A recent example we demonstrated is entanglement forging, which exploits symmetry in chemistry problems to simplify the knitting. Meanwhile, quantum embedding re-frames the problem to allow classical computers to simulate those pieces that can be well-approximated classically, while looping in quantum resources for only the classically difficult parts of the problem. In the context of chemistry, this might describe an active-space calculation that runs on the QPU with a Hamiltonian iteratively updated by a classical simulation of the inactive space.
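The cut-then-recombine idea behind circuit knitting can be illustrated with a deliberately trivial case (a toy of our own, not IBM's actual procedure): when a two-qubit circuit factors into two independent one-qubit pieces, the expectation value of Z⊗Z is the product of the two single-qubit ⟨Z⟩ values, so two small circuit runs plus a classical multiplication replace one larger run:

```python
import math

def expect_z(theta: float) -> float:
    # <Z> for a qubit rotated by theta equals cos(theta).
    return math.cos(theta / 2) ** 2 - math.sin(theta / 2) ** 2

def knitted_zz(theta_a: float, theta_b: float) -> float:
    # "Cut": run two one-qubit subcircuits; "knit": recombine classically.
    return expect_z(theta_a) * expect_z(theta_b)

print(knitted_zz(0.0, math.pi))  # (+1) * (-1) = -1.0
```

Real circuits don't factor this neatly, which is why practical knitting needs many cut configurations and a more careful classical recombination.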

Finally, error mitigation uses classical post-processing in order to reduce the impact of some classes of errors and get a more-accurate quantum solution. We hope that Quantum + Classical will allow us to realize quantum advantage in certain applications sooner than expected.
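One common post-processing scheme of this kind is zero-noise extrapolation: run the circuit at deliberately amplified noise levels, then extrapolate the measured values back to the zero-noise limit. A minimal sketch with assumed measurement data:

```python
# Fit a least-squares line through (noise_scale, measured_value) pairs and
# evaluate it at noise_scale = 0, giving the mitigated estimate.
def extrapolate_to_zero_noise(points) -> float:
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - slope * sx) / n  # the fitted line's intercept

# Assumed data: the ideal value is 1.0 and noise pulls the measurement down.
measured = [(1.0, 0.90), (2.0, 0.80), (3.0, 0.70)]
print(extrapolate_to_zero_noise(measured))  # extrapolates back to ~1.0
```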

Realizing frictionless development via a serverless architecture


A serverless architecture incorporates four key attributes:

  1. A developer focuses on coding only, with no need to manage infrastructure.
  2. Everything is a cloud service.
  3. The service requires no capacity or life-cycle management and scales seamlessly.
  4. Users pay only for consumption, never for idle time.

IBM Cloud Code Engine will allow us to establish a serverless programming and operations model. Today, we are giving a glimpse of the future with a simple demonstration of how we can use IBM Cloud Code Engine with Qiskit Runtime as the quantum system interface to allow users to seamlessly use CPUs, GPUs and QPUs as part of a single application.


What would this workflow look like for a complex quantum problem?

First, a user submits an application incorporating both classical and quantum code to the IBM Cloud. IBM Cloud Code Engine delegates classical logic to scalable classical computing resources, and quantum to Qiskit Runtime, communicating with classical compute resources when necessary to prepare circuits and perform post-processing. This architecture allows quantum and classical to work in parallel: quantum to call classical and classical to call quantum. Doing this will enable capabilities such as circuit knitting, error mitigation, and circuit embedding to become part of the standard development workflow.
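The flow just described can be sketched as plain Python. Every function here is a hypothetical stand-in (the quantum step is stubbed with arithmetic), not the Code Engine or Qiskit Runtime API:

```python
def classical_preprocess(problem):
    # Scalable classical resources: turn the problem into circuit descriptions.
    return [{"angle": x} for x in problem]

def quantum_execute(circuit):
    # Stand-in for a Qiskit Runtime call to a QPU.
    return 1.0 - circuit["angle"] ** 2 / 2

def classical_postprocess(results):
    # Classical again: combine the per-circuit results.
    return sum(results) / len(results)

def run_application(problem):
    circuits = classical_preprocess(problem)            # classical
    results = [quantum_execute(c) for c in circuits]    # quantum
    return classical_postprocess(results)               # classical

print(run_application([0.0, 0.2, 0.4]))
```

The point of the serverless model is that the user writes only this kind of application logic; the platform decides where each stage actually runs.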

By rethinking the way that quantum programs run, we hope that users will be able to explore even more complex quantum computations, and more easily realize quantum advantage. Most importantly, it will be easier than ever for users to explore quantum processing as part of their computing workflow. We have shown a proof of concept for Quantum Serverless, and this is how we see the future programming of quantum computing evolving.

We look forward to bringing this flexible serverless architecture to our users soon. We’re glad to have you on this journey as we work to make quantum computing as accessible as classical computing to organizations around the world.
 

Zafer

Experienced member
Messages
4,683
Reactions
7 7,389
Nation of residence
Turkey
Nation of origin
Turkey
We should have our hands in quantum computing, as well as in optical and transistor-based computing.
I believe this can happen once we have become energy positive through our hydrocarbon and renewables investments and have some extra cash to invest in these technologies.
 
