We derive an attainable bound on the precision of quantum state estimation for finite dimensional systems, providing a construction for the asymptotically optimal measurement. Our results hold under an assumption called local asymptotic covariance, which is weaker than unbiasedness or local unbiasedness. The derivation is based on an analysis of the limiting distribution of the estimator's deviation from the true value of the parameter, and takes advantage of quantum local asymptotic normality, a duality between sequences of identically prepared states and Gaussian states of continuous variable systems. We first prove our results for the mean square error of a special class of models, called D-invariant, and then extend the results to arbitrary models, generic cost functions, and global state estimation, where the unknown parameter is not restricted to a local neighbourhood of the true value. The extension includes a treatment of nuisance parameters, namely parameters that are not of interest to the experimenter but nevertheless affect the estimation. As an illustration of the general approach, we provide the optimal estimation strategies for the joint measurement of two qubit observables, for the estimation of qubit states in the presence of amplitude damping noise, and for noisy multiphase estimation.
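As a minimal numerical companion to the precision bounds discussed above (an illustrative sketch, not the paper's construction): for a pure state undergoing a phase shift generated by a Hamiltonian $H$, the quantum Fisher information is $F_Q = 4(\langle H^2\rangle - \langle H\rangle^2)$, and the quantum Cramér-Rao bound on the variance of any locally unbiased estimator reads $1/(nF_Q)$ for $n$ copies.

```python
import numpy as np

# Pure-state quantum Fisher information F_Q = 4 (<H^2> - <H>^2)
# for a phase shift exp(-i theta H). The quantum Cramer-Rao bound
# then reads Var(theta_hat) >= 1 / (n F_Q) for n independent copies.
def qfi_pure(psi, H):
    psi = psi / np.linalg.norm(psi)
    mean_H = np.vdot(psi, H @ psi).real
    mean_H2 = np.vdot(psi, H @ (H @ psi)).real
    return 4.0 * (mean_H2 - mean_H**2)

sz = np.diag([0.5, -0.5])                 # generator H = sigma_z / 2
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # equatorial state: maximal QFI
F = qfi_pure(plus, sz)                    # close to 1, the maximum here
print(F, 1.0 / (100 * F))                 # QFI and CR bound for n = 100
```

An eigenstate of the generator, by contrast, has zero quantum Fisher information and carries no phase information at all.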
Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beamsplitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.
In quantum Shannon theory, the way information is encoded and decoded takes advantage of the laws of quantum mechanics, while the way communication channels are interlinked is assumed to be classical. In this Letter we relax the assumption that quantum channels are combined classically, showing that a quantum communication network where quantum channels are combined in a superposition of different orders can achieve tasks that are impossible in conventional quantum Shannon theory. In particular, we show that two identical copies of a completely depolarizing channel become able to transmit information when they are combined in a quantum superposition of two alternative orders. This finding runs counter to the intuition that if two communication channels are identical, using them in different orders should not make any difference. The failure of such intuition stems from the fact that a single noisy channel can be a random mixture of elementary, non-commuting processes, whose order (or lack thereof) can affect the ability to transmit information.
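The claim about the two depolarizing channels can be checked with a short numerical calculation. The sketch below (an illustrative numpy computation, not code from the Letter) builds the Kraus operators of the switched channel with the control qubit in $|+\rangle$: conditioning on finding the control in $|-\rangle$ leaves an output proportional to $(2I-\rho)/3$, which still depends on the input $\rho$ even though each depolarizing channel alone erases it.

```python
import numpy as np

# Two completely depolarizing qubit channels, Kraus operators E_i = sigma_i / 2,
# combined in a superposition of the two orders (quantum switch), control in |+>.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
E = [s / 2 for s in (I2, sx, sy, sz)]

def switch_branches(rho):
    """Unnormalized system outputs after measuring the control in |+>, |->."""
    plus = np.zeros_like(rho)
    minus = np.zeros_like(rho)
    for Ei in E:
        for Fj in E:
            A = Ei @ Fj              # order F-then-E (control |0>)
            B = Fj @ Ei              # order E-then-F (control |1>)
            K_plus = (A + B) / 2     # effective Kraus ops after projecting
            K_minus = (A - B) / 2    # the control onto |+> and |->
            plus += K_plus @ rho @ K_plus.conj().T
            minus += K_minus @ rho @ K_minus.conj().T
    return plus, minus

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # input |0><0|
p_out, m_out = switch_branches(rho)
# The |-> branch, normalized, equals (2I - rho)/3: it depends on rho.
print(np.round((m_out / np.trace(m_out).real).real, 4))
```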
Quantum mechanics, in principle, allows for processes with indefinite causal order. However, most of these causal anomalies have not yet been detected experimentally. We show that every such process can be simulated experimentally by means of non-Markovian dynamics with a measurement on additional degrees of freedom. Explicitly, we provide a constructive scheme to implement arbitrary acausal processes. Furthermore, we give necessary and sufficient conditions for open system dynamics with measurement to yield processes that respect causality locally, and find that tripartite entanglement and nonlocal unitary transformations are crucial requirements for the simulation of causally indefinite processes. These results show a direct connection between three counter-intuitive concepts: non-Markovianity, entanglement, and causal indefiniteness.
Entanglement in angular momentum degrees of freedom is a precious resource for quantum metrology and control. Here we study the conversions of this resource, focusing on Bell pairs of spin-J particles, where one particle is used to probe unknown rotations and the other particle is used as a reference. When a large number of pairs are given, we show that every rotated spin-J Bell state can be reversibly converted into an equivalent number of rotated spin one-half Bell states, at a rate determined by the quantum Fisher information. This result provides the foundation for the definition of an elementary unit of information about rotations in space, which we call the Cartesian refbit. In the finite-copy scenario, we design machines that approximately break down Bell states of higher spins into Cartesian refbits, as well as machines that approximately implement the inverse process. In addition, we establish a quantitative link between the conversion of Bell states and the simulation of unitary gates, showing that the fidelity of probabilistic state conversion provides upper and lower bounds on the fidelity of deterministic gate simulation. The result holds not only for rotation gates, but also for all sets of gates that form finite-dimensional representations of compact groups. For rotation gates, we show how rotations on a system of given spin can simulate rotations on a system of different spin.
Controlling quantum systems is crucial for quantum computation and a variety of new quantum technologies. The control is typically achieved by breaking down the target dynamics into a sequence of elementary gates, whose description can be stored in the memory of a classical computer. Here we explore a different approach, initiated by Nielsen and Chuang, where the target dynamics is encoded in the state of a quantum system, regarded as a "quantum program". We show that quantum strategies based on coherent interactions between the quantum program and the target system offer an advantage over all classical strategies that measure the program and conditionally operate on the system. To certify the advantage, we provide a benchmark that guarantees the successful demonstration of quantum-enhanced programming in realistic experiments.
Quantum mechanics imposes a fundamental tradeoff between the accuracy of time measurements and the size of the systems used as clocks. When the measurements of different time intervals are combined, the errors due to the finite clock size accumulate, resulting in an overall inaccuracy that grows with the complexity of the setup. Here we introduce a method that eludes the accumulation of errors, by coherently transferring information from a quantum clock to a quantum memory of the smallest possible size. Our method can be used to measure the total duration of a sequence of events with enhanced accuracy, and to reduce the amount of quantum communication needed to stabilize clocks in a quantum network.
The present short review article illustrates the latest theoretical developments on quantum tomography, regarding general optimization methods for both data-processing and setup. The basic theoretical tool is the informationally complete measurement. The optimization theory for the setup is based on the new theoretical approach of quantum combs.
Quantum-limited amplifiers increase the amplitude of quantum signals at the price of introducing additional noise. Quantum purification protocols operate in the reverse way, reducing the noise while attenuating the signal. Here we investigate a scenario that interpolates between these two extremes. We search for the optimal physical process that generates $M$ approximate copies of a pure, amplified coherent state, starting from $N$ copies of a noisy coherent state with Gaussian modulation. We prove that the optimal deterministic processes are always Gaussian, whereas non-Gaussian processes offer an advantage in probabilistic protocols in suitable parameter regimes. The optimal processes are experimentally feasible, both in the deterministic and in the probabilistic scenario. In view of this fact, we provide benchmarks that can be used to certify the experimental demonstration of the quantum-enhanced amplification and purification of coherent states.
We study the compression of $n$ quantum systems, each prepared in the same state belonging to a given parametric family of quantum states. For a family of states with $f$ independent parameters, we devise an asymptotically faithful protocol that requires a hybrid memory of size $(f/2)\log n$, including both quantum and classical bits. Our construction uses a quantum version of local asymptotic normality and, as an intermediate step, solves the problem of compressing displaced thermal states of $n$ identically prepared modes. In both cases, we show that $(f/2)\log n$ is the minimum amount of memory needed to achieve asymptotic faithfulness. In addition, we analyze how much of the memory needs to be quantum. We find that the ratio between quantum and classical bits can be made arbitrarily small, but cannot reach zero: unless all the quantum states in the family commute, no protocol using only classical bits can be faithful, even if it uses an arbitrarily large number of classical bits.
Microcanonical thermodynamics studies the operations that can be performed on systems with well-defined energy. So far, this approach has been applied to classical and quantum systems. Here we extend it to arbitrary physical theories, proposing two requirements for the development of a general microcanonical framework. We then formulate three resource theories, corresponding to three different sets of basic operations: i) random reversible operations, resulting from reversible dynamics with fluctuating parameters, ii) noisy operations, generated by the interaction with ancillas in the microcanonical state, and iii) unital operations, defined as the operations that preserve the microcanonical state. We focus our attention on a class of physical theories, called sharp theories with purification, where these three sets of operations exhibit remarkable properties. Firstly, each set is contained in the next. Secondly, the convertibility of states by unital operations is completely characterised by a majorisation criterion. Thirdly, the three sets are equivalent in terms of state convertibility if and only if the dynamics allowed by the theory satisfy a suitable condition, which we call unrestricted reversibility. Under this condition, we derive a duality between the resource theory of microcanonical thermodynamics and the resource theory of pure bipartite entanglement.
We propose four information-theoretic axioms for the foundations of statistical mechanics in general physical theories. The axioms---Causality, Purity Preservation, Pure Sharpness, and Purification---identify a class of theories where every mixed state can be modelled as the marginal of a pure entangled state and where every unsharp measurement can be modelled as a sharp measurement on a composite system. This class of theories, called sharp theories with purification, includes quantum theory both with complex and real amplitudes, as well as a suitable extension of classical probability theory where classical systems can be entangled with other, non-classical systems. Theories satisfying our axioms support well-behaved notions of majorization, entropy, and Gibbs states, allowing for an information-theoretic derivation of Landauer's principle. We conjecture that every theory admitting a sensible thermodynamics must be extendable to a sharp theory with purification.
We establish the ultimate limits to the compression of sequences of identically prepared qubits. The limits are determined by Holevo's information quantity and are attained through use of the optimal universal cloning machine, which finds here a novel application to quantum Shannon theory.
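For concreteness, the Holevo quantity that sets the compression limit can be evaluated directly for any finite ensemble. The sketch below (an illustrative example with made-up states, not taken from the paper) computes $\chi = S(\bar\rho) - \sum_i p_i S(\rho_i)$, where the correction term vanishes for pure-state ensembles.

```python
import numpy as np

# Holevo quantity chi = S(rho_avg) - sum_i p_i S(rho_i) of an ensemble.
def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop zero eigenvalues: 0 log 0 = 0
    return float(-(evals * np.log2(evals)).sum())

def holevo_chi(probs, states):
    rho_avg = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(rho_avg) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# Hypothetical ensemble: |0> and |+> with equal probability.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
states = [np.outer(k, k.conj()) for k in (ket0, ketp)]
chi = holevo_chi([0.5, 0.5], states)
print(chi)   # strictly below 1 bit, because the two states overlap
```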
We develop a semidefinite programming method for the optimization of quantum networks, including both causal networks and networks with indefinite causal structure. Our method applies to a broad class of performance measures, defined operationally in terms of interactive tests set up by a verifier. We show that the optimal performance is equal to a max relative entropy, which quantifies the informativeness of the test. Building on this result, we extend the notion of conditional min-entropy from quantum states to quantum causal networks. The optimization method is illustrated in a number of applications, including the inversion, charge conjugation, and controlization of an unknown unitary dynamics. In the non-causal setting, we show a proof-of-principle application to the maximization of the winning probability in a non-causal quantum game.
Sudden changes are ubiquitous in nature. Identifying them is of crucial importance for a number of applications in medicine, biology, geophysics, and social sciences. Here we investigate the problem in the quantum domain, considering a source that emits particles in a default state, until a point where it switches to another state. Given a sequence of particles emitted by the source, the problem is to find out where the change occurred. For large sequences, we obtain an analytical expression for the maximum probability of correctly identifying the change point when joint measurements on the whole sequence are allowed. We also construct strategies that measure the particles individually and provide an online answer as soon as a new particle is emitted by the source. We show that these strategies substantially underperform the optimal strategy, indicating that quantum sudden changes, although happening locally, are better detected globally.
In this work we present a general mathematical framework to deal with quantum networks, i.e. networks resulting from the interconnection of elementary quantum circuits. The cornerstone of our approach is a generalization of the Choi isomorphism that allows one to efficiently represent any given quantum network in terms of a single positive operator. Our formalism allows one to formulate and solve many quantum information processing problems that would otherwise be hardly manageable, the most relevant of which are reviewed in this work: quantum process tomography, quantum cloning and learning of transformations, inversion of a unitary gate, information-disturbance tradeoff in estimating a unitary transformation, cloning and learning of a measurement device.
While the no-cloning theorem forbids the perfect replication of quantum information, it is sometimes possible to produce large numbers of replicas with vanishingly small error. This phenomenon, known as quantum superreplication, can take place both for quantum states and quantum gates. The aim of this paper is to review the central features of quantum superreplication, providing a unified view on the existing results. The paper also includes new results. In particular, we show that, when quantum superreplication can be achieved, it can be achieved through estimation, up to an error vanishing with a power law. Quantum strategies still offer an advantage for superreplication, in that they allow for an exponentially faster reduction of the error. Using the relation with estimation, we provide i) an alternative proof of the optimality of the Heisenberg scaling of quantum metrology, ii) a strategy to estimate arbitrary unitary gates with Heisenberg scaling, up to a logarithmic overhead, and iii) a protocol that generates M nearly perfect copies of a generic pure state with a number of queries to the corresponding unitary gate scaling as the square root of M. Finally, we point out that superreplication can be achieved using interactions among k systems, provided that k is large compared to the square of the ratio between the numbers of input and output copies.
The existence of incompatible measurements, epitomized by Heisenberg's uncertainty principle, is one of the distinctive features of quantum theory. So far, quantum incompatibility has been studied for measurements that test the preparation of physical systems. Here we extend the notion to measurements that test dynamical processes, possibly consisting of multiple time steps. Such measurements are known as testers and are implemented by interacting with the tested process through a sequence of state preparations, interactions, and measurements. Our first result is a characterization of the incompatibility of quantum testers, for which we provide necessary and sufficient conditions. Then, we propose a quantitative measure of incompatibility. We call this measure the robustness of incompatibility and define it as the minimum amount of noise that has to be added to a set of testers in order to make them compatible. We show that (i) the robustness is lower bounded by the distinguishability of the sequence of interactions used by the tester and (ii) maximum robustness is attained when the interactions are perfectly distinguishable. The general results are illustrated in the concrete example of binary testers probing the time-evolution of a single-photon polarization.
We present one-shot compression protocols that optimally encode ensembles of $N$ identically prepared mixed states into $O(\log N)$ qubits. In contrast to the case of pure-state ensembles, we find that the number of encoding qubits drops down discontinuously as soon as a nonzero error is tolerated and the spectrum of the states is known with sufficient precision. For qubit ensembles, this feature leads to a 25% saving of memory space. Our compression protocols can be implemented efficiently on a quantum computer.
Quantum theory was discovered in an adventurous way, under the urge to solve puzzles, like the spectrum of blackbody radiation, that haunted the physics community at the beginning of the 20th century. It soon became clear, though, that quantum theory was not just a theory of specific physical systems, but rather a new language of universal applicability. Can this language be reconstructed from first principles? Can we arrive at it from logical reasoning, instead of ad hoc guesswork? A positive answer was provided in Refs. [1, 2], where we put forward six principles that identify quantum theory uniquely in a broad class of theories. We first defined a class of "theories of information", constructed as extensions of probability theory in which events can be connected into networks. In this framework, we formulated the six principles as rules governing the control and the accessibility of information. Directly from these rules, we reconstructed a number of quantum information features, and eventually, the whole Hilbert space framework. In short, our principles characterize quantum theory as the theory of information that allows for maximal control of randomness.
In quantum theory every state can be diagonalized, i.e. decomposed as a convex combination of perfectly distinguishable pure states. This elementary structure plays a ubiquitous role in quantum mechanics, quantum information theory, and quantum statistical mechanics, where it provides the foundation for the notions of majorization and entropy. A natural question then arises: can we reconstruct these notions from purely operational axioms? We address this question in the framework of general probabilistic theories, presenting a set of axioms that guarantee that every state can be diagonalized. The first axiom is Causality, which ensures that the marginal of a bipartite state is well defined. Then, Purity Preservation states that the set of pure transformations is closed under composition. The third axiom is Purification, which allows one to assign a pure state to the composition of a system with its environment. Finally, we introduce the axiom of Pure Sharpness, stating that for every system there exists at least one pure effect occurring with unit probability on some state. For theories satisfying our four axioms, we show a constructive algorithm for diagonalizing every given state. The diagonalization result allows us to formulate a majorization criterion that captures the convertibility of states in the operational resource theory of purity, where random reversible transformations are regarded as free operations.
Entanglement is one of the most striking features of quantum mechanics, and yet it is not specifically quantum. More specific to quantum mechanics is the connection between entanglement and thermodynamics, which leads to an identification between entropies and measures of pure state entanglement. Here we search for the roots of this connection, investigating the relation between entanglement and thermodynamics in the framework of general probabilistic theories. We first address the question whether an entangled state can be transformed into another by means of local operations and classical communication. Under two operational requirements, we prove a general version of the Lo-Popescu theorem, which lies at the foundations of the theory of pure-state entanglement. We then consider a resource theory of purity where free operations are random reversible transformations, modelling the scenario where an agent has limited control over the dynamics of a closed system. Our key result is a duality between the resource theory of entanglement and the resource theory of purity, valid for every physical theory where all processes arise from pure states and reversible interactions at the fundamental level. As an application of the main result, we establish a one-to-one correspondence between entropies and measures of pure bipartite entanglement and exploit it to define entanglement measures in the general probabilistic framework. In addition, we show a duality between the task of information erasure and the task of entanglement generation, whereby the existence of entropy sinks (systems that can absorb arbitrary amounts of information) becomes equivalent to the existence of entanglement sources (correlated systems from which arbitrary amounts of entanglement can be extracted).
Characterizing quantum correlations in terms of information-theoretic principles is a popular chapter of quantum foundations. Traditionally, the principles adopted for this scope have been expressed in terms of conditional probability distributions, specifying the probability that a black box produces a certain output upon receiving a certain input. This framework is known as "device-independent". Another major chapter of quantum foundations is the information-theoretic characterization of quantum theory, with its sets of states and measurements, and with its allowed dynamics. The different frameworks adopted for this scope are known under the umbrella term "general probabilistic theories". With only a few exceptions, the two programmes on characterizing quantum correlations and characterizing quantum theory have so far proceeded on separate tracks, each one developing its own methods and its own agenda. This paper aims at bridging the gap, by comparing the two frameworks and illustrating how the two programmes can benefit each other.
Quantum technologies are developing powerful tools to generate and manipulate coherent superpositions of different energy levels. Envisaging a new generation of energy-efficient quantum devices, here we explore how coherence can be manipulated without exchanging energy with the surrounding environment. We start from the task of converting a coherent superposition of energy eigenstates into another. We identify the optimal energy-preserving operations, both in the deterministic and in the probabilistic scenario. We then design a recursive protocol, wherein a branching sequence of energy-preserving filters increases the probability of success while reaching maximum fidelity at each iteration. Building on the recursive protocol, we construct efficient approximations of the optimal fidelity-probability trade-off, by taking coherent superpositions of the different branches generated by probabilistic filtering. The benefits of this construction are illustrated in applications to quantum metrology, quantum cloning, coherent state amplification, and ancilla-driven computation. Finally, we extend our results to transitions where the input state is generally mixed and we apply our findings to the task of purifying quantum coherence.
This paper provides a concise summary of the framework of operational-probabilistic theories, aimed at emphasizing the interaction between category-theoretic and probabilistic structures. Within this framework, we review an operational version of the GNS construction, expressed by the so-called purification principle, which under mild hypotheses leads to an operational version of Stinespring's theorem.
Quantum states obey an asymptotic no-cloning theorem, stating that no deterministic machine can reliably replicate generic sequences of identically prepared pure states. In stark contrast, we show that generic sequences of unitary gates can be replicated deterministically at nearly quadratic rates, with an error vanishing on most inputs except for an exponentially small fraction. The result is not in contradiction with the no-cloning theorem, since the impossibility of deterministically transforming pure states into unitary gates prevents the application of the gate replication protocol to states. In addition to gate replication, we show that $N$ parallel uses of a completely unknown unitary gate can be compressed into a single gate acting on $O(\log N)$ qubits, leading to an exponential reduction of the amount of quantum communication needed to implement the gate remotely.
Quantum particles with spin are the most elementary gyroscopes existing in nature. Can two such gyroscopes help two distant observers find out their relative orientation in space? Here we show that a single pair of gyroscopes in an EPR state gives little clue about the relative orientation, but when two or more EPR pairs are used in parallel, suddenly a common reference frame emerges, with an error that drops quickly with the size of the system, beating the best classical scaling already for a small number of copies. This activation phenomenon indicates the presence of a latent resource hidden in EPR correlations, which can be unlocked and turned into an advantage when multiple copies are available.
We propose a notion of state distinguishability that does not refer to probabilities, but rather to the ability of a set of states to serve as programs for a desired set of gates. Using this notion, we reconstruct the structural features of the task of state discrimination, such as the equivalence with cloning and the impossibility of extracting information from two non-distinguishable pure states without causing a disturbance. All these features express intrinsic links among operational tasks, which are valid independently of the particular theory under consideration.
We review a recent approach to the foundations of quantum mechanics inspired by quantum information theory. The approach is based on a general framework, which allows one to address a large class of physical theories which share basic information-theoretic features. We first illustrate two very primitive features, expressed by the axioms of causality and purity-preservation, which are satisfied by both classical and quantum theory. We then discuss the axiom of purification, which expresses a strong version of the Conservation of Information and captures the core of a vast number of protocols in quantum information. Purification is a highly non-classical feature and leads directly to the emergence of entanglement at the purely conceptual level, without any reference to the superposition principle. Supplemented by a few additional requirements, satisfied by classical and quantum theory, it provides a complete axiomatic characterization of quantum theory for finite dimensional systems.
Quantum technology promises revolutionary advantages in information processing and transmission compared to classical technology; however, determining which specific resources are needed to surpass the capabilities of classical machines often remains a nontrivial problem. To address such a problem, one first needs to establish the best classical solutions, which set benchmarks that must be beaten by any implementation claiming to harness quantum features for an enhanced performance. Here we introduce and develop a self-contained formalism to obtain the ultimate, generally probabilistic benchmarks for quantum information protocols including teleportation and approximate cloning, with arbitrary ensembles of input states generated by a group action, so-called Gilmore-Perelomov coherent states. This allows us to construct explicit fidelity thresholds for the transmission of multimode Gaussian and non-Gaussian states of continuous variable systems, as well as qubit and qudit pure states drawn according to nonuniform distributions on the Bloch hypersphere, which accurately model the current laboratory facilities. The performance of deterministic classical procedures such as square-root measurement strategies is further compared with the optimal probabilistic benchmarks, and the state-of-the-art performance of experimental quantum implementations against our newly derived thresholds is discussed. This work provides a comprehensive collection of directly useful criteria for the reliable certification of quantum communication technologies.
It has been recently shown that probabilistic protocols based on postselection boost the performance of phase estimation and the replication of quantum clocks. Here we demonstrate that the improvements in these two tasks have to match exactly in the macroscopic limit where the number of clones grows to infinity, preserving the equivalence between asymptotic cloning and estimation for arbitrary values of the success probability. Remarkably, the cloning fidelity depends critically on the number of rationally independent eigenvalues of the clock Hamiltonian. We also prove that probabilistic metrology can simulate cloning in the macroscopic limit for arbitrary sets of states, provided that the performance of the simulation is measured by testing small groups of clones.
Gathering data through measurements is at the basis of every experimental science. Ideally, measurements should be repeatable and, when extracting only coarse-grained data, they should allow the experimenter to retrieve the finer details at a later time. However, in practice most measurements appear to be noisy. Here we postulate that, despite the imperfections observed in real life experiments, there exists a fundamental level where all measurements are ideal. Combined with the requirement that ideal measurements remain so when coarse-grained or applied in parallel on spacelike separated systems, our postulate places a powerful constraint on the amount of nonlocality and contextuality that can be found in an arbitrary physical theory, bringing the violation of Bell and Kochen-Specker inequalities down close to its quantum value. In addition, it provides a new compelling motivation for the principles of Local Orthogonality and Consistent Exclusivity, recently proposed for the characterization of the quantum set of probability distributions.
We pose the question whether the asymptotic equivalence between quantum cloning and quantum state estimation, valid at the single-clone level, still holds when all clones are examined globally. We conjecture that the answer is affirmative and present a large amount of evidence supporting our conjecture, developing techniques to derive optimal asymptotic cloners and proving their equivalence with estimation in virtually all scenarios considered in the literature. Our analysis covers the case of arbitrary finite sets of states, arbitrary families of coherent states, arbitrary phase- and multiphase-covariant sets of states, and two-qubit maximally entangled states. In all these examples we observe that the optimal asymptotic fidelity enjoys a universality property, as its scaling does not depend on the specific details of the set of input states, but only on the number of parameters needed to specify them.
Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian single-mode states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.
We pose the question whether the asymptotic equivalence between quantum cloning and quantum state estimation, valid at the single-copy level, still holds when all the copies are examined jointly. For an N-to-M cloner, we consider the overall fidelity between the state of the M output systems and the state of M ideal copies, and we ask whether the optimal fidelity is attained by a measure-and-prepare protocol in the limit of large M. In order to gain intuition into the general problem, we analyze two concrete examples: i) cloning qubit states on the equator of the Bloch sphere and ii) cloning two-qubit maximally entangled states. In the first case, we show that the optimal measure-and-prepare fidelity converges to the fidelity of the optimal cloner in the limit of large M. In the second case, we restrict our attention to economical covariant cloners, and again, we exhibit a measure-and-prepare protocol that achieves asymptotically the optimal fidelity. Quite counterintuitively, in both cases the optimal states that have to be prepared in order to maximize the overall fidelity are not product states corresponding to M identical copies, but instead suitable M-partite entangled states: the simple protocol where one estimates the input state and re-prepares M identical copies of the estimated state is strictly suboptimal, even in the asymptotic limit.
We assess the resources needed to identify a reversible quantum gate among a finite set of alternatives, including in our analysis both deterministic and probabilistic strategies. Among the probabilistic strategies we consider unambiguous gate discrimination, where errors are not tolerated but inconclusive outcomes are allowed, and we prove that parallel strategies are sufficient to unambiguously identify the unknown gate with the minimum number of queries. This result is used to provide upper and lower bounds on the query complexity and on the minimum ancilla dimension. In addition, we introduce the notion of generalized t-designs, which includes unitary t-designs and group representations as special cases. For gates forming a generalized t-design we give an explicit expression for the maximum probability of correct gate identification and we prove that there is no gap between the performances of deterministic strategies and those of probabilistic strategies. Hence, evaluating the query complexity of perfect deterministic discrimination reduces to the easier problem of evaluating the query complexity of unambiguous discrimination. Finally, we consider discrimination strategies where the use of ancillas is forbidden, providing upper bounds on the number of additional queries needed to make up for the lack of entanglement with the ancillas.
No process in nature can perfectly clone an arbitrary quantum state. But is it possible to engineer processes that replicate quantum information with vanishingly small error? Here we demonstrate the possibility of probabilistic super-replication phenomena where N equally prepared quantum clocks are transformed into a much larger number M of nearly perfect replicas, with an error that rapidly vanishes whenever M is small compared to the square of N. The quadratic replication rate is the ultimate limit imposed by Quantum Mechanics on the proliferation of information and is fundamentally linked with the Heisenberg limit of quantum metrology.
For a set of quantum states generated by the action of a group, we consider the graph in which two group elements are adjacent whenever the corresponding states are non-orthogonal. We analyze the structure of the connected components of this graph and present two applications: to the optimal estimation of an unknown group action, and to the search for decoherence-free subspaces of quantum channels with symmetry.
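As a hypothetical illustration (our toy example; the group, state, and parameters below are our choices, not taken from the paper), consider the cyclic group Z_N acting on |ψ> = (|0> + |1>)/√2 through U = diag(1, e^{iα}). The non-orthogonality graph and its connected components can then be computed directly:

```python
# Toy sketch: non-orthogonality graph of the orbit {U^k |psi>} under Z_N,
# with U = diag(1, e^{i*alpha}) and |psi> = (|0> + |1>)/sqrt(2).
# Two elements j, k are adjacent iff <psi|U^(-j) U^k|psi> != 0,
# i.e. (1 + e^{i*alpha*(k - j)})/2 != 0.
import cmath
import math
from collections import deque

def components(N, alpha, tol=1e-9):
    """Number of connected components of the non-orthogonality graph."""
    def adjacent(j, k):
        # overlap of U^j|psi> and U^k|psi>
        return abs(1 + cmath.exp(1j * alpha * (k - j))) / 2 > tol

    seen, comps = set(), 0
    for start in range(N):
        if start in seen:
            continue
        comps += 1                      # new component found
        queue = deque([start])
        seen.add(start)
        while queue:                    # breadth-first search
            j = queue.popleft()
            for k in range(N):
                if k not in seen and adjacent(j, k):
                    seen.add(k)
                    queue.append(k)
    return comps

# U = diag(1, -1): overlaps vanish for odd k - j, so the orbit splits
# into the even and odd elements -> 2 components.
print(components(4, math.pi))
# U = diag(1, i): only antipodal pairs are orthogonal, and nearest
# neighbours keep the graph connected -> 1 component.
print(components(4, math.pi / 2))
```

The two runs show how the same group can yield either an orthogonal splitting of the orbit or a single connected component, which is the structure the abstract exploits.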
We establish the ultimate quantum limits to the amplification of an unknown coherent state, both in the deterministic and probabilistic case, investigating the realistic scenario where the expected photon number is finite. In addition, we provide the benchmark that experimental realizations have to surpass in order to beat all classical amplification strategies and to demonstrate genuine quantum amplification. Our result guarantees that a successful demonstration is in principle possible for every finite value of the expected photon number.
The paper provides a systematic characterization of quantum ergodic and mixing channels in finite dimensions and a discussion of their structural properties. In particular, we discuss ergodicity in the general case where the fixed point of the channel is not a full-rank (faithful) density matrix. Notably, we show that ergodicity is stable under randomizations, namely that every random mixture of an ergodic channel with a generic channel is still ergodic. In addition, we prove several conditions under which ergodicity can be promoted to the stronger property of mixing. Finally, exploiting a suitable correspondence between quantum channels and generators of quantum dynamical semigroups, we extend our results to the realm of continuous-time quantum evolutions, providing a characterization of ergodic Lindblad generators and showing that they are dense in the set of all possible generators.
After more than a century since its birth, Quantum Theory still eludes our understanding. If asked to describe it, we have to resort to abstract and ad hoc principles about complex Hilbert spaces. How is it possible that a fundamental physical theory cannot be described using the ordinary language of Physics? Here we offer a contribution to the problem from the angle of Quantum Information, providing a short non-technical presentation of a recent derivation of Quantum Theory from information-theoretic principles. The broad picture emerging from the principles is that Quantum Theory is the only standard theory of information compatible with the purity and reversibility of physical processes.
We investigate the optimal estimation of a quantum process that can possibly consist of multiple time steps. The estimation is implemented by a quantum network that interacts with the process by sending an input and processing the output at each time step. We formulate the search for the optimal network as a semidefinite program and use duality theory to give an alternative expression for the maximum payoff achieved by estimation. Combining this formulation with a technique devised by Mittal and Szegedy, we prove a general product rule for the joint estimation of independent processes, stating that the optimal joint estimation can be achieved by estimating each process independently, whenever the figure of merit is of a product form. We illustrate the result in several examples and exhibit counterexamples showing that the optimal joint network may not be the product of the optimal individual networks if the processes are not independent or if the figure of merit is not of the product form. In particular, we show that entanglement can reduce by a factor K the variance in the estimation of the sum of K independent phase shifts.
We introduce the study of quantum protocols that probabilistically simulate quantum channels from a sender in the future to a receiver in the past. The maximum probability of simulation is determined by causality and depends on the amount and type (classical or quantum) of information that the channel can transmit. We illustrate this dependence in several examples, including ideal classical and quantum channels, measure-and-prepare channels, partial trace channels, and universal cloning channels. For the simulation of partial trace channels, we consider generalized teleportation protocols that take N input copies of a pure state in the future and produce M < N output copies of the same state in the past. In this case, we show that the maximum probability of successful teleportation increases with the number of input copies, a feature that was impossible in classical physics. In the limit of asymptotically large N, the probability converges to the probability of simulation for an ideal classical channel. Similar results are found for universal cloning channels from N copies to M > N approximate copies, exploiting a time-reversal duality between universal cloning and partial trace.
A no-signalling channel transforming quantum systems in Alice's and Bob's laboratories is compatible with two different causal structures: (A < B), where Alice's output causally precedes Bob's input, and (B < A), where Bob's output causally precedes Alice's input. I show that a quantum superposition of circuits operating within these two causal structures enables the perfect discrimination between no-signalling channels that cannot be perfectly distinguished by any ordinary circuit.
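The mechanism behind this result can be sketched numerically with a toy calculation (ours, not taken verbatim from the paper): the quantum switch applies two unitaries A and B in a superposition of the two orders, with the control qubit prepared in |+>. Measuring the control in the {|+>, |->} basis then perfectly distinguishes a commuting pair from an anticommuting pair with a single query to each box:

```python
# Toy sketch of the quantum switch acting on a pair of 2x2 unitaries.
# Output state: (AB|psi>|0> + BA|psi>|1>)/sqrt(2), control initially |+>.
# If {A,B} = 0 the control ends in |->; if [A,B] = 0 it ends in |+>.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def kron_state(t, c):
    # target (index i) tensor control (index j) -> 4-vector, slot 2*i + j
    return [t[i] * c[j] for i in range(2) for j in range(2)]

X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def switch_output(A, B, psi):
    s = 2 ** -0.5
    ab = apply(matmul(A, B), psi)
    ba = apply(matmul(B, A), psi)
    v0 = kron_state(ab, [1, 0])   # order A-after-B, control |0>
    v1 = kron_state(ba, [0, 1])   # order B-after-A, control |1>
    return [s * (v0[k] + v1[k]) for k in range(4)]

def control_minus_prob(out):
    """Probability of finding the control in |-> = (|0> - |1>)/sqrt(2)."""
    s = 2 ** -0.5
    return sum(abs(s * (out[2 * i] - out[2 * i + 1])) ** 2 for i in range(2))

psi = [1, 0]
p_anti = control_minus_prob(switch_output(X, Z, psi))  # {X,Z} = 0 -> 1.0
p_comm = control_minus_prob(switch_output(X, X, psi))  # [X,X] = 0 -> 0.0
print(round(p_anti, 6), round(p_comm, 6))
```

The perfect discrimination rests on the identity AB = ±BA: any circuit querying each box once in a fixed order can only access one of the two products and cannot reproduce this separation deterministically.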
Quantum supermaps are higher-order maps transforming quantum operations into quantum operations. Here we extend the theory of quantum supermaps, originally formulated in the finite dimensional setting, to the case of higher-order maps transforming quantum operations with input in a separable von Neumann algebra and output in the algebra of the bounded operators on a given separable Hilbert space. In this setting we prove two dilation theorems for quantum supermaps that are the analogues of the Stinespring and Radon-Nikodym theorems for quantum operations. Finally, we consider the case of quantum superinstruments, namely measures with values in the set of quantum supermaps, and derive a dilation theorem for them that is analogous to Ozawa's theorem for quantum instruments. The three dilation theorems presented here show that all the supermaps defined in this paper can be implemented by connecting devices in quantum circuits.
This paper presents a series of general results about the optimal estimation of physical transformations in a given symmetry group. In particular, it is shown how the different symmetries of the problem determine different properties of the optimal estimation strategy. The paper also contains a discussion about the role of entanglement between the representation and multiplicity spaces and about the optimality of square-root measurements.
Quantum theory can be derived from purely informational principles. Five elementary axioms (causality, perfect distinguishability, ideal compression, local distinguishability, and pure conditioning) define a broad class of theories of information processing that can be regarded as standard. One postulate, purification, singles out quantum theory within this class. The main structures of quantum theory, such as the representation of mixed states as convex combinations of perfectly distinguishable pure states, are derived directly from the principles without using the Hilbert space framework.
This paper presents a series of results on the interplay between quantum estimation, cloning and finite de Finetti theorems. First, we consider the measure-and-prepare channel that uses optimal estimation to convert M copies into k approximate copies of an unknown pure state and we show that this channel is equal to a random loss of all but s particles followed by cloning from s to k copies. When the number k of output copies is large with respect to the number M of input copies the measure-and-prepare channel converges in diamond norm to the optimal universal cloning. In the opposite case, when M is large compared to k, the estimation becomes almost perfect and the measure-and-prepare channel converges in diamond norm to the partial trace over all but k systems. This result is then used to derive de Finetti-type results for quantum states and for symmetric broadcast channels, that is, channels that distribute quantum information to many receivers in a permutationally invariant fashion. Applications of the finite de Finetti theorem for symmetric broadcast channels include the derivation of diamond-norm bounds on the asymptotic convergence of quantum cloning to state estimation and the derivation of bounds on the amount of quantum information that can be jointly decoded by a group of k receivers at the output of a symmetric broadcast channel.
We show that a quantum clock cannot be teleported without prior synchronization between sender and receiver: every protocol using a finite amount of entanglement and an arbitrary number of rounds of classical communication will necessarily introduce an error in the teleported state of the clock. Nevertheless, we show that entanglement can be used to achieve synchronization with precision higher than any classical correlation allows, and we give the optimized strategy for this task. The same results also hold for arbitrary continuous quantum reference frames, which encode general unspeakable information: information that cannot be encoded into a number but instead requires a specific physical support, such as a clock or a gyroscope, to be conveyed.
We introduce a quantum packing bound on the minimal resources required by nondegenerate error correction codes for any kind of noise. We prove that degenerate codes can outperform nondegenerate ones in the presence of correlated noise, by exhibiting examples where the quantum packing bound is violated.
We address the problem of the information-disturbance trade-off associated with the estimation of a quantum transformation, and show how the extraction of information about a black box causes a perturbation of the corresponding input-output evolution. In the case of a black box performing a unitary transformation, randomly distributed according to the invariant measure, we give a complete solution of the problem, deriving the optimal trade-off curve and presenting an explicit construction of the optimal quantum network.
A single-party strategy in a multi-round quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here we provide the most efficient realization in terms of computational-space resources.
We show that quantum theory allows for transformations of black boxes that cannot be realized by inserting the input black boxes within a circuit in a pre-defined causal order. The simplest example of such a transformation is the classical switch of black boxes, where two input black boxes are arranged in two different orders conditionally on the value of a classical bit. The quantum version of this transformation, the quantum switch, produces an output circuit where the order of the connections is controlled by a quantum bit, which becomes entangled with the circuit structure. Simulating these transformations in a circuit with fixed causal structure requires either postselection or an extra query to the input black boxes.
We investigate general probabilistic theories in which every mixed state has a purification, unique up to reversible channels on the purifying system. We show that the purification principle is equivalent to the existence of a reversible realization of every physical process, namely that every physical process can be regarded as arising from a reversible interaction of the system with an environment, which is eventually discarded. From the purification principle we also construct an isomorphism between transformations and bipartite states that possesses all the structural properties of the Choi-Jamiolkowski isomorphism in quantum mechanics. Such an isomorphism allows one to prove most of the basic features of quantum mechanics, such as the existence of pure bipartite states giving perfect correlations in independent experiments, no information without disturbance, no joint discrimination of all pure states, no cloning, teleportation, no programming, no bit commitment, complementarity between correctable channels and deletion channels, the characterization of entanglement-breaking channels as measure-and-prepare channels, and others, without resorting to the mathematical framework of Hilbert spaces.
Bit commitment protocols, whose security is based on the laws of quantum mechanics alone, are generally held to be impossible on the basis of a concealment-bindingness tradeoff. A strengthened and explicit impossibility proof has been given in: G. M. D'Ariano, D. Kretschmann, D. Schlingemann, and R. F. Werner, Phys. Rev. A 76, 032328 (2007), in the Heisenberg picture and in a C*-algebraic framework, considering all conceivable protocols in which both classical and quantum information are exchanged. In the present paper we provide a new impossibility proof in the Schrödinger picture, greatly simplifying the classification of protocols and strategies by using the mathematical formulation in terms of quantum combs, with each single-party strategy represented by a conditional comb. We prove that assuming a stronger notion of concealment, worst-case over the classical information histories, allows Alice's cheating strategy to pass even Bob's worst-case test. The present approach allows us to restate the concealment-bindingness tradeoff in terms of the continuity of dilations of probabilistic quantum combs with respect to the comb-discriminability distance.
We present a framework to treat quantum networks and all possible transformations thereof, including as special cases all possible manipulations of quantum states, measurements, and channels, such as cloning, discrimination, estimation, and tomography. Our framework is based on two concepts: the quantum comb, which describes all transformations achievable by a given quantum network, and the link product, the operation of connecting two quantum networks. Quantum networks are treated both from a constructive point of view, based on connections of elementary circuits, and from an axiomatic one, based on a hierarchy of admissible quantum maps. In the axiomatic context a fundamental property is shown, which we call universality of quantum memory channels: any admissible transformation of quantum networks can be realized by a suitable sequence of memory channels. The open problem whether this property fails for some nonquantum theory, e.g., for no-signaling boxes, is posed.
We address the problem of learning an unknown unitary transformation from a finite number of examples. The problem consists in finding the learning machine that optimally emulates the examples, thus reproducing the unknown unitary with maximum fidelity. Learning a unitary is equivalent to storing it in the state of a quantum memory (the memory of the learning machine), and subsequently retrieving it. We prove that, whenever the unknown unitary is drawn from a group, the optimal strategy consists in a parallel call of the available uses followed by a "measure-and-rotate" retrieving. Differing from the case of quantum cloning, where the incoherent "measure-and-prepare" strategies are typically suboptimal, in the case of learning the "measure-and-rotate" strategy is optimal even when the learning machine is asked to reproduce a single copy of the unknown unitary. We finally address the problem of the optimal inversion of an unknown unitary evolution, showing also in this case the optimality of the "measure-and-rotate" strategies and applying our result to the optimal approximate realignment of reference frames for quantum communication.
A sequential network of quantum operations is efficiently described by its quantum comb, a non-negative operator with suitable normalization constraints. Here we analyze the case of networks enjoying symmetry with respect to the action of a given group of physical transformations, introducing the notion of covariant combs and testers, and proving the basic structure theorems for these objects. As an application, we discuss the optimal alignment of reference frames (without pre-established common references) with multiple rounds of quantum communication, showing that i) allowing an arbitrary amount of classical communication does not improve the alignment, and ii) a single round of quantum communication is sufficient.
We present a general dilation scheme for quantum instruments with continuous outcome space in finite dimensions, in terms of an indirect POVM measurement performed on a finite dimensional ancilla. The general result is then applied to a large class of instruments generated by operator frames, which contains group-covariant instruments as a particular case, and allows one to construct dilation schemes based on a measurement on the ancilla followed by a conditional feed-forward operation on the output. In the case of tight operator frames our construction generalizes quantum teleportation and telecloning, producing a whole family of generalized teleportation schemes in which the instrument is realized via a joint POVM at the sender combined with a conditional feed-forward operation at the receiver.
We analyze the convex structure of the set of positive operator valued measures (POVMs) representing quantum measurements on a given finite dimensional quantum system, with outcomes in a given locally compact Hausdorff space. The extreme points of the convex set are operator valued measures concentrated on a finite set of k ≤ d² points of the outcome space, where d < ∞ is the dimension of the Hilbert space. We prove that for second countable outcome spaces any POVM admits a Choquet representation as the barycenter of the set of extreme points with respect to a suitable probability measure. In the general case, the Krein-Milman theorem is invoked to represent POVMs as barycenters of a certain set of POVMs concentrated on k ≤ d² points of the outcome space.
We present the first complete optimization of quantum tomography, for states, POVMs, and various classes of transformations, for arbitrary prior ensemble and arbitrary representation, giving corresponding feasible experimental schemes.
We introduce the concept of quantum supermap, describing the most general transformation that maps an input quantum operation into an output quantum operation. Since quantum operations include as special cases quantum states, effects, and measurements, quantum supermaps describe all possible transformations between elementary quantum objects (quantum systems as well as quantum devices). After giving the axiomatic definition of supermap, we prove a realization theorem, which shows that any supermap can be physically implemented as a simple quantum circuit. Applications to quantum programming, cloning, discrimination, estimation, information-disturbance trade-off, and tomography of channels are outlined.
After proving a general no-cloning theorem for black boxes, we derive the optimal universal cloning of unitary transformations, from one to two copies. The optimal cloner is realized by quantum channels with memory, and greatly outperforms the optimal measure-and-reprepare cloning strategy. Applications are outlined, including two-way quantum cryptographic protocols.
We consider quantum-memory-assisted protocols for discriminating quantum channels. We show that for the optimal discrimination of memory channels, memory-assisted protocols are needed. This leads to a new notion of distance for channels with memory. For the optimal discrimination and estimation of sets of unitary channels, memory-assisted protocols are not required.
We present a method for optimizing quantum circuit architectures. The method is based on the notion of the "quantum comb", which describes a circuit board in which one can insert variable subcircuits. The method allows one to efficiently address novel kinds of quantum information processing tasks, such as the storing-retrieving and cloning of channels.
We provide a quantum benchmark for teleportation and storage of single-mode squeezed states with zero displacement and a completely unknown degree of squeezing along a given direction. For pure squeezed input states, a fidelity higher than 81.5% has to be attained in order to outperform any classical strategy based on an estimation of the unknown squeezing and repreparation of squeezed states. For squeezed thermal input states, we derive an upper and a lower bound on the classical average fidelity, which become tight for moderate degrees of mixedness. These results enable a critical discussion of recent experiments with squeezed light.
Mar 14 2007 quant-ph
This note contains the complete mathematical proof of the main Theorem of the paper "How continuous measurements in finite dimension are actually discrete" (quant-ph/0702068), thus showing that in finite dimension any measurement with a continuous set of outcomes can be simply realized by randomizing a classical parameter and conditionally performing a measurement with finitely many outcomes.
Feb 08 2007 quant-ph
We show that in finite dimension a quantum measurement with a continuous set of outcomes is always equivalent to a continuous random choice of measurements with only finitely many outcomes.
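A minimal sketch of this equivalence for the qubit, using the standard covariant phase POVM M(dθ) = |θ><θ| dθ/π with |θ> = (|0> + e^{iθ}|1>)/√2 (our choice of example; the abstract does not spell out a specific measurement): drawing an angle φ uniformly in [0, π) and then performing the two-outcome projective measurement {|φ><φ|, |φ+π><φ+π|} reproduces the continuous-outcome statistics exactly. The Monte Carlo check below verifies the first moment on the input state |+>:

```python
# Toy check: a continuous-outcome qubit phase measurement realized as a
# classical randomization over two-outcome projective measurements.
import math
import random

random.seed(0)

def born_prob(theta):
    """|<theta|+>|^2 for the equatorial input state |+> (theta = 0)."""
    return math.cos(theta / 2) ** 2

def sample_continuous_povm():
    """One run of the randomized finite-outcome scheme."""
    phi = random.uniform(0.0, math.pi)      # classical randomization
    if random.random() < born_prob(phi):    # projector |phi><phi| fires
        return phi
    return phi + math.pi                    # projector |phi+pi><phi+pi| fires

# On |+> the outcome density is cos^2(theta/2)/pi, whose first moment
# is the integral of cos(theta) * cos^2(theta/2)/pi over [0, 2*pi) = 1/2.
n = 200_000
avg_cos = sum(math.cos(sample_continuous_povm()) for _ in range(n)) / n
print(round(avg_cos, 3))
```

The agreement of the sampled first moment with the analytic value 1/2 illustrates how the continuous POVM is simulated outcome-by-outcome by finite measurements plus classical randomness.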
Nov 08 2006 quant-ph
We consider the classical algebra of observables that are diagonal in a given orthonormal basis, and define a complete decoherence process as a completely positive map that asymptotically converts any quantum observable into a diagonal one, while preserving the elements of the classical algebra. For quantum systems in dimension two and three, any decoherence process can be undone by collecting classical information from the environment and using that information to restore the initial system state. We illustrate the quantum eraser of Scully et al. [Nature 351, 111 (1991)] as a relevant instance of environment-assisted correction. Moreover, we present a generalization of the eraser setup to d-dimensional systems, showing that any von Neumann measurement on a system can be undone by a complementary measurement on the environment.
Oct 18 2006 quant-ph
This paper collects miscellaneous results about the group SU(1,1) that are helpful in quantum optics applications. Moreover, we derive two new results: the first concerns the approximability of SU(1,1) elements by a finite set of elementary gates, and the second concerns the regularization of group identities for tomographic purposes.