- May 22 2017 physics.soc-ph quant-ph arXiv:1705.06768v1 An imperative aspect of modern science is that scientific institutions act for the benefit of a common scientific enterprise, rather than for the personal gain of individuals within them. This implies that science should not perpetuate existing or historical unequal social orders. Some scientific terminology, though, gives a very different impression. I will give two examples of terminology invented recently for the field of quantum information which use language associated with subordination, slavery, and racial segregation: 'ancilla qubit' and 'quantum supremacy'.
- A tripartite state $\rho_{ABC}$ forms a Markov chain if there exists a recovery map $\mathcal{R}_{B \to BC}$ acting only on the $B$-part that perfectly reconstructs $\rho_{ABC}$ from $\rho_{AB}$. To achieve an approximate reconstruction, it suffices that the conditional mutual information $I(A:C|B)_{\rho}$ is small, as shown recently. Here we ask what conditions are necessary for approximate state reconstruction. This is answered by a lower bound on the relative entropy between $\rho_{ABC}$ and the recovered state $\mathcal{R}_{B\to BC}(\rho_{AB})$. The bound consists of the conditional mutual information and an entropic correction term that quantifies the disturbance of the $B$-part by the recovery map.
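For classical (diagonal) states, the quantum conditional mutual information in the abstract above reduces to a combination of Shannon entropies, and it vanishes exactly when $A \to B \to C$ is a Markov chain. A minimal numerical sketch of that classical special case (the distributions below are illustrative, not from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly multi-axis) distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cond_mutual_info(p_abc):
    """I(A:C|B) = H(AB) + H(BC) - H(B) - H(ABC) for p_abc[a, b, c]."""
    h_ab = entropy(p_abc.sum(axis=2))
    h_bc = entropy(p_abc.sum(axis=0))
    h_b = entropy(p_abc.sum(axis=(0, 2)))
    h_abc = entropy(p_abc)
    return h_ab + h_bc - h_b - h_abc

# A classical Markov chain A -> B -> C: p(a,b,c) = p(a) p(b|a) p(c|b)
p_a = np.array([0.5, 0.5])
p_b_given_a = np.array([[0.9, 0.1], [0.2, 0.8]])
p_c_given_b = np.array([[0.7, 0.3], [0.4, 0.6]])
p_abc = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]

print(cond_mutual_info(p_abc))  # vanishes (up to rounding) for a Markov chain
```

In the quantum setting the paper bounds the recovery error from below by this quantity plus a correction for the disturbance of the $B$-part; the classical identity above is only the degenerate case where the recovery map is Bayesian inference on $B$.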
- We introduce and physically motivate the following problem in geometric combinatorics, originally inspired by analysing Bell inequalities. A grasshopper lands at a random point on a planar lawn of area one. It then jumps once, a fixed distance $d$, in a random direction. What shape should the lawn be to maximise the chance that the grasshopper remains on the lawn after jumping? We show that, perhaps surprisingly, a disc shaped lawn is not optimal for any $d>0$. We investigate further by introducing a spin model whose ground state corresponds to the solution of a discrete version of the grasshopper problem. Simulated annealing and parallel tempering searches are consistent with the hypothesis that for $ d < \pi^{-1/2}$ the optimal lawn resembles a cogwheel with $n$ cogs, where the integer $n$ is close to $ \pi ( \arcsin ( \sqrt{\pi} d /2 ) )^{-1}$. We find transitions to other shapes for $d \gtrsim \pi^{-1/2}$.
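The conjectured cog count in the abstract above is a closed-form expression, so it is easy to tabulate. A short sketch (the function name is ours; the formula is taken directly from the abstract and applies to the unit-area lawn for $d < \pi^{-1/2} \approx 0.564$):

```python
import math

def cog_count(d):
    """Estimated optimal number of cogs, n ~ pi / arcsin(sqrt(pi) * d / 2),
    for jump distance d < pi**-0.5 on a unit-area lawn (per the abstract)."""
    return math.pi / math.asin(math.sqrt(math.pi) * d / 2)

for d in (0.1, 0.3, 0.5):
    print(d, round(cog_count(d)))
```

As expected from the abstract's picture, the number of cogs grows as the jump distance shrinks, and the estimate is only claimed to be close to the integer observed in the annealing searches, not exact.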
- We survey various convex optimization problems in quantum information theory involving the relative entropy function. We show how to solve these problems numerically using off-the-shelf semidefinite programming solvers, via the approximation method proposed in [Fawzi, Saunderson, Parrilo, Semidefinite approximations of the matrix logarithm, arXiv:1705.00812]. In particular we use this method to provide numerical counterexamples for a proposed lower bound on the quantum conditional mutual information in terms of the relative entropy of recovery.
- May 25 2017 quant-ph arXiv:1705.08869v1 We show that Clifford operations on qubit stabilizer states are non-contextual and can be represented by non-negative quasi-probability distributions associated with a Wigner-Weyl-Moyal formalism. This is accomplished by generalizing the Wigner-Weyl-Moyal formalism to three generators instead of two---producing an exterior, or Grassmann, algebra---which results in Clifford group gates for qubits that act as a permutation on the finite Weyl phase space points naturally associated with stabilizer states. As a result, a non-negative probability distribution can be associated with each stabilizer state's three-generator Wigner function, and these distributions evolve deterministically into one another under Clifford gates. This corresponds to a hidden variable theory that is non-contextual and local for qubit Clifford operations. Equivalently, we show that qubit Clifford gates can be expressed as propagators within the three-generator Wigner-Weyl-Moyal formalism whose semiclassical expansion is truncated at order $\hbar^0$ with a finite number of terms. The $T$-gate, which extends the Clifford gate set to one capable of universal quantum computation, requires a semiclassical expansion of its propagator to order $\hbar^1$. We compare this approach to previous quasi-probability descriptions of qubits that relied on the two-generator Wigner-Weyl-Moyal formalism and find that the two-generator Weyl symbols of stabilizer states result in a description of evolution under Clifford gates that is state-dependent. We show that this two-generator description of stabilizer evolution is thus a non-local and contextual hidden variable theory---it is a contextual description of a non-contextual process. We have thus extended the established result that Clifford stabilizer operations are non-contextual and have non-negative quasi-probability distributions in the odd $d$-dimensional case to $d=2$ qubits.
- May 24 2017 quant-ph arXiv:1705.08023v1 One of the most widely known building blocks of modern physics is Heisenberg's indeterminacy principle. Among the different statements of this fundamental property of the full quantum mechanical nature of physical reality, the uncertainty relation for energy and time has a special place. Its interpretation and its consequences have inspired continued research efforts for almost a century. In its modern formulation, the uncertainty relation is understood as setting a fundamental bound on how fast any quantum system can evolve. In this Topical Review we describe important milestones, such as the Mandelstam-Tamm and the Margolus-Levitin bounds on the quantum speed limit, and summarise recent applications in a variety of current research fields -- including quantum information theory, quantum computing, and quantum thermodynamics amongst several others. To bring order and to provide an access point into the many different notions and concepts, we have grouped the various approaches into the minimal time approach and the geometric approach, where the former relies on quantum control theory, and the latter arises from measuring the distinguishability of quantum states. Due to the volume of the literature, this Topical Review can only present a snapshot of the current state-of-the-art and can never be fully comprehensive. Therefore, we highlight but a few works, hoping that our selection can serve as a representative starting point for the interested reader.
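The two named milestones have simple closed forms: the Mandelstam-Tamm bound limits the orthogonalization time by the energy spread $\Delta E$, and the Margolus-Levitin bound by the mean energy $E$ above the ground state; the tighter (unified) bound is the larger of the two. A sketch of these textbook expressions (function names are ours):

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def mandelstam_tamm(delta_e):
    """Minimal time to reach an orthogonal state: tau >= pi*hbar/(2*dE)."""
    return np.pi * HBAR / (2 * delta_e)

def margolus_levitin(mean_e):
    """Minimal time to reach an orthogonal state: tau >= pi*hbar/(2*E),
    with E the mean energy above the ground state."""
    return np.pi * HBAR / (2 * mean_e)

def speed_limit(mean_e, delta_e):
    """Unified quantum speed limit: the larger of the two bounds applies."""
    return max(mandelstam_tamm(delta_e), margolus_levitin(mean_e))
```

Which bound dominates depends on whether the state's energy spread or its mean energy is smaller, which is one reason the review groups the many later refinements into distinct families.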
- May 23 2017 quant-ph cond-mat.dis-nn arXiv:1705.07855v1 A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed to achieve optimal performance. The long short-term memory cell of the recurrent neural network maintains this performance over a large number of error correction cycles, making it a practical decoder for forthcoming experimental realizations. On a density-matrix simulation of the 17-qubit surface code, our neural network decoder achieves a substantial performance improvement over a state-of-the-art blossom decoder.
- May 22 2017 quant-ph cond-mat.str-el arXiv:1705.06833v1 Recent progress in the characterization of gapped quantum phases has also triggered the search for universal resources for quantum computation in symmetric gapped phases. Prior works in one dimension suggest that it is a feature more common than previously thought that nontrivial 1D symmetry-protected topological (SPT) phases provide quantum computational power characterized by the algebraic structure defining these phases. Progress in two and higher dimensions so far has been limited to special fixed points in SPT phases. Here we provide two families of 2D $Z_2$ symmetric wave functions such that there exists a finite region of parameter space in the SPT phases that supports universal quantum computation. The quantum computational power loses its universality at the boundary between the SPT and symmetry-breaking phases.
- May 24 2017 quant-ph arXiv:1705.07918v1 We consider the contextual fraction as a quantitative measure of contextuality of empirical models, i.e. tables of probabilities of measurement outcomes in an experimental scenario. It provides a general way to compare the degree of contextuality across measurement scenarios; it bears a precise relationship to violations of Bell inequalities; its value, and a witnessing inequality, can be computed using linear programming; it is monotone with respect to the "free" operations of a resource theory for contextuality; and it measures quantifiable advantages in informatic tasks, such as games and a form of measurement based quantum computing.
- May 19 2017 cond-mat.str-el arXiv:1705.06728v1 Symmetry protected topological (SPT) states have boundary anomalies that obstruct the effective boundary theory realized in its own dimension with UV completion and with an on-site $G$-symmetry. In this work, we nevertheless show that a certain anomalous non-on-site $G$ symmetry along the boundary becomes on-site when viewed as a larger $H$ symmetry, via a suitable group extension $1\to K\to H\to G\to1$. Namely, a non-perturbative global (gauge/gravitational) anomaly in $G$ becomes anomaly-free in $H$. This guides us to formulate exactly soluble lattice path integral and Hamiltonian constructions of symmetric gapped boundaries applicable to any SPT state of any finite symmetry group, including on-site unitary and anti-unitary time-reversal symmetries. The resulting symmetric gapped boundary can be described either by an $H$-symmetry extended boundary in any spacetime dimension, or more naturally by a topological $K$-gauge theory with a global symmetry $G$ on a 3+1D bulk or above. The excitations on such a symmetric topologically ordered boundary can carry fractional quantum numbers of the symmetry $G$, described by representations of $H$. (Applying our approach to a 1+1D boundary of a 2+1D bulk, we find that a deconfined topologically ordered boundary indeed has spontaneous symmetry breaking with long-range order. The deconfined symmetry-breaking phase crosses over smoothly to a confined phase without a phase transition.) In contrast to known gapped boundaries/interfaces obtained via symmetry breaking (either global symmetry breaking or the Anderson-Higgs mechanism for gauge theory), our approach is based on symmetry extension. More generally, applying our approach to SPT states, topologically ordered gauge theories and symmetry enriched topologically ordered (SET) states leads to generic boundaries/interfaces constructed with a mixture of symmetry breaking, symmetry extension, and dynamical gauging.
- An important class of contextuality arguments in quantum foundations is the class of All-versus-Nothing (AvN) proofs, generalising a construction originally due to Mermin. We present a general formulation of All-versus-Nothing arguments, and a complete characterisation of all such arguments which arise from stabiliser states. We show that every AvN argument for an n-qubit stabiliser state can be reduced to an AvN proof for a three-qubit state which is local Clifford-equivalent to the tripartite GHZ state. This is achieved through a combinatorial characterisation of AvN arguments, the AvN triple Theorem, whose proof makes use of the theory of graph states. This result enables the development of a computational method to generate all the AvN arguments in $\mathbb{Z}_2$ on n-qubit stabiliser states. We also present new insights into the stabiliser formalism and its connections with logic.
- We present a bipartite partial function, whose communication complexity is $O((\log n)^2)$ in the model of quantum simultaneous message passing and $\tilde\Omega(\sqrt n)$ in the model of randomised simultaneous message passing. In fact, our function has a poly-logarithmic protocol even in the (restricted) model of quantum simultaneous message passing without shared randomness, thus witnessing the possibility of qualitative advantage of this model over randomised simultaneous message passing with shared randomness. This can be interpreted as the strongest known example to date of "super-classical" capabilities of the weakest studied model of quantum communication.
- May 22 2017 quant-ph arXiv:1705.07053v1 We consider fundamental limits on the detectable size of macroscopic quantum superpositions. We argue that a full quantum mechanical treatment of system plus measurement device is required, and that a (classical) reference frame for phase or direction needs to be established to certify the quantum state. When taking the size of such a classical reference frame into account, we show that to reliably distinguish a quantum superposition state from an incoherent mixture requires a measurement device that is quadratically bigger than the superposition state. Whereas for moderate system sizes such as those generated in previous experiments this is not a stringent restriction, for macroscopic superpositions of the size of a cat the required effort quickly becomes intractable, requiring measurement devices of the size of the Earth. We illustrate our results using macroscopic superposition states of photons, spins, and position. Finally, we also show how this limitation can be circumvented by dealing with superpositions in relative degrees of freedom.
- Projection operators are central to the algebraic formulation of quantum theory because both wavefunctions and hermitian operators (observables) have spectral decompositions in terms of spectral projections. Projection operators are hermitian operators which are also idempotents. We call them quantum idempotents. They are also important for the conceptual understanding of quantum theory because projection operators represent the observation process on a quantum system. In this paper we explore the algebra of quantum idempotents and show that they generate the iterant algebra (defined in the paper), Lie algebras, Grassmann algebras and Clifford algebras, which is very interesting because these latter algebras were introduced for the geometry of spaces and hence are called geometric algebras. Thus the projection operator representation gives a new meaning to these geometric algebras in that they are also underlying algebras of quantum processes, and they bring geometry closer to quantum theory. It should be noted that projection operators not only form the lattices of quantum logic but also span projective geometry. We will give iterant representations of framed braid group algebras, parafermion algebras and the $su(3)$ algebra of quarks. These representations are striking because the iterant algebra encodes the spatial and temporal aspects of recursive processes. In that regard our representation of these algebras for physics opens up entirely new perspectives on fermions, spins and parafermions (anyons).
- Communication over a noisy channel is often conducted in a setting in which different input symbols to the channel incur a certain cost. For example, for the additive white Gaussian noise channel, the cost associated with a real number input symbol is the square of its magnitude. In such a setting, it is often useful to know the maximum amount of information that can be reliably transmitted per cost incurred. This is known as the capacity per unit cost. In this paper, we generalize the capacity per unit cost to various communication tasks involving a quantum channel; in particular, we consider classical communication, entanglement-assisted classical communication, private communication, and quantum communication. For each task, we define the corresponding capacity per unit cost and derive a formula for it via the expression for the capacity per channel use. Furthermore, for the special case in which there is a zero-cost quantum state, we obtain expressions for the various capacities per unit cost in terms of an optimized relative entropy involving the zero-cost state. For each communication task, we construct an explicit pulse-position-modulation coding scheme that achieves the capacity per unit cost. Finally, we compute capacities per unit cost for various quantum Gaussian channels.
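The classical precursor of the zero-cost-state result above is Verdú's formula: when a zero-cost input symbol exists, the capacity per unit cost is the supremum over costly symbols of the relative entropy between their output distribution and the zero-cost symbol's, divided by the cost. A sketch of that classical formula for a toy two-symbol channel (the channel and costs below are illustrative):

```python
import numpy as np

def rel_entropy(p, q):
    """D(p||q) in bits; assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def capacity_per_unit_cost(channel, costs, zero_cost_symbol):
    """Classical capacity per unit cost with a free symbol (Verdu 1990):
    C = max over costly x of D(P_x || P_0) / b(x),
    where channel[x] is the output distribution for input x."""
    p0 = channel[zero_cost_symbol]
    return max(rel_entropy(channel[x], p0) / costs[x]
               for x in range(len(channel)) if costs[x] > 0)

channel = np.array([[0.9, 0.1],   # symbol 0: cost 0
                    [0.2, 0.8]])  # symbol 1: cost 1
costs = [0.0, 1.0]
print(capacity_per_unit_cost(channel, costs, 0))  # bits per unit cost
```

The paper's contribution is the quantum generalization: analogous optimized-relative-entropy expressions for classical, private, and quantum communication over a quantum channel, achieved by pulse-position modulation.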
- May 24 2017 quant-ph arXiv:1705.07911v1 Contextuality is a fundamental feature of quantum theory and is necessary for quantum computation and communication. Serious steps have therefore been taken towards a formal framework for contextuality as an operational resource. However, the most important component for a resource theory - a concrete, explicit form for the free operations of contextuality - was still missing. Here we provide such a component by introducing noncontextual wirings: a physically-motivated class of contextuality-free operations with a friendly parametrization. We characterize them completely for the general case of black-box measurement devices with arbitrarily many inputs and outputs. As applications, we show that the relative entropy of contextuality is a contextuality monotone and that maximally contextual boxes that serve as contextuality bits exist for a broad class of scenarios. Our results complete a unified resource-theoretic framework for contextuality and Bell nonlocality.
- May 23 2017 quant-ph arXiv:1705.07452v1 The observation of a quantum speedup remains an elusive objective for quantum computing. The D-Wave quantum annealing processors have been at the forefront of experimental attempts to address this goal, given their relatively large numbers of qubits and programmability. A complete determination of the optimal time-to-solution (TTS) using these processors has not been possible to date, preventing definitive conclusions about the presence of a speedup. The main technical obstacle has been the inability to verify an optimal annealing time within the available range. Here we overcome this obstacle and present a class of problem instances for which we observe an optimal annealing time using a D-Wave 2000Q processor with more than $2000$ qubits. In doing so we are able to perform an optimal TTS benchmarking analysis, in comparison to several classical algorithms that implement the same algorithmic approach: single-spin simulated annealing, spin-vector Monte Carlo, and discrete-time simulated quantum annealing. We establish the first example of a limited quantum speedup for an experimental quantum annealer: we find that the D-Wave device exhibits certifiably better scaling than both simulated annealing and spin-vector Monte Carlo, with $95\%$ confidence, over the range of problem sizes that we can test. However, we do not find evidence for an unqualified quantum speedup: simulated quantum annealing exhibits the best scaling by a significant margin. Our construction of instance classes with verifiably optimal annealing times opens up the possibility of generating many new such classes, paving the way for further definitive assessments of speedups using current and future quantum annealing devices.
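The time-to-solution metric used in such benchmarks has a standard definition: the annealing time per run, multiplied by the expected number of runs needed to see the ground state at least once with some target confidence. A sketch of that standard formula and of why an optimal annealing time matters (the probability values below are toy numbers, not from the paper):

```python
import math

def time_to_solution(anneal_time, success_prob, target=0.99):
    """Standard TTS: run time times the number of repetitions needed to
    find the ground state at least once with probability `target`,
    when each run succeeds independently with `success_prob`."""
    runs = math.log(1 - target) / math.log(1 - success_prob)
    return anneal_time * runs

# Longer anneals succeed more often but cost more per run; the optimal
# TTS minimises this trade-off over the available annealing times.
probs = {1: 0.001, 5: 0.02, 20: 0.15, 100: 0.40}  # toy p(t_a) values
best = min(time_to_solution(t, p) for t, p in probs.items())
print(best)
```

The obstacle the paper addresses is that if the minimiser of this trade-off sits below the hardware's shortest programmable annealing time, the measured TTS scaling is only an upper bound, and no speedup conclusion can be drawn.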
- May 22 2017 quant-ph arXiv:1705.06793v1 Lidar is a well known optical technology for measuring a target's range and radial velocity. We describe two lidar systems that use entanglement between transmitted signals and retained idlers to obtain significant quantum enhancements in simultaneous measurement of these parameters. The first entanglement-enhanced lidar circumvents the Arthurs-Kelly uncertainty relation for simultaneous measurement of range and radial velocity from detection of a single photon returned from the target. This performance presumes there is no extraneous (background) light, but is robust to the roundtrip loss incurred by the signal photons. The second entanglement-enhanced lidar---which requires a lossless, noiseless environment---realizes Heisenberg-limited accuracies for both its range and radial-velocity measurements, i.e., their root-mean-square estimation errors are both proportional to $1/M$ when $M$ signal photons are transmitted. These two lidars derive their entanglement-based enhancements from use of a unitary transformation that takes a signal-idler photon pair with frequencies $\omega_S$ and $\omega_I$ and converts it to a signal-idler photon pair whose frequencies are $(\omega_S + \omega_I)/2$ and $\omega_S-\omega_I$. Insight into how this transformation provides its benefits is provided through an analogy to superdense coding.
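The frequency transformation at the heart of both lidars is a simple invertible map on the pair of frequencies, which is consistent with its being implemented unitarily. A sketch of the arithmetic (function names and the example frequencies are ours):

```python
def convert_pair(omega_s, omega_i):
    """Map a signal-idler pair (w_S, w_I) to the average and difference
    frequencies ((w_S + w_I)/2, w_S - w_I), per the abstract."""
    return (omega_s + omega_i) / 2, omega_s - omega_i

def invert_pair(avg, diff):
    """The map is invertible, as any unitary frequency conversion must be."""
    return avg + diff / 2, avg - diff / 2

pair = (200.0, 190.0)  # arbitrary units
assert invert_pair(*convert_pair(*pair)) == pair
```

Intuitively, range information rides on one output frequency and velocity information on the other, which is what lets the two parameters be read out simultaneously.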
- May 19 2017 quant-ph arXiv:1705.06343v1 We introduce a framework for simulating quantum measurements based on classical processing of a set of accessible measurements. Well-known concepts such as joint measurability and projective simulability naturally emerge as particular cases of our framework, but our study also leads to novel results and questions. First, a generalisation of joint measurability is derived, which yields a hierarchy for the incompatibility of sets of measurements. A similar hierarchy is defined based on the number of outcomes used to perform the simulation of a given measurement. This general approach also allows us to identify connections between different types of simulability and, in particular, we characterise the qubit measurements that are projective-simulable in terms of joint measurability. Finally, we discuss how our framework can be interpreted in the context of resource theories.
- The discovery of topological states of matter has profoundly augmented our understanding of phase transitions in physical systems. Instead of local order parameters, topological phases are described by global topological invariants and are therefore robust against perturbations. A prominent example thereof is the two-dimensional integer quantum Hall effect. It is characterized by the first Chern number which manifests in the quantized Hall response induced by an external electric field. Generalizing the quantum Hall effect to four-dimensional systems leads to the appearance of a novel non-linear Hall response that is quantized as well, but described by a 4D topological invariant - the second Chern number. Here, we report on the first observation of a bulk response with intrinsic 4D topology and the measurement of the associated second Chern number. By implementing a 2D topological charge pump with ultracold bosonic atoms in an angled optical superlattice, we realize a dynamical version of the 4D integer quantum Hall effect. Using a small atom cloud as a local probe, we fully characterize the non-linear response of the system by in-situ imaging and site-resolved band mapping. Our findings pave the way to experimentally probe higher-dimensional quantum Hall systems, where new topological phases with exotic excitations are predicted.
- Chaotic dynamics in quantum many-body systems scrambles local information so that at late times it can no longer be accessed locally. This is reflected quantitatively in the out-of-time-ordered correlator of local operators which is expected to decay to zero with time. However, for systems of finite size, out-of-time-ordered correlators do not decay exactly to zero and we show in this paper that the residue value can provide useful insights into the chaotic dynamics. In particular, we show that when energy is conserved, the late-time saturation value of out-of-time-ordered correlators for generic traceless local operators scales inverse polynomially with the system size. This is in contrast to the inverse exponential scaling expected for chaotic dynamics without energy conservation. We provide both analytical arguments and numerical simulations to support this conclusion.
- May 23 2017 quant-ph arXiv:1705.07160v1 We show that two notions of entanglement, the maximum of the geometric measure of entanglement and the maximum of the nuclear norm, are attained for the same states. We affirm the conjecture of Higuchi-Sudbery on the maximally entangled state of four qubits. We introduce the notion of a d-density tensor for mixed d-partite states. We show that a d-density tensor is separable if and only if its nuclear norm is $1$. We suggest an alternating method for computing the nuclear norm of tensors. We apply the above results to symmetric tensors. We give many numerical examples.
- May 22 2017 quant-ph arXiv:1705.07044v1 Continuous-variable systems in quantum theory can be fully described through any one of the ${\rm s}$-ordered family of quasiprobabilities $\Lambda_{\rm s}(\alpha)$, ${\rm s} \in [-1,1]$. We ask for what values of $({\rm s}, a)$ is the scaling map $\Lambda_{\rm s}(\alpha) \rightarrow a^{-2} \Lambda_{\rm s}(a^{-1}\alpha)$ a positive map? Our analysis, based on a duality we establish, settles this issue: (i) the scaling map generically fails to be positive, showing that there is no useful entanglement witness of the scaling type beyond the transpose map, and (ii) in the two particular cases $({\rm s}=1, |a| \leq 1)$ and $({\rm s}=-1, |a| \geq 1)$, and only in these two non-trivial cases, the map is not only positive but also completely positive as seen through the noiseless attenuator and amplifier channels. We also present the `phase diagram' for the behaviour of the scaling maps in the ${\rm s}-a$ parameter space in respect of its positivity, obtained from the viewpoint of symmetric-ordered characteristic functions. This also sheds light on similar phase diagrams for the practically relevant attenuation and amplification maps with respect to the noise parameter, and in particular the transition from being non-positive to completely positive.
- May 22 2017 quant-ph cond-mat.other arXiv:1705.06901v1 Efficient communication between qubits relies on robust networks which allow for fast and coherent transfer of quantum information. It seems natural to harvest the remarkable properties of systems characterized by topological invariants to perform this task. Here we show that a linear network of coupled bosonic degrees of freedom, characterized by topological bands, can be employed for the efficient exchange of quantum information over large distances. Important features of our setup are that it is robust against quenched disorder, all relevant operations can be performed by global variations of parameters, and the time required for communication between distant qubits approaches linear scaling with their distance. We demonstrate that our concept can be extended to an ensemble of qubits embedded in a two-dimensional network to allow for communication between all of them.
- May 24 2017 quant-ph arXiv:1705.08008v1 The aim of this contribution is to discuss relations between non-classical features, such as entanglement, incompatibility of measurements, steering and non-locality, in general probabilistic theories. We show that all these features are particular forms of entanglement, which leads to close relations between their quantifications. For this, we study the structure of the tensor products of a compact convex set with a semiclassical state space.
- May 19 2017 cond-mat.str-el cond-mat.dis-nn arXiv:1705.06724v1 The recently-introduced self-learning Monte Carlo method is a general-purpose numerical method that speeds up Monte Carlo simulations by training an effective model to propose uncorrelated configurations in the Markov chain. We implement this method in the framework of the continuous-time Monte Carlo method with auxiliary field in quantum impurity models. We introduce and train a diagram generating function (DGF) to model the probability distribution of auxiliary field configurations in continuous imaginary time, at all orders of diagrammatic expansion. By using the DGF to propose global moves in configuration space, we show that the self-learning continuous-time Monte Carlo method can significantly reduce the computational complexity of the simulation.
- When a two-dimensional electron gas is exposed to a perpendicular magnetic field and an in-plane electric field, its conductance becomes quantized in the transverse in-plane direction: this is known as the quantum Hall (QH) effect. This effect is a result of the nontrivial topology of the system's electronic band structure, where an integer topological invariant known as the first Chern number leads to the quantization of the Hall conductance. Interestingly, it was shown that the QH effect can be generalized mathematically to four spatial dimensions (4D), but this effect has never been realized for the obvious reason that experimental systems are bound to three spatial dimensions. In this work, we harness the high tunability and control offered by photonic waveguide arrays to experimentally realize a dynamically-generated 4D QH system using a 2D array of coupled optical waveguides. The inter-waveguide separation is constructed such that the propagation of light along the device samples over higher-dimensional momenta in the directions orthogonal to the two physical dimensions, thus realizing a 2D topological pump. As a result, the device's band structure is associated with 4D topological invariants known as second Chern numbers which support a quantized bulk Hall response with a 4D symmetry. In a finite-sized system, the 4D topological bulk response is carried by localized edge modes that cross the sample as a function of the modulated auxiliary momenta. We directly observe this crossing through photon pumping from edge-to-edge and corner-to-corner of our system. These are equivalent to the pumping of charge across a 4D system from one 3D hypersurface to the opposite one and from one 2D hyperedge to another, and serve as the first experimental realization of higher-dimensional topological physics.
- May 24 2017 quant-ph arXiv:1705.08028v1 One of the most striking features of quantum theory is the existence of entangled states, responsible for Einstein's so-called "spooky action at a distance". These states emerge from the mathematical formalism of quantum theory, but to date we do not have a clear idea of the physical principles that give rise to entanglement. Why does nature have entangled states? Would any theory superseding classical theory have entangled states, or is quantum theory special? One important feature of quantum theory is that it has a classical limit, recovering classical theory through the process of decoherence. We show that any theory with a classical limit must contain entangled states, thus establishing entanglement as an inevitable feature of any theory superseding classical theory.
- May 23 2017 cond-mat.supr-con arXiv:1705.07873v1 Quantum control of atomic systems is largely enabled by the rich structure of selection rules in the spectra of most real atoms. Their macroscopic superconducting counterparts have been lacking this feature, being limited to a single transition type with a large dipole. Here we report a superconducting artificial atom with tunable transition dipoles, designed such that its forbidden (qubit) transition can dispersively interact with microwave photons due to the virtual excitations of allowed transitions. Owing to this effect, we have demonstrated an in-situ tuning of the qubit's energy decay lifetime by over two orders of magnitude, exceeding a value of $2~\textrm{ms}$, while keeping the transition frequency fixed around $3.5~\textrm{GHz}$.
- In a closely packed ensemble of quantum emitters, cooperative effects are typically suppressed due to the dephasing induced by the dipole-dipole interactions. Here, we show that by adding sufficiently strong collective dephasing, cooperative effects can be restored. In particular, we show that the dipole force on a closely packed ensemble of strongly driven two-level quantum emitters, which collectively dephase, is enhanced in comparison to the dipole force on an independent non-interacting ensemble. Our results are relevant to solid state systems with embedded quantum emitters such as colour centers in diamond and superconducting qubits in microwave cavities and waveguides.
- We study the effect of primary entanglement on the decoherence of a field's reduced density matrix when the field interacts with other fields or independent mode functions. We show that primary entanglement plays a significant role in the decoherence of the system's quantum state. We find that the existence of entanglement can couple the dynamical equations derived from the Schrödinger equation. We show that if the entanglement parameter is to have no effect on decoherence, then the interaction terms in the Hamiltonian cannot be independent of each other; in general, including the primary entanglement destroys the independence of the interaction terms. Our results can be generalized to any scalar quantum field theory with a well-defined quantization in a given curved spacetime.
- Decoherence is the process via which quantum superposition states are reduced to classical mixtures. Decoherence has been predicted for relativistically accelerated quantum systems; however, examples to date have involved restricting the detected field modes to particular regions of space-time. If the global state over all space-time is measured then unitarity returns and the decoherence is removed. Here we study a decoherence effect associated with accelerated systems that cannot be explained in this way. In particular we study a uniformly accelerated source of a quantum field state - a single-mode squeezer. Even though the initial state of the field is vacuum (a pure state) and the interaction with the quantum source in the accelerated frame is unitary, we find that the final state detected by inertial observers is decohered, i.e. in a mixed state. This unexpected result may indicate new directions in resolving inconsistencies between relativity and quantum theory. We extend this result to a two-mode state and find entanglement is also decohered.
- This paper is in response to a recent comment by Bellissard [arXiv:1704.02644] on the paper [Phys. Rev. Lett. 118, 130201 (2017)]. It is explained that the issues raised in the comment have already been discussed in the paper and do not affect the conclusions of the paper.
- May 19 2017 quant-ph arXiv:1705.06649v1A game is rigid if a near-optimal score guarantees, under the sole assumption of the validity of quantum mechanics, that the players are using an approximately unique quantum strategy. Rigidity has a vital role in quantum cryptography as it permits a strictly classical user to trust behavior in the quantum realm. This property can be traced back as far as 1998 (Mayers and Yao) and has been proved for multiple classes of games. In this paper we prove rigidity for the magic pentagram game, a simple binary constraint satisfaction game involving two players, five clauses and ten variables. We show that all near-optimal strategies for the pentagram game are approximately equivalent to a unique strategy involving real Pauli measurements on three maximally-entangled qubit pairs.
- May 19 2017 quant-ph arXiv:1705.06646v1We show a surprising link between experimental setups to realize high-dimensional multipartite quantum states and Graph Theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is #P-complete, and thus cannot be done efficiently. To strengthen the link further, theorems from Graph Theory -- such as Hall's marriage problem -- are rephrased in the language of pair creation in quantum experiments. This link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph theoretical methods, and potentially to simulate problems in Graph Theory with quantum experiments.
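  The correspondence can be made concrete: each term of the superposition is a perfect matching, so computing the final state amounts to counting perfect matchings, which is #P-hard in general. A minimal brute-force sketch of that counting problem (our own illustration, not code from the paper):

  ```python
  def count_perfect_matchings(n, edges):
      """Count perfect matchings of an undirected graph on vertices 0..n-1
      by brute-force recursion (exponential time, mirroring the #P-hardness
      of computing the final quantum state)."""
      adj = {frozenset(e) for e in edges}

      def rec(vertices):
          if not vertices:
              return 1  # the empty matching
          v = min(vertices)  # match the smallest remaining vertex
          return sum(rec(vertices - {u, v})
                     for u in vertices
                     if u != v and frozenset((u, v)) in adj)

      return rec(frozenset(range(n)))
  ```

  On the 4-cycle (four photon paths forming a ring) this counts the two matchings {01, 23} and {12, 30}, i.e. the two terms of the corresponding superposition.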
- May 19 2017 cond-mat.str-el cond-mat.dis-nn cond-mat.quant-gas cond-mat.stat-mech quant-ph arXiv:1705.06290v1Many-body localization (MBL) has emerged as a powerful paradigm for understanding non-equilibrium quantum dynamics. Folklore based on perturbative arguments holds that MBL only arises in systems with short range interactions. Here we advance non-perturbative arguments indicating that MBL can arise in systems with long range (Coulomb) interactions. In particular, we show using bosonization that MBL can arise in one dimensional systems with $\sim r$ interactions, a problem that exhibits charge confinement. We also argue that (through the Anderson-Higgs mechanism) MBL can arise in two dimensional systems with $\log r$ interactions, and speculate that our arguments may even extend to three dimensional systems with $1/r$ interactions. Our arguments are `asymptotic' (i.e. valid up to rare region corrections), yet they open the door to investigation of MBL physics in a wide array of long range interacting systems where such physics was previously believed not to arise.
- May 25 2017 quant-ph arXiv:1705.08887v1We demonstrate a synchronized readout (SR) technique for spectrally selective detection of oscillating magnetic fields with sub-millihertz resolution, using coherent manipulation of solid state spins. The SR technique is implemented in a sensitive magnetometer (~50 picotesla/Hz^(1/2)) based on nitrogen vacancy (NV) centers in diamond, and used to detect nuclear magnetic resonance (NMR) signals from liquid-state samples. We obtain NMR spectral resolution ~3 Hz, which is nearly two orders of magnitude narrower than previously demonstrated with NV-based techniques, using a sample volume of ~1 picoliter. This is the first application of NV-detected NMR to sense Boltzmann-polarized nuclear spin magnetization, and the first to observe chemical shifts and J-couplings.
- In this paper, we study convergence properties of the gradient Expectation-Maximization algorithm (Lange, 1995) for Gaussian Mixture Models with a general number of clusters and mixing coefficients. We derive a convergence rate that depends on the mixing coefficients, the minimum and maximum pairwise distances between the true centers, and the dimensionality and number of components, and obtain a near-optimal local contraction radius. While there have been some recent notable works that derive local convergence rates for EM in the two-component equal-mixture symmetric GMM, the more general case requires structurally different and non-trivial arguments. We use recent tools from learning theory and empirical processes to achieve our theoretical results.
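  A single gradient-EM iteration is easy to sketch: compute responsibilities (E-step), then take a gradient step on the means instead of the closed-form M-step. The following is a minimal 1-D illustration assuming shared unit variance and equal mixing weights; the function name and step-size choice are ours, not the paper's:

  ```python
  import numpy as np

  def gradient_em_step(X, mu, sigma=1.0, step=None):
      """One gradient-EM update on the component means of a 1-D Gaussian
      mixture with shared variance sigma^2 and equal mixing weights."""
      # E-step: responsibilities w[i, j] proportional to N(x_i | mu_j, sigma^2)
      d2 = (X[:, None] - mu[None, :]) ** 2
      logw = -d2 / (2 * sigma ** 2)
      w = np.exp(logw - logw.max(axis=1, keepdims=True))
      w /= w.sum(axis=1, keepdims=True)
      # Gradient of the EM surrogate Q with respect to each mean
      grad = (w * (X[:, None] - mu[None, :])).sum(axis=0) / sigma ** 2
      if step is None:
          step = sigma ** 2 / len(X)  # turns the step into a soft average
      return mu + step * grad
  ```

  Iterating this update shares the fixed points of standard EM (the gradient vanishes exactly when each mean equals its responsibility-weighted sample mean), which is what makes local contraction analysis around the true centers possible.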
- May 24 2017 cs.CV arXiv:1705.08421v1This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 64k movie clips with actions localized in space and time, resulting in 197k action labels with multiple labels per human occurring frequently. The main differences with existing video datasets are: (1) the definition of atomic visual actions, which avoids collecting data for each and every complex action; (2) precise spatio-temporal annotations with possibly multiple annotations for each human; (3) the use of diverse, realistic video material (movies). This departs from existing datasets for spatio-temporal action recognition, such as JHMDB and UCF datasets, which provide annotations for at most 24 composite actions, such as basketball dunk, captured in specific environments, i.e., basketball court. We implement a state-of-the-art approach for action localization. Despite this, the performance on our dataset remains low and underscores the need for developing new approaches for video understanding. The AVA dataset is the first step in this direction, and enables the measurement of performance and progress in realistic scenarios.
- Even though the evolution of an isolated quantum system is unitary, the complexity of interacting many-body systems prevents the observation of recurrences of quantum states for all but the smallest systems. For large systems one cannot access the full complexity of the quantum states, and the requirement to observe a recurrence in experiments reduces to being close to the initial state with respect to the employed observable. Selecting an observable connected to the collective excitations in one-dimensional superfluids, we demonstrate recurrences of coherence and long-range order in an interacting quantum many-body system containing thousands of particles. This opens up a new window into the dynamics of large quantum systems even after they have reached a transient thermal-like state.
- May 24 2017 quant-ph cond-mat.stat-mech arXiv:1705.08117v1What are the conditions for adiabatic quantum computation (AQC) to outperform classical computation? We consider the strong quantum speedup: scaling advantage in computational time over the best classical algorithms. Although there exist several quantum adiabatic algorithms achieving the strong quantum speedup, the essential keys to their speedups are still unclear. Here, we propose a necessary condition for the quantum speedup in AQC. This is a conjecture that a superposition of macroscopically distinct states appears during AQC if it achieves the quantum speedup. This is a natural extension of the conjecture in circuit-based quantum computation [A. Shimizu et al., J. Phys. Soc. Jpn. 82, 054801 (2013)]. To describe the statement of the conjecture, we introduce an index $p$ that quantifies a superposition of macroscopically distinct states---macroscopic entanglement---from the asymptotic behaviors of fluctuations of additive observables. We theoretically test the conjecture by investigating five quantum adiabatic algorithms. All the results show that the conjecture is correct for these algorithms. We therefore expect that a superposition of macroscopically distinct states is an appropriate indicator of entanglement crucial to the strong quantum speedup in AQC.
- May 23 2017 quant-ph arXiv:1705.07201v1We begin with a brief summary of issues encountered involving causality in quantum theory, placing careful emphasis on the assumptions involved in results such as the EPR paradox and Bell's inequality. We critique some solutions to the resulting paradox, including Rovelli's relational quantum mechanics and the many-worlds interpretation. We then discuss how a spacetime manifold could come about on the classical level out of a quantum system, by constructing a space with a topology out of the algebra of observables, and show that even with a hypothesis of superluminal causation enforcing consistent measurements of entangled states, a causal cone structure arises on the classical level. Finally, we discuss the possibility that causality as understood in classical relativistic physics may be an emergent symmetry which does not hold on the quantum level.
- May 19 2017 cs.DS arXiv:1705.06730v1We consider the problem of approximating a given matrix by a low-rank matrix so as to minimize the entrywise $\ell_p$-approximation error, for any $p \geq 1$; the case $p = 2$ is the classical SVD problem. We obtain the first provably good approximation algorithms for this version of low-rank approximation that work for every value of $p \geq 1$, including $p = \infty$. Our algorithms are simple, easy to implement, work well in practice, and illustrate interesting tradeoffs between the approximation quality, the running time, and the rank of the approximating matrix.
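  For the classical $p = 2$ case the optimum is the truncated SVD (Eckart-Young theorem); a small NumPy baseline against which the entrywise $\ell_p$ error of any candidate can be measured (an illustration only, not the paper's algorithm for general $p$):

  ```python
  import numpy as np

  def best_rank_k_l2(A, k):
      """Truncated SVD: the optimal rank-k approximation for p = 2
      (Frobenius / entrywise l2 error), by the Eckart-Young theorem."""
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      return (U[:, :k] * s[:k]) @ Vt[:k, :]

  def entrywise_lp_error(A, B, p):
      """Entrywise l_p approximation error ||A - B||_p for finite p >= 1,
      the objective minimised (over low-rank B) in the paper."""
      return float(np.sum(np.abs(A - B) ** p) ** (1.0 / p))
  ```

  For $p \neq 2$ the SVD is no longer optimal in general, which is precisely the gap the paper's algorithms address.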
- May 19 2017 quant-ph arXiv:1705.06719v1One of the main challenges of quantum information is the reliable verification of quantum entanglement. The conventional detection schemes require repeated measurement on a large number of identically prepared systems. This is hard to achieve in practice when dealing with large-scale entangled quantum systems. In this letter we develop a novel method by formulating verification as a decision procedure, i.e. entanglement is seen as the ability of a quantum system to answer certain "yes-no" questions. We show that for a variety of large quantum states even a single copy suffices to detect entanglement with a high probability by using local measurements. For example, a single copy of a $16$-qubit $k$-producible state or one copy of a $24$-qubit linear cluster state suffices to verify entanglement with more than $95\%$ confidence. Our method is applicable to many important classes of states, such as cluster states or ground states of local Hamiltonians in general.
- May 19 2017 quant-ph arXiv:1705.06666v1Quantum metrology calculates the ultimate precision of all estimation strategies, measured by their root mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting we redefine the "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in quantum estimation theory), and show that the former can be attained only by sequential strategies or by parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.
- We propose an alternative evaluation of quantum entanglement by measuring the maximum violation of Bell's inequality without performing a partial trace operation. This proposal is demonstrated by bridging the maximum violation of Bell's inequality and the concurrence of a pure state in an $n$-qubit system, in which one subsystem contains only one qubit and the state is a linear combination of two product states. We apply this relation to the ground states of four qubits in the Wen-Plaquette model and show that they are maximally entangled. A topological entanglement entropy of the Wen-Plaquette model could be obtained by relating the upper bound of the maximum violation of Bell's inequality to the concurrences of a pure state with respect to different bipartitions.
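  The two-qubit instance of such a bridge is explicit: a pure state with concurrence $C$ attains a maximal CHSH value of $2\sqrt{1+C^2}$ (Gisin's relation). A short sketch of both quantities, as an illustration of the kind of relation involved rather than the paper's $n$-qubit construction:

  ```python
  import numpy as np

  def concurrence(psi):
      """Concurrence C = |<psi| sigma_y (x) sigma_y |psi*>| of a two-qubit
      pure state, given as a length-4 amplitude vector in the basis
      00, 01, 10, 11 (normalised internally)."""
      psi = np.asarray(psi, dtype=complex)
      psi = psi / np.linalg.norm(psi)
      sy = np.array([[0, -1j], [1j, 0]])
      return float(abs(psi.conj() @ np.kron(sy, sy) @ psi.conj()))

  def max_chsh(psi):
      """Gisin's relation: maximal CHSH value 2*sqrt(1 + C^2) of a
      two-qubit pure state with concurrence C."""
      return 2.0 * np.sqrt(1.0 + concurrence(psi) ** 2)
  ```

  A Bell state gives $C = 1$ and the Tsirelson value $2\sqrt{2}$, while a product state gives $C = 0$ and the classical bound $2$.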
- The resolution of linear systems with positive integer variables is a basic yet difficult computational problem with many applications. We consider sparse uncorrelated random systems parametrised by the density $c$ and the ratio $\alpha=N/M$ between the number of variables $N$ and the number of constraints $M$. By means of ensemble calculations we show that the space of feasible solutions exhibits a van der Waals phase diagram in the plane ($c$, $\alpha$). We give numerical evidence that the associated computational problems become more difficult across the critical point, particularly in the coexistence region.
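  The underlying decision problem is easy to state in code. A brute-force feasibility check (our own illustrative sketch; the artificial search bound `max_val` is not part of the problem as studied) makes the exponential cost in the number of variables visible:

  ```python
  from itertools import product

  def has_positive_integer_solution(A, b, max_val):
      """Brute-force check for x with entries in {1, ..., max_val}
      satisfying A x = b; exponential in the number of variables N."""
      N = len(A[0])
      return any(
          all(sum(a * xi for a, xi in zip(row, x)) == bi
              for row, bi in zip(A, b))
          for x in product(range(1, max_val + 1), repeat=N)
      )
  ```

  For instance, the system $x_1 + x_2 = 3$, $x_1 + 2x_2 = 4$ is feasible (at $x = (2, 1)$), while replacing the right-hand side by $(2, 5)$ makes it infeasible over positive integers.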
- In this work, we introduce the average top-$k$ (AT$_k$) loss as a new ensemble loss for supervised learning, defined as the average over the $k$ largest individual losses on a training dataset. We show that the AT$_k$ loss is a natural generalization of the two widely used ensemble losses, namely the average loss and the maximum loss, and that it can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function of all individual losses, which leads to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the AT$_k$ loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of minimum average top-$k$ (MAT$_k$) learning, covering the classification calibration of the AT$_k$ loss and the error bounds of AT$_k$-SVM. We demonstrate the applicability of minimum average top-$k$ learning for binary classification and regression using synthetic and real datasets.
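  The definition and its standard variational reformulation (which makes the convexity explicit) are compact enough to sketch directly; a minimal NumPy illustration with function names of our choosing:

  ```python
  import numpy as np

  def average_top_k(losses, k):
      """AT_k loss: the average of the k largest individual losses.
      k = n recovers the average loss; k = 1 recovers the maximum loss."""
      l = np.sort(np.asarray(losses, dtype=float))[::-1]
      return float(l[:k].mean())

  def average_top_k_convex(losses, k):
      """Equivalent variational form
          AT_k = min over lam of { lam + (1/k) * sum_i max(l_i - lam, 0) },
      a convex function of the individual losses; the minimum is attained
      at lam equal to the k-th largest loss."""
      l = np.asarray(losses, dtype=float)
      lam = np.sort(l)[::-1][k - 1]
      return float(lam + np.maximum(l - lam, 0.0).sum() / k)
  ```

  The variational form is what allows AT$_k$ objectives to be minimised with conventional gradient-based solvers, since each summand is a convex hinge in the individual losses.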
- May 25 2017 quant-ph arXiv:1705.08825v1Characterization and certification of nonlocal correlations is one of the central topics in quantum information theory. In this work, we develop detection methods for entanglement and steering based on universal uncertainty relations and fine-grained uncertainty relations. In the course of our study, the uncertainty relations are formulated in majorization form, and the uncertainty quantifier can be chosen as any convex Schur-concave function; this leads to a large set of inequalities, including all existing criteria based on entropies. We address the question of whether all steerable states (or entangled states) can be witnessed by some uncertainty-based inequality, and find that for pure states and many important families of states this is the case.
- Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by taking a Bayesian point of view, where through sparsity inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency.