- Jan 17 2017 quant-ph arXiv:1701.04299v1 Randomized benchmarking (RB) is an efficient and robust method to characterize gate errors in quantum circuits. Averaging over random sequences of gates leads to estimates of gate errors, in terms of the average fidelity, that are isolated from the state preparation and measurement errors that plague other methods like channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of samples required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here we show that the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits. We also show that the number of samples required with a single qubit is substantially smaller than in previous rigorous results, especially in the limit of large sequence lengths. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility.
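The zeroth-order RB model fits the average sequence fidelity to an exponential decay $F(m) = A p^m + B$ and reads the error rate off the decay parameter $p$. A minimal sketch of that fit (not the paper's confidence-interval analysis; fixing the asymptote $B = 1/2$ for a single qubit is an assumption of the illustration):

```python
import numpy as np

# Illustrative fit of the standard RB decay model F(m) = A * p**m + B.
# Assumes the asymptote B = 1/2 (single qubit decaying to the maximally
# mixed state), so log(F - B) is linear in the sequence length m.
def fit_rb_decay(seq_lengths, survival_probs, B=0.5):
    y = np.log(np.asarray(survival_probs) - B)
    slope, intercept = np.polyfit(seq_lengths, y, 1)
    return np.exp(slope), np.exp(intercept)  # (p, A)

# Synthetic noiseless data with p = 0.99, A = 0.5.
m = np.arange(1, 200, 10)
F = 0.5 * 0.99 ** m + 0.5
p_hat, A_hat = fit_rb_decay(m, F)
r = (1 - p_hat) / 2  # average error rate for a single qubit (d = 2)
```

In practice each $F(m)$ is itself an average over a finite number of sampled sequences, and the paper's contribution is bounding how many such samples suffice.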
- How many quantum queries are required to determine the coefficients of a degree-$d$ polynomial in $n$ variables? We present and analyze quantum algorithms for this multivariate polynomial interpolation problem over the fields $\mathbb{F}_q$, $\mathbb{R}$, and $\mathbb{C}$. We show that $k_{\mathbb{C}}$ and $2k_{\mathbb{C}}$ queries suffice to achieve probability $1$ for $\mathbb{C}$ and $\mathbb{R}$, respectively, where $k_{\mathbb{C}}=\lceil\frac{1}{n+1}\binom{n+d}{d}\rceil$ except for $d=2$ and four other special cases. For $\mathbb{F}_q$, we show that $\lceil\frac{d}{n+d}\binom{n+d}{d}\rceil$ queries suffice to achieve probability approaching $1$ for large field order $q$. The classical query complexity of this problem is $\binom{n+d}{d}$, so our result provides a speedup by a factor of $n+1$, $\frac{n+1}{2}$, and $\frac{n+d}{d}$ for $\mathbb{C}$, $\mathbb{R}$, and $\mathbb{F}_q$, respectively. Thus we find a much larger gap between classical and quantum algorithms than in the univariate case, where the speedup is by a factor of $2$. For the case of $\mathbb{F}_q$, we conjecture that $2k_{\mathbb{C}}$ queries also suffice to achieve probability approaching $1$ for large field order $q$, although we leave this as an open problem.
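The quoted query counts are simple functions of the binomial coefficient $\binom{n+d}{d}$; a small sketch tabulating them (the function and variable names are ours, not the paper's):

```python
from math import comb, ceil

# Classical query complexity C(n+d, d) versus the quantum counts quoted
# above: k_C = ceil(C(n+d,d)/(n+1)) for C, 2*k_C for R, and
# ceil(d*C(n+d,d)/(n+d)) for F_q (ignoring the d = 2 and four other
# special cases noted in the abstract).
def query_counts(n, d):
    classical = comb(n + d, d)
    k_C = ceil(classical / (n + 1))
    k_Fq = ceil(d * classical / (n + d))
    return classical, k_C, 2 * k_C, k_Fq

# e.g. n = 4 variables, degree d = 3:
classical, k_c, k_r, k_fq = query_counts(4, 3)  # 35 classical queries vs 7, 14, 15
```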
- We study the consequences of 'super-quantum non-local correlations' as represented by the PR-box model of Popescu and Rohrlich, and show that PR-boxes can enhance the capacity of noisy interference channels between two senders and two receivers. PR-box correlations violate Bell/CHSH inequalities and are thus stronger -- more non-local -- than quantum mechanics; yet they are weak enough to respect special relativity in prohibiting faster-than-light communication. Understanding their power will yield insight into the non-locality of quantum mechanics. We exhibit two proof-of-concept channels: first, we show a channel between two sender-receiver pairs where the senders are not allowed to communicate, for which a shared super-quantum bit (a PR-box) allows perfect communication. This feat is not achievable with the best classical (senders share no resources) or quantum entanglement-assisted (senders share entanglement) strategies. Second, we demonstrate a class of channels for which a tunable parameter $\epsilon$ achieves a double separation of capacities: for some range of $\epsilon$, the super-quantum assisted strategy does better than the entanglement-assisted strategy, which in turn does better than the classical one.
- A recent analysis by Pikovski et al. [Nat. Phys. 11, 668 (2015)] has triggered interest in the question of how to include relativistic corrections in the quantum dynamics governing many-particle systems in a gravitational field. Here we show how the center-of-mass motion of a quantum system subject to gravity can be derived more rigorously, addressing the ambiguous definition of relativistic center-of-mass coordinates. We further demonstrate that, contrary to the prediction by Pikovski et al., external forces play a crucial role in the relativistic coupling of internal and external degrees of freedom, resulting in a complete cancellation of the alleged coupling in Earth-bound laboratories for systems supported against gravity by an external force. We conclude that the proposed decoherence effect is an effect of relative acceleration between quantum system and measurement device, rather than a universal effect in gravitational fields.
- Jan 17 2017 physics.data-an q-bio.NC arXiv:1701.04277v1 In real-world applications, observations are often constrained to a small fraction of a system. Such spatial subsampling can be caused by the inaccessibility or the sheer size of the system, and cannot be overcome by longer sampling. Spatial subsampling can strongly bias inferences about a system's aggregated properties. To overcome the bias, we derive analytically a subsampling scaling framework that is applicable to different observables, including the distributions of neuronal avalanches, of the number of people infected during an epidemic outbreak, and of node degrees. We demonstrate how to infer the correct distributions of the underlying full system, how to apply the framework to distinguish critical from subcritical systems, and how to disentangle subsampling and finite-size effects. Lastly, we apply subsampling scaling to neuronal avalanche models and to recordings from developing neural networks. We show that only mature, but not young, networks follow power-law scaling, indicating self-organization to criticality during development.
- Jan 17 2017 astro-ph.CO arXiv:1701.04149v1 Big Bang nucleosynthesis (BBN) theory predicts the abundances of the light elements D, $^3$He, $^4$He and $^7$Li produced in the early universe. The primordial abundances of D and $^4$He inferred from observational data are in good agreement with predictions; however, the BBN theory overestimates the primordial $^7$Li abundance by about a factor of three. This is the so-called "cosmological lithium problem". Solutions to this problem using conventional astrophysics and nuclear physics have not been successful over the past few decades, probably indicating the presence of new physics during the era of BBN. We have investigated the impact on BBN predictions of adopting a generalized distribution to describe the velocities of nucleons in the framework of Tsallis non-extensive statistics. This generalized velocity distribution is characterized by a parameter $q$, and reduces to the usually assumed Maxwell-Boltzmann distribution for $q = 1$. We find excellent agreement between predicted and observed primordial abundances of D, $^4$He and $^7$Li for $1.069\leq q \leq 1.082$, suggesting a possible new solution to the cosmological lithium problem.
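The Tsallis form replaces the Boltzmann factor $e^{-E/k_BT}$ by a $q$-deformed power law. A generic sketch of that deformation (the normalization and the paper's reaction-rate integrals are omitted):

```python
import numpy as np

# Generic Tsallis q-exponential e_q(-x) = [1 - (1-q) x]^(1/(1-q)), with
# x = E / (k_B T); it reduces to the Maxwell-Boltzmann factor exp(-x)
# as q -> 1. For q > 1 (the regime favored in the abstract) the tail is
# a power law rather than an exponential.
def q_exponential(x, q):
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(-x)
    base = np.clip(1.0 - (1.0 - q) * x, 0.0, None)
    return base ** (1.0 / (1.0 - q))

x = np.linspace(0.0, 10.0, 101)
mb = q_exponential(x, 1.0)         # Maxwell-Boltzmann limit
tsallis = q_exponential(x, 1.075)  # mid-range of the best-fit interval for q
```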
- Jan 17 2017 quant-ph arXiv:1701.04026v1 The Euclidean plane is certainly the simplest example of a real Hilbert space. Viewed as a space of quantum states, it can be used as a nice introductory example in teaching the quantum formalism. The pure states form the unit circle (actually one half of it), the mixed states form the unit disk (actually one half of it), and rotations in the plane rule time evolution through a Majorana-like equation involving only real quantities. The set of pure states, or the set of mixed states at fixed "temperature", resolves the identity, and these sets are used for the integral quantization of functions on the unit circle and to give a semi-classical portrait of quantum observables. Interesting probabilistic aspects are developed. Since the tensor product of two planes, their direct sum, and their Cartesian product are isomorphic ($2$ is the unique solution to $x^x = x \times x = x + x$), and they are also isomorphic to $\mathbb{C}^2$ and to the quaternion field $\mathbb{H}$ (as a vector space), we describe an interesting relation between entanglement of real states, one-half spin cat states, and unit-norm quaternions, which form the group SU(2). Finally, we explain the most general form of the Hamiltonian in the real plane by considering the integral quantization of a magnetic-like interaction potential viewed as a classical observable on the unit 2-sphere.
- We use quantum energy teleportation in the light-matter interaction as an operational means to create quantum field states that violate energy conditions and have negative local stress-energy densities. We show that the protocol is optimal in the sense that it scales in a way that saturates the quantum interest conjecture.
- Jan 17 2017 quant-ph arXiv:1701.04392v1 To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous-time quantum walk to find a marked node on a random temporal network. We consider a network of $n$ nodes constituted by a time-ordered sequence of Erdős–Rényi random graphs $G(n,p)$, where $p$ is the probability that any two given nodes are connected: after every time interval $\tau$, a new graph $G(n,p)$ replaces the previous one. We prove analytically that for any given $p$, there is always a range of values of $\tau$ for which the running time of the algorithm is optimal, i.e.\ $\mathcal{O}(\sqrt{n})$, even when search on the individual static graphs constituting the temporal network is sub-optimal. On the other hand, when the energy $1/\tau$ is close to the site energy of the marked node, we need sufficiently high connectivity to ensure optimality, namely the average degree of each node should be at least $\mathcal{O}(\sqrt{n})$. From this first study of quantum spatial search on a time-dependent network, it emerges that the interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our study can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve efficient quantum information tasks on dynamical random networks.
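In the limit where every node is connected, the walk reduces to the exactly solvable search Hamiltonian $H = -\gamma A - |w\rangle\langle w|$ with $\gamma = 1/n$. A numerical sketch of that static complete-graph limit (not the temporal $G(n,p)$ setting of the paper):

```python
import numpy as np

# Continuous-time quantum walk search on the complete graph K_n:
# H = -(1/n) A - |w><w|, started in the uniform superposition and run
# for time t ~ (pi/2) sqrt(n), after which the marked node is found
# with probability close to 1.
def ctqw_search_prob(n, t, marked=0):
    A = np.ones((n, n)) - np.eye(n)      # adjacency matrix of K_n
    H = -A / n
    H[marked, marked] -= 1.0             # oracle term -|w><w|
    vals, vecs = np.linalg.eigh(H)
    s = np.full(n, 1.0 / np.sqrt(n))     # uniform initial state
    amp = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ s))
    return abs(amp[marked]) ** 2

n = 64
p_success = ctqw_search_prob(n, np.pi / 2 * np.sqrt(n))  # close to 1
```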
- We argue that $isotropic$ scalar fluctuations in solid inflation are adiabatic in the super-horizon limit. During the solid phase this adiabatic mode has peculiar features: constant energy-density slices and comoving slices do not coincide, and their curvatures, parameterized respectively by $\zeta$ and $\mathcal R$, both evolve in time. The existence of this adiabatic mode implies that Maldacena's squeezed-limit consistency relation holds after angular average over the long mode. The correlation functions of a long-wavelength spherical scalar mode with several short scalar or tensor modes are fixed by the scaling behavior of the correlators of short modes, independently of the solid inflation action or the dynamics of reheating.
- Based on the convex least-squares estimator, we propose two different procedures for testing convexity of a probability mass function supported on $\mathbb{N}$ with an unknown finite support. The procedures are shown to be asymptotically calibrated.
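For a pmf on the integers, convexity is just non-negativity of the second differences; a tiny sketch of the property being tested (the paper's least-squares estimator and calibration are not reproduced here):

```python
# A pmf p[0..K] is convex iff p[k+1] - 2*p[k] + p[k-1] >= 0 for all
# interior k. Geometric pmfs are convex; a pmf with an interior mode
# is not.
def is_convex_pmf(p, tol=1e-12):
    return all(p[k + 1] - 2 * p[k] + p[k - 1] >= -tol
               for k in range(1, len(p) - 1))

geometric = [0.5 * 0.5 ** k for k in range(10)]  # convex
peaked = [0.1, 0.3, 0.3, 0.2, 0.1]               # interior mode: not convex
```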
- A fourth-order theory of gravity is considered which, in terms of dynamics, has the same degrees of freedom and number of constraints as scalar-tensor theories. In addition it admits a canonical point-like Lagrangian description. We study the critical points of the theory and show that it can describe the matter epoch of the universe and that two accelerated phases can be recovered, one of which describes a de Sitter universe. Finally, for some models, exact solutions are presented.
- Based on the Effective Field Theory (EFT) of cosmological perturbations, we revisit nonsingular cosmologies without using the integral inequality. We clarify the pathology in nonsingular cubic Galileon models and show how to cure it within the EFT, with new insights into this issue. With a new application of the $R^{(3)}\delta g^{00}$ operator, we build a model with a Genesis phase followed by slow-roll inflation. The spectrum of primordial perturbations may be simulated numerically; it exhibits a large-scale cutoff, as hinted at by the CMB large-scale anomalies.
- Briët et al. showed that an efficient communication protocol implies a reliable XOR game protocol. In this work, we improve this relationship and obtain a nontrivial lower bound $2\log3\approx 3.1699$ on the XOR-amortized communication complexity of the equality function. The proof uses an elegant idea of Pawłowski et al. from a paper on information causality. Although the improvement over the previous lower bound is at most a factor of 2, all arguments and proofs in this work are quite simple and intuitive.
- We consider the dynamics of message passing for spatially coupled codes and, in particular, the set of density evolution equations that tracks the profile of decoding errors along the spatial direction of coupling. It is known that, for suitable boundary conditions and after a transient phase, the error profile exhibits a "solitonic behavior": a uniquely shaped wavelike solution develops that propagates with constant velocity. Under this assumption we derive an analytical formula for the velocity in the framework of a continuum limit of the spatially coupled system. The general formalism is developed for spatially coupled low-density parity-check codes on general binary memoryless symmetric channels, which form the main system of interest in this work. We apply the formula to special channels and illustrate that it matches the direct numerical evaluation of the velocity for a wide range of noise values. A possible application of the velocity formula to the evaluation of finite-size scaling law parameters is also discussed. We conduct a similar analysis for general scalar systems and illustrate the findings with applications to compressive sensing and generalized low-density parity-check codes on the binary erasure or binary symmetric channels.
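The spatially coupled recursion averages the uncoupled density evolution over neighboring positions; the uncoupled recursion itself, for an $(l,r)$-regular LDPC ensemble on the binary erasure channel, is a one-line fixed-point iteration. A sketch (BEC case only, uncoupled, with our own parameter names):

```python
# Uncoupled density evolution for an (l, r)-regular LDPC ensemble on the
# BEC with erasure probability eps:
#     x_{t+1} = eps * (1 - (1 - x_t)**(r - 1))**(l - 1).
# Below the BP threshold (~0.4294 for (3,6)) the erasure fraction decays
# to zero; above it, the iteration is stuck at a nonzero fixed point.
def de_fixed_point(eps, l=3, r=6, iters=5000):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (r - 1)) ** (l - 1)
    return x

below = de_fixed_point(0.40)  # decodes: x -> 0
above = de_fixed_point(0.45)  # fails: x stuck at a nonzero fixed point
```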
- Jan 17 2017 quant-ph physics.atom-ph arXiv:1701.04195v1 A long-time quantum memory capable of storing and measuring quantum information at the single-qubit level is an essential ingredient for practical quantum computation and communication. Recently, there has been remarkable progress in increasing the coherence time of ensemble-based quantum memories of trapped ions, nuclear spins of ionized donors, or nuclear spins in a solid. Until now, however, the record coherence time of a single qubit has been on the order of a few tens of seconds, demonstrated in trapped ion systems. The qubit coherence time in a trapped ion is mainly limited by magnetic field fluctuations and the decreasing state-detection efficiency associated with the motional heating of the ion in the absence of laser cooling. Here we report a coherence time of a single qubit exceeding $10$ minutes in the hyperfine states of a Yb$^+$ ion sympathetically cooled by a Ba$^+$ ion in the same Paul trap, which eliminates the heating of the qubit ion even at room temperature. To reach such a coherence time, we apply a few thousand dynamical decoupling pulses to suppress the field fluctuation noise. The long-time quantum memory demonstrated in this experiment marks an important step toward the construction of the memory zone in scalable quantum computer architectures and toward ion-trap-based quantum networks. With further improvement of the coherence time by techniques such as magnetic field shielding, and an increase in the number of qubits in the quantum memory, our demonstration also lays a basis for other applications, including quantum money.
- In this article we present a Bernstein inequality for sums of random variables which are defined on a graphical network whose nodes grow at an exponential rate. The inequality can be used to derive concentration inequalities in highly-connected networks. It can be useful to obtain consistency properties for nonparametric estimators of conditional expectation functions which are derived from such networks.
- Jan 17 2017 quant-ph arXiv:1701.04155v1 We present a practical classification scheme for four-partite entangled states under stochastic local operations and classical communication (SLOCC). By transforming a four-partite state into a triple-state set composed of two tripartite states and one bipartite state, the entanglement classification is reduced to that of only tripartite and bipartite entanglements. This reduction method has the merit of being extendable to the classification of any multipartite entangled state, and meanwhile it provides insight into the entanglement character of the subsystems.
- Jan 17 2017 quant-ph arXiv:1701.04114v1 Experimental demonstration of entanglement requires the experimentalist to have precise control over the system on which the measurements are performed, as prescribed by an appropriate entanglement witness. To avoid such a trust problem, device-independent entanglement witnesses (DIEWs) for genuine tripartite entanglement have recently been proposed, which are capable of testing genuine entanglement without a precise description of the Hilbert space dimension and the measured operators, i.e., the apparatuses are treated as black boxes. Here we design a protocol for enhancing the possibility of identifying genuine tripartite entanglement in a device-independent manner. We consider three mixed tripartite quantum states whose genuine entanglement cannot be detected by applying standard DIEWs, but whose genuine tripartite entanglement can be detected by applying the same witnesses when the states are distributed in a suitable entanglement swapping network.
- Jan 17 2017 physics.chem-ph quant-ph arXiv:1701.04100v1 In a recent publication [J. Chem. Phys. 142, 034115 (2015)] we derived a hierarchy of coupled differential equations in the time domain to calculate the linear optical properties of molecular aggregates. Here we provide details about issues concerning the numerical implementation. In addition we present the corresponding hierarchy in the frequency domain.
- We establish the existence of $1$-parameter families of $\epsilon$-dependent solutions to the Einstein-Euler equations with a positive cosmological constant $\Lambda >0$ and a linear equation of state $p=\epsilon^2 K \rho$, $0<K\leq 1/3$, for the parameter values $0<\epsilon < \epsilon_0$. These solutions exist globally to the future, converge as $\epsilon \searrow 0$ to solutions of the cosmological Poisson-Euler equations of Newtonian gravity, and are inhomogeneous non-linear perturbations of FLRW fluid solutions.
- Jan 17 2017 astro-ph.CO arXiv:1701.03964v1 The hypothesis is made that, at large scales where General Relativity may be applied, empty space is scale invariant. This establishes a relation between the cosmological constant and the scale factor of the scale invariant framework. This relation brings major simplifications in the scale invariant equations for cosmology, which now contain a new term, depending on the derivative of the scale factor, that opposes gravity and produces an accelerated expansion. The displacements due to the acceleration term make a high contribution $\Omega_\lambda$ to the energy density of the Universe, satisfying an equation of the form $\Omega_m + \Omega_k + \Omega_\lambda = 1$. The models do not demand the existence of unknown particles. There is a family of flat models with different density parameters $\Omega_m < 1$. Numerical integrations of the cosmological equations for different values of the curvature parameter $k$ and the density parameter $\Omega_m$ are performed. The presence of even tiny amounts of matter in the Universe tends to kill scale invariance; for $\Omega_m = 0.3$, however, the effect is not yet complete. The models with non-zero density start explosively, with first a braking phase followed by a continuously accelerating expansion. Several observational properties are examined, in particular the distances, the $m$--$z$ diagram, and the $\Omega_m$ vs. $\lambda$ plot. Comparisons with observations are also performed for the Hubble constant $H_0$ vs. $\Omega_m$, for the expansion history in the plot of $H(z)/(z+1)$ vs. redshift $z$, and for the transition redshift from braking to acceleration. These first dynamical tests are satisfied by the scale invariant models, which thus deserve further studies.
- Jan 17 2017 quant-ph arXiv:1701.03859v1 In this paper, we give a characterization of arbitrary $n$-mode Gaussian coherence breaking channels (GCBCs) and show that the tensor product of a GCBC and an arbitrary Gaussian channel maps all input states into product states. The inclusion relations among GCBCs, Gaussian positive partial transpose channels (GPPTCs), Gaussian entanglement breaking channels (GEBCs), Gaussian classical-quantum channels (GCQCs) and Gaussian quantum-classical channels (GQCCs) are displayed. Finally, we prove that the Gaussian classical ($\chi$-)capacity for tensor products of GCBCs and arbitrary Gaussian channels is additive.
- In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biased view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (neutron stars or black holes accreting from a stellar companion), under a range of observing strategies. We first generate realistic light curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods (the Lincoln-Petersen and Schnabel estimators and related techniques) and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced and demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle in a fraction of the observing visits (10-50%) required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostatistics community can be readily called from within Python.
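The simplest capture-recapture estimator mentioned above needs only two visits; a sketch using the Chapman bias-corrected form of the Lincoln-Petersen estimate (the numbers are invented for illustration):

```python
# Chapman's bias-corrected Lincoln-Petersen estimator: visit 1 detects
# n1 sources, visit 2 detects n2, and m are seen in both; the population
# estimate is N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1.
def chapman_estimate(n1, n2, m):
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# e.g. 40 transients detected in visit 1, 35 in visit 2, 14 in both:
N_hat = chapman_estimate(40, 35, 14)  # ~97.4 sources estimated
```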
- Jan 17 2017 cond-mat.mes-hall arXiv:1701.03796v1 Decoherence due to charge noise is one of the central challenges in using spin qubits in semiconductor quantum dots as a platform for quantum information processing. Recently, it has been experimentally demonstrated in both Si and GaAs singlet-triplet qubits that the effects of charge noise can be suppressed if qubit operations are implemented using symmetric barrier control instead of the standard tilt control. Here, we investigate the key issue of whether the benefits of barrier control persist over the entire set of single-qubit gates by performing randomized benchmarking simulations. We find the surprising result that the improvement afforded by barrier control depends sensitively on the amount of spin noise: for the minimal nuclear spin noise levels present in Si, the coherence time improves by more than two orders of magnitude, whereas in GaAs, by contrast, the coherence time is essentially the same for barrier and tilt control. However, we establish that barrier control becomes beneficial if qubit operations are performed using a new family of composite pulses that reduce gate times by up to 90%. With these optimized pulses, barrier control is the best way to achieve high-fidelity quantum gates in singlet-triplet qubits.
- A hyperlink is a finite set of non-intersecting simple closed curves in $\mathbb{R} \times \mathbb{R}^3$. We compute the Wilson Loop observable using a path integral with an Einstein-Hilbert action. Using axial-gauge fixing, we can write this path integral as the limit of a sequence of Chern-Simons integrals, studied earlier in our previous work on the Chern-Simons path integrals in $\mathbb{R}^3$. We will show that the Wilson Loop observable can be computed from a link diagram of a hyperlink, projected on a plane. Only crossings in the diagram will contribute to the path integral. Furthermore, we will show that it is invariant under an equivalence relation defined on the set of hyperlinks.
- Jan 17 2017 astro-ph.EP arXiv:1701.04401v1
- Jan 17 2017 astro-ph.CO arXiv:1701.04396v1
- Jan 17 2017 cs.RO arXiv:1701.04395v1
- Jan 17 2017 quant-ph arXiv:1701.04393v1
- Jan 17 2017 cs.LO arXiv:1701.04391v1
- Jan 17 2017 cond-mat.soft cond-mat.stat-mech arXiv:1701.04390v1
- Jan 17 2017 math.FA arXiv:1701.04388v1
- Jan 17 2017 cond-mat.mes-hall arXiv:1701.04386v1
- Jan 17 2017 math.AG arXiv:1701.04385v1
- Jan 17 2017 math.NT arXiv:1701.04380v1
- Jan 17 2017 quant-ph arXiv:1701.04378v1
- Jan 17 2017 math.RT arXiv:1701.04377v1
- Jan 17 2017 astro-ph.SR arXiv:1701.04376v1
- Jan 17 2017 cs.DM arXiv:1701.04375v1
- Jan 17 2017 math.GR arXiv:1701.04374v1