# Top arXiv papers

• Randomized benchmarking (RB) is an efficient and robust method to characterize gate errors in quantum circuits. Averaging over random sequences of gates leads to estimates of gate errors in terms of the average fidelity that are isolated from the state preparation and measurement errors that plague other methods like channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of samples required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here we show that the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits. We also show that the number of samples required with a single qubit is substantially smaller than previous rigorous results, especially in the limit of large sequence lengths. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility.
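As a minimal sketch of the exponential decay model that RB experiments fit (illustrative values and a noiseless fit, with the SPAM parameters $A$ and $B$ assumed known; the paper's contribution concerns how many random sequences such a fit needs, not the fit itself):

```python
import numpy as np

# Standard RB decay model: average sequence fidelity F(m) = A * p^m + B,
# where m is the sequence length and A, B absorb SPAM errors.
# Values below are illustrative, not taken from the paper.
A, B, p, d = 0.5, 0.5, 0.98, 2  # d = Hilbert space dimension (1 qubit)

lengths = np.array([2, 4, 8, 16, 32, 64, 128])
fidelities = A * p ** lengths + B  # noiseless synthetic data

# With A and B known, p follows from a log-linear least-squares fit.
slope, _ = np.polyfit(lengths, np.log(fidelities - B), 1)
p_hat = np.exp(slope)
r_hat = (d - 1) * (1 - p_hat) / d  # average gate error inferred from p
```

In a real experiment the fidelities are noisy averages over sampled sequences, and the quoted result bounds how many such samples a rigorous confidence interval on `p_hat` requires.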
• How many quantum queries are required to determine the coefficients of a degree-$d$ polynomial in $n$ variables? We present and analyze quantum algorithms for this multivariate polynomial interpolation problem over the fields $\mathbb{F}_q$, $\mathbb{R}$, and $\mathbb{C}$. We show that $k_{\mathbb{C}}$ and $2k_{\mathbb{C}}$ queries suffice to achieve probability $1$ for $\mathbb{C}$ and $\mathbb{R}$, respectively, where $k_{\mathbb{C}}=\lceil\frac{1}{n+1}\binom{n+d}{d}\rceil$ except for $d=2$ and four other special cases. For $\mathbb{F}_q$, we show that $\lceil\frac{d}{n+d}\binom{n+d}{d}\rceil$ queries suffice to achieve probability approaching $1$ for large field order $q$. The classical query complexity of this problem is $\binom{n+d}{d}$, so our results provide speedups by factors of $n+1$, $\frac{n+1}{2}$, and $\frac{n+d}{d}$ for $\mathbb{C}$, $\mathbb{R}$, and $\mathbb{F}_q$, respectively. Thus we find a much larger gap between classical and quantum algorithms than in the univariate case, where the speedup is by a factor of $2$. For the case of $\mathbb{F}_q$, we conjecture that $2k_{\mathbb{C}}$ queries also suffice to achieve probability approaching $1$ for large field order $q$, although we leave this as an open problem.
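The claimed query counts are straightforward to tabulate with binomial coefficients; a small sketch (assuming the generic case, i.e. excluding $d=2$ and the four special cases the abstract mentions):

```python
from math import ceil, comb

def k_C(n, d):
    """Quantum query count over C in the generic case: ceil(C(n+d,d)/(n+1))."""
    return ceil(comb(n + d, d) / (n + 1))

n, d = 3, 4
classical = comb(n + d, d)  # classical query complexity C(n+d, d) = 35
quantum_C = k_C(n, d)       # 9 queries over C
quantum_R = 2 * k_C(n, d)   # 18 queries over R
```

Here the classical-to-quantum ratio, $35/9 \approx 3.9$, approaches the stated $n+1 = 4$ speedup factor for $\mathbb{C}$.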
• We study the consequences of 'super-quantum non-local correlations' as represented by the PR-box model of Popescu and Rohrlich, and show that PR-boxes can enhance the capacity of noisy interference channels between two senders and two receivers. PR-box correlations violate Bell/CHSH inequalities and are thus stronger -- more non-local -- than quantum mechanics, yet weak enough to respect special relativity by prohibiting faster-than-light communication. Understanding their power will yield insight into the non-locality of quantum mechanics. We exhibit two proof-of-concept channels. First, we show a channel between two sender-receiver pairs where the senders are not allowed to communicate, for which a shared super-quantum bit (a PR-box) allows perfect communication. This feat is not achievable with the best classical (senders share no resources) or quantum entanglement-assisted (senders share entanglement) strategies. Second, we demonstrate a class of channels for which a tunable parameter $\epsilon$ achieves a double separation of capacities: for some range of $\epsilon$, the super-quantum-assisted strategy does better than the entanglement-assisted strategy, which in turn does better than the classical one.
• A recent analysis by Pikovski et al. [Nat. Phys. 11, 668 (2015)] has triggered interest in the question of how to include relativistic corrections in the quantum dynamics governing many-particle systems in a gravitational field. Here we show how the center-of-mass motion of a quantum system subject to gravity can be derived more rigorously, addressing the ambiguous definition of relativistic center-of-mass coordinates. We further demonstrate that, contrary to the prediction by Pikovski et al., external forces play a crucial role in the relativistic coupling of internal and external degrees of freedom, resulting in a complete cancellation of the alleged coupling in Earth-bound laboratories for systems supported against gravity by an external force. We conclude that the proposed decoherence effect is an effect of relative acceleration between quantum system and measurement device, rather than a universal effect in gravitational fields.
• In real-world applications, observations are often constrained to a small fraction of a system. Such spatial subsampling can be caused by the inaccessibility or the sheer size of the system, and cannot be overcome by longer sampling. Spatial subsampling can strongly bias inferences about a system's aggregated properties. To overcome the bias, we derive analytically a subsampling scaling framework that is applicable to different observables, including distributions of neuronal avalanches, of the number of people infected during an epidemic outbreak, and of node degrees. We demonstrate how to infer the correct distributions of the underlying full system, how to apply the framework to distinguish critical from subcritical systems, and how to disentangle subsampling and finite-size effects. Lastly, we apply subsampling scaling to neuronal avalanche models and to recordings from developing neural networks. We show that only mature, but not young, networks follow power-law scaling, indicating self-organization to criticality during development.
• Big Bang nucleosynthesis (BBN) theory predicts the abundances of the light elements D, $^3$He, $^4$He and $^7$Li produced in the early universe. The primordial abundances of D and $^4$He inferred from observational data are in good agreement with predictions, however, the BBN theory overestimates the primordial $^7$Li abundance by about a factor of three. This is the so-called "cosmological lithium problem". Solutions to this problem using conventional astrophysics and nuclear physics have not been successful over the past few decades, probably indicating the presence of new physics during the era of BBN. We have investigated the impact on BBN predictions of adopting a generalized distribution to describe the velocities of nucleons in the framework of Tsallis non-extensive statistics. This generalized velocity distribution is characterized by a parameter $q$, and reduces to the usually assumed Maxwell-Boltzmann distribution for $q$ = 1. We find excellent agreement between predicted and observed primordial abundances of D, $^4$He and $^7$Li for $1.069\leq q \leq 1.082$, suggesting a possible new solution to the cosmological lithium problem.
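For intuition, the $q$-exponential behind the Tsallis velocity distribution can be sketched as follows (standard convention $e_q(x)=[1+(1-q)x]^{1/(1-q)}$; the paper's exact parameterization may differ):

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    # valid where 1 + (1-q)*x > 0 (the Tsallis cutoff), as it is below
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

E = np.linspace(0.0, 5.0, 6)   # dimensionless kinetic energy E/kT
mb = q_exponential(-E, 1.0)    # Maxwell-Boltzmann limit, exp(-E/kT)
ts = q_exponential(-E, 1.075)  # generalized distribution, q within the fit range
```

For $q$ slightly above $1$, the distribution's high-energy tail is suppressed relative to Maxwell-Boltzmann, which is what alters the thermonuclear reaction rates during BBN.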
• Jan 17 2017 quant-ph arXiv:1701.04026v1
The Euclidean plane is certainly the simplest example of a real Hilbert space. Viewed as a space of quantum states, it can serve as a nice introductory example in teaching the quantum formalism. The pure states form the unit circle (actually one half of it), the mixed states form the unit disk (actually one half of it), and rotations in the plane rule time evolution through a Majorana-like equation involving only real quantities. The set of pure states or the set of mixed states at fixed "temperature" resolve the identity, and they are used for the integral quantization of functions on the unit circle and to give a semi-classical portrait of quantum observables. Interesting probabilistic aspects are developed. Since the tensor product of two planes, their direct sum, and their Cartesian product are isomorphic ($2$ is the unique solution to $x^x = x\times x = x+x$), and they are also isomorphic to $\mathbb{C}^2$ and to the quaternion field $\mathbb{H}$ (as a vector space), we describe an interesting relation between entanglement of real states, one-half spin cat states, and unit-norm quaternions, which form the group SU(2). Finally, we explain the most general form of the Hamiltonian in the real plane by considering the integral quantization of a magnetic-like interaction potential viewed as a classical observable on the unit 2-sphere.
• We use quantum energy teleportation in the light-matter interaction as an operational means to create quantum field states that violate energy conditions and have negative local stress-energy densities. We show that the protocol is optimal in the sense that it scales in a way that saturates the quantum interest conjecture.
• To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous-time quantum walk to find a marked node on a random temporal network. We consider a network of $n$ nodes constituted by a time-ordered sequence of Erdős-Rényi random graphs $G(n,p)$, where $p$ is the probability that any two given nodes are connected: after every time interval $\tau$, a new graph $G(n,p)$ replaces the previous one. We prove analytically that for any given $p$, there is always a range of values of $\tau$ for which the running time of the algorithm is optimal, i.e., $\mathcal{O}(\sqrt{n})$, even when search on the individual static graphs constituting the temporal network is sub-optimal. On the other hand, when the energy $1/\tau$ is close to the site energy of the marked node, we need sufficiently high connectivity to ensure optimality; namely, the average degree of each node should be at least $\mathcal{O}(\sqrt{n})$. From this first study of quantum spatial search on a time-dependent network, it emerges that the interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our study can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve efficient quantum information tasks on dynamical random networks.
• Jan 17 2017 astro-ph.CO gr-qc hep-th arXiv:1701.04382v1
We argue that *isotropic* scalar fluctuations in solid inflation are adiabatic in the super-horizon limit. During the solid phase this adiabatic mode has peculiar features: constant energy-density slices and comoving slices do not coincide, and their curvatures, parameterized respectively by $\zeta$ and $\mathcal R$, both evolve in time. The existence of this adiabatic mode implies that Maldacena's squeezed-limit consistency relation holds after angular averaging over the long mode. The correlation functions of a long-wavelength spherical scalar mode with several short scalar or tensor modes are fixed by the scaling behavior of the correlators of the short modes, independently of the solid inflation action or the dynamics of reheating.
• Based on the convex least-squares estimator, we propose two different procedures for testing convexity of a probability mass function supported on $\mathbb{N}$ with an unknown finite support. The procedures are shown to be asymptotically calibrated.
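For intuition, convexity of a pmf with finite support can be checked through nonnegative second differences; a naive sketch (the paper's actual tests are built on the convex least-squares estimator, not this pointwise criterion):

```python
def is_convex_pmf(p, tol=1e-12):
    """True if p[k-1] - 2*p[k] + p[k+1] >= 0 for all interior k."""
    return all(p[k - 1] - 2 * p[k] + p[k + 1] >= -tol
               for k in range(1, len(p) - 1))

# A truncated geometric pmf is convex; a triangular-shaped one is not.
geom = [0.5 * 0.5 ** k for k in range(6)]
tri = [0.1, 0.2, 0.4, 0.2, 0.1]
```

The statistical difficulty the paper addresses is deciding this property from a finite sample, where the empirical pmf can violate the inequalities by chance.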
• A fourth-order theory of gravity is considered which, in terms of dynamics, has the same degrees of freedom and number of constraints as scalar-tensor theories. In addition it admits a canonical point-like Lagrangian description. We study the critical points of the theory and show that it can describe the matter epoch of the universe and that two accelerated phases can be recovered, one of which describes a de Sitter universe. Finally, exact solutions are presented for some models.
• Based on the Effective Field Theory (EFT) of cosmological perturbations, we revisit nonsingular cosmologies without using the integral inequality. We clarify the pathology in nonsingular cubic Galileon models and show how to cure it in EFT, with new insights into this issue. With a new application of the $R^{(3)}\delta g^{00}$ operator, we build a model with a Genesis phase followed by slow-roll inflation. The spectrum of primordial perturbations can be simulated numerically and exhibits a large-scale cutoff, as hinted at by the CMB large-scale anomalies.
• Briët et al. showed that an efficient communication protocol implies a reliable XOR game protocol. In this work, we improve this relationship and obtain a nontrivial lower bound of $2\log 3\approx 3.1699$ on the XOR-amortized communication complexity of the equality function. The proof uses an elegant idea of Pawłowski et al. from a paper on information causality. Although the improvement over previous lower bounds is at most a factor of $2$, all arguments and proofs in this work are quite simple and intuitive.
• We consider the dynamics of message passing for spatially coupled codes and, in particular, the set of density evolution equations that tracks the profile of decoding errors along the spatial direction of coupling. It is known that, for suitable boundary conditions and after a transient phase, the error profile exhibits a "solitonic behavior": a uniquely shaped wave-like solution develops that propagates with constant velocity. Under this assumption we derive an analytical formula for the velocity in the framework of a continuum limit of the spatially coupled system. The general formalism is developed for spatially coupled low-density parity-check codes on general binary memoryless symmetric channels, which form the main system of interest in this work. We apply the formula to special channels and illustrate that it matches the direct numerical evaluation of the velocity for a wide range of noise values. A possible application of the velocity formula to the evaluation of finite-size scaling law parameters is also discussed. We conduct a similar analysis for general scalar systems and illustrate the findings with applications to compressive sensing and generalized low-density parity-check codes on the binary erasure or binary symmetric channels.
• A long-time quantum memory capable of storing and measuring quantum information at the single-qubit level is an essential ingredient for practical quantum computation and communication. Recently, there has been remarkable progress in increasing the coherence time of ensemble-based quantum memories of trapped ions, nuclear spins of ionized donors, and nuclear spins in solids. Until now, however, the record coherence time of a single qubit has been on the order of a few tens of seconds, demonstrated in trapped-ion systems. The qubit coherence time in a trapped ion is mainly limited by magnetic field fluctuations and the decreasing state-detection efficiency associated with motional heating of the ion in the absence of laser cooling. Here we report a coherence time of a single qubit of over $10$ minutes in the hyperfine states of a Yb ion sympathetically cooled by a Ba ion in the same Paul trap, which eliminates the heating of the qubit ion even at room temperature. To reach such a coherence time, we apply a few thousand dynamical decoupling pulses to suppress the field-fluctuation noise. The long-time quantum memory demonstrated in this experiment marks an important step toward the construction of the memory zone in scalable quantum computer architectures and for ion-trap-based quantum networks. With further improvement of the coherence time by techniques such as magnetic field shielding, and an increase in the number of qubits in the quantum memory, our demonstration also lays a basis for other applications, including quantum money.
• In this article we present a Bernstein inequality for sums of random variables which are defined on a graphical network whose nodes grow at an exponential rate. The inequality can be used to derive concentration inequalities in highly connected networks. It can be used to obtain consistency properties for nonparametric estimators of conditional expectation functions derived from such networks.
• We present a practical classification scheme for four-partite entangled states under stochastic local operations and classical communication (SLOCC). By transforming a four-partite state into a triple-state set composed of two tripartite states and one bipartite state, the entanglement classification is reduced to that of only tripartite and bipartite entanglement. This reduction method has the merit of being extendable to the classification of any multipartite entangled state, while providing insight into the entanglement character of the subsystems.
• Experimental demonstration of entanglement requires the experimentalist to have precise control over the system on which the measurements are performed, as prescribed by an appropriate entanglement witness. To avoid this trust problem, device-independent entanglement witnesses (DIEWs) for genuine tripartite entanglement have recently been proposed, capable of testing genuine entanglement without a precise description of the Hilbert space dimension or the measured operators, i.e., the apparatuses are treated as black boxes. Here we design a protocol for enhancing the possibility of identifying genuine tripartite entanglement in a device-independent manner. We consider three mixed tripartite quantum states whose genuine entanglement cannot be detected by applying standard DIEWs, but whose genuine tripartite entanglement can be detected by applying the same witnesses when the states are distributed in a suitable entanglement swapping network.
• In a recent publication [J. Chem. Phys. 142, 034115 (2015)] we have derived a hierarchy of coupled differential equations in time domain to calculate the linear optical properties of molecular aggregates. Here we provide details about issues concerning the numerical implementation. In addition we present the corresponding hierarchy in frequency domain.
• We establish the existence of $1$-parameter families of $\epsilon$-dependent solutions to the Einstein-Euler equations with a positive cosmological constant $\Lambda >0$ and a linear equation of state $p=\epsilon^2 K \rho$, $0<K\leq 1/3$, for the parameter values $0<\epsilon < \epsilon_0$. These solutions exist globally to the future, converge as $\epsilon \searrow 0$ to solutions of the cosmological Poisson-Euler equations of Newtonian gravity, and are inhomogeneous non-linear perturbations of FLRW fluid solutions.
• The hypothesis is made that, at large scales where General Relativity may be applied, empty space is scale invariant. This establishes a relation between the cosmological constant and the scale factor of the scale-invariant framework. This relation brings major simplifications to the scale-invariant equations for cosmology, which now contain a new term, depending on the derivative of the scale factor, that opposes gravity and produces an accelerated expansion. The displacements due to the acceleration term make a large contribution $\Omega_\lambda$ to the energy density of the Universe, satisfying an equation of the form $\Omega_m+\Omega_k+\Omega_\lambda = 1$. The models do not demand the existence of unknown particles. There is a family of flat models with different density parameters $\Omega_m < 1$. Numerical integrations of the cosmological equations are performed for different values of the curvature and density parameters $k$ and $\Omega_m$. The presence of even tiny amounts of matter in the Universe tends to kill scale invariance; the point is that for $\Omega_m = 0.3$ the effect is not yet completely suppressed. The models with non-zero density start explosively, with a braking phase followed by a continuously accelerating expansion. Several observational properties are examined, in particular the distances, the $m$-$z$ diagram, and the $\Omega_m$ vs. $\lambda$ plot. Comparisons with observations are also performed for the Hubble constant $H_0$ vs. $\Omega_m$, for the expansion history in the plot of $H(z)/(z+1)$ vs. redshift $z$, and for the transition redshift from braking to acceleration. These first dynamical tests are satisfied by the scale-invariant models, which thus deserve further study.
• Jan 17 2017 quant-ph arXiv:1701.03859v1
In this paper, we give a characterization of arbitrary $n$-mode Gaussian coherence breaking channels (GCBCs) and show that the tensor product of a GCBC and an arbitrary Gaussian channel maps all input states into product states. The inclusion relations among GCBCs, Gaussian positive partial transpose channels (GPPTCs), Gaussian entanglement breaking channels (GEBCs), Gaussian classical-quantum channels (GCQCs) and Gaussian quantum-classical channels (GQCCs) are displayed. Finally, we prove that the Gaussian classical ($\chi$-) capacity for tensor products of GCBCs and arbitrary Gaussian channels is additive.
• In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biased view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (a neutron star or black hole accreting from a stellar companion), under a range of observing strategies. We first generate realistic light curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods (the Lincoln-Petersen and Schnabel estimators and related techniques) and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced and demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle in a fraction (10-50%) of the observing visits required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostatistics community can readily be called from within Python.
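The classical two-epoch estimators named above are simple to state; a sketch with illustrative numbers (hypothetical counts, not from the paper's simulations):

```python
def lincoln_petersen(n1, n2, m2):
    """Classical Lincoln-Petersen abundance estimate N ~ n1*n2/m2:
    n1 sources detected in epoch 1, n2 in epoch 2, m2 in both."""
    return n1 * n2 / m2

def chapman(n1, n2, m2):
    """Chapman's bias-corrected variant, robust to small m2."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# 40 transients seen in survey 1, 35 in survey 2, 14 recovered in both.
n_hat = lincoln_petersen(40, 35, 14)  # -> 100.0
```

Both estimators assume a closed population with equal detection probability per source, assumptions that recurrent transients with duty cycles violate, which is why the paper compares these baselines against richer models.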
• Decoherence due to charge noise is one of the central challenges in using spin qubits in semiconductor quantum dots as a platform for quantum information processing. Recently, it has been experimentally demonstrated in both Si and GaAs singlet-triplet qubits that the effects of charge noise can be suppressed if qubit operations are implemented using symmetric barrier control instead of the standard tilt control. Here, we investigate the key issue of whether the benefits of barrier control persist over the entire set of single-qubit gates by performing randomized benchmarking simulations. We find the surprising result that the improvement afforded by barrier control depends sensitively on the amount of spin noise: for the minimal nuclear spin noise levels present in Si, the coherence time improves by more than two orders of magnitude, whereas in GaAs, by contrast, the coherence time is essentially the same for barrier and tilt control. However, we establish that barrier control becomes beneficial if qubit operations are performed using a new family of composite pulses that reduce gate times by up to 90%. With these optimized pulses, barrier control is the best way to achieve high-fidelity quantum gates in singlet-triplet qubits.
• A hyperlink is a finite set of non-intersecting simple closed curves in $\mathbb{R} \times \mathbb{R}^3$. We compute the Wilson Loop observable using a path integral with an Einstein-Hilbert action. Using axial-gauge fixing, we can write this path integral as the limit of a sequence of Chern-Simons integrals, studied earlier in our previous work on the Chern-Simons path integrals in $\mathbb{R}^3$. We will show that the Wilson Loop observable can be computed from a link diagram of a hyperlink, projected on a plane. Only crossings in the diagram will contribute to the path integral. Furthermore, we will show that it is invariant under an equivalence relation defined on the set of hyperlinks.
• Searches for stellar companions to hot Jupiters (HJs) have revealed that planetary systems hosting a HJ are approximately three times more likely to have a stellar companion with a semimajor axis between 50 and 2000 AU, compared to field stars. This correlation suggests that HJ formation is affected by the stellar binary companion. A potential model is high-eccentricity migration, in which the binary companion induces high-eccentricity Lidov-Kozai (LK) oscillations in the proto-HJ orbit, triggering orbital migration driven by tides. A pitfall of this 'binary-LK' model is that the observed stellar binaries hosting HJs are typically too wide to produce HJs in sufficient numbers, because of suppression by short-range forces. We propose a modification to the binary-LK model in which there is a second giant planet orbiting the proto-HJ at a semimajor axis of several tens of AU. Such companions are currently hidden from observations, but their presence could be manifested by propagating the perturbation of the stellar binary companion inwards to the proto-HJ, thereby overcoming the barrier imposed by short-range forces. Our model does not require the planetary companion orbit to be eccentric and/or inclined with respect to the proto-HJ, but its semimajor axis should lie in a specific range given the planetary mass and binary semimajor axis, and the inclination with respect to the binary should be near $40^\circ$ or $140^\circ$. Our prediction of planetary companions to HJs in stellar binaries should be testable by future observations.
• A two-phase description of the quark-nuclear matter hybrid equation of state that takes into account the effect of excluded volume in both the hadronic and the quark-matter phases is introduced. The nuclear phase manifests a reduction of the available volume as density increases, leading to a stiffening of the matter. The quark-matter phase displays a reduction of the effective string tension in the confining density functional from available-volume contributions. The nuclear equation of state is based upon the relativistic density functional model DD2 with excluded volume. The quark matter is based upon a mean-field modification of the free Fermi gas and is discussed in greater detail. The interactions are decomposed into mean scalar and vector components. The scalar interaction is motivated by a string potential between quarks, whereas the vector interaction potential is motivated by higher-order interactions of quarks leading to an increased stiffening at high densities. As an application, we consider matter under the compact-star constraints of electric neutrality and $\beta$-equilibrium. We obtain mass-radius relations for hybrid stars that form a third family, disconnected from the purely hadronic star branch, and fulfill the $2M_\odot$ constraint.
• We present a demonstration of delensing the observed cosmic microwave background (CMB) B-mode polarization anisotropy. This process of reducing the gravitational-lensing generated B-mode component will become increasingly important for improving searches for the B modes produced by primordial gravitational waves. In this work, we delens B-mode maps constructed from multi-frequency SPTpol observations of a 90 deg$^2$ patch of sky by subtracting a B-mode template constructed from two inputs: SPTpol E-mode maps and a lensing potential map estimated from the *Herschel* $500\,\mu\mathrm{m}$ map of the CIB. We find that our delensing procedure reduces the measured B-mode power spectrum by 28% in the multipole range $300 < \ell < 2300$; this is shown to be consistent with expectations from theory and simulations and to be robust against systematics. The null hypothesis of no delensing is rejected at $6.9\sigma$. Furthermore, we build and use a suite of realistic simulations to study the general properties of the delensing process and find that the delensing efficiency achieved in this work is limited primarily by the noise in the lensing potential map. We demonstrate the importance of including realistic experimental non-idealities in the delensing forecasts used to inform instrument and survey-strategy planning of upcoming lower-noise experiments, such as CMB-S4.
• With the increased application of model-based whole-body control methods in legged robots, there has been a resurgence of interest in whole-body system identification strategies and adaptive control. An important class of methods relates to the identification of inertial parameters for rigid-body systems: the mass, first mass moment (related to center-of-mass location), and 3D rotational inertia tensor of each link. It is known that the standard manipulator equations of a rigid-body system are linear in these inertial parameters, enabling many methods for offline model identification and online adaptive control. Recent work has shown that previous methods can lead to parameter estimates that are non-physical (i.e., cannot possibly represent any physical rigid-body system), due to the violation of certain triangle inequalities on the principal moments of inertia. The main contribution of this paper is to formulate these added constraints as Linear Matrix Inequalities (LMIs). Matrix inequalities are developed not in terms of the inertia tensor, but in terms of the density-weighted covariance of each body, suggesting a statistical interpretation of the rigid body for inertia identification problems. Enforcing these extra conditions on physical realizability surprisingly results in an LMI of lower dimension in comparison to previous work. Through this insight, semidefinite programming approaches can be used to solve inertia parameter identification problems to global optimality.
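The triangle-inequality condition on the principal moments can be checked directly from an inertia tensor; a minimal sketch (the paper's contribution is to encode this realizability condition, via the density-weighted covariance, as an LMI suitable for semidefinite programming, not this pointwise check):

```python
import numpy as np

def is_physically_realizable(I, tol=1e-12):
    """Check the triangle inequalities on the principal moments of inertia
    (eigenvalues of the 3x3 rotational inertia tensor): the largest must
    not exceed the sum of the other two."""
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(I))
    return bool(l3 <= l1 + l2 + tol)

I_sphere = 0.4 * np.eye(3)        # uniform solid sphere (m = r = 1): physical
I_bad = np.diag([1.0, 1.0, 3.0])  # violates 3 <= 1 + 1: no rigid body has this
```

The inequalities follow because each principal moment is an integral of squared distances over a nonnegative mass density, so no single moment can dominate the other two.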
• A general procedure for constructing Yetter-Drinfeld modules from quantum principal bundles is introduced. As an application a Yetter-Drinfeld structure is put on the cotangent space of the Heckenberger-Kolb calculi of the quantum Grassmannians. For the special case of quantum projective space the associated braiding is shown to be non-diagonal and of Hecke type. Moreover, its Nichols algebra is shown to be finite-dimensional and equal to the anti-holomorphic part of the total differential calculus.
• Resonant activation in temporally driven spin-boson systems subjected to strong dissipation is investigated by means of both analytical and extensive numerical methods. The phenomenon of resonant activation emerges in the presence of either fluctuating or periodically varying driving fields. Addressing the incoherent regime, a characteristic minimum emerges in the mean first passage time (MFPT) to reach an absorbing neighboring state whenever the intrinsic time scale of the modulation matches the characteristic time scale of the system dynamics. For the case of deterministic periodic driving, the first passage time statistics display a complex, multi-peaked first passage time probability density, which depends crucially on the details of the initial driving phase, the driving frequency and its strength. As an interesting feature we find that the MFPT enters the resonant activation regime at a critical frequency $\nu^*$ which depends very weakly on the strength of the driving.
• Congruence closure procedures are used extensively in automated reasoning and are a core component of most satisfiability modulo theories solvers. However, no known congruence closure algorithms can support any of the expressive logics based on intensional type theory (ITT), which form the basis of many interactive theorem provers. The main source of expressiveness in these logics is dependent types, and yet existing congruence closure procedures found in interactive theorem provers based on ITT do not handle dependent types at all and only work on the simply-typed subsets of the logics. Here we present an efficient and proof-producing congruence closure procedure that applies to every function in ITT no matter how many dependencies exist among its arguments, and that only relies on the commonly assumed uniqueness of identity proofs axiom. We demonstrate its usefulness by solving interesting verification problems involving functions with dependent types.
• The ability of a feed-forward neural network to learn and classify different states of polymer configurations is systematically explored. Performing numerical experiments, we find that a simple network model can, after adequate training, recognize multiple structures, including gas-like coil, liquid-like globular, and crystalline anti-Mackay and Mackay structures. The network can be trained to identify the transition points between various states, which compare well with those identified by independent specific-heat calculations. Our study demonstrates that neural networks provide an unconventional tool for studying phase transitions in polymeric systems.
• Though distribution system operators have been adding more sensors to their networks, they still often lack an accurate real-time picture of the behavior of distributed energy resources such as demand responsive electric loads and residential solar generation. Such information could improve system reliability, economic efficiency, and environmental impact. Rather than installing additional, costly sensing and communication infrastructure to obtain additional real-time information, it may be possible to use existing sensing capabilities and leverage knowledge about the system to reduce the need for new infrastructure. In this paper, we disaggregate a distribution feeder's demand measurements into two components: 1) the demand of a population of air conditioners, and 2) the demand of the remaining loads connected to the feeder. We use an online learning algorithm, Dynamic Fixed Share (DFS), that uses the real-time distribution feeder measurements as well as models generated from historical building- and device-level data. We develop two implementations of the algorithm and conduct simulations using real demand data from households and commercial buildings to investigate the effectiveness of the algorithm. Case studies demonstrate that DFS can effectively perform online disaggregation, and that the choice and construction of the models included in the algorithm affect its accuracy, which is comparable to that of a set of Kalman filters.
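DFS builds on the classic Fixed Share expert-tracking update of Herbster and Warmuth. A minimal sketch of that underlying update (all numbers and expert definitions are hypothetical, squared loss is assumed; the paper's DFS variant differs in detail) is:

```python
import math

def fixed_share(expert_preds, outcomes, eta=1.0, alpha=0.05):
    """Fixed Share: exponentially weighted averaging over experts with a
    small uniform 'share' step, so weight can flow back to experts that
    performed poorly in the past (tracking a switching best expert)."""
    n = len(expert_preds[0])          # number of experts
    w = [1.0 / n] * n                 # start with uniform weights
    preds = []
    for t, y in enumerate(outcomes):
        x = expert_preds[t]
        preds.append(sum(wi * xi for wi, xi in zip(w, x)))  # weighted forecast
        # Multiplicative update: penalize each expert by its squared loss.
        w = [wi * math.exp(-eta * (xi - y) ** 2) for wi, xi in zip(w, x)]
        z = sum(w)
        w = [wi / z for wi in w]
        # Share step: mix a fraction alpha back toward uniform.
        w = [(1 - alpha) * wi + alpha / n for wi in w]
    return preds

# Two toy experts; the better expert switches halfway through.
outcomes = [0.0] * 20 + [1.0] * 20
expert_preds = [[0.0, 1.0]] * 40      # expert 0 always predicts 0, expert 1 always 1
preds = fixed_share(expert_preds, outcomes)
```

The share step is what distinguishes this from plain exponential weighting: after the switch at t = 20, weight recovers quickly for the newly good expert.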
• In this paper, we introduce a new generalization of contraction mappings via a new control function and an altering distance. We establish some existence results for fixed points of such mappings. Our results recover several old and new results in the literature.
• Several genetic alterations are involved in the genesis and development of cancers. Determining whether and how each genetic alteration contributes to cancer development is fundamental for a complete understanding of human cancer etiology. Loss of heterozygosity (LOH) is one such genetic phenomenon, linked to a variety of diseases and characterized by the change from heterozygosity (the presence of both alleles of a gene) to homozygosity (the presence of only one type of allele) at a particular DNA locus. Thus, identifying the DNA regions where LOH has taken place is an important issue in the health sciences. In this article we formulate LOH detection as the identification of change-points in the parameters of a mixture model and present a detection algorithm based on the cumulative sums (CUSUM) method. We find that even under mild contamination our proposal is a fast and reliable method.
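As a rough illustration of the underlying detector (a generic one-sided CUSUM on a mean shift, not the authors' mixture-model formulation; all parameter values are hypothetical), an alarm is raised when the cumulative sum of deviations from a target mean exceeds a threshold:

```python
import random

def cusum_changepoints(xs, target_mean, drift=1.0, threshold=8.0):
    """Two-sided CUSUM detector: flag indices where the upper or lower
    cumulative sum of deviations from target_mean exceeds threshold.
    'drift' is the allowance subtracted at each step to suppress noise."""
    s_hi = s_lo = 0.0
    alarms = []
    for i, x in enumerate(xs):
        s_hi = max(0.0, s_hi + (x - target_mean - drift))  # upward shifts
        s_lo = max(0.0, s_lo + (target_mean - x - drift))  # downward shifts
        if s_hi > threshold or s_lo > threshold:
            alarms.append(i)
            s_hi = s_lo = 0.0  # restart the statistic after an alarm
    return alarms

# Synthetic data: the mean shifts from 0 to 3 at index 50.
random.seed(0)
xs = ([random.gauss(0.0, 1.0) for _ in range(50)]
      + [random.gauss(3.0, 1.0) for _ in range(50)])
alarms = cusum_changepoints(xs, target_mean=0.0)
```

The first alarm lands a few samples after the true change-point, the usual CUSUM detection delay.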
• It is known that an isolated dielectric cylinder waveguide has a cutoff frequency $\omega_\ast$ below which there is no guided mode. It is shown in this paper that an infinite plane periodic array of such waveguides possesses guided modes in the frequency range below $\omega_\ast$. In the case of a finite array, the modes in this frequency range are weakly radiating, but their quality factor $Q$ increases with the number of waveguides $N$ as $Q(N)\sim N^3$. This dependence is obtained numerically, using the multiple scattering formalism, and justified with a simple analytical model.
• Jan 17 2017 math.AG arXiv:1701.04385v1
The moduli space $M_g^{trop}$ of tropical curves of genus $g$ is a generalized cone complex that parametrizes metric vertex-weighted graphs of genus $g$. For each such graph $\Gamma$, the associated canonical linear system $\vert K_\Gamma\vert$ has the structure of a polyhedral complex. In this article we propose a tropical analogue of the Hodge bundle on $M_g^{trop}$ and study its basic combinatorial properties. Our construction is illustrated with explicit computations and examples.
• Proteins are biological polymers that underlie all cellular functions. The first high-resolution protein structures were determined by x-ray crystallography in the 1960s. Since then, there has been continued interest in understanding and predicting protein structure and stability. It is well-established that a large contribution to protein stability originates from the sequestration from solvent of hydrophobic residues in the protein core. How are such hydrophobic residues arranged in the core? And how can one best model the packing of these residues? Here we show that to properly model the packing of residues in protein cores it is essential that amino acids are represented by appropriately calibrated atom sizes, and that hydrogen atoms are explicitly included. We show that protein cores possess a packing fraction of $\phi \approx 0.56$, which is significantly less than the typically quoted value of 0.74 obtained using the extended atom representation. We also compare the results for the packing of amino acids in protein cores to results obtained for jammed packings of spheres, elongated particles, and particles with bumpy surfaces from discrete element simulations. We show that amino acids in protein cores pack as densely as disordered jammed packings of particles with similar values for the aspect ratio and bumpiness as found for amino acids. Knowing the structural properties of protein cores is of both fundamental and practical importance. Practically, it enables the assessment of changes in the structure and stability of proteins arising from amino acid mutations (such as those identified as a result of the massive human genome sequencing efforts) and the design of new folded, stable proteins and protein-protein interactions with tunable specificity and affinity.
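The packing fraction $\phi$ itself is a simple geometric quantity. A minimal Monte Carlo sketch (toy sphere data, not the calibrated atom sizes of the paper) estimates the fraction of a box covered by a union of spheres, with overlaps handled correctly, which a naive sum of sphere volumes would double-count:

```python
import random

def packing_fraction(spheres, box, n_samples=200_000, seed=1):
    """Monte Carlo estimate of the fraction of a rectangular box covered
    by a union of spheres. spheres: list of (x, y, z, r); box: (Lx, Ly, Lz)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        p = [rng.uniform(0.0, L) for L in box]
        # A sample point counts once even if it lies inside several spheres.
        if any((p[0] - x) ** 2 + (p[1] - y) ** 2 + (p[2] - z) ** 2 <= r * r
               for x, y, z, r in spheres):
            hits += 1
    return hits / n_samples

# Sanity check: one sphere of radius 1 centered in a 2x2x2 box.
# The exact covered fraction is (4/3)*pi / 8 = pi/6 ~ 0.5236.
phi = packing_fraction([(1.0, 1.0, 1.0, 1.0)], (2.0, 2.0, 2.0))
```

With atom-resolved coordinates and radii for a protein core, the same estimator would yield the $\phi \approx 0.56$ figure discussed in the abstract.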
• In this paper, a new approach to the cubic B-spline curve fitting problem is presented, based on a meta-heuristic algorithm called "dolphin echolocation". The method minimizes the proximity error of the selected nodes, measured using the least squares method and the Euclidean distance, for the new curve generated by reverse engineering. The results of the proposed method are compared with those of a genetic algorithm, and the new method appears to be successful.
• Observationally measuring the location of the H$_{2}$O snowline is crucial for understanding the planetesimal and planet formation processes, and the origin of water on Earth. In disks around Herbig Ae stars ($T_{\mathrm{*}}\sim$ 10,000 K, $M_{\mathrm{*}}\gtrsim$ 2.5 $M_{\odot}$), the position of the H$_{2}$O snowline is further from the central star compared with that around cooler, less massive T Tauri stars. Thus, the H$_{2}$O emission line fluxes from the region within the H$_{2}$O snowline are expected to be stronger. In this paper, we calculate the chemical composition of a Herbig Ae disk using chemical kinetics. Next, we calculate the H$_{2}$O emission line profiles, and investigate the properties of candidate water lines across a wide range of wavelengths (from mid-infrared to sub-millimeter) that can locate the position of the H$_{2}$O snowline. The identified lines have small Einstein $A$ coefficients ($\sim 10^{-6} -10^{-3}$ s$^{-1}$) and relatively high upper state energies ($\sim$ 1000 K). The total fluxes tend to increase with decreasing wavelengths. We investigate the possibility of future observations (e.g., ALMA, SPICA/SMI-HRS) to locate the position of the H$_{2}$O snowline. Since the fluxes of the identified lines from Herbig Ae disks are stronger than those from T Tauri disks, the possibility of a successful detection is expected to increase for a Herbig Ae disk.
• The purpose of this paper is to collect and make explicit the results of Gel'fand, Graev and Piatetski-Shapiro and Miyazaki for the $GL(3)$ cusp forms which are non-trivial on $SO(3,\mathbb{R})$. We give new descriptions of the spaces of cusp forms of minimal $K$-type and from the Fourier-Whittaker expansions of such forms give a complete and completely explicit spectral expansion for $L^2(SL(3,\mathbb{Z})\backslash PSL(3,\mathbb{R}))$, accounting for multiplicities, in the style of Duke, Friedlander and Iwaniec's paper on Artin $L$-functions. We directly compute the Jacquet integral for the Whittaker functions at the minimal $K$-type, improving Miyazaki's computation. The primary tool will be the study of the differential operators coming from the Lie algebra on vector-valued cusp forms.
• Let $M$ be an open Riemann surface and $n\ge 3$ be an integer. We prove that on any closed discrete subset of $M$ one can prescribe the values of a conformal minimal immersion $M\to\mathbb{R}^n$. Our result also ensures jet-interpolation of given finite order, and hence, in particular, one may in addition prescribe the values of the generalized Gauss map. Furthermore, the interpolating immersions can be chosen to be complete, proper into $\mathbb{R}^n$ if the prescription of values is proper, and one-to-one if $n\ge 5$ and the prescription of values is one-to-one. We may also prescribe the flux map of the examples. We also show analogous results for a large family of directed holomorphic immersions $M\to\mathbb{C}^n$, including null curves.
• A general quantum thermodynamics network is composed of thermal devices connected to the environments through quantum wires. The coupling between the devices and the wires may introduce additional decay channels which modify the system performance with respect to the directly-coupled device. We analyze this effect in a quantum three-level device connected to a heat bath or to a work source through a two-level wire. The steady state heat currents are decomposed into the contributions of the set of simple circuits in the graph representing the master equation. Each circuit is associated with a mechanism in the device operation and the system performance can be described by a small number of circuit representatives of those mechanisms. Although in the limit of weak coupling between the device and the wire the new irreversible contributions can become small, they prevent the system from reaching the Carnot efficiency.
• We consider a nonlinear representation of a Lie algebra which is regular on an abelian ideal, and we define a normal form which generalizes that defined in [D. Arnal, M. Ben Ammar, M. Selmi, Normalisation d'une représentation non linéaire d'une algèbre de Lie, Annales de la Faculté des Sciences de Toulouse, 5e série, tome 9, no. 3, 1988, pp. 355--379].
• We report the first detection of the prebiotic complex organic molecule CH$_3$NCO in a solar-type protostar, IRAS16293-2422 B. This species is one of the most abundant complex organic molecules detected on the surface of comet 67P/Churyumov-Gerasimenko, and in the interstellar medium it has only been found in hot cores around high-mass protostars. We have used multi-frequency ALMA observations from 90 GHz to 350 GHz covering 11 unblended transitions of CH$_3$NCO and 40 more transitions that appear blended with emission from other molecular species. Our Local Thermodynamic Equilibrium analysis provides an excitation temperature of 232$\pm$41 K and a column density of (7.9$\pm$1.7)$\times$10$^{15}$ cm$^{-2}$, which implies an abundance of (7$\pm$2)$\times$10$^{-10}$ with respect to molecular hydrogen. The derived column density ratios CH$_3$NCO/HNCO, CH$_3$NCO/NH$_2$CHO, and CH$_3$NCO/CH$_3$CN are $\sim$0.3, $\sim$0.8, and $\sim$0.2, respectively, which are different from those measured in hot cores and in comet 67P/Churyumov-Gerasimenko. Our chemical modelling of CH$_3$NCO reproduces well the abundances and column density ratios CH$_3$NCO/HNCO and CH$_3$NCO/NH$_2$CHO measured in IRAS16293-2422 B, suggesting that the production of CH$_3$NCO could occur mostly via gas-phase chemistry after the evaporation of HNCO from dust grains.
• Jan 17 2017 cs.DM arXiv:1701.04375v1
A graph is NIC-planar if it admits a drawing in the plane with at most one crossing per edge and such that two pairs of crossing edges share at most one common end vertex. NIC-planarity generalizes IC-planarity, which allows a vertex to be incident to at most one crossing edge, and specializes 1-planarity, which only requires at most one crossing per edge. We characterize embeddings of maximal NIC-planar graphs in terms of generalized planar dual graphs. The characterization is used to derive tight bounds on the density of maximal NIC-planar graphs which ranges between 3.2(n-2) and 3.6(n-2). Further, we show that optimal NIC-planar graphs with 3.6(n-2) edges have a unique embedding and can be recognized in linear time, whereas the recognition problem of NIC-planar graphs is NP-complete. In addition, we show that there are NIC-planar graphs that do not admit right angle crossing drawings, which distinguishes NIC-planar from IC-planar graphs.
• The degree of commutativity of a finite group $F$, defined as the probability that two randomly chosen elements in $F$ commute, has been studied extensively. Recently this definition was generalised to all finitely generated groups. In this paper the degree of commutativity is computed for right-angled Artin groups with respect to their natural generating set. An additional result concerning sphere sizes of groups with rational spherical growth series (with respect to some finite generating set) is obtained.
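For a finite group, the degree of commutativity can be computed by direct enumeration (the abstract's result for right-angled Artin groups uses the generalized, growth-based definition; this small sketch illustrates only the classical finite-group case, here for $S_3$):

```python
from itertools import permutations

def degree_of_commutativity(elements, op):
    """Probability that two uniformly random elements of a finite group
    commute: #{(a, b) : ab = ba} / |G|^2."""
    n = len(elements)
    commuting = sum(1 for a in elements for b in elements
                    if op(a, b) == op(b, a))
    return commuting / (n * n)

def compose(a, b):
    """Composition of permutations given as tuples: (a . b)(i) = a[b[i]]."""
    return tuple(a[b[i]] for i in range(len(b)))

s3 = list(permutations(range(3)))          # the symmetric group S_3
dc = degree_of_commutativity(s3, compose)
```

For any finite group the result equals $k(G)/|G|$, the number of conjugacy classes over the order; for $S_3$ this is $3/6 = 1/2$.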
• The interplay between quantum fluctuations and disorder is investigated in a spin-glass model, in the presence of a uniform transverse field $\Gamma$, and a longitudinal random field following a Gaussian distribution with width $\Delta$. The model is studied through the replica formalism. This study is motivated by experimental investigations on the LiHo$_x$Y$_{1-x}$F$_4$ compound, where the application of a transverse magnetic field yields rather intriguing effects, particularly related to the behavior of the nonlinear magnetic susceptibility $\chi_3$, which have led to a considerable experimental and theoretical debate. We analyzed two situations, namely, $\Delta$ and $\Gamma$ considered as independent, as well as these two quantities related as proposed recently by some authors. In both cases, a spin-glass phase transition is found at a temperature $T_f$; moreover, $T_f$ decreases with increasing $\Gamma$ towards a quantum critical point at zero temperature. The situation where $\Delta$ and $\Gamma$ are related appears to reproduce better the experimental observations on the LiHo$_x$Y$_{1-x}$F$_4$ compound, with the theoretical results coinciding qualitatively with measurements of the nonlinear susceptibility. In this latter case, with increasing $\Gamma$, $\chi_3$ becomes progressively rounded, presenting a maximum at a temperature $T^*$ ($T^*>T_f$). Moreover, we also show that the random field is mainly responsible for the smearing of the nonlinear susceptibility, acting significantly inside the paramagnetic phase, leading to two regimes delimited by the temperature $T^*$, one for $T_f<T<T^*$, and another one for $T>T^*$. It is argued that the conventional paramagnetic state corresponds to $T>T^*$, whereas the temperature region $T_f<T<T^*$ may be characterized by a rather unusual dynamics, possibly including Griffiths singularities.

Zoltán Zimborás Jan 12 2017 20:38 UTC

Here is a nice description, with additional links, about the importance of this work if it turns out to be flawless (thanks a lot to Martin Schwarz for this link): [dichotomy conjecture][1].

[1]: http://processalgebra.blogspot.com/2017/01/has-feder-vardi-dichotomy-conjecture.html

Noon van der Silk Jan 05 2017 04:51 UTC

This is a cool paper!

Māris Ozols Dec 27 2016 19:34 UTC

Māris Ozols Dec 16 2016 15:38 UTC

Indeed, Schur complement is the answer to the ultimate question!

J. Smith Dec 14 2016 17:43 UTC

Very good Insight on android security problems and malware. Nice Work !

Stefano Pirandola Nov 30 2016 06:45 UTC

Dear Mark, thx for your comment. There are indeed missing citations to previous works by Rafal, Janek and Lorenzo that we forgot to add. Regarding your paper, I did not read it in detail but I have two main comments:

1- What you are using is completely equivalent to the tool of "quantum simulatio

...(continued)
Mark M. Wilde Nov 30 2016 02:18 UTC

An update http://arxiv.org/abs/1609.02160v2 of this paper has appeared, one day after the arXiv post http://arxiv.org/abs/1611.09165 . The paper http://arxiv.org/abs/1609.02160v2 now includes (without citation) some results for bosonic Gaussian channels found independently in http://arxiv.org/abs/16

...(continued)
Felix Leditzky Nov 29 2016 16:34 UTC

Thank you very much for the reply!

Martin Schwarz Nov 24 2016 13:53 UTC

Oded Regev writes [here][1]:

"Dear all,

Yesterday Lior Eldar and I found a flaw in the algorithm proposed
in the arXiv preprint. I do not see how to salvage anything from
the algorithm. The security of lattice-based cryptography against
quantum attacks therefore remains intact and uncha

...(continued)