# Top arXiv papers

• We introduce a class of so-called Markovian marginals, which gives a natural framework for constructing solutions to the quantum marginal problem. We consider a set of marginals that possess a certain internal quantum Markov chain structure. If they are equipped with such a structure and are locally consistent on their overlapping supports, there exists a global state that is consistent with all the marginals. The proof is constructive and relies on a reduction of the marginal problem to a certain combinatorial problem. By employing an entanglement entropy scaling law, we give a physical argument that the requisite structure exists in any state with a finite correlation length. This includes topologically ordered states as well as finite-temperature Gibbs states.
• It is a fundamental property of quantum mechanics that information is lost as a result of performing measurements. Indeed, with every quantum measurement one can associate a number -- its POVM norm constant -- that quantifies how much the distinguishability of quantum states degrades in the worst case as a result of the measurement. This raises the obvious question of which measurements preserve the most information, in the sense of having the largest norm constant. While a number of near-optimal schemes have been found (e.g. the uniform POVM, or complex projective 4-designs), they all seem to be difficult to implement in practice. Here, we analyze the distinguishability of quantum states under measurements that are orbits of the Clifford group. The Clifford group plays an important role e.g. in quantum error correction, and its elements are considered simple to implement. We find that the POVM norm constants of Clifford orbits depend on the effective rank of the states that should be distinguished, as well as on a quantitative measure of the "degree of localization in phase space" of the vectors in the orbit. The most important Clifford orbit is formed by the set of stabilizer states. Our main result implies that stabilizer measurements are essentially optimal for distinguishing pure quantum states. As an auxiliary result, we use the methods developed here to prove new entropic uncertainty relations for stabilizer measurements. This paper is based on a very recent analysis of the representation theory of tensor powers of the Clifford group.
• We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well: consecutive phase estimations that efficiently make products of asymmetric low-rank matrices classically accessible, and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring only exponentially few parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
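For reference, the classical matrix pencil method that this work speeds up can be sketched in a few lines of NumPy. The sketch below assumes noiseless data and a known number of components `p`, simplifications not made in the paper:

```python
import numpy as np

def matrix_pencil(x, L, p):
    """Classical matrix pencil method: recover the p poles z_k of a signal
    x[n] = sum_k a_k * z_k**n, where z_k = exp((-damping + 2j*pi*freq)*dt)."""
    N = len(x)
    # Hankel data matrix; Y0 and Y1 differ by a one-sample shift
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # Rank-p truncation via SVD filters noise and fixes the model order
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    U, s, Vh = U[:, :p], s[:p], Vh[:p]
    # The poles are the eigenvalues of the reduced pencil (Y0, Y1)
    return np.linalg.eigvals(np.diag(1 / s) @ U.conj().T @ Y1 @ Vh.conj().T)

# One damped sinusoid component: damping 0.05, frequency 0.1 cycles/sample
n = np.arange(64)
z_true = np.exp(-0.05 + 2j * np.pi * 0.1)
poles = matrix_pencil(z_true ** n, L=20, p=1)
```

The classical cost is dominated by the SVD of a matrix whose size grows with the number of samples, which is exactly the step the quantum algorithm targets.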
• We give a capacity formula for classical information transmission over a noisy quantum channel, with separable encoding by the sender and limited resources provided by the receiver's pre-shared ancilla. Instead of a pure state, we consider the signal-ancilla pair in a mixed state, purified by a "witness". Thus, the signal-witness correlation limits the resource available from the signal-ancilla correlation. Our formula characterizes the utility of different forms of resources, including noisy or limited entanglement assistance, for classical communication. With separable encoding, the sender's signals between different channel uses are still allowed to be entangled, yet our capacity formula is additive. In particular, for generalized covariant channels our capacity formula takes a simple closed form. Moreover, our additive capacity formula upper bounds the general coherent attack's information gain in various two-way quantum key distribution protocols. For Gaussian protocols, the additivity of the formula indicates that the collective Gaussian attack is the most powerful.
• Device-independent quantum cryptography allows security even if the devices used to execute the protocol are untrusted - whether this is due to unknown imperfections in the implementation, or because the adversary himself constructed them to subvert the security of the protocol. While device-independence has seen much attention in the domain of quantum key distribution, relatively little is known for general protocols. Here we introduce a new model for device-independence for two-party protocols and position verification in the noisy-storage model. For the first time, we show that such protocols are secure in the most general device-independent model, in which the devices may have arbitrary memory, states and measurements. In our analysis, we make use of a slight modification of a beautiful new tool developed in [arXiv:1607.01796] called the "Entropy Accumulation Theorem". What's more, the protocols we analyze use only simple preparations and measurements, and can be realized using any experimental setup able to perform a CHSH Bell test. Specifically, security can be attained for any violation of the CHSH inequality, where a higher violation merely leads to a reduction in the number of rounds required to execute the protocol.
• The Clifford group is a fundamental structure in quantum information with a wide variety of applications. We discuss the tensor representations of the $q$-qubit Clifford group, which is defined as the normalizer of the $q$-qubit Pauli group in $U(2^q)$. In particular, we characterize all irreducible subrepresentations of the two-copy representation $\varphi^{\otimes2}$ of the Clifford group on the matrix space $\mathbb{C}^{d\times d}\otimes \mathbb{C}^{d\times d}$ with $d=2^q$. In an upcoming companion paper we apply this result to reduce the number of samples necessary to perform randomised benchmarking, a method for characterising quantum systems.
• Using the notions of TROs (ternary rings of operators) and independence from operator algebra theory, we discover a new class of channels which allow single-letter bounds for their quantum and private capacities, as well as strong converse rates. This class goes beyond degradable channels. The estimates are based on a "local comparison theorem" for the sandwiched Rényi relative entropy and complex interpolation. As an application, we discover new small-dimensional examples which admit an easy formula for quantum and private capacities.
• The Feynman-Kitaev Hamiltonian used in the proof of QMA-completeness of the local Hamiltonian problem has a ground state energy which scales as $\Omega((1-\sqrt{\epsilon}) T^{-3})$ when it is applied to a circuit of size $T$ and maximum acceptance probability $\epsilon$. We refer to this quantity as the quantum UNSAT penalty, and using a modified form of the Feynman Hamiltonian with a non-uniform history state as its ground state we improve its scaling to $\Omega((1-\sqrt{\epsilon})T^{-2})$, without increasing the number of local terms or their operator norms. As part of the proof we show how to construct a circuit Hamiltonian for any desired probability distribution on the time steps of the quantum circuit (which, for example, can be used to increase the probability of measuring a history state in the final step of the computation). Next we show a tight $\mathcal{O}(T^{-2})$ upper bound on the product of the spectral gap and ground state overlap with the endpoints of the computation for any clock Hamiltonian that is tridiagonal in the time register basis, which shows that the scaling of the quantum UNSAT penalty achieved by our construction cannot be further improved within this framework. Our proof of the upper bound applies a quantum-to-classical mapping for arbitrary tridiagonal Hermitian matrices combined with a sharp bound on the spectral gap of birth-and-death Markov chains. In the context of universal adiabatic computation we show how to reduce the number of qubits required to represent the clock by a constant factor over the standard construction, but show that it is otherwise already optimal in the sense we consider and cannot be further improved with tridiagonal clock Hamiltonians, which agrees with a similar upper bound from a previous study.
• A unitary t-design is a set of unitaries that is "evenly distributed" in the sense that the average of any t-th order polynomial over the design equals the average over the entire unitary group. In various fields -- e.g. quantum information theory -- one frequently encounters constructions that rely on matrices drawn uniformly at random from the unitary group. Often, it suffices to sample these matrices from a unitary t-design, for sufficiently high t. This results in more explicit, derandomized constructions. The most prominent unitary t-design considered in quantum information is the multi-qubit Clifford group. It is known to be a unitary 3-design, but, unfortunately, not a 4-design. Here, we give a simple, explicit characterization of the way in which the Clifford group fails to constitute a 4-design. Our results show that for various applications in quantum information theory and in the theory of convex signal recovery, Clifford orbits perform almost as well as those of true 4-designs. Technically, it turns out that in a precise sense, the 4th tensor power of the Clifford group affords only one more invariant subspace than the 4th tensor power of the unitary group. That additional subspace is a stabilizer code -- a structure extensively studied in the field of quantum error correction codes. The action of the Clifford group on this stabilizer code can be decomposed explicitly into previously known irreps of the discrete symplectic group. We give various constructions of exact complex projective 4-designs or approximate 4-designs of arbitrarily high precision from Clifford orbits. Building on results from coding theory, we give strong evidence suggesting that these orbits actually constitute complex projective 5-designs.
• We propose an extension of the sandwiched Rényi relative $\alpha$-entropy to states on arbitrary von Neumann algebras, for the values $\alpha>1$. For this, we use Kosaki's definition of noncommutative $L_p$-spaces with respect to a state. Some properties of these extensions are proved, in particular the data processing inequality with respect to positive trace-preserving maps. It is also shown that equality in the data processing inequality characterizes sufficiency of quantum channels.
• The de Finetti representation theorem for continuous variable quantum systems is first developed to approximate an N-partite continuous variable quantum state with a convex combination of independent and identical subsystems, which requires the original state to obey permutation symmetry conditioned on successful experimental verification on k of N subsystems. We generalize the de Finetti theorem to include asymmetric bounds on the variance of canonical observables and biased basis selection during the verification step. Our result thereby enables the application of the infinite-dimensional de Finetti theorem to situations where two conjugate measurements obey different statistics, such as the security analysis against coherent attacks of quantum key distribution protocols based on squeezed states.
• The observed structure of the universe can be understood only within the theoretical framework of dark matter. N-body simulations are indispensable for the analysis of the formation and evolution of the dark matter web. Two primary fields - density and velocity fields - are used in most studies. However, dark matter provides two additional fields which are unique to collisionless media: the multi-stream field in Eulerian space and the flip-flop field in Lagrangian space. The flip-flop field represents the number of sign reversals of an elementary volume of each collisionless fluid element. This field can be estimated by counting the sign reversals of the Jacobian at each particle at every time step of the simulation. The Jacobian is evaluated by numerical differentiation of the Lagrangian submanifold, i.e., the three-dimensional dark matter sheet in the six-dimensional space formed by three Lagrangian and three Eulerian coordinates. We present the results of a statistical study of the evolution of the flip-flop field from z = 50 to the present time z = 0. A number of statistical characteristics show that the pattern of the flip-flop field remains remarkably stable from z = 30 to the present time. As a result, the flip-flop field evaluated at z = 0 stores a wealth of information about the dynamical history of the dark matter web. In particular, one of the most intriguing properties of the flip-flop field is its unique capability to preserve information about the merging history of dark matter haloes.
• Sep 28 2016 quant-ph arXiv:1609.08526v1
The roles of Lie groups in Feynman's path integrals in non-relativistic quantum mechanics are discussed. Dynamical as well as geometrical symmetries are found useful for path integral quantization. Two examples having the symmetry of a non-compact Lie group are considered. The first is the free quantum motion of a particle on a space of constant negative curvature. The system has a group SO(d,1) associated with the geometrical structure, to which the technique of harmonic analysis on a homogeneous space is applied. As an example of a system having a non-compact dynamical symmetry, the d-dimensional harmonic oscillator is chosen, which has the non-compact dynamical group SU(1,1) besides its geometrical symmetry SO(d). The radial path integral is seen as a convolution of the matrix functions of a compact group element of SU(1,1) on the continuous basis.
• We use the Fisher matrix formalism to study the expansion and growth history of the Universe using galaxy clustering with 2D angular cross-correlation tomography in spectroscopic or high resolution photometric redshift surveys. The radial information is contained in the cross-correlations between narrow redshift bins. We show how multiple tracers with redshift space distortions cancel sample variance and arbitrarily improve the constraints on the dark energy equation of state $\omega(z)$ and the growth parameter $\gamma$ in the noiseless limit. The improvement for multiple tracers quickly increases with the bias difference between the tracers, up to a factor of $\sim4$ in $\text{FoM}_{\gamma\omega}$. We model a magnitude limited survey with realistic density and bias using a conditional luminosity function, finding a factor 1.3-9.0 improvement in $\text{FoM}_{\gamma\omega}$ -- depending on global density -- with a split in a halo mass proxy. Partly overlapping redshift bins improve the constraints in multiple tracer surveys by a factor of $\sim1.3$ in $\text{FoM}_{\gamma\omega}$. These findings also apply to photometric surveys, where the effect of using multiple tracers is magnified. We also show a large improvement in the FoM with increasing density, which could be used as a trade-off to compensate for a possible loss in radial resolution.
• Most existing automatic house price estimation systems rely only on textual data such as the neighborhood area and the number of rooms. The final price is estimated by a human agent who visits the house and assesses it visually. In this paper, we propose extracting visual features from house photographs and combining them with the house's textual information. The combined features are fed to a fully connected multilayer Neural Network (NN) that estimates the house price as its single output. To train and evaluate our network, we have collected what is, to our knowledge, the first houses dataset that combines both images and textual attributes. The dataset is composed of 535 sample houses from the state of California, USA. Our experiments showed that adding the visual features increased the R-value by a factor of 3 and decreased the Mean Square Error (MSE) by one order of magnitude compared with textual-only features. Additionally, when trained on the benchmark textual-only housing dataset, our proposed NN still outperformed the published results of the existing model.
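The combination step described above can be sketched as a small forward pass: concatenate visual and textual features and push them through fully connected layers to a single price output. The dimensions, random weights, and attribute list below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_price(visual_feats, textual_feats, params):
    """Minimal forward pass: concatenate visual and textual features,
    apply ReLU hidden layers, and emit one linear output (the price)."""
    h = np.concatenate([visual_feats, textual_feats])
    for W, b in params[:-1]:
        h = np.maximum(0.0, W @ h + b)   # ReLU hidden layers
    W, b = params[-1]
    return float(W @ h + b)              # scalar price estimate

# Hypothetical dimensions: 128 visual features (e.g. from photographs)
# plus 4 textual attributes (bedrooms, bathrooms, area, zip-code index).
dims = [128 + 4, 64, 16, 1]
params = [(rng.normal(0.0, 0.1, (dims[i + 1], dims[i])), np.zeros(dims[i + 1]))
          for i in range(len(dims) - 1)]
price = mlp_price(rng.normal(size=128), np.array([3.0, 2.0, 1500.0, 42.0]), params)
```

In practice the visual features would come from a pretrained image model and the weights from training on the 535-house dataset; the sketch only shows how the two feature types merge into one regression head.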
• Nash equilibrium is not guaranteed in finite quantum games. In this letter, we revisit this fact using John Nash's original approach of countering sets and Kakutani's fixed point theorem. To the best of our knowledge, this mathematically formal approach has not been explored before in the context of quantum games. We use this approach to draw conclusions about Nash equilibrium states in quantum informational processes such as quantum computing and quantum communication protocols.
• Peculiar velocity surveys present a very promising route to measuring the growth rate of large-scale structure and its scale dependence. However, individual peculiar velocity surveys suffer from large statistical errors due to the intrinsic scatter in the relations used to infer a galaxy's true distance. In this context we use a Fisher Matrix formalism to investigate the statistical benefits of combining multiple peculiar velocity surveys. We find that for all cases we consider there is a marked improvement on constraints on the linear growth rate $f\sigma_{8}$. For example, the constraining power of only a few peculiar velocity measurements is such that the addition of the 2MASS Tully-Fisher survey (containing only $\sim2,000$ galaxies) to the full redshift and peculiar velocity samples of the 6-degree Field Galaxy Survey (containing $\sim 110,000$ redshifts and $\sim 9,000$ velocities) can improve growth rate constraints by $\sim20\%$. Furthermore, the combination of the future TAIPAN and WALLABY+WNSHS surveys has the potential to reach a $\sim3\%$ error on $f\sigma_{8}$, which will place tight limits on possible extensions to General Relativity. We then turn to look at potential systematics in growth rate measurements that can arise due to incorrect calibration of the peculiar velocity zero-point and from scale-dependent spatial and velocity bias. For next generation surveys, we find that neglecting velocity bias in particular has the potential to bias constraints on the growth rate by over $5\sigma$, but that an offset in the zero-point has negligible impact on the velocity power spectrum.
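The core of the Fisher-matrix argument for combining surveys is that independent datasets combine by matrix addition, which can only tighten the marginalized constraints. A toy numerical illustration follows; the matrix entries are invented for the example and are not taken from the surveys discussed:

```python
import numpy as np

# Hypothetical 2-parameter Fisher matrices, ordered (f*sigma_8, zero-point).
F_survey_a = np.array([[40.0, 5.0], [5.0, 10.0]])
F_survey_b = np.array([[15.0, -2.0], [-2.0, 8.0]])

# Marginalized 1-sigma error on f*sigma_8 = sqrt of the [0,0] entry
# of the inverse Fisher matrix.
err_a = np.sqrt(np.linalg.inv(F_survey_a)[0, 0])

# Independent surveys combine by adding their Fisher matrices.
err_combined = np.sqrt(np.linalg.inv(F_survey_a + F_survey_b)[0, 0])
```

Even a survey with much weaker constraints on its own shrinks the combined error, which is the statistical effect behind the quoted $\sim20\%$ improvement.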
• In this letter, the cosmology of a simple NMDC gravity with a $\xi R \phi_{,\mu}\phi^{,\mu}$ term and a free kinetic term is considered in flat geometry and in the presence of dust matter. A logarithmic field transformation $\phi' = \mu \ln \phi$ is proposed phenomenologically to ensure domination of the NMDC term at small field values. Assuming the slow-roll approximation, the equation of motion, scalar field solution and potential are derived as functions of kinematic variables. The field solution and potential are found straightforwardly for power-law, de Sitter and super-accelerating expansions.
• Although the notion of superdeterminism can, in principle, account for the violation of the Bell inequalities, this potential explanation has been roundly rejected by the quantum foundations community. The arguments for rejection, one of the most substantive coming from Bell himself, are critically reviewed. In particular, analysis of Bell's argument reveals an implicit unwarranted assumption: that the Euclidean metric is the appropriate yardstick for measuring distances in state space. Bell's argument is largely negated if this yardstick is instead based on the alternative $p$-adic metric. Such a metric, common in number theory, arises naturally when describing chaotic systems which evolve precisely on self-similar invariant sets in their state space. A locally-causal realistic model of quantum entanglement is developed, based on the premise that the laws of physics ultimately derive from an invariant-set geometry in the state space of a deterministic quasi-cyclic mono-universe. Based on this, the notion of a complex Hilbert vector is reinterpreted in terms of an uncertain selection from a finite sample space of states, leading to a novel form of 'consistent histories' based on number-theoretic properties of the transcendental cosine function. This leads to novel realistic interpretations of position/momentum non-commutativity, EPR, the Bell Theorem and the Tsirelson bound. In this inherently holistic theory - neither conspiratorial, retrocausal, fine tuned nor nonlocal - superdeterminism is not invoked by fiat but emerges from the number-theoretic constraints of these 'consistent histories'. Invariant set theory provides new perspectives on many of the contemporary problems at the interface of quantum and gravitational physics, and, if correct, may signal the end of particle physics beyond the Standard Model.
• A physical picture for quantum mechanics that makes it possible to reconcile it with ordinary common sense is proposed. The picture agrees with the canonical Copenhagen interpretation, making its statements clearer.
• The critical Lyman-Werner (LW) flux required for direct collapse black hole (DCBH) formation, or J$_{\rm crit}$, depends on the shape of the irradiating spectral energy distribution (SED). The SEDs employed thus far have been representative of realistic single stellar populations. We study the effect of binary stellar populations on the formation of DCBH, as a result of their contribution to the LW radiation field. Although binary populations with ages $>$ 10 Myr yield a larger LW photon output, we find that the corresponding values of J$_{\rm crit}$ can be up to 100 times higher than for single stellar populations. We attribute this to the shape of the binary SEDs, as they produce a sub-critical rate of H$^-$ photodetaching 0.76 eV photons as compared to single stellar populations, reaffirming the role that H$^-$ plays in DCBH formation. This further corroborates the idea that DCBH formation is better understood in terms of a critical region in the H$_2$-H$^-$ photodestruction rate parameter space, rather than a single value of LW flux.
• Sep 28 2016 astro-ph.HE arXiv:1609.08606v1
I discuss the spectral energy distribution (SED) of all blazars with redshift detected by the *Fermi* satellite and listed in the 3LAC catalog. I will update the so-called "blazar sequence" from the phenomenological point of view, with no theory or modelling. I will show that: i) pure data show that jet and accretion power are related; ii) the updated blazar sequence maintains the properties of the old version, albeit with a less pronounced dominance of the $\gamma$-ray emission; iii) at low bolometric luminosities, two different types of objects have the same high-energy power: low black hole mass flat spectrum radio quasars and high-mass BL Lacs. Therefore, at low luminosities, there is a very large dispersion of SED shapes; iv) in low-power BL Lacs, the contribution of the host galaxy is important. Remarkably, the luminosity distribution of the host galaxies of BL Lacs is spread over a very narrow range; v) a simple sum of two smoothly joined power laws can describe the blazar SEDs very well.
• We calculated the dimensionless gyromagnetic ratio ("$g$-factor") of self-gravitating, uniformly rotating disks of dust with a constant specific charge $\epsilon$. These disk solutions to the Einstein-Maxwell equations depend on $\epsilon$ and a "relativity parameter" $\gamma$ ($0<\gamma\le 1$) up to a scaling parameter. Accordingly, the $g$-factor is a function $g=g(\gamma,\epsilon)$. The Newtonian limit is characterized by $\gamma \ll 1$, whereas $\gamma\to 1$ leads to a black-hole limit. The $g$-factor, for all $\epsilon$, approaches the values $g=1$ as $\gamma\to 0$ and $g=2$ as $\gamma\to 1$.
• In this paper we present a surprisingly short proof of Minkowski's second theorem. The author hopes there is no mistake in it, though the argument seems to be too plain to contain one. Also, we apply the main construction of the proof to some problems concerning the anomaly of a convex body, the density of the densest lattice packing, lattice point enumerator, and Ehrhart polynomial.
• Let $f(n)$ denote the number of unordered factorizations of a positive integer $n$ into factors larger than $1$. We show that the number of distinct values of $f(n)$, less than or equal to $x$, is at most $\exp \left( C \sqrt{\frac{\log x}{\log \log x}} \left( 1 + o(1) \right) \right)$, where $C=2\pi\sqrt{2/3}$ and $x$ is sufficiently large. This improves upon a previous result of the first author and F. Luca.
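For small $n$, the quantity $f(n)$ studied here can be computed directly by a standard recursion over non-decreasing choices of factors. A brief sketch (not part of the paper, which concerns the asymptotic count of distinct values of $f$):

```python
def f(n, smallest=2):
    """Count unordered factorizations of n into factors larger than 1.
    Each recursive call picks the next factor >= the previous one, so
    every multiset of factors is counted exactly once."""
    if n == 1:
        return 1  # the empty factorization
    total = 1     # n itself as a single factor
    d = smallest
    while d * d <= n:
        if n % d == 0:
            total += f(n // d, d)
        d += 1
    return total
```

For example, $f(12) = 4$ counts $12$, $2\cdot6$, $3\cdot4$ and $2\cdot2\cdot3$, matching entry $a(12)$ of the factorization-count sequence.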
• Sep 28 2016 math.DG arXiv:1609.08601v1
We consider higher dimensional generalisations of normal almost contact structures. Two types of these structures are discussed. In the first case we replace an action of $\mathbb{R}$ (which is the case of almost contact manifolds) by a free action of $\mathbb{R}^n$ on a manifold. We argue that the normality condition can be satisfied only in the case of an almost complex structure as defined for a $\mathcal{K}$-structure. The second case is when the acting group is the Heisenberg group $H_3$ and the almost complex structure is constructed in a different way. In both cases the normality conditions are expressed in terms of the structure tensors.
• Sep 28 2016 math.CV arXiv:1609.08600v1
We give a characterization of the ranges of real Smirnov functions. In addition, we discuss the valence of such functions.
• We study f-biharmonic and bi-f-harmonic submanifolds in both generalized complex and Sasakian space forms. We prove necessary and sufficient conditions for f-biharmonicity and bi-f-harmonicity in the general case and in many particular cases. Some non-existence results are also obtained.
• Fast magnetic reconnection may occur in different astrophysical sources, producing flare-like emission and particle acceleration. Currently, this process is being studied as an efficient mechanism to accelerate particles via a first-order Fermi process. In this work we analyse the acceleration rate and the energy distribution of test particles injected in three-dimensional magnetohydrodynamical (MHD) domains with large-scale current sheets where reconnection is made fast by the presence of turbulence. We study the dependence of the particle acceleration time on the relevant parameters of the embedded turbulence, i.e., the Alfvén speed $V_{\rm A}$, the injection power $P_{\rm inj}$ and scale $k_{\rm inj}$ ($k_{\rm inj} = 1/l_{\rm inj}$). We find that the acceleration time follows a power-law dependence on the particle kinetic energy: $t_{acc} \propto E^{\alpha}$, with $0.2 < \alpha < 0.6$ for a vast range of values of $c / V_{\rm A} \sim 20 - 1000$. The acceleration time decreases with the Alfvén speed (and therefore with the reconnection velocity) as expected, having an approximate dependence $t_{acc} \propto (V_{\rm A} / c)^{-\kappa}$, with $\kappa \sim 2.1- 2.4$ for particles reaching kinetic energies between $1 - 100 \, m_p c^2$, respectively. Furthermore, we find that the acceleration time is only weakly dependent on the $P_{\rm inj}$ and $l_{\rm inj}$ parameters of the turbulence. The particle spectrum develops a high-energy tail which can be fitted by a hard power law already at early times of the acceleration, consistent with the results of kinetic studies of particle acceleration by magnetic reconnection in collisionless plasmas.
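Exponents such as $\alpha$ in $t_{acc} \propto E^{\alpha}$ are obtained by fitting a power law, which reduces to linear regression in log-log space. The data below are synthetic, generated solely to illustrate the fitting step, with an assumed $\alpha = 0.4$ inside the reported range:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true = 0.4                      # assumed value within 0.2 < alpha < 0.6
E = np.logspace(0, 2, 50)             # kinetic energies spanning 1-100 m_p c^2
# Synthetic acceleration times with mild multiplicative scatter
t_acc = 3.0 * E**alpha_true * rng.lognormal(0.0, 0.05, E.size)

# A power law t = A * E**alpha is linear in log-log space:
# log t = alpha * log E + log A, so the slope of the fit estimates alpha.
slope, intercept = np.polyfit(np.log(E), np.log(t_acc), 1)
```

The same regression applied to the simulated particle histories would yield the quoted exponents for each choice of $V_{\rm A}$, $P_{\rm inj}$ and $l_{\rm inj}$.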
• We prove a quantitative estimate on the number of certain singularities in almost minimizing clusters. In particular, we consider the singular points belonging to the lowest stratum of the Federer-Almgren stratification (namely, where each tangent cone does not split off an $\mathbb{R}$ factor) with maximal density. As a consequence we obtain an estimate on the number of triple junctions in $2$-dimensional clusters and on the number of tetrahedral points in $3$ dimensions, which in turn implies that the boundaries of volume-constrained minimizing clusters form at most a finite number of equivalence classes modulo homeomorphism of the boundary, provided that the prescribed volumes vary in a compact set. The method is quite general and applies also to other problems: for instance, to count the number of singularities in a codimension 1 area-minimizing surface in $\mathbb{R}^8$.
• Sep 28 2016 math.CO math.MG arXiv:1609.08596v1
The Ehrhart polynomial of a lattice polytope $P$ encodes information about the number of integer lattice points in positive integral dilates of $P$. The $h^\ast$-polynomial of $P$ is the numerator polynomial of the generating function of its Ehrhart polynomial. A zonotope is any projection of a higher dimensional cube. We give a combinatorial description of the $h^\ast$-polynomial of a lattice zonotope in terms of refined descent statistics of permutations and prove that the $h^\ast$-polynomial of every lattice zonotope has only real roots and therefore unimodal coefficients. Furthermore, we present a closed formula for the $h^\ast$-polynomial of a zonotope in matroidal terms which is analogous to a result by Stanley (1991) on the Ehrhart polynomial. Our results hold not only for $h^\ast$-polynomials but carry over to general combinatorial positive valuations. Moreover, we give a complete description of the convex hull of all $h^\ast$-polynomials of zonotopes in a given dimension: it is a simplicial cone spanned by refined Eulerian polynomials.
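A concrete instance of the descent-statistics description: the unit cube $[0,1]^d$ is a basic zonotope, and its $h^\ast$-polynomial is the classical Eulerian polynomial. The brute-force check below uses only these well-known facts, not the paper's general matroidal formula:

```python
from itertools import permutations

def h_star_cube(d):
    """h*-polynomial coefficients of the unit cube [0,1]^d: the k-th
    coefficient is the Eulerian number counting permutations of
    {1,...,d} with exactly k descents."""
    coeffs = [0] * d
    for perm in permutations(range(1, d + 1)):
        descents = sum(perm[i] > perm[i + 1] for i in range(d - 1))
        coeffs[descents] += 1
    return coeffs
```

For $d = 4$ this gives $[1, 11, 11, 1]$: palindromic and unimodal coefficients summing to $4! = 24$ (the normalized volume of the cube), consistent with the real-rootedness and unimodality results stated above.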
• Sep 28 2016 math.RT arXiv:1609.08593v1
We discuss multi-graded nilpotent tuples of multi-graded vector spaces, which are a generalization of graded nilpotent pairs. The multi-grading yields a natural notion of the shape of such a tuple, and our main interest is to answer the question "Is the number of multi-graded nilpotent tuples of a fixed shape, up to base change in the homogeneous components, finite?" Our methods make use of a translation to the class of so-called "multi-staircase algebras", and we classify their representation types.
• We present the first spectroscopic studies of the $C \ ^1\Sigma^+$ electronic state and the $A \ ^1\Sigma^+$ - $b \ ^3\Pi_{0^+}$ complex in $^7$Li - $^{85}$Rb. Using resonantly-enhanced, two-photon ionization, we observed $v = 7$, 9, 12, 13 and $26-44$ of the $C \ ^1\Sigma^+$ state. We augment the REMPI data with a form of depletion spectra in regions of dense spectral lines. The $A \ ^1\Sigma^+$ - $b \ ^3\Pi_{0^+}$ complex was observed with depletion spectroscopy, depleting to vibrational levels $v=0 \rightarrow 29$ of the $A \ ^1\Sigma^+$ state and $v=8 \rightarrow 18$ of the $b \ ^3\Pi_{0^+}$ state. For all three series, we determine the term energy and vibrational constants. Finally, we outline several possible future projects based on the data presented here.
• We provide explicit lower bounds for the ground-state energy of the renormalized Nelson model in terms of the coupling constant $\alpha$ and the number of particles $N$, uniform in the meson mass and valid even in the massless case. In particular, for any number of particles $N$ and large enough $\alpha$ we provide a bound of the form $-C\alpha^2 N^3\log^2(\alpha N)$, where $C$ is an explicit positive numerical constant; and if $\alpha$ is sufficiently small, we give one of the form $-C\alpha^2 N^3\log^2 N$ for $N \geq 2$, and $-C\alpha^2$ for $N = 1$. Whereas it is known that the renormalized Hamiltonian of the Nelson model is bounded below (as realized by E. Nelson) and implicit lower bounds have been given elsewhere (as in a recent work by Gubinelli, Hiroshima, and Lörinczi), ours seem to be the first fully explicit lower bounds with a reasonable dependence on $\alpha$ and $N$. We emphasize that the logarithmic term in the bounds above is probably an artifact in our calculations, since one would expect that the ground-state energy should behave as $-C\alpha^2 N^3$ for large $N$ or $\alpha$, as in the polaron model of H. Fröhlich.
• A mechanism of a chiral spin wave rotation is introduced to systematically generate mesoscopic Greenberger-Horne-Zeilinger states.
• We study simple yet efficient algorithms for scheduling $n$ independent monotonic moldable tasks on $m$ identical processors; the objective is to (1) minimize the makespan, or (2) maximize the sum of values of tasks completed by a deadline. The workload of a monotonic task is non-decreasing in the number of assigned processors. In this paper, we propose a scheduling algorithm that achieves a processor utilization of $r$ when the number of processors assigned to a task $j$ is the minimal number needed to complete $j$ by a time $d$. Here, $r$ equals $(1-k/m)/2$ in the general setting, where $k$ is the maximum number of processors allocated to any task (in large computing clusters, $m \gg k$ and $k/m$ approaches 0). More importantly, in many real applications, when a parallel task is executed on a small set of $f$ processors, the speedup is linearly proportional to $f$. We show that this property is powerful for designing algorithms more efficient than the existing ones, illustrated in a typical case where $f=5$: we propose an algorithm that achieves a utilization of $r=3(1-(k+3)/m)/4$, and we also discuss its extension to an arbitrary $f$. Based on the above schedule, we propose an $r(1+\epsilon)$-approximation algorithm with a complexity of $O(n\log(n/\epsilon))$ for the first scheduling objective. For the second objective, we propose a generic greedy algorithm and, by analyzing it, obtain an $r$-approximation algorithm with a complexity of $O(n)$. For the first objective, the algorithm proposed in the typical setting is simpler and performs better than most existing algorithms when $m \gg k$; the second objective is considered here for the first time.
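The core allocation rule described in the abstract (give each task the minimal number of processors needed to meet the time bound $d$) can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: the workload model `work * (1 + overhead * (p - 1))` and all numbers are assumptions standing in for a generic monotonic moldable task.

```python
# Sketch: assign each moldable task the minimal number of processors
# needed to finish by deadline d, under a monotonic workload model.
# Hypothetical illustration; not the paper's exact algorithm.

def min_processors(work, d, m, overhead=0.1):
    """Smallest p in 1..m such that execution time <= d.
    The total workload is non-decreasing in p (monotonic task),
    modeled here as work * (1 + overhead * (p - 1))."""
    for p in range(1, m + 1):
        workload = work * (1 + overhead * (p - 1))  # workload grows with p
        if workload / p <= d:
            return p
    return None  # the task cannot meet the deadline even with m processors

def schedule(tasks, d, m):
    """Greedy: give every task its minimal processor count for deadline d."""
    return {j: min_processors(w, d, m) for j, w in tasks.items()}

tasks = {"t1": 10.0, "t2": 4.0, "t3": 25.0}
print(schedule(tasks, d=6.0, m=16))  # → {'t1': 2, 't2': 1, 't3': 7}
```

Minimizing each task's processor count leaves processors free for other tasks, which is what drives the utilization bound $r$ in the abstract.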
• We consider the integer QH state on Riemann surfaces with conical singularities, with the main objective of detecting the effect of the gravitational anomaly directly from the form of the wave function on a singular geometry. We propose a formula expressing the normalisation factor of the holomorphic state in terms of the regularized zeta determinant on conical surfaces and check this relation for some model geometries. We also comment on possible extensions of this result to the fractional QH states.
• We present a method to determine the proton-to-helium ratio in cosmic rays at ultra-high energies. It makes use of the exponential slope, $\Lambda$, of the tail of the $X_{\rm max}$ distribution measured by an air shower experiment. The method is quite robust with respect to uncertainties from modeling hadronic interactions, to systematic errors on $X_{\rm max}$ and energy, and to the possible presence of primary nuclei heavier than helium. Obtaining the proton-to-helium ratio with air shower experiments would be a remarkable achievement. To quantify the applicability of a particular mass-sensitive variable for mass composition analysis despite hadronic uncertainties, we introduce as a metric the `analysis indicator' and find an improved performance of the $\Lambda$ method compared to other variables currently used in the literature. The fraction of events in the tail of the $X_{\rm max}$ distribution can provide additional information on the presence of nuclei heavier than helium in the primary beam.
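The slope $\Lambda$ is defined by the tail of the depth distribution falling off as $dN/dX \propto \exp(-X/\Lambda)$. A minimal sketch of extracting it, on purely synthetic counts (the cutoff depth, bin values, and $\Lambda = 60\ \mathrm{g/cm^2}$ are invented for illustration and not taken from the paper):

```python
# Sketch: extracting the exponential slope Lambda from the tail of an
# X_max distribution, assuming dN/dX ~ exp(-X/Lambda) beyond a cutoff.
# Synthetic data; hypothetical illustration of the fitting step only.
import math

def fit_lambda(x, counts):
    """Least-squares fit of log(counts) vs. depth x; returns Lambda = -1/slope."""
    logs = [math.log(c) for c in counts]
    n = len(x)
    mx = sum(x) / n
    my = sum(logs) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, logs)) \
            / sum((xi - mx) ** 2 for xi in x)
    return -1.0 / slope

# Synthetic tail: bins in g/cm^2 with counts falling as exp(-(x-850)/60)
depths = [850, 870, 890, 910, 930]
counts = [1000 * math.exp(-(d - 850) / 60.0) for d in depths]
print(round(fit_lambda(depths, counts), 1))  # recovers Lambda = 60.0 g/cm^2
```

In a real analysis the tail counts are Poisson-distributed and the fit would be a likelihood fit over measured showers; the point here is only that $\Lambda$ is a single, slope-type observable of the deepest showers.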
• Given a $1$-cocycle $b$ with coefficients in an orthogonal representation, we show that every finite-dimensional summand of $b$ is cohomologically trivial if and only if $\| b(X_n) \|^2/n$ tends to a constant in probability, where $X_n$ is the trajectory of the random walk $(G,\mu)$. As a corollary, we obtain sufficient conditions for $G$ to satisfy Shalom's property $H_{\mathrm{FD}}$. Another application is the convergence in probability to a constant of $\mu^{*n}(e) -\mu^{*n}(g)$, $n\gg m$, normalized by its average with respect to $\mu^{*m}$, for any amenable group without infinite virtually Abelian quotients. Finally, we show that the harmonic equivariant mapping of $G$ to a Hilbert space obtained as a $U$-ultralimit of normalized $\mu^{*n}- g \mu^{*n}$ can depend on the ultrafilter $U$ for some groups.
• Small satellite systems enable a whole new class of missions for navigation, communications, remote sensing, and scientific research for both civilian and military purposes. As individual spacecraft are limited by size, mass, and power constraints, mass-produced small satellites in large constellations or clusters could be useful in many science missions, such as gravity mapping, tracking of forest fires, and finding water resources. Constellations of satellites provide improved spatial and temporal resolution of the target, and they enable innovative applications by replacing a single asset with several very capable spacecraft. With increasing levels of autonomy, there will be a need for remote communication networks to enable communication between spacecraft. These space-based networks will need to configure and maintain dynamic routes, manage intermediate nodes, and reconfigure themselves to achieve mission objectives. Hence, inter-satellite communication is a key aspect when satellites fly in formation. In this paper, we survey the research being conducted in the small-satellite community on implementing inter-satellite communications, organized according to the Open Systems Interconnection (OSI) model. We review the design parameters applicable to the first three layers of the OSI model, i.e., the physical, data link, and network layers. Based on the survey, we present a comprehensive list of design parameters useful for achieving inter-satellite communications in multiple small-satellite missions. Specific topics include proposed solutions for some of the challenges faced by small satellite systems, enabling operations using a network of small satellites, and examples of small satellite missions involving formation flying.
• For the two-dimensional one-component Coulomb plasma, we derive an asymptotic expansion of the free energy up to order $N$, the number of particles of the gas, with an effective error bound $N^{1-\kappa}$ for some constant $\kappa > 0$. This expansion is based on approximating the Coulomb gas by a quasi-free Yukawa gas. Further, we prove that the fluctuations of the linear statistics are given by a Gaussian free field at any positive temperature. Our proof of this central limit theorem uses a loop equation for the Coulomb gas, the free energy asymptotics, and rigidity bounds on the local density fluctuations of the Coulomb gas, which we obtained in a previous paper.
• The anarchy principle leading to the see-saw ensemble is studied analytically with the usual tools of random matrix theory. The probability density function for the see-saw ensemble of $N\times N$ matrices is obtained in terms of a multidimensional integral. This integral involves all light neutrino masses, leading to a complicated probability density function. It is shown that the probability density function for the neutrino mixing angles and phases is the appropriate Haar measure. The decoupling of the light neutrino masses and neutrino mixings implies no correlation between the neutrino mass eigenstates and the neutrino mixing matrix, in contradiction with observations but in agreement with some of the claims found in the literature.
• Motivated by the projectable Horava-Lifshitz model/mimetic matter scenario, we consider a particular modification of standard gravity, which manifests itself as an imperfect low-pressure fluid. While practically indistinguishable from a collection of non-relativistic weakly interacting particles on cosmological scales, it leaves drastically different signatures in the Solar system. The main effect stems from gravitational focusing of the flow of Imperfect Dark Matter passing near the Sun. This entails a strong amplification of the Imperfect Dark Matter energy density compared to its average value in the surrounding halo. The enhancement is many orders of magnitude larger than in the case of Cold Dark Matter, provoking deviations of the metric at second order in the Newtonian potential. The effects of gravitational focusing are prominent enough to substantially affect planetary dynamics. Using the existing bound on the PPN parameter $\beta_{PPN}$, we deduce a stringent constraint on the unique constant of the model.
• Let $b(n)$ denote the number of cubic partition pairs of $n$. In this paper, we aim to provide a strategy to obtain arithmetic properties of $b(n)$. This gives affirmative answers to two of Lin's conjectures.
• In strong laser fields, sub-femtosecond control of chemical reactions with the carrier-envelope phase (CEP) becomes feasible. We have studied the control of reaction dynamics of acetylene and allene in intense few-cycle laser pulses at 750 nm, where ionic fragments are recorded with a reaction microscope. We find that by varying the CEP and intensity of the laser pulses it is possible to steer the motion of protons in the molecular dications, enabling control over deprotonation and isomerization reactions. The experimental results are compared to predictions from a quantum dynamical model, where the control is based on the manipulation of the phases of a vibrational wave packet by the laser waveform. The measured intensity dependence in the CEP-controlled deprotonation of acetylene is well captured by the model. In the case of the isomerization of acetylene, however, we find differences in the intensity dependence between experiment and theory. For the isomerization of allene, an inversion of the CEP-dependent asymmetry is observed when the intensity is varied, which we discuss in light of the quantum dynamical model. The inversion of the asymmetry is found to be consistent with a transition from non-sequential to sequential double ionization.
• We use recently published redshift space distortion measurements of the cosmological growth rate, $f\sigma_8(z)$, to examine whether the linear evolution of perturbations in the $R_{\rm h}=ct$ cosmology is consistent with the observed development of large-scale structure. We find that these observations favour $R_{\rm h}=ct$ over the version of $\Lambda$CDM optimized with the joint analysis of Planck and linear growth rate data, particularly in the redshift range $0 < z < 1$, where a significant curvature in the functional form of $f\sigma_8(z)$ predicted by the standard model, but not by $R_{\rm h}=ct$, is absent in the data. When $\Lambda$CDM is optimized using solely the growth rate measurements, however, the two models fit the observations equally well, though in this case the low-redshift measurements find a lower value for the fluctuation amplitude than is expected in Planck $\Lambda$CDM. Our results strongly affirm the need for more precise measurements of $f\sigma_8(z)$ at all redshifts, but especially at $z < 1$.
• We take a third-order approach to the fourth Painlevé equation and indicate the value of such an approach to other second-order ODEs in the Painlevé-Gambier list of 50. (Sep 28 2016, math.CA, arXiv:1609.08575v1)
• Remote-memory-access models, also known as one-sided communication models, are becoming an interesting alternative to traditional two-sided communication models in the field of High Performance Computing. In this paper we extend previous work on an MPI-based, locality-aware remote-memory-access model with an asynchronous progress engine for non-blocking communication operations. Most previous related work suggests driving progression on communication through an additional thread within the application process. In contrast, our scheme uses an arbitrary number of dedicated processes to drive asynchronous progression. Further, we describe a prototypical library implementation of our concepts, namely DART, which we use to quantitatively evaluate our design against an MPI-3 baseline reference. The evaluation consists of a micro-benchmark measuring the overlap of communication and computation, and a scientific application kernel assessing the total performance impact on realistic use-cases. Our benchmarks show that our asynchronous progression scheme can overlap computation and communication efficiently and leads to substantially lower communication costs in real applications.
• Monte Carlo simulations and finite-size scaling analysis are used to investigate the phase transition and critical behavior of the nonequilibrium three-state block voter model on square lattices. We show that the collective behavior of this system exhibits a continuous order-disorder phase transition at a critical noise parameter, which is an increasing function of the number of spins inside the persuasive cluster. Our results for the critical exponents and other universal quantities indicate that the system belongs to the universality class of the equilibrium three-state Potts model in two dimensions. Moreover, our analysis yields an estimate of the long-range exponents governing the decay of the critical amplitudes of relevant quantities with the range of the interactions.
• The motion of the solar system with respect to the cosmic rest frame modulates the monopole of the Epoch of Reionization 21-cm signal into a dipole. This dipole has a characteristic frequency dependence that is dominated by the frequency derivative of the monopole signal. We argue that although the signal is weaker by a factor of $\sim200$, there are significant benefits in measuring the dipole. Most importantly, the direction of the cosmic velocity vector is known exquisitely well from the cosmic microwave background and is not aligned with the galaxy velocity vector that modulates the foreground monopole. Moreover, an experiment designed to measure a dipole can rely on differencing patches of the sky rather than making an absolute signal measurement, which helps with some systematic effects.
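The frequency dependence described above can be made concrete with a small numeric sketch. To first order in $\beta = v/c$, a Doppler-boosted monopole $T(\nu)$ acquires a dipole $T_{\rm dip}(\nu) = \beta\cos\theta\,[T(\nu) - \nu\, dT/d\nu]$, so the dipole is largest where the monopole varies fastest in frequency. The Gaussian absorption trough below is a toy stand-in for the EoR monopole; its depth, width, and centre frequency are invented for illustration and not taken from the paper.

```python
# Sketch: Doppler dipole of a frequency-dependent monopole T(nu).
# To first order in beta = v/c, the dipole along cos(theta) = 1 is
#   T_dip(nu) = beta * (T(nu) - nu * dT/dnu),
# dominated by the frequency-derivative term where T(nu) changes fast.
# Toy numbers throughout; hypothetical illustration only.
import math

BETA = 370.0 / 299792.458  # solar-system speed from the CMB dipole, v/c

def monopole(nu):
    """Toy 21-cm monopole: 100 mK absorption trough centred at 78 MHz."""
    return -100.0 * math.exp(-((nu - 78.0) ** 2) / (2 * 4.0 ** 2))

def dipole_amplitude(nu, dnu=1e-4):
    """Numerical T(nu) - nu * dT/dnu, scaled by beta."""
    dT = (monopole(nu + dnu) - monopole(nu - dnu)) / (2 * dnu)
    return BETA * (monopole(nu) - nu * dT)

# The dipole peaks near the steepest slope of the trough, not at its centre
print(abs(dipole_amplitude(74.0)) > abs(dipole_amplitude(78.0)))
```

This is why the dipole carries a distinctive spectral signature: its extrema sit on the flanks of the monopole features, where $dT/d\nu$ dominates.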

Elizabeth Crosson Sep 28 2016 16:57 UTC

Thank you for the question! This construction due to Peres is interesting, but if I'm analyzing it correctly then I don't think it would work in the context of our paper. The ground state probability distribution of the Hamiltonian with couplings in Peres eq (20) looks like the ground state of a d

...(continued)
Barbara Terhal Sep 28 2016 14:09 UTC

In http://journals.aps.org/pra/abstract/10.1103/PhysRevA.32.3266 Asher Peres showed how to modify the Feynman Hamiltonian to make sure that a Hamiltonian evolution starting at t=0 lands after some fixed time at the desired output time so that the Hamiltonian effectively corresponds to that of large

...(continued)
Tom Wong Sep 26 2016 14:15 UTC

The supplemental material is in the arXiv source. Once you extract the tarball, it's under anc/supplemental_material.pdf.

HA Sep 22 2016 18:51 UTC

The supplemental material is missing! It would great to see the LP optimisation method used.

Marco Piani Sep 19 2016 20:13 UTC

Is it actually decidable? :-)

Toby Cubitt Sep 19 2016 15:00 UTC

I like this sentence from the conclusion: "There is, however, a second possible answer to our question: yes".

Māris Ozols Sep 15 2016 21:30 UTC

Here is a link for those who also haven't heard of SciPost before: https://scipost.org/

Zoltán Zimborás Sep 15 2016 18:12 UTC

This is the very first paper of SciPost, waiting for the first paper of "Quantum" (http://quantum-journal.org). There are radical (and good!) changes going on in scientific publishing.

JRW Sep 14 2016 07:46 UTC

"Ni." would be slightly shorter, but some may find it offensive.