# Top arXiv papers

• Jul 30 2015 quant-ph math-ph math.MP arXiv:1507.08255v1
We consider the problem of building an arbitrary $N\times N$ real orthogonal operator using a finite set, $S$, of elementary quantum optics gates operating on $m\leq N$ modes - the problem of universality of $S$ on $N$ modes. In particular, we focus on the universality problem of an $m$-mode beamsplitter. Using methods of control theory and some properties of rotations in three dimensions, we prove that any nontrivial real 2-mode and "almost" any nontrivial real $3$-mode beamsplitter is universal on $m\geq3$ modes.
• We introduce and study the task of assisted coherence distillation. This task arises naturally in bipartite systems where both parties work together to generate the maximal possible coherence on one of the subsystems. Only incoherent operations are allowed on the target system while general local quantum operations are permitted on the other, an operational paradigm that we call local quantum-incoherent operations and classical communication (LQICC). We show that the asymptotic rate of assisted coherence distillation for pure states is equal to the coherence of assistance, a direct analog of the entanglement of assistance, whose properties we characterize. Our findings imply a novel interpretation of the von Neumann entropy: it quantifies the maximum amount of extra quantum coherence a system can gain when receiving assistance from a collaborative party. Our results are generalized to coherence localization in a multipartite setting and possible applications are discussed.
• The performance of automatic speech recognition (ASR) has improved tremendously due to the application of deep neural networks (DNNs). Despite this progress, building a new ASR system remains a challenging task, requiring various resources, multiple training stages and significant expertise. This paper presents our Eesen framework which drastically simplifies the existing pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen involves learning a single recurrent neural network (RNN) predicting context-independent targets (phonemes or characters). To remove the need for pre-generated frame labels, we adopt the connectionist temporal classification (CTC) objective function to infer the alignments between speech and label sequences. A distinctive feature of Eesen is a generalized decoding approach based on weighted finite-state transducers (WFSTs), which enables the efficient incorporation of lexicons and language models into CTC decoding. Experiments show that compared with the standard hybrid DNN systems, Eesen achieves comparable word error rates (WERs), while at the same time speeding up decoding significantly.
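The CTC objective mentioned above rests on a simple many-to-one mapping from frame-level label paths to output sequences. A minimal sketch of that collapse rule (the function name and blank symbol are illustrative, not Eesen's API):

```python
# Illustrative sketch of the CTC path-collapse rule: merge consecutive
# repeated labels, then drop the blank symbol. Names are ours, not Eesen's.

BLANK = "_"

def ctc_collapse(frame_labels):
    """Collapse a per-frame label path into an output sequence:
    first merge consecutive repeats, then remove blanks."""
    out = []
    prev = None
    for sym in frame_labels:
        if sym != prev:
            if sym != BLANK:
                out.append(sym)
            prev = sym
    return "".join(out)

# Both "__hh_e_ll_llo" and "he_l_lo" collapse to "hello".
```

Decoding searches over likely paths under the RNN's per-frame distribution; Eesen's contribution is composing this mapping with lexicon and language-model WFSTs so that search is efficient.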
• Jul 30 2015 cs.CV arXiv:1507.08173v1
Mining useful clusters from high-dimensional data has received significant attention from the computer vision and pattern recognition community in recent years. Linear and non-linear dimensionality reduction has played an important role in overcoming the curse of dimensionality. However, such methods are often beset by three problems: high computational complexity (usually associated with nuclear norm minimization), non-convexity (for matrix factorization methods) and susceptibility to gross corruptions in the data. In this paper we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high-dimensional datasets. We target low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features, by designing a convex optimization problem. The resulting algorithm is fast, efficient and scalable for huge datasets, with $\mathcal{O}(n \log(n))$ computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models.
• Jul 30 2015 cs.CR arXiv:1507.08094v1
• We propose a public key encryption scheme based on the Boolean Satisfiability Problem (SAT). The public key is given by a SAT formula and the private key is the satisfying assignment. Encryption is a probabilistic algorithm that maps the bits of the message to randomly generated Boolean functions, represented in algebraic normal form. The public key implies each of these functions to be true or false, so bit-wise decryption is performed by applying each function to the private key. Our scheme does not provide signatures.
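A toy sketch of the bit-wise mechanism just described, under loud assumptions: the key size, monomial count, and degree bound below are invented for illustration and differ from the paper's actual construction. Each bit is hidden in a random ANF function forced to evaluate to that bit on the secret satisfying assignment:

```python
import random

# Toy sketch of the bit-wise idea only; parameters (key size, 8 monomials,
# degree <= 3) are illustrative assumptions, not the paper's scheme.

def random_anf(n_vars, n_monomials, rng):
    """An ANF is an XOR of AND-monomials; each monomial is a frozenset of
    variable indices (the empty set plays the role of the constant 1)."""
    return [frozenset(rng.sample(range(n_vars), rng.randint(1, 3)))
            for _ in range(n_monomials)]

def evaluate(anf, assignment):
    value = 0
    for monomial in anf:
        value ^= all(assignment[i] for i in monomial)
    return value

def encrypt_bit(bit, key, rng):
    anf = random_anf(len(key), n_monomials=8, rng=rng)
    if evaluate(anf, key) != bit:
        anf.append(frozenset())    # flip the constant term to force the bit
    return anf

rng = random.Random(0)
key = [rng.randint(0, 1) for _ in range(16)]        # private: the assignment
ciphertext = [encrypt_bit(b, key, rng) for b in [1, 0, 1, 1]]
recovered = [evaluate(c, key) for c in ciphertext]  # bit-wise decryption
```

Decryption works by construction: each ciphertext function was forced to evaluate to its plaintext bit on the private assignment.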
• Probabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987). Probabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis.
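For readers new to PP, here is a bare-bones illustration of the sampling loop such frameworks automate: a random-walk Metropolis sampler for a standard normal target. This is only a conceptual stand-in; PyMC3's NUTS is gradient-based and self-tuning, and PyMC3 users never write this loop themselves.

```python
import math
import random

# Conceptual stand-in for what a PP framework's sampler automates: a
# random-walk Metropolis chain targeting N(0, 1). All names are ours.

def log_post(x):
    return -0.5 * x * x            # unnormalized log density of N(0, 1)

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        delta = log_post(proposal) - log_post(x)
        if rng.random() < math.exp(min(0.0, delta)):
            x = proposal           # accept; otherwise keep the current state
        samples.append(x)
    return samples

draws = metropolis(20000)
mean = sum(draws) / len(draws)     # should be near 0 for N(0, 1)
```

A framework like PyMC3 replaces this hand-written kernel with automatically tuned, gradient-informed proposals, which is what makes complex hierarchical models practical.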
• Sparse principal component analysis (PCA) addresses the problem of finding a linear combination of the variables in a given data set with a sparse coefficient vector that maximizes the variability of the data. This model enhances the ability to interpret the principal components, and is applicable in a wide variety of fields including genetics and finance, to name a few. We suggest a necessary coordinate-wise optimality condition, and show its superiority over the stationarity-based condition that is commonly used in the literature and that is the basis for many of the algorithms designed to solve the problem. We devise algorithms based on the new optimality condition, and provide numerical experiments supporting our assertion that algorithms guaranteed to converge to points satisfying a stronger optimality condition perform better than algorithms that converge to points satisfying weaker ones.
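As a concrete illustration of the sparse PCA problem itself (not of the paper's algorithms), a truncated power iteration that keeps only the $s$ largest-magnitude coordinates at each step recovers a sparse leading direction of a covariance matrix:

```python
# Illustration of the sparse PCA problem, not the paper's methods: truncated
# power iteration zeroes all but the s largest-magnitude coordinates of each
# iterate, driving the leading-eigenvector estimate toward s-sparsity.

def truncated_power_iteration(A, s, iters=100):
    n = len(A)
    x = [1.0 / n ** 0.5] * n                     # uniform starting vector
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        keep = set(sorted(range(n), key=lambda i: -abs(y[i]))[:s])
        y = [y[i] if i in keep else 0.0 for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return x

# Covariance matrix whose leading direction is supported on coordinates 0, 1.
A = [[4.0, 3.0, 0.0, 0.0],
     [3.0, 4.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
x = truncated_power_iteration(A, s=2)            # ~ [0.71, 0.71, 0.0, 0.0]
```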
• Spin-based electronics or spintronics relies on the ability to store, transport and manipulate electron spin polarization with great precision. In its ultimate limit, information is stored in the spin state of a single electron, at which point quantum information processing also becomes a possibility. Here we demonstrate the manipulation, transport and read-out of individual electron spins in a linear array of three semiconductor quantum dots. First, we demonstrate single-shot read-out of three spins with fidelities of 97% on average, using an approach analogous to the operation of a charge-coupled device (CCD). Next, we perform site-selective control of the three spins, thereby writing the content of each pixel of this "Single-Spin CCD". Finally, we show that shuttling an electron back and forth in the array hundreds of times, covering a cumulative distance of 80 $\mu$m, has negligible influence on its spin projection. Extrapolating these results to much larger arrays points to a diverse range of potential applications, from quantum information to imaging and sensing.
• This report gives the 2014 self-consistent set of values of the constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA). These values are based on a least-squares adjustment that takes into account all data available up to 31 December 2014. The recommended values may also be found on the World Wide Web at physics.nist.gov/constants.
• We investigate the origin of superconductivity in doped SrTiO$_3$ (STO) using a combination of density functional and strong coupling theories within the framework of quantum criticality. Our density functional calculations of the ferroelectric soft mode frequency as a function of doping reveal a crossover from quantum paraelectric to ferroelectric behavior at a doping level coincident with the experimentally observed top of the superconducting dome. Based on this finding, we explore a model in which the superconductivity in STO is enabled by its proximity to the ferroelectric quantum critical point and the soft mode fluctuations provide the pairing interaction on introduction of carriers. Within our model, the low doping limit of the superconducting dome is explained by the emergence of the Fermi surface, and the high doping limit by departure from the quantum critical regime. We predict that the highest critical temperature will increase and shift to lower carrier doping with increasing $^{18}$O isotope substitution, a scenario that is experimentally verifiable.
• The effect of boundary deformation on the non-separable entanglement which appears in the classical electromagnetic field is considered. A quantum chaotic billiard geometry is used to explore the influence of a mechanical modification of the optical fiber cross-sectional geometry on the production of non-separable entanglement within classical fields. For the experimental realization of our idea, we propose an optical fiber with a cross section that belongs to the family of Robnik chaotic billiards. Our results show that a modification of the fiber geometry from a regular to a chaotic regime can enhance the transverse mode entanglement. Our proposal can be realized in a very simple experimental set-up which consists of a specially designed optical fiber where non-entangled light enters at the input end and entangled light propagates out at the output end after interacting with a fiber boundary that is known to generate chaotic behavior.
• The ATLAS collaboration has recently reported a 2.6 sigma excess in the search for a heavy resonance decaying into a pair of weak gauge bosons. The analysis considers only fully hadronic final states. If the observed excess really originates from gauge boson decays, other decay modes of the gauge bosons would inevitably leave a trace in other exotic searches. In this paper, we propose using the decay of the Z boson into a pair of neutrinos to test the excess. This decay leads to very large missing energy and can be probed with conventional dark matter searches at the LHC. We discuss the current constraints from the dark matter searches and their prospects. We find that optimizing these searches may give a very robust probe of the resonance, even with the data already available from the 8 TeV LHC.
• This work introduces algorithms able to exploit contextual information in order to improve maximum-likelihood (ML) parameter estimation in finite mixture models (FMM), demonstrating their benefits and properties in several scenarios. The proposed algorithms are derived in a probabilistic framework with regard to situations where the regular FMM graphs can be extended with context-related variables, respecting the standard expectation-maximization (EM) methodology and, thus, rendering explicit supervision completely redundant. We show that, by direct application of the missing information principle, the compared algorithms' learning behaviour operates between the extremities of supervised and unsupervised learning, proportionally to the information content of contextual assistance. Our simulation results demonstrate the superiority of context-aware FMM training as compared to conventional unsupervised training in terms of estimation precision, standard errors, convergence rates and classification accuracy or regression fitness in various scenarios, while also highlighting important differences among the outlined situations. Finally, the improved classification outcome of contextually enhanced FMMs is showcased in a brain-computer interface application scenario.
• Approximate Newton methods are a standard optimization tool which aim to maintain the benefits of Newton's method, such as a fast rate of convergence, whilst alleviating its drawbacks, such as computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov Decision Processes (MDPs). We first analyse the structure of the Hessian of the objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs and we use this analysis to motivate two Gauss-Newton Methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods involve approximating the Hessian by ignoring certain terms in the Hessian which are difficult to estimate. The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees including guaranteed ascent directions, invariance to affine transformation of the parameter space, and convergence guarantees. We finally provide a unifying perspective of key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EM-algorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains.
• We study infinitely divisible distributions in bi-free probability. We prove a limit theorem of the sums of bi-free two-faced pairs of random variables within a triangular array. Then, by using the full Fock space operator model, we show that a two-faced pair of random variables has a bi-free infinitely divisible distribution if and only if its distribution is the limit distribution in our limit theorem. Finally, we characterize the bi-free infinite divisibility of the distribution of a pair of two random variables in terms of bi-free Levy processes.
• This paper focuses on the estimation of low-complexity signals when they are observed through $M$ uniformly quantized compressive observations. Among such signals, we consider 1-D sparse vectors, low-rank matrices, or compressible signals that are well approximated by one of these two models. In this context, we prove the estimation efficiency of a variant of Basis Pursuit Denoise, called Consistent Basis Pursuit (CoBP), enforcing consistency between the observations and the re-observed estimate, while promoting its low-complexity nature. We show that the reconstruction error of CoBP decays like $M^{-1/4}$ when all parameters but $M$ are fixed. Our proof is connected to recent bounds on the proximity of vectors or matrices when (i) those belong to a set of small intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they share the same quantized (dithered) random projections. By solving CoBP with a proximal algorithm, we provide extensive numerical observations that confirm the theoretical bound as $M$ is increased, displaying even faster error decay than predicted. The same phenomenon is observed in the special, yet important case of 1-bit CS.
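The consistency constraint at the heart of CoBP can be sketched directly; the convex program and its proximal solver are omitted here, and the bin width, dither range, and sizes below are illustrative assumptions:

```python
import math
import random

# Sketch of the CoBP consistency notion: with bin width delta and dither u_i,
# observation i is q_i = delta * floor((<a_i, x> + u_i) / delta); an estimate
# z is "consistent" when re-observing and re-quantizing z reproduces every
# q_i exactly. The solver enforcing this constraint is not shown.

def quantize(y, delta, dither):
    return delta * math.floor((y + dither) / delta)

def is_consistent(z, A, q, delta, dither):
    return all(
        quantize(sum(a * v for a, v in zip(row, z)), delta, u) == q_i
        for row, q_i, u in zip(A, q, dither))

rng = random.Random(1)
x = [0.0, 1.3, 0.0, -0.7]                        # sparse ground truth
A = [[rng.gauss(0.0, 1.0) for _ in x] for _ in range(6)]
delta = 0.5
dither = [rng.uniform(0.0, delta) for _ in A]
q = [quantize(sum(a * v for a, v in zip(row, x)), delta, u)
     for row, u in zip(A, dither)]
ok = is_consistent(x, A, q, delta, dither)       # ground truth is consistent
```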
• In this paper we investigate the properties of minus partial order in unital rings. We generalize several results well known for matrices and bounded linear operators on Banach spaces. We also study linear maps preserving the minus partial order in unital semisimple Banach algebras with essential socle.
• Markov chain Monte Carlo (MCMC) algorithms are used to estimate features of interest of a distribution. The Monte Carlo error in estimation has an asymptotic normal distribution whose multivariate nature has so far been ignored in the MCMC community. We present a class of multivariate spectral variance estimators for the asymptotic covariance matrix in the Markov chain central limit theorem and provide conditions for strong consistency. We also show strong consistency of the eigenvalues of the estimator. Finally, we examine the finite-sample properties of the multivariate spectral variance estimators and their eigenvalues in the context of a vector autoregressive process of order 1.
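A hedged univariate sketch of the estimation problem: the batch-means estimator below is a close relative of the spectral variance estimators the paper analyzes (which are multivariate and use spectral windows), shown here on an AR(1) chain whose asymptotic variance is known in closed form:

```python
import random

# Batch-means estimator of the asymptotic variance in the Markov chain CLT,
# a simple relative of the paper's multivariate spectral variance estimators.

def batch_means_variance(chain, n_batches):
    b = len(chain) // n_batches                   # batch size
    mu = sum(chain[:b * n_batches]) / (b * n_batches)
    batch_mus = [sum(chain[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    return b * sum((m - mu) ** 2 for m in batch_mus) / (n_batches - 1)

# AR(1) chain x_t = phi x_{t-1} + e_t: the asymptotic variance of the sample
# mean is sigma^2 / (1 - phi)^2 = 4 for phi = 0.5, sigma = 1, whereas a naive
# i.i.d. estimate would sit near sigma^2 / (1 - phi^2) ~ 1.33.
rng = random.Random(2)
phi, x, chain = 0.5, 0.0, []
for _ in range(50000):
    x = phi * x + rng.gauss(0.0, 1.0)
    chain.append(x)
est = batch_means_variance(chain, n_batches=50)   # should be near 4
```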
• We consider the Thomson scattering of an electron in an ultra-intense chirped laser pulse. It is found that the introduction of a negative chirp means the electron enters a high frequency region of the field while it still has a large proportion of its original energy. This results in a significant enhancement of the energy and intensity of the emitted radiation as compared to the case without chirping.
• Using superconformal methods we derive an explicit de Sitter supergravity action invariant under spontaneously broken local ${\cal N}=1$ supersymmetry. The supergravity multiplet interacts with a nilpotent goldstino multiplet. We present a complete locally supersymmetric action including the graviton and the fermionic fields, gravitino and goldstino, with no scalars. In the global limit, when the supergravity multiplet decouples, our action reproduces the Volkov-Akulov theory. In the unitary gauge, where the goldstino vanishes, we recover pure supergravity with a positive cosmological constant. The classical equations of motion, with all fermions vanishing, have a maximally symmetric solution: de Sitter space.
• A local convergence rate is established for an orthogonal collocation method based on Gauss quadrature applied to an unconstrained optimal control problem. If the continuous problem has a sufficiently smooth solution and the Hamiltonian satisfies a strong convexity condition, then the discrete problem possesses a local minimizer in a neighborhood of the continuous solution, and as the number of collocation points increases, the discrete solution converges exponentially fast in the sup-norm to the continuous solution. This is the first convergence rate result for an orthogonal collocation method based on global polynomials applied to an optimal control problem.
• We study the stability of anti-de Sitter (AdS) spacetime to spherically symmetric perturbations of a real scalar field in general relativity. Further, we work within the context of the "two time framework" (TTF) approximation, which describes the leading nonlinear effects for small amplitude perturbations, and is therefore suitable for studying the weakly turbulent instability of AdS---including both collapsing and non-collapsing solutions. We have previously identified a class of quasi-periodic (QP) solutions to the TTF equations, and in this work we analyze their stability. We show that there exist several families of QP solutions that are stable to linear order, and we argue that these solutions represent islands of stability in TTF. We extract the eigenmodes of small oscillations about QP solutions, and we use them to predict approximate recurrence times for generic non-collapsing initial data in the full (non-TTF) system. Alternatively, when sufficient energy is driven to high-frequency modes, as occurs for initial data far from a QP solution, the TTF description breaks down as an approximation to the full system. Depending on the higher order dynamics of the full system, this often signals an imminent collapse to a black hole.
• We investigate the discrete $\beta$ function of the 2-flavor SU(3) sextet model using the finite volume gradient flow scheme. Our results, using clover improved nHYP smeared Wilson fermions, follow the (non-universal) 4-loop $\overline{\textrm{MS}}$ perturbative predictions closely up to $g^2 \approx 5.5$, the strongest coupling reached in our simulation. At strong couplings the results are in tension with a recently published work using the same gradient flow renormalization scheme with staggered fermions. Since these calculations define the discrete $\beta$ function in the same continuum renormalization scheme, they should lead to the same continuum predictions, irrespective of the lattice fermion action. In order to test systematic effects in our computation we compare two different lattice operators, three different flow definitions, and two volume extrapolations. We find agreement among these different approaches in the continuum limit when the gradient flow parameter $c\gtrsim0.35$. Considering the potential phenomenological impact of this model, it is important to understand the origin of the disagreement between our work and the staggered fermion results.
• Liquid crystal films have recently been demonstrated as variable thickness, planar targets for ultra-intense laser matter experiments and applications such as ion acceleration. By controlling the parameters of film formation, including liquid crystal temperature and volume, their thickness can be varied on demand from 10 nm to above 10 $\mu$m. This thickness range enables for the first time real-time selection and optimization of various ion acceleration mechanisms using low cost, high quality targets. Our previous work employed these targets in single shot configuration, requiring chamber cycling after the pre-made films were expended. Presented here is a film formation device capable of drawing films from a bulk liquid crystal source volume to any thickness in the aforementioned range. This device will form films under vacuum within 2 $\mu$m of the same location each time, well within the Rayleigh range of even tight $F/\#$ systems. The repetition rate of the device exceeds 0.1 Hz for sub-100 nm films, enabling inexpensive, moderate repetition rate plasma target insertion for state of the art lasers currently in use or under development. Characterization tests of the device performed at the Scarlet laser facility at Ohio State will be presented.
• We present a new idea for designing a perfectly secure information exchange protocol based on so-called Deep Randomness, meaning randomness that relies on a hidden probability distribution. This idea leads us to introduce a new axiom in probability theory, with which we can design a protocol, beyond the Shannon limit, enabling two legitimate partners who initially share no common private information to exchange secret information with accuracy as close to perfection as desired, while the knowledge gained by any computationally unbounded opponent remains as close to zero as desired.
• Optimising queries in real-world situations under imperfect conditions is still a problem that has not been fully solved. We consider finding the optimal order in which to execute a given set of selection operators under partial ignorance of their selectivities. The selectivities are modelled as intervals rather than exact values and we apply a concept from decision theory, the minimisation of the maximum regret, as a measure of optimality. We show that the associated decision problem is NP-hard, which renders a brute-force approach to solving it impractical. Nevertheless, by investigating properties of the problem and identifying special cases which can be solved in polynomial time, we gain insight that we use to develop a novel heuristic for solving the general problem. We also evaluate minmax regret query optimisation experimentally, showing that it outperforms a currently employed strategy of optimisers that uses mean values for uncertain parameters.
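A brute-force sketch of the minmax regret ordering problem just described. The cost model is an assumption (one common choice for selection orderings): running selections in order $o$ over $N$ tuples costs $N(1 + s_{o_1} + s_{o_1}s_{o_2} + \dots)$, so the last operator's selectivity never matters. Since each cost is multilinear in the selectivities, the maximum regret over the interval box is attained at a vertex, so enumerating endpoint scenarios suffices; the paper's contribution is avoiding exactly this kind of exhaustive search.

```python
from itertools import permutations, product

# Brute-force minmax regret for selection ordering under interval
# selectivities. Cost model is an assumed (common) one; the paper develops a
# heuristic precisely because this enumeration is impractical at scale.

def cost(order, s):
    total, frac = 1.0, 1.0
    for i in order[:-1]:                          # last selectivity unused
        frac *= s[i]
        total += frac
    return total

def max_regret(order, intervals):
    orders = list(permutations(range(len(intervals))))
    worst = 0.0
    for s in product(*intervals):                 # all endpoint scenarios
        best = min(cost(o, s) for o in orders)
        worst = max(worst, cost(order, s) - best)
    return worst

intervals = [(0.1, 0.9), (0.3, 0.4), (0.2, 0.8)]  # uncertain selectivities
best_order = min(permutations(range(3)),
                 key=lambda o: max_regret(o, intervals))
```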
• Discoveries from the prime Kepler mission demonstrated that small planets (< 3 Earth radii) are common outcomes of planet formation. While Kepler detected many such planets, all but a handful orbit faint, distant stars and are not amenable to precise follow-up measurements. Here, we report the discovery of two small planets transiting EPIC-206011691, a bright (K = 9.4) M0 dwarf located 65$\pm$6 pc from Earth. We detected the transiting planets in photometry collected during Campaign 3 of NASA's K2 mission. Analysis of the transit light curves reveals that the planets have small radii compared to their host star, 2.60 $\pm$ 0.14% and 3.15 $\pm$ 0.20%, respectively. We obtained follow-up NIR spectroscopy of EPIC-206011691 to constrain the host star properties, which imply planet sizes of 1.59 $\pm$ 0.43 Earth radii and 1.92 $\pm$ 0.53 Earth radii, respectively, straddling the boundary between high-density, rocky planets and low-density planets with thick gaseous envelopes. The planets have orbital periods of 9.32414 days and 15.50120 days, respectively, with a period ratio of 1.6624, very near the 5:3 mean motion resonance, which may be a record of the system's formation history. Transit timing variations (TTVs) due to gravitational interactions between the planets may be detectable using ground-based telescopes. Finally, this system offers a convenient laboratory for studying the bulk composition and atmospheric properties of small planets with low equilibrium temperatures.
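The near-resonance claim can be checked directly from the reported periods:

```python
# Arithmetic check of the near-resonance claim, using the periods reported
# in the abstract above.
p_inner, p_outer = 9.32414, 15.50120              # days
ratio = p_outer / p_inner                         # ~1.6624
offset_from_5_to_3 = abs(ratio - 5.0 / 3.0)       # ~0.004, i.e. ~0.3%
```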
• We propose a new approach to the problem of compressive phase retrieval in which the goal is to reconstruct a sparse vector from the magnitude of a number of its linear measurements. The proposed framework relies on constrained sensing vectors and a two-stage reconstruction method that consists of two standard convex optimization programs that are solved sequentially. Various methods for compressive phase retrieval have been proposed in recent years, but none have come with strong efficiency and computational complexity guarantees. The main obstacle has been that there are no straightforward convex relaxations for the type of structure in the target. Given a set of underdetermined measurements, there is a standard framework for recovering a sparse matrix, and a standard framework for recovering a low-rank matrix. However, a general, efficient method for recovering a matrix which is jointly sparse and low-rank has remained elusive. In this paper, we show that if the sensing vectors are chosen at random from an incoherent subspace, then the low-rank and sparse structures of the target signal can be effectively decoupled. We show that a recovery algorithm that consists of a low-rank recovery stage followed by a sparse recovery stage will produce an accurate estimate of the target when the number of measurements is $\mathsf O(k\,\log\frac{d}{k})$, where $k$ and $d$ denote the sparsity level and the dimension of the input signal. We also evaluate the algorithm through numerical simulation.
• We prove that, for $C^1$-generic diffeomorphisms, if a homoclinic class is not hyperbolic, then there is a non-hyperbolic ergodic measure supported on it. This proves a conjecture by Díaz and Gorodetski [28]. We also discuss the conjectured existence of periodic points with different stable dimension in the class.
• The LHCb collaboration has discovered two new states with preferred $J^P$ quantum numbers $3/2^-$ and $5/2^+$ from $\Lambda_b$ decays. These new states can be interpreted as hidden charm pentaquarks. It has been argued that the main features of these pentaquarks can be described by the diquark model. The diquark model predicts that the $3/2^-$ and $5/2^+$ states are in two separate octet multiplets of flavor $SU(3)$, and that there is also an additional decuplet pentaquark multiplet. Finding the states in these multiplets would provide crucial evidence for this model. The weak decays of a b-baryon to a light meson and a pentaquark can have Cabibbo-allowed and Cabibbo-suppressed channels. We find that in the $SU(3)$ limit, for $U$-spin related decay modes, the ratio of the decay rates of Cabibbo-suppressed to Cabibbo-allowed channels is given by $|V_{cd}|^2/|V_{cs}|^2$. There are also other testable relations for b-baryon weak decays into a pentaquark and a light pseudoscalar. These relations can serve as tests of the diquark model for pentaquarks.
• We introduce a subfamily of additive enlargements of a maximally monotone operator. Our definition is inspired by the early work of Simon Fitzpatrick. These enlargements constitute a subfamily of the family of enlargements introduced by Svaiter. When the operator under consideration is the subdifferential of a convex lower semicontinuous proper function, we prove that some members of the subfamily are smaller than the classical $\epsilon$-subdifferential enlargement widely used in convex analysis. We also recover the $\epsilon$-subdifferential within the subfamily. Since they are all additive, the enlargements in our subfamily can be seen as structurally closer to the $\epsilon$-subdifferential enlargement.
• Jul 30 2015 hep-th arXiv:1507.08250v1
We show that the classical S-matrix calculated from the recently proposed superstring field theories gives the correct perturbative S-matrix. In the proof we exploit the fact that the vertices are obtained by a field redefinition in the large Hilbert space. The result extends to include the NS-NS subsector of type II superstring field theory and the recently found equations of motion for the Ramond fields. In addition, our proof implies that the S-matrix obtained from Berkovits' WZW-like string field theory agrees with the perturbative S-matrix to all orders.
• We study the price of anarchy in a class of graph coloring games (a subclass of polymatrix common-payoff games). In these games, players are vertices of an undirected, simple graph, and the strategy space of each player is the set of colors from $1$ to $k$. A tight bound of $\frac{k}{k-1}$ on the price of anarchy is known (Hoefer 2007, Kun et al. 2013) for the case that each player's payoff is the number of her neighbors with a color different from her own. The study of more complex payoff functions was left as an open problem. We generalize by computing a player's payoff as follows: determine the distance of her color to the color of each of her neighbors, apply a non-negative, real-valued, concave function $f$ to each of those distances, and sum the resulting values. Denote by $f^*$ the maximum value that $f$ attains on the possible distances $0,\dots,k-1$. We prove an upper bound of $4$ on the price of anarchy for general $f$, a bound of $3 \, (1-\frac{1}{k})$ if $f(\frac{k}{4}) \geq f^*$, a tight bound of $2$ for non-decreasing $f$, and a bound of $2$ if $f$ attains $f^*$ at $\lfloor\frac{k}{2}\rfloor$ and at $\lceil\frac{k}{2}\rceil$ (tight if $k$ is even). The latter includes what is also known as cyclic payoff. Graph coloring, especially in distributed and game-theoretic settings, is often used to model spectrum sharing scenarios, such as the selection of WLAN frequencies. For such an application, our framework would allow one to express a dependence of the degree of radio interference on the distance between two frequencies.
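The generalized payoff is easy to state in code. The sketch below takes the cyclic distance on colors (the cyclic-payoff special case mentioned above); the graph and coloring are a made-up example:

```python
# Sketch of the generalized payoff: a player's payoff sums f(distance to
# each neighbor's color) for a non-negative concave f. The cyclic distance
# corresponds to the cyclic-payoff special case; the base game is recovered
# with f(d) = 1 if d > 0 else 0.

def cyclic_distance(a, b, k):
    d = abs(a - b)
    return min(d, k - d)

def payoff(player, coloring, graph, k, f):
    return sum(f(cyclic_distance(coloring[player], coloring[v], k))
               for v in graph[player])

# Triangle graph with k = 4 colors.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
coloring = {0: 1, 1: 3, 2: 2}
p_identity = payoff(0, coloring, graph, 4, f=lambda d: d)
p_distinct = payoff(0, coloring, graph, 4, f=lambda d: 1 if d > 0 else 0)
```

Here `p_distinct` reproduces the original "count differently colored neighbors" payoff, while `p_identity` rewards larger color distances.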
• When an impurity interacts with a bath of phonons it forms a polaron. For increasing interaction strengths the mass of the polaron increases and it can become self-trapped. For impurity atoms placed inside an atomic Bose-Einstein condensate (BEC), the nature of this self-trapping transition is the subject of intense theoretical debate. While variational approaches suggest a sharp transition, renormalization group studies predict the presence of an intermediate-coupling region characterized by large phonon correlations. To investigate this widely unexplored regime, we suggest a versatile experimental setup that allows tuning of the mass of the impurity and its interactions with the BEC. The impurity is realized as a slow-light or dark-state polariton (DSP) inside a two-dimensional BEC. We show that the interactions of the DSP with the Bogoliubov phonons lead to the formation of photonic polarons, described by the Bogoliubov-Fröhlich Hamiltonian, and make theoretical predictions by extending earlier renormalization group studies of this Hamiltonian.
• We consider Poisson's equation with a finite number of weighted Dirac masses as a source term, together with its discretization by means of conforming finite elements. For the error in fractional Sobolev spaces, we propose residual-type a posteriori estimators with a specifically tailored oscillation and show that, on two-dimensional polygonal domains, they are reliable and locally efficient. In numerical tests, their use in an adaptive algorithm leads to optimal error decay rates.
• We use a first-order energy quantity to prove a strengthened statement of uniqueness for the Ricci flow. One consequence of this statement is that if a complete solution on a noncompact manifold has uniformly bounded Ricci curvature, then its sectional curvature will remain bounded for a short time if it is bounded initially. In other words, the Weyl curvature tensor of a complete solution to the Ricci flow cannot become unbounded instantaneously if the Ricci curvature remains bounded.
• The fitness landscape defines the relationship between genotypes and fitness in a given environment, and underlies fundamental quantities such as the distribution of selection coefficients or the magnitude and type of epistasis. A better understanding of how landscape structure varies across species and environments is thus necessary to understand and predict how populations adapt. An increasing number of experiments access the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is biased by the protocol used to identify mutations. Here we develop a rigorous and flexible statistical framework based on Approximate Bayesian Computation to address these concerns, and use this framework to fit a broad class of phenotypic fitness models (including Fisher's model) to 24 empirical landscapes representing 9 diverse biological systems. In spite of large uncertainty due to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has so far been successful in interpreting experimental data, is a plausible model in only 3 of the 9 biological systems. In most cases, including notably the landscapes of drug resistance, Fisher's model is unable to explain the structure of empirical landscapes and patterns of epistasis.
• The correct description of nondynamic correlation by electronic structure methods not belonging to the multireference family is a challenging issue. The transition from $D_{2h}$ to $D_{4h}$ symmetry in the H$_4$ molecule is among the simplest archetypal examples illustrating the consequences of missing nondynamic correlation effects. The resurgence of interest in density matrix functional methods has brought several new methods, including the family of Piris Natural Orbital Functionals (PNOF). In this work we compare PNOF5 and PNOF6, which include nondynamic electron correlation effects to some extent, with other standard ab initio methods on the H$_4$ $D_{4h}/D_{2h}$ potential energy surface. Thus far, the incorrect behavior of single-reference methods at the $D_{2h}-D_{4h}$ transition of H$_4$ has been attributed to an incomplete account of nondynamic correlation effects, whereas in geminal-based approaches it has been assigned to an incorrect coupling of spins and the localized nature of the orbitals. We show that $\textit{interpair}$ nondynamic correlation is in fact the key to a cusp-free, qualitatively correct description of the H$_4$ PES. By introducing $\textit{interpair}$ nondynamic correlation, PNOF6 is shown to avoid cusps and to provide the correct smooth PES features, total and local spin properties, and the correct electron delocalization, as reflected by natural orbitals and multicenter delocalization indices.
• Anisotropic mesh quality measures and anisotropic mesh adaptation are studied for polygonal meshes. Three sets of alignment and equidistribution measures are developed: one based on least squares fitting, one based on generalized barycentric mapping, and one based on singular value decomposition of edge matrices. Numerical tests show that all three sets of mesh quality measures provide good measurements for the quality of polygonal meshes under given metrics. Based on one of these sets of quality measures and using a moving mesh partial differential equation, an anisotropic adaptive polygonal mesh method is constructed for the numerical solution of second order elliptic equations. Numerical examples are presented to demonstrate the effectiveness of the method.
• We provide a concise exposition with original proofs of combinatorial formulas for the 2D Ising model partition function, multi-point fermionic observables, spin and energy density correlations, for general graphs and interaction constants, using the language of Kac-Ward matrices. We also give a brief account of the relations between various alternative formalisms which have been used in the combinatorial study of the planar Ising model: dimers and Grassmann variables, spin and disorder operators, and, more recently, s-holomorphic observables. In addition, we point out that these formulas can be extended to the double-Ising model, defined as a pointwise product of two Ising spin configurations on the same discrete domain, coupled along the boundary.
• This is a discussion of the paper "Modeling an Augmented Lagrangian for Improved Blackbox Constrained Optimization," (Gramacy, R.~B., Gray, G.~A., Digabel, S.~L., Lee, H.~K.~H., Ranjan, P., Wells, G., and Wild, S.~M., Technometrics, 61, 1--38, 2015).
• The aim of this paper is to understand the tendency toward organization of turbulence in two-dimensional ideal fluids. We show that nonlinear processes such as the inverse cascade of energy and the concentration of vorticity are essentially determined by trajectory trapping, or eddying. The statistics of the trajectories of the vorticity elements is studied using a semianalytic method. The separation of positive and negative vorticity is due to the attraction exerted by a large-scale vortex on small-scale vortices of the same sign. More precisely, a large-scale velocity is shown to determine average transverse drifts, which have opposite orientations for positive and negative vorticity. These drifts appear in the presence of trapping and lead to an energy flow to large scales through the increase of the circulation of the large vortex. Recent results on the evolution of drift turbulence in magnetically confined plasmas are discussed in order to underline the idea that there is a link between the inverse cascade and trajectory trapping. The physical mechanisms are different in fluids and plasmas due to the different types of nonlinearities of the two systems, but trajectory trapping plays the main role in both cases.
• We prove that the triviality of the Galois action on the suitably twisted odd-dimensional étale cohomology group of a smooth projective variety with finite coefficients implies the existence of certain primitive roots of unity in the field of definition of the variety. This text was inspired by an exercise in Serre's Lectures on the Mordell--Weil theorem.
• We design freeform lenses refracting an arbitrarily given incident field into a given fixed direction. In the near field case, we study the existence of lenses refracting a given bright object into a predefined image. We also analyze the existence of systems of mirrors that solve the near field and the far field problems for reflection.
• We define the analogue of linear equivalence of graph divisors for the rotor-router model, and use it to prove polynomial time computability of some problems related to rotor-routing. Using the connection between linear equivalence for chip-firing and for rotor-routing, we prove that the number of rotor-router unicycle-orbits equals the order of the Picard group. We also show that the rotor-router action of the Picard group on the set of spanning in-arborescences can be interpreted in terms of the linear equivalence.
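As background to the abstract above, the basic rotor-router mechanics can be sketched in a few lines, under one common convention (rotor advances first, then the chip follows it); the graph and all names here are illustrative, not taken from the paper.

```python
def rotor_route(graph, rotors, chip, sink):
    """Route a single chip to the sink in the rotor-router model:
    at each vertex the rotor advances to the next outgoing edge in a
    fixed cyclic order, and the chip moves along that edge."""
    while chip != sink:
        out = graph[chip]
        rotors[chip] = (rotors[chip] + 1) % len(out)  # advance the rotor
        chip = out[rotors[chip]]                       # chip follows it
    return rotors

# Path graph 0 - 1 - 2 with sink 2; all rotors initially at position 0.
graph = {0: [1], 1: [0, 2], 2: []}
rotors = {0: 0, 1: 0, 2: 0}
print(rotor_route(graph, rotors, chip=0, sink=2))  # -> {0: 0, 1: 1, 2: 0}
```

The rotor configurations left behind by such routings are the objects whose linear-equivalence classes the paper relates to the Picard group.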
• We present two novel models of document coherence and their application to information retrieval (IR). Both models approximate document coherence using discourse entities, e.g. the subject or object of a sentence. Our first model views text as a Markov process generating sequences of discourse entities (entity n-grams); we use the entropy of these entity n-grams to approximate the rate at which new information appears in text, reasoning that as more new words appear, the topic increasingly drifts and text coherence decreases. Our second model extends the work of Guinaudeau & Strube [28] that represents text as a graph of discourse entities, linked by different relations, such as their distance or adjacency in text. We use several graph topology metrics to approximate different aspects of the discourse flow that can indicate coherence, such as the average clustering or betweenness of discourse entities in text. Experiments with several instantiations of these models show that: (i) our models perform on a par with two other well-known models of text coherence even without any parameter tuning, and (ii) reranking retrieval results according to their coherence scores gives notable performance gains, confirming a relation between document coherence and relevance. This work contributes two novel models of document coherence, the application of which to IR complements recent work in the integration of document cohesiveness or comprehensibility to ranking [5, 56].
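The entropy idea in the first model can be illustrated with a minimal bigram version on made-up entity sequences; the paper's actual entity extraction and model are richer than this sketch.

```python
from collections import Counter
from math import log2

def entity_ngram_entropy(entities, n=2):
    """Shannon entropy (bits) of the empirical distribution of entity
    n-grams; higher entropy suggests faster topic drift, lower coherence."""
    ngrams = [tuple(entities[i:i + n]) for i in range(len(entities) - n + 1)]
    counts = Counter(ngrams)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A repetitive (coherent) vs. a varied (drifting) entity sequence:
coherent = ["court", "judge", "court", "judge", "court"]
drifting = ["court", "judge", "market", "stock", "weather"]
print(entity_ngram_entropy(coherent) < entity_ngram_entropy(drifting))  # True
```

The coherent sequence reuses the same two bigrams (entropy 1 bit), while the drifting one has four distinct bigrams (entropy 2 bits), matching the intuition in the abstract.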
• Although machine-to-machine (M2M) communication has been emerging in recent years, many vendor-specific proprietary solutions are not suitable for vital M2M applications. While the main focus of those solutions is the management and provisioning of machines, real-time monitoring and communication control are also required to handle a variety of access technologies, like WiFi and LTE, and to ease machine deployment. In this paper, we present a new architecture addressing these issues by leveraging the IP Multimedia Subsystem (IMS) deployed in operators' networks for RCS and VoLTE.
• Jul 30 2015 q-bio.PE q-bio.TO arXiv:1507.08232v1
Over an individual's lifetime, stem cells replicate and suffer modifications in their DNA content. I model the modifications in the DNA of a single cell as Lévy flights, made up of small-amplitude Brownian motions plus rare large-jump events. The distribution function of mutations has a long tail, in which cancer events are located far away. The probability of cancer in a given tissue is roughly estimated as $a\,N_{cell}\,N_{step}$, where $N_{cell}$ is the number of stem cells and $N_{step}$ the number of replication steps in the evolution of a single cell. I test this expression against recent data on lifetime cancer risk, $N_{cell}$ and $N_{step}$ in different tissues. The coefficient $a$ takes values between $2\times 10^{-15}$ and $2\times 10^{-11}$, depending on the role played by carcinogenic factors and the immune response. The smallest values of $a$ correspond to cancers in which randomness plays the major role.
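The risk estimate in this abstract is a simple product; as an illustration with made-up numbers (not values reported in the paper):

```python
def lifetime_cancer_risk(a, n_cell, n_step):
    """Rough estimate P ~ a * Ncell * Nstep from the abstract, where a is
    an empirical coefficient (reported range roughly 2e-15 to 2e-11)."""
    return a * n_cell * n_step

# Hypothetical tissue: 1e8 stem cells, 1000 replication steps, a = 2e-13
print(lifetime_cancer_risk(2e-13, 1e8, 1e3))  # -> 0.02
```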
• We study the regularization dependence of the quenched Schwinger-Dyson equations in general gauge by applying two types of regularization: four- and three-dimensional momentum cutoffs. The obtained results indicate that the solutions are not drastically affected by the choice between the two cutoff prescriptions. We therefore conclude that both regularizations can be adopted in analyses of the Schwinger-Dyson equations.
• We present results from Monte Carlo calculations investigating the properties of the homogeneous, spin-balanced unitary Fermi gas in three dimensions. The temperature is varied across the superfluid transition allowing us to determine the temperature dependence of the chemical potential, the energy per particle and the contact density. Numerical artifacts due to finite volume and discretization are systematically studied and removed.

Tom Wong Jul 29 2015 04:56 UTC
Dear Referee,

I found your suggestion of exploring search on a weighted graph to be interesting, so I worked it out with one marked vertex: https://scirate.com/arxiv/1507.07590

Besides the speedup, the new methods are important; I extended degenerate perturbation theory in a couple ways that s ...(continued)
hong Jul 29 2015 02:39 UTC
Sorry. Is it just quantum contextuality?
Richard Kueng Jul 28 2015 07:01 UTC
fyi: our quantum implications are presented in Subsection 2.2 (pp 7-9).
Perplexed Platypus Jul 18 2015 13:28 UTC
Dear Tom,

Thank you again for engaging in this conversation. It definitely helped me to understand your paper and your point of view much better and hence provide a more accurate review. Unfortunately, not all of your arguments were convincing to me. Even though they improved my understanding, th ...(continued)
Marco Tomamichel Jul 17 2015 05:12 UTC
I am no expert at all on strongly correlated systems or topological order, but since you refer to information theory in your abstract, let me still ask you: What is the justification for using $I_2(A:B) = H_2(A) + H_2(B) - H_2(AB)$ for the Rényi mutual information? This quantity has no information-t ...(continued)
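The quantity questioned in the comment above can be evaluated directly via $H_2(\rho) = -\log_2 \mathrm{Tr}\,\rho^2$. As a concrete illustration (my own sketch, not from the paper under discussion), a pure-Python check on a two-qubit Bell state, where this definition gives $I_2(A{:}B) = 2$:

```python
from math import log2

def purity(rho):
    """Tr(rho^2) for a density matrix given as a list of lists (real entries)."""
    n = len(rho)
    return sum(rho[i][j] * rho[j][i] for i in range(n) for j in range(n))

def renyi2(rho):
    """Renyi-2 entropy H_2(rho) = -log2 Tr(rho^2)."""
    return -log2(purity(rho))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): rho_AB is pure (H_2 = 0),
# while each marginal is maximally mixed, I/2 (H_2 = 1 bit).
rho_AB = [[0.5, 0, 0, 0.5],
          [0,   0, 0, 0  ],
          [0,   0, 0, 0  ],
          [0.5, 0, 0, 0.5]]
rho_A = [[0.5, 0], [0, 0.5]]
rho_B = rho_A

I2 = renyi2(rho_A) + renyi2(rho_B) - renyi2(rho_AB)
print(I2)  # -> 2.0
```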
Tom Wong Jul 16 2015 15:13 UTC
Dear Perplexed Platypus,

Thanks for taking the extra effort to engage with me during the review process, and I'm glad that we see more similarly now. I hope you don't mind me clarifying a little more, since it may also help others. Feel free to ignore my comments below since you need to wrap up t ...(continued)
Perplexed Platypus Jul 16 2015 14:00 UTC
Dear Tom,

Thanks again for responding to my comments, I understand your point of view much better now.

> But this means it's actually a quantum walk on a different graph.
> Thus it is a different search problem from the one considered in this manuscript, which focuses on the unweighted “simpl ...(continued)
Tom Wong Jul 16 2015 11:13 UTC
I guess they just enabled it. I got a bunch of emails about these comments all at once.
Tom Wong Jul 16 2015 09:57 UTC
It seems that email notifications of comments still don't work, as was discussed by Perplexed Platypus, Ashley Montanaro, and Aram Harrow (https://scirate.com/arxiv/1408.1816#456). Perhaps this should be added to the project's GitHub issue list? Or are they already working on it?
Tom Wong Jul 16 2015 09:53 UTC
Dear Perplexed Platypus,

> Thanks a lot for your prompt response and also for engaging in this experiment to assess whether SciRate can be a feasible platform for reviewing papers publicly while engaging with the authors during the process.

You're welcome. I appreciate the opportunity to discuss ...(continued)