# Top arXiv papers

• Quantum tensor network states and more particularly projected entangled-pair states provide a natural framework for representing ground states of gapped, topologically ordered systems. The defining feature of these representations is that topological order is a consequence of the symmetry of the underlying tensors in terms of matrix product operators. In this paper, we present a systematic study of those matrix product operators, and show how this relates the entanglement properties of projected entangled-pair states to the formalism of fusion tensor categories. From the matrix product operators we construct a C*-algebra and find that topological sectors can be identified with the central idempotents of this algebra. This allows us to construct projected entangled-pair states containing an arbitrary number of anyons. Properties such as topological spin, the S matrix, and fusion and braiding relations can readily be extracted from the idempotents. As the matrix product operator symmetries act purely on the virtual level of the tensor network, the ensuing Wilson loops are not fattened when the system is perturbed, which opens up the possibility of simulating topological theories away from renormalization group fixed points. We illustrate the general formalism for the special cases of discrete gauge theories and string-net models.
• A pure quantum state is called $k$-uniform if all of its reductions to $k$ qudits are maximally mixed. We investigate general constructions of $k$-uniform pure quantum states of $n$ subsystems with $d$ levels. We provide one construction via symmetric matrices and a second through classical error-correcting codes. There are three main results arising from our constructions. Firstly, we show that for any given even $n\ge 2$, there always exists an $n/2$-uniform $n$-qudit quantum state of level $p$ for sufficiently large prime $p$. Secondly, both constructions show that there exist $k$-uniform $n$-qudit pure quantum states with $k$ proportional to $n$, i.e., $k=\Omega(n)$, although the construction from symmetric matrices outperforms the one from error-correcting codes. Thirdly, our symmetric matrix construction provides a positive answer to the open question in \cite{DA} of whether there exists a $3$-uniform $n$-qudit pure quantum state for all $n\ge 8$. In fact, we can further prove that, for every $k$, there exists a constant $M_k$ such that a $k$-uniform $n$-qudit quantum state exists for all $n\ge M_k$. In addition, by using concatenation of algebraic geometry codes, we give an explicit construction of $k$-uniform quantum states as $k$ tends to infinity.
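The defining condition above, that every reduction to $k$ qudits is maximally mixed, is easy to check numerically for small systems. A minimal sketch (a toy check on the 3-qubit GHZ state, which is 1-uniform but not 2-uniform; all names here are illustrative, not from the paper):

```python
import itertools
import numpy as np

def reduced_density_matrix(psi, keep, dims):
    """Trace out every subsystem except those listed in `keep`."""
    n = len(dims)
    psi = psi.reshape(dims)
    traced = [i for i in range(n) if i not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def is_k_uniform(psi, k, d, n):
    """True iff every k-qudit reduction equals the maximally mixed state."""
    for keep in itertools.combinations(range(n), k):
        rho = reduced_density_matrix(psi, keep, [d] * n)
        if not np.allclose(rho, np.eye(d ** k) / d ** k):
            return False
    return True

# The 3-qubit GHZ state: 1-uniform, but its 2-qubit reductions are not I/4.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(is_k_uniform(ghz, 1, 2, 3))  # True
print(is_k_uniform(ghz, 2, 2, 3))  # False
```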
• We give an introduction to some of the recent ideas that go under the name "geometric complexity theory". We first sketch the proof of the known upper and lower bounds for the determinantal complexity of the permanent. We then introduce the concept of a representation-theoretic obstruction, which has close links to algebraic combinatorics, and we explain some of the insights gained so far. In particular, we address very recent insights on the complexity of testing the positivity of Kronecker coefficients. We also briefly discuss the related asymptotic version of this question.
• Nov 26 2015 cs.CC arXiv:1511.08189v1
We show that the Graph Automorphism problem is ZPP-reducible to MKTP, the problem of minimizing time-bounded Kolmogorov complexity. MKTP has previously been studied in connection with the Minimum Circuit Size Problem (MCSP) and is often viewed as essentially a different encoding of MCSP. All prior reductions to MCSP have applied equally well to MKTP, and vice-versa, and all such reductions have relied on the fact that functions computable in polynomial time can be inverted with high probability relative to MCSP and MKTP. Our reduction uses a different approach, and consequently yields the first example of a problem -- other than MKTP itself -- that is in ZPP^MKTP but that is not known to lie in NP intersect coNP. We also show that this approach can be used to provide a reduction of the Graph Isomorphism problem to MKTP.
• Boson Sampling represents a promising witness of the supremacy of quantum systems as a resource for the solution of computational problems. The classical hardness of Boson Sampling has been related to the so-called Permanent-of-Gaussians Conjecture and has been extended to some generalizations such as scattershot Boson Sampling, and approximate and lossy sampling under some reasonable constraints. However, it is still unclear how demanding these bounds are for a quantum experimental sampler. Starting from a state-of-the-art analysis and focusing on the foreseeable practical conditions needed to reach quantum supremacy, we examine different techniques and present a more general and effective solution. We apply our approach to both of the experimental proposals presented to date and in each case find a new threshold that is less sensitive to errors and experimentally more feasible.
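The Permanent-of-Gaussians Conjecture ties the hardness of Boson Sampling to computing matrix permanents, for which the best known exact algorithms are exponential. A sketch of Ryser's inclusion-exclusion formula (standard textbook material, not this paper's method) makes the scaling concrete:

```python
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Permanent of an n x n matrix by Ryser's inclusion-exclusion formula.
    Runs in O(2^n * n^2) time, the exponential cost that underlies the
    classical hardness of Boson Sampling."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)
            total += (-1) ** (n - r) * np.prod(row_sums)
    return total

print(permanent_ryser(np.ones((3, 3))))  # 6.0, i.e. 3! for the all-ones matrix
print(permanent_ryser(np.eye(3)))        # 1.0
```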
• A method is discussed to analyze the dynamics of a dissipative quantum system. The method hinges upon the definition of an alternative (time-dependent) product among the observables of the system. In the long time limit this yields a contracted algebra. This contraction irreversibly affects some of the quantum features of the dissipative system.
• Despite the tremendous empirical success of the equivalence principle, there are several theoretical motivations for the existence of a preferred reference frame (or aether) in a consistent theory of quantum gravity. However, if quantum gravity had a preferred reference frame, why would high-energy processes enjoy such a high degree of Lorentz symmetry? While this is often considered an argument against aether, here I provide three independent arguments for why perturbative unitarity (or weak coupling) of Lorentz-violating effective field theories puts stringent constraints on possible observable violations of Lorentz symmetry at high energies. In particular, the interaction with the scalar graviton in a consistent low-energy theory of gravity and a (radiatively and dynamically) stable cosmological framework leads to these constraints. The violation (quantified by the relative difference in maximum speed of propagation) is limited to $\lesssim 10^{-10} E({\rm eV})^{-4}$ (superseding all current empirical bounds), or the theory will be strongly coupled beyond the meV scale. The latter happens in extended Horava-Lifshitz gravities, as a result of a previously ignored quantum anomaly. Finally, given that all cosmologically viable theories with significant Lorentz violation appear to be strongly coupled beyond the meV scale, we conjecture that, similar to color confinement in QCD, or Vainshtein screening for massive gravity, high-energy theories (that interact with gravity) are shielded from Lorentz violation (at least up to the scale where gravity is UV-completed). In contrast, microwave or radio photons, cosmic background neutrinos, or gravitational waves may provide more promising candidates for the discovery of violations of Lorentz symmetry.
• In this paper, we show how to create paraphrastic sentence embeddings using the Paraphrase Database (Ganitkevitch et al., 2013), an extensive semantic resource with millions of phrase pairs. We consider several compositional architectures and evaluate them on 24 textual similarity datasets encompassing domains such as news, tweets, web forums, news headlines, machine translation output, glosses, and image and video captions. We present the interesting result that simple compositional architectures based on updated vector averaging vastly outperform long short-term memory (LSTM) recurrent neural networks and that these simpler architectures allow us to learn models with superior generalization. Our models are efficient, very easy to use, and competitive with task-tuned systems. We make them available to the research community with the hope that they can serve as the new baseline for further work on universal paraphrastic sentence embeddings.
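The vector-averaging idea is simple enough to sketch. The toy vocabulary and random 50-dimensional vectors below are placeholders for embeddings actually trained on the Paraphrase Database:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 50-d word vectors; real models learn these from paraphrase pairs.
words = "the cat sat on a mat feline rested rug".split()
vocab = {w: rng.standard_normal(50) for w in words}

def embed(sentence):
    """Sentence embedding as the average of its word vectors."""
    vecs = [vocab[w] for w in sentence.split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    """Cosine similarity between two embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = embed("the cat sat on the mat")
s2 = embed("a feline rested on a rug")
print(s1.shape, round(cosine(s1, s2), 3))
```

With trained vectors, paraphrases like these two sentences score much higher than unrelated pairs; with random vectors the score is meaningless, which is exactly what the training objective fixes.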
• Nov 26 2015 cs.AI cs.CL arXiv:1511.08130v1
The development of intelligent machines is one of the biggest unsolved challenges in computer science. In this paper, we propose some fundamental properties these machines should have, focusing in particular on communication and learning. We discuss a simple environment that could be used to incrementally teach a machine the basics of natural-language-based communication, as a prerequisite to more complex interaction with human users. We also present some conjectures on the sort of algorithms the machine should support in order to profitably learn from the environment.
• We study the effective field theory of KKLT and LVS moduli stabilisation scenarios coupled to an anti-D3-brane at the tip of a warped throat. We describe the presence of the anti-brane in terms of a nilpotent goldstino superfield in a supersymmetric effective field theory. The introduction of this superfield produces a term that can lead to a de Sitter minimum. We fix the Kaehler moduli dependence of the nilpotent field couplings by matching this term with the anti-D3-brane uplifting contribution. The main result of this paper is the computation, within this EFT, of the soft supersymmetry breaking terms in both KKLT and LVS for matter living on a D3-brane (leaving the D7-brane analysis to an appendix). A handful of distinct phenomenological scenarios emerge that could have low-energy implications, most of them having a split spectrum of soft masses. Some cosmological and phenomenological properties of these models are discussed. We also check that the attraction between the D3-brane and the anti-D3-brane does not affect the leading contribution to the soft masses and does not destabilise the system.
• In this work we deal with the problem of high-level event detection in video. Specifically, we study the challenging problems of i) learning to detect video events from solely a textual description of the event, without using any positive video examples, and ii) additionally exploiting very few positive training samples together with a small number of "related" videos. For learning only from an event's textual description, we first identify a general learning framework and then study the impact of different design choices for various stages of this framework. For additionally learning from example videos, when true positive training samples are scarce, we employ an extension of the Support Vector Machine that allows us to exploit "related" event videos by automatically introducing different weights for subsets of the videos in the overall training set. Experimental evaluations performed on the large-scale TRECVID MED 2014 video dataset provide insight on the effectiveness of the proposed methods.
• Nov 26 2015 cs.AI cs.CL cs.LG arXiv:1511.07972v1
Embedding learning, a.k.a. representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. In recent publications the embedding models were extended to also consider temporal evolutions, temporal patterns and subsymbolic representations. These extended models were used successfully to predict clinical events like procedures, lab measurements, and diagnoses. In this paper, we attempt to map these embedding models, which were developed purely as solutions to technical problems, to various cognitive memory functions, in particular to semantic and concept memory, episodic memory and sensory memory. We also draw an analogy between a predictive model, which uses entity representations derived in memory models, and working memory. Cognitive memory functions are typically classified as long-term or short-term memory, where long-term memory has the subcategories declarative memory and non-declarative memory, and short-term memory has the subcategories sensory memory and working memory. There is evidence that these main cognitive categories are partially dissociated from one another in the brain, as expressed in their differential sensitivity to brain damage. However, there is also evidence indicating that the different memory functions are not mutually independent. A hypothesis that arises out of this work is that mutual information exchange can be achieved by sharing or coupling of distributed latent representations of entities across different memory functions.
• We study non-convex empirical risk minimization for learning halfspaces and neural networks. For loss functions that are $L$-Lipschitz continuous, we present algorithms to learn halfspaces and multi-layer neural networks that achieve arbitrarily small excess risk $\epsilon>0$. The time complexity is polynomial in the input dimension $d$ and the sample size $n$, but exponential in the quantity $(L/\epsilon^2)\log(L/\epsilon)$. These algorithms run multiple rounds of random initialization followed by arbitrary optimization steps. We further show that if the data is separable by some neural network with constant margin $\gamma>0$, then there is a polynomial-time algorithm for learning a neural network that separates the training data with margin $\Omega(\gamma)$. As a consequence, the algorithm achieves arbitrary generalization error $\epsilon>0$ with ${\rm poly}(d,1/\epsilon)$ sample and time complexity. We establish the same learnability result when the labels are randomly flipped with probability $\eta<1/2$.
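The algorithmic pattern described, several rounds of random initialization each followed by optimization steps while keeping the best outcome, can be sketched on a toy non-convex problem (illustrative only; neither the paper's loss nor its algorithm):

```python
import numpy as np

def random_restart_minimize(f, grad, rounds=20, steps=500, lr=0.02, seed=0):
    """Run several rounds of random initialization, each followed by plain
    gradient descent, and keep the best minimizer found."""
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(rounds):
        x = rng.uniform(-1.5, 1.5)
        for _ in range(steps):
            x -= lr * grad(x)
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

# Asymmetric double well: a local minimum near x = 1, the global one
# near x = -1.036 with f = -0.305; single-start descent can get trapped.
f = lambda x: (x ** 2 - 1) ** 2 + 0.3 * x
grad = lambda x: 4 * x * (x ** 2 - 1) + 0.3
x_best, val = random_restart_minimize(f, grad)
print(round(x_best, 3), round(val, 3))
```

The restarts make it overwhelmingly likely that at least one initialization lands in the global basin, the same intuition behind the excess-risk guarantees quoted above.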
• Nov 26 2015 math.CO arXiv:1511.07920v1
The minimum rank problem is to determine for a graph $G$ the smallest rank of a Hermitian (or real symmetric) matrix whose off-diagonal zero-nonzero pattern is that of the adjacency matrix of $G$. Here $G$ is taken to be a circulant graph, and only circulant matrices are considered. The resulting graph parameter is termed the minimum circulant rank of the graph. This value is determined for every circulant graph in which a vertex neighborhood forms a consecutive set, and in this case is shown to coincide with the usual minimum rank. Under the additional restriction to positive semidefinite matrices, the resulting parameter is shown to be equal to the smallest number of dimensions in which the graph has an orthogonal representation with a certain symmetry property, and also to the smallest number of terms appearing among a certain family of polynomials determined by the graph. This value is then determined when the number of vertices is prime. The analogous parameter over the reals is also investigated.
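The quantity can be illustrated numerically: fix the off-diagonal zero-nonzero pattern of a circulant graph and minimize the rank over the free diagonal. A sketch for the 5-cycle (the diagonal value is chosen in closed form so that two circulant eigenvalues vanish exactly; this toy search is illustrative, not the paper's technique):

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix with the given first row."""
    n = len(first_row)
    return np.array([[first_row[(j - i) % n] for j in range(n)]
                     for i in range(n)])

# Off-diagonal pattern of the 5-cycle C_5; vary the free diagonal entry t.
# Circulant eigenvalues are t + 2 cos(2*pi*k/5), so t = -2 cos(2*pi/5)
# zeroes the k = 1 and k = 4 eigenvalues, dropping the rank to 3.
ranks = {t: np.linalg.matrix_rank(circulant([t, 1.0, 0.0, 0.0, 1.0]))
         for t in (0.0, 1.0, -2 * np.cos(2 * np.pi / 5))}
print(min(ranks.values()))  # 3, matching the known minimum rank n - 2 of C_5
```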
• Person detection is a key problem for many computer vision tasks. While face detection has reached maturity, detecting people under a full variation of camera viewpoints, human poses, lighting conditions and occlusions is still a difficult challenge. In this work we focus on detecting human heads in natural scenes. Starting from the recent local R-CNN object detector, we extend it with two types of contextual cues. First, we leverage person-scene relations and propose a Global CNN model trained to predict positions and scales of heads directly from the full image. Second, we explicitly model pairwise relations among objects and train a Pairwise CNN model using a structured-output surrogate loss. The Local, Global and Pairwise models are combined into a joint CNN framework. To train and test our full model, we introduce a large dataset composed of 369,846 human heads annotated in 224,740 movie frames. We evaluate our method and demonstrate improvements over several recent baselines for person head detection on three datasets. We also show that our model improves detection speed.
• Nov 26 2015 cs.NE cs.MS arXiv:1511.07889v1
The rnn package provides components for implementing a wide range of Recurrent Neural Networks. It is built within the framework of the Torch distribution for use with the nn package. The components have evolved through 3 iterations, each adding to the flexibility and capability of the package. All component modules inherit from either the AbstractRecurrent or AbstractSequencer classes. Strong unit testing, continued backwards compatibility and access to supporting material are the principles followed during its development. The package is compared against existing implementations of two published papers.
• Obtaining the exciton dynamics of large photosynthetic complexes by using mixed quantum mechanics/molecular mechanics (QM/MM) is computationally demanding. We propose a machine learning technique, multi-layer perceptrons, as a tool to reduce the time required to compute excited state energies. With this approach we predict time-dependent density functional theory (TDDFT) excited state energies of bacteriochlorophylls in the Fenna-Matthews-Olson (FMO) complex. Additionally, we compute spectral densities and exciton populations from the predictions. Different methods to determine multi-layer perceptron training sets are introduced, leading to several initial data selections. Once the multi-layer perceptrons are trained, predicting excited state energies is significantly faster than the corresponding QM/MM calculations. We show that multi-layer perceptrons can reproduce the energies of QM/MM calculations to a high degree of accuracy, with prediction errors contained within 0.01 eV (0.5%). Spectral densities and exciton dynamics are also in agreement with the TDDFT results. The acceleration and accurate prediction of dynamics strongly encourage the combination of machine learning techniques with ab-initio methods.
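The workflow, training a multi-layer perceptron on a subset of expensive calculations and then predicting the rest cheaply, can be sketched with a toy regression task (the target function below is a stand-in, not TDDFT data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Surrogate task: map a 1-D "coordinate" to an "excitation energy"
# (a toy function standing in for expensive QM/MM output).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.05 * rng.standard_normal((200, 1))

# One-hidden-layer perceptron trained by full-batch gradient descent.
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)        # hidden activations
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation (gradients of 0.5 * MSE).
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
print(losses[0], losses[-1])  # training loss drops as the MLP fits the map
```

Once trained, each prediction is a couple of matrix products, which is the source of the speed-up over repeated ab-initio calls.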
• Cold atoms with laser-induced spin-orbit (SO) interactions provide intriguing new platforms to explore novel quantum physics beyond natural conditions of solids. Recent experiments demonstrated the one-dimensional (1D) SO coupling for boson and fermion gases. However, realization of 2D SO interaction, a much more important task, remains very challenging. Here we propose and experimentally realize, for the first time, 2D SO coupling and topological band with $^{87}$Rb degenerate gas through a minimal optical Raman lattice scheme, without relying on phase locking or fine tuning of optical potentials. A controllable crossover between 2D and 1D SO couplings is studied, and the SO effects and nontrivial band topology are observed by measuring the atomic cloud distribution and spin texture in the momentum space. Our realization of 2D SO coupling with advantages of small heating and topological stability opens a broad avenue in cold atoms to study exotic quantum phases, including the highly-sought-after topological superfluid phases.
• The mass of a star is arguably its most fundamental parameter. For red giant stars, tracers luminous enough to be observed across the Galaxy, mass implies a stellar evolution age. It has proven to be extremely difficult to infer ages and masses directly from red giant spectra using existing methods. From the Kepler and APOGEE surveys, samples of several thousand stars exist with high-quality spectra and asteroseismic masses. Here we show that from these data we can build a data-driven spectral model using The Cannon, which can determine stellar masses to $\sim$ 0.07 dex from APOGEE DR12 spectra of red giants; these imply age estimates accurate to $\sim$ 0.2 dex (40 percent). We show that The Cannon constrains these ages foremost from spectral regions with CN absorption lines, elements whose surface abundances reflect mass-dependent dredge-up. We deliver an unprecedented catalog of 80,000 giants (including 20,000 red-clump stars) with mass and age estimates, spanning the entire disk (from the Galactic center to R $\sim$ 20 kpc). We show that the age information in the spectra is not simply a corollary of the birth-material abundances [Fe/H] and [$\alpha$/Fe], and that even within a mono-abundance population of stars, there are age variations that vary sensibly with Galactic position. Such stellar age constraints across the Milky Way open up new avenues in Galactic archeology.
• We show that the masses of red giant stars can be well predicted from their photospheric carbon and nitrogen abundances, in conjunction with their spectroscopic stellar labels log g, Teff, and [Fe/H]. This is qualitatively expected from mass-dependent post main sequence evolution. We here establish an empirical relation between these quantities by drawing on 1,475 red giants with asteroseismic mass estimates from Kepler that also have spectroscopic labels from APOGEE DR12. We assess the accuracy of our model, and find that it predicts stellar masses with fractional r.m.s. errors of about 14% (typically 0.2 Msun). From these masses, we derive ages with r.m.s errors of 40%. This empirical model allows us for the first time to make age determinations (in the range 1-13 Gyr) for vast numbers of giant stars across the Galaxy. We apply our model to 52,000 stars in APOGEE DR12, for which no direct mass and age information was previously available. We find that these estimates highlight the vertical age structure of the Milky Way disk, and that the relation of age with [alpha/M] and metallicity is broadly consistent with established expectations based on detailed studies of the solar neighbourhood.
• A new non-perturbative, gauge-invariant model of QCD renormalization is applied to high-energy elastic pp scattering. The differential cross-section deduced from this model displays a diffraction dip that resembles those seen in experiments. Comparison with ISR and LHC data is currently underway.
• We continue our study of zero-dimensional field theories in which the fields take values in a strong homotopy Lie algebra. In the first part, we review in detail how higher Chern-Simons theories arise in the AKSZ formalism. These theories form a universal starting point for the construction of $L_\infty$-algebra models. We then show how to describe superconformal field theories and how to perform dimensional reductions in this context. In the second part, we demonstrate that Nambu-Poisson and multisymplectic manifolds are closely related via their Heisenberg algebras. As a byproduct of our discussion, we find central Lie $p$-algebra extensions of $\mathfrak{so}(p+2)$. Finally, we study a number of $L_\infty$-algebra models which are physically interesting and which exhibit quantized multisymplectic manifolds as vacuum solutions.
• We introduce and demonstrate the power of a method to speed up current iterative techniques for N-body modified gravity simulations. Our method is based on the observation that the accuracy of the final result is not compromised if the calculation of the fifth force becomes less accurate, but substantially faster, in high-density regions where it is weak due to screening. We focus on the nDGP model which employs Vainshtein screening, and test our method by running AMR simulations in which the solutions on the finer levels of the mesh (high density) are not obtained iteratively, but instead interpolated from coarser levels. We show that the impact this has on the matter power spectrum is below $1\%$ for $k < 5h/{\rm Mpc}$ at $z = 0$, and even smaller at higher redshift. The impact on halo properties is also small ($\lesssim 3\%$ for abundance, profiles, mass; and $\lesssim 0.05\%$ for positions and velocities). The method can boost the performance of modified gravity simulations by more than a factor of 10, which allows them to be pushed to resolution levels that were previously hard to achieve.
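The core trick, replacing fine-level iterations with interpolation from coarser levels, can be illustrated in one dimension (a schematic analogue of the AMR prolongation step, not the paper's solver):

```python
import numpy as np

def prolong(coarse):
    """Linearly interpolate a 1-D coarse-grid field onto a grid with twice
    the resolution: copy coarse values to even points, average neighbours
    onto odd points. This replaces iterating on the fine level."""
    n = len(coarse)
    fine = np.empty(2 * n - 1)
    fine[0::2] = coarse
    fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:])
    return fine

x = np.linspace(0, 1, 9)
coarse = np.sin(2 * np.pi * x)          # stand-in for a coarse-level field
fine = prolong(coarse)
xf = np.linspace(0, 1, 17)
err = np.max(np.abs(fine - np.sin(2 * np.pi * xf)))
print(len(fine), round(err, 3))  # 17 fine-grid points; interpolation error stays small
```

The paper's observation is that in screened regions the fifth force is so weak that this cheap transfer, instead of a full fine-level solve, changes observables at only the percent level.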
• Dynamical estimates of the mass surface density at the solar radius can be made up to a height of 4 kpc using thick disk stars as tracers of the potential. We investigate why different Jeans estimators of the local surface density lead to puzzling and conflicting results. Using the Jeans equations, we compute the vertical (F_z) and radial (F_R) components of the gravitational force, as well as Gamma(z), defined as the radial derivative of V_c^2, with V_c^2= -RF_R. If we assume that the thick disk does not flare and that all the components of the velocity dispersion tensor of the thick disk have a uniform radial scalelength of 3.5 kpc, Gamma takes implausibly large negative values when using the currently available kinematical data of the thick disk. This implies that the input parameters or the model assumptions must be revised. Using a simulated thick disk, we have explored the impact of the assumption that the scale lengths of the density and velocity dispersions do not depend on the vertical height z above the midplane. In the absence of any information about how these scale radii depend on z, we adopt a different strategy. By using a parameterized Galactic potential, we find that acceptable fits to F_z, F_R and Gamma are obtained for a flaring thick disk and a spherical dark matter halo with a local density larger than 0.0064 M_sun pc^-3. Disk-like dark matter distributions might also be compatible with the current data on the thick disk. A precise measurement of Gamma at the midplane could be very useful for discriminating between models.
• The detailed composition of most metal-poor halo stars has been found to be very uniform. However, a fraction of 20-70% (increasing with decreasing metallicity) exhibit dramatic enhancements in their abundances of carbon - the so-called carbon-enhanced metal-poor (CEMP) stars. A key question for Galactic chemical evolution models is whether this non-standard composition reflects that of the stellar natal clouds, or is due to local, post-birth mass transfer of chemically processed material from a binary companion; CEMP stars should then all be members of binary systems. Our aim is to determine the frequency and orbital parameters of binaries among CEMP stars with and without over-abundances of neutron-capture elements - CEMP-s and CEMP-no stars, respectively - as a test of this local mass-transfer scenario. This paper discusses a sample of 24 CEMP-no stars, while a subsequent paper will consider a similar sample of CEMP-s stars. Most programme stars exhibit no statistically significant radial-velocity variation over this period and appear to be single, while four are found to be binaries with orbital periods of 300-2,000 days and normal eccentricity; the binary frequency for the sample is 17±9%. The single stars mostly belong to the recently identified "low-C band", while the binaries have higher absolute carbon abundances. We conclude that the nucleosynthetic process responsible for the strong carbon excess in these ancient stars is unrelated to their binary status; the carbon was imprinted on their natal molecular clouds in the early Galactic ISM by an even earlier, external source, strongly indicating that the CEMP-no stars are likely bona fide second-generation stars. We discuss potential production sites for carbon and its transfer across interstellar distances in the early ISM, and implications for the composition of high-redshift DLA systems. Abridged.
• We present a combined experimental and theoretical study of highly charged and excited electron-hole complexes in strain-free (111) GaAs/AlGaAs quantum dots grown by droplet epitaxy. We address the complexes with one of the charge carriers residing in the excited state, namely, the "hot" trions X$^{-*}$ and X$^{+*}$, and the doubly negatively charged exciton X$^{2-}$. Our magneto-photoluminescence experiments performed on single quantum dots in the Faraday geometry uncover characteristic emission patterns for each excited electron-hole complex, which are very different from the photoluminescence spectra observed in (001)-grown quantum dots. We present a detailed theory of the fine structure and magneto-photoluminescence spectra of the X$^{-*}$, X$^{+*}$ and X$^{2-}$ complexes, governed by the interplay between the electron-hole Coulomb exchange interaction and the heavy-hole mixing characteristic of these quantum dots with a trigonal symmetry. Comparison between experiment and theory of the magneto-photoluminescence allows for precise charge-state identification, as well as extraction of electron-hole exchange interaction constants and $g$-factors for the charge carriers occupying excited states.
• As we are entering the era of precision cosmology, it is necessary to count on accurate cosmological predictions from any proposed model of dark matter. In this paper we present a novel approach to the cosmological evolution of scalar fields that eases their analytic and numerical analysis at the background and at the linear order of perturbations. We apply the method to a scalar field endowed with a quadratic potential and revisit its properties as dark matter. Some of the results known in the literature are recovered, and a better understanding of the physical properties of the model is provided. It is shown that the Jeans wavenumber defined as $k_J = a \sqrt{mH}$ is directly related to the suppression of linear perturbations at wavenumbers $k>k_J$. We also discuss some semi-analytical results that are well satisfied by the full numerical solutions obtained from an amended version of the CMB code CLASS. Finally we draw some of the implications that this new treatment of the equations of motion may have in the prediction for cosmological observables.
• For every adapted, càglàd process (strategy) $G$ and typical càdlàg price paths whose jumps are no greater than some $c>0$, we define the integral $G\cdot S$ as a limit of simple integrals.
• In this paper we study the eigenvalue problems for a nonlocal operator of order $s$ that is analogous to the local pseudo $p$-Laplacian. We show that there is a sequence of eigenvalues $\lambda_n \to \infty$ and that the first one is positive, simple, isolated and has a positive and bounded associated eigenfunction. For the first eigenvalue we also analyze the limits as $p\to \infty$ (obtaining a limit nonlocal eigenvalue problem analogous to the pseudo infinity Laplacian) and as $s\to 1^-$ (obtaining the first eigenvalue for a local operator of $p$-Laplacian type). To perform this study we have to introduce anisotropic fractional Sobolev spaces and prove some of their properties.
• Strain engineering allows the physical properties of materials and devices to be widely tailored, as paradigmatically demonstrated by the strained transistors and semiconductor lasers employed in consumer electronics. For this reason, its potential impact on our society has been compared to that of chemical alloying. Although significant progress has been made in recent years on strained nanomaterials, strain fields (which are of tensorial nature, with six independent components) are still mostly used in a "scalar" and/or static fashion. Here we present a new class of strain actuators which allow the three components of the in-plane stress tensor in a nanomembrane to be independently and reversibly controlled. The actuators are based on monolithic piezoelectric substrates, which are micro-machined via femtosecond-laser processing. Their functionality is demonstrated by "programming" arbitrary stress states in a semiconductor layer, whose light emission is used as a local and sensitive strain gauge. The results shown in this work open a new route to investigating and making use of strain effects in materials and devices.
• Under very mild assumptions, we give formulas for the correlation and local dimensions of measures on the limit set of a Moran construction by means of the data used to construct the set.
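The correlation dimension covered by such formulas can be estimated numerically from the scaling of correlation sums. A sketch for the middle-thirds Cantor set, a standard Moran construction with dimension $\log 2/\log 3 \approx 0.63$ (a generic estimator, not the paper's formulas):

```python
import numpy as np

rng = np.random.default_rng(3)
# Points of the middle-thirds Cantor set via random ternary digits in {0, 2}.
digits = rng.choice([0, 2], size=(2000, 20))
pts = (digits / 3.0 ** np.arange(1, 21)).sum(axis=1)

# Correlation sum C(r): fraction of distinct pairs closer than r.
# The correlation dimension is the slope of log C(r) against log r.
dists = np.abs(pts[:, None] - pts[None, :])
np.fill_diagonal(dists, np.inf)          # exclude self-pairs
rs = np.logspace(-3, -1, 10)
C = [(dists < r).mean() for r in rs]
slope = np.polyfit(np.log(rs), np.log(C), 1)[0]
print(round(slope, 2))  # close to log 2 / log 3 ~ 0.63
```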
• This paper presents a new model for characterising temporal dependence in exceedances above a threshold. The model is based on the class of trawl processes, which are stationary, infinitely divisible stochastic processes. We review properties of trawl processes in the context of statistical modelling, and introduce a new representation that enables exact simulation for discrete observations. The model for extreme values is constructed by embedding a trawl process in a hierarchical framework, which ensures that the marginal distribution is generalised Pareto, as expected from classical extreme value theory. We also consider a modified version of this model that works with a wider class of generalised Pareto distributions, and has the advantage of separating marginal and temporal dependence properties. The model is illustrated by applications to environmental time series, and it is shown that the model offers considerable flexibility in capturing the dependence structure of extreme value data.
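The hierarchical construction guarantees generalised Pareto margins, and that marginal building block is routine to simulate and fit with scipy (illustrative parameter values, not data from the paper):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)

# Draw exceedances above a threshold with a generalised Pareto marginal,
# the distribution the hierarchical trawl model preserves.
xi, sigma = 0.2, 1.0                    # shape and scale (illustrative)
exc = genpareto.rvs(xi, scale=sigma, size=5000, random_state=rng)

# Refit by maximum likelihood with the location pinned at the threshold.
xi_hat, _, sigma_hat = genpareto.fit(exc, floc=0)
print(round(xi_hat, 2), round(sigma_hat, 2))  # near 0.2 and 1.0
```

The trawl process then supplies the temporal dependence between successive exceedances, which this marginal fit alone does not capture.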
• In the recent paper on "The Higgs Legacy of the LHC Run I" we interpreted the LHC Higgs results in terms of an effective Lagrangian using the SFitter framework. For the on-shell Higgs analysis of rates and kinematic distributions we relied on a linear representation based on dimension-6 operators with a simplified fermion sector. In this addendum we describe how the extension of Higgs coupling modifications in a linear dimension-6 Lagrangian can be formally understood in terms of the non-linear effective field theory. It turns out that our previous results can be translated to the non-linear framework through a simple operator rotation.
• We study the time evolution of Wightman two point functions of scalar fields in AdS$_3$-Vaidya, a spacetime undergoing gravitational collapse. In the boundary field theory, the collapse corresponds to a quench process where the dual 1+1 dimensional CFT is taken out of equilibrium and subsequently thermalizes. From the two point function, we extract an effective occupation number in the boundary theory and study how it approaches the thermal Bose-Einstein distribution. We find that the Wightman functions, as well as the effective occupation numbers, thermalize with a rate set by the lowest quasinormal mode of the scalar field in the BTZ black hole background. We give a heuristic argument for the quasinormal decay, which applies to more general Vaidya spacetimes also in higher dimensions. This suggests a unified picture in which thermalization times of one and two point functions are determined by the lowest quasinormal mode. Finally, we study how these results compare to previous calculations of two point functions based on the geodesic approximation.
• We report a high-field magnetotransport study on selected low-carrier crystals of the topological insulator Bi$_{2-x}$Sb${_x}$Te$_{3-y}$Se$_{y}$. Monochromatic Shubnikov-de Haas (SdH) oscillations are observed at 4.2~K and their two-dimensional nature is confirmed by tilting the magnetic field with respect to the sample surface. With the help of Lifshitz-Kosevich theory, important transport parameters of the surface states are obtained, including the carrier density, cyclotron mass and mobility. For $(x,y)=(0.50,1.3)$ the Landau level plot is analyzed in terms of a model based on a topological surface state in the presence of a non-ideal linear dispersion relation and a Zeeman term with $g_s = 70$ or $-54$. Input parameters were taken from the electronic dispersion relation measured directly by angle-resolved photoemission spectroscopy on crystals from the same batch. The Hall resistivity of the same crystal (thickness of 40~$\mu$m) is analyzed in a two-band model, from which we conclude that the ratio of the surface conductance to the total conductance amounts to 32~\%.
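The Lifshitz-Kosevich damping factors from which the cyclotron mass and mobility are typically extracted have the standard textbook form (not quoted from this abstract):

```latex
\Delta\rho \propto R_T R_D, \qquad
R_T = \frac{X}{\sinh X}, \quad X = \frac{2\pi^2 k_B T\, m_c}{\hbar e B}, \qquad
R_D = \exp\!\left(-\frac{2\pi^2 k_B T_D\, m_c}{\hbar e B}\right),
```

where $m_c$ is the cyclotron mass and $T_D$ the Dingle temperature; fitting the temperature dependence of the oscillation amplitude to $R_T$ yields $m_c$, and the field dependence via $R_D$ yields the quantum scattering time.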
• We present the calculation of the NLO QCD corrections to the electroweak production of top-antitop pairs at the CERN LHC in the presence of a new neutral gauge boson. The corrections are implemented in the parton shower Monte Carlo program POWHEG. Standard Model (SM) and new physics interference effects are properly taken into account. QED singularities, first appearing at this order, are consistently subtracted. Numerical results are presented for SM and $Z'$ total cross sections and distributions in invariant mass, transverse momentum, azimuthal angle and rapidity of the top-quark pair. The remaining theoretical uncertainty from scale and PDF variations is estimated, and the potential of the charge asymmetry to distinguish between new physics models is investigated for the Sequential SM and a leptophobic topcolor model.
• In this paper, we investigate perturbations of linear integrable Hamiltonian systems, with the aim of establishing results in the spirit of the KAM theorem (preservation of invariant tori), the Nekhoroshev theorem (stability of the action variables for a finite but long interval of time) and Arnold diffusion (instability of the action variables). Whether the frequency of the integrable system is resonant or not, it is known that the KAM theorem does not hold true for all perturbations; when the frequency is resonant, it is the Nekhoroshev theorem which does not hold true for all perturbations. Our first result deals with the resonant case: we prove a result of instability for a generic perturbation, which implies that the KAM and the Nekhoroshev theorems do not hold true even for a generic perturbation. The case where the frequency is non-resonant is more subtle. Our second result shows that for a generic perturbation, the KAM theorem holds true. Concerning the Nekhoroshev theorem, it is known that one has stability over an exponentially long interval of time, and that this cannot be improved for all perturbations. Our third result shows that for a generic perturbation, one has stability for a doubly exponentially long interval of time. The only question left unanswered is whether one has instability for a generic perturbation (necessarily after this very long interval of time).
• We introduce a damping term for the special relativistic Euler equations in $3$-D and show that the equations reduce to the non-relativistic damped Euler equations in the Newtonian limit. We then write the equations as a symmetric hyperbolic system for which local-in-time existence of smooth solutions can be shown.
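For orientation, the Newtonian-limit target system is the classical damped compressible Euler system, which in standard form reads (the damping coefficient $a > 0$ is our notation, not taken from this abstract):

```latex
\partial_t \rho + \nabla\cdot(\rho u) = 0, \qquad
\partial_t(\rho u) + \nabla\cdot(\rho u \otimes u) + \nabla p = -a\,\rho u,
```

where $\rho$ is the density, $u$ the velocity field and $p$ the pressure; the term $-a\rho u$ is the frictional damping.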
• The functional measures for quantum massless and massive particles are shown to be equivalent up to a certain diffeomorphism.
• Nov 26 2015 stat.OT arXiv:1511.08180v1
This article details the historical developments that gave rise to the Bayes factor for testing a point null hypothesis against a composite alternative. In line with current thinking, we find that the conceptual innovation - to assign prior mass to a general law - is due to a series of three articles by Dorothy Wrinch and Sir Harold Jeffreys (1919, 1921, 1923). However, our historical investigation also suggests that in 1932 it was J.B.S. Haldane who derived the first Bayes factor. Jeffreys was well aware of Haldane's work, and it may have inspired him to pursue a more concrete statistical implementation of his conceptual ideas. It thus appears that Haldane may have had a much bigger role in the statistical development of the Bayes factor than has hitherto been assumed.
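The object whose history the article traces can be made concrete with a minimal sketch: a Bayes factor for a binomial point null against a uniform composite alternative (the example and numbers are our own illustration, not from the article).

```python
from math import comb

def bayes_factor_01(k, n):
    """Bayes factor for H0: theta = 1/2 versus H1: theta ~ Uniform(0, 1),
    given k successes in n binomial trials.

    Numerator: the likelihood under the point null.
    Denominator: the marginal likelihood under the uniform prior,
    which integrates exactly to 1 / (n + 1).
    """
    marginal_h0 = comb(n, k) * 0.5 ** n
    marginal_h1 = 1.0 / (n + 1)
    return marginal_h0 / marginal_h1

# BF > 1 favours the point null; BF < 1 favours the composite alternative.
print(round(bayes_factor_01(10, 20), 3))  # prints 3.7
```

With perfectly balanced data (10 out of 20) the point null is mildly favoured; extreme data (e.g. 0 out of 20) pushes the factor well below 1.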
• We consider a class of fixed-charge transportation problems over graphs. We show that this problem is strongly NP-hard, but solvable in pseudo-polynomial time over trees using dynamic programming. We also show that the LP formulation associated to the dynamic program can be obtained from extended formulations of single-node flow polytopes. Given these results, we present a unary expansion-based formulation for general graphs that is computationally advantageous when compared to a standard formulation, even if its LP relaxation is not stronger.
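The pseudo-polynomial dynamic-programming flavour mentioned above can be illustrated on the simplest case, a single-node fixed-charge flow problem (a hypothetical toy instance of our own, not the paper's tree algorithm):

```python
def min_fixed_charge_cost(arcs, demand):
    """Minimum cost of shipping `demand` integer units through arcs.

    Each arc (fixed, unit, cap) costs fixed + unit * q when it carries
    q > 0 units (with q <= cap), and 0 when unused.

    dp[j] = cheapest way to ship exactly j units with the arcs seen so
    far; the table has demand + 1 entries, hence the running time is
    pseudo-polynomial (polynomial in the numeric value of the demand).
    """
    INF = float("inf")
    dp = [0.0] + [INF] * demand
    for fixed, unit, cap in arcs:
        new = dp[:]
        for j in range(demand + 1):
            if dp[j] == INF:
                continue
            for q in range(1, min(cap, demand - j) + 1):
                cost = dp[j] + fixed + unit * q
                if cost < new[j + q]:
                    new[j + q] = cost
        dp = new
    return dp[demand]

# Two arcs: (fixed=5, unit=1, cap=3) and (fixed=2, unit=2, cap=2), demand 4:
# ship 3 units on the first arc (5 + 3) and 1 on the second (2 + 2).
print(min_fixed_charge_cost([(5, 1, 3), (2, 2, 2)], 4))  # prints 12.0
```

The fixed charge is what makes the problem combinatorial: the cost of an arc jumps discontinuously at zero flow, so a plain LP relaxation underestimates it.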
• In this paper we explore two ways of using context for object detection. The first model focusses on people and the objects they commonly interact with, such as fashion and sports accessories. The second model considers more general object detection and uses the spatial relationships between objects and between objects and scenes. Our models are able to capture precise spatial relationships between the context and the object of interest, and make effective use of the appearance of the contextual region. On the newly released COCO dataset, our models provide relative improvements of up to 5% over CNN-based state-of-the-art detectors, with the gains concentrated on hard cases such as small objects (10% relative improvement).
• Let $E$ be an elliptic curve over $\mathbb{Q}$ and $A$ be another elliptic curve over a real quadratic number field. We construct a $\mathbb{Q}$-motive of rank $8$, together with a distinguished class in the associated Bloch-Kato Selmer group, using Hirzebruch-Zagier cycles, that is, graphs of Hirzebruch-Zagier morphisms. We show that, under certain assumptions on $E$ and $A$, the non-vanishing of the central critical value of the (twisted) triple product $L$-function attached to $(E,A)$ implies that the dimension of the associated Bloch-Kato Selmer group of the motive is $0$; and the non-vanishing of the distinguished class implies that the dimension of the associated Bloch-Kato Selmer group of the motive is $1$. This can be viewed as the triple product version of Kolyvagin's work on bounding Selmer groups of a single elliptic curve using Heegner points.
• We analyze the impact of electric field and magnetic field fluctuations in the decoherence of the electronic spin associated with a single nitrogen-vacancy (NV) defect in diamond by engineering spin eigenstates protected either against magnetic noise or against electric noise. The competition between these noise sources is analyzed quantitatively by changing their relative strength through modifications of the environment. This study provides significant insights into the decoherence of the NV electronic spin, which is valuable for quantum metrology and sensing applications.
• We confront a hybrid strong/weak coupling model for jet quenching with data from LHC heavy ion collisions. The model combines the perturbative QCD physics at high momentum transfer and the strongly coupled dynamics of non-abelian gauge theory plasmas in a phenomenological way. By performing a full Monte Carlo simulation, and after fitting a single parameter, we successfully describe several jet observables at the LHC, including dijet and photon-jet measurements. Within current theoretical and experimental uncertainties, we find that such observables show little sensitivity to the specifics of the microscopic energy loss mechanism. We also present a new observable, the ratio of the fragmentation function of inclusive jets to that of the associated jets in dijet pairs, which can discriminate among different medium models. Finally, we discuss the importance of plasma response to jet passage in jet shapes.
• Nov 26 2015 math.NT arXiv:1511.08172v1
In this article, we study $p$-adic torus periods for certain $p$-adic valued functions on Shimura curves coming from classical origin. We prove a $p$-adic Waldspurger formula for these periods, generalizing the recent work of Bertolini, Darmon, and Prasanna. In pursuing such a formula, we construct a new anti-cyclotomic $p$-adic $L$-function of Rankin-Selberg type. At a character of positive weight, the $p$-adic $L$-function interpolates the central critical value of the complex Rankin-Selberg $L$-function. Its value at a Dirichlet character, which is outside the range of interpolation, essentially computes the corresponding $p$-adic torus period.
• We analyse a set of moments of minima of the eclipsing variable V0873 Per, a short-period low-mass binary star. The moments of minima of V0873 Per were taken from the literature and from our own observations during 2013-2014. Our aim is to test the system for the existence of additional bodies using eclipse timing. We found a periodic variation of the orbital period of V0873 Per. This variation can be explained by the gravitational influence of a third companion on the central binary star. The mass of the third-body candidate is $\approx 0.2 M_{\odot}$, and its orbital period is $\approx 300$ days. The paper also includes a table of moments of minima calculated from our observations, which can be used in future investigations of V0873 Per.
• Direct numerical integrations of the two-dimensional Fokker-Planck equation are carried out for compact objects orbiting a supermassive black hole (SBH) at the center of a galaxy. As in Papers I-III, the diffusion coefficients incorporate the effects of the lowest-order post-Newtonian corrections to the equations of motion. In addition, terms describing the loss of orbital energy and angular momentum due to the 5/2-order post-Newtonian terms are included. In the steady state, captures are found to occur in two regimes that are clearly differentiated in terms of energy, or semimajor axis; these two regimes are naturally characterized as "plunges" (low binding energy) and "EMRIs," or extreme-mass-ratio inspirals (high binding energy). The capture rate, and the distribution of orbital elements of the captured objects, are presented for two steady-state models based on the Milky Way: one with a relatively high density of remnants and one with a lower density. In both models, but particularly in the second, the steady-state energy distribution and the distribution of orbital elements of the captured objects are substantially different than if the Bahcall-Wolf energy distribution were assumed. The ability of classical relaxation to soften the blocking effects of the Schwarzschild barrier is quantified. These results, together with those of Papers I-III, suggest that a Fokker-Planck description can adequately represent the dynamics of collisional loss cones in the relativistic regime.
• An excess of X-ray emission below 1 keV, called the soft-excess, is detected in a large fraction of Seyfert 1-1.5s. The origin of this feature remains debated, as several models have been suggested to explain it, including warm Comptonization and blurred ionized reflection. In order to constrain the origin of this component, we exploit the different behavior of these models above 10 keV. Ionized reflection covers a broad energy range, from the soft X-rays to the hard X-rays, while Comptonization drops very quickly in the soft X-rays. We present here the results of a study of 102 Seyfert 1s (Sy 1.0, 1.2, 1.5 and NLSy1) from the Swift/BAT 70-Month Hard X-ray Survey catalog. The joint spectral analysis of Swift/BAT and XMM-Newton data allows a hard X-ray view of the soft-excess that is present in about 80% of the objects of our sample. We discuss how the soft-excess strength is linked to the reflection at high energy, to the photon index of the primary continuum and to the Eddington ratio. In particular, we find a positive dependence of the soft-excess intensity on the Eddington ratio. We compare our results to simulations of blurred ionized-reflection models and show that they are in contradiction. By stacking both XMM-Newton and Swift/BAT spectra per soft-excess strength, we see that the shape of reflection at hard X-rays stays constant when the soft-excess varies, showing an absence of a link between reflection and soft-excess. We conclude that the ionized-reflection model for the origin of the soft-excess is disfavoured with respect to the warm Comptonization model in our sample of Seyfert 1s.
• Indoor tracking has all-pervasive applications beyond mere surveillance, for example in education, health monitoring, marketing, energy management and so on. Image- and video-based tracking systems are intrusive. Thermal array sensors, on the other hand, can provide coarse-grained tracking while preserving the privacy of the subjects. The goal of the project is to facilitate motion detection and group proxemics modeling using an 8 x 8 infrared sensor array. Each of the 8 x 8 pixels is a temperature reading in Fahrenheit. We refer to each 8 x 8 matrix as a scene. We collected approximately 902 scenes with different configurations of human groups and different walking directions. We infer the direction of motion of a subject across a set of scenes as left-to-right, right-to-left, up-to-down or down-to-up using cross-correlation analysis. We used features from connected-component analysis of each background-subtracted scene and performed Support Vector Machine classification to estimate the number of human subjects in the scene.
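The background-subtraction and connected-component step of such a pipeline can be sketched in a few lines (a toy illustration under our own threshold and 4-connectivity assumptions; the abstract's actual pipeline feeds connected-component features into an SVM, which is omitted here):

```python
from collections import deque

def count_subjects(scene, background, threshold=2.0):
    """Count connected warm blobs in an 8x8 thermal scene.

    A pixel is foreground when it exceeds the background reading by
    more than `threshold` degrees; foreground pixels are grouped by a
    4-connected flood fill, and each group is taken as one subject.
    """
    n = len(scene)
    fg = [[scene[i][j] - background[i][j] > threshold for j in range(n)]
          for i in range(n)]
    seen = [[False] * n for _ in range(n)]
    blobs = 0
    for i in range(n):
        for j in range(n):
            if fg[i][j] and not seen[i][j]:
                blobs += 1                      # new connected component
                seen[i][j] = True
                queue = deque([(i, j)])
                while queue:                    # flood fill this blob
                    x, y = queue.popleft()
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if (0 <= nx < n and 0 <= ny < n
                                and fg[nx][ny] and not seen[nx][ny]):
                            seen[nx][ny] = True
                            queue.append((nx, ny))
    return blobs
```

For example, a uniform 70 F background with two warm patches (a two-pixel patch and a single pixel at 75-76 F) yields a count of 2.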

Mile Gu Nov 20 2015 05:04 UTC

Good question! There shouldn't be any contradiction with the correspondence principle. The reason here is that the quantum models are built to simulate the output behaviour of macroscopic, classical systems, and are not necessarily macroscopic themselves. When we compare quantum and classical comple

...(continued)
hong Nov 20 2015 00:40 UTC

Interesting results. But, just wondering, does it contradict to the correspondence principle?

Marco Tomamichel Nov 17 2015 21:05 UTC

Thanks for pointing this out, this is an unintended omission and we will certainly fix it. I thought Koashi was first to use entropic uncertainty relations for QKD but apparently I was wrong.

Raul Garcia-Patron Nov 17 2015 14:42 UTC

Nice work, congratulations!
Please correct me if I am wrong, but there seems to be an important reference missing in the manuscript, the 2003 paper by Frederic Grosshans and Nicolas Cerf using uncertainty relations to prove the security of individual attacks against CV-QKD: arXiv:quant-ph/0311006

Marco Tomamichel Nov 12 2015 06:07 UTC

Okay, so my scite should not be considered as an endorsement. The only interesting part of this paper is Table I and II (minus the caption, which is wrong).

Chris Ferrie Nov 12 2015 05:36 UTC

Feels a bit like numerology, but the simple point that the setting choices are far from uniform is worrisome.

Marco Tomamichel Nov 12 2015 05:13 UTC

And looking forward to the response as well!

Tom Wong Nov 09 2015 11:12 UTC

This resolves an open problem of whether the procedure of Emms et al (2006), which is based on quantum walks, can distinguish all non-isomorphic strongly regular graphs. Their conclusion: no, because they came up with an example where the procedure fails.

Frédéric Grosshans Nov 02 2015 11:51 UTC

Nice work !

This paper answers a question which has obsessed me since 2002, and I’m more than happy to see that the answer is the one I would have guessed since 2004, but with no way to prove it! (Some people kept thinking I’m a bit too much obsessed by these 1.44 bits ;-) )

At that time ( ht

...(continued)
Aram Harrow Oct 21 2015 02:56 UTC

clearly that should be the last TODO to be removed.