# Top arXiv papers

• Feb 22 2018 math.RA arXiv:1802.07642v1
A Com-PreLie bialgebra is a commutative bialgebra with an extra preLie product satisfying certain compatibilities with the product and the coproduct. We give examples of cofree Com-PreLie bialgebras, including all those whose preLie product is homogeneous of degree $\geq -1$. We also give a graphical description of free unitary Com-PreLie algebras, make their canonical bialgebra structure explicit, and exhibit, with the help of a rigidity theorem, certain cofree quotients, including the Connes-Kreimer Hopf algebra of rooted trees. We finally prove that the duals of these bialgebras are also enveloping algebras of preLie algebras, which we describe combinatorially.
• Be stars are main-sequence massive stars with emission features in their spectra, which originate in circumstellar gaseous discs. Even though the viscous decretion disc (VDD) model can satisfactorily explain most observations, two important physical ingredients, namely the magnitude of the viscosity ($\alpha$) and the disc mass injection rate, remain poorly constrained. The light curves of Be stars that undergo events of disc formation and dissipation offer an opportunity to constrain these quantities. A pipeline was developed to model these events, using a grid of synthetic light curves computed from coupled hydrodynamic and radiative transfer calculations. A sample of 54 Be stars from the OGLE survey of the Small Magellanic Cloud (SMC) was selected for this study. Because of the way our sample was selected (bright stars with clear disc events), it likely represents the densest discs in the SMC. As for their siblings in the Galaxy, the disc mass in the SMC increases with the stellar mass. The typical mass and angular momentum loss rates associated with the disc events are of the order of $\sim 10^{-10}\, M_\odot\,\mathrm{yr^{-1}}$ and $\sim 5\times 10^{36}\, \mathrm{g\, cm^{2}\, s^{-2}}$, respectively. The values of $\alpha$ found in this work are typically a few tenths, consistent with recent results in the literature and with the ones found in dwarf novae, but larger than current theory predicts. Considering the sample as a whole, the viscosity parameter is roughly two times larger at build-up ($\left\langle\alpha_\mathrm{bu}\right\rangle = 0.63$) than at dissipation ($\left\langle\alpha_\mathrm{d}\right\rangle = 0.26$). Further work is necessary to verify whether this trend is real or a result of some of the model assumptions.
• In radio astronomy, the Ultra-Long Wavelengths (ULW) regime of longer than 10 m (frequencies below 30 MHz), remains the last virtually unexplored window of the celestial electromagnetic spectrum. The strength of the science case for extending radio astronomy into the ULW window is growing. However, the opaqueness of the Earth's ionosphere makes ULW observations by ground-based facilities practically impossible. Furthermore, the ULW spectrum is full of anthropogenic radio frequency interference (RFI). The only radical solution for both problems is in placing an ULW astronomy facility in space. We present a concept of a key element of a space-borne ULW array facility, an antenna that addresses radio astronomical specifications. A tripole-type antenna and amplifier are analysed as a solution for ULW implementation. A receiver system with a low power dissipation is discussed as well. The active antenna is optimized to operate at the noise level defined by the celestial emission in the frequency band 1 - 30 MHz. Field experiments with a prototype tripole antenna enabled estimates of the system noise temperature. They indicated that the proposed concept meets the requirements of a space-borne ULW array facility.
• We relate open book decompositions of a 4-manifold M to its Engel structures. Our main result is, given an open book decomposition of M whose binding is a collection of 2-tori and whose monodromy preserves a framing of a page, the construction of an Engel structure whose isotropic foliation is transverse to the interior of the pages and tangent to the binding. In particular, the pages are contact manifolds and the monodromy is a contactomorphism. As a consequence, on a parallelizable closed 4-manifold, every open book with toric binding carries, in the previous sense, an Engel structure. Moreover, we show that amongst the supported Engel structures we construct, there is a class of loose Engel structures.
• Magneto-inductive (MI) THz wireless communication has recently been shown to provide significant theoretical performance for nanoscale applications with microscale transceivers and microwatt transmission powers. Energy harvesting (EH) based generation of carrier signals for MI transceivers is critical for autonomous and noninvasive operation. State-of-the-art electromagnetic (EM) vibrational devices have millimeter dimensions and target only low-frequency EH, without any real-time communications purpose. In this article, graphene nanoscale resonators are combined with single molecular magnets (SMMs) to realize a simultaneous EH and MI transceiver, exploiting the unique advantages of graphene, such as atomic thickness, ultra-low weight, high strain and resonance frequencies reaching THz, together with the high magnetic moment of the Terbium(III) bis(phthalocyanine) ($\mbox{TbPc}_2$) SMM. The low-complexity design is improved by novel modulation methods achieving simultaneous wireless information and power transfer (SWIPT). Numerical simulations yield powers of tens of nanowatts and efficiencies of $10^4 \, \mathrm{W/m^3}$ at acoustic and ultrasound frequencies, comparable with state-of-the-art vibrational EH devices, while millimeter-wave carrier generation is also simulated. The proposed design presents a practical framework for nanoscale communications, including cellular tracking.
• In this paper we prove that an aether term can be generated from radiative corrections of the (non-minimal) coupling between the gauge and the matter fields, in a Lorentz-breaking extended Yang-Mills theory. Furthermore, we show that the path integral quantization in the Landau gauge is still consistent according to the Gribov-Zwanziger framework.
• Let M be a compact surface, either orientable or non-orientable. We study the lower central and derived series of the braid and pure braid groups of M in order to determine the values of n for which $B_n(M)$ and $P_n(M)$ are residually nilpotent or residually soluble. First, we solve this problem for the case where M is the 2-torus. We then give a general description of these series for an arbitrary semi-direct product that allows us to calculate explicitly the lower central series of $P_2(K)$, where K is the Klein bottle, and to give an estimate for the derived series of $P_n(K)$. Finally, if M is a non-orientable compact surface without boundary, we determine the values of n for which $B_n(M)$ is residually nilpotent or residually soluble in the cases that were not already known in the literature.
• It is well known that in $\mathbb{R}^n$, Gâteaux (hence Fréchet) differentiability of a convex continuous function at some point is equivalent to the existence of the partial derivatives at this point. We prove that this result extends naturally to certain infinite-dimensional vector spaces, in particular to Banach spaces having a Schauder basis.
• We study a natural problem in graph sparsification, the Spanning Tree Congestion (STC) problem. Informally, the STC problem seeks a spanning tree with no tree-edge routing too many of the original edges. The roots of this problem date back at least 30 years, motivated by applications in network design, parallel computing and circuit design. Variants of the problem have also seen algorithmic applications as a preprocessing step of several important graph algorithms. For any general connected graph with $n$ vertices and $m$ edges, we show that its STC is at most $\mathcal{O}(\sqrt{mn})$, which is asymptotically optimal since we also demonstrate graphs with STC at least $\Omega(\sqrt{mn})$. We present a polynomial-time algorithm which computes a spanning tree with congestion $\mathcal{O}(\sqrt{mn}\cdot \log n)$. We also present another algorithm for computing a spanning tree with congestion $\mathcal{O}(\sqrt{mn})$; this algorithm runs in sub-exponential time when $m = \omega(n \log^2 n)$. For achieving the above results, an important intermediate theorem is a generalized Győri–Lovász theorem, for which Chen et al. gave a non-constructive proof. We give the first elementary and constructive proof by providing a local search algorithm with running time $\mathcal{O}^*\left( 4^n \right)$, which is a key ingredient of the above-mentioned sub-exponential time algorithm. We discuss a few consequences of the theorem concerning graph partitioning, which might be of independent interest. We also show that for any graph which satisfies certain expanding properties, its STC is at most $\mathcal{O}(n)$, and a corresponding spanning tree can be computed in polynomial time. We then use this to show that a random graph has STC $\Theta(n)$ with high probability.
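The congestion objective is easy to state operationally: each tree edge must "route" every graph edge that crosses the cut induced by removing it from the tree. As a hedged illustration (not the paper's algorithm, which constructs low-congestion trees rather than evaluating them), the congestion of a given spanning tree can be computed directly from this definition:

```python
from collections import defaultdict

def tree_congestion(edges, tree_edges):
    """Congestion of a spanning tree: for each tree edge e, count the
    graph edges whose endpoints lie on opposite sides of the cut that
    removing e from the tree creates; return the maximum such count."""
    adj = defaultdict(set)
    for u, v in tree_edges:
        adj[u].add(v)
        adj[v].add(u)
    best = 0
    for (a, b) in tree_edges:
        # Find the tree component containing `a` once edge (a, b) is removed.
        side = {a}
        stack = [a]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if (x, y) in ((a, b), (b, a)):
                    continue
                if y not in side:
                    side.add(y)
                    stack.append(y)
        # Count graph edges crossing the cut (side, complement).
        crossing = sum(1 for (u, v) in edges if (u in side) != (v in side))
        best = max(best, crossing)
    return best
```

For example, on the 4-cycle with the path tree 0-1-2-3 every tree edge routes exactly two graph edges, so the congestion is 2. This brute force is only meant to make the objective concrete.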
• We prove Kollár's injectivity theorem for globally $F$-regular varieties.
• We demonstrate a novel imaging approach and associated reconstruction algorithm for far-field coherent diffractive imaging, based on the measurement of a pair of laterally sheared diffraction patterns. The differential phase profile retrieved from such a measurement leads to improved reconstruction accuracy, increased robustness against noise, and faster convergence compared to traditional coherent diffractive imaging methods. We measure laterally sheared diffraction patterns using Fourier-transform spectroscopy with two phase-locked pulse pairs from a high harmonic source. Using this approach, we demonstrate spectrally resolved imaging at extreme ultraviolet wavelengths between 28 and 35 nm.
• Entangled states are ubiquitous amongst fibrous materials, whether naturally occurring (keratin, collagen, DNA) or synthetic (nanotube assemblies, elastane). A key mechanical characteristic of these systems is their ability to reorganise in response to external stimuli, as implicated in e.g. hydration-induced swelling of keratin fibrils in human skin. During swelling, the curvature of individual fibres changes to give a cooperative and reversible structural reorganisation that opens up a pore network. The phenomenon is known to be highly dependent on topology, even if the nature of this dependence is not well understood: certain ordered entanglements (`weavings') can swell to many times their original volume while others are entirely incapable of swelling at all. Given this sensitivity to topology, it is puzzling how the disordered entanglements of many real materials manage to support cooperative dilation mechanisms. Here we use a combination of geometric and lattice-dynamical modelling to study the effect of disorder on swelling behaviour. The model system we devise spans a continuum of disordered topologies and is bounded by ordered states whose swelling behaviour is already known to be either vanishingly small or extreme. We find that while topological disorder often quenches swelling behaviour, certain disordered states possess a surprisingly large swelling capacity. Crucially, we show that the extreme swelling response previously observed only for certain specific weavings can be matched---and even superseded---by that of disordered entanglements. Our results establish a counterintuitive link between topological disorder and mechanical flexibility that has implications not only for polymer science but also for our broader understanding of collective phenomena in disordered systems.
• We investigate the connection between environment and the different quenching channels that galaxies are prone to follow in the rest-frame NUVrK (i.e., NUV-r vs. r-K) colour diagram, as identified by Moutard et al. (2016b): namely, the (fast) quenching channel followed by (young) low-mass galaxies and the (slow) quenching channel followed by (old) high-mass ones. We make use of the >22 deg$^2$ covered by the VIPERS Multi-Lambda Survey (VIPERS-MLS) to select a galaxy sample complete down to stellar masses of $M_* > 10^{9.4} M_\odot$ at $z < 0.65$ ($M_* > 10^{8.8} M_\odot$ at $z < 0.5$), including 33,500 (43,000) quiescent galaxies properly selected at $0.2 < z < 0.65$ and characterized by reliable photometric redshifts ($\sigma_{\delta z/(1+z)} \leq 0.04$) that we use to measure galaxy local densities. We find that (1) the quiescence of low-mass [$M_* \leq 10^{9.7} M_\odot$] galaxies requires a strong increase of the local density, which confirms the lead role played by environment in their fast quenching and, therefore, confirms that the low-mass upturn observed in the stellar mass function of quiescent galaxies is due to *environmental quenching*. We also observe that (2) the reservoir of low-mass galaxies prone to environmental quenching has grown between $z \sim 0.6$ and $z \sim 0.4$ whilst the share of low-mass galaxies in the quiescent population may have simultaneously increased, which may be consistent with a rising importance of *environmental quenching* with cosmic time, compared to *mass quenching*. We finally discuss the composite picture of such environmental quenching of low-mass galaxies and, in particular, how this picture may be consistent with a *delayed-then-rapid* quenching scenario.
• We prove an existence and uniqueness result for Neumann boundary problem of a parabolic partial differential equation (PDE for short) with a singular nonlinear divergence term which can only be understood in a weak sense. A probabilistic approach is applied by studying the backward stochastic differential equations (BSDEs for short) corresponding to the PDEs, the solution of which turns out to be a limit of a sequence of BSDEs constructed by penalization method.
• Cartograms are maps that rescale geographic regions (e.g., countries, districts) such that their areas are proportional to quantitative demographic data (e.g., population size, gross domestic product). Unlike conventional bar or pie charts, cartograms can represent correctly which regions share common borders, resulting in insightful visualizations that can be the basis for further spatial statistical analysis. Computer programs can assist data scientists in preparing cartograms, but developing an algorithm that can quickly transform every coordinate on the map (including points that are not exactly on a border) while generating recognizable images has remained a challenge. Methods that translate the cartographic deformations into physics-inspired equations of motion have become popular, but solving these equations with sufficient accuracy can still take several minutes on current hardware. Here we introduce a flow-based algorithm whose equations of motion are numerically easier to solve compared with previous methods. The equations allow straightforward parallelization so that the calculation takes only a few seconds even for complex and detailed input. Despite the speedup, the proposed algorithm still keeps the advantages of previous techniques: with comparable quantitative measures of shape distortion, it accurately scales all areas, correctly fits the regions together and generates a map projection for every point. We demonstrate the use of our algorithm with applications to the 2016 US election results, the gross domestic products of Indian states and Chinese provinces, and the spatial distribution of deaths in the London borough of Kensington and Chelsea between 2011 and 2014.
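As a toy illustration of the underlying idea (not the paper's flow-based algorithm, which works in two dimensions with physics-inspired equations of motion), the one-dimensional analogue of a contiguous cartogram has a closed form: region boundaries are moved so that each region's new width is proportional to its data value, while total length and adjacency are preserved. The function name below is an assumption for illustration:

```python
def cartogram_1d(widths, values):
    """One-dimensional analogue of a contiguous cartogram: rescale the
    boundaries of adjacent regions so that each region's new width is
    proportional to its data value, preserving total length and the
    left-to-right ordering (adjacency) of the regions."""
    total_w = float(sum(widths))   # total length to preserve
    total_v = float(sum(values))   # total data value
    bounds = [0.0]
    for v in values:
        # Each region receives a share of the total length
        # proportional to its value (a cumulative-sum map).
        bounds.append(bounds[-1] + total_w * v / total_v)
    return bounds
```

For instance, two equal-width regions with values 3 and 1 get new boundaries `[0.0, 3.0, 4.0]`: the first region grows to three quarters of the line. The 2D problem is hard precisely because areas, adjacency and recognizable shapes must all be balanced at once.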
• We give another proof of the existence of the endoscopic transfer for unitary Lie algebras and its compatibility with Fourier transforms. By the work of Kazhdan and Varshavsky, this implies the corresponding endoscopic fundamental lemma (a theorem of Laumon–Ngô). We study the compatibility between Fourier transforms and transfers, and we prove that compatibility in the Jacquet-Rallis setting implies compatibility in the endoscopic setting for unitary groups.
• In this paper we propose a novel method that provides contrastive explanations justifying the classification of an input by a black box classifier such as a deep neural network. Given an input, we find what should be minimally and sufficiently present (viz. important object pixels in an image) to justify its classification, and analogously what should be minimally and necessarily *absent* (viz. certain background pixels). We argue that such explanations are natural for humans and are used commonly in domains such as health care and criminology. What is minimally but critically *absent* is an important part of an explanation, which, to the best of our knowledge, has not been touched upon by current explanation methods that attempt to explain predictions of neural networks. We validate our approach on three real datasets obtained from diverse domains; namely, a handwritten digits dataset MNIST, a large procurement fraud dataset and an fMRI brain imaging dataset. In all three cases, we witness the power of our approach in generating precise explanations that are also easy for human experts to understand and evaluate.
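A heavily simplified sketch of the "minimally and sufficiently present" idea: starting from an all-background input, restore as few original features as needed for the classifier to reproduce its prediction. The actual method solves a regularized optimization problem over the network's input space; the greedy routine and function name below are illustrative assumptions only:

```python
def pertinent_positive(x, predict, background=0.0):
    """Toy 'pertinent positive': starting from an all-background input,
    greedily restore original features (largest deviation from the
    background first) until the classifier reproduces its prediction
    on the full input x.  Returns the indices of restored features."""
    target = predict(x)
    # Restore the most "present" features first.
    order = sorted(range(len(x)), key=lambda i: -abs(x[i] - background))
    current = [background] * len(x)
    kept = []
    for i in order:
        if predict(current) == target:
            break  # the features restored so far already suffice
        current[i] = x[i]
        kept.append(i)
    return kept
```

With a threshold classifier `1 if sum(v) > 2 else 0` and input `[3.0, 0.5, 0.1]`, restoring only the first feature already yields the original class, so that single feature is the sketch's "minimally sufficient" set.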
• Predictions of the next-to-leading order, i.e. one-loop, halo power spectra depend on local and non-local bias parameters up to cubic order. The linear bias parameter can be estimated from the large-scale limit of the halo-matter power spectrum, and the second-order bias parameters from the large-scale, tree-level bispectrum. Cubic operators would naturally be quantified using the tree-level trispectrum. As the latter is computationally expensive, we extend the quadratic field method proposed in Schmittfull et al. 2014 to cubic fields in order to estimate cubic bias parameters. We cross-correlate a basis set of cubic bias operators with the halo field and express the result in terms of the cross-spectra of these operators in order to cancel cosmic variance. We obtain significant detections of local and non-local cubic bias parameters, which are partially in tension with predictions based on local Lagrangian bias schemes. We directly measure the Lagrangian bias parameters of the protohaloes associated with our halo sample and clearly detect a non-local quadratic term in Lagrangian space. We do not find a clear detection of non-local cubic Lagrangian terms for low mass bins, but there is some mild evidence for their presence for the highest mass bin. While the method presented here focuses on cubic bias parameters, the approach could also be applied to quantifications of cubic primordial non-Gaussianity.
• In this paper, we address the following problem due to Frankl and Füredi (1984): what is the maximum number of hyperedges in an $r$-uniform hypergraph with $n$ vertices, such that every set of $r+1$ vertices contains $0$ or exactly $2$ hyperedges? They solved this problem for $r=3$. For $r=4$, a partial solution was given by Gunderson and Semeraro (2017) when $n=q+1$ for some prime power $q\equiv3\pmod{4}$. Assuming the existence of skew-symmetric conference matrices for every order divisible by $4$, we give a solution for $n\equiv0\pmod{4}$ and for $n\equiv3\pmod{4}$.
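The combinatorial ingredient assumed here can be stated concretely: a skew-symmetric conference matrix of order $n$ has zero diagonal, $\pm1$ off-diagonal entries, $C^\top = -C$, and $C^\top C = (n-1)I$. A small checker, with an order-4 example from the standard Paley-type construction (supplied for illustration, not taken from the paper):

```python
def is_skew_conference(C):
    """Check that C is a skew-symmetric conference matrix of order n:
    zero diagonal, +/-1 off-diagonal entries, C^T = -C, and
    C^T C = (n - 1) I."""
    n = len(C)
    for i in range(n):
        if C[i][i] != 0:
            return False
        for j in range(n):
            if i != j and (abs(C[i][j]) != 1 or C[j][i] != -C[i][j]):
                return False
    # Columns must be orthogonal with squared norm n - 1.
    for i in range(n):
        for j in range(n):
            dot = sum(C[k][i] * C[k][j] for k in range(n))
            if dot != ((n - 1) if i == j else 0):
                return False
    return True

# Order-4 example (Paley-type construction from q = 3).
C4 = [[ 0,  1,  1,  1],
      [-1,  0,  1, -1],
      [-1, -1,  0,  1],
      [-1,  1, -1,  0]]
```

Such matrices are known to exist for many orders divisible by 4 (e.g. via Paley's construction), but existence for *every* such order is the open assumption the abstract invokes.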
• We study the possibility for the LHeC facility to disentangle different new physics contributions to the production of heavy sterile Majorana neutrinos in the lepton number violating channel $e^{-}p\rightarrow l_{j}^{+} + 3 jets$ ($l_j\equiv e ,\mu$). This is done by investigating the angular and polarization trails of effective operators with distinct Dirac-Lorentz structure contributing to Majorana neutrino production, which parameterize new physics from a higher energy scale. We study an asymmetry in the angular distribution of the final anti-lepton and the effect of initial electron polarization on the number of signal events produced by the vectorial and scalar effective interactions, finding that both analyses could well separate their contributions.
• We compute some numerical invariants of the local cohomology of the ring of invariants of a finite group, mainly in the modular case, and present some applications. In particular, we study the Cohen-Macaulay property of modular invariants from the viewpoints of depth, Serre's conditions and the relevant generalizations (e.g., the Buchsbaum property). The situation in the local case is different from the global case.
• We study the simplicial $\ell^{q,p}$ cohomology of Carnot groups $G$. We show vanishing and non-vanishing results depending on the range of the $(p, q)$ gap with respect to the weight gaps in the Lie algebra cohomology of $G$.
• In this paper, we consider the estimation of a change-point for possibly high-dimensional data in a Gaussian model, using a k-means method. We prove that, up to a logarithmic term, this change-point estimator has a minimax rate of convergence. Then, considering the case of sparse data, with a Sobolev regularity, we propose a smoothing procedure based on Lepski's method and show that the resulting estimator attains the optimal rate of convergence. Our results are illustrated by some simulations. As the theoretical statement relying on Lepski's method depends on some unknown constant, practical strategies are suggested to perform an optimal smoothing.
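In one dimension, the k-means-style change-point estimator for a single change in mean reduces to choosing the split position that minimizes the within-segment sum of squares. A minimal sketch of that estimator (function name assumed; the paper's setting is high-dimensional and includes the smoothing step, which this sketch omits):

```python
def change_point(y):
    """Estimate a single change-point in a sequence by minimising the
    within-segment sum of squared deviations over all split positions
    (the one-dimensional, two-cluster 'k-means along time' estimator).
    Returns t such that y[:t] and y[t:] are the two segments."""
    n = len(y)

    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    # Try every admissible split 1 <= t <= n-1 and keep the best.
    return min(range(1, n), key=lambda t: sse(y[:t]) + sse(y[t:]))
```

For a noiseless step signal like `[0, 0, 0, 0, 5, 5, 5]` the minimizer is exactly the jump position; with noise, the abstract's point is that this estimator attains (up to a log factor) the minimax rate.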
• We study the sensitivity to the shape of the Higgs potential of single, double, and triple Higgs production at future $e^+e^-$ colliders. Physics beyond the Standard Model is parameterised through the inclusion of higher-dimensional operators $(\Phi^\dagger \Phi- v^2/2)^n/\Lambda^{(2n-4)}$ with $n=3,4$, which allows a consistent treatment of independent deviations of the cubic and quartic self-couplings beyond the tree level. We calculate the effects induced by a modified potential up to one loop in single and double Higgs production and at the tree level in triple Higgs production, for both $Z$ boson associated and $W$ boson fusion production mechanisms. We consider two different scenarios. First, the dimension-six operator provides the dominant contribution (as expected, for instance, in a linear EFT); we find in this case that the corresponding Wilson coefficient can be determined at $\mathcal{O}(10\%)$ accuracy by just combining accurate measurements of single Higgs cross sections at $\sqrt{\hat s}=$240-250 GeV and double Higgs production in $W$ boson fusion at higher energies. Second, both operators of dimension six and eight can give effects of similar order, i.e., independent quartic self-coupling deviations are present. In this case, constraints on the Wilson coefficients are best obtained by combining measurements from single, double and triple Higgs production. Given that the sensitivity of single Higgs production to the dimension-eight operator is presently unknown, we consider double and triple Higgs production and show that combining their information at higher-energy colliders will provide first coarse constraints on the corresponding Wilson coefficient.
• We present exact N-soliton optical pulses riding on a continuous-wave (c.w.) beam that propagate through and interact with a two-level active optical medium. Their representation is derived via an appropriate generalization of the inverse scattering transform for the corresponding Maxwell-Bloch equations. We describe the single-soliton solutions in detail and classify them into several distinct families. In addition to the analogues of traveling-wave soliton pulses that arise in the absence of a c.w. beam, we obtain breather-like structures, periodic pulse-trains and rogue-wave-type (i.e., rational) pulses, whose existence is directly due to the presence of the c.w. beam. These soliton solutions are the analogues for Maxwell-Bloch systems of the four classical solution types of the focusing nonlinear Schrödinger equation with non-zero background, although the physical behavior of the corresponding solutions is quite different.
• We study light scattering by localized quasi-planar excitations of a cholesteric liquid crystal known as spherulites. Given the anisotropic optical properties of the medium and the peculiar shape of the excitations, we quantitatively evaluate the cross section for the axis rotation of polarized light. Because of the complexity of the system under consideration, we first give a simplified but analytical description of the spherulite and compare the Born approximation results in this setting with those obtained from an exact numerical solution. The effects of changing the value of the driving external static electric (or magnetic) field are considered. Possible applications of the phenomenon are envisaged.
• Feb 22 2018 math.ST stat.ME stat.TH arXiv:1802.07613v1
Conditional Kendall's tau is a measure of dependence between two random variables, conditionally on some covariates. We study nonparametric estimators of such quantities using kernel smoothing techniques. Then, we assume a regression-type relationship between conditional Kendall's tau and covariates, in a parametric setting with possibly a large number of regressors. This model may be sparse, and the underlying parameter is estimated through a penalized criterion. The theoretical properties of all these estimators are stated. We prove non-asymptotic bounds with explicit constants that hold with high probability. We derive their consistency, their asymptotic law and some oracle properties. Some simulations and applications to real data conclude the paper.
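A minimal sketch of the kernel-smoothing step: the usual concordance/discordance count over pairs, with each pair weighted by Gaussian kernel weights centred at the conditioning point. The function name and the exact normalization are assumptions for illustration; the paper's estimators may differ in detail:

```python
import math

def conditional_kendall_tau(u, v, z, x0, h):
    """Kernel-smoothed Kendall's tau between samples (u_i, v_i),
    conditional on the covariate z being near x0.  Each pair (i, j)
    is weighted by K((z_i - x0)/h) * K((z_j - x0)/h) with a Gaussian
    kernel K and bandwidth h."""
    K = lambda t: math.exp(-0.5 * t * t)
    w = [K((zi - x0) / h) for zi in z]
    num = den = 0.0
    n = len(u)
    for i in range(n):
        for j in range(i + 1, n):
            wij = w[i] * w[j]
            s = (u[i] - u[j]) * (v[i] - v[j])
            # +1 for a concordant pair, -1 for discordant, 0 for ties.
            num += wij * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
            den += wij
    return num / den
```

Perfectly concordant data yields 1 and perfectly discordant data yields -1 regardless of the weights; the interest lies in how the estimate varies with the conditioning point `x0` when the dependence structure changes with the covariate.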
• Feb 22 2018 cs.LO arXiv:1802.07612v1
We investigate data-enriched models, like Petri nets with data, where executability of a transition is conditioned by a relation between the data values involved. The decidability status of various decision problems in such models may depend on the structure of the data domain. According to the WQO Dichotomy Conjecture, if a data domain is homogeneous then it either exhibits a well quasi-order (in which case decidability follows by standard arguments), or essentially all the decision problems are undecidable for Petri nets over that data domain. We confirm the conjecture for data domains that are 3-graphs (graphs with 2-colored edges). On the technical level, this result is a significant step beyond known classification results for homogeneous structures.
• PKS 0625-354 (z=0.055) was observed with the four H.E.S.S. telescopes in 2012 for 5.5 hours. The source was detected above an energy threshold of 200 GeV at a significance level of 6.1$\sigma$. No significant variability is found in these observations. The source is well described by a power-law spectrum with photon index $\Gamma=2.84\pm0.50_{stat}\pm0.10_{syst}$ and normalization (at $E_0$=1.0 TeV) $N_0(E_0)=(0.58\pm0.22_{stat}\pm0.12_{syst})\times10^{-12}\,\mathrm{TeV^{-1}\,cm^{-2}\,s^{-1}}$. Multi-wavelength data collected with Fermi-LAT, Swift-XRT, Swift-UVOT, ATOM and WISE are also analysed. Significant variability is observed only in the Fermi-LAT $\gamma$-ray and Swift-XRT X-ray energy bands. With good multi-wavelength coverage from radio to very high energies, we performed broadband modelling with two types of emission scenarios: the results from a one-zone lepto-hadronic model and a multi-zone leptonic model are compared and discussed. On the grounds of energetics, our analysis favours the leptonic multi-zone model. Models incorporating the X-ray variability constraint support previous results suggesting a BL Lac nature of PKS 0625-354, with, however, a large-scale jet structure typical of a radio galaxy.
• Feb 22 2018 math.AT math.KT arXiv:1802.07610v1
We define two model structures on the category of bicomplexes concentrated in the right half plane. The first model structure has weak equivalences detected by the totalisation functor. The second model structure's weak equivalences are detected by the $E^2$-term of the spectral sequence associated to the filtration of the total complex by the horizontal degree. We then extend this result to twisted complexes.
• Feb 22 2018 math.NT arXiv:1802.07609v1
We survey some past conditional results on the distribution of large differences between consecutive primes and examine how the Hardy-Littlewood prime k-tuples conjecture can be applied to this question.
• Feb 22 2018 cs.SE arXiv:1802.07608v1
In many scenarios we need to find the most likely program under a local context, where the local context can be an incomplete program, a partial specification, a natural language description, etc. We call this problem program estimation. In this paper we propose an abstract framework, learning to synthesize (L2S for short), to address this problem. L2S combines four tools to achieve this: syntax is used to define the search space and search steps, constraints are used to prune off invalid candidates at each search step, machine-learned models are used to estimate conditional probabilities for the candidates at each search step, and search algorithms are used to find the best possible solution. The main goal of L2S is to lay out the design space and motivate research on program estimation. We have performed a preliminary evaluation by instantiating this framework for synthesizing conditions in an automated program repair (APR) system, with training data taken from the project itself and related JDK packages. Compared to ACS, a state-of-the-art condition synthesis system for program repair, our approach handles a larger search space, fixing 4 additional bugs outside the search space of ACS, while relying only on the source code of the current projects.
• In multi-objective decision planning and learning, much attention is paid to producing optimal solution sets that contain an optimal policy for every possible user preference profile. We argue that the step that follows, i.e., determining which policy to execute by maximising the user's intrinsic utility function over this (possibly infinite) set, is under-studied. This paper aims to fill this gap. We build on previous work on Gaussian processes and pairwise comparisons for preference modelling, extend it to the multi-objective decision support scenario, and propose new ordered preference elicitation strategies based on ranking and clustering. Our main contribution is an in-depth evaluation of these strategies using computer- and human-based experiments. We show that our proposed elicitation strategies outperform the currently used pairwise methods, and find that users prefer ranking most. Our experiments further show that utilising monotonicity information in GPs, by using a linear prior mean at the start and virtual comparisons to the nadir and ideal points, increases performance. We demonstrate our decision support framework in a real-world study on traffic regulation, conducted with the city of Amsterdam.
• In this letter we propose a new methodology for crystal structure prediction, based on the evolutionary algorithm USPEX and machine-learning interatomic potentials (MLIPs) actively learning on-the-fly. Our methodology allows for an automated construction of an interatomic interaction model from scratch, replacing expensive DFT calculations with a speedup of several orders of magnitude. The predicted structures are then tested with DFT, ensuring that our machine-learning model does not introduce any prediction error. We tested our methodology on the very challenging problem of predicting boron allotropes, including those with more than 100 atoms in the primitive cell. All the main allotropes have been reproduced, and a new 54-atom structure has been found, at very modest computational effort.
• Feb 22 2018 math.NT arXiv:1802.07604v1
To each prime $p$, let $I_p \subset \mathbb{Z}/p\mathbb{Z}$ denote a collection of at most $C_0$ residue classes modulo $p$, whose cardinality $|I_p|$ is equal to 1 on the average. We show that for sufficiently large $x$, the sifted set $\{ n \in \mathbb{Z}: n \pmod{p} \not \in I_p \hbox{ for all }p \leq x\}$ contains gaps of size $x (\log x)^{1/\exp(C C_0)}$ for an absolute constant $C>0$; this improves over the "trivial" bound of $\gg x$. As a consequence, we show that for any degree $d$ polynomial $f: \mathbb{Z} \to \mathbb{Z}$ mapping the integers to itself with positive leading coefficient, the set $\{ n \leq X: f(n) \hbox{ composite}\}$ contains an interval of consecutive integers of length $\ge (\log X) (\log\log X)^{1/\exp(Cd)}$ for some absolute constant $C>0$ and sufficiently large $X$.
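The object the theorem bounds can be computed directly for small parameters. A brute-force sketch (names assumed for illustration) that lists the survivors of the sieve on $[1, N]$ and returns the largest gap between consecutive survivors:

```python
def largest_sifted_gap(primes, residues, N):
    """Sift [1, N]: remove every n lying, for some prime p, in one of
    the forbidden residue classes residues[p] modulo p; return the
    largest gap between consecutive survivors."""
    alive = [n for n in range(1, N + 1)
             if all(n % p not in residues[p] for p in primes)]
    return max(b - a for a, b in zip(alive, alive[1:]))
```

For example, with the single class $I_p = \{0 \bmod p\}$ for $p \le 5$, the survivors in $[1, 60]$ are the integers coprime to 30 (1, 7, 11, 13, ...), and the largest gap is 6. The theorem's content is that classes $I_p$ can always be exploited to force gaps much larger than the trivial bound as $x$ grows.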
• A general variational approach for computing the rovibrational dynamics of polyatomic molecules in the presence of external electric fields is presented. Highly accurate, full-dimensional variational calculations provide a basis of field-free rovibrational states for evaluating the rovibrational matrix elements of high-rank Cartesian tensor operators, and for solving the time-dependent Schrödinger equation. The effect of the external electric field is treated as a multipole moment expansion truncated at the second hyperpolarizability interaction term. Our fully numerical and computationally efficient method has been implemented in a new program, RichMol, which can simulate the effects of multiple external fields of arbitrary strength, polarization, pulse shape and duration. Illustrative calculations of two-color orientation and rotational excitation with an optical centrifuge of NH$_3$ are discussed.
• We analyze theoretically and experimentally the wake behind a horizontal cylinder of diameter $d$ horizontally translated at constant velocity $U$ in a fluid rotating about the vertical axis at a rate $\Omega$. Using particle image velocimetry measurements in the rotating frame, we show that the wake is stabilized by rotation for Reynolds number ${\rm Re}=Ud/\nu$ much larger than in a non-rotating fluid. Over the explored range of parameters, the limit of stability is ${\rm Re} \simeq (275 \pm 25) / {\rm Ro}$, with ${\rm Ro}=U/2\Omega d$ the Rossby number, indicating that the stabilizing process is governed by the Ekman pumping in the boundary layer. At low Rossby number, the wake takes the form of a stationary pattern of inertial waves, similar to the wake of surface gravity waves behind a ship. We compare this steady wake pattern to a model, originally developed by [Johnson, J. Fluid Mech. 120, 359 (1982)], assuming a free-slip boundary condition and a weak streamwise perturbation. Our measurements show a quantitative agreement with this model for ${\rm Ro}\lesssim 0.3$. At larger Rossby number, the phase pattern of the wake is close to the prediction for an infinitely small line object. However, the wake amplitude and phase origin are not correctly described by the weak-streamwise-perturbation model, calling for an alternative model for the boundary condition at moderate rotation rate.
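The dimensionless numbers involved are cheap to check numerically. In the sketch below, only the criterion ${\rm Re} \simeq 275/{\rm Ro}$ comes from the abstract; the fluid and cylinder parameters are illustrative values chosen for the example.

```python
# Illustrative parameters (not from the paper)
nu = 1.0e-6    # kinematic viscosity of water, m^2/s
d = 0.02       # cylinder diameter, m
U = 0.05       # translation velocity, m/s
Omega = 1.0    # rotation rate, rad/s

Re = U * d / nu              # Reynolds number
Ro = U / (2 * Omega * d)     # Rossby number
Re_limit = 275 / Ro          # empirical stability limit quoted above

wake_is_stable = Re < Re_limit
```

For these values Re = 1000 while the rotating-frame limit is 275/1.25 = 220, so the wake would be unstable; increasing Omega raises the limit as 1/Ro, which is the stabilizing effect the measurements quantify.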
• This work focuses on the development of a non-conforming domain decomposition method for the approximation of PDEs based on weakly imposed transmission conditions: the continuity of the global solution is enforced by a discrete number of Lagrange multipliers defined over the interfaces of adjacent subdomains. The method falls into the class of primal hybrid methods, which also include the well-known mortar method. Differently from the mortar method, we discretize the space of basis functions on the interface by spectral approximation independently of the discretization of the two adjacent domains; one of the possible choices is to approximate the interface variational space by Fourier basis functions. As we show in the numerical simulations, our approach is well-suited for the solution of problems with non-conforming meshes or with finite element basis functions with different polynomial degrees in each subdomain. Another application of the method that still needs to be investigated is the coupling of solutions obtained from otherwise incompatible methods, such as the finite element method, the spectral element method or isogeometric analysis.
• A sliding window algorithm receives a stream of symbols and has to output at each time instant a certain value which depends only on the last $n$ symbols. If the algorithm is randomized, then at each time instant it produces an incorrect output with probability at most $\epsilon$, a constant error bound. This work proposes a more relaxed definition of correctness, parameterized by the error bound $\epsilon$ and the failure ratio $\phi$: a randomized sliding window algorithm is required to err with probability at most $\epsilon$ on a $1-\phi$ fraction of all time instants of an input stream. This work continues the investigation of sliding window algorithms for regular languages. In previous works a trichotomy theorem was shown for deterministic algorithms: the optimal space complexity is either constant, logarithmic or linear in the window size. The main results of this paper concern three natural settings (randomized algorithms with failure ratio zero, and randomized/deterministic algorithms with bounded failure ratio) and provide natural language-theoretic characterizations of the corresponding space complexity classes.
• Controlled placement of nanomaterials at predefined locations with nanoscale precision remains among the most challenging problems that inhibit their large-scale integration in the field of semiconductor process technology. Methods based on surface functionalization have a drawback where undesired chemical modifications can occur and deteriorate the deposited material. The application of electric-field assisted placement techniques eliminates the element of chemical treatment; however, it requires an incorporation of conductive placement electrodes that limit the performance, scaling, and density of integrated electronic devices. Here, we report a method for electric-field assisted placement of solution-processed nanomaterials by using large-scale graphene layers featuring nanoscale deposition sites. The structured graphene layers are prepared via either transfer or synthesis on standard substrates, then are removed without residue once nanomaterial deposition is completed, yielding material assemblies with nanoscale resolution that cover surface areas larger than 1mm2. In order to demonstrate the broad applicability, we have assembled representative zero-, one-, and two-dimensional semiconductors at predefined substrate locations and integrated them into nanoelectronic devices. This graphene-based placement technique affords nanoscale resolution at wafer scale, and could enable mass manufacturing of nanoelectronics and optoelectronics involving a wide range of nanomaterials prepared via solution-based approaches.
• In this paper, we propose a new method for designing irregular spatially-coupled low-density parity-check (SC-LDPC) codes with non-uniform degree distributions by linear programming (LP). In general, irregular SC-LDPC codes with non-uniform degree distributions are difficult to design with low complexity because their density evolution equations are multi-dimensional. To solve the problem, the proposed method is based on two main ideas: a local design of the degree distributions for each pair of positions, and pre-computation of the input/output message relationship. These ideas make it possible to design the degree distributions of irregular SC-LDPC codes by solving low-complexity LP problems of the kind used when optimizing uncoupled low-density parity-check (LDPC) codes over the binary erasure channel. We also find a suitable objective function for the proposed design methodology that improves the performance of SC-LDPC codes. It is shown that the irregular SC-LDPC codes obtained by the proposed method are superior to regular SC-LDPC codes in terms of both asymptotic and finite-length performance.
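For intuition, here is the one-dimensional density-evolution recursion for an *uncoupled* $(d_v, d_c)$-regular LDPC ensemble over the BEC — the kind of scalar fixed-point computation that the local LP design builds on. This is a sketch only; the coupled, irregular equations in the paper are multi-dimensional, and the $(3,6)$ parameters are illustrative.

```python
def density_evolution(eps, dv=3, dc=6, iters=1000):
    # x: erasure probability of variable-to-check messages;
    # iterate x -> eps * (1 - (1 - x)^(dc-1))^(dv-1)
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x
```

The $(3,6)$ ensemble has a BEC threshold of about $0.4294$: below it the recursion drives the erasure probability to zero, above it the iteration stalls at a nonzero fixed point.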
• We prove that for pairwise co-prime numbers $k_1,\dots,k_d \geq 2$ there does not exist any infinite set of positive integers $A$ such that the representation function $r_A (n) = \{ (a_1, \dots, a_d) \in A^d : k_1 a_1 + \dots + k_d a_d = n \}$ becomes constant for $n$ large enough. This result is a particular case of our main theorem, which constitutes a further step towards answering a question of Sárközy and Sós and widely extends a previous result of Cilleruelo and Rué for bivariate linear forms.
• Let $(R,\mathfrak{m})$ be a Noetherian local ring and $M$ a finitely generated $R$-module. We say $M$ has maximal depth if there is an associated prime $\mathfrak{p}$ of $M$ such that $\operatorname{depth} M=\dim R/\mathfrak{p}$. In this paper, we study finitely generated modules with maximal depth. It is shown that the maximal depth property is preserved under some important module operations. Generalized Cohen--Macaulay modules with maximal depth are classified. Finally, the attached primes of $H^i_{\mathfrak{m}}(M)$ are considered for $i<\dim M$.
• The top-k error is a common measure of performance in machine learning and computer vision. In practice, top-k classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top-k classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top-k optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a naïve algorithm would require $\mathcal{O}(\binom{n}{k})$ operations, where $n$ is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of $\mathcal{O}(k n)$. Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of $k=5$. Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy.
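The polynomial-algebra connection can be sketched concretely: the sum over all $\binom{n}{k}$ subsets of exponentiated scores is the elementary symmetric polynomial $e_k$ of the values $e^{s_i}$, and a standard dynamic program computes $e_0,\dots,e_k$ in $O(kn)$ time. This is an illustrative sketch of that recurrence, not the paper's divide-and-conquer algorithm or its GPU approximation.

```python
import math

def elementary_symmetric(xs, k):
    # e[j] accumulates the sum over all j-element products of xs; O(k*n) time
    e = [0.0] * (k + 1)
    e[0] = 1.0
    for x in xs:
        for j in range(k, 0, -1):  # descending so each x enters each monomial once
            e[j] += x * e[j - 1]
    return e

def smooth_max_over_k_subsets(scores, k, tau=1.0):
    # tau * log e_k(exp(s/tau)) smoothly upper-bounds the best k-subset score sum
    e = elementary_symmetric([math.exp(s / tau) for s in scores], k)
    return tau * math.log(e[k])
```

As $\tau \to 0$ the smooth value approaches the exact maximum over $k$-subsets, which is the mechanism behind smoothing a top-k margin loss.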
• We study unextendible maximally entangled bases (UMEBs) in $\mathbb{C}^d\otimes\mathbb{C}^{d'}$ ($d<d'$). An operational method to construct UMEBs containing $d(d'-1)$ maximally entangled vectors is established, and two UMEBs in $\mathbb{C}^5\otimes\mathbb{C}^6$ and $\mathbb{C}^5\otimes\mathbb{C}^{12}$ are given as examples. Furthermore, a systematic way of constructing UMEBs containing $d(d'-r)$ maximally entangled vectors in $\mathbb{C}^d\otimes\mathbb{C}^{d'}$ is presented for $r=1,2,\cdots,d-1$. Correspondingly, two UMEBs in $\mathbb{C}^3\otimes\mathbb{C}^{10}$ are obtained.
• We reconcile the Hamiltonian formalism and the zero curvature representation in the approach to integrable boundary conditions for a classical integrable system in 1+1 space-time dimensions. We start from an ultralocal Poisson algebra involving a Lax matrix and two (dynamical) boundary matrices. Sklyanin's formula for the double-row transfer matrix is used to derive Hamilton's equations of motion for both the Lax matrix and the boundary matrices in the form of zero curvature equations. A key ingredient of the method is a boundary version of the Semenov-Tian-Shansky formula for the generating function of the time-part of a Lax pair. The procedure is illustrated on the finite Toda chain, for which we derive Lax pairs of size $2\times 2$ for previously known Hamiltonians of type $BC_N$ and $D_N$, corresponding to constant and dynamical boundary matrices respectively.
• Batch normalization was introduced in 2015 to speed up training of deep convolutional networks by normalizing the activations across the current batch to have zero mean and unit variance. The results presented here show an interesting aspect of batch normalization: controlling the shape of the training batches can influence what the network learns. If training batches are structured as balanced batches (one image per class), and inference is also carried out on balanced test batches using the batch's own means and variances, then the conditional results improve considerably. The network exploits the strong information about easy images in a balanced batch and propagates it, through the shared means and variances, to help decide the identity of harder images in the same batch. Balancing the test batches requires the labels of the test images, which are not available in practice; however, further investigation can be done using batch structures that are less strict and might not require the test image labels. The conditional results show the error rate reduced to almost zero for nontrivial datasets with a small number of classes, such as CIFAR10.
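The key mechanical point — that the test batch's own statistics couple all images in the batch — is visible in the normalization itself. A minimal sketch (illustrative, not the paper's code; the toy data and affine parameters are invented):

```python
import numpy as np

def batch_norm_infer(x, gamma=1.0, beta=0.0, eps=1e-5):
    # x: (batch, features); normalize across the batch dimension using the
    # batch's own statistics, so every sample influences every other sample
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
y = batch_norm_infer(x)
```

Because `mean` and `var` depend on all rows, replacing one image in the batch shifts the normalized activations of every other image — the channel through which easy images inform hard ones in a balanced batch.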
• One of the biggest problems in deep learning is its difficulty in retaining consistent robustness when transferring a model trained on one dataset to another dataset. To address this problem, deep transfer learning has been used to execute various vision tasks with a deep model pre-trained on a diverse dataset. However, the robustness is often far from state-of-the-art. We propose a collaborative weight-based classification method for deep transfer learning (DeepCWC). The method performs L2-norm based collaborative representation on the original images, as well as on the deep features extracted by pre-trained deep models. Two distance vectors are obtained from the two representation coefficients and then fused together via a collaborative weight. The two feature sets are complementary, and the original images provide information compensating for what is missed in the transferred deep model. A series of experiments conducted on both small and large vision datasets demonstrates the robustness of the proposed DeepCWC in both face recognition and object recognition tasks.
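The shared building block — an L2-norm collaborative representation — has a ridge-style closed form. The fusion via collaborative weights is the paper's contribution and is not reproduced here; this is only a sketch of the representation step with toy data.

```python
import numpy as np

def collaborative_coeffs(X, y, lam=0.1):
    # alpha = argmin_a ||y - X a||^2 + lam ||a||^2  (closed-form ridge solution)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# toy dictionary: two "training samples" as columns of X, one query y
X = np.eye(2)
y = np.array([1.0, 0.0])
alpha = collaborative_coeffs(X, y)
residual = np.linalg.norm(y - X @ alpha)
```

In the classification setting, a residual like this is computed per class (using only that class's coefficients), yielding the distance vectors that DeepCWC fuses across the raw-image and deep-feature views.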
• We define a simple kind of higher inductive type generalising dependent $W$-types, which we refer to as $W$-types with reductions. Just as dependent $W$-types can be characterised as initial algebras of certain endofunctors (referred to as polynomial endofunctors), we will define our generalisation as initial algebras of certain pointed endofunctors, which we will refer to as pointed polynomial endofunctors. We will show that $W$-types with reductions exist in all $\Pi W$-pretoposes that satisfy a weak choice axiom, known as weakly initial set of covers (WISC). This includes all Grothendieck toposes and realizability toposes as long as WISC holds in the background universe. We will show that a large class of $W$-types with reductions in internal presheaf categories can be constructed without using WISC. We will show that $W$-types with reductions suffice to construct some interesting examples of algebraic weak factorisation systems (awfs's). Specifically, we will see how to construct awfs's that are cofibrantly generated with respect to a codomain fibration, as defined in a previous paper by the author.
• The first steps in defining tropicalization for spherical varieties have been taken in the last few years. There are two parts to this theory: tropicalizing subvarieties of homogeneous spaces and tropicalizing their closures in spherical embeddings. In this paper, we obtain a new description of spherical tropicalization that is equivalent to the other theories. This works by embedding in a toric variety, tropicalizing there, and then applying a particular piecewise projection map. We use this theory to prove that taking closures commutes with the spherical tropicalization operation.

Beni Yoshida Feb 13 2018 19:53 UTC

This is not a direct answer to your question, but may give some intuition to formulate the problem in a more precise language. (And I simplify the discussion drastically). Consider a static slice of an empty AdS space (just a hyperbolic space) and imagine an operator which creates a particle at some

...(continued)
Abhinav Deshpande Feb 10 2018 15:42 UTC

I see. Yes, the epsilon ball issue seems to be a thorny one in the prevalent definition, since the gate complexity to reach a target state from any of a fixed set of initial states depends on epsilon, and not in a very nice way (I imagine that it's all riddled with discontinuities). It would be inte

...(continued)
Elizabeth Crosson Feb 10 2018 05:49 UTC

Thanks for the correction Abhinav, indeed I meant that the complexity of |psi(t)> grows linearly with t.

Producing an arbitrary state |phi> exactly is also too demanding for the circuit model, by the well-known argument that given any finite set of gates, the set of states that can be reached i

...(continued)
Abhinav Deshpande Feb 09 2018 20:21 UTC

Elizabeth, interesting comment! Did you mean to say that the complexity of $U(t)$ increases linearly with $t$ as opposed to exponentially?

Also, I'm confused about your definition. First, let us assume that the initial state is well defined and is $|\psi(0)\rangle$.
If you define the complexit

...(continued)
Elizabeth Crosson Feb 08 2018 04:27 UTC

The complexity of a state depends on the dynamics that one is allowed to use to generate the state. If we restrict the dynamics to be "evolving according to a specific Hamiltonian H" then we immediately have that the complexity of U(t) = exp(i H t) grows exponentially with t, up until recurrences that

...(continued)
Danial Dervovic Feb 05 2018 15:03 UTC

Thank you Māris for the extremely well thought-out and articulated points here.

I think this very clearly highlights the need to think explicitly about the precompute time if using the lifting to directly simulate the quantum walk, amongst other things.

I wish to give a well-considered respons

...(continued)
Michael A. Sherbon Feb 02 2018 15:56 UTC

Good general review on the Golden Ratio and Fibonacci ... in physics; more examples are provided in the paper “Fine-Structure Constant from Golden Ratio Geometry.” Specifically,

\alpha^{-1}\simeq\frac{360}{\phi^{2}}-\frac{2}{\phi^{3}}+\frac{\mathit{A^{2}}}{K\phi^{4}}-\frac{\mathit{A^{\math

...(continued)
Māris Ozols Feb 01 2018 17:53 UTC

This paper considers the problem of using "lifted" Markov chains to simulate the mixing of coined quantum walks. The Markov chain has to approximately (in the total variational distance) sample from the distribution obtained by running the quantum walk for a randomly chosen time $t \in [0,T]$ follow

...(continued)
Johnnie Gray Feb 01 2018 12:59 UTC

Thought I'd just comment here that we've rather significantly updated this paper.

wenling yang Jan 30 2018 19:08 UTC