# Top arXiv papers

• We consider the $n$-component $|\varphi|^4$ lattice spin model ($n \ge 1$) and the weakly self-avoiding walk ($n=0$) on $\mathbb{Z}^d$, in dimensions $d=1,2,3$. We study long-range models based on the fractional Laplacian, with spin-spin interactions or walk step probabilities decaying with distance $r$ as $r^{-(d+\alpha)}$ with $\alpha \in (0,2)$. The upper critical dimension is $d_c=2\alpha$. For $\epsilon >0$, and $\alpha = \frac 12 (d+\epsilon)$, the dimension $d=d_c-\epsilon$ is below the upper critical dimension. For small $\epsilon$, weak coupling, and all integers $n \ge 0$, we prove that the two-point function at the critical point decays with distance as $r^{-(d-\alpha)}$. This "sticking" of the critical exponent at its mean-field value was first predicted in the physics literature in 1972. Our proof is based on a rigorous renormalisation group method. The treatment of observables differs from that used in recent work on the nearest-neighbour 4-dimensional case, via our use of a cluster expansion.
• In combinatorial group testing problems, a Questioner needs to find a defective element $x\in [n]$ by testing subsets of $[n]$. In [18] the authors introduced a new model in which each element knows the answers for the queries that contain it, and each element should be able to identify the defective one. In this article we continue to investigate this kind of model with more defective elements. We also consider related models inspired by secret sharing, where the elements must share information among themselves to find the defectives. Finally, the adaptive versions of the different models are also investigated.
• The effects of finite particle number sampling on the net baryon number cumulants, extracted from fluid dynamical simulations, are studied. The commonly used finite particle number sampling procedure introduces an additional Poissonian (or multinomial, if global baryon number conservation is enforced) contribution which increases the extracted moments of the baryon number distribution. If this procedure is applied to a fluctuating fluid dynamics framework, one severely overestimates the actual cumulants. We show that the sampling of so-called test-particles suppresses the additional contribution to the moments by at least one power of the number of test-particles. We demonstrate this method in a numerical fluid dynamics simulation that includes the effects of spinodal decomposition due to a first-order phase transition. Furthermore, in the limit where anti-baryons can be ignored, we derive analytic formulas which capture exactly the effect of particle sampling on the baryon number cumulants. These formulas may be used to test the various numerical particle sampling algorithms.
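The test-particle suppression can be illustrated with a minimal sketch (not the authors' code; all numbers here are hypothetical): a fluid cell carries a fixed baryon number, Poisson sampling adds variance of order the mean, and sampling $N$ test-particles and rescaling by $1/N$ suppresses this extra variance by a factor $1/N$.

```python
import math, random

def poisson(lam, rng):
    # Knuth's method; adequate for moderate lam
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

rng = random.Random(42)
B_true = 5.0        # baryon number of a fluid cell, no intrinsic fluctuations
n_events = 20000

def sampled_variance(n_test):
    # sample n_test * B_true test-particles per event, rescale back by 1/n_test
    vals = [poisson(n_test * B_true, rng) / n_test for _ in range(n_events)]
    mean = sum(vals) / n_events
    return sum((v - mean) ** 2 for v in vals) / n_events

# Plain sampling adds variance ~ B_true; test-particles suppress it by 1/n_test
print(sampled_variance(1), sampled_variance(10))
```

The exact cell value has zero variance, so everything measured here is sampling artifact; with ten test-particles per baryon the spurious second moment drops by roughly an order of magnitude.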
• The simultaneous photometric and spectroscopic observations of the RR Lyrae variables in the globular cluster M3, published in Jurcsik et al. (2017, Paper I), made it possible to perform Baade-Wesselink (BW) analysis of a large sample of Blazhko stars for the first time. The BW distances of Blazhko stars turned out to be unreliable, as significantly different distances were obtained for the stars of the Blazhko sample, and also for the same star in different modulation phases. Even the results for small modulation-amplitude Blazhko stars may be doubtful. This result warns that the application of the BW method to Blazhko stars is not trustworthy. Keeping the distance fixed for each Blazhko star in each modulation phase, a significant difference between the spectroscopic and the photometric radius ($R_{\textrm{sp}}$, $R_{\textrm{ph}}$) variations is detected. The phase and amplitude variations of $R_{\textrm{sp}}$ follow the changes of the light curve during the Blazhko cycle, but the $R_{\textrm{ph}}$ curve seems to be unaffected (or only marginally affected) by the modulation. The asynchronous behaviour of $R_{\textrm{sp}}$ and $R_{\textrm{ph}}$ supports the interpretation of the Blazhko effect as a depth-dependent phenomenon, as the spectroscopic radius variation reflects the radial displacement of the line-forming region high in the atmosphere, while the photospheric radius variation is derived from the observed visual-band light emitted mostly by the lower photosphere. The stability of $R_{\textrm{ph}}$ may be interpreted as a strong argument against the non-radial-mode explanation of the Blazhko phenomenon.
• The realization of molecular-based electronic devices depends to a large extent on the ability to mechanically stabilize the involved molecular bonds, while making use of efficient resonant charge transport through the device. Resonant charge transport can induce vibrational instability of molecular bonds, leading to bond rupture under a bias voltage. In this work, we go beyond the wide-band approximation in order to study the phenomenon of vibrational instability in single molecule junctions and show that the energy-dependence of realistic molecule-leads couplings affects the mechanical stability of the junction. We show that the chemical bonds can be stabilized in the resonant transport regime by increasing the bias voltage on the junction. This research provides guidelines for the design of mechanically stable molecular devices operating in the regime of resonant charge transport.
• Many reduced order models are neither robust with respect to parameter changes nor cost-effective enough for handling the nonlinear dependence of complex dynamical systems. In this study, we put forth a robust machine learning framework for projection-based reduced order modeling of such nonlinear and nonstationary systems. As a demonstration, we focus on a nonlinear advection-diffusion system given by the viscous Burgers equation, which is a prototype setting for more realistic fluid dynamics applications with the same quadratic nonlinearity. In our proposed methodology, the effects of truncated modes are modeled using a single-layer feed-forward neural network architecture. The neural network is trained using both Bayesian regularization and extreme learning machine approaches, where the latter is found to be computationally more efficient. A particular effort is devoted to the selection of basis functions, considering both proper orthogonal decomposition and Fourier bases. It is shown that the proposed models yield significant improvements in accuracy over the standard Galerkin projection models with a negligibly small computational overhead, and provide reliable predictions with respect to parameter changes.
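The extreme learning machine step admits a compact sketch (not the paper's code): a single hidden layer with random, untrained weights, and output weights obtained in one least-squares solve. The target function and all sizes below are hypothetical stand-ins for the closure term being modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x), standing in for the truncated-mode closure term
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X).ravel()

# Extreme learning machine: random hidden layer, output weights by least squares
n_hidden = 50
W = rng.normal(size=(1, n_hidden))   # input weights: random, never trained
b = rng.normal(size=n_hidden)        # biases: random, never trained
H = np.tanh(X @ W + b)               # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # one linear solve, no backprop

err = np.max(np.abs(H @ beta - y))
print(err)
```

The absence of iterative training is what makes the ELM variant cheaper than Bayesian-regularized backpropagation, at the cost of relying on random feature quality.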
• The magnetic field in intergalactic space gives important information about magnetogenesis in the early universe. The properties of this field can be probed by searching for radiation of secondary e$^+$e$^-$ pairs created by TeV photons, which produce GeV-range radiation by Compton-scattering cosmic microwave background (CMB) photons. The arrival times of the GeV "echo" photons depend strongly on the magnetic field strength and coherence length. A Monte Carlo code that accurately treats pair creation is developed to simulate the spectrum and time-dependence of the echo radiation. The extrapolation of the spectrum of powerful gamma-ray bursts (GRBs) like GRB 130427A to TeV energies is used to demonstrate how the IGMF can be constrained if it falls in the $10^{-21}$ - $10^{-17}$ G range for a 1 Mpc coherence length.
• Let $X$ be a finite collection of sets. We count the number of ways a disjoint union of $n-1$ subsets in $X$ is again a set in $X$, and estimate this number from above by $|X|^{c(n)}$, where $$c(n)=\left(1-\frac{(n-1)\ln (n-1)}{n\ln n} \right)^{-1}.$$ This extends the recent result of Kane--Tao, corresponding to the case $n=3$ where $c(3)\approx 1.725$, to an arbitrary finite number of disjoint parts, which has applications in the run-time analysis of the ASTRAL algorithm in phylogenetic reconstruction.
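For reference, the exponent $c(n)$ is easy to evaluate directly; a quick check (hypothetical helper, not the authors' code) recovers the quoted Kane--Tao value for $n=3$:

```python
from math import log

def c(n):
    """Exponent in the bound |X|^{c(n)}: c(n) = (1 - (n-1)ln(n-1)/(n ln n))^{-1}."""
    return 1.0 / (1.0 - (n - 1) * log(n - 1) / (n * log(n)))

print(f"c(3) = {c(3):.3f}")   # close to the quoted 1.725
```

Since $(n-1)\ln(n-1)/(n\ln n) \to 1$ as $n$ grows, the exponent increases with $n$.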
• In this work, the Quasi-Random Lattice (QRL) model is summarized and critically discussed, in order to outline its potentialities and limitations with a view to future developments. QRL primarily focuses on the mean activity coefficient of ionic solutions, the model having first been developed to provide practical equations involving a minimal number of unknown or unpredictable quantities. QRL at present depends on one adjustable parameter (at given pressure and temperature), experimentally known for many common salts, either symmetric or asymmetric, and corresponding to a well-defined concentration, which also sets the upper limit of applicability of the model. For aqueous electrolytes, the concentration parameter ranges from about 1 M to 8 M. In the following it will be seen that, although belonging to the class of simplified approaches, QRL can provide very interesting results, since its simple parametrisation is more significant, from a theoretical point of view, than so far recognized. A general overview of the QRL theory will first be presented. Then, some preliminary results will be discussed, in particular concerning the volumetric and thermal properties of electrolyte solutions.
• We extend recent work by van der Laan (2014) on causal inference for causally connected units to more general social network settings. Our asymptotic results allow for dependence of each observation on a growing number of other units as sample size increases. We are not aware of any previous methods for inference about network members in observational settings that allow the number of ties per node to increase as the network grows. While previous methods have generally implicitly focused on one of two possible sources of dependence among social network observations, we allow for both dependence due to contagion, or transmission of information across network ties, and for dependence due to latent similarities among nodes sharing ties. We describe estimation and inference for causal effects that are specifically of interest in social network settings.
• For binary experimental data, we discuss randomization-based inferential procedures that do not need to invoke any modeling assumptions. We also introduce methods for likelihood and Bayesian inference based solely on the physical randomization, without any hypothetical super-population assumptions about the potential outcomes. These estimators have some properties superior to moment-based ones, such as producing estimates only within the region of feasible support. Due to the lack of identification of the causal model, we also propose a sensitivity analysis approach which allows for the characterization of the impact of the association between the potential outcomes on statistical inference.
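A Fisher-style randomization test, the basic building block behind such randomization-based procedures, fits in a few lines; the binary outcomes below are made up for illustration and are not from the paper.

```python
import itertools

# Made-up binary outcomes: 4 treated and 4 control units
treated = [1, 1, 1, 0]
control = [0, 1, 0, 0]
outcomes = treated + control
n, m = len(outcomes), len(treated)

obs = sum(treated) / m - sum(control) / (n - m)   # observed difference in means

# Fisher randomization test of the sharp null (no effect for any unit):
# re-randomize which units count as treated and recompute the statistic
count, total = 0, 0
for idx in itertools.combinations(range(n), m):
    t = sum(outcomes[i] for i in idx)
    c = sum(outcomes) - t
    stat = t / m - c / (n - m)
    count += stat >= obs
    total += 1
p_value = count / total
print(p_value)
```

Only the physical randomization is used: under the sharp null all potential outcomes are fixed, so the reference distribution is enumerated exactly rather than assumed.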
• In this paper we introduce new, easily implementable designs for drawing causal inference from randomized experiments on networks with interference. Inspired by the idea of matching in observational studies, we introduce the notion of considering a treatment assignment as a "quasi-coloring" on a graph. Our idea of a perfect quasi-coloring strives to match every treated unit on a given network with a distinct control unit that has an identical number of treated and control neighbors. For a wide range of interference functions encountered in applications, we show both by theory and simulations that the classical Neymanian estimator for the direct effect has desirable properties for our designs. This further extends to settings where homophily is present in addition to interference.
• We show that, in optical pump-probe experiments on bulk samples, the statistical distribution of the intensity of ultrashort light pulses after interaction with a nonequilibrium complex material can be used to measure the time-dependent noise of the current in the system. We illustrate the general arguments for a photo-excited Peierls material. Transient noise spectroscopy makes it possible to measure to what extent electronic degrees of freedom dynamically obey the fluctuation-dissipation theorem, and how well they thermalize during the coherent lattice vibrations. The statistical measurement developed here provides a new general framework to retrieve dynamical information on the excited distributions in nonequilibrium experiments, which could be extended to other degrees of freedom of magnetic or vibrational origin.
• The Internet of Things (IoT) is a new computing paradigm that spans wearable devices, homes, hospitals, cities, transportation, and critical infrastructure. Building security into this new computing paradigm is a major technical challenge today. However, what are the security problems in IoT that we can solve using existing security principles? And, what are the new problems and challenges in this space that require new security mechanisms? This article summarizes the intellectual similarities and differences between classic information technology security research and IoT security research.
• Any model order reduced dynamical system that evolves a modal decomposition to approximate the discretized solution of a stochastic PDE can be related to a vector field tangent to the manifold of fixed rank matrices. The Dynamically Orthogonal (DO) approximation is the canonical reduced order model for which the corresponding vector field is the orthogonal projection of the original system dynamics onto the tangent spaces of this manifold. The embedded geometry of the fixed rank matrix manifold is thoroughly analyzed. The curvature of the manifold is characterized and related to the smallest singular value through the study of the Weingarten map. Differentiability results for the orthogonal projection onto embedded manifolds are reviewed and used to derive an explicit dynamical system for tracking the truncated Singular Value Decomposition (SVD) of a time-dependent matrix. It is demonstrated that the error made by the DO approximation remains controlled under the minimal condition that the original solution stays close to the low rank manifold, which translates into an explicit dependence of this error on the gap between singular values. The DO approximation is also justified as the dynamical system that applies instantaneously the SVD truncation to optimally constrain the rank of the reduced solution. Riemannian matrix optimization is investigated in this extrinsic framework to provide algorithms that adaptively update the best low rank approximation of a smoothly varying matrix. The related gradient flow provides a dynamical system that converges to the truncated SVD of an input matrix for almost every initial data.
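The role of the singular-value gap can be seen in a small sketch (not the paper's code): the truncated SVD is the best rank-$r$ approximation, with spectral-norm error equal to the first discarded singular value (Eckart--Young). The matrix and rank below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 6))          # stand-in for a discretized solution snapshot

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 2
A_r = (U[:, :r] * s[:r]) @ Vt[:r]    # truncated SVD: best rank-r approximation

# Eckart--Young: spectral-norm error equals the first discarded singular value
err = np.linalg.norm(A - A_r, 2)
print(err, s[r])
```

The DO error bound discussed above is of the same flavor: as long as the true solution stays near the rank-$r$ manifold, the controlling quantity is the gap between retained and discarded singular values.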
• A major challenge in designing neural network (NN) systems is to determine the best structure and parameters for the network given the data for the machine learning problem at hand. Examples of parameters are the number of layers and nodes, the learning rates, and the dropout rates. Typically, these parameters are chosen based on heuristic rules and manually fine-tuned, which may be very time-consuming, because evaluating the performance of a single parametrization of the NN may require several hours. This paper addresses the problem of choosing appropriate parameters for the NN by formulating it as a box-constrained mathematical optimization problem, and applying a derivative-free optimization tool that automatically and effectively searches the parameter space. The optimization tool employs a radial basis function model of the objective function (the prediction accuracy of the NN) to accelerate the discovery of configurations yielding high accuracy. Candidate configurations explored by the algorithm are trained to a small number of epochs, and only the most promising candidates receive full training. The performance of the proposed methodology is assessed on benchmark sets and in the context of predicting drug-drug interactions, showing promising results. The optimization tool used in this paper is open-source.
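The surrogate step can be sketched as follows (a minimal illustration, not the paper's tool): fit a Gaussian RBF interpolant to the configurations evaluated so far, then pick the next candidate by maximizing the surrogate. The objective here is a cheap hypothetical stand-in for validation accuracy, and the kernel width is arbitrary.

```python
import numpy as np

def objective(x):
    # hypothetical stand-in for NN validation accuracy (cheap to evaluate here)
    return -(x - 0.3) ** 2

# Configurations evaluated so far (expensive in the real setting)
X = np.array([0.0, 0.5, 1.0])
y = objective(X)

def rbf_predict(x, X, y, eps=1.0):
    # Gaussian RBF interpolant fitted to the evaluated points
    Phi = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.solve(Phi, y)
    return np.exp(-eps * (x - X) ** 2) @ w

# Pick the next configuration by maximizing the surrogate on a grid
grid = np.linspace(0, 1, 101)
best = grid[int(np.argmax([rbf_predict(g, X, y) for g in grid]))]
print(best)
```

Each surrogate maximization is nearly free compared to training a network, which is why the model-based search can afford to screen many candidates before committing to full training.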
• We study the geodesic flow of a class of 3-manifolds introduced by Benoist which have some hyperbolicity but are non-Riemannian, not CAT(0), and with non-$C^1$ geodesic flow. The geometries are nonstrictly convex Hilbert geometries in dimension three which admit compact quotient manifolds by discrete groups of projective transformations. We prove the Patterson-Sullivan density is canonical, with applications to counting, and construct explicitly the Bowen-Margulis measure of maximal entropy. The main result of this work is ergodicity of the Bowen-Margulis measure.
• We have recently demonstrated the laser cooling of a single $^{40}$Ca$^+$ ion to the motional ground state in a Penning trap using the resolved-sideband cooling technique on the electric quadrupole transition S$_{1/2} \leftrightarrow$ D$_{5/2}$. Here we report on the extension of this technique to small ion Coulomb crystals made of two or three $^{40}$Ca$^+$ ions. Efficient cooling of the axial motion is achieved outside the Lamb-Dicke regime on a two-ion string along the magnetic field axis as well as on two- and three-ion planar crystals. Complex sideband cooling sequences are required in order to cool both axial degrees of freedom simultaneously. We measure a mean excitation after cooling of $\bar n_\text{COM}=0.30(4)$ for the centre-of-mass mode and $\bar n_\text{B}=0.07(3)$ for the breathing mode of the two-ion string, with corresponding heating rates of 11(2) s$^{-1}$ and 1(1) s$^{-1}$ at a trap frequency of 162 kHz. The ground state occupation of the axial modes is above 75% for the two-ion planar crystal, with an associated heating rate of 0.8(5) s$^{-1}$ at a trap frequency of 355 kHz.
• We present the phase diagram in a magnetic field of a 2D isotropic Heisenberg antiferromagnet on a triangular lattice. We consider a spin-$S$ model with nearest-neighbor ($J_1$) and next-nearest-neighbor ($J_2$) interactions. We focus on the range $1/8<J_2/J_1<1$, where the ordered states differ from those in the model with only nearest-neighbor exchange. A classical ground state in this range has four sublattices and is infinitely degenerate in any field. The actual order is then determined by quantum fluctuations via the "order from disorder" phenomenon. We argue that the phase diagram is rich due to the competition between four-sublattice quantum states which break either $\mathbb{Z}_3$ orientational symmetry or $\mathbb{Z}_4$ sublattice symmetry. At low and high fields, the ground state is a $\mathbb{Z}_3$-breaking canted stripe state, but at intermediate fields the ordered states break $\mathbb{Z}_4$ sublattice symmetry. The most notable of these states is the "three up, one down" state, in which spins in three sublattices are directed along the field and in one sublattice opposite to the field. Such a state breaks no continuous symmetry and has gapped excitations. As a consequence, the magnetization has a plateau at exactly one half of the saturation value. We identify gapless states which border the "three up, one down" state and discuss the transitions between these states and the canted stripe state.
• We present Hubble Space Telescope imaging of two ultra diffuse galaxies (UDGs) with measured stellar velocity dispersions in the Coma cluster. The galaxies, Dragonfly 44 and DFX1, have effective radii of 4.7 kpc and 3.5 kpc and velocity dispersions of $47^{+8}_{-6}$ km/s and $30^{+7}_{-7}$ km/s, respectively. Both galaxies are associated with a striking number of compact objects, tentatively identified as globular clusters: $N_{\rm gc}=74\pm 18$ for Dragonfly 44 and $N_{\rm gc}=62\pm 17$ for DFX1. The number of globular clusters is far higher than expected from the luminosities of the galaxies but is consistent with expectations from the empirical relation between dynamical mass and globular cluster count defined by other galaxies. Combining our data for these two objects with previous HST observations of Coma UDGs we find that most have large globular cluster populations for their luminosities, in contrast to a recent study of a similar sample by Amorisco et al. (2017), but consistent with earlier results for individual galaxies. The Harris et al. (2017) relation between globular cluster count and dark matter halo mass implies a median halo mass of $M_{\rm halo}\sim 1.5\times 10^{11}\,{\rm M}_{\odot}$ for the sixteen Coma UDGs that have been observed with HST so far, with the largest and brightest having $M_{\rm halo}\sim 5\times 10^{11}\,{\rm M}_{\odot}$.
• We study the reactions of the low ionosphere during tropical depressions (TDs) detected before hurricane appearances in the Atlantic Ocean. We explore 41 TD events using very low frequency (VLF) radio signals emitted by the NAA transmitter located in the USA and recorded by a VLF receiver located in Belgrade (Serbia). We found VLF signal deviations (caused by ionospheric turbulence) in 36 out of the 41 TD events (88%). Additionally, we explore 27 TDs which did not develop into hurricanes and find similar low-ionospheric reactions. However, in the sample of 41 TDs followed by hurricanes, the typical low-ionosphere perturbations seem to be more frequent than in the other TDs.
• (May 25 2017, math.DS, arXiv:1705.08511v1) We define a broad class of piecewise smooth plane homeomorphisms which have properties similar to those of Lozi maps, including the existence of a hyperbolic attractor. We call these maps Lozi-like. For these maps one can apply our previous results on kneading theory for Lozi maps. We present strong numerical evidence that there exist Lozi-like maps whose kneading sequences differ from those of Lozi maps.
• Hamiltonian Monte Carlo (HMC) is a powerful sampling algorithm employed by several probabilistic programming languages. Its fully automatic implementations have made HMC a standard tool for applied Bayesian modeling. While its performance is often superior to alternatives under a wide range of models, one weakness of HMC is its inability to handle discrete parameters. In this article, we present discontinuous HMC, an extension that can efficiently explore discrete parameter spaces as well as continuous ones. The proposed algorithm is based on two key ideas: embedding of discrete parameters into a continuous space and simulation of Hamiltonian dynamics on a piecewise smooth density function. The latter idea has been explored under special cases in the literature, but the extensions introduced here are critical in turning the idea into a general and practical sampling algorithm. Discontinuous HMC is guaranteed to outperform a Metropolis-within-Gibbs algorithm as the two algorithms coincide under a specific (and sub-optimal) implementation of discontinuous HMC. It is additionally shown that the dynamics underlying discontinuous HMC have a remarkable similarity to a zig-zag process, a continuous-time Markov process behind a state-of-the-art non-reversible rejection-free sampler. We apply our algorithm to challenging posterior inference problems to demonstrate its wide applicability and superior performance.
• The explosive growth of location-enabled devices, coupled with the increasing use of Internet services, has led to an increasing awareness of the importance and usage of geospatial information in many applications. Navigation apps (often called Maps) use a variety of available data sources to calculate and predict the travel time, as well as several options for routing, in public transportation, car or pedestrian modes. This paper evaluates the pedestrian mode of Maps apps on the three major smartphone operating systems (Android, iOS and Windows Phone). In the paper, we will show that the Maps apps on iOS, Android and Windows Phone in pedestrian mode predict travel time without learning from the individual's movement profile. In addition, we will exemplify that those apps suffer from a specific data quality issue, which relates to the absence of information about the location and type of pedestrian crossings. Finally, we will illustrate learning from movement profiles of individuals using various predictive analytics models to improve the accuracy of travel time estimation.
• We analyze universal terms that appear in the large system size scaling of the overlap between the Néel state and the ground state of the spin-1/2 XXZ chain in the antiferromagnetic regime. In a critical theory, the order one term of the asymptotics of such an overlap may be expressed in terms of $g$-factors, known in the context of boundary conformal field theory. In particular, for the XXZ model in its gapless phase, this term provides access to the Luttinger parameter. In its gapped phase, on the other hand, the order one term simply reflects the symmetry broken nature of the phase. In order to study the large system size scaling of this overlap analytically and to compute the order one term exactly, we use a recently derived finite-size determinant formula and perform an asymptotic expansion. Our analysis confirms the predictions of boundary conformal field theory and enables us to determine the exponent of the leading finite-size correction.
• FUV observations have revealed the transition temperature gas (TTG; $\log{T({\mathrm{K}})}$ ~ 5), located in the lower Galactic halo and in the High Velocity Clouds. However, the corresponding X-ray absorption has so far remained mostly undetected. To improve this situation in Galactic X-ray absorption studies, we accumulated very deep (~3 Ms) spectra of the blazar PKS 2155-304 obtained with the spectrometers RGS1, RGS2, LETG/HRC and LETG/ACIS-S and studied the absorption lines due to the intervening Galactic components. The very high quality of the data and the coverage of important wavelengths with at least two independent instruments allowed us to reliably detect ten Galactic lines with better than 99.95% confidence. We discovered significant absorption from the blended OIV transitions 1s-2p $^2$S (22.571 Å), 1s-2p $^2$P (22.741 Å) and 1s-2p $^2$D (22.777 Å), and from the OV transition 1s-2p (22.370 Å), from TTG at $\log{T({\mathrm{K}})} \thinspace = \thinspace 5.2\pm0.1$. A joint X-ray and FUV analysis indicated that photoionisation is negligible for this component and that the gas is in a cooling transition phase. However, the temperature is high enough that the column density ratio N(OIV)/N(OV) is not significantly different from that in collisional ionisation equilibrium (CIE). Under CIE we obtained $N_{\mathrm{OIV}}$ = 3.6$\pm$2.0 $\times 10^{15}$ cm$^{-2}$, corresponding to $N_{\mathrm{H}}$ = 1.0$\pm$0.5 $\times 10^{19} \frac{Z_{\odot}}{Z_{\mathrm{TTG}}}$ cm$^{-2}$.
• Current understanding of how contractility emerges in disordered actomyosin networks of non-muscle cells is still largely based on the intuition derived from earlier works on muscle contractility. This view, however, largely overlooks the free energy gain following passive cross-linker binding, which, even in the absence of active fluctuations, provides a thermodynamic drive towards highly overlapping filamentous states. In this work, we shed light on this phenomenon, showing that passive cross-linkers, when considered in the context of two anti-parallel filaments, generate noticeable contractile forces. However, as binding free energy of cross-linkers is increased, a sharp onset of kinetic arrest follows, greatly diminishing effectiveness of this contractility mechanism, allowing the network to contract only with weakly resisting tensions at its boundary. We have carried out stochastic simulations elucidating this mechanism, followed by a mean-field treatment that predicts how contractile forces asymptotically scale at small and large binding energies, respectively. Furthermore, when considering an active contractile filament pair, based on non-muscle myosin II, we found that the non-processive nature of these motors leads to highly inefficient force generation, due to recoil slippage of the overlap during periods when the motor is dissociated. However, we discovered that passive cross-linkers can serve as a structural ratchet during these unbound motor time spans, resulting in vast force amplification. Our results shed light on the non-equilibrium effects of transiently binding proteins in biological active matter, as observed in the non-muscle actin cytoskeleton, showing that highly efficient contractile force dipoles result from synergy of passive cross-linker and active motor dynamics, via a ratcheting mechanism on a funneled energy landscape.
• We propose an iterated local search based on several classes of local and large neighborhoods for the bin packing problem with conflicts. This problem, which combines the characteristics of both bin packing and vertex coloring, arises in various application contexts such as logistics and transportation, timetabling, and resource allocation for cloud computing. We introduce $O(1)$ evaluation procedures for classical local-search moves, polynomial variants of ejection chains and assignment neighborhoods, an adaptive set covering-based neighborhood, and finally a controlled use of 0-cost moves to further diversify the search. The overall method produces solutions of good quality on the classical benchmark instances and scales very well with an increase of problem size. Extensive computational experiments are conducted to measure the respective contribution of each proposed neighborhood. In particular, the 0-cost moves and the large neighborhood based on set covering contribute very significantly to the search. Several research perspectives are open in relation to possible hybridizations with other state-of-the-art mathematical programming heuristics for this problem.
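As a baseline of the kind such a local search would start from and improve upon (a sketch, not the authors' method), a first-fit-decreasing heuristic that respects pairwise conflicts can be written in a few lines; the item sizes, conflict pair, and capacity below are invented.

```python
def first_fit_conflicts(items, sizes, conflicts, capacity):
    """Greedy first-fit-decreasing for bin packing with conflicts.

    sizes:     dict item -> size
    conflicts: set of frozenset({a, b}) pairs that may not share a bin
    Returns a list of bins, each a [load, set_of_items] pair.
    """
    bins = []
    for it in sorted(items, key=lambda i: -sizes[i]):   # largest items first
        placed = False
        for b in bins:
            fits = b[0] + sizes[it] <= capacity
            compatible = all(frozenset((it, m)) not in conflicts for m in b[1])
            if fits and compatible:
                b[0] += sizes[it]
                b[1].add(it)
                placed = True
                break
        if not placed:                                   # open a new bin
            bins.append([sizes[it], {it}])
    return bins

sizes = {0: 5, 1: 5, 2: 4, 3: 3, 4: 3}
conflicts = {frozenset((0, 1))}   # items 0 and 1 may not share a bin
bins = first_fit_conflicts(list(sizes), sizes, conflicts, capacity=10)
print(len(bins))
```

The conflict check is what couples the packing to a vertex-coloring constraint; the neighborhoods in the paper then move and swap items to escape the greedy solution.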
• The problem of constructing all the non-degenerate involutive set-theoretic solutions of the Yang-Baxter equation has recently been reduced to the problem of describing all the left braces. In particular, the classification of all finite left braces is fundamental in order to describe all such finite solutions of the Yang-Baxter equation. In this paper we continue the study of finite simple left braces, with emphasis on the application of the asymmetric product of left braces to construct new classes of simple left braces. We not only construct new classes but also interpret all previously known constructions as asymmetric products. Moreover, a construction is given of finite simple left braces with a multiplicative group that is solvable of arbitrary derived length.
• Randomized experiments have been used to assist decision-making in many areas. They help people select the optimal treatment for the test population with a certain statistical guarantee. However, subjects can show significant heterogeneity in response to treatments. The problem of customizing treatment assignment based on subject characteristics is known as uplift modeling, differential response analysis, or personalized treatment learning in the literature. A key feature of uplift modeling is that the data is unlabeled. It is impossible to know whether the chosen treatment is optimal for an individual subject because the response under alternative treatments is unobserved. This presents a challenge to both the training and the evaluation of uplift models. In this paper we describe how to obtain an unbiased estimate of the key performance metric of an uplift model, the expected response. We present a new uplift algorithm which creates a forest of randomized trees. The trees are built with a splitting criterion designed to directly optimize their uplift performance based on the proposed evaluation method. Both the evaluation method and the algorithm apply to an arbitrary number of treatments and general response types. Experimental results on synthetic data and industry-provided data show that our algorithm leads to significant performance improvement over other applicable methods.
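The unbiased-evaluation idea can be sketched on synthetic data (all numbers hypothetical, and this is a generic inverse-propensity estimator, not necessarily the paper's exact formula): under uniform randomization, average the responses of subjects whose realized treatment matches the policy's recommendation, reweighted by the inverse assignment probability.

```python
import random
rng = random.Random(0)

# Synthetic randomized trial: two treatments assigned uniformly at random.
# Hypothetical ground truth: treatment 1 helps when x > 0 and hurts otherwise.
data = []
for _ in range(50000):
    x = rng.uniform(-1, 1)
    t = rng.randrange(2)                      # randomized assignment, p = 1/2
    lift = 0.2 if (x > 0) == (t == 1) else -0.2
    y = 1 if rng.random() < 0.5 + lift else 0
    data.append((x, t, y))

def expected_response(policy):
    # Unbiased estimate: keep subjects whose assigned treatment matches the
    # policy's recommendation, reweight by 1/p(t) = 2 (uniform randomization)
    return sum(2 * y for x, t, y in data if policy(x) == t) / len(data)

personalized = expected_response(lambda x: 1 if x > 0 else 0)
always_control = expected_response(lambda x: 0)
print(personalized, always_control)
```

The estimate uses only factual outcomes, which is exactly how an uplift model can be scored even though counterfactual responses are never observed.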
• A machine learning approach that we term the Stochastic Replica Voting Machine (SRVM) algorithm is presented and applied to binary and 3-class classification problems in materials science. Here, we employ SRVM to predict candidate compounds capable of forming the cubic Perovskite (ABX3) structure and further classify binary (AB) solids. The results of our binary and ternary classifications are compared to those obtained by the SVM algorithm.
• We extend techniques due to Pardon to show that there is a lower bound on the distortion of a knot in $\mathbb{R}^3$ proportional to the minimum of the bridge distance and the bridge number of the knot. We also exhibit an infinite family of knots for which the minimum of the bridge distance and the bridge number is unbounded and Pardon's lower bound is constant.
• May 25 2017 cs.NI cs.CR arXiv:1705.08489v1
Today's embedded and cyber-physical systems are ubiquitous. A large number of critical cyber-physical systems have real-time requirements (e.g., avionics, automobiles, power grids, manufacturing systems, industrial control systems, etc.). The current trend is to connect real-time embedded devices to the Internet. This gives rise to the real-time Internet-of-things (RT-IoT), which promises a better user experience through stronger connectivity and better use of next-generation embedded devices, albeit with safety-critical properties. However, RT-IoT systems are also increasingly becoming targets for cyber-attacks, as evidenced by recent events. This paper gives an introduction to RT-IoT systems, an overview of current approaches, and possible research challenges towards a holistic secure RT-IoT framework.
• We introduce second-order vector representations of words, induced from nearest neighborhood topological features in pre-trained contextual word embeddings. We then analyze the effects of using second-order embeddings as input features in two deep natural language processing models, for named entity recognition and recognizing textual entailment, as well as a linear model for paraphrase recognition. Surprisingly, we find that nearest neighbor information alone is sufficient to capture most of the performance benefits derived from using pre-trained word embeddings. Furthermore, second-order embeddings are able to handle highly heterogeneous data better than first-order representations, though at the cost of some specificity. Additionally, augmenting contextual embeddings with second-order information further improves model performance in some cases. Due to variance in the random initializations of word embeddings, utilizing nearest neighbor features from multiple first-order embedding samples can also contribute to downstream performance gains. Finally, we identify intriguing characteristics of second-order embedding spaces for further research, including much higher density and different semantic interpretations of cosine similarity.
• Rotationally coherent Lagrangian vortices (RCLVs) are identified from satellite-derived surface geostrophic velocities in the Eastern Pacific (180$^\circ$-130$^\circ$ W) using the objective (frame-invariant) finite-time Lagrangian-coherent-structure detection method of Haller et al. (2016) based on the Lagrangian-averaged vorticity deviation. RCLVs are identified for 30, 90, and 270 day intervals over the entire satellite dataset, beginning in 1993. In contrast to structures identified using Eulerian eddy-tracking methods, the RCLVs maintain material coherence over the specified time intervals, making them suitable for material transport estimates. Statistics of RCLVs are compared to statistics of eddies identified from sea-surface height (SSH) by Chelton et al. (2011). RCLVs and SSH eddies are found to propagate westward at similar speeds at each latitude, consistent with the Rossby wave dispersion relation. However, RCLVs are uniformly smaller and shorter-lived than SSH eddies. A coherent eddy diffusivity is derived to quantify the contribution of RCLVs to meridional transport; it is found that RCLVs contribute less than 1% to net meridional dispersion and diffusion in this sector, implying that eddy transport of tracers is mostly due to incoherent motions, such as swirling and filamentation outside of the eddy cores, rather than coherent meridional translation of eddies themselves. These findings call into question prior estimates of coherent eddy transport based on Eulerian eddy identification methods.
• May 25 2017 hep-ph arXiv:1705.08486v1
If new physics (e.g. SUSY) does not show up as direct evidence at the LHC, it could still be observable in FCNC processes involving the $t$-quark. We take a close look at the process $t\to c + h/Z$ and show that its branching ratio in the Standard Model is subject to three mechanisms of suppression. To obtain an observable signal, one needs to evade all these mechanisms in a theory beyond the Standard Model. We show that a theory like the cMSSM cannot provide a big enough enhancement. However, in a framework like $R$-parity-violating SUSY, observable signals are a distinct possibility.
• Observations of high energy neutrinos, both in the laboratory and from cosmic sources, can be a useful probe in searching for new physics. Such observations can provide sensitive tests of Lorentz invariance violation (LIV), which may be the result of quantum gravity (QG) physics. We review some observationally testable consequences of LIV using the effective field theory (EFT) formalism. To do this, one can postulate the existence of additional small LIV terms in free particle Lagrangians, suppressed by powers of the Planck mass. The observational consequences of such terms are then examined. In particular, one can place limits on a class of non-renormalizable, mass dimension five and six Lorentz invariance violating operators that may be the result of QG.
• We consider an extension of the Standard Model that provides a unified description of eV-scale neutrino mass and dark energy. An explicit model is presented by augmenting the Standard Model with an $SU(2)_L$ doublet scalar, a singlet scalar, and a right-handed neutrino, all of which are assumed to be charged under a global $U(1)_X$ symmetry. A light pseudo-Nambu-Goldstone boson, associated with the spontaneously broken $U(1)_{X}$ symmetry, acts as the mediator of an attractive force leading to a Dirac neutrino condensate with large correlation length and a non-zero gap in the right range, providing a cosmologically feasible dark energy scenario. The neutrino mass is generated through the usual Dirac seesaw mechanism. Parameter space reproducing a viable dark energy scenario while having neutrino mass in the right ballpark is presented.
• We give explicit formulae for a DGLA model of the bi-gon which is symmetric under the geometric symmetries of the cell. This follows the work of Lawrence-Sullivan on the (unique) DGLA model of the interval and its construction uses deeper knowledge of the structure of such models and their localisations for non-simply connected spaces.
• The differential equation proposed by Frits Zernike to obtain a basis of polynomial orthogonal solutions on the unit disk, used to classify wavefront aberrations in circular pupils, is shown to have a set of new orthonormal solution bases, involving Legendre and Gegenbauer polynomials, in non-orthogonal coordinates close to Cartesian ones. We find the overlaps between the original Zernike basis and a representative of the new set, which turn out to be Clebsch-Gordan coefficients.
• High quality electrical contact to semiconducting transition metal dichalcogenides (TMDCs) such as $MoS_2$ is key to unlocking their unique electronic and optoelectronic properties for fundamental research and device applications. Despite extensive experimental and theoretical efforts, reliable ohmic contact to doped TMDCs remains elusive and would benefit from a better understanding of the underlying physics of the metal-TMDC interface. Here we present measurements of the atomic-scale energy band diagram of junctions between various metals and heavily doped monolayer $MoS_2$ using ultra-high vacuum scanning tunneling microscopy (UHV-STM). Our measurements reveal that the electronic properties of these junctions are dominated by 2D metal induced gap states (MIGS). These MIGS are characterized by a spatially growing measured gap in the local density of states (L-DOS) of the $MoS_2$ within 2 nm of the metal-semiconductor interface. Their decay lengths extend from a minimum of ~0.55 nm near mid gap to as long as 2 nm near the band edges and are nearly identical for Au, Pd, and graphite contacts, indicating that this is a universal property of the monolayer semiconductor. Our findings indicate that even in heavily doped semiconductors, the presence of MIGS sets the ultimate limit for electrical contact.
• May 25 2017 math.DG math.CV arXiv:1705.08477v1
Let $M$ be a complex manifold and $L$ an oriented real line bundle on $M$ equipped with a flat connection. An LCK ("locally conformally Kahler") form is a closed, positive (1,1)-form taking values in $L$, and an LCK manifold is one which admits an LCK form. Locally, any LCK form is expressed as an $L$-valued pluri-Laplacian of a function called the LCK potential. We consider a manifold $M$ with an LCK form admitting a global LCK potential, and prove that $M$ admits a global, positive LCK potential. Then $M$ admits a holomorphic embedding into a Hopf manifold.
• The Nuclear Spectroscopic Telescope Array (NuSTAR) provides an improvement in sensitivity at energies above 10 keV by two orders of magnitude over non-focusing satellites, making it possible to probe deeper into the Galaxy and Universe. Lansbury and collaborators recently completed a catalog of 497 sources serendipitously detected in the 3-24 keV band using 13 deg$^2$ of NuSTAR coverage. Here, we report on an optical and X-ray study of 16 Galactic sources in the catalog. We identify eight of them as stars (but some or all could have binary companions), and use information from Gaia to report distances and X-ray luminosities for three of them. There are four CVs or CV candidates, and we argue that NuSTAR J233426-2343.9 is a relatively strong CV candidate based partly on an X-ray spectrum from XMM-Newton. NuSTAR J092418-3142.2, which is the brightest serendipitous source in the Lansbury catalog, and NuSTAR J073959-3147.8 are LMXB candidates, but it is also possible that these two sources are CVs. One of the sources is a known HMXB, and NuSTAR J105008-5958.8 is a new HMXB candidate, which has strong Balmer emission lines in its optical spectrum and a hard X-ray spectrum. We discuss the implications of finding these HMXBs for the surface density (logN-logS) and luminosity function of Galactic HMXBs. We conclude that, with the large fraction of unclassified sources in the Galactic plane detected by NuSTAR in the 8-24 keV band, there could be a significant population of low luminosity HMXBs.
• Recent work has shown that state-of-the-art classifiers are quite brittle, in the sense that a small adversarial change to an input that was originally classified correctly with high confidence leads to a wrong classification, again with high confidence. This raises concerns that such classifiers are vulnerable to attacks and calls into question their usage in safety-critical systems. In this paper we show, for the first time, formal guarantees on the robustness of a classifier by giving instance-specific lower bounds on the norm of the input manipulation required to change the classifier decision. Based on this analysis we propose the Cross-Lipschitz regularization functional. We show that using this form of regularization in kernel methods and neural networks improves the robustness of the classifier without any loss in prediction performance.
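To see the shape of such an instance-specific guarantee, consider the linear multi-class case, where the minimal perturbation can be computed in closed form; for nonlinear models the paper's bound replaces the denominator with a local cross-Lipschitz constant, giving a lower bound instead of an equality. A sketch with toy weights (not taken from the paper):

```python
import numpy as np

def robustness_lower_bound(W, b, x):
    """For a linear multi-class classifier f_j(x) = W[j] @ x + b[j], the
    minimal l2-perturbation that changes the decision away from the
    predicted class c is min_{j != c} (f_c(x) - f_j(x)) / ||W[c] - W[j]||.
    For nonlinear models, bounding the denominator by a local
    cross-Lipschitz constant yields a valid lower bound instead."""
    f = W @ x + b
    c = int(np.argmax(f))
    bounds = [
        (f[c] - f[j]) / np.linalg.norm(W[c] - W[j])
        for j in range(len(f)) if j != c
    ]
    return c, min(bounds)

# Toy 3-class linear classifier in 2D.
W = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
b = np.zeros(3)
c, r = robustness_lower_bound(W, b, np.array([2.0, 0.0]))
# No perturbation with l2-norm smaller than r can change the prediction c.
```

The quantity `r` is exactly the kind of certificate the abstract describes: a per-input radius within which the decision provably cannot change.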
• Conjugated systems show complex behavior as the number of monomers increases, which is one of the reasons that long oligomers are hard to characterize by numerical methods. An example of this is fused azulene, a molecule that has been reported to display an increasing magnetic moment with system size. A similar system composed of symmetric fused benzene rings is reported to be always nonmagnetic. Instead of the empirically parametrized Pariser-Parr-Pople (PPP) Hamiltonian, a standard model for conjugated molecules, we consider the Hubbard Hamiltonian to explore a range of low electronic correlation by means of perturbation theory (PT). We show that a simple second-order perturbation treatment of electronic correlations by means of Rayleigh-Schroedinger PT allows one to accurately infer the magnetic state of these long, complex $\pi$-conjugated molecules. For fused azulene, our results support the hypothesis that the high-spin ground state of azulene oligomers comes from the frustrated geometry of these chains. We validate this approach using Density Matrix Renormalization Group (DMRG) calculations. Our procedure could be helpful for describing the magnetic ground state of a larger set of conjugated molecules.
• One of the biggest needs in network science research is access to large realistic datasets. As data analytics methods permeate a range of diverse disciplines---e.g., computational epidemiology, sustainability, social media analytics, biology, and transportation---network datasets that can exhibit characteristics encountered in each of these disciplines become paramount. The key technical issue is to be able to generate synthetic topologies with pre-specified, arbitrary degree distributions. Existing methods are limited in their ability to faithfully reproduce macro-level characteristics of networks while at the same time respecting particular degree distributions. We present a suite of three algorithms that exploit the principle of residual degree attenuation to generate synthetic topologies that adhere to macro-level real-world characteristics. By evaluating these algorithms w.r.t. several real-world datasets we demonstrate their ability to faithfully reproduce network characteristics such as node degree, clustering coefficient, hop length, and k-core structure distributions.
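The residual-degree-attenuation algorithms themselves are not given in the abstract; as a point of reference, the classical configuration model is the standard baseline for generating a graph with a prescribed degree sequence, and is the kind of generator such algorithms improve upon. A minimal sketch:

```python
import random
from collections import Counter

def configuration_model(degrees, seed=0):
    """Configuration-model baseline: build a (multi)graph whose degree
    sequence matches `degrees` by pairing half-edge 'stubs' uniformly at
    random. Note this is the classical baseline, not the paper's
    residual-degree-attenuation method."""
    rng = random.Random(seed)
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    assert len(stubs) % 2 == 0, "degree sum must be even"
    rng.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

degrees = [3, 3, 2, 2, 1, 1]
edges = configuration_model(degrees)
# The realized degree sequence matches the prescribed one exactly
# (self-loops and multi-edges count toward a node's degree).
realized = Counter()
for u, v in edges:
    realized[u] += 1
    realized[v] += 1
```

The baseline matches the degree sequence exactly but ignores macro-level properties such as clustering and k-core structure, which is precisely the gap the paper's algorithms target.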
• May 25 2017 hep-th arXiv:1705.08472v1
In this paper, we extend our previous work to construct (0,2) Toda-like mirrors to A/2-twisted theories on more general spaces, as part of a program of understanding (0,2) mirror symmetry. Specifically, we propose (0,2) mirrors to GLSMs on toric del Pezzo surfaces and Hirzebruch surfaces with deformations of the tangent bundle. We check the results by comparing correlation functions, global symmetries, as well as geometric blowdowns with the corresponding (0,2) Toda-like mirrors. We also briefly discuss Grassmannian manifolds.
• We present a novel device that can offer two extremes of elastic wave propagation --- nearly complete transmission and strong attenuation under impulse excitation. The mechanism of this highly tunable device relies on intermixing effects of dispersion and nonlinearity. The device consists of identical cylinders arranged in a chain, which interact with each other according to the nonlinear Hertzian contact law. For a `dimer' configuration, i.e., two different contact angles alternating in the chain, we analytically, numerically, and experimentally show that impulse excitation can either propagate as a localized wave, or it can travel as a highly dispersive wave. Remarkably, these extremes can be achieved in this periodic arrangement simply by \textit{in-situ} control of contact angles between cylinders. We close the discussion by highlighting the key characteristics of the mechanisms that facilitate strong attenuation of incident impulse. These include low frequency to high frequency (LF-HF) scattering, and turbulence-like cascading in a periodic system. We thus envision that these adaptive, cylinder-based nonlinear phononic crystals, in conjunction with conventional impact mitigation mechanisms, could be used to design highly tunable and efficient impact manipulation devices.
• DH Tau is a young ($\sim$1 Myr) classical T Tauri star. It is one of the few young PMS stars known to be associated with a planetary mass companion, DH Tau b, orbiting at large separation and detected by direct imaging. DH Tau b is thought to be accreting based on copious H${\alpha}$ emission and exhibits variable Paschen $\beta$ emission. NOEMA observations at 230 GHz allow us to place constraints on the disk dust mass for both DH Tau b and the primary in a regime where the disks will appear optically thin. We estimate a disk dust mass for the primary, DH Tau A, of $17.2\pm1.7\,M_{\oplus}$, which gives a disk-to-star mass ratio of 0.014 (assuming the usual gas-to-dust mass ratio of 100 in the disk). We find a conservative disk dust mass upper limit of 0.42$M_{\oplus}$ for DH Tau b, assuming that the disk temperature is dominated by irradiation from DH Tau b itself. Given the environment of the circumplanetary disk, variable illumination from the primary or the equilibrium temperature of the surrounding cloud would lead to even lower disk mass estimates. An MCFOST radiative transfer model including heating of the circumplanetary disk by DH Tau b and DH Tau A suggests that a mass-averaged disk temperature of 22 K is more realistic, resulting in a dust disk mass upper limit of 0.09$M_{\oplus}$ for DH Tau b. We place DH Tau b in context with similar objects and discuss the consequences for planet formation models.
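Optically thin disk masses of this kind follow from the standard relation $M_{\rm dust} = F_\nu d^2 / (\kappa_\nu B_\nu(T))$. A sketch of the arithmetic (the 30 mJy flux, opacity $\kappa_\nu = 2.3$ cm$^2$ g$^{-1}$ at 230 GHz, 140 pc distance, and $T = 20$ K below are illustrative assumptions, not values taken from the paper):

```python
import math

def dust_mass(F_nu_jy, d_pc, T_K, nu_hz=230e9, kappa_cm2_g=2.3):
    """Optically thin dust mass M = F_nu * d^2 / (kappa_nu * B_nu(T)),
    returned in grams. All constants in cgs units."""
    h, k, c = 6.626e-27, 1.381e-16, 2.998e10
    B_nu = (2 * h * nu_hz**3 / c**2) / math.expm1(h * nu_hz / (k * T_K))
    F_nu = F_nu_jy * 1e-23        # Jy -> erg s^-1 cm^-2 Hz^-1
    d = d_pc * 3.086e18           # pc -> cm
    return F_nu * d**2 / (kappa_cm2_g * B_nu)

M_EARTH_G = 5.972e27
# e.g. a 30 mJy source at 140 pc (roughly the distance to Taurus) at T = 20 K:
m = dust_mass(0.030, 140.0, 20.0) / M_EARTH_G   # dust mass in Earth masses
```

With these assumed inputs the result comes out in the tens of Earth masses, the same regime as the DH Tau A measurement; the non-detection limit for DH Tau b follows from the same formula with the flux upper limit and the adopted disk temperature.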
• We study integrability of the derivative of solutions to a singular one-dimensional parabolic equation with initial data in $W^{1,1}$. In order to avoid additional difficulties we consider only periodic boundary conditions. The problem we study is a gradient flow of a convex, linear growth variational functional. We also prove a similar result for the elliptic companion problem, i.e. the time semidiscretization.

Felix Leditzky May 24 2017 20:43 UTC

Yes, that's right, thanks!

For (5), you use the Cauchy-Schwarz inequality $\left| \operatorname{tr}(X^\dagger Y) \right| \leq \sqrt{\operatorname{tr}(X^\dagger X)} \sqrt{\operatorname{tr}(Y^\dagger Y)}$ for the Hilbert-Schmidt inner product $\langle X,Y\rangle := \operatorname{tr}(X^\dagger Y)$ wi

...(continued)
Michael Tolan May 24 2017 20:27 UTC

Just reading over Eq (5) on P5 concerning the diamond norm.

Should the last $\sigma_1$ on the 4th line be replaced with a $\sigma_2$? I think I can see how the proof is working but not entirely certain.

Noon van der Silk May 23 2017 11:15 UTC

I think this thread has reached its end.

I've locked further comments, and I hope that the quantum computing community can thoughtfully find an approach to language that is inclusive to all and recognises the diverse background of all researchers, current and future.

I direct your attention t

...(continued)
Varun Narasimhachar May 23 2017 02:14 UTC

While I would never want to antagonize my peers or to allow myself to assume they were acting irrationally, I do share your concerns to an extent. I worry about the association of social justice and inclusivity with linguistic engineering, virtual lynching, censorship, etc. (the latter phenomena sta

...(continued)
Aram Harrow May 23 2017 01:30 UTC

I think you are just complaining about issues that arise from living with other people in the same society. If you disagree with their values, well, then some of them might have a negative opinion about you. If you express yourself in an aggressive way, and use words like "lynch" to mean having pe

...(continued)
Steve Flammia May 23 2017 01:04 UTC

I agree with Noon that the discussion is becoming largely off topic for SciRate, but that it might still be of interest to the community to discuss this. I invite people to post thoughtful and respectful comments over at [my earlier Quantum Pontiff post][1]. Further comments here on SciRate will be

...(continued)
Noon van der Silk May 23 2017 00:59 UTC

I've moderated a few comments on this post because I believe it has gone past useful discussion, and I'll continue to remove comments that I believe don't add anything of substantial value.

Thanks.

Aram Harrow May 22 2017 23:13 UTC

The problem with your argument is that no one is forcing anyone to say anything, or banning anything.

If the terms really were offensive or exclusionary or had other bad side effects, then it's reasonable to discuss as a community whether to keep them, and possibly decide to stop using them. Ther

...(continued)
stan May 22 2017 22:53 UTC

Fair enough. At the end of the day I think most of us are concerned with the strength of the result not the particular language used to describe it.

VeteranVandal May 22 2017 22:41 UTC

But how obvious is ancilla? To me it is not even remotely obvious (nor clear as a term, but as the literature used it so much, I see such word in much the same way as I see auxiliary, in fact - now if you want to take offense with auxiliary, what can I say? I won't invent words just to please you).

...(continued)