# Top arXiv papers

• Layered Li(Ni,Mn,Co)O$_2$ (NMC) presents an intriguing ternary alloy design space for the optimization of performance as a cathode material in Li-ion batteries. Here, we present a high-fidelity computational search of the ternary phase diagram with an emphasis on high-Ni compositional phases. This is done by feeding density functional theory training data into a reduced-order model Hamiltonian that accounts for effective electronic and spin interactions of neighboring transition metal atoms at various separations in a background of fixed lithium and oxygen atoms. This model can then be solved to incorporate finite-temperature thermodynamics into a convex hull analysis. We also provide a method to propagate the uncertainty at every level of the analysis to the final prediction of thermodynamically favorable compositional phases, thus providing a quantitative measure of confidence for each prediction made. Due to the complexity of the three-component system, as well as the intrinsic error of density functional theory, we argue that this propagation of uncertainty, particularly the uncertainty due to the choice of exchange-correlation functional, is necessary for reliable and interpretable results. With our final result, we recover the prediction of already known phases such as LiNi$_{0.33}$Mn$_{0.33}$Co$_{0.33}$O$_2$ (111) and LiNi$_{0.8}$Mn$_{0.1}$Co$_{0.1}$O$_2$ (811) in exact proportion, find other proportions very close to the experimentally claimed LiNi$_{0.6}$Mn$_{0.2}$Co$_{0.2}$O$_2$ (622) and LiNi$_{0.5}$Mn$_{0.3}$Co$_{0.2}$O$_2$ (532) phases, and overall predict a total of 37 phases with reasonable confidence and 69 more phases with a lower level of confidence. Through our analysis, we can also identify the phases with the highest average operational voltage at a given Co composition.
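The convex hull step described above can be pictured in a few lines: given formation energies at sampled compositions, the thermodynamically favorable phases are exactly the points on the lower convex hull. The compositions and energies below are illustrative placeholders along one binary slice, not the paper's DFT data.

```python
import numpy as np

# Hypothetical formation energies (eV/atom) vs. composition fraction x.
# These numbers are invented for illustration, not DFT results.
x = np.array([0.0, 0.2, 0.33, 0.5, 0.6, 0.8, 1.0])
e = np.array([0.0, -0.05, -0.12, -0.08, -0.11, -0.13, 0.0])

def lower_hull(points):
    """Monotone-chain lower convex hull; its points are the stable phases."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            ox, oy = hull[-2]
            ax, ay = hull[-1]
            cross = (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox)
            if cross > 0:      # middle point lies strictly below the chord: keep it
                break
            hull.pop()         # middle point lies on or above the chord: unstable
        hull.append(p)
    return hull

stable = lower_hull(list(zip(x, e)))
print([p[0] for p in stable])  # compositions on the hull: [0.0, 0.33, 0.8, 1.0]
```

Points above the hull are metastable or unstable; with uncertainty propagation, each point would instead carry an energy distribution and a confidence of lying on the hull.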
• Quantum interatomic scattering is implemented in the direct simulation Monte Carlo (DSMC) method applied to transport phenomena in rarefied gases. In contrast to the traditional DSMC method based on classical scattering, the proposed implementation allows us to model gas flows over the whole temperature range, from 1 K up to any high temperature at which no ionization occurs. To illustrate the new numerical approach, the two helium isotopes $^3$He and $^4$He were considered in two canonical problems, namely, heat transfer between two planar surfaces and planar Couette flow. To solve these problems, the ab initio potential for helium is used, but the proposed technique can be used with any intermolecular potential. The problems were solved over the temperature range from 1 K to 3000 K and for two values of the rarefaction parameter, 1 and 10. The former corresponds to the transitional regime and the latter describes the temperature-jump and velocity-slip regime. No influence of quantum effects was detected within the numerical error of 0.1 % for temperatures of 300 K and higher. However, the quantum approach requires less computational effort than the classical one in this temperature range. For temperatures lower than 300 K, the influence of quantum effects exceeds the numerical error and reaches 67 % at a temperature of 1 K.
• In this work, we describe the geometric and probabilistic properties of a noncommutative 2-torus in a magnetic field. We study the volume invariance, integrated scalar curvature and volume form by using the method of perturbation by inner derivation of the magnetic Laplacian on the noncommutative 2-torus. Then, we analyze the magnetic stochastic process describing the motion of a particle subject to a uniform magnetic field on the noncommutative 2-torus, and derive and discuss its main properties.
• Suppose that $T^*$ is an $\omega_1$-Aronszajn tree with no stationary antichain. We introduce a forcing axiom PFA($T^*$) for proper forcings which preserve these properties of $T^*$. We prove that PFA($T^*$) implies many of the strong consequences of PFA, such as the failure of very weak club guessing, that all of the cardinal characteristics of the continuum are greater than $\omega_1$, and the $P$-ideal dichotomy. On the other hand, PFA($T^*$) implies some of the consequences of diamond principles, such as the existence of Knaster forcings which are not stationarily Knaster.
• Suppose $(B,\pi)$ is an open book supporting $(Y,\xi)$, where the binding $B$ is possibly disconnected, and $K$ is a braid about this open book. Then $B\cup K$ is naturally a transverse link in $(Y,\xi)$. We prove that the transverse link invariant in knot Floer homology, $\widehat{t}(B\cup K)\in\widehat{HFK}(-Y,B\cup K)$, defined in [BVV13] is always nonzero. This generalizes the main results of Etnyre and Vela-Vick in [VV11, EVV10]. As an application, we show that if $K$ is braided about an open book with connected binding, and has fractional Dehn twist coefficient greater than one, then $\widehat{t}(K)\ne 0$. This generalizes a result of Plamenevskaya [PLA15] for classical braids.
• We describe the formation of bulk and edge arcs in the dispersion relation of two-dimensional coupled-resonator arrays that are topologically trivial in the Hermitian limit. Each resonator provides two asymmetrically coupled internal modes, as realized in noncircular open geometries, which enables the system to exhibit non-Hermitian physics. Neighboring resonators are coupled chirally to induce non-Hermitian symmetries. The bulk dispersion displays Fermi arcs connecting spectral singularities known as exceptional points, and can be tuned to display purely real and imaginary branches. At an interface between resonators of different shape, one-dimensional edge states form that spectrally align along complex arcs connecting different parts of the bulk bands. We also describe conditions under which the edge-state arcs are free-standing. These features can be controlled via anisotropy in the resonator couplings.
• We first introduce the so-called snapping-out Walsh's Brownian motion and present its relation to Walsh's Brownian motion. We then describe the stiff problem related to Walsh's Brownian motion and establish a phase transition for it. The snapping-out Walsh's Brownian motion corresponds to the so-called semi-permeable pattern of this stiff problem.
• We use the proper motions (PM) of half a million red giant stars in the Large Magellanic Cloud measured by Gaia to construct a 2D kinematic map of the mean PM and its dispersion across the galaxy, out to 7 kpc from its centre. We then construct dynamical models and measure the rotation curve, mean azimuthal velocity, velocity dispersion profiles, and the orientation of the galaxy. We find that the circular velocity reaches 100 km/s at 5 kpc, and that the velocity dispersion ranges from 40-50 km/s in the galaxy centre to 20 km/s at 7 kpc.
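The map-making step here amounts to binned statistics: mean and dispersion of the velocity field per spatial cell. A minimal sketch with mock data (the rotation field, dispersion value, and bin sizes below are invented for illustration, not the Gaia measurements):

```python
import numpy as np

# Mock LMC-like disc: positions in kpc, velocities in km/s (all assumed).
rng = np.random.default_rng(0)
x, y = rng.uniform(-7, 7, (2, 100_000))
vx = -10.0 * y + rng.normal(0.0, 30.0, x.size)  # solid-body rotation + dispersion
vy = 10.0 * x + rng.normal(0.0, 30.0, x.size)

nbin = 28
edges = np.linspace(-7, 7, nbin + 1)
ix = np.clip(np.digitize(x, edges) - 1, 0, nbin - 1)
iy = np.clip(np.digitize(y, edges) - 1, 0, nbin - 1)
flat = ix * nbin + iy                            # one index per 2D cell

cnt = np.bincount(flat, minlength=nbin * nbin)
safe = np.maximum(cnt, 1)
mean_vx = np.bincount(flat, weights=vx, minlength=nbin * nbin) / safe
mean_vx2 = np.bincount(flat, weights=vx**2, minlength=nbin * nbin) / safe
disp_vx = np.sqrt(np.maximum(mean_vx2 - mean_vx**2, 0.0))  # per-cell dispersion
```

Each cell of `mean_vx`/`disp_vx` (reshaped to `nbin × nbin`) is one pixel of the kinematic map; the real analysis works in proper-motion units and fits dynamical models on top.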
• Recent observations of gravitational waves (GW) generated by black-hole collisions have opened a new window to explore the universe on diverse scales. It is expected that the detection of primordial gravitational waves will become possible in the coming years. However, the standard formalism is developed for weak gravitational waves, for which the dynamics can be linearized. In this work we develop a non-perturbative formalism to describe GW using the Unified Spinor Fields (USF) theory. The tensor spectral index is calculated, and we obtain that it must be $n_T=0$ in order for the $+$ and $\times$ polarization modes to share the same spectrum. This imposes a restriction on the self-interaction constant $\xi$ of the fermionic source. We obtain that the amplitude of the signals today must be $\sim 10^{-112}$ times weaker than the amplitude generated during inflation, which is of the order of $\left.\Delta_{GW}\right|_{Infl} \simeq 10^{-11}$.
• The Gibbs energy, G, determines the equilibrium conditions of chemical reactions and materials stability. Despite this fundamental and ubiquitous role, G has been tabulated for only a small fraction of known inorganic compounds, thus impeding a comprehensive perspective on the effects of temperature and composition on materials stability and synthesizability. Here, we use the SISSO (sure independence screening and sparsifying operator) approach to identify a simple and accurate descriptor to predict G for stoichiometric inorganic compounds with ~50 meV/atom (~1 kcal/mol) resolution, and with minimal computational cost, for temperatures ranging from 300 to 1800 K. We then apply this descriptor to ~30,000 known materials curated from the Inorganic Crystal Structure Database (ICSD). Using the resulting predicted thermochemical data, we generate thousands of temperature-dependent phase diagrams to provide insights into the effects of temperature and composition on materials synthesizability and stability and to establish the temperature-dependent scale of metastability for inorganic compounds.
• Numeracy is the ability to understand and work with numbers. It is a necessary skill for composing and understanding documents in clinical, scientific, and other technical domains. In this paper, we explore different strategies for modelling numerals with language models, such as memorisation and digit-by-digit composition, and propose a novel neural architecture that uses a continuous probability density function to model numerals from an open vocabulary. Our evaluation on clinical and scientific datasets shows that using hierarchical models to distinguish numerals from words improves a perplexity metric on the subset of numerals by 2 and 4 orders of magnitude, respectively, over non-hierarchical models. A combination of strategies can further improve perplexity. Our continuous probability density function model reduces mean absolute percentage errors by 18% and 54% in comparison to the second best strategy for each dataset, respectively.
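One way to picture the continuous-density idea (a hedged sketch, not the paper's architecture: the mixture components and the rounding-interval trick below are assumptions): model numeral values with a mixture of Gaussians and assign a numeral a probability mass by integrating the density over its rounding interval.

```python
import math

# Invented mixture over numeral values: (weight, mean, std) per component.
components = [(0.6, 37.0, 2.0), (0.4, 120.0, 15.0)]

def gaussian_cdf(v, mu, sigma):
    return 0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0))))

def numeral_prob(v, half_width=0.5):
    """P(numeral = v): integrate the mixture pdf over [v - h, v + h]."""
    return sum(w * (gaussian_cdf(v + half_width, mu, s) -
                    gaussian_cdf(v - half_width, mu, s))
               for w, mu, s in components)

print(numeral_prob(37.0))   # mass near the first component's mean
```

Because the density is defined on the real line, any numeral, seen in training or not, receives nonzero probability, which is the advantage over treating each numeral as a separate vocabulary token.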
• We present a method for calculating the quark and lepton mixing angles. After a general discussion in field-theoretic models, we present a working model from a string compactification through Z(12-I) orbifold compactification. It is the first example from string compactification that successfully fits the observed data. Assuming that all Yukawa couplings from string compactification are real, we also comment on a relation between the CP phases in the Jarlskog determinants obtained from the CKM and PMNS matrices.
• Friction coefficients for the fusion reaction $^{16}$O+$^{16}$O $\rightarrow$ $^{32}$S are extracted based on both the time-dependent Hartree-Fock and the time-dependent density matrix methods. The latter goes beyond the mean-field approximation by taking into account the effect of two-body correlations, but in practical simulations of fusion reactions we find that the total energy is not conserved. We analyze this problem and propose a solution that allows for a clear quantification of dissipative effects in the dynamics. Compared to mean-field simulations, friction coefficients in the density-matrix approach are enhanced by about $20 \, \%$. An energy-dependence of the dissipative mechanism is also demonstrated, indicating that two-body collisions are more efficient at generating friction at low incident energies.
• In a recent article, Bicca-Marques and Calegaro-Marques [2016] discussed the putative assumptions related to an interpretation we provided regarding an observed positive relationship between weekly averaged parasite richness of a group of mandrills and their daily path lengths (DPL), published earlier in the same journal (Brockmeyer et al., 2015). In our article, we proposed, inter alia, that 'the daily travels of mandrills could be seen as a way to escape contaminated habitats on a local scale'. In their article, Bicca-Marques and Calegaro-Marques [2016] proposed an alternative mechanism that they considered to be more parsimonious. In their view, increased DPL also increases exposure to novel parasites from the environment. In other words, while we proposed that elevated DPL may be a consequence of elevated parasite richness, they viewed it as a cause. We are happy to see that our study attracted so much interest that it evoked a public comment. We are also grateful to Bicca-Marques and Calegaro-Marques [2016] for pointing out an obvious alternative scenario that we failed to discuss and for laying out several key factors and assumptions that should be addressed by future studies examining the links between parasite risk and group ranging. We use this opportunity to advance this discourse by responding to some of the criticisms raised in their discussion of our article. Our reply is structured into three sections. In the first section ('Omission'), we briefly contextualize the main object of criticism. In the second section ('Parsimony'), we discuss the putative parsimony of the two competing scenarios. Finally, we end with a general comment about the nature of scientific discourse.
• Extra dimensions can be very useful tools when constructing new physics models. Previously, we began investigating toy models for the 5-D analog of the kinetic mixing/vector portal scenario, where the interactions of bulk dark matter with the brane-localized fields of the Standard Model are mediated by a massive $U(1)_D$ dark photon also living in the bulk. In that setup, where the dark matter was taken to be a complex scalar, a number of nice features were obtained: $U(1)_D$ breaking by boundary conditions without the introduction of a dark Higgs field; the absence of potentially troublesome SM Higgs-dark singlet mixing, also by boundary conditions; the natural similarity of the dark matter and dark photon masses; and the decoupling of the heavy gauge Kaluza-Klein states from the Standard Model. In the present paper we extend this approach by examining the more complex cases of Dirac and Majorana fermionic dark matter. In particular, we discuss a new mechanism that can occur in 5-D (but not in 4-D) that allows for light Dirac dark matter in the $\sim 100$ MeV mass range, even though it has an $s$-wave annihilation into Standard Model fields, by avoiding the strong constraints that arise from both the CMB and 21 cm data. This mechanism makes use of the Kaluza-Klein excitations of the dark photon to maximize the increase in the annihilation cross section usually obtained via resonant enhancement. In the Majorana dark matter case, we explore the possibility of a direct $s$-channel dark matter pair-annihilation process producing the observed relic density, due to the general presence of parity-violating dark matter interactions, without employing the usual co-annihilation mechanism, which is naturally suppressed in this 5-D setup.
• We report the first detection of the second-forbidden, non-unique, $2^+\rightarrow 0^+$, ground-state transition in the $\beta$ decay of $^{20}$F. A low-energy, mass-separated $^{20}\rm{F}$ beam produced at the IGISOL facility in Jyväskylä, Finland, was implanted in a thin carbon foil and the $\beta$ spectrum measured using a magnetic transporter and a plastic-scintillator detector. The branching ratio inferred from the observed $\beta$ yield is $[ 1.10\pm 0.21\textrm{(stat)}\pm 0.17\textrm{(sys)}^{+0.00}_{-0.11}\textrm{(theo)}] \times 10^{-5}$ corresponding to $\log ft = 10.47(11)$, making this the strongest known second-forbidden, non-unique transition. The experimental result is supported by shell-model calculations and has important astrophysical implications.
• In 1923, the Great Kanto Earthquake hit the Japanese archipelago with a moment magnitude of 7.9. To study its long-run effects on the development of children, we established a unique school-level panel dataset on the height of children and compiled the regional variation of the damage from official reports. We found that fetal earthquake exposure had negative effects on the development of children and that the magnitude of the effect increased with the degree of devastation. However, the results from the prefecture-level data imply that the impacts of earthquakes on child height are limited at the local level, as the physical disruption due to the earthquake was concentrated in a set of municipalities within a certain prefecture.
• In this paper we present a modified algorithm for the astrometric reduction of wide-field images. The algorithm is based on the iterative use of ordinary least squares (OLS) and the statistical Student t-criterion. The proposed algorithm provides automatic selection of the most probable reduction model. This approach allows almost all systematic errors caused by imperfections in the optical systems of modern large telescopes to be eliminated.
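A minimal sketch of such an iterative OLS/t-criterion loop (the candidate basis, threshold, and mock data below are assumptions for illustration, not the paper's exact procedure): fit all candidate terms, drop the least significant one while its t-statistic falls below a cut, and refit.

```python
import numpy as np

# Mock plate solution: the "true" model uses only 1, x, y; the candidate
# basis is larger, so insignificant terms should be rejected.
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, (2, 500))                  # normalized coordinates
xi = 1.0 + 0.5 * x - 0.3 * y + rng.normal(0, 0.01, x.size)

terms = {"1": np.ones_like(x), "x": x, "y": y,
         "x^2": x**2, "x*y": x * y, "y^2": y**2}

def fit_with_t_cut(terms, target, t_min=3.0):
    """Iteratively drop the least significant term until all |t| >= t_min."""
    names = list(terms)
    while True:
        A = np.column_stack([terms[n] for n in names])
        coef, res, rank, _ = np.linalg.lstsq(A, target, rcond=None)
        dof = target.size - len(names)
        sigma2 = res[0] / dof                        # residual variance
        cov = sigma2 * np.linalg.inv(A.T @ A)        # OLS coefficient covariance
        t = np.abs(coef) / np.sqrt(np.diag(cov))     # Student t-statistics
        worst = int(np.argmin(t))
        if t[worst] >= t_min or len(names) == 1:
            return dict(zip(names, coef))
        del names[worst]                             # reject and refit

model = fit_with_t_cut(terms, xi)
print(model)
```

The surviving terms define the selected reduction model; a production version would use the proper t-distribution quantile for the chosen significance level rather than a fixed cut.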
• There are only 10 Euclidean forms, that is, flat closed three-dimensional manifolds: six are orientable and four are non-orientable. The aim of this paper is to describe all types of $n$-fold coverings over the orientable Euclidean manifolds $\mathcal{G}_{2}$ and $\mathcal{G}_{4}$, and to calculate the numbers of non-equivalent coverings of each type. We classify subgroups in the fundamental groups $\pi_1(\mathcal{G}_{2})$ and $\pi_1(\mathcal{G}_{4})$ up to isomorphism and calculate the numbers of conjugacy classes of each type of subgroup for index $n$. The manifolds $\mathcal{G}_{2}$ and $\mathcal{G}_{4}$ are uniquely determined among the other orientable forms by their homology groups $H_1(\mathcal{G}_{2})=\mathbb{Z}_2\times \mathbb{Z}_2 \times \mathbb{Z}$ and $H_1(\mathcal{G}_{4})=\mathbb{Z}_2 \times \mathbb{Z}$.
• In this paper we present an algorithm designed for efficient coordinate cross-matching of objects in modern massive astronomical catalogues. Preliminary sorting of the data in the existing catalogues provides the opportunity for coordinate identification of the objects without any constraints from the storage and technical environment (PC). Using the multi-threading of modern processors allows the program to be sped up until it is limited by the read-write speed of the storage. The paper also discusses the main difficulties in implementing the algorithm, as well as their possible solutions.
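The core trick, pre-sorting so each match needs only a narrow scan, can be sketched as follows. This is a toy flat-sky version with synthetic coordinates (a real implementation must also handle the cos(dec) factor, RA wrap-around, and out-of-core data):

```python
import bisect
import random

random.seed(2)
# Synthetic catalogues in (dec, ra) degrees; cat_b objects are cat_a objects
# perturbed by well under the match radius, so every one has a counterpart.
cat_a = sorted((random.uniform(-90, 90), random.uniform(0, 360))
               for _ in range(5000))
cat_b = [(dec + random.uniform(-1e-4, 1e-4), ra) for dec, ra in cat_a[:1000]]

def cross_match(sorted_cat, obj, radius=1.0 / 3600.0):        # ~1 arcsec
    """Nearest neighbour within radius, scanning only a declination window."""
    dec, ra = obj
    lo = bisect.bisect_left(sorted_cat, (dec - radius,))
    hi = bisect.bisect_right(sorted_cat, (dec + radius,))
    best, best_d = None, radius
    for cand in sorted_cat[lo:hi]:                            # few candidates
        d = ((cand[0] - dec) ** 2 + (cand[1] - ra) ** 2) ** 0.5  # flat-sky approx.
        if d <= best_d:
            best, best_d = cand, d
    return best

matched = sum(cross_match(cat_a, obj) is not None for obj in cat_b)
print(matched, "of", len(cat_b))
```

Because the sorted catalogue turns each lookup into a binary search plus a tiny scan, independent lookups parallelize trivially across threads, which is where the multi-threading speed-up in the paper comes from.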
• For a Distributed Storage System (DSS), the *Fractional Repetition* (FR) code is a class in which replicas of encoded data packets are stored on distributed chunk servers, where the encoding is done using a Maximum Distance Separable (MDS) code. FR codes allow exact uncoded repair with minimum repair bandwidth. In this paper, FR codes are constructed using finite binary sequences. The condition for universally good FR codes is derived for such sequences, and for some sequences the universally good FR codes are explored. (May 22 2018, cs.IT math.IT, arXiv:1805.08144v1)
• For a graph $G$ with vertex set $V(G)$, the Steiner distance $d(S)$ of $S\subseteq V(G)$ is the smallest number of edges in a connected subgraph of $G$ that contains $S$. Such a subgraph is necessarily a tree, called a Steiner tree for $S$. The Steiner distance is a type of multi-way metric measuring the size of a Steiner tree between vertices of a graph and it generalizes the geodetic distance. The Steiner Wiener index is the sum of all Steiner distances in a graph and it generalizes the Wiener index. A simple method for calculating the Steiner Wiener index of block graphs is presented. (May 22 2018, math.CO, arXiv:1805.08143v1)
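As a concrete brute-force illustration on a tree, the simplest block graph (every block is a single edge): the Steiner tree of $S$ is the union of the pairwise paths between vertices of $S$, so $d(S)$ is the number of distinct edges in that union. This is not the paper's method for general block graphs, just the definition made executable.

```python
import collections
from itertools import combinations

# A small tree: edges 0-1, 1-2, 1-3, 3-4 (adjacency lists).
tree = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}

def path_edges(tree, u, v):
    """Edges on the unique u-v path, via BFS parent pointers."""
    parent = {u: None}
    queue = collections.deque([u])
    while queue:
        w = queue.popleft()
        for nxt in tree[w]:
            if nxt not in parent:
                parent[nxt] = w
                queue.append(nxt)
    edges = set()
    while v != u:
        edges.add(frozenset((v, parent[v])))
        v = parent[v]
    return edges

def steiner_distance(tree, S):
    """d(S): distinct edges in the union of all pairwise paths."""
    if len(S) < 2:
        return 0
    edges = set()
    for u, v in combinations(S, 2):
        edges |= path_edges(tree, u, v)
    return len(edges)

def steiner_wiener(tree, k):
    """Sum of d(S) over all k-element vertex subsets."""
    return sum(steiner_distance(tree, S) for S in combinations(tree, k))

print(steiner_wiener(tree, 2), steiner_wiener(tree, 3))
```

For $k=2$ this reduces to the ordinary Wiener index; the paper's contribution is a simple formula-level method that avoids this exponential enumeration for block graphs.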
• We show that the spectral energy distribution of relic gravitons mildly increases for frequencies smaller than the $\mu$Hz and then flattens out whenever the refractive index of the tensor modes is dynamical during a quasi-de Sitter stage of expansion. For a conventional thermal history the high-frequency plateau ranges between the mHz and the audio band but it is supplemented by a spike in the GHz region if a stiff post-inflationary phase precedes the standard radiation-dominated epoch. Even though the slope is blue at intermediate frequencies, it may become violet in the MHz window. For a variety of post-inflationary histories, including the conventional one, a dynamical index of refraction leads to a potentially detectable spectral energy density in the kHz and in the mHz regions while all the relevant phenomenological constraints are concurrently satisfied.
• Sortition, i.e., random appointment for public duty, has been employed by societies throughout history, especially for duties related to the judicial system, as a firewall designed to prevent illegitimate interference between parties in a legal case and agents of the legal system. In the judicial systems of modern western countries, random procedures are mainly employed to select the jury, the court and/or the judge in charge of judging a legal case, so they have a significant role in the course of a case. Therefore, these random procedures must comply with certain principles, such as statistical soundness; complete auditability; open-source programming; and procedural, cryptographical and computational security. Nevertheless, some of these principles are neglected by some random procedures in judicial systems, which are, in some cases, performed in secrecy and are not auditable by the parties involved. The assignment of cases in the Brazilian Supreme Court (Supremo Tribunal Federal) is an example of such procedures, for it is performed by a closed-source algorithm, unknown to the public and to the parties involved in the judicial cases, that allegedly assigns the cases randomly to the justice chairs based on their caseload. In this context, this article presents a review of how sortition has been employed historically by societies, and discusses how mathematical statistics may be applied to random procedures of the judicial system, as it has been applied for almost a century to clinical trials, for example. Based on this discussion, a statistical model for assessing randomness in case assignment is proposed and applied to the Brazilian Supreme Court in order to shed light on how this assignment process is performed by the closed-source algorithm. Guidelines for random procedures are outlined and topics for further research are presented.
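To give a flavour of the statistical machinery involved (a generic uniformity check with simulated counts, not the article's model and not real STF data): compare observed per-chair caseloads against the uniform expectation with a chi-square statistic.

```python
import random

# Simulated assignment of 5000 cases to 10 chairs, assumed uniform.
random.seed(3)
chairs = 10
counts = [0] * chairs
for _ in range(5000):
    counts[random.randrange(chairs)] += 1

expected = sum(counts) / chairs
chi2 = sum((o - expected) ** 2 / expected for o in counts)

# 95th percentile of the chi-square distribution with 9 degrees of freedom.
CHI2_CRIT_9_095 = 16.919
print(chi2, "consistent with uniform:", chi2 < CHI2_CRIT_9_095)
```

A real audit would also have to condition on caseload-balancing rules and case arrival order, which is precisely why the article argues for a purpose-built statistical model rather than a bare goodness-of-fit test.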
• The cross-correlation of the unresolved $\gamma$-ray background (UGRB) with galaxy clusters has the potential to reveal the nature of the UGRB. In this paper, we perform a cross-correlation analysis between $\gamma$-ray data from the Fermi Large Area Telescope (Fermi-LAT) and a galaxy cluster catalogue from the Subaru Hyper Suprime-Cam (HSC) survey. The Subaru HSC cluster catalogue provides a wide and homogeneous large-scale structure distribution out to redshift $z=1.1$, which has not been accessible in previous cross-correlation studies. We conduct the cross-correlation analysis not only for clusters in the full redshift range ($0.1 < z < 1.1$) of the survey, but also for subsamples of clusters divided into redshift bins, a low-redshift bin ($0.1 < z < 0.6$) and a high-redshift bin ($0.6 < z < 1.1$), to utilize the wide redshift coverage of the cluster catalogue. We find evidence of cross-correlation signals with a significance of 2.0-2.3$\sigma$ for the all-redshift and low-redshift cluster samples. On the other hand, for high-redshift clusters, we find the signal at a weaker significance level (1.6-1.9$\sigma$). We also compare the observed cross-correlation functions with predictions of a theoretical model in which the UGRB originates from $\gamma$-ray emitters such as blazars, star-forming galaxies and radio galaxies. We find that the detected signal is consistent with the model prediction.
• The aim of this book is to introduce different robot path planning algorithms and suggest some of the most appropriate ones, which are capable of running on a variety of robots and are resistant to disturbances. Real-time operation, autonomy, and the ability to identify high-risk areas and manage risk are other features discussed throughout these methods. The first chapter of the book provides an introduction on the importance of robots and describes the subject of navigation and path planning. The second chapter deals with the topic of path planning in unknown environments. In Chapter 3, path planning is considered in known environments. The fourth chapter focuses on robot path planning based on the robot's vision sensors and computing hardware. Finally, in Chapter 5, the performance of some of the most important path planning methods introduced in the second to fourth chapters is presented in terms of implementation in various environments.
• We prove $W^{2,\varepsilon}$ estimates for viscosity supersolutions of fully nonlinear, uniformly elliptic equations where $\varepsilon$ decays polynomially with respect to the ellipticity ratio of the equations.
• We develop a model of social learning from overabundant information: Agents have access to many sources of information, and observation of all sources is not necessary in order to learn the payoff-relevant state. Short-lived agents sequentially choose to acquire a signal realization from the best source for them. All signal realizations are public. Our main results characterize two starkly different possible long-run outcomes, and the conditions under which each obtains: (1) efficient information aggregation, where the community eventually achieves the highest possible speed of learning; (2) "learning traps," where the community gets stuck using a suboptimal set of sources and learns inefficiently slowly. A simple property of the correlation structure separates these two possibilities. In both regimes, we characterize which sources are observed in the long run and how often.
• The Laplace operator $\mathcal{L}$ is discontinuous from $L^{p}\left(\mathbb{R}_{+}\right)$ into $L^{q}\left(\mathbb{R}_{+}\right)$ unless $1\leq p\leq 2$ and $q$ is its conjugate Lebesgue exponent. To better understand where this discontinuity comes from, we investigate two separate weaker problems: $$\mathcal{L}:L^p\left(\mathbb{R}_+\right) \longrightarrow L^q\left(\Omega\right),\quad \Omega\subset\mathbb{R}_+\ \text{bounded and measurable},\qquad (I)$$ $$\mathcal{L}:L^p\left(\mathbb{R}_+\right)\longrightarrow L^q\left([s,\infty[\,\right),\quad s>0,\qquad (II)$$ It turns out (I) holds true precisely if $\,\frac{1}{p}+\frac{1}{q}>1\,$ or $\,\frac{1}{p}+\frac{1}{q}=1,\,1\leq p \leq 2$, whereas (II) is valid precisely if $\,\frac{1}{p}+\frac{1}{q}<1\,$ or $\,\frac{1}{p}+\frac{1}{q}=1,\,1\leq p \leq 2$. Consequently, neither (I) nor (II) is true whenever $\,\frac{1}{p}+\frac{1}{q}=1,\,2< p \leq \infty$.
• We study homogeneous quenches in integrable quantum field theory where the initial state contains zero-momentum particles. We demonstrate that the two-particle pair amplitude necessarily has a singularity at the two-particle threshold. Although the explicit discussion is carried out for special (integrable) initial states, we argue that the singularity is inevitably present. We also identify the singularity in quenches in the Ising model across the quantum critical point, and compute it perturbatively in phase quenches in the quantum sine-Gordon model, which are potentially relevant to experiments. We then construct the explicit time dependence of one-point functions using a linked cluster expansion regulated by a finite volume parameter. We find that the secular contribution, normally linear in time, is modified by a $t\ln t$ term. We additionally encounter a novel type of secular contribution which is shown to be related to parametric resonance. It is an interesting open question to resum the new contributions and to establish their consequences directly observable in experiments or numerical simulations.
• Galaxies at low redshift typically possess negative gas-phase metallicity gradients (centres more metal-rich than their outskirts), whereas it is not uncommon to observe positive metallicity gradients in higher-redshift galaxies ($z \gtrsim 0.6$). Bridging these epochs, we present gas-phase metallicity gradients of 84 star-forming galaxies between $0.08 < z < 0.84$. Using the galaxies with reliably determined metallicity gradients, we measure the median metallicity gradient to be negative ($-0.039^{+0.007}_{-0.009}$ dex/kpc). Underlying this, however, is significant scatter: $(8\pm3)\%\ [7]$ of galaxies have significantly positive metallicity gradients, $(38 \pm 5)\%\ [32]$ have significantly negative gradients, and $(31\pm5)\%\ [26]$ have gradients consistent with being flat. (The remaining $(23\pm5)\%\ [19]$ have unreliable gradient estimates.) We notice a slight trend for a more negative metallicity gradient with both increasing stellar mass and increasing star formation rate (SFR). However, given the potential redshift and size selection effects, we do not consider these trends to be significant. Indeed, once we normalize the SFR relative to that of the main sequence, we do not observe any trend between the metallicity gradient and the normalized SFR. This is contrary to recent studies of galaxies at similar and higher redshifts. We do, however, identify a novel trend between the metallicity gradient of a galaxy and its size. Small galaxies ($r_d < 3$ kpc) present a large spread in observed metallicity gradients (both negative and positive). In contrast, we find no large galaxies ($r_d > 3$ kpc) with positive metallicity gradients, and overall there is less scatter in the metallicity gradient amongst the large galaxies. These large (well-evolved) galaxies may be analogues of present-day galaxies, which also show a common negative metallicity gradient.
• We survey some recent works on standard Young tableaux of bounded height. We focus on consequences resulting from numerous bijections to lattice walks in Weyl chambers.
• We investigate the transport problem of a spinful matter wave incident on a strongly localized spin-orbit-coupled Bose-Einstein condensate in an optical lattice, where the localization is admitted by atom interaction existing only at one particular site, and the spin-orbit coupling gives rise to spatial rotation of the spin texture. We find that tuning the spin orientation of the localized Bose-Einstein condensate can lead to spin-nonreciprocal or spin-reciprocal transport, meaning the transport properties are dependent on or independent of the spin orientation of incident waves. In the former case, we obtain the conditions to achieve transparency, beam-splitting, and blockade of the incident wave with a given spin orientation, and furthermore the conditions to perfectly isolate incident waves of different spin orientations, while in the latter, we obtain the condition to maximize the conversion between different spin states. The result may be useful for developing a novel spinful matter wave valve that integrates a spin switcher, beam-splitter, isolator, and converter. The method can also be applied to other real systems, e.g., realizing perfect isolation of spin states in magnetism, which is otherwise rather difficult. (May 22 2018, quant-ph, arXiv:1805.08129v1)
• Studying chemical reactions on a state-to-state level tests and improves our fundamental understanding of chemical processes. For such investigations it is convenient to make use of ultracold atomic and molecular reactants as they can be prepared in well defined internal and external quantum states$^{1-4}$. In general, even cold reactions have many possible final product states$^{5-15}$ and reaction channels are therefore hard to track individually$^{16}$. In special cases, however, only a single reaction channel is essentially participating, as observed e.g. in the recombination of two atoms forming a Feshbach molecule$^{17-19}$ or in atom-Feshbach molecule exchange reactions$^{20,21}$. Here, we investigate a single-channel reaction of two Li$_2$-Feshbach molecules where one of the molecules dissociates into two atoms $2\mathrm{AB}\Rightarrow \mathrm{AB}+\mathrm{A}+\mathrm{B}$. The process is a prototype for a class of four-body collisions where two reactants produce three product particles. We measure the collisional dissociation rate constant of this process as a function of collision energy/temperature and scattering length. We confirm an Arrhenius-law dependence on the collision energy, an $a^4$ power-law dependence on the scattering length $a$ and determine a universal four-body reaction constant.
• We explore spinning, precessing, unequal-mass binary black holes to display the long-term orbital angular momentum flip dynamics. We consider two case studies of binaries with mass ratios $q=1/7$ and $q=1/15$ and a highly spinning larger black hole with misaligned intrinsic spin $S_2/m_2^2=0.85$. We perform full numerical simulations to evolve them down to merger over nearly 14 and 18 orbits, respectively, and a full $L$-flip cycle. The radiation pattern of such systems is of particular interest, displaying strong polarization-dependent amplitude variations at precessional frequencies and leading to interesting observational consequences for ground-based, space-based, and pulsar-timing gravitational-wave detectors. These features can be exploited to observe different ranges of binary masses in one frequency band.
• Embedding a magnetic electroactive molecule in a three-terminal junction allows for the fast and local electric field control of magnetic properties desirable in spintronic devices and quantum gates. Here, we provide an example of this control through the reversible and stable charging of a single all-organic neutral diradical molecule. By means of inelastic electron tunnel spectroscopy (IETS) we show that the added electron occupies a molecular orbital distinct from those containing the two radical electrons, forming a spin system with three antiferromagnetically-coupled spins. Changing the redox state of the molecule therefore switches on and off a parallel exchange path between the two radical spins through the added electron. This electrically-controlled gating of the intramolecular magnetic interactions constitutes an essential ingredient of a single-molecule $\sqrt{\text{SWAP}}$ quantum gate.
• The betweenness centrality (BC) of a node in a network (or graph) is a measure of its importance in the network. BC is widely used in a large number of settings, such as social networks, transport networks, security/mobile networks, and more. We present an O(n)-round distributed algorithm for computing the BC of every vertex, as well as all-pairs shortest paths (APSP), in a directed unweighted network, where n is the number of vertices and m is the number of edges. We also present O(n)-round distributed algorithms for computing APSP and BC in a weighted directed acyclic graph (dag). Our algorithms are in the CONGEST model, and our weighted dag algorithms appear to be the first nontrivial distributed algorithms for both APSP and BC. All our algorithms pay careful attention to the constant factors in the number of rounds and the number of messages sent, and for unweighted graphs they improve on one or both of these measures by at least a constant factor over previous results for both directed and undirected APSP and BC.
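The abstract does not spell out the distributed algorithm, but the quantity itself is concrete: a vertex's BC sums, over all vertex pairs, the fraction of shortest paths passing through it. A minimal sketch of Brandes' classical sequential algorithm for unweighted directed graphs (the standard baseline, not the paper's CONGEST algorithm; names are ours):

```python
from collections import deque, defaultdict

def betweenness(adj, n):
    """Brandes' algorithm for an unweighted directed graph.

    adj: dict mapping vertex -> list of out-neighbours; vertices are 0..n-1.
    Returns the BC of every vertex (path endpoints excluded).
    """
    bc = [0.0] * n
    for s in range(n):
        # BFS from s, counting shortest paths (this is also one APSP row)
        sigma = [0] * n; sigma[s] = 1     # number of shortest s->v paths
        dist = [-1] * n; dist[s] = 0
        preds = defaultdict(list)         # predecessors on shortest paths
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj.get(v, []):
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # accumulate dependencies in reverse BFS order
        delta = [0.0] * n
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On the path 0 → 1 → 2, only vertex 1 lies strictly inside a shortest path, so `betweenness({0: [1], 1: [2]}, 3)` assigns it BC 1 and the endpoints BC 0.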
• Crab crossing is essential for high-luminosity colliders. The High Luminosity Large Hadron Collider (HL-LHC) will equip one of its Interaction Points (IP1) with Double-Quarter Wave (DQW) crab cavities. The DQW cavity is a new generation of deflecting RF cavity that stands out for its compactness and the broad frequency separation between its fundamental and first higher-order modes. The deflecting kick is provided by the fundamental mode. Each HL-LHC DQW cavity shall provide a nominal deflecting voltage of 3.4 MV, although up to 5.0 MV may be required. A Proof-of-Principle (PoP) DQW cavity was limited by quench at 4.6 MV. This paper describes a new, highly optimized cavity, designated DQW SPS-series, which satisfies the dimensional, cryogenic, manufacturing and impedance requirements for beam tests at the SPS and operation in the LHC. Two prototypes of this DQW SPS-series were fabricated by US industry and cold-tested after conventional SRF surface treatment. Both units outperformed the PoP cavity, reaching deflecting voltages of 5.3-5.9 MV. This voltage, the highest reached by a DQW cavity, is well beyond the nominal 3.4 MV, and the cavity may even operate at the ultimate voltage of 5.0 MV with sufficient margin. This paper covers fabrication, surface preparation, cryogenic RF test results and their implications.
• The scanning tunneling microscope (STM) is a powerful tool for studying structural and electronic properties at the atomic scale. Combining low temperature and high magnetic field in STM and related spectroscopy techniques allows us to investigate the physical properties of novel materials under these extreme conditions with high energy resolution. Here, we present the construction and performance of a 250 mK STM system with a 7 T magnetic field. Both sample and tip can be treated and exchanged in ultrahigh vacuum. Furthermore, a double-deck sample stage is designed for the STM head so that we can clean the tip by field emission or prepare a spin-polarized tip in situ without removing the sample. The energy resolution of scanning tunneling spectroscopy (STS) at T = 300 mK is determined by measuring the superconducting gap with a niobium tip on a gold surface. We demonstrate the performance of this STM system by measuring the BiTeI surface and imaging the bicollinear magnetic order of Fe$_{1+x}$Te at liquid-helium temperature. We further show superconducting vortex imaging on PbTaSe$_2$ at T = 260 mK.
• We describe a block-oriented multidimensional pulse-position modulation scheme and its resilience against impulsive noise. The modulation implements the encoder and part of the decoder of the BBC algorithm. We tested the modulation on circuits that send and detect a pulse-based signal in the presence of impulsive noise. We measured the packet error rate versus signal-to-noise ratio and compared it with published error rates for OFDM. We found an error rate of $2 \times 10^{-5}$ at a signal-to-noise ratio of 16 dB without forward error correction, at a data rate of 64 kbit/s.
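The BBC-based multidimensional construction is specific to the paper, but its building block, plain pulse-position modulation, is easy to sketch: each $k$-bit symbol selects one of $2^k$ time slots in a frame for a single pulse, and the decoder picks the strongest slot per frame. A hypothetical minimal encoder/decoder (function names are ours):

```python
def ppm_encode(bits, k):
    """Map each k-bit group to a frame of 2**k slots containing one pulse.

    bits: list of 0/1; its length must be a multiple of k.
    Returns a flat list of slot values, one pulse per frame.
    """
    assert len(bits) % k == 0
    slots = []
    for i in range(0, len(bits), k):
        symbol = int("".join(map(str, bits[i:i + k])), 2)  # slot index
        frame = [0] * (1 << k)
        frame[symbol] = 1
        slots.extend(frame)
    return slots

def ppm_decode(slots, k):
    """Recover the bit stream by locating the strongest slot in each frame."""
    m = 1 << k
    bits = []
    for i in range(0, len(slots), m):
        symbol = max(range(m), key=lambda j: slots[i + j])
        bits.extend(int(b) for b in format(symbol, f"0{k}b"))
    return bits
```

Picking the maximum slot (rather than thresholding) is what lends PPM some robustness to impulsive noise: an impulse corrupts at most the frames it lands in.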
• Given a finitely generated group $G$ with generating set $S$, we study the cogrowth sequence, which counts the words of length $n$ over the alphabet $S$ that are equal to the identity. This is related to the probability of return for walks on a Cayley graph with steps from $S$. We prove that the cogrowth sequence is not P-recursive when $G$ is an amenable group of superpolynomial growth, answering a question of Garrabrant and Pak. In addition, we compute the cogrowth for certain infinite families of free products of finite groups and free groups, and prove that a gap theorem holds: if $S$ is a finite symmetric generating set for a group $G$ and if $a_n$ denotes the number of words of length $n$ over the alphabet $S$ that are equal to $1$, then either $\limsup_n a_n^{1/n} \le 2$ or $\limsup_n a_n^{1/n} \ge 2\sqrt{2}$.
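For a free group the cogrowth sequence is concrete enough to compute directly: words equal to the identity correspond to closed walks from the root of the $(2r)$-regular tree (the Cayley graph of $F_r$), which a small dynamic program over reduced-word lengths counts exactly. A minimal sketch (names are ours):

```python
def cogrowth_free_group(rank, n_max):
    """Cogrowth sequence a_0..a_{n_max} of the free group F_rank with the
    standard symmetric generating set S (|S| = 2*rank): a_n counts the
    length-n words over S that reduce to the identity.
    """
    deg = 2 * rank
    # f[d] = number of length-t words whose reduced form has length d
    f = [0] * (n_max + 1); f[0] = 1
    seq = [1]
    for t in range(1, n_max + 1):
        g = [0] * (n_max + 1)
        g[1] += deg * f[0]                 # from identity: any generator
        for d in range(1, n_max):
            g[d - 1] += f[d]               # one choice cancels a letter
            g[d + 1] += (deg - 1) * f[d]   # deg-1 choices extend the word
        f = g
        seq.append(f[0])
    return seq
```

For $F_1 = \mathbb{Z}$ this reproduces the central binomial coefficients at even lengths, with exponential growth rate $2$, while $F_2$ gives $1, 0, 4, 0, 28, \dots$ with rate $2\sqrt{3}$, consistent with the two sides of the gap theorem above.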
• In this paper we study the regularity problem of a three dimensional chemotaxis-Navier-Stokes system on a periodic domain. A new regularity criterion in terms of only low modes of the oxygen concentration and the fluid velocity is obtained via a wavenumber splitting approach. The result improves many existing criteria in the literature.
• We demonstrate sub-cycle control of frustrated double ionization (FDI) in the two-electron triatomic molecule D$_3^+$ driven by two orthogonally polarized two-color laser fields. We employ a three-dimensional semi-classical model that fully accounts for the electron and nuclear motion in strong fields. We control FDI triggered by a strong near-infrared laser field with a weak mid-infrared laser field. This control, as a function of the time delay between the two pulses, is demonstrated when the FDI probability and the distribution of the momentum of the escaping electron along the mid-infrared laser field are considered in conjunction. We find that the momentum distribution of the escaping electron has a hive shape, with features that can accurately be mapped to the time at which one of the two electrons tunnel-ionizes at the start of the break-up process. This mapping distinguishes consecutive tunnel-ionization times within a cycle of the mid-infrared laser field.
• Stochastic gradient descent is the method of choice for large-scale optimization of machine learning objective functions. Yet, its performance is highly variable and depends heavily on the choice of stepsizes. This has motivated a large body of research on adaptive stepsizes. However, there is currently a gap in our theoretical understanding of these methods, especially in the non-convex setting. In this paper, we start closing this gap: we theoretically analyze the use of adaptive stepsizes, like the ones in AdaGrad, in the non-convex setting. We show sufficient conditions for almost sure convergence to a stationary point when adaptive stepsizes are used, proving the first guarantee for AdaGrad in the non-convex setting. Moreover, we show explicit rates of convergence that automatically interpolate between $O(1/T)$ and $O(1/\sqrt{T})$ depending on the noise in the stochastic gradients, in both the convex and non-convex settings.
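The AdaGrad idea being analyzed is simple to state: divide a base stepsize by the square root of the accumulated squared gradients, so the stepsize shrinks automatically as noisy gradients pile up. A minimal sketch of the scalar ("global norm") variant, one of several AdaGrad flavors (the function name and defaults are our assumptions):

```python
import math

def adagrad_sgd(grad, x0, eta=0.5, b=1e-8, steps=200):
    """SGD with an AdaGrad-style global stepsize eta / sqrt(b + sum of
    squared gradient norms).

    grad: function returning a (possibly stochastic) gradient at x.
    b:    small constant that keeps the first stepsize finite.
    """
    x = list(x0)
    s = b  # running sum of squared gradient norms
    for _ in range(steps):
        g = grad(x)
        s += sum(gi * gi for gi in g)
        step = eta / math.sqrt(s)  # shrinks as gradients accumulate
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

For example, `adagrad_sgd(lambda x: [2 * x[0]], [1.0])` drives the iterate of $f(x) = x^2$ toward 0 with no hand-tuned stepsize schedule; coordinate-wise AdaGrad applies the same recipe per coordinate.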
• The cold dark matter (CDM) scenario has proved successful in cosmology. However, we lack a fundamental understanding of its microscopic nature. Moreover, the apparent disagreement between CDM predictions and subgalactic-structure observations has prompted debate about its behaviour at small scales. These problems could be alleviated if the dark matter is composed of ultralight fields with mass $m \sim 10^{-22}\ \text{eV}$, usually known as fuzzy dark matter (FDM). Some specific models, with axion-like potentials, have been thoroughly studied and are collectively referred to as ultralight axions (ULAs) or axion-like particles (ALPs). In this work we consider anharmonic corrections to the mass term coming from a repulsive quartic self-interaction. Whenever this anharmonic term dominates, the field behaves as radiation instead of cold matter, modifying the time of matter-radiation equality. Additionally, even for high masses, i.e. masses that reproduce the cold-matter behaviour, the presence of anharmonic terms introduces a cut-off in the matter power spectrum through their contribution to the sound speed. We analyze the model and derive constraints using a modified version of CLASS, comparing with CMB and large-scale structure data.
• The proportional hazards model represents the most commonly assumed hazard structure when analysing time-to-event data using regression models. We study a general hazard structure which contains, as particular cases, proportional hazards, accelerated hazards, and accelerated failure time structures, as well as combinations of these. We propose an approach to apply these different hazard structures, based on a flexible parametric distribution (the Exponentiated Weibull) for the baseline hazard. This distribution allows us to cover the basic hazard shapes of interest in practice: constant, bathtub, increasing, decreasing, and unimodal. In an extensive simulation study, we evaluate our approach in the context of excess hazard modelling, which is the main quantity of interest in descriptive cancer epidemiology. This study exhibits good inferential properties of the proposed model, as well as good performance when using the Akaike Information Criterion for selecting the hazard structure. An application to lung cancer data illustrates the usefulness of the proposed model.
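The flexibility of the Exponentiated Weibull baseline comes from a closed-form CDF, $F(t) = \left(1 - e^{-(t/\sigma)^k}\right)^{\alpha}$, from which the hazard $h(t) = f(t)/(1 - F(t))$ follows directly. A minimal sketch under that parameterization (conventions for $(k, \sigma, \alpha)$ vary, so treat the argument order as our assumption):

```python
import math

def exp_weibull_hazard(t, k, sigma, alpha):
    """Hazard h(t) = f(t) / (1 - F(t)) of the Exponentiated Weibull
    distribution with CDF F(t) = (1 - exp(-(t/sigma)**k))**alpha.

    alpha = 1 recovers the ordinary Weibull hazard; varying (k, alpha)
    produces constant, increasing, decreasing, bathtub and unimodal shapes.
    """
    z = (t / sigma) ** k
    F = (1.0 - math.exp(-z)) ** alpha
    f = (alpha * k / sigma) * (t / sigma) ** (k - 1) \
        * math.exp(-z) * (1.0 - math.exp(-z)) ** (alpha - 1)
    return f / (1.0 - F)
```

As a sanity check, `alpha=1, k=1` collapses to the exponential distribution with constant hazard $1/\sigma$.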
• We present new spectropolarimetric data for WR 42 collected over 6 months at the 11-m Southern African Large Telescope (SALT) using the Robert Stobie Spectrograph.
• Small-scale fading makes the wireless channel gain vary significantly over small distances, and in the context of classical communication systems it can be detrimental to performance. But in the context of mobile robot (MR) wireless communications, we can take advantage of the fading by using a mobility diversity algorithm (MDA) to deliberately locate the MR at a point where the channel gain is high. There are two classes of MDAs. In the first class, the MR explores various points, stopping at each one to collect channel measurements, and then locates the best position to establish communications. In the second class, the MR moves without stopping along a continuous path while collecting channel measurements, stops at the end of the path, and then determines the best point to establish communications. Until now, the shape of the continuous path for such MDAs has been selected arbitrarily, and there is currently no method to optimize it. In this paper, we propose a method to optimize such a path. Simulation results show that the optimized paths give the MDAs improved performance, enabling them to experience higher channel gains while using less mechanical energy for the MR's motion.
• This paper is devoted to $C^2$ estimates for strictly locally convex radial graphs with prescribed Gauss curvature and boundary in space forms. As an application, existence results in $\mathbb{R}^{n + 1}$ and $\mathbb{H}^{n+1}$ are established under the assumption of a strictly locally convex strict subsolution.
• We show that the Haskell compiler GHC is capable of proving non-trivial equalities between Haskell code, by virtue of its aggressive optimizer, in particular the term-rewriting engine in the simplifier. We demonstrate this with surprisingly little code in a GHC plugin, explain the knobs we had to turn, discuss the limits of the approach, and describe related applications of the same idea, namely testing that the promises made by Haskell libraries with domain-specific optimizations actually hold.

Max Lu Apr 25 2018 22:08 UTC

"This is a very inspiring paper! The new framework (ZR = All Reality) it provided allows us to understand all kinds of different reality technologies (VR, AR, MR, XR etc) that are currently loosely connected to each other and has been confusing to many people. Instead of treating our perceived sens

...(continued)
Stefano Pirandola Apr 23 2018 12:23 UTC

The most important reading here is Sam Braunstein's foundational paper: https://authors.library.caltech.edu/3827/1/BRAprl98.pdf published in January 98, already containing the key results for the strong convergence of the CV protocol. This is a must-read for those interested in CV quantum informatio

...(continued)
Mark M. Wilde Apr 23 2018 12:09 UTC

One should also consult my paper "Strong and uniform convergence in the teleportation simulation of bosonic Gaussian channels" https://arxiv.org/abs/1712.00145v4 posted in January 2018, in this context.

Stefano Pirandola Apr 23 2018 11:46 UTC

Some quick clarifications on the Braunstein-Kimble (BK) protocol for CV teleportation
and the associated teleportation simulation of bosonic channels.
(Disclaimer: the following is rather technical and CVs might not be so popular on this blog...so I guess this post will get a lot of dislikes :)

1)

...(continued)
NJBouman Apr 22 2018 18:26 UTC

[Fredrik Johansson][1] has pointed out to me (the author) the following about the multiplication benchmark w.r.t. GMP. This will be taken into account in the upcoming revision.

Fredrik Johansson wrote:
> You shouldn't be comparing your code to mpn_mul, because this function is not actually th

...(continued)
Joel Wallman Apr 18 2018 13:34 UTC

A very nice approach! Could you clarify the conclusion a little bit though? The aspirational goal for a quantum benchmark is to test how well we approximate a *specific* representation of a group (up to similarity transforms), whereas what your approach demonstrates is that without additional knowle

...(continued)
serfati philippe Mar 29 2018 14:07 UTC

see my 2 papers on direction of vorticity (nov1996 + feb1999) = https://www.researchgate.net/profile/Philippe_Serfati (published author, see also mendeley, academia.edu, orcid etc)

serfati philippe Mar 29 2018 13:34 UTC

see my 4 papers, 1998-1999, on contact and superposed vortex patches, cusps (and eg splashs), corners, generalized ones on lR^n and (ir/)regular ones =. http://www.researchgate.net/profile/Philippe_Serfati/ (published author).

Luis Cruz Mar 16 2018 15:34 UTC

Related Work:

- [Performance-Based Guidelines for Energy Efficient Mobile Applications](http://ieeexplore.ieee.org/document/7972717/)
- [Leafactor: Improving Energy Efficiency of Android Apps via Automatic Refactoring](http://ieeexplore.ieee.org/document/7972807/)

Dan Elton Mar 16 2018 04:36 UTC

Comments are appreciated. Message me here or on twitter @moreisdifferent

Code is open source and available at:
[https://github.com/delton137/PIMD-F90][1]

[1]: https://github.com/delton137/PIMD-F90