# Top arXiv papers

• In this paper, we propose two low-order nonconforming finite element methods (FEMs) for the three-dimensional Stokes flow that generalize the nonconforming FEM of Kouhia and Stenberg (1995, Comput. Methods Appl. Mech. Engrg.). The finite element spaces proposed in this paper consist of two globally continuous components (one piecewise affine and one enriched component) and one component that is continuous at the midpoints of interior faces. We prove that the discrete Korn inequality and a discrete inf-sup condition hold uniformly in the mesh size and also for a non-empty Neumann boundary. Based on these two results, we show the well-posedness of the discrete problem. Two counterexamples prove that there is no direct generalization of the Kouhia-Stenberg FEM to three space dimensions: the finite element space with one nonconforming and two conforming piecewise affine components does not satisfy a discrete inf-sup condition with piecewise constant pressure approximations, while finite element functions with two nonconforming and one conforming component do not satisfy a discrete Korn inequality.
• High Energy Particle Physics experiments at the LHC use hybrid silicon detectors, in both pixel and strip geometry, for their inner trackers. These detectors have proven to be very reliable and to perform well. Nevertheless, there is great interest in the development of depleted CMOS silicon detectors, which could achieve similar performance at a lower production cost and complexity. We present recent developments of this technology in the framework of the ATLAS CMOS demonstrator project. In particular, studies of two active sensors from LFoundry, CCPD_LF and LFCPIX, and of the first fully monolithic prototype MONOPIX are shown.
• Motivated mainly by the localization over an open bounded set $\Omega$ of $\mathbb R^n$ of solutions of the Schrödinger equations, we consider the Schrödinger equation over $\Omega$ with a very singular potential $V(x) \ge C d (x, \partial \Omega)^{-r}$ with $r\ge 2$ and a convective flow $\vec U$. We prove the existence and uniqueness of a very weak solution of the equation, when the right hand side datum $f(x)$ is in $L^1 (\Omega, d(\cdot, \partial \Omega))$, even if no boundary condition is a priori prescribed. We prove that, in fact, the solution necessarily satisfies (in a suitable way) the Dirichlet condition $u = 0$ on $\partial \Omega$. These results improve some of the results of the previous paper by the authors in collaboration with Roger Temam. In addition, we prove some new results dealing with the $m$-accretivity in $L^1 (\Omega, d(\cdot, \partial \Omega)^ \alpha)$, where $\alpha \in [0,1]$, of the associated operator, the corresponding parabolic problem and the study of the complex evolution Schrödinger equation in $\mathbb R^n$.
• Linear Temporal Logic (LTL) is a widely used specification framework for linear time properties of systems. The standard approach for verifying such properties is by transforming LTL formulae to suitable $\omega$-automata and then applying model checking. We revisit Vardi's transformation of an LTL formula to an alternating $\omega$-automaton and Wolper's LTL tableau method for satisfiability checking. We observe that both constructions effectively rely on a decomposition of formulae into linear factors. Linear factors have been introduced previously by Antimirov in the context of regular expressions. We establish the notion of linear factors for LTL and verify essential properties such as expansion and finiteness. Our results shed new insights on the connection between the construction of alternating $\omega$-automata and semantic tableaux.
• A statistical test can be seen as a procedure to produce a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some pre-specified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject a null hypothesis or not), Kaiser's directional two-sided test as well as the more recently introduced Jones and Tukey's testing procedure involve three possible decisions to infer on a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g. that two treatments cannot have exactly the same effect), allowing a gain of statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example when considering hypotheses derived from Einstein's theories. In this article, we introduce a five-decision rule testing procedure, which combines the advantages of the testing procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power), allowing for a non-negligible (typically 20%) reduction of the sample size needed to reach a given statistical power to obtain a significant result, compared to the traditional approach.
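The three-decision component combined above can be sketched as follows; this is an illustrative toy (the function name, the normal approximation to the t statistic, and the equal one-sided error rates are assumptions, not the paper's procedure):

```python
# Jones & Tukey style three-decision rule for the sign of a mean:
# conclude mu > 0, mu < 0, or leave the sign undetermined.
from math import sqrt
from statistics import NormalDist, mean, stdev

def jones_tukey_decision(sample, alpha=0.05):
    n = len(sample)
    # Studentized sample mean; a normal quantile replaces Student's t
    # for brevity (adequate for moderate n).
    t = mean(sample) / (stdev(sample) / sqrt(n))
    z = NormalDist().inv_cdf(1 - alpha)
    if t > z:
        return "mu > 0"
    if t < -z:
        return "mu < 0"
    return "sign undetermined"
```

Kaiser's procedure additionally allows retaining the point null, which is the ingredient the five-decision rule combines with a scheme like the one above.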
• The detection of the (semi)metal-insulator phase transition can be extremely difficult if the local order parameter which characterizes the ordered phase is unknown. In some cases, it is even impossible to define a local order parameter: the most prominent example of such a system is the spin liquid state. This state was proposed to exist in the Hubbard model on the hexagonal lattice in a region between the semimetal phase and the antiferromagnetic insulator phase. The existence of this phase has been the subject of a long debate. In order to detect these exotic phases we must use alternative methods to those used for more familiar examples of spontaneous symmetry breaking. We have modified the Backus-Gilbert method of analytic continuation which was previously used in the calculation of the pion quasiparticle mass in lattice QCD. The modification of the method consists of the introduction of the Tikhonov regularization scheme, which was used to treat the ill-conditioned kernel. This modified Backus-Gilbert method is applied to the Euclidean propagators in momentum space calculated using the hybrid Monte Carlo algorithm. In this way, it is possible to reconstruct the full dispersion relation and to estimate the mass gap, which is a direct signal of the transition to the insulating state. We demonstrate the utility of this method in our calculations for the Hubbard model on the hexagonal lattice. We also apply the method to the metal-insulator phase transition in the Hubbard-Coulomb model on the square lattice.
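The key numerical idea, Tikhonov regularization of an ill-conditioned kernel, can be illustrated in miniature (this is a generic sketch, not the authors' lattice code; the kernel and values are made up):

```python
# Tikhonov regularization replaces the unstable inversion of an
# ill-conditioned kernel K by x_lambda = (K^T K + lambda I)^{-1} K^T y.
# Here a deliberately near-singular 2x2 kernel is solved by Cramer's rule.

def tikhonov_2x2(K, y, lam):
    # A = K^T K + lam * I (symmetric 2x2), b = K^T y
    a11 = K[0][0] ** 2 + K[1][0] ** 2 + lam
    a12 = K[0][0] * K[0][1] + K[1][0] * K[1][1]
    a22 = K[0][1] ** 2 + K[1][1] ** 2 + lam
    b1 = K[0][0] * y[0] + K[1][0] * y[1]
    b2 = K[0][1] * y[0] + K[1][1] * y[1]
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

K = [[1.0, 1.0], [1.0, 1.0000001]]  # nearly singular kernel
y = [2.0, 2.0]
x = tikhonov_2x2(K, y, lam=1e-6)
# The exact inverse would give the wildly sensitive answer (2, 0); the
# regularized solve stays near the stable minimum-norm answer (1, 1).
```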
• (Oct 19 2017, math.RT, arXiv:1710.06674v1) In this paper we introduce an easily verifiable sufficient condition to determine whether an algebra is quasi-hereditary. In the case of monomial algebras, we give conditions that are both necessary and sufficient to determine whether an algebra is quasi-hereditary.
• We report the effects of anisotropy in the confining potential on two-component Bose-Einstein condensates (TBECs) through the properties of the low-energy quasiparticle excitations. Starting from the generalized Gross-Pitaevskii equation, we obtain the Bogoliubov-de Gennes (BdG) equation for TBECs using the Hartree-Fock-Bogoliubov (HFB) theory. Based on this theory, we present the influence of radial anisotropy on TBECs in the immiscible or phase-separated domain. In particular, $^{85}$Rb-$^{87}$Rb and $^{133}$Cs-$^{87}$Rb TBECs are chosen as specific examples of the two possible interface geometries, shell-structured and side by side, in the immiscible domain. We also show that the dispersion relation for the TBEC shell-structured interface has two branches, and anisotropy modifies the energy scale and structure of the two branches.
• We consider small perturbations of a dynamical system on the one-dimensional torus. We derive sharp estimates for the pre-factor of the stationary state, we examine the asymptotic behavior of the solutions of the Hamilton-Jacobi equation for the pre-factor, we compute the capacities between disjoint sets, and we prove the metastable behavior of the process among the deepest wells following the martingale approach. We also present a bound for the probability that a Markov process hits a set before some fixed time in terms of the capacity of an enlarged process.
• Bayesian calibration of computer models has reached a certain level of maturity, and it is nowadays commonly applied, according to different frameworks, to infer model parameter values. However, Bayesian calibration has the capability of underpinning more general analyses, and with the aid of some additional elements, it could drive the full development of computer models. The present study describes a framework serving this purpose, comprising identification of model parameters, assessment of model deficiencies, and model comparison and selection. Such a framework is demonstrated through a series of numerical experiments analysing building energy models of a test box that has undergone round-robin experiments within the International Energy Agency, Energy in Buildings and Communities programme, Annex 58.
• Motivated by the recent measurement of $D^0\to\rho^0\gamma$, improved standard model (SM) predictions for branching ratios and CP asymmetries of radiative charm decays are given. Weak annihilation induced decays are probes of non-perturbative QCD approaches. Rare decays probe the SM and physics beyond the SM, e.g. leptoquark and supersymmetric models. Opportunities with $\Lambda_c\to p\gamma$ for future polarization measurements are presented.
• Many-body perturbation theory is often formulated in terms of an expansion in the dressed instead of the bare Green's function, and in the screened instead of the bare Coulomb interaction. However, screening can be calculated at different levels of approximation, and it is important to define which is the most appropriate choice. We explore this question by studying a zero-dimensional model (the so-called 'one-point model') that retains the structure of the full equations. We study both linear and non-linear response approximations to the screening. We find that an expansion in terms of the screening in the random phase approximation is the most promising way for an application to real systems. Moreover, by making use of the nonperturbative features of the Kadanoff-Baym equation for the one-body Green's function, we obtain an approximate solution in our model that is very promising, although its applicability to real systems has still to be explored.
• Detection of surgical instruments plays a key role in ensuring patient safety in minimally invasive surgery. In this paper, we present a novel method for 2D vision-based recognition and pose estimation of surgical instruments that generalizes to different surgical applications. At its core, we propose a novel scene model in order to simultaneously recognize multiple instruments as well as their parts. We use a Convolutional Neural Network architecture to embody our model and show that the cross-entropy loss is well suited to optimize its parameters which can be trained in an end-to-end fashion. An additional advantage of our approach is that instrument detection at test time is achieved while avoiding the need for scale-dependent sliding window evaluation. This allows our approach to be relatively parameter free at test time and shows good performance for both instrument detection and tracking. We show that our approach surpasses state-of-the-art results on in-vivo retinal microsurgery image data, as well as ex-vivo laparoscopic sequences.
• We report a direct observation of temperature-induced topological phase transition between trivial and topological insulator in HgTe quantum well. By using a gated Hall bar device, we measure and represent Landau levels in fan charts at different temperatures and we follow the temperature evolution of a peculiar pair of "zero-mode" Landau levels, which split from the edge of electron-like and hole-like subbands. Their crossing at critical magnetic field $B_c$ is a characteristic of inverted band structure in the quantum well. By measuring the temperature dependence of $B_c$, we directly extract the critical temperature $T_c$, at which the bulk band-gap vanishes and the topological phase transition occurs. Above this critical temperature, the opening of a trivial gap is clearly observed.
• The extragalactic background radiation produced by distant galaxies emitting in the far infrared limits the sensitivity of telescopes operating in this range due to confusion. We have constructed a model of the infrared background based on numerical simulations of the large-scale structure of the Universe and the evolution of dark matter halos. The predictions of this model agree well with the existing data on source counts. We have constructed maps of a sky field with an area of 1 deg$^2$ directly from our simulated observations and measured the confusion limit. At wavelengths $100-300$ $\mu$m the confusion limit for a 10-m telescope has been shown to be at least an order of magnitude lower than that for a 3.5-m one. A spectral analysis of the simulated infrared background maps clearly reveals the large-scale structure of the Universe. The two-dimensional power spectrum of these maps has turned out to be close to that measured by space observatories in the infrared. However, the fluctuations in the number of intensity peaks observed in the simulated field show no clear correlation with superclusters of galaxies; the large-scale structure has virtually no effect on the confusion limit.
• (Oct 19 2017, math.CO, arXiv:1710.06664v1) The notion of descent set, for permutations as well as for standard Young tableaux (SYT), is classical. Cellini introduced a natural notion of *cyclic descent set* for permutations, and Rhoades introduced such a notion for SYT, but only for rectangular shapes. In this work we define *cyclic extensions* of descent sets in a general context, and prove existence and essential uniqueness for SYT of almost all shapes. The proof applies nonnegativity properties of Postnikov's toric Schur polynomials, providing a new interpretation of certain Gromov-Witten invariants.
• We investigate Néron models of Jacobians of singular curves over strictly Henselian discretely valued fields, and their behaviour under tame base change. For a semiabelian variety, this behaviour is governed by a finite sequence of (a priori) real numbers between 0 and 1, called "jumps". The jumps are conjectured to be rational, which is known in some cases. The purpose of this paper is to prove this conjecture in the case where the semiabelian variety is the Jacobian of a geometrically integral curve with a push-out singularity. Along the way, we prove the conjecture for algebraic tori which are induced along finite separable extensions, and generalize Raynaud's description of the identity component of the Néron model of the Jacobian of a smooth curve (in terms of the Picard functor of a proper, flat, and regular model) to our situation. The main technical result of this paper is that the exact sequence which decomposes the Jacobian of one of our singular curves into its toric and Abelian parts extends to an exact sequence of Néron models. Previously, only split semiabelian varieties were known to have this property.
• In this paper we give a smooth linearization theorem for nonautonomous difference equations with a nonuniform strong exponential dichotomy. The linear part of such a nonautonomous difference equation is defined by a sequence of invertible linear operators on $\mathbb{R}^d$. Reducing the linear part to a bounded linear operator on a Banach space, we discuss the spectrum and its spectral gaps. Then we obtain a gap condition for $C^1$ linearization of such a nonautonomous difference equation. We finally extend the result to the infinite dimensional case. Our theorems improve known results even in the case of uniform strong exponential dichotomies.
• We consider a microswimmer that moves in two dimensions at a constant speed and changes the direction of its motion due to a torque consisting of a constant and a fluctuating component. The latter is modeled by a symmetric Lévy-stable ($\alpha$-stable) noise. The purpose is to develop a kinetic approach to eliminate the angular component of the dynamics in order to find a coarse-grained description in the coordinate space. By defining the joint probability density function of the position and of the orientation of the particle through the Fokker-Planck equation, we derive transport equations for the position-dependent marginal density, the particle's mean velocity and the velocity's variance. At time scales larger than the relaxation time of the torque $\tau_{\phi}$, the two higher moments follow the marginal density and can be adiabatically eliminated. As a result, a closed equation for the marginal density follows. This equation, which gives a coarse-grained description of the microswimmer's positions at time scales $t\gg \tau_{\phi}$, is a diffusion equation with a constant diffusion coefficient depending on the properties of the noise. Hence, the long-time dynamics of a microswimmer can be described as normal, diffusive Brownian motion with Gaussian increments.
• A model for the prediction of functional time series is introduced, where observations are assumed to be realizations of a C[0,1]-valued process. We model the dependence of the data with a non-standard autoregressive structure, motivated in terms of the Reproducing Kernel Hilbert Space (RKHS) generated by the covariance kernel of the data. The general definition has as a particular case a set of finite-dimensional models based on marginal variables of the process. Thus, this approach is especially useful for finding relevant points for prediction (sometimes called "impact points"). Some examples show that this model has a good amount of generality. In addition, problems like the non-invertibility of the covariance operators in function spaces can be circumvented using this methodology. A simulation study and two real data examples are presented to evaluate the performance of the proposed predictors.
• The energy spectrum of the cosmic radiation in the range 10$^{19}$-2.4$\times$10$^{21}$ eV has recently been predicted to show a rich and distinctive staircase profile. In order to check the prediction, the spectra measured by running and past experiments above 10$^{19}$ eV are examined. The computed spectrum compares more favourably with the Telescope Array, HiRes I and Yakutsk data than with the Auger data in the range (1-20)$\times$10$^{19}$ eV. Previous flux measurements by the Haverah Park, SUGAR, AGASA and Fly's Eye experiments are above the predicted spectrum in the limited band (1-30)$\times$10$^{19}$ eV. The flux measured by the Auger Group in the band (8-18)$\times$10$^{19}$ eV is below those of all other experiments and below the prediction. The energy scales of the instruments might be at the origin of the flux mismatch among the experiments. Accordingly, the energy scales of all eleven instruments operating above 10$^{20}$ eV are examined and the major inconsistencies discerned. The paucity of events above 10$^{20}$ eV in the Auger experiment with respect to all others is by far the major puzzle emerging from this scrutiny. The Auger instrument recorded only 4 events above 10$^{20}$ eV with an exposure exceeding 42500 km$^{2}$ sr year, while the Telescope Array recorded 13 events with an exposure of 8100 km$^{2}$ sr year. A tentative solution of this puzzle is put forward.
• We study the impact of quenched disorder (random exchange couplings or site dilution) on easy-plane pyrochlore antiferromagnets. In the clean system, a magnetically ordered state is selected from a classically degenerate manifold via an order-by-disorder mechanism. In the presence of randomness, however, different states can be locally selected depending on details of the disorder configuration. Using a combination of analytical considerations and classical Monte-Carlo simulations, we argue that any long-range-ordered magnetic state is destroyed beyond a critical level of randomness where the system breaks into magnetic domains due to random exchange anisotropies, becoming, therefore, a glass of spin clusters, in accordance with the available experimental data.
• Flavour-violating Higgs interactions are suppressed in the Standard Model such that their observation would be a clear sign of new physics. We investigate the prospects for detecting quark flavour-violating Higgs decays in the clean ILC environment. Concentrating on the decay to a bottom and a light quark $j$, we identify the dominant Standard Model background channels as coming from hadronic Standard Model Higgs decays with mis-identified jets. Therefore, good flavour tagging capabilities are essential to keep the background rate under control. Through a simple cut-based analysis, we find that the most promising search channel is the two-jet plus missing energy signature $e^+e^-\to bj+E_T^{\mathrm{miss}}$. At $\sqrt{s} = 500$ GeV, the expected 95% CL upper limit on $\mathrm{BR}(h\to bj)$ is of order $10^{-3}$. Correspondingly, a $5\sigma$ discovery is expected to be possible for branching ratios as low as a few times $10^{-3}$.
• Nucleon-structure calculations of isovector vector- and axial-vector-current form factors, transversity and scalar charge, and quark momentum and helicity fractions are reported from two recent 2+1-flavor dynamical domain-wall fermions lattice-QCD ensembles generated jointly by the RIKEN-BNL-Columbia and UKQCD Collaborations with the Iwasaki $\times$ dislocation-suppressing-determinant-ratio gauge action at an inverse lattice spacing of 1.378(7) GeV and pion mass values of 249.4(3) and 172.3(3) MeV.
• The Bernoulli-Gaussian (BG) model is practical to characterize impulsive noises that widely exist in various communication systems. To estimate the BG model parameters from noise measurements, a precise impulse detection is essential. In this paper, we propose a novel blind impulse detector, which is proven to be fast and accurate for BG noise in underspread communication channels.
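For context, the Bernoulli-Gaussian model itself is simple to simulate; the following toy sketch (all names, parameter values, and the naive threshold detector are illustrative assumptions, not the blind detector proposed in the paper) shows the gated-impulse structure:

```python
# Bernoulli-Gaussian noise: each sample is background Gaussian noise plus,
# with probability p, a high-variance Gaussian impulse (the Bernoulli gate).
import random

def bg_noise(n, p=0.05, sigma_bg=1.0, sigma_imp=10.0, seed=1):
    rng = random.Random(seed)
    samples, labels = [], []
    for _ in range(n):
        impulsive = rng.random() < p
        x = rng.gauss(0.0, sigma_bg)
        if impulsive:
            x += rng.gauss(0.0, sigma_imp)
        samples.append(x)
        labels.append(impulsive)
    return samples, labels

samples, labels = bg_noise(10000)
detected = [abs(x) > 4.0 for x in samples]  # threshold: 4 background sigmas
recall = sum(d and l for d, l in zip(detected, labels)) / sum(labels)
```

A fixed amplitude threshold like this recovers most impulses with very few false alarms when the impulse variance dominates the background, which is the regime where parameter estimation from detected impulses becomes feasible.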
• We show that mock automorphic forms obtained from weak harmonic Maaß forms give rise to nontrivial $(\mathfrak g,K)$-cohomology, providing evidence for replacing the 'holomorphic' condition with 'cohomological' when generalizing to general reductive groups. We note that such a candidate allows for growing Fourier coefficients, in contrast to automorphic forms under the Miatello-Wallach conjecture. The second part of this note surveys the connection with BPS black hole counts as a physical motivation for introducing mock automorphic forms.
• In the field of molecular electronics, thin films of molecules adsorbed on insulating surfaces are used as the functional building blocks of electronic devices. Control of the structural and electronic properties of the thin films is required for a reliable operating mode of such devices. Here, noncontact atomic force and Kelvin probe force microscopies have been used to investigate the growth and electronic properties of pentacene on KBr(001) and KCl(001) surfaces. Mainly, molecular islands of upright-standing pentacene are formed, whereas a new phase of tilted molecules appears near step edges on some KBr samples. Local contact potential differences (LCPD) have been studied with both Kelvin experiments and density-functional theory calculations. Large LCPD are found between the substrate and the differently oriented molecules, which may be explained by a partial charge transfer from the pentacene to the surface. The monitoring of the changes of the pentacene islands during dewetting shows that multilayers build up at the expense of monolayers. Moreover, in the Kelvin images, previously unknown line defects appear, which unveil the epitaxial growth of pentacene crystals.
• The ab initio extension of the dynamical vertex approximation (D$\Gamma$A) method allows for realistic materials calculations that include non-local correlations beyond $GW$ and dynamical mean-field theory. Here, we discuss the AbinitioD$\Gamma$A algorithm, its implementation and usage in detail, and make the program package available to the scientific community.
• We report results of a search for light weakly interacting massive particle (WIMP) dark matter from the CDEX-1 experiment at the China Jinping Underground Laboratory (CJPL). Constraints on WIMP-nucleon spin-independent (SI) and spin-dependent (SD) couplings are derived with a physics threshold of 160 eVee, from an exposure of 737.1 kg-days. The SI and SD limits extend the lower reach of light WIMPs to 2 GeV and improve over our earlier bounds at WIMP masses below 6 GeV.
• We consider hyperelastic problems and their numerical solution using a conforming finite element discretization and iterative linearization algorithms. For these problems, we present equilibrated, weakly symmetric, $H(\mathrm{div})$-conforming stress tensor reconstructions, obtained from local problems on patches around vertices using the Arnold-Falk-Winther finite element spaces. We distinguish two stress reconstructions, one for the discrete stress and one representing the linearization error. The reconstructions are independent of the mechanical behavior law. Based on these stress tensor reconstructions, we derive an a posteriori error estimate distinguishing the discretization, linearization, and quadrature error estimates, and propose an adaptive algorithm balancing these different error sources. We prove the efficiency of the estimate, and confirm it on a numerical test with an analytical solution for the linear elasticity problem. We then apply the adaptive algorithm to a more application-oriented test, considering the Hencky-Mises and isotropic damage models.
• In systems described by scattering theory, there is an upper bound, lower than Carnot's, on the efficiency of thermoelectric energy conversion at a given output power. We show that interacting systems can overcome such a bound. This result is rooted in the possibility for interacting systems to achieve the Carnot efficiency in the thermodynamic limit without delta-energy filtering, so that large efficiencies can be obtained without greatly reducing power.
• Robotic swimmers are currently a subject of extensive research and development for several underwater applications. Clever design and planning must rely on simple theoretical models that account for the swimmer's hydrodynamics in order to optimize its structure and control inputs. In this work, we study a planar snake-like multi-link swimmer by using the "perfect fluid" model that accounts for inertial hydrodynamic forces while neglecting viscous drag effects. The swimmer's dynamic equations of motion are formulated and reduced into a first-order system due to symmetries and conservation of generalized momentum variables. Focusing on oscillatory inputs of joint angles, we study optimal gaits for 3-link and 5-link swimmers via numerical integration. For the 3-link swimmer, we also provide a small-amplitude asymptotic solution which enables obtaining closed-form approximations for optimal gaits. The theoretical results are then corroborated by experiments and motion measurement of untethered robotic prototypes with 3 and 5 links, showing a reasonable agreement between experiments and the theoretical model.
• A graph $G$ is called a $(3,j;n)$-minimal Ramsey graph if it has the smallest number of edges, $e(3,j;n)$, given that $G$ is triangle-free, the independence number $\alpha(G) < j$, and $G$ has $n$ vertices. Triangle-free graphs $G$ with $\alpha(G) < j$ for which $e(G) - e(3,j;n)$ is small are said to be almost minimal Ramsey graphs. We look at a construction of some almost minimal Ramsey graphs, called $H_{13}$-patterned graphs. We make computer calculations of the number of almost minimal Ramsey triangle-free graphs that are $H_{13}$-patterned. The results of these calculations indicate that many of these graphs are in fact $H_{13}$-patterned. In particular, all but one of the connected $(3,j;n)$-minimal Ramsey graphs for $j \leq 9$ are indeed $H_{13}$-patterned.
• One of the historical suggestions to tackle the strong CP problem is to take the up quark mass to zero while keeping $m_d$ finite. The $\theta$ angle is then supposed to become irrelevant, i.e. the topological susceptibility vanishes. However, the definition of the quark mass is scheme-dependent and identifying the $m_u=0$ point is not trivial, in particular with Wilson-like fermions. More specifically, to our knowledge there is no theoretical argument guaranteeing that the topological susceptibility exactly vanishes when the PCAC mass does. We present our recent progress on the empirical check of this property using $N_f=1+2$ flavours of clover fermions, where the lightest fermion is tuned very close to $m^{\mathrm{PCAC}}_u=0$ and the mass of the other two is kept of the order of magnitude of the physical $m_s$. This choice is indeed expected to amplify any unknown non-perturbative effect caused by $m_u\neq m_d$. The simulation is repeated for several values of $\beta$, and those results, although preliminary, give a hint about what happens in the continuum limit.
• We present a systematic study of the 2nd order scalar, vector and tensor metric perturbations in the Einstein-de Sitter Universe in synchronous coordinates. For the scalar-scalar coupling between 1st order perturbations, we decompose the 2nd order perturbed Einstein equation into the respective field equations of 2nd order scalar, vector, and tensor perturbations, and obtain their solutions with general initial conditions. In particular, the decaying modes of the solution are included, the 2nd order vector is generated even if the 1st order vector is absent, and the solution for the 2nd order tensor corrects that in the literature. We perform general synchronous-to-synchronous gauge transformations up to 2nd order, generated by a 1st order vector field $\xi^{(1)\mu}$ and a 2nd order one $\xi^{(2)\mu}$. All the residual gauge modes of the 2nd order metric perturbations and density contrast are found, and their number is substantially reduced when the transformed 3-velocity of dust is set to zero. Moreover, we show that only $\xi^{(2)\mu}$ is effective in carrying out the 2nd order transformations that we consider, because $\xi^{(1)\mu}$ has already been used in obtaining the 1st order perturbations. Holding the 1st order perturbations fixed, the transformations by $\xi^{(2)\mu}$ on the 2nd order perturbations have the same structure as those by $\xi^{(1)\mu}$ on the 1st order perturbations.
• Hydrogen (H)-doped LaFeAsO is a prototypical iron-based superconductor. However, its phase diagram extends beyond the standard framework, in which a superconducting (SC) phase follows an antiferromagnetic (AF) phase upon carrier doping; instead, the SC phase is sandwiched between two AF phases appearing in the lightly and heavily H-doped regimes. We performed nuclear magnetic resonance (NMR) measurements under pressure, focusing on the second AF phase in the heavily H-doped regime. The second AF phase is strongly suppressed when a pressure of 3.0 GPa is applied, and apparently shifts to a more highly H-doped regime, whereby a "bare" quantum critical point (QCP) emerges. A quantum critical regime emerges in the paramagnetic state near the QCP; however, the influence of the AF critical fluctuations on the SC phase is limited to a narrow doping regime near the QCP. The optimal SC condition ($T_c \sim$ 48 K) is unaffected by AF fluctuations.
• We study the second-order perturbations in the Einstein-de Sitter Universe in synchronous coordinates. We solve the second-order perturbed Einstein equation with scalar-tensor and tensor-tensor couplings between 1st order perturbations, and obtain, for each coupling, the solutions of the scalar, vector and tensor metric perturbations, including both the growing and decaying modes for general initial conditions. We perform general synchronous-to-synchronous gauge transformations up to 2nd order, which are generated by a 1st order vector field and a 2nd order vector field, and obtain all the residual gauge modes of the 2nd order metric perturbations in synchronous coordinates. We show that only the 2nd order vector field is effective for the 2nd order transformations that we consider, because the 1st order vector field was already fixed in obtaining the 1st order perturbations. In particular, the 2nd order tensor is invariant under 2nd order gauge transformations using $\xi^{(2)\mu}$ only, just as the 1st order tensor is invariant under 1st order transformations.
• The processes of the averaged regression quantiles and of their modifications provide useful tools in regression models when the covariates are not fully under our control. As an application, we mention probabilistic risk assessment in situations where the return depends on some exogenous variables. The processes make it possible to evaluate the expected $\alpha$-shortfall ($0\leq\alpha\leq 1$) and other measures of risk recently accepted in the financial literature, and they also help to measure risk in environmental analysis and elsewhere.
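The expected $\alpha$-shortfall mentioned above can be illustrated with a plain historical estimator (a sketch only; the paper's averaged-regression-quantile construction is more general, and the function name here is ours):

```python
import numpy as np

def expected_shortfall(returns, alpha):
    """Mean loss over the worst alpha-fraction of observed returns
    (simple historical estimator; hypothetical helper, not from the paper)."""
    losses = np.sort(-np.asarray(returns, dtype=float))[::-1]  # largest losses first
    k = max(1, int(np.ceil(alpha * losses.size)))
    return losses[:k].mean()

# e.g. the average of the worst 40% of five observed returns
es = expected_shortfall([-0.10, -0.05, 0.01, 0.02, 0.03], alpha=0.4)
```

With exogenous covariates, the regression-quantile approach replaces the empirical quantile implicit in this estimator with an averaged conditional quantile.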
• Finding hot topics in scholarly fields can help researchers keep up with the latest concepts, trends, and inventions in their field of interest. Due to the rarity of complete large-scale scholarly data, earlier studies approached this problem via manual topic extraction from a limited number of domains, focusing solely on a single feature such as coauthorship or citation relations. Given the compromised effectiveness of such predictions, in this paper we use a real scholarly dataset from the Microsoft Academic Graph, which provides more than 12,000 topics in the field of Computer Science (CS), including 1,200 venues, 14.4 million authors, and 30 million papers with their citation relations from 1950 to the present. Aiming to find the topics that will trend in the CS area, we formalize a novel hot-topic prediction problem in which, with joint consideration of both inter- and intra-topical influence, 17 different scientific features are extracted for a comprehensive description of topic status. Leveraging all 17 features, we obtain good accuracy in forecasting topic scale 5 and 10 years ahead, with R2 values of 0.9893 and 0.9646, respectively. Interestingly, our prediction suggests that the maximum value matters in finding hot topics in scholarly fields, in three respects: (1) the maximum value of each factor, such as an author's maximum h-index and largest citation count, provides three times as much information as the average value in prediction; (2) the mutual influence between the most correlated topics serves as the most telling factor in long-term topic trend prediction, meaning that the topics currently exhibiting the maximum growth rates will drive the correlated topics to become hot in the future; (3) we predict the top 100 fastest-growing (maximum growth rate) topics over the next 5 years, which will potentially receive the major attention in the CS area.
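The R2 values quoted above measure how much of the variance in future topic scale the extracted features explain. The fit-and-score step can be sketched on synthetic data (the features, weights, and noise level below are invented stand-ins, not the paper's data or model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented stand-ins for topic features (e.g. maximum h-index, maximum citations):
X = rng.normal(size=(200, 3))
# Synthetic "topic scale after N years" with a small noise term:
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least-squares fit
r2 = 1.0 - np.var(y - X @ w) / np.var(y)    # coefficient of determination
```

An R2 close to 1, as in the paper's 5- and 10-year forecasts, means the residual variance is a small fraction of the total variance of the target.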
• (Oct 19 2017, cs.GT cs.AI, arXiv:1710.06636v1) Despite efforts to increase the supply of organs from living donors, most kidney transplants performed in Australia still come from deceased donors. The age of these donated organs has increased substantially in recent decades as the rate of fatal road accidents has fallen. The Organ and Tissue Authority in Australia is therefore looking to design a new mechanism that better matches the age of the organ to the age of the patient. I discuss the design, axiomatics and performance of several candidate mechanisms that respect the special online nature of this fair division problem.
• We consider the entropic regularization of discretized optimal transport and propose to solve its optimality conditions via a logarithmic Newton iteration. We show a quadratic convergence rate and validate numerically that the method compares favorably with the more commonly used Sinkhorn--Knopp algorithm for small regularization strength. We further investigate numerically the robustness of the proposed method with respect to parameters such as the mesh size of the discretization.
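The Sinkhorn-Knopp baseline mentioned above alternately rescales the rows and columns of the Gibbs kernel until both marginals match; a minimal sketch (not the paper's logarithmic Newton method) looks like:

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=500):
    """Sinkhorn-Knopp fixed-point iterations for entropy-regularized OT
    between histograms a and b with cost matrix C and regularization eps."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # match column marginals
        u = a / (K @ v)                  # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)

# tiny example: uniform histograms on a 3-point grid with squared-distance cost
x = np.arange(3.0)
P = sinkhorn(np.full(3, 1/3), np.full(3, 1/3),
             (x[:, None] - x[None, :]) ** 2, eps=0.5)
```

For small eps the kernel entries underflow and the iteration converges slowly, which is precisely the regime where the paper reports that the Newton iteration compares favorably.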
• Light-shining-through-a-wall experiments represent a new experimental approach to searching for undiscovered elementary particles not accessible with accelerator-based experiments. The next generation of these experiments, such as ALPS II, requires high-finesse, long-baseline optical cavities with fast length control. In this paper we report on a length-stabilization control loop used to keep a cavity resonant with light at a wavelength of 532 nm. It achieves a unity-gain frequency of 4 kHz and actuates on a mirror with a diameter of 50.8 mm. This length-control system was implemented on a 10 m cavity, and its projected performance meets the ALPS II requirements. The finesse of this cavity was measured to be 93,800$\pm$500 for 1064 nm light, a value close to the design requirement for the ALPS II regeneration cavity.
• We present a study of the isospin-breaking (IB) corrections to pseudoscalar (PS) meson masses using the gauge configurations produced by the ETM Collaboration with $N_f=2+1+1$ dynamical quarks at three lattice spacings varying from 0.089 to 0.062 fm. Our method is based on a combined expansion of the path integral in powers of the small parameters $(\widehat{m}_d - \widehat{m}_u)/\Lambda_{QCD}$ and $\alpha_{em}$, where $\widehat{m}_f$ is the renormalized quark mass and $\alpha_{em}$ the renormalized fine-structure constant. We obtain results for the pion, kaon and $D$-meson mass splittings; for the parameters $\epsilon_\gamma(\overline{\mathrm{MS}}, 2~\mbox{GeV})$, $\epsilon_{\pi^0}$ and $\epsilon_{K^0}(\overline{\mathrm{MS}}, 2~\mbox{GeV})$ quantifying the violation of Dashen's theorem; for the light quark masses $(\widehat{m}_d - \widehat{m}_u)(\overline{\mathrm{MS}}, 2~\mbox{GeV})$ and $(\widehat{m}_u / \widehat{m}_d)(\overline{\mathrm{MS}}, 2~\mbox{GeV})$; for the flavour-symmetry-breaking parameters $R(\overline{\mathrm{MS}}, 2~\mbox{GeV})$ and $Q(\overline{\mathrm{MS}}, 2~\mbox{GeV})$; and for the strong IB effects on the kaon decay constants.
• We investigate topological phase transitions in Chern insulators within three-band models, focusing on the empty band and on the lowest band populated by spinless fermions. We consider Lieb and kagome lattices and observe phase transitions driven by the hopping integral between nearest neighbors, which changes the lowest-band Chern number from $C=1$ to $C=-1$. In the single-particle picture, the different phases are examined by investigating the corresponding entanglement spectra and the evolution of the entanglement entropy. The entanglement spectra reveal the spectral flow characteristic of topologically nontrivial systems before and after the phase transitions. For the lowest band at $1/3$ filling, fractional Chern insulator (FCI) phases are identified by examining the ground-state momenta, the spectral flow, and the counting of entanglement energy levels below the gap in the entanglement spectrum. A quasilinear dependence of the $\alpha(n_A)$ term of the entanglement entropy is observed for the FCI phase, similar to the linear behavior expected for the Laughlin phase in the fractional quantum Hall effect. At the topological phase transition, for both the empty and the partially filled lowest band, the closure of the energy gap and a discontinuity in the entanglement entropy are observed. This coincides with the divergence of the standard deviation of the Berry curvature. We also find a phase transition driven by the nearest-neighbor interaction in the Lieb lattice, where the many-body energy gap closes. The phase transitions are shown to be stable for arbitrary system size and are thus predicted to persist in the thermodynamic limit. While our calculations are performed for an interaction energy far exceeding the gap between the two lowest energy bands, we note that the higher band does not affect the phase transitions, although it destabilizes the FCI phases.
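Band Chern numbers of the kind discussed above are commonly computed with the Fukui-Hatsugai-Suzuki lattice method. As a self-contained stand-in we sketch it for the two-band Qi-Wu-Zhang model rather than the paper's three-band Lieb/kagome models (model, parameter names, and grid size are our choices):

```python
import numpy as np

def chern_number(u, N=30):
    """Lowest-band Chern number of the Qi-Wu-Zhang model H(k) = d(k).sigma,
    d = (sin kx, sin ky, u + cos kx + cos ky), via the
    Fukui-Hatsugai-Suzuki plaquette method on an N x N Brillouin-zone grid."""
    ks = 2 * np.pi * np.arange(N) / N
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    psi = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            H = (np.sin(kx) * sx + np.sin(ky) * sy
                 + (u + np.cos(kx) + np.cos(ky)) * sz)
            _, vecs = np.linalg.eigh(H)
            psi[i, j] = vecs[:, 0]          # lower-band eigenvector

    def link(a, b):                          # U(1) link variable between sites
        z = np.vdot(a, b)
        return z / abs(z)

    F = 0.0                                  # accumulated lattice field strength
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            loop = (link(psi[i, j],  psi[ip, j])
                    * link(psi[ip, j], psi[ip, jp])
                    * link(psi[ip, jp], psi[i, jp])
                    * link(psi[i, jp], psi[i, j]))
            F += np.angle(loop)              # curvature on one plaquette
    return round(F / (2 * np.pi))
```

For this model the lower band carries $|C| = 1$ in the topological window $0 < |u| < 2$ and $C = 0$ outside it, mirroring the $C = \pm 1$ transitions analyzed in the paper.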
• We report results from lithium abundance determinations using high-resolution spectral analysis of 107 metal-rich stars from the Calan-Hertfordshire Extrasolar Planet Search programme. We set out to understand the lithium distribution of the population of stars in this survey. The lithium abundance, taking NLTE effects into account, was determined from fits to the Li I 6708 Å resonance doublet profiles in the observed spectra. We find that a) fast rotators tend to have higher lithium abundances, b) $\log$ N(Li) is higher in more massive/hotter stars, c) $\log$ N(Li) is higher in less evolved stars, i.e. stars of lower \logg, d) stars with metallicities $>$0.25~dex show no lithium lines in their spectra, e) most of our planet hosts rotate more slowly, f) we estimate a lower limit on the lithium isotopic ratio of \Li $>$10 in the atmospheres of two SWP and two non-SWP stars. Measurable lithium abundances were found in the atmospheres of 45 stars located at distances of 20-170 pc from the Sun; for the other 62 stars, upper limits on log N(Li) were computed. We find well-defined dependences of lithium abundance on \Teff and \vsini, and a less pronounced one on \logg. In the case of \vsini we see two sequences of stars: those with measurable lithium and those with only an upper limit on log N(Li). About 10\% of our targets are known to host planets. Only two SWP have notable lithium abundances, so we find a lower proportion of stars with detectable Li among known planet hosts than among stars without planets. However, given the small size of our planet-host sample, our analysis does not show any statistically significant difference in lithium abundance between SWP and stars without known planets.
• We use the energy-balance code MAGPHYS to determine stellar and dust masses, and dust-corrected star-formation rates, for over 200,000 GAMA galaxies, 170,000 G10-COSMOS galaxies and 200,000 3D-HST galaxies. Our values agree well with previously reported measurements and constitute a representative and homogeneous dataset spanning a broad range in stellar mass (10^8---10^12 Msol), dust mass (10^6---10^9 Msol), and star-formation rate (0.01---100 Msol per yr), over a broad redshift range (0.0 < z < 5.0). We combine these data to measure the cosmic star-formation history (CSFH), the stellar-mass density (SMD), and the dust-mass density (DMD) over a 12 Gyr timeline. The data mostly agree with previous estimates, where they exist, and provide a quasi-homogeneous dataset using consistent mass and star-formation estimators with consistent underlying assumptions over the full time range. As a consequence, our formal errors are significantly reduced compared to the historic literature. Integrating our cosmic star-formation history, we precisely reproduce the stellar-mass density with an ISM replenishment factor of 0.50 +/- 0.07, consistent with our choice of Chabrier IMF plus some modest amount of stripped stellar mass. Exploring the cosmic dust-density evolution, we find a gradual increase in dust density with lookback time. We build a simple phenomenological model from the CSFH to account for the dust-mass evolution, and infer two key conclusions: (1) for every unit of stellar mass formed, 0.0065---0.004 units of dust mass are also formed; (2) over the history of the Universe, approximately 90 to 95 per cent of all dust formed has been destroyed and/or ejected.
• We construct an exact tensor functor from the category $\mathcal{A}$ of finite-dimensional graded modules over the quiver Hecke algebra of type $A_\infty$ to the category $\mathscr C_{B^{(1)}_n}$ of finite-dimensional integrable modules over the quantum affine algebra of type $B^{(1)}_n$. It factors through the category $\mathcal T_{2n}$, which is a localization of $\mathcal{A}$. As a result, this functor induces a ring isomorphism from the Grothendieck ring of $\mathcal T_{2n}$ (ignoring the gradings) to the Grothendieck ring of a subcategory $\mathscr C^{0}_{B^{(1)}_n}$ of $\mathscr C_{B^{(1)}_n}$. Moreover, it induces a bijection between the classes of simple objects. Because the category $\mathcal T_{2n}$ is related to categories $\mathscr C^{0}_{A^{(t)}_{2n-1}}$ $(t=1,2)$ of the quantum affine algebras of type $A^{(t)}_{2n-1}$, we obtain an interesting connection between those categories of modules over quantum affine algebras of type $A$ and type $B$. Namely, for each $t =1,2$, there exists an isomorphism between the Grothendieck ring of $\mathscr C^{0}_{A^{(t)}_{2n-1}}$ and the Grothendieck ring of $\mathscr C^{0}_{B^{(1)}_n}$, which induces a bijection between the classes of simple modules.
• We consider a boundary-value problem describing the steady motion of a two-component mixture of viscous compressible heat-conducting fluids in a bounded domain. We make no simplifying assumptions except for postulating the coincidence of phase temperatures (which is physically justified in certain situations), that is, we retain all summands in equations that are a natural generalization of the Navier-Stokes-Fourier model of the motion of a one-component medium. We prove the existence of weak generalized solutions of the problem.
• Sau, Lutchyn, Tewari and Das Sarma (SLTD) proposed a heterostructure consisting of a semiconducting thin film sandwiched between an s-wave superconductor and a magnetic insulator, and showed that it can host a Majorana zero mode. Here we study the spin polarization of the vortex core states and spin-selective Andreev reflection at the vortex center of the SLTD model. In the topological phase, the differential conductance at the vortex center contributed by the Andreev reflection is spin selective and has a quantized value $(dI/dV)^{topo}_A =2e^2/h$ at zero bias. In the topologically trivial phase, $(dI/dV)^{trivial}_A$ at the lowest quasiparticle energy of the vortex core is spin selective due to the spin-orbit coupling (SOC). Unlike in the topological phase, $(dI/dV)^{trivial}_A$ is suppressed in the Giaever limit and vanishes exactly at zero bias due to destructive quantum interference.
• We construct the error distribution of galactic rotation curve ($\Theta$) measurements using 134 data points from the 162 measurements compiled in De Grijs et al. (arXiv:1709.02501), following the same procedures used in previous works by Ratra and collaborators. We determine the weighted mean of these measurements to be $\Theta_{Mean} = 226.73 \pm 0.70$ km/s, and the median estimate to be $\Theta_{Med} = 234.66\pm 0.58$ km/s. We also checked whether the error distribution (constructed using both the weighted mean and the median as the central estimate) is Gaussian, and find for both estimates that it has much wider tails than a Gaussian. We then fit the data to four distributions: Gaussian, Cauchy, double-exponential, and Student's $t$. The best fit is obtained with the Student's $t$ distribution with $n=2$ using the median as the central estimate, corresponding to a $p$-value of 0.19. All other distributions provide poorer fits to the data.
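The weighted mean quoted above is the standard inverse-variance combination of measurements with individual uncertainties; a sketch with made-up numbers (not the 134 rotation-curve data points) is:

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of measurements and its 1-sigma error."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # weights 1/sigma_i^2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

# two made-up measurements of Theta in km/s with different uncertainties
m, e = weighted_mean([230.0, 220.0], [5.0, 10.0])
```

The more precise measurement dominates the combination, and the combined error is smaller than either individual error, as with the $\pm 0.70$ km/s quoted above.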

Siddhartha Das Oct 06 2017 03:18 UTC

Here is a work in related direction: "Unification of Bell, Leggett-Garg and Kochen-Specker inequalities: Hybrid spatio-temporal inequalities", Europhysics Letters 104, 60006 (2013), which may be relevant to the discussions in your paper. [https://arxiv.org/abs/1308.0270]

Bin Shi Oct 05 2017 00:07 UTC

Comments on this paper are welcome!

Bassam Helou Sep 22 2017 17:21 UTC

The initial version of the article does not adequately and clearly explain how certain equations demonstrate whether a particular interpretation of QM violates the no-signaling condition.
A revised and improved version is scheduled to appear on September 25.

James Wootton Sep 21 2017 05:41 UTC

What does this imply for https://scirate.com/arxiv/1608.00263? I'm guessing they still regard it as valid (it is ref [14]), but just too hard to implement for now.

Ben Criger Sep 08 2017 08:09 UTC

Oh look, there's another technique for decoding surface codes subject to X/Z correlated errors: https://scirate.com/arxiv/1709.02154

Aram Harrow Sep 06 2017 07:54 UTC

The paper only applies to conformal field theories, and such a result cannot hold for more general 1-D systems by 0705.4077 and other papers (assuming standard complexity theory conjectures).

Felix Leditzky Sep 05 2017 21:27 UTC

Thanks for the clarification, Philippe!

Philippe Faist Sep 05 2017 21:09 UTC

Hi Felix, thanks for the good question.

We've found it more convenient to consider trace-nonincreasing and $\Gamma$-sub-preserving maps (and this is justified by the fact that they can be dilated to fully trace-preserving and $\Gamma$-preserving maps on a larger system). The issue arises because

...(continued)