# Top arXiv papers

• We use high-resolution ARPES to study the resonant, collective excitation mode in the superconducting state of Bi2212. By collecting very high quality data we find new features in the self-energy in the antinodal region, where the interaction of electrons with the mode is strongest. This interaction leads to a pronounced peak in the scattering rate, and we demonstrate that this feature is directly responsible for the well-known hump-dip-peak structure in cuprates. By studying how the weight of this peak changes with temperature, we demonstrate unequivocally that the interaction of electrons with the resonant mode in cuprates vanishes at Tc and is strongly localized in momentum space close to the antinode. These findings present a consistent picture of the line-shape and self-energy signatures of electron-boson coupling in cuprates and resolve a long-standing controversy surrounding this issue. The momentum dependence of the strength of the electron-mode interaction enables the development of a quantitative theory of this phenomenon in cuprates.
• Varying speed of light (VSL) models have been used in cosmology to allow the physical constants to vary over time. Separately, the Dvali-Gabadadze-Porrati (DGP) braneworld model, especially its normal branch, has been extensively discussed as an explanation of the current cosmic acceleration. In this article we show that the normal branch of DGP in VSL cosmology leads to self-accelerating behavior and can therefore account for cosmic acceleration. Applying statefinder diagnostics demonstrates that our result deviates slightly from the $\Lambda$CDM model.
• We discuss the main mechanisms generating chaotic behavior of the quantum trajectories in the de Broglie-Bohm picture of quantum mechanics, in systems of two and three degrees of freedom. In the 2D case, chaos is generated via multiple scatterings of the trajectories with one or more 'nodal point-X-point complexes'. In the 3D case, these complexes form foliations along 'nodal lines' accompanied by 'X-lines'. We also identify cases of integrable or partially integrable quantum trajectories. The role of chaos is important in interpreting the dynamical origin of the 'quantum relaxation' effect, i.e. the dynamical emergence of Born's rule for the quantum probabilities, which has been proposed as an extension of the Bohmian picture of quantum mechanics. In particular, the local scaling laws characterizing the chaotic scattering phenomena near X-points, or X-lines, are related to the global rate at which quantum relaxation is observed to proceed. Also, the degree of chaos determines the rate at which nearly coherent initial wavepacket states lose their spatial coherence over time.
• Inspired by the boom of the consumer IoT market, many device manufacturers, start-ups and technology giants have jumped into the space. Unfortunately, the exciting utility and rapid marketization of IoT come at the expense of privacy and security. Industry reports and academic work have revealed many attacks on IoT systems, resulting in privacy leakage, property loss and large-scale availability problems. To mitigate such threats, a few solutions have been proposed, but it remains unclear what impact they have on the IoT ecosystem. In this work, we perform a comprehensive study of reported attacks and defenses in the realm of IoT, aiming to find out what we know, where current studies fall short and how to move forward. To this end, we first build a toolkit that searches through a massive amount of online data using semantic analysis to identify over 3000 IoT-related articles. Further, by clustering the collected data using machine learning techniques, we are able to compare academic views with findings from industry and other sources, in an attempt to understand the gaps between them, the trend of IoT security risks and new problems that need further attention. We systematize this process by proposing a taxonomy for the IoT ecosystem and organizing IoT security into five problem areas. We use this taxonomy as a beacon to assess each IoT work across a number of properties we define. Our assessment reveals that the relevant security and privacy problems are far from solved. We discuss how each proposed solution can be applied to a problem area and highlight their strengths, assumptions and constraints. We stress the need for a security framework for IoT vendors and discuss the trend of shifting security liability to external or centralized entities. We also identify open research problems and provide suggestions towards a secure IoT ecosystem.
• We propose and analyze a method to engineer effective interactions in an ensemble of d-level systems (qudits) driven by global control fields. In particular, we present (i) a necessary and sufficient condition under which a given interaction can be turned off (decoupled), (ii) the existence of a universal sequence that decouples any (cancellable) interaction, and (iii) an efficient algorithm to engineer a target Hamiltonian from an initial Hamiltonian (if possible). As examples, we provide a 6-pulse sequence that decouples effective spin-1 dipolar interactions and demonstrate that a spin-1 Ising chain can be engineered to study transitions among three distinct symmetry-protected topological phases.
• A patchwork method is used to study the dynamics of loss and recovery of an initial configuration in spin glass models in dimensions d=1 and d=2. The patchwork heuristic is used to accelerate the dynamics to investigate how models might reproduce the remarkable memory effects seen in experiment. Starting from a ground state configuration computed for one choice of nearest neighbor spin couplings, the sample is aged up to a given scale under new random couplings, leading to the partial erasure of the original ground state. The couplings are then restored to the original choice and patchwork coarsening is again applied, in order to assess the recovery of the original state. Eventual recovery of the original ground state upon coarsening is seen in two-dimensional Ising spin glasses and one-dimensional Potts models, while one-dimensional Ising glasses neither lose nor gain overlap with the ground state during the recovery stage. The recovery for the two-dimensional Ising spin glasses suggests scaling relations that lead to a recovery length scale that grows as a power of the aging length scale.
• We derive recursive representations in the internal weights of N-point Virasoro conformal blocks in the sphere linear channel and the torus necklace channel, and recursive representations in the central charge of arbitrary Virasoro conformal blocks on the sphere, the torus, and higher genus Riemann surfaces in the plumbing frame.
• Unstable spin-1 particles are properly described by including absorptive corrections to the electromagnetic vertex and propagator, without breaking the electromagnetic gauge invariance. We show that the modified propagator can be set into a complex mass form, provided the mass and the width parameters, which are properly defined at the pole position, are replaced by energy dependent functions fulfilling the same requirements at the pole. We exemplify the case for the $K^*(892)$ vector meson, where the mass function deviates around 2 MeV from the $K\pi$ threshold to the pole position. The absorptive correction depends on the mass of the particles in the loop. For vector mesons, whose main decay is into two pseudoscalar mesons ($PP'$), the flavor symmetry breaking induces a correction to the longitudinal part of the propagator. Considering the $\tau^- \to K_S\pi^-\nu_\tau$ decay, we illustrate these corrections by obtaining the modified vector and scalar form factors. The $K_S\pi^-$ spectrum is described considering the $K^*(892)$ and $K^{'*}(1410)$ vectors and one scalar particle. Nonetheless, for this case, the correction to the scalar form factor is found to be negligible.
• Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This in turn can be achieved by variational regularization where the penalty term is the sum of the absolute values of the wavelet coefficients. Daubechies, Defrise and De Mol (Comm. Pure Appl. Math. 57) showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft threshold parameter $\mu>0$ is analogous to the notoriously difficult problem of picking the optimal regularization parameter in Tikhonov regularization. Here a novel automatic method is introduced for choosing $\mu$, based on a control algorithm driving the sparsity of the reconstruction to an *a priori* known ratio of nonzero versus zero wavelet coefficients in the unknown function.
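The iterative soft-thresholding scheme of Daubechies, Defrise and De Mol can be sketched in a few lines. The following is a minimal illustration on a toy sparse recovery problem, not the paper's control algorithm for tuning $\mu$; the problem sizes and the choice of $\mu$ are arbitrary.

```python
import numpy as np

def soft_threshold(x, mu):
    # Componentwise minimizer of 0.5*(y - x)^2 + mu*|y|:
    # shrink each coefficient toward zero by mu, clipping at zero.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def ista(A, b, mu, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + mu*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data-fit term, then shrinkage on the penalty.
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * mu)
    return x

# Toy example: recover a sparse coefficient vector from a few linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[:5] = 1.0
x_hat = ista(A, A @ x_true, mu=0.01)
```

Each iteration alternates a gradient step on the quadratic data-fit term with the shrinkage step; larger $\mu$ drives more coefficients exactly to zero, which is the sparsity ratio the paper's controller steers.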
• We investigate the estimation of an unknown Gaussian process (comprising displacement, squeezing and phase shift) applied to a matter system. The state of the matter system is not measured directly; instead, we measure an optical mode that interacts with the system. We propose an interferometric setup exploiting a beam-splitter type of light-matter interaction with homodyne detectors, together with two methods of estimation. We demonstrate the superiority of the interferometric setup over alternative non-interferometric schemes. Importantly, we show that even limited coupling strength and a noisy matter system suffice for very good estimation. Our work opens the way to many future investigations of light-matter interferometry for experimental platforms in quantum metrology of matter systems.
• In all supersymmetric theories, gravitinos, with mass suppressed by the Planck scale, are an obvious candidate for dark matter; but if gravitinos ever reached thermal equilibrium, such dark matter is apparently either too abundant or too hot, and is excluded. However, in theories with an axion, a saxion condensate is generated during an early era of cosmological history and its late decay dilutes dark matter. We show that such dilution allows previously thermalized gravitinos to account for the observed dark matter over very wide ranges of gravitino mass, keV < $m_{3/2}$ < TeV, axion decay constant, $10^9$ GeV < $f_a$ < $10^{16}$ GeV, and saxion mass, 10 MeV < $m_s$ < 100 TeV. Constraints on this parameter space are studied from BBN, supersymmetry breaking, gravitino and axino production from freeze-in and saxion decay, and from axion production from both misalignment and parametric resonance mechanisms. Large allowed regions of $(m_{3/2}, f_a, m_s)$ remain, but differ for DFSZ and KSVZ theories. Superpartner production at colliders may lead to events with displaced vertices and kinks, and may contain saxions decaying to $(WW,ZZ,hh), gg, \gamma \gamma$ or a pair of Standard Model fermions. Freeze-in may lead to a sub-dominant warm component of gravitino dark matter, and saxion decay to axions may lead to dark radiation.
• We are glad that our paper has generated intense discussions in the fMRI field, on how to analyze fMRI data and how to correct for multiple comparisons. The goal of the paper was not to disparage any specific fMRI software, but to point out that parametric statistical methods are based on a number of assumptions that are not always valid for fMRI data, and that non-parametric statistical methods are a good alternative. Through AFNI's introduction of non-parametric statistics in the function 3dttest++, the three most common fMRI software packages now all support non-parametric group inference (SPM through the toolbox SnPM, and FSL through the function randomise).
• Consider the supercritical branching random walk on the real line in the boundary case and the associated Gibbs measure $\nu_{n,\beta}$ on the $n^\text{th}$ generation, which is also the polymer measure on a disordered tree with inverse temperature $\beta$. The convergence of the partition function $W_{n,\beta}$, after rescaling, towards a nontrivial limit has been proved by Aïdékon and Shi in the critical case $\beta = 1$ and by Madaule when $\beta >1$. We study here the near-critical case, where $\beta_n \to 1$, and prove the convergence of $W_{n,\beta_n}$, after rescaling, towards a constant multiple of the limit of the derivative martingale. Moreover, trajectories of particles chosen according to the Gibbs measure $\nu_{n,\beta}$ have been studied by Madaule in the critical case, with convergence towards the Brownian meander, and by Chen, Madaule and Mallein in the strong disorder regime, with convergence towards the normalized Brownian excursion. We prove here the convergence for trajectories of particles chosen according to the near-critical Gibbs measure and display continuous families of processes from the meander to the excursion or to the Brownian motion.
• In this work we introduce the category of multiplicative sections of an LA-groupoid. We prove that this category carries natural strict Lie 2-algebra structures, which are Morita invariant. As applications, we study the algebraic structure underlying multiplicative vector fields on a Lie groupoid and in particular vector fields on differentiable stacks. We also introduce the notion of geometric vector field on the quotient stack of a Lie groupoid, showing that the space of such vector fields is a Lie algebra. We describe the Lie algebra of geometric vector fields in several cases, including classifying stacks, quotient stacks of regular Lie groupoids and in particular orbifolds, and foliation groupoids.
• All-atom molecular dynamics simulations of an elastohydrodynamic lubrication oil film are performed to study the effect of pressure. Fluid molecules of n-hexane are confined between two solid plates under a constant normal pressure of 0.1--8.0 GPa. Traction simulations are performed by applying relative sliding motion to the solid plates. A transition in the traction behavior is observed around 0.5--2.0 GPa, corresponding to the experimentally observed crossover from the viscoelastic region to the plastic--elastic region. This transition is related to the suppression of fluctuations in the molecular motion.
• This paper presents a new way to design a Fuzzy Terminal Iterative Learning Control (TILC) to control the heater temperature setpoints of a thermoforming machine. The fuzzy TILC is based on the inverse of a fuzzy model of the machine, built from experimental (or simulation) data with kriging interpolation. The fuzzy inference system usually used for such a model is the zero-order Takagi-Sugeno-Kang (TSK) system, with constant consequents. In this paper, a first-order TSK system is used instead, with the fuzzy model rules expressed using matrices. This makes the fuzzy model much easier to invert than a model based on the zero-order TSK system. Based on simulation results, the proposed fuzzy TILC gives a very good initial guess for the heater temperature setpoints, making it possible to have almost no wastage of plastic sheets. Simulation results show the effectiveness of the fuzzy TILC compared to a crisp TILC, even though the fuzzy controller is based on a fuzzy model built from noisy data.
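A first-order TSK system with matrix-form consequents can be sketched as follows. This is a generic illustration of the inference step, not the paper's thermoforming model; the Gaussian membership functions and all names and shapes are illustrative assumptions.

```python
import numpy as np

def tsk1_eval(x, centers, widths, A, b):
    """First-order Takagi-Sugeno-Kang inference with Gaussian memberships.

    Each rule i has a firing strength w_i(x) and a linear consequent
    A[i] @ x + b[i]; the output is the firing-strength-weighted average
    of the consequents. Writing the consequents in matrix form is what
    makes this model easier to invert than a zero-order (constant) one.
    """
    x = np.asarray(x, dtype=float)
    # Rule firing strengths: product of Gaussian memberships over the inputs.
    w = np.exp(-np.sum(((x - centers) / widths) ** 2, axis=1))
    w = w / w.sum()
    # One linear consequent per rule, combined by the normalized weights.
    y_rules = A @ x + b
    return w @ y_rules

# Two-rule example on a 2-dimensional input.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.ones((2, 2))
A = np.array([[2.0, 1.0], [1.0, -1.0]])
b = np.array([0.5, 0.0])
y = tsk1_eval([0.3, 0.7], centers, widths, A, b)
```

The output is a convex combination of affine functions of the input, so inverting the model for a target output reduces to solving (locally) linear equations rather than interpolating between constants.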
• When individual $p$-values are conservative under the null, usual multiple testing methods may lose power substantially. We propose to reduce the total number of tests by conditioning: $p$-values less than a chosen threshold $0 < \tau < 1$ are kept and divided by $\tau$, then a usual multiple testing procedure is applied. This method controls the multiple testing error if the conservative $p$-values are also uniformly conservative, meaning the conditional distribution $(p/\tau)\,|\,p \le \tau$ is stochastically larger than the uniform distribution on $(0,1)$ for any $\tau$, where $p$ is the conservative $p$-value. We show that uniform conservativeness holds for one-sided tests in a one-dimensional exponential family (e.g. testing for qualitative interaction) as well as for testing $|\mu|\le\eta$ using a statistic $X \sim \mathrm{N}(\mu,1)$ (e.g. testing for practical importance with threshold $\eta$). Our theoretical and numerical results suggest the proposed tests gain significant power when many $p$-values are uniformly conservative and lose little power when no $p$-value is uniformly conservative.
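The conditioning step composes naturally with any standard procedure; paired with Benjamini-Hochberg it can be sketched as below. This is a minimal illustration of the idea, not the paper's code, and the function name is ours; $\tau$ must be fixed before looking at the data for the error guarantee to apply.

```python
import numpy as np

def conditional_bh(pvals, tau, alpha):
    """Keep p-values <= tau, rescale by tau, then apply Benjamini-Hochberg.

    Returns a boolean rejection mask aligned with the original p-values;
    discarded tests (p > tau) are never rejected.
    """
    pvals = np.asarray(pvals, dtype=float)
    keep = pvals <= tau
    q = pvals[keep] / tau  # rescaled p-values, uniform under a uniformly conservative null
    m = q.size
    reject_kept = np.zeros(m, dtype=bool)
    if m > 0:
        order = np.argsort(q)
        thresh = alpha * np.arange(1, m + 1) / m  # BH step-up thresholds
        below = q[order] <= thresh
        if below.any():
            k = np.max(np.nonzero(below)[0])
            reject_kept[order[:k + 1]] = True
    reject = np.zeros(pvals.size, dtype=bool)
    reject[keep] = reject_kept
    return reject
```

Because only the $m$ kept tests enter the BH correction, the denominator in the step-up thresholds shrinks, which is where the power gain over running BH on all tests comes from.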
• Biological populations are subject to fluctuating environmental conditions. Different adaptive strategies can allow them to cope with these fluctuations: specialization to one particular environmental condition, adoption of a generalist phenotype that compromises between conditions, or population-wise diversification (bet-hedging). Which strategy provides the largest selective advantage in the long run depends on the range of accessible phenotypes and the statistics of the environmental fluctuations. Here, we analyze this problem in a simple mathematical model of population growth. First, we review and extend a graphical method to identify the nature of the optimal strategy when the environmental fluctuations are uncorrelated. Temporal correlations in environmental fluctuations open up new strategies that rely on memory but are mathematically challenging to study: we present here new analytical results to address this challenge. We illustrate our general approach by analyzing optimal adaptive strategies in the presence of trade-offs that constrain the range of accessible phenotypes. Our results extend several previous studies and have applications to a variety of biological phenomena, from antibiotic resistance in bacteria to immune responses in vertebrates.
• Automatic Music Transcription (AMT) is one of the oldest and most well-studied problems in the field of music information retrieval. Within this challenging research field, onset detection and instrument recognition take important places in transcription systems, as they respectively help to determine the exact onset times of notes and to recognize the corresponding instrument sources. The aim of this study is to explore the usefulness of multiscale scattering operators for these two tasks on plucked string instrument and piano music. After summarizing the theoretical background and illustrating the key features of this sound representation method, we evaluate its performance in comparison with other classical sound representations. Using both MIDI-driven datasets with real instrument samples and real musical pieces, scattering is shown to outperform the other sound representations on these AMT subtasks, owing to its richer sound representation and invariance properties.
• We present an updated version of the mass--metallicity relation (MZR) using integral field spectroscopy data obtained from 734 galaxies observed by the CALIFA survey. These unparalleled spatially resolved spectroscopic data allow us to determine the metallicity at the same physical scale ($\mathrm{R_{e}}$) for different calibrators. We obtain MZ relations with similar shapes for all calibrators, once the scale factors among them are taken into account. We do not find any significant secondary relation of the MZR with either the star formation rate (SFR) or the specific SFR for any of the calibrators used in this study, based on the analysis of the residuals of the best-fitted relation. However, we do see a hint of a (s)SFR-dependent deviation of the MZ relation at low masses (M$<$10$^{9.5}$M$_\odot$), where our sample is not complete. We are thus unable to confirm the results by Mannucci et al. (2010), although we cannot exclude that this result is due to the differences in the analysed datasets. In contrast, our results are inconsistent with the results by Lara-Lopez et al. (2010), and we can exclude the presence of an SFR-Mass-Oxygen abundance Fundamental Plane. These results agree with previous findings suggesting that either (1) the secondary relation with the SFR could be induced by an aperture effect in single fiber/aperture spectroscopic surveys, (2) it could be related to a local effect confined to the central regions of galaxies, or (3) it is just restricted to the low-mass regime, or a combination of the three effects.
• An oriented $k$-uniform hypergraph has Property O if, for every linear order of the vertex set, there is some edge oriented consistently with the linear order. Duffus, Kay and Rödl investigate the minimum number $f(k)$ of edges a $k$-uniform hypergraph having Property O can have. They prove $k! \leq f(k) \leq (k^2 \ln k) k!$, where the upper bound holds for sufficiently large $k$. In this note we improve the upper bound by a factor of $k \ln k$, showing $f(k) \leq \left(\lfloor \frac{k}{2} \rfloor +1 \right) k! - \lfloor \frac{k}{2} \rfloor (k-1)!$ for every $k\geq 3$. Furthermore, they introduce the minimum number $n(k)$ of vertices a $k$-uniform hypergraph having Property O can have. For $k=3$ they show $6\leq n(3) \leq 9$ and ask for the precise value of $n(3)$. We show that $n(3)=6$.
• Stochastic gradient descent based algorithms are typically used as general optimization tools for most deep learning models. A Restricted Boltzmann Machine (RBM) is a probabilistic generative model that can be stacked to construct deep architectures. For RBMs with Bernoulli inputs, non-Euclidean algorithms such as stochastic spectral descent (SSD) have been specifically designed to speed up convergence by making better use of the gradient estimates obtained by sampling. However, the existing algorithm and its theoretical justification depend on the assumption that the possible configurations of the inputs are finite, as with binary variables. The purpose of this paper is to generalize SSD to Gaussian RBMs, which are capable of modeling continuous data, removing the previous assumption. We propose gradient descent methods in a non-Euclidean space of parameters, by deriving upper bounds on the logarithmic partition function of RBMs based on the Schatten-$\infty$ norm. We empirically show the advantage and improvement of SSD over stochastic gradient descent (SGD).
• This paper considers a distributed gossip approach for finding a Nash equilibrium in networked games on graphs. In such games a player's cost function may be affected by the actions of any subset of players. An interference graph is employed to illustrate the partially-coupled cost functions and the asymmetric information requirements. For a given interference graph, network communication between players is considered to be limited. A generalized communication graph is designed so that players exchange only their required information. An algorithm is designed whereby players, with possibly partially-coupled cost functions, make decisions based on the estimates of other players' actions obtained from local neighbors. It is shown that this choice of communication graph guarantees that all players' information is exchanged after sufficiently many iterations. Using a set of standard assumptions on the cost functions, the interference and the communication graphs, almost sure convergence to a Nash equilibrium is proved for diminishing step sizes. Moreover, the case when the cost functions are not known by the players is investigated and a convergence proof is presented for diminishing step sizes. The effect of the second largest eigenvalue of the expected communication matrix on the convergence rate is quantified. The trade-off between parameters associated with the communication graph and the ones associated with the interference graph is illustrated. Numerical results are presented for a large-scale networked game.
• Pursuing the notion of ambidexterity developed by Hopkins and Lurie, we prove that the span $\infty$-category of finite n-truncated spaces is the free n-semiadditive $\infty$-category generated by a single object. Passing to presentable $\infty$-categories one obtains a description of the free presentable n-semiadditive $\infty$-category in terms of a new notion of n-commutative monoids, which can be described as spaces in which families of points parameterized by finite $n$-truncated spaces can be coherently summed. Such an abstract summation procedure can be used to give a formal definition of the finite path integrals described by Freed, Hopkins, Lurie and Teleman in the context of 1-dimensional topological field theories.
• Attacks on the microarchitecture of modern processors have become a practical threat to security and privacy in desktop and cloud computing. Recently, cache attacks have successfully been demonstrated on ARM based mobile devices, suggesting they are as vulnerable as their desktop or server counterparts. In this work, we show that previous literature might have left an overly pessimistic conclusion of ARM's security as we unveil AutoLock: an internal performance enhancement found in inclusive cache levels of ARM processors that adversely affects Evict+Time, Prime+Probe, and Evict+Reload attacks. AutoLock's presence on system-on-chips (SoCs) is not publicly documented, yet knowing that it is implemented is vital to correctly assess the risk of cache attacks. We therefore provide a detailed description of the feature and propose three ways to detect its presence on actual SoCs. We illustrate how AutoLock impedes cross-core cache evictions, but show that its effect can also be compensated in a practical attack. Our findings highlight the intricacies of cache attacks on ARM and suggest that a fair and comprehensive vulnerability assessment requires an in-depth understanding of ARM's cache architectures and rigorous testing across a broad range of ARM based devices.
• Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of kicked beam. All of the described features were implemented in the Sixdsimulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.
• In this paper, we present the ADMIRE architecture; a new framework for developing novel and innovative data mining techniques to deal with very large and distributed heterogeneous datasets in both commercial and academic applications. The main ADMIRE components are detailed as well as its interfaces allowing the user to efficiently develop and implement their data mining applications techniques on a Grid platform such as Globus ToolKit, DGET, etc.
• We analyse the behaviour of the MacDowell-Mansouri action with internal symmetry group $\mathrm{SO}(4,1)$ under the covariant Hamiltonian formulation. The field equations, known in this formalism as the De Donder-Weyl equations, are obtained by means of the graded Poisson-Gerstenhaber bracket structure present within the covariant formulation. The decomposition of the internal algebra $\mathfrak{so}(4,1)\simeq\mathfrak{so}(3,1)\oplus\mathbb{R}^{3,1}$ allows the symmetry breaking $\mathrm{SO}(4,1)\to\mathrm{SO}(3,1)$, which reduces the original action to the Palatini action without the topological term. We demonstrate that, in contrast to the Lagrangian approach, this symmetry breaking can be performed indistinctly in the covariant Hamiltonian formalism either before or after the variation of the De Donder-Weyl Hamiltonian has been done, recovering Einstein's equations via the Poisson-Gerstenhaber bracket.
• We introduce a weak notion of barycenter of a probability measure $\mu$ on a metric measure space $(X, d, {\bf m})$, with the metric $d$ and reference measure ${\bf m}$. Under the assumption that optimal transport plans are given by mappings, we prove that our barycenter $B(\mu)$ is well defined; it is a probability measure on $X$ supported on the set of the usual metric barycenter points of the given measure $\mu$. The definition uses the canonical embedding of the metric space $X$ into its Wasserstein space $P(X)$, pushing a given measure $\mu$ forward to a measure on $P(X)$. We then regularize the measure by the Wasserstein distance to the reference measure ${\bf m}$, and obtain a uniquely defined measure on $X$ supported on the barycentric points of $\mu$. We investigate various properties of $B(\mu)$.
• Mar 30 2017 math.DS arXiv:1703.09753v1
We study the self-semiconjugations of the Tent map $f:\, x\mapsto 1-|2x-1|$ for $x\in [0,\, 1]$. We prove that each of these semiconjugations $\xi$ is piecewise linear. For any $n\in \mathbb{N}$ we denote $A_n = f^{-n}(0)$ and describe the maps $\psi:\, A_n\rightarrow [0,\, 1]$ such that $\psi\circ f = f\circ \psi$. We also describe all possible restrictions of self-semiconjugations of the Tent map onto $A_n$ and prove that for any $\alpha\in A_n\setminus A_{n-1}$ a restriction is completely determined by its value at $\alpha$.
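The commutation condition $\xi\circ f = f\circ \xi$ is easy to check numerically. The simplest self-semiconjugations of the Tent map are its own iterates $f^k$, which commute with $f$ by construction; the sketch below verifies this on a grid (the abstract's result is the classification of *all* such maps, which is much stronger).

```python
def tent(x):
    # Tent map f(x) = 1 - |2x - 1| on [0, 1].
    return 1.0 - abs(2.0 * x - 1.0)

def iterate(f, n):
    # Return the n-th iterate f^n as a function.
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

# f^2 is a self-semiconjugation of f: (f o f^2)(x) == (f^2 o f)(x) on [0, 1].
f2 = iterate(tent, 2)
xs = [i / 1000 for i in range(1001)]
assert all(abs(tent(f2(x)) - f2(tent(x))) < 1e-9 for x in xs)
```

Each iterate $f^k$ is piecewise linear with $2^k$ linear pieces, consistent with the abstract's statement that every self-semiconjugation is piecewise linear.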
• A study is presented of two-dimensional superintegrable systems separating in Cartesian coordinates and allowing an integral of motion that is a fourth order polynomial in the momenta. All quantum mechanical potentials that do not satisfy any linear differential equation are found. They do however satisfy nonlinear ODEs. We show that these equations always have the Painlevé property and integrate them in terms of known Painlevé transcendents or elliptic functions.
• Mar 30 2017 math.HO arXiv:1703.09750v1
This is a short essay on the roles of Max Dehn and Axel Thue in the formulation of the word problem for (semi-)groups, and the story of the proofs showing that the word problem is undecidable.
• Let $Y$ be a sublattice of a vector lattice $X$. We consider the problem of identifying the smallest order closed sublattice of $X$ containing $Y$. It is known that the analogy with topological closure fails. Let $\overline{Y}^o$ be the order closure of $Y$ consisting of all order limits of nets of elements from $Y$. Then $\overline{Y}^o$ need not be order closed. We show that in many cases the smallest order closed sublattice containing $Y$ is in fact the second order closure $\overline{\overline{Y}^o}^o$. Moreover, if $X$ is a $\sigma$-order complete Banach lattice, then the condition that $\overline{Y}^o$ is order closed for every sublattice $Y$ characterizes order continuity of the norm of $X$. The present paper provides a general approach to a fundamental result in financial economics concerning the spanning power of options written on a financial asset.
• We investigate a stress-energy tensor for a CFT at strong coupling inside a small five-dimensional rotating Myers-Perry black hole with equal angular momenta by using the holographic method. As a gravitational dual, we perturbatively construct a black droplet solution by applying the "derivative expansion" method, generalizing the work of Haddad (arXiv:1207.2305), and analytically compute the holographic stress-energy tensor for our solution. We find that the stress-energy tensor is finite at both the future and past outer (event) horizons, and that the energy density is negative just outside the event horizons due to the Hawking effect. Furthermore, we apply the holographic method to the question of quantum instability of the Cauchy horizon since, by construction, our black droplet solution also admits a Cauchy horizon inside.
• In this paper, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the non-linearly evolved matter density field. Assuming that only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the non-linear coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the non-linear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent, caused only by the emergence of the transverse component after shell-crossing. As this method circumvents one of the strongest non-linearities in the density field, the reconstructed field is well described by linear theory and is immune to the bulk-flow smearing of the BAO signature, and therefore could be used to significantly improve the precision of measurements of the sound horizon scale. For a perfect large-scale structure survey at redshift zero, without Poisson or instrumental noise, the fractional error is reduced by a factor of 2.7, very close to the ideal limit one could achieve with a linear power spectrum and Gaussian covariance matrix.
• Mar 30 2017 astro-ph.EP arXiv:1703.09741v1
This chapter of the book Planetary Ring Systems addresses the origin of planetary rings, one of the least understood processes related to planet formation and evolution. Whereas rings seem ubiquitous around giant planets, their great diversity of mass, structure and composition is a challenge for any formation scenario. Recent advances in our understanding of ring and satellite formation and destruction suggest that these processes are interconnected, so that rings and satellites may be two aspects of the same geological system. However, no single theory seems able to explain the origin of the different planetary rings known in our Solar System, and it now seems evident that rings may result from a variety of processes like giant collisions, tidal stripping of comets or satellites, as well as planet formation itself. In order to build any theory of ring formation it is important to specify physical processes that affect the long-term evolution of rings, as well as to describe the different observations that any ring formation model should explain. This is the topic of section 2. In section 3, we focus our attention on Saturn's rings and their main properties, and then discuss the pros and cons of a series of ring formation models. We also discuss the link between rings and satellites. In section 4, we extend the discussion to the other giant planets (Jupiter, Uranus, and Neptune). Section 5 is devoted to new types of rings -- the recent discovery of rings orbiting small outer Solar System bodies (Centaurs), and the possible rings around extrasolar planets. In section 6, we conclude and try to identify critical observations and theoretical advances needed to better understand the origin of rings and their significance in the global evolution of planets.
• Mar 30 2017 cond-mat.quant-gas arXiv:1703.09740v1
Quench dynamics is an active area of study encompassing condensed matter physics and quantum information, with applications to cold-atomic gases and pump-probe spectroscopy of materials. Recent theoretical progress in studying quantum quenches is reviewed. Quenches in interacting one-dimensional systems as well as systems in higher spatial dimensions are covered. The appearance of non-trivial steady states following a quench in exactly solvable models is discussed, and the stability of these states to perturbations is described. Proper conserving approximations needed to capture the onset of thermalization at long times are outlined. The appearance of universal scaling for quenches near critical points, and the role of the renormalization group in capturing the transient regime, are reviewed. Finally, the effect of quenches near critical points on the dynamics of entanglement entropy and entanglement statistics is discussed. The extraction of critical exponents from the entanglement statistics is outlined.
• A back action of Dirac electrons in graphene on the hybridization of radiative and evanescent fields is found, in analogy to Newton's third law. Here, the back action appears as a localized polarization field which greatly modifies an incident surface-plasmon-polariton (SPP) field. This yields a high sensitivity to local dielectric environments and provides a scrutiny tool for molecules or proteins selectively bound to carbon atoms. A scattering matrix is presented for frequencies near the surface-plasmon (SP) resonance, showing the increase, decrease, and even full suppression of the polarization field, which enables accurate effective-medium theories to be constructed for Maxwell-equation finite-difference time-domain methods. Moreover, double peaks in the absorption spectra for hybrid SP and graphene-plasmon modes are significant only at a large conductor plasma frequency, but are overshadowed by a round SPP peak at a small plasma frequency when the graphene is placed close to the conductor surface. These resonant absorptions facilitate the polariton-only excitations, leading to polariton condensation for a threshold-free laser.
• We study three computer algebra systems, namely SageMath (with the SageManifolds package), Maxima (with the ctensor package), and the Python language (with the GraviPy module), which allow tensor manipulation for general relativity calculations. We present simple examples and give a benchmark of these systems. After the general analysis, we focus on the SageMath+SageManifolds system to analyze and visualize the solutions of the massless Klein-Gordon equation and geodesic motion with the Hamilton-Jacobi formalism.
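None of the following code is from the paper; as a flavor of the tensor manipulation such systems automate, here is a minimal sketch in plain SymPy (the library underlying GraviPy) that computes a Christoffel symbol of the Schwarzschild metric directly from its definition:

```python
import sympy as sp

# Schwarzschild coordinates (t, r, theta, phi) and mass M, with G = c = 1.
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)  # metric g_{ab}
ginv = g.inv()                                      # inverse metric g^{ab}

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, d] *
        (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c]) - sp.diff(g[b, c], x[d]))
        for d in range(4)))

# Gamma^r_{tt} = M (r - 2M) / r^3
print(christoffel(1, 0, 0))
```

Dedicated packages such as SageManifolds, ctensor, or GraviPy wrap exactly this kind of computation behind higher-level chart and tensor objects.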
• We show that our algorithm for inverting the sweep map on (2n, n)-Dyck paths works for any (kn, n)-Dyck path, where k is an arbitrary positive integer.
• We show that it is possible for graphene-based Josephson junctions (gJjs) to detect single photons in a wide electromagnetic spectrum from visible to radio frequencies. Our approach takes advantage of the exceptionally low electronic heat capacity of monolayer graphene and its constricted thermal conductance to its phonon degrees of freedom. Such a system could provide high sensitivity photon detection required for research areas including quantum information processing and radio-astronomy. As an example, we present our device concepts for gJj single photon detectors in both the microwave and infrared regimes. The dark count rate and intrinsic quantum efficiency are computed based on parameters from a measured gJj, demonstrating feasibility within existing technologies.
• Mar 30 2017 physics.acc-ph arXiv:1703.09735v1
We consider the process of cooling of a heavy particle beam in a co-moving electron beam of low temperature guided by a solenoidal magnetic field. This paper summarizes the main results of theoretical studies of this process conducted by the author during a period of several years. The main result of these studies is a conclusion that magnetization of the electron beam can provide the possibility of drastic enhancement of the cooling rate of a heavy particle beam with achieving equilibrium temperatures that are much lower than the transverse temperatures of the electron beam. Magnetized electron cooling is proposed for cooling of ion beams in high-luminosity colliders.
• The article examines nonisotropic Nikolskii and Besov spaces with norms defined using $L_p$-averaged moduli of continuity of functions of appropriate orders along the coordinate directions, instead of moduli of continuity of prescribed orders of their derivatives along the same directions. The author builds continuous linear mappings of such spaces of functions defined in domains of a certain type to ordinary nonisotropic Nikolskii and Besov spaces in $\mathbb{R}^d$ that are function extension operators, thereby establishing the coincidence of the two kinds of spaces on such domains. The article also provides weak asymptotics of approximation characteristics related to the problem of derivative reconstruction from function values at a given number of points, S. B. Stechkin's problem for a differential operator, and the problem of width asymptotics for nonisotropic Nikolskii-Besov classes in those domains.
• Nuclear starburst discs (NSDs) are star-forming discs that may be residing in the nuclear regions of active galaxies at intermediate redshifts. One-dimensional (1D) analytical models developed by Thompson et al. (2005) show that these discs can possess an inflationary atmosphere when dust is sublimated on parsec scales. This makes NSDs a viable source for AGN obscuration. We model the two-dimensional (2D) structure of NSDs using an iterative method in order to compute the explicit vertical solutions for a given annulus. These solutions satisfy energy and hydrostatic balance, as well as the radiative transfer equation. In comparison to the 1D model, the 2D calculation predicts an atmospheric expansion that is orders of magnitude less extensive at the parsec/sub-parsec scale, but the new scale-height $h$ may still exceed the radial distance $R$ for various physical conditions. A total of 192 NSD models are computed across the input parameter space in order to predict distributions of the line-of-sight column density $N_H$. Assuming a random distribution of input parameters, the statistics yield 56% Type 1, 23% Compton-thin Type 2 (CN), and 21% Compton-thick (CK) AGNs. Depending on the viewing angle ($\theta$) of a particular NSD (at fixed physical conditions), any central AGN can appear as Type 1, CN, or CK, which is consistent with the basic unification theory of AGNs. Our results show that $\log[N_H(\text{cm}^{-2})]\in$ [23,25.5] can be oriented at any $\theta$ from 0$^\circ$ to $\approx$80$^\circ$ due to the degeneracy in the input parameters.
• In this paper we characterize graphs which maximize the spectral radius of their adjacency matrix over all graphs of Colin de Verdière parameter at most $m$. We also characterize graphs of maximum spectral radius with no $H$ as a minor when $H$ is either $K_r$ or $K_{s,t}$. Interestingly, the extremal graphs match those which maximize the number of edges over all graphs with no $H$ as a minor when $r$ and $s$ are small, but not when they are larger.
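For readers unfamiliar with the quantity being maximized, a small numerical illustration (not from the paper): the spectral radius of a graph is the largest absolute eigenvalue of its adjacency matrix, which equals $n-1$ for the complete graph $K_n$:

```python
import numpy as np

# Adjacency matrix of K_4: all-ones matrix minus the identity.
A = np.ones((4, 4)) - np.eye(4)

# Spectral radius = largest eigenvalue in absolute value.
# eigvalsh is appropriate since adjacency matrices are symmetric.
rho = max(abs(np.linalg.eigvalsh(A)))
print(rho)  # -> 3.0, i.e. n - 1 for K_n
```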
• We first study a model, introduced recently in \cite{ES}, of a critical branching random walk in an IID random environment on the $d$-dimensional integer lattice. The walker performs critical (0-2) branching at a lattice point if and only if there is no 'obstacle' placed there. The obstacles appear at each site with probability $p\in [0,1)$ independently of each other. We also consider a similar model, where the offspring distribution is subcritical. Let $S_n$ be the event of survival up to time $n$. We show that on a set of full $\mathbb P_p$-measure, as $n\to\infty$, (i) Critical case: $P^\omega(S_n)\sim\frac{2}{qn}$; (ii) Subcritical case: $P^\omega(S_n)=\exp\left[\left(-C_{d,q}\cdot\frac{n}{(\log n)^{2/d}}\right)(1+o(1))\right]$, where $C_{d,q}>0$ does not depend on the branching law. Hence, the model exhibits 'self-averaging' in the critical case but not in the subcritical one. I.e., in (i) the asymptotic tail behavior is the same as in a "toy model" where space is removed, while in (ii) the spatial survival probability is larger than in the corresponding toy model, suggesting spatial strategies. We utilize a spine decomposition of the branching process as well as some known results on random walks.
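The critical toy-model asymptotics can be checked numerically (this sketch is not from the paper; it treats the obstacle-free spaceless case, i.e. $q=1$). For a critical (0-2) Galton-Watson process, the survival probability obeys an exact one-step recursion through the offspring generating function $f(s)=(1+s^2)/2$, and Kolmogorov's estimate gives $q_n \sim 2/n$:

```python
# Exact survival probability q_n for a critical (0-2) Galton-Watson
# process (offspring 0 or 2, each with probability 1/2).
# Extinction recursion: 1 - q_{n+1} = f(1 - q_n) with f(s) = (1 + s^2)/2,
# which simplifies to q_{n+1} = q_n - q_n^2 / 2.
def survival(n):
    q = 1.0  # survival up to time 0 is certain
    for _ in range(n):
        q = q - q * q / 2.0
    return q

# Kolmogorov asymptotics: q_n ~ 2/n, the spaceless analogue of
# P^omega(S_n) ~ 2/(q n) in the critical case.
n = 100000
print(n * survival(n))  # -> close to 2
```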
• We report the detection of a largely ionized very-high velocity cloud (VHVC; $v_{\rm LSR}\sim-350$ km/s) toward M33 with the Hubble Space Telescope/Cosmic Origins Spectrograph. The VHVC is detected in OI, CII, SiII, and SiIII absorption along five sightlines separated by ~0.06-0.4 degrees. On sub-degree scales, the velocities and ionic column densities of the VHVC remain relatively smooth, with standard deviations of +/-14 km/s and +/-0.15 dex between the sightlines, respectively. The VHVC has a metallicity of [OI/HI]=-0.56+/-0.17 dex (Z=0.28+/-0.11 Z$_{\odot}$). Despite the position-velocity proximity of the VHVC to the ionized Magellanic Stream, the VHVC's higher metallicity makes it unlikely to be associated with the Stream, highlighting the complex velocity structure of this region of sky. We investigate the VHVC's possible origin by revisiting its surrounding HI environment. We find that the VHVC may be: (1) a MW CGM cloud, (2) related to a nearby HI VHVC -- Wright's Cloud, or (3) connected to M33's northern warp. Furthermore, the VHVC could be a bridge connecting Wright's Cloud and M33's northern warp, which would make it a Magellanic-like structure in the halo of M33.
• We present a lattice quantum chromodynamics determination of the scalar and vector form factors for the $B_s \rightarrow D_s \ell \nu$ decay over the full physical range of momentum transfer. In conjunction with future experimental data, our results will provide a new method to extract $|V_{cb}|$, which may elucidate the current tension between exclusive and inclusive determinations of this parameter. Combining the form factor results at non-zero recoil with recent HPQCD results for the $B \rightarrow D \ell \nu$ form factors, we determine the ratios $f^{B_s \rightarrow D_s}_0(M_\pi^2) / f^{B \rightarrow D}_0(M_K^2) = 1.000(62)$ and $f^{B_s \rightarrow D_s}_0(M_\pi^2) / f^{B \rightarrow D}_0(M_\pi^2) = 1.006(62)$. These results give the fragmentation fraction ratios $f_s/f_d = 0.310(30)_{\mathrm{stat.}}(21)_{\mathrm{syst.}}(6)_{\mathrm{theor.}}(38)_{\mathrm{latt.}}$ and $f_s/f_d = 0.307(16)_{\mathrm{stat.}}(21)_{\mathrm{syst.}}(23)_{\mathrm{theor.}}(44)_{\mathrm{latt.}}$, respectively. The fragmentation fraction ratio is an important ingredient in experimental determinations of $B_s$ meson branching fractions at hadron colliders, in particular for the rare decay ${\cal B}(B_s \rightarrow \mu^+ \mu^-)$. In addition to the form factor results, we make the first prediction of the branching fraction ratio $R(D_s) = {\cal B}(B_s\to D_s\tau\nu)/{\cal B}(B_s\to D_s\ell\nu) = 0.301(6)$, where $\ell$ is an electron or muon. Current experimental measurements of the corresponding ratio for the semileptonic decays of $B$ mesons disagree with Standard Model expectations at the level of nearly four standard deviations. Future experimental measurements of $R(D_s)$ may help understand this discrepancy.
• We chart the breakdown of semiclassical gravity by analyzing the Virasoro conformal blocks to high numerical precision, focusing on the heavy-light limit corresponding to a light probe propagating in a BTZ black hole background. In the Lorentzian regime, we find empirically that the initial exponential time-dependence of the blocks transitions to a universal $t^{-\frac{3}{2}}$ power-law decay. For the vacuum block the transition occurs at $t \approx \frac{\pi c}{6 h_L}$, confirming analytic predictions. In the Euclidean regime, due to Stokes phenomena the naive semiclassical approximation fails completely in a finite region enclosing the 'forbidden singularities'. We emphasize that limitations on the reconstruction of a local bulk should ultimately stem from distinctions between semiclassical and exact correlators.
• Given a graph $G$, a proper $k$-coloring of $G$ is a partition $c = (S_i)_{i\in [1,k]}$ of $V(G)$ into $k$ stable sets $S_1,\ldots, S_{k}$. Given a weight function $w: V(G) \to \mathbb{R}^+$, the weight of a color $S_i$ is defined as $w(i) = \max_{v \in S_i} w(v)$ and the weight of a coloring $c$ as $w(c) = \sum_{i=1}^{k}w(i)$. Guan and Zhu [Inf. Process. Lett., 1997] defined the weighted chromatic number of a pair $(G,w)$, denoted by $\sigma(G,w)$, as the minimum weight of a proper coloring of $G$. For a positive integer $r$, they also defined $\sigma(G,w;r)$ as the minimum of $w(c)$ among all proper $r$-colorings $c$ of $G$. The complexity of determining $\sigma(G,w)$ when $G$ is a tree was open for almost 20 years, until Araújo et al. [SIAM J. Discrete Math., 2014] recently proved that the problem cannot be solved in time $n^{o(\log n)}$ on $n$-vertex trees unless the Exponential Time Hypothesis (ETH) fails. The objective of this article is to provide hardness results for computing $\sigma(G,w)$ and $\sigma(G,w;r)$ when $G$ is a tree or a forest, relying on complexity assumptions weaker than the ETH. Namely, we study the problem from the viewpoint of parameterized complexity, and we assume the weaker hypothesis $FPT \neq W[1]$. Building on the techniques of Araújo et al., we prove that when $G$ is a forest, computing $\sigma(G,w)$ is $W[1]$-hard parameterized by the size of a largest connected component of $G$, and that computing $\sigma(G,w;r)$ is $W[2]$-hard parameterized by $r$. Our results rule out the existence of $FPT$ algorithms for computing these invariants on trees or forests for many natural choices of the parameter.
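As a concrete illustration of the definitions above (not from the paper), the weighted chromatic number $\sigma(G,w)$ of a tiny graph can be computed by brute force, minimizing $w(c) = \sum_i \max_{v \in S_i} w(v)$ over all proper colorings:

```python
from itertools import product

def sigma(n, edges, w):
    """Brute-force weighted chromatic number: minimum over proper
    colorings of the sum, over color classes, of the largest vertex
    weight in that class. Exponential in n; illustration only."""
    best = float('inf')
    for col in product(range(n), repeat=n):
        if any(col[u] == col[v] for u, v in edges):
            continue  # adjacent vertices share a color: not proper
        weight = sum(max(w[v] for v in range(n) if col[v] == c)
                     for c in set(col))
        best = min(best, weight)
    return best

# Path 0-1-2 with weights 3, 2, 1: the optimum is the 2-coloring
# {0,2}, {1} with weight max(3,1) + 2 = 5.
print(sigma(3, [(0, 1), (1, 2)], [3, 2, 1]))  # -> 5
```

Note that the optimal number of colors can exceed the chromatic number for other weightings, which is precisely what makes the problem hard on trees.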

Steve Flammia Mar 30 2017 20:12 UTC

Yes, I did indeed mean that the results of the previous derivations are correct and that predictions from experiments lie within the stated error bounds. To me, it is a different issue if someone derives something with a theoretical guarantee that might have sufficient conditions that are too strong

...(continued)
Robin Blume-Kohout Mar 30 2017 16:55 UTC

I agree with much of your comment. But, the assertion you're disagreeing with isn't really mine. I was trying to summarize the content of the present paper (and 1702.01853, hereafter referred to as [PRYSB]). I'll quote a few passages from the present paper to support my interpretation:

1. "[T

...(continued)
Steve Flammia Mar 30 2017 15:41 UTC

I disagree with the assertion (1) that the previous theory didn't give "the right answers." The previous theory was sound; no one is claiming that there are any mistakes in any of the proofs. However, there were nonetheless some issues.

The first issue is that the previous analysis of gate-depe

...(continued)
Robin Blume-Kohout Mar 30 2017 12:07 UTC

That's a hard question to answer. I suspect that on any questions that aren't precisely stated (and technical), there's going to be some disagreement between the authors of the two papers. After one read-through, my tentative view is that each of the two papers addresses three topics which are pre

...(continued)
LogiQ Mar 30 2017 03:23 UTC

So what is the deal?

Does this negate all the problems with https://scirate.com/arxiv/1702.01853 ?

Laura Mančinska Mar 28 2017 13:09 UTC

Great result!

For those familiar with I_3322, William here gives an example of a nonlocal game exhibiting a behaviour that many of us suspected (but couldn't prove) to be possessed by I_3322.

gae spedalieri Mar 13 2017 14:13 UTC

1) Sorry but this is false.

1a) That analysis is specifically for reducing QECC protocol to an entanglement distillation protocol over certain class of discrete variable channels. Exactly as in BDSW96. Task of the protocol is changed in the reduction.

1b) The simulation is not via a general LOCC b

...(continued)
Siddhartha Das Mar 13 2017 13:22 UTC

We feel that we have cited and credited previous works appropriately in our paper. To clarify:

1) The LOCC simulation of a channel and the corresponding adaptive reduction can be found worked out in full generality in the 2012 Master's thesis of Muller-Hermes. We have cited the original paper BD

...(continued)
gae spedalieri Mar 13 2017 08:56 UTC

This is one of those papers where the contribution of previous literature is omitted and not fairly represented.

1- the LOCC simulation of quantum channels (not necessarily teleportation based) and the corresponding general reduction of adaptive protocols was developed in PLOB15 (https://arxiv.org/

...(continued)
Noon van der Silk Mar 08 2017 04:45 UTC

I feel that while the proliferation of GUNs is unquestionably a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent

...(continued)