# Top arXiv papers

• Strong magnetic fields, synchrotron emission, and Compton scattering are omnipresent in compact celestial X-ray sources. Emissions in the X-ray energy band are consequently expected to be linearly polarized. X-ray polarimetry provides a unique diagnostic to study the location and fundamental mechanisms behind emission processes. The polarization of emissions from a bright celestial X-ray source, the Crab, is reported here for the first time in the hard X-ray band (~20-160 keV). The Crab is a complex system consisting of a central pulsar, a diffuse pulsar wind nebula, as well as structures in the inner nebula including a jet and torus. Measurements are made by a purpose-built and calibrated polarimeter, PoGO+. The polarization vector is found to be aligned with the spin axis of the pulsar, with a polarization fraction PF = (20.9 $\pm$ 5.0)%. This is higher than that of the optical diffuse nebula, implying a more compact emission site, though not as compact as, e.g., the synchrotron knot. Contrary to measurements at higher energies, no significant temporal evolution of phase-integrated polarization parameters is observed. The polarization parameters for the pulsar itself are measured for the first time in the X-ray energy band and are consistent with observations at optical wavelengths.
• A fiber-fed Echelle spectrograph at the 2.3 m Vainu Bappu Telescope (VBT), Kavalur, has been in operation since 2005. Owing to various technological advancements in precision spectroscopy in recent years, several research avenues have opened in observational astronomy. These developments have created a demand to improve the Doppler precision of our spectrograph. Currently, the stability of the instrument is compromised by the temperature and pressure fluctuations inside the Echelle room. Further, a better wavelength calibration approach is needed to carefully track and disentangle the instrumental effects from stellar spectra. While planning a possible upgrade with an Iodine absorption gas cell, we measured the raw stability of the spectrograph using a series of calibration frames taken with the ThAr gas discharge lamp. The time series data were analysed with a cross-correlation method, and the shift in ThAr emission lines was accurately measured across different Echelle orders. In this paper, we present our stability analysis methodology and results for the Kavalur spectrograph. We also identify possible sources of error and discuss our strategy to mitigate them.
• We study the linear response of doped three-dimensional Dirac and Weyl semimetals to vector potentials, by calculating the wave-vector and frequency dependent current-current response function analytically. The longitudinal part of the dynamic current-current response function is then used to study the plasmon dispersion and the optical conductivity. The transverse response in the static limit yields the orbital magnetic susceptibility. In a Weyl semimetal, along with the current-current response function, all these quantities are significantly impacted by the presence of parallel electric and magnetic fields (a finite ${\bf E}\cdot{\bf B}$ term), and can be used to experimentally explore the chiral anomaly.
• We present experimental results obtained from pipe flows generated by fractal-shaped orifices or openings. We compare different fractal orifices and their efficiency at regenerating axisymmetric flows and at returning to the standard flow generated by a perforated plate or a circular orifice plate. We consider two families of fractal openings, mono-orifice and complex-orifice, and emphasize the differences between the two. For the Reynolds numbers we use, we found that there is an optimum fractal iteration level above which no improvement for practical applications such as flowmetering is to be expected. The main parameters we propose for the characterisation of fractal orifices are the connexity parameter, the symmetry angle and the gap to the wall $\delta^*_g$. The results presented here are among the first for flows forced through fractal openings and will serve as a reference for future studies and as benchmarks for numerical applications.
• In this paper, we prove fourth moment theorems for multidimensional free Poisson limits on free Wigner chaos or on the free Poisson algebra. We find that a sequence of infinite-dimensional vectors of free stochastic multiple integrals of deterministic functions, with respect to a free Brownian motion or a free Poisson random measure, converges to an infinite-dimensional free Poisson distribution if the third and fourth joint moments of any two component sequences of the vector sequence converge to the corresponding moments of the infinite-dimensional free Poisson distribution.
• Motivated by applications in distributed storage, the storage capacity of a graph was recently defined to be the maximum amount of information that can be stored across the vertices of a graph such that the information at any vertex can be recovered from the information stored at the neighboring vertices. Computing the storage capacity is a fundamental problem in network coding and is related, or equivalent, to some well-studied problems such as index coding with side information and generalized guessing games. In this paper, we consider storage capacity as a natural information-theoretic analogue of the minimum vertex cover of a graph. Indeed, while it was known that storage capacity is upper bounded by minimum vertex cover, we show that by treating it as such we can get a $3/2$ approximation for planar graphs, and a $4/3$ approximation for triangle-free planar graphs. Since the storage capacity is intimately related to the index coding rate, we get a $2$ approximation of index coding rate for planar graphs and $3/2$ approximation for triangle-free planar graphs. Previously only a trivial $4$ approximation of the index coding rate was known for planar graphs. We then develop a general method of "gadget covering" to upper bound the storage capacity in terms of the average of a set of vertex covers. This method is intuitive and leads to the exact characterization of storage capacity for various families of graphs. As an illustrative example, we use this approach to derive the exact storage capacity of cycles-with-chords, a family of graphs related to outerplanar graphs. Finally, we generalize the storage capacity notion to include recovery from partial failures in distributed storage. We show tight upper and lower bounds on this partial recovery capacity that scales nicely with the fraction of failure in a vertex.
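Since storage capacity is upper-bounded by the minimum vertex cover, any vertex cover gives a quick upper bound. The classic maximal-matching heuristic below is a generic 2-approximation sketch of that idea, not the authors' 3/2-approximation for planar graphs:

```python
def greedy_vertex_cover(edges):
    """Maximal-matching heuristic: for each uncovered edge, take both endpoints.
    Returns a vertex cover at most twice the minimum size; its size is in
    particular an upper bound on the graph's storage capacity."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# 5-cycle: the minimum vertex cover has size 3; the heuristic returns 4 here
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover = greedy_vertex_cover(edges)
```

The gap between the heuristic's cover (4) and the optimum (3) on the 5-cycle shows why tighter, structure-aware bounds like the paper's "gadget covering" are worth the effort.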
• 49 Cam is a cool magnetic chemically peculiar star which has been noted for showing strong, complex Zeeman linear polarisation signatures. This paper describes magnetic and chemical surface maps obtained for 49 Cam using the INVERS10 magnetic Doppler imaging code and high-resolution spectropolarimetric data in all four Stokes parameters collected with the ESPaDOnS and Narval spectropolarimeters at the Canada-France-Hawaii Telescope and Pic du Midi Observatory. The reconstructed magnetic field maps of 49 Cam show a relatively complex structure. Describing the magnetic field topology in terms of spherical harmonics, we find significant contributions of modes up to l=3, including toroidal components. Observations cannot be reproduced using a simple low-order multipolar magnetic field structure. 49 Cam exhibits a level of field complexity that has not been seen in magnetic maps of other cool Ap stars. Hence we conclude that relatively complex magnetic fields are observed in Ap stars at both low and high effective temperatures. In addition to mapping the magnetic field, we also derive surface abundance distributions of nine chemical elements: Ca, Sc, Ti, Cr, Fe, Ce, Pr, Nd and Eu. Comparing these abundance maps with the reconstructed magnetic field geometry, we find no clear relationship between the abundance distributions and the magnetic field for some elements. For other elements, however, some distinct patterns are found. We discuss these results in the context of other recent magnetic mapping studies and theoretical predictions of radiative diffusion.
• We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
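The core sampling step described above (a log-normal density field, then Poisson draws per cell) can be sketched in a few lines. This is a 1-D toy version, not the released code; the grid size, mean count, and Gaussian rms below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, nbar, sigma = 64, 5.0, 0.5   # illustrative grid size, mean count, Gaussian rms

# Gaussian field g; delta = exp(g - sigma^2/2) - 1 is log-normal with E[delta] = 0
g = rng.normal(0.0, sigma, n_cells)
delta = np.exp(g - sigma**2 / 2) - 1.0

# Poisson-sample galaxy counts cell by cell from the log-normal density field
counts = rng.poisson(nbar * (1.0 + delta))
```

Note that `delta > -1` holds everywhere by construction, so the Poisson rate is always non-negative; unlike a Gaussian field, a log-normal density never goes negative, which is one reason the log-normal ansatz is popular for mocks.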
• We prove that the injection of a differential graded Lie algebra, free as a Lie algebra on a non-negatively graded vector space, into its completion is a quasi-isomorphism. Consequences for differential graded Lie algebra models of rational homotopy type are presented. (arXiv:1706.09194, Jun 29 2017, math.AT)
• We study the problem of inferring network topology from information cascades, in which the amount of time taken for information to diffuse across an edge in the network follows an unknown distribution. Unlike previous studies, which assume knowledge of these distributions, we only require that diffusion along different edges in the network be independent, together with limited moment information (e.g., the means). We introduce the concept of a separating vertex set for a graph, which is a set of vertices such that for any two given distinct vertices of the graph, there exists a vertex whose distances to the two are different. We show that a necessary condition for reconstructing a tree perfectly using distance information between pairs of vertices is given by the size of an observed separating vertex set. We then propose an algorithm to recover the tree structure using infection times, whose differences have means corresponding to the distance between two vertices. To improve the accuracy of our algorithm, we propose the concept of redundant vertices, which allows us to perform averaging to better estimate the distance between two vertices. Though the theory is developed mainly for tree networks, we demonstrate how the algorithm can be extended heuristically to general graphs. Simulation results suggest that our proposed algorithm performs better than some current state-of-the-art network reconstruction methods.
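As a toy illustration of the distance-from-infection-times idea (my own sketch, with exponential edge delays of mean 1 on a path graph; the paper's setting is more general):

```python
import random

random.seed(1)

# Path graph 0-1-2-3; each edge's diffusion delay is Exp(1), so the mean
# infection-time difference between two vertices equals their hop distance.
def cascade_times(n_nodes):
    t = [0.0] * n_nodes
    for v in range(1, n_nodes):
        t[v] = t[v - 1] + random.expovariate(1.0)
    return t

# Average infection times over many cascades started at vertex 0
n_casc = 5000
est = [0.0] * 4
for _ in range(n_casc):
    t = cascade_times(4)
    for v in range(4):
        est[v] += t[v] / n_casc
# est[v] now approximates the graph distance from the source vertex 0
```

Averaging over cascades plays the role of the paper's redundant-vertex averaging: the more cascades observed, the tighter the distance estimates, and from pairwise distances the tree can then be reconstructed.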
• The Quark-Hadron Chiral Parity-Doublet Model (Q$\chi$P) is applied to calculate compact star properties in the presence of a deconfinement phase transition. Within this model, a consistent description of nuclear matter properties, chiral symmetry restoration and a transition from hadronic to quark and gluonic degrees of freedom is possible within one unified approach. We find that the equation of state obtained is consistent with recent perturbative QCD results and is able to accommodate observational constraints of massive and small neutron stars. Furthermore, we show that important features of the equation of state, like the symmetry energy and its slope, are well within their observational constraints.
• First we discuss some early work of Ulrike Feudel on structure formation in nonlinear reactions including ions, and on the efficiency of the conversion of chemical into electrical energy. Then we give a survey of energy conversion from chemical to higher forms of energy such as mechanical, electrical and ecological energy. We consider examples of energy conversion in several natural processes and in devices such as fuel cells. Further, as an example, we study analytically the dynamics and efficiency of a simple "active circuit" that converts chemical into electrical energy and drives currents, roughly modelling a fuel cell. Finally we investigate an analogous ecological system of Lotka-Volterra type consisting of an "active species" consuming some passive "chemical food". We show analytically for both models that the efficiency increases with the load, reaches values higher than 50 percent in a narrow regime of optimal load, and drops abruptly to zero beyond some maximal load.
• Given a Riemannian manifold, Weyl's law indicates how the spectrum of the Laplacian behaves asymptotically. Because of this result, there has been growing interest in finding geometric bounds compatible with the law. In the case of hypersurfaces, the isoperimetric ratio is a natural geometric quantity that allows one to bound the spectrum from above. We investigate the problem and find an example of a hypersurface whose eigenvalues are bounded from below by the isoperimetric ratio.
• In this paper, the inverse function and two other functions over $F_{p^m}$ are employed to construct three classes of optimal $p$-ary cyclic codes with parameters $[p^m-1, p^m-2m-2, 4]$, where $m > 1$ is an integer and $p > 3$ is an odd prime. In addition, perfect nonlinear monomials, almost perfect nonlinear monomials and a number of other monomials over $F_{5^m}$ are used to construct some optimal quinary cyclic codes with the same parameters.
• In Cox regression it is sometimes of interest to study time-varying effects (TVE) of exposures and to test the proportional hazards assumption. TVEs can be investigated with log hazard ratios modelled as a function of time. Missing data on exposures are common and multiple imputation (MI) is a popular approach to handling this, to avoid the potential bias and loss of efficiency resulting from a 'complete-case' analysis. Two MI methods have been proposed for when the substantive model is a Cox proportional hazards regression: an approximate method (White and Royston, Statist. Med. 2009;28:1982-98) and a substantive-model-compatible method (Bartlett et al., SMMR 2015;24:462-87). At present, neither method accommodates TVEs of exposures. We extend them to do so for a general form for the TVEs and give specific details for TVEs modelled using restricted cubic splines. Simulation studies assess the performance of the methods under several underlying shapes for TVEs. Our proposed methods give approximately unbiased TVE estimates for binary exposures with missing data, but for continuous exposures the substantive-model-compatible method performs better. The methods also give approximately correct type I errors in the test for proportional hazards when there is no TVE, and gain power to detect TVEs relative to complete-case analysis. Ignoring TVEs at the imputation stage results in biased TVE estimates, incorrect type I errors and substantial loss of power in detecting TVEs. We also propose a multivariable TVE model selection algorithm. The methods are illustrated using data from the Rotterdam Breast Cancer Study. Example R code is provided.
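For readers implementing the spline-based TVE model, a restricted cubic spline basis (Harrell's parametrization, linear beyond the outer knots) can be built directly. This is a generic sketch, not the authors' code, and the knot placement below is an illustrative assumption:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parametrization).
    The first column is x itself; the fitted spline is linear beyond
    the outer knots, which is what makes it 'restricted'."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)
    K = len(k)
    plus = lambda u: np.maximum(u, 0.0) ** 3   # truncated cubic (u)_+^3
    d = k[K - 1] - k[K - 2]
    cols = [x]
    for j in range(K - 2):
        cols.append(plus(x - k[j])
                    - plus(x - k[K - 2]) * (k[K - 1] - k[j]) / d
                    + plus(x - k[K - 1]) * (k[K - 2] - k[j]) / d)
    return np.column_stack(cols)

# Evaluate the basis of log(time) to model a TVE, with hypothetical knots:
B = rcs_basis(np.log([0.5, 1.0, 2.0, 5.0]), knots=np.log([0.3, 1.0, 4.0]))
```

Multiplying an exposure by these basis columns yields the time-by-exposure interaction terms whose coefficients trace the log hazard ratio as a function of time.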
• Online advertising and product recommendation are important domains of application for multi-armed bandit methods. In these fields, the reward that is immediately available is most often only a proxy for the actual outcome of interest, which we refer to as a conversion. For instance, in web advertising, clicks can be observed within a few seconds after an ad display, but the corresponding sale -- if any -- will take hours, if not days, to happen. This paper proposes and investigates a new stochastic multi-armed bandit model in the framework proposed by Chapelle (2014) -- based on empirical studies in the field of web advertising -- in which each action may trigger a future reward that will then happen with a stochastic delay. We assume that the probability of conversion associated with each action is unknown while the distribution of the conversion delay is known, distinguishing between the (idealized) case where the conversion events may be observed whatever their delay and the more realistic setting in which late conversions are censored. We provide performance lower bounds as well as two simple but efficient algorithms based on the UCB and KLUCB frameworks. The latter algorithm, which is preferable when conversion rates are low, is based on a Poissonization argument, of independent interest in other settings where aggregation of Bernoulli observations with different success probabilities is required.
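A minimal simulation conveys the delayed-conversion setting: a plain UCB index is run on the conversions observed so far, while conversions triggered by past pulls arrive after a random delay. This is my own sketch of the problem setup, not the paper's algorithm, which additionally corrects for not-yet-arrived (censored) conversions:

```python
import math
import random

random.seed(0)

# Two arms with unknown conversion probabilities; conversions arrive after a
# random delay (here roughly geometric with known mean), as in Chapelle's model.
p = [0.2, 0.35]
horizon, mean_delay = 5000, 5.0
pending = []                       # (arrival_time, arm, reward)
n = [0, 0]                         # pulls per arm
s = [0.0, 0.0]                     # conversions observed so far per arm

for t in range(1, horizon + 1):
    # credit conversions whose delay has elapsed
    still = []
    for (arr, a, r) in pending:
        if arr <= t:
            s[a] += r
        else:
            still.append((arr, a, r))
    pending = still

    # UCB index on observed conversions (naive: ignores the censoring bias)
    def ucb(a):
        if n[a] == 0:
            return float("inf")
        return s[a] / n[a] + math.sqrt(2 * math.log(t) / n[a])

    arm = max(range(2), key=ucb)
    n[arm] += 1
    if random.random() < p[arm]:
        delay = 1 + int(random.expovariate(1.0 / mean_delay))
        pending.append((t + delay, arm, 1.0))
```

Because recent pulls have conversions still in flight, the naive empirical means are biased downward; handling that bias properly is exactly what the paper's UCB and KLUCB variants address.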
• In this paper we prove existence and pathwise uniqueness for a class of stochastic differential equations (with coefficients $\sigma_{ij}, b_i$ and initial condition $y$ in the space of tempered distributions) that may be viewed as a generalisation of Ito's original equations with smooth coefficients. The solutions are characterized as the translates of a finite-dimensional diffusion whose coefficients $\sigma_{ij}\star \tilde{y}, b_i\star \tilde{y}$ are assumed to be locally Lipschitz. Here $\star$ denotes convolution and $\tilde{y}$ is the distribution which, on functions, is realised by the formula $\tilde{y}(r) := y(-r)$. The expected value of the solution satisfies a non-linear evolution equation related to the forward Kolmogorov equation associated with the above finite-dimensional diffusion.
• We consider a monitoring application where sensors periodically report data to a common receiver in a time division multiplex fashion. The sensors are constrained by the limited and unpredictable energy availability provided by Energy Harvesting (EH), and by the channel impairments. To maximize the quality of the reported data, the packets transmitted contain newly generated data blocks together with up to $r - 1$ previously unsuccessfully delivered ones, where $r$ is a design parameter; such blocks are compressed, concatenated and encoded with a channel code. The scheme applies lossy compression, such that the fidelity of the individual blocks is traded with the reliability provided by the channel code. We show that the proposed strategy outperforms the one in which retransmissions are not allowed. We also investigate the trade-off between the value of $r$ and the compression and coding rates under the constraints of energy availability, and, once $r$ has been fixed, use a Markov Decision Process (MDP) to optimize the compression/coding rates. Finally, we implement a reinforcement learning algorithm, through which devices can learn the optimal transmission policy without knowing a priori the statistics of the EH process, and show that it indeed reaches the performance obtained via MDP.
• We consider a model with two real scalar fields which admits phantom domain wall solutions. We investigate the structure and evolution of these phantom domain walls in an expanding homogeneous and isotropic universe. In particular, we show that the increase of the tension of the domain walls with cosmic time, associated with the evolution of the phantom scalar field, is responsible for an additional damping term in their equations of motion. We describe the macroscopic dynamics of phantom domain walls, showing that extended phantom defects whose tension varies on a cosmological timescale cannot be the dark energy. (arXiv:1706.09182, Jun 29 2017, gr-qc)
• The finite size of the doubly heavy diquark gives a positive correction to the masses of baryons calculated in the local diquark approximation. We evaluate this correction for the basic states of doubly charmed baryons to give quite accurate predictions relevant for current searches for those baryons at LHCb: $m[{\Xi_{cc}^{1/2}}^{+}]\approx m[{\Xi_{cc}^{1/2}}^{++}]=3615\pm 55$ MeV and $m[{\Xi_{cc}^{3/2}}^{+}]\approx m[{\Xi_{cc}^{3/2}}^{++}]=3747\pm 55$ MeV.
• In this paper we propose local approximation spaces for localized model order reduction procedures such as domain decomposition and multiscale methods. Those spaces are constructed from local solutions of the partial differential equation (PDE) with random boundary conditions, yield an approximation that converges provably at a nearly optimal rate, and can be generated at close to optimal computational complexity. In many localized model order reduction approaches like the generalized finite element method, static condensation procedures, and the multiscale finite element method local approximation spaces can be constructed by approximating the range of a suitably defined transfer operator that acts on the space of local solutions of the PDE. Optimal local approximation spaces that yield in general an exponentially convergent approximation are given by the left singular vectors of this transfer operator [I. Babuška and R. Lipton 2011, K. Smetana and A. T. Patera 2016]. However, the direct calculation of these singular vectors is computationally very expensive. In this paper, we propose an adaptive randomized algorithm based on methods from randomized linear algebra [N. Halko et al. 2011], which constructs a local reduced space approximating the range of the transfer operator and thus the optimal local approximation spaces. The adaptive algorithm relies on a probabilistic a posteriori error estimator for which we prove that it is both efficient and reliable with high probability. Several numerical experiments confirm the theoretical findings. (arXiv:1706.09179, Jun 29 2017, math.NA)
• Let $K=\mathbb Q(\sqrt D)$ be a real quadratic field. We obtain a presentation of the additive semigroup $\mathcal O_K^+(+)$ of totally positive integers in $K$; its generators (indecomposable integers) and relations can be nicely described in terms of the periodic continued fraction for $\sqrt D$. We also characterize all uniquely decomposable integers in $K$ and estimate their norms. Using these results, we prove that the semigroup $\mathcal O_K^+(+)$ completely determines the real quadratic field $K$.
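The periodic continued fraction of $\sqrt D$ that the description above relies on is cheap to compute with the standard recurrence (a generic sketch, independent of the paper):

```python
import math

def sqrt_cf(D):
    """Continued fraction [a0; a1, a2, ...] of sqrt(D) for non-square D.
    Returns a0 and the periodic part; the period ends at the term 2*a0."""
    a0 = math.isqrt(D)
    m, d, a = 0, 1, a0
    period = []
    while a != 2 * a0:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return a0, period

# sqrt(14) = [3; 1, 2, 1, 6, 1, 2, 1, 6, ...]
```

The recurrence keeps $(\sqrt D + m)/d$ in lowest terms at every step, so all arithmetic stays in integers and the period is detected exactly.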
• In recent years the coincidence of the operator relations equivalence after extension (EAE) and Schur coupling (SC) was settled for the Hilbert space case. For Banach space operators, it is known that SC implies EAE, but the converse implication is only known for special classes of operators, such as Fredholm operators with index zero and operators that can in norm be approximated by invertible operators. In this paper we prove that the implication EAE $\Rightarrow$ SC also holds for inessential Banach space operators. The inessential operators were introduced as a generalization of the compact operators, and include, besides the compact operators, also the strictly singular and strictly co-singular operators; in fact they form the largest ideal such that the invertible elements in the associated quotient algebra coincide with (the equivalence classes of) the Fredholm operators.
• Utilizing multiwavelength observations and magnetic field data from SDO/AIA, SDO/HMI, GOES and RHESSI, we investigate a large-scale ejective solar eruption of 2014 December 18 from active region NOAA 12241. This event produced a distinctive three-ribbon flare, having two parallel ribbons corresponding to the ribbons of a standard two-ribbon flare, and a larger-scale third quasi-circular ribbon offset from the other two ribbons. There are two components to this eruptive event. First, a flux rope forms above a strong-field polarity-inversion line and erupts and grows as the parallel ribbons turn on, grow, and spread apart from that polarity-inversion line; this evolution is consistent with the tether-cutting-reconnection mechanism for eruptions. Second, the arcade that has the erupting flux rope in its core undergoes magnetic reconnection at the null point of a fan dome that envelops the erupting arcade, resulting in the formation of the quasi-circular ribbon; this is consistent with the breakout reconnection mechanism for eruptions. We find that the parallel ribbons begin well before (12 min) circular ribbon onset, indicating that tether-cutting reconnection (or a non-ideal MHD instability) initiated this event, rather than breakout reconnection. The overall setup for this large-scale (circular-ribbon diameter 100000 km) eruption is analogous to that of coronal jets (base size 10000 km), many of which, according to recent findings, result from eruptions of small-scale minifilaments. Thus these findings confirm that eruptions of sheared-core magnetic arcades seated in fan-spine null-point magnetic topology happen on a wide range of size scales on the Sun.
• We apply the master equation with a time coarse-graining approximation to a pair of detectors interacting with a scalar field. By solving the master equation numerically, we investigate the evolution of negativity between comoving detectors in de Sitter space. For the massless conformal scalar field, we find that a pair of detectors can perceive entanglement beyond the Hubble horizon scale if the initial separation of the detectors is sufficiently small. At the same time, violation of the Bell-CHSH inequality on the super-horizon scale is also detected. For the massless minimal scalar field, on the other hand, the entanglement decays within the Hubble time scale owing to the quantum noise caused by particle creation in de Sitter space, and entanglement on the super-horizon scale cannot be detected.
• The timed pattern matching problem is an actively studied topic because of its relevance in monitoring of real-time systems. There one is given a log $w$ and a specification $\mathcal{A}$ (given by a timed word and a timed automaton in this paper), and one wishes to return the set of intervals for which the log $w$, when restricted to the interval, satisfies the specification $\mathcal{A}$. In our previous work we presented an efficient timed pattern matching algorithm: it adopts a skipping mechanism inspired by the classic Boyer--Moore (BM) string matching algorithm. In this work we tackle the problem of online timed pattern matching, towards embedded applications where it is vital to process a vast amount of incoming data in a timely manner. Specifically, we start with the Franek-Jennings-Smyth (FJS) string matching algorithm---a recent variant of the BM algorithm---and extend it to timed pattern matching. Our experiments indicate the efficiency of our FJS-type algorithm in online and offline timed pattern matching.
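The skipping idea can be seen already in the (untimed) Boyer-Moore-Horspool string matcher, a simpler relative of the BM and FJS algorithms the paper builds on. This sketch only illustrates the bad-character skip, not timed pattern matching itself:

```python
def horspool(text, pat):
    """Boyer-Moore-Horspool search: on a mismatch, skip ahead using a
    bad-character table instead of shifting by one position.
    (The BM and FJS algorithms use richer skip rules.)"""
    m = len(pat)
    if m == 0 or m > len(text):
        return -1 if m else 0
    # shift[c] = distance from the last occurrence of c in pat[:-1] to the end
    shift = {c: m - i - 1 for i, c in enumerate(pat[:-1])}
    pos = 0
    while pos + m <= len(text):
        if text[pos:pos + m] == pat:
            return pos
        # jump as far as the character under the pattern's last cell allows
        pos += shift.get(text[pos + m - 1], m)
    return -1
```

In the timed setting the analogue of a "character" is a timed-automaton constraint, and the skip value must be computed so that no matching interval can be jumped over -- which is the technical content the paper contributes.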
• Using the method of transversal Laplace transform, the field amplitude in the parabolic approximation is calculated in two-dimensional free space using initial values of the amplitude specified on an arbitrarily shaped monotonic curve. The obtained amplitude depends on one a priori unknown function, which can be found from a Volterra integral equation of the first kind. In the special case of a field amplitude specified on a concave parabolic curve, the exact solution is derived. Both solutions can be used to study light propagation from surfaces of tilted objects of arbitrary shape, including grazing-incidence X-ray mirrors. They can also find applications in the analysis of coherent imaging problems in X-ray optics, in phase retrieval algorithms, as well as in solving inverse problems when the initial field amplitude is sought on a curved surface.
• Sketches and diagrams play an important role in the daily work of software developers. In this paper, we investigate the use of sketches and diagrams in software engineering practice. To this end, we used both quantitative and qualitative methods. We present the results of an exploratory study in three companies and an online survey with 394 participants. Our participants included software developers, software architects, project managers, consultants, as well as researchers. They worked in different countries and on projects from a wide range of application areas. Most questions in the survey were related to the last sketch or diagram that the participants had created. Contrary to our expectations and previous work, the majority of sketches and diagrams contained at least some UML elements. However, most of them were informal. The most common purposes for creating sketches and diagrams were designing, explaining, and understanding, but analyzing requirements was also named often. More than half of the sketches and diagrams were created on analog media like paper or whiteboards and have been revised after creation. Most of them were used for more than a week and were archived. We found that the majority of participants related their sketches to methods, classes, or packages, but not to source code artifacts with a lower level of abstraction. (arXiv:1706.09172, Jun 29 2017, cs.SE)
• The usual Helmholtz decomposition expresses any vector-valued function, under some mild condition, as the sum of the gradient of a scalar function and the rotation of a vector-valued function. In this paper we show that the vector-valued function in the second term, i.e. the divergence-free part of this decomposition, can be further decomposed into the sum of a vector-valued function polarized in one component and the rotation of a vector-valued function polarized in the same component. Hence the divergence-free part depends only on two scalar functions. Further, we show the so-called completeness of the representation associated with this decomposition for the stationary wave field of a homogeneous, isotropic viscoelastic medium. That is, by applying this decomposition to the wave field, we can show that each of the three scalar functions satisfies a Helmholtz equation. Our completeness of representation is useful for solving boundary value problems in a cylindrical domain for several systems of partial differential equations in mathematical physics, such as the stationary isotropic homogeneous elastic/viscoelastic systems and the stationary isotropic homogeneous Maxwell system. As an example, using this completeness of representation, we give the solution formula for torsional deformation of a pendulum of cylindrical shape made of a homogeneous isotropic viscoelastic medium.
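A standard way to write this refinement (a generic toroidal-poloidal form with distinguished direction $\hat{\mathbf e}_3$; the notation here is an assumption, not taken from the paper) is

$$
\mathbf{u} \;=\; \nabla\varphi \;+\; \nabla\times\big(\chi\,\hat{\mathbf e}_3\big) \;+\; \nabla\times\nabla\times\big(\psi\,\hat{\mathbf e}_3\big),
$$

where the divergence-free part $\nabla\times\big(\chi\,\hat{\mathbf e}_3 + \nabla\times(\psi\,\hat{\mathbf e}_3)\big)$ is indeed the rotation of a sum of a vector field polarized along $\hat{\mathbf e}_3$ and the rotation of another such field, and so is determined by the two scalar functions $\chi$ and $\psi$.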
• We experimentally investigate the transient dynamics of an optical cavity field interacting with large ion Coulomb crystals in a situation of electromagnetically induced transparency (EIT). EIT is achieved by injecting a probe field at the single-photon level and a more intense control field with opposite circular polarization into the same mode of an optical cavity, to couple Zeeman substates of a metastable level in $^{40}$Ca$^+$ ions. The EIT interaction dynamics are investigated both in the frequency domain, by measuring the probe field steady-state reflectivity spectrum, and in the time domain, by measuring the progressive buildup of transparency. The experimental results are in excellent agreement with theoretical predictions taking into account the inhomogeneity of the control field in the interaction volume, and confirm the high degree of control over light-matter interaction that can be achieved with ion Coulomb crystals in optical cavities.
• We define a kind of moduli space of nested surfaces and mappings, which we call a comparison moduli space. We review examples of such spaces in geometric function theory and modern Teichmueller theory, and illustrate how a wide range of phenomena in complex analysis are captured by this notion of moduli space. The paper includes a list of open problems ranging from general theoretical questions to specific technical problems.
• In this paper we establish a general dynamical Central Limit Theorem (CLT) for group actions which are exponentially mixing of all orders. In particular, the main result applies to Cartan flows on finite-volume quotients of simple Lie groups. Our proof uses a novel relativization of the classical method of cumulants, which should be of independent interest. As a sample application of our techniques, we show that the CLT holds along lacunary samples of the horocycle flow on finite-area hyperbolic surfaces applied to any smooth compactly supported function.
• We performed hydrodynamic computations of nonlinear stellar pulsations of population I stars at the evolutionary stages of the ascending red giant branch and the subsequent luminosity drop due to the core helium flash. Red giants populating this region of the Hertzsprung–Russell diagram were found to be fundamental-mode pulsators. The pulsation period is largest at the tip of the red giant branch and, for stars with initial masses from 1.1 M_⊙ to 1.9 M_⊙, ranges from 254 days down to 33 days, respectively. The rate of period change during the core helium flash is comparable with the rates of secular period change in Mira-type variables during the thermal pulse in the helium shell source. The period change rate is largest (\dot\Pi/\Pi ≈ -0.01 yr^-1) in stars with initial mass M_ZAMS = 1.1 M_⊙ and decreases to \dot\Pi/\Pi ∼ -0.001 yr^-1 for stars of the evolutionary sequence M_ZAMS = 1.9 M_⊙. Theoretical light curves of red giants pulsating with periods Π > 200 days show a secondary maximum similar to that observed in many Miras.
• Tens of millions of wearable fitness trackers are shipped yearly to consumers who routinely collect information about their exercising patterns. Smartphones push this health-related data to vendors' cloud platforms, enabling users to analyze summary statistics on-line and adjust their habits. Third-parties including health insurance providers now offer discounts and financial rewards in exchange for such private information and evidence of healthy lifestyles. Given the associated monetary value, the authenticity and correctness of the activity data collected becomes imperative. In this paper, we provide an in-depth security analysis of the operation of fitness trackers commercialized by Fitbit, the wearables market leader. We reveal an intricate security through obscurity approach implemented by the user activity synchronization protocol running on the devices we analyze. Although non-trivial to interpret, we reverse engineer the message semantics, demonstrate how falsified user activity reports can be injected, and argue that based on our discoveries, such attacks can be performed at scale to obtain financial gains. We further document a hardware attack vector that enables circumvention of the end-to-end protocol encryption present in the latest Fitbit firmware, leading to the spoofing of valid encrypted fitness data. Finally, we give guidelines for avoiding similar vulnerabilities in future system designs.
• We observe that many of the separation axioms of topology (including $T_0-T_4$) can be expressed concisely and uniformly in terms of category theory as lifting properties (in the sense of Quillen model categories) with respect to (usually open) continuous maps of finite spaces (involving up to 4 points) and the real line.
• We present recent results on Piecewise Deterministic Markov Processes (PDMPs) involved in biological modeling. PDMPs, first introduced into the probabilistic literature by Davis (1984), form a very general class of Markov processes and are becoming increasingly popular in biological applications. They also raise interesting new challenges from the theoretical point of view. We give examples concerning the long-time behavior of switching Markov models applied to population dynamics, uniform sampling in general branching models applied to structured population dynamics, time-scale separation in integrate-and-fire models used in neuroscience, and, finally, moment calculus in stochastic models of gene expression.
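A minimal concrete PDMP in the gene-expression spirit is the telegraph model: a gene switches on and off at exponential rates, and between jumps the protein level follows a linear ODE that can be integrated exactly. A sketch (the parameter values are illustrative, not from the paper):

```python
import math
import random

def simulate_telegraph_pdmp(t_end, k_on=1.0, k_off=0.5, b=2.0, gamma=1.0, seed=0):
    """Telegraph PDMP: gene state s jumps 0<->1 at exponential rates;
    between jumps the protein level x follows dx/dt = b*s - gamma*x exactly."""
    rng = random.Random(seed)
    t, x, s = 0.0, 0.0, 0      # time, protein level, gene state (0=off, 1=on)
    while t < t_end:
        rate = k_on if s == 0 else k_off
        tau = min(rng.expovariate(rate), t_end - t)
        # deterministic flow between jumps (exact solution of the linear ODE)
        x_inf = b * s / gamma
        x = x_inf + (x - x_inf) * math.exp(-gamma * tau)
        t += tau
        if t < t_end:
            s = 1 - s          # jump: flip the gene state
    return x
```

Averaging the endpoint over many runs recovers the stationary mean b·k_on/(gamma·(k_on + k_off)), which is the kind of moment such models make computable.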
• The present paper studies pseudo-parallel (in the sense of Chaki and in the sense of Deszcz) contact CR-submanifolds of Kenmotsu manifolds with respect to the Levi-Civita connection as well as a semisymmetric metric connection, and proves that the two corresponding classes are equivalent under a certain condition.
• Partial differential equations (PDEs) equipped with spatial derivatives of fractional order capture anomalous transport behaviors observed in diverse fields of science. A number of numerical methods approximate their solutions in dimension one. Focusing on such PDEs in higher dimensions with Dirichlet boundary conditions, we present an approximation based on the lattice Boltzmann method (LBM) with Bhatnagar-Gross-Krook (BGK) or multiple-relaxation-time (MRT) collision operators. First, an equilibrium distribution function is defined for simulating space-fractional diffusion equations in dimensions 2 and 3. Then, we check the accuracy of the solutions by comparing with i) random walks derived from stable Lévy motion, and ii) exact solutions. Thanks to its additional degrees of freedom, the MRT collision operator provides accurate approximations to space-fractional advection-diffusion equations, even in cases that the BGK operator fails to represent because of an anisotropic diffusion tensor or a flow rate that destabilizes the BGK LBM scheme.
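For readers unfamiliar with the method, the collide-and-stream structure that the paper builds on can be sketched for the classical (integer-order) diffusion equation in 1D. This is the textbook D1Q3 BGK scheme, not the paper's fractional scheme; in lattice units the diffusivity is D = (tau - 1/2)/3:

```python
import numpy as np

def lbm_diffusion(rho0, steps, tau=1.0):
    """D1Q3 BGK lattice Boltzmann scheme for drho/dt = D d2rho/dx2,
    with D = (tau - 1/2)/3 in lattice units and periodic boundaries.
    Equilibrium: f_i^eq = w_i * rho (no advection)."""
    w = np.array([4/6, 1/6, 1/6])        # weights for velocities {0, +1, -1}
    f = w[:, None] * rho0[None, :]       # start at equilibrium
    for _ in range(steps):
        rho = f.sum(axis=0)
        feq = w[:, None] * rho[None, :]
        f += (feq - f) / tau             # BGK collision (relaxation to feq)
        f[1] = np.roll(f[1], 1)          # streaming: velocity +1
        f[2] = np.roll(f[2], -1)         # streaming: velocity -1
    return f.sum(axis=0)
```

A sine perturbation of wavenumber k decays like exp(-D k^2 t), which gives a quick accuracy check against the exact solution.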
• The object of the present paper is to study invariant submanifolds of (LCS)_n-manifolds with respect to a quarter-symmetric metric connection. It is shown that the mean curvatures of an invariant submanifold of an (LCS)_n-manifold with respect to the quarter-symmetric metric connection and the Levi-Civita connection are equal. An example is constructed to illustrate the results of the paper. We also obtain some equivalent conditions for this notion.
• We show how to define a canonical Riemannian metric on a "dessin d'enfant" drawn on a topological surface. This gives a possible explanation of a claim of A. Grothendieck.
• (math.AT, arXiv:1706.09157, Jun 29 2017) We introduce the topological complexity of the work map associated to a robot system. In broad terms, this measures the complexity of any algorithm controlling, not just the motion of the configuration space of the given system, but the task for which the system has been designed. From a purely topological point of view, this is a homotopy invariant of a map which generalizes the classical topological complexity of a space.
• We develop a class of algorithms, as variants of the stochastically controlled stochastic gradient (SCSG) methods (Lei and Jordan, 2016), for the smooth non-convex finite-sum optimization problem. Assuming the smoothness of each component, the complexity of SCSG to reach a stationary point with $\mathbb{E} \|\nabla f(x)\|^{2}\le \epsilon$ is $O\left (\min\{\epsilon^{-5/3}, \epsilon^{-1}n^{2/3}\}\right)$, which strictly outperforms the stochastic gradient descent. Moreover, SCSG is never worse than the state of the art methods based on variance reduction and it significantly outperforms them when the target accuracy is low. A similar acceleration is also achieved when the functions satisfy the Polyak-Lojasiewicz condition. Empirical experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layers neural networks in terms of both training and validation loss.
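The control-variate structure behind SCSG-type methods can be sketched as follows. This is a simplified illustration, not Lei and Jordan's algorithm: each outer step draws a batch, uses its full gradient as a control variate around a snapshot, and runs a geometric number of variance-reduced inner updates. All names and parameters are illustrative:

```python
import numpy as np

def scsg_sketch(grad_i, n, x0, steps=50, batch=16, lr=0.1, seed=0):
    """Simplified SCSG/SVRG-style loop for min_x (1/n) sum_i f_i(x),
    given grad_i(x, i) = gradient of the i-th component at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        snapshot = x.copy()
        g_batch = np.mean([grad_i(snapshot, i) for i in idx], axis=0)
        inner = int(rng.geometric(1.0 / batch))   # geometric inner-loop length
        for _ in range(inner):
            i = rng.choice(idx)
            # variance-reduced stochastic gradient (control variate at snapshot)
            v = grad_i(x, i) - grad_i(snapshot, i) + g_batch
            x -= lr * v
    return x
```

On a toy quadratic finite sum this converges to (a neighborhood of) the minimizer, with per-update variance far below plain SGD's near the snapshot.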
• We define a general notion of partially ordered Jordan algebra (over a partially ordered ring), and we show that the Jordan geometry associated to such a Jordan algebra admits a natural invariant partial cyclic order whose intervals are modelled on the symmetric cone of the Jordan algebra. We define and describe, by affine images of intervals, the interval topology on the Jordan geometry, and we outline a research program aiming to generalize the main features of the classical theory of symmetric cones and bounded symmetric domains.
• We compute the universal minimal flow of the homeomorphism group of the Lelek fan -- a one-dimensional tree-like continuum with many symmetries.
• Base inertial parameters constitute a minimal inertial parametrization of mechanical systems that is of interest, for example, in parameter estimation and model reduction. Numerical and symbolic methods are available to determine their expressions. In this paper the problems associated with the numerical determination of base-inertial-parameter expressions in the context of low-mobility mechanisms are analyzed and discussed through an example. To circumvent these problems, two alternatives are proposed: a variable-precision arithmetic implementation of the customary numerical algorithm, and the application of a general symbolic method. Finally, the advantages of both approaches over the standard numerical one are discussed in the context of the proposed low-mobility example.
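The customary numerical route — stacking regressor samples at random states and rank-revealing the resulting observation matrix — can be sketched as follows. The toy regressor, tolerances, and greedy column sweep are illustrative, not the paper's algorithm:

```python
import numpy as np

def base_parameter_structure(regressor, n_params, n_samples=200, seed=0):
    """For a linear dynamics model tau = W(q, dq, ddq) @ p, stack regressor
    rows at random states; the numerical rank of W is the number of base
    parameters, and a greedy rank sweep picks an independent column set."""
    rng = np.random.default_rng(seed)
    rows = [regressor(rng.standard_normal(3)) for _ in range(n_samples)]
    W = np.vstack(rows)
    svals = np.linalg.svd(W, compute_uv=False)
    rank = int(np.sum(svals > 1e-8 * svals[0]))
    independent = []
    for j in range(n_params):
        cols = independent + [j]
        if np.linalg.matrix_rank(W[:, cols], tol=1e-8) == len(cols):
            independent.append(j)
    return rank, independent
```

The fixed tolerances shown here are exactly where low-mobility mechanisms cause trouble: near-dependent columns make the rank decision ill-conditioned, motivating the variable-precision and symbolic alternatives.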
• In this paper we consider the stability of a linear time-invariant system in feedback with a string equation. A new Lyapunov functional candidate is proposed based on the use of augmented states, which enriches and encompasses the classical Lyapunov functionals proposed in the literature. It results in a hierarchy of tractable stability conditions expressed in terms of linear matrix inequalities. This methodology follows from the application of the Bessel inequality together with Legendre polynomials. Two numerical examples illustrate the potential of our approach through two scenarios: a stable ODE perturbed by the PDE, and an unstable ODE stabilized by the PDE.
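The Bessel-inequality mechanism behind such hierarchies is easy to illustrate numerically: projecting a function onto the first N Legendre polynomials gives a squared projection norm that grows with N and never exceeds the squared norm of the function, so each added polynomial tightens the bound. A small sketch (function names are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

def bessel_gap(f, degree, n_quad=64):
    """Bessel's inequality on [-1, 1]: the squared norm of the projection of f
    onto Legendre polynomials P_0..P_degree never exceeds ||f||^2.
    Returns (projection_norm_sq, full_norm_sq)."""
    x, w = legendre.leggauss(n_quad)           # Gauss-Legendre quadrature
    fx = f(x)
    full = np.sum(w * fx**2)
    proj = 0.0
    for k in range(degree + 1):
        Pk = legendre.Legendre.basis(k)(x)
        norm_sq = 2.0 / (2 * k + 1)            # ||P_k||^2 on [-1, 1]
        ck = np.sum(w * fx * Pk) / norm_sq     # <f, P_k> / ||P_k||^2
        proj += ck**2 * norm_sq
    return proj, full
```

Raising the degree shrinks the gap monotonically, which mirrors how higher-order terms in the augmented state refine the LMI stability condition.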
• We present a quantitative characterization of an electrically tunable Josephson junction defined in an InAs nanowire proximitized by an epitaxially grown superconducting Al shell. The gate dependence of the number of conduction channels and of the set of transmission coefficients is extracted from the highly nonlinear current-voltage characteristics. Although the transmissions evolve non-monotonically, the number of independent channels can be tuned, and configurations with a single quasi-ballistic channel achieved. Superconductor-semiconductor-superconductor weak links are interesting hybrid structures in which the Josephson coupling energy, and therefore the supercurrent, can be modulated by an electric field [1, 2]. It is even possible to lower the carrier density in the weak link enough to achieve the conceptually simple situation of a quantum point contact (QPC), in which only a small number of conduction channels contribute to transport. Although these kinds of hybrid microstructures have been explored for many years [3], inducing strong superconducting correlations into the semiconductor in a reliable way has been achieved only recently. A well-defined ("hard") superconducting gap has been clearly demonstrated both in InAs nanowires [4] and in InGaAs/InAs two-dimensional electron gases [5] using in-situ epitaxially grown Al contacts. Many experiments [6, 7, 8, 9, 10] are presently using these hybrid structures because they are promising candidates for implementing topological superconductivity and Majorana bound states [11, 12]. A good understanding of their basic microscopic transport features is therefore necessary. Here we track the evolution...
• We perform a measurement of the Hubble constant, $H_0$, using the latest baryonic acoustic oscillation (BAO) measurements from the galaxy surveys 6dFGS, the SDSS DR7 Main Galaxy Sample, the BOSS DR12 sample, and the eBOSS DR14 quasar sample, in the framework of a flat $\Lambda$CDM model. Based on the Kullback-Leibler (KL) divergence, we examine the consistency of $H_0$ values derived from the various data sets. We find that our measurement is consistent with that derived from Planck and with the local measurement of $H_0$ using Cepheids and type Ia supernovae. We perform forecasts on $H_0$ from future BAO measurements, and find that the uncertainty of $H_0$ determined by future BAO data alone, including the complete eBOSS, DESI, and Euclid-like surveys, is comparable with that from local measurements.
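Under a Gaussian approximation to each posterior, the KL divergence used in such consistency checks has a simple closed form in one dimension. A minimal sketch with illustrative numbers (not the paper's values):

```python
import math

def gaussian_kl(mu_p, sig_p, mu_q, sig_q):
    """KL divergence D(p||q) between 1-D Gaussians p = N(mu_p, sig_p^2)
    and q = N(mu_q, sig_q^2), in nats."""
    return (math.log(sig_q / sig_p)
            + (sig_p**2 + (mu_p - mu_q)**2) / (2 * sig_q**2) - 0.5)

# Illustrative tension between a BAO-type estimate and a local-type estimate
# of H0 (km/s/Mpc); the numbers are made up for the example.
d = gaussian_kl(68.0, 1.0, 73.0, 1.5)
```

Identical posteriors give zero divergence, and the divergence grows quickly as the central values pull apart relative to the widths, which is what makes it a useful consistency statistic.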
• We investigate the open dynamics of an atomic impurity embedded in a one-dimensional Bose-Hubbard lattice. We derive the reduced evolution equation for the impurity and show that the Bose-Hubbard lattice behaves as a tunable engineered environment, allowing both Markovian and non-Markovian dynamics to be simulated in a controlled and experimentally realisable way. We demonstrate that the two phases of the environmental ground state, namely the Mott insulator and the superfluid phase, are uniquely associated with the presence and absence of memory effects, respectively. Furthermore, we provide a clear explanation of the physical mechanism responsible for the onset of non-Markovian dynamics.
• In this paper, we introduce a new channel model we term the q-ary multi-bit channel (QMBC). This channel models a memory device, where q-ary symbols (q=2^s) are stored in the form of current/voltage levels. The symbols are read in a measurement process, which provides a symbol bit in each measurement step, starting from the most significant bit. An error event occurs when not all the symbol bits are known. To deal with such error events, we use GF(q) low-density parity-check (LDPC) codes and analyze their decoding performance. We start with iterative-decoding threshold analysis, and derive optimal edge-label distributions for maximizing the decoding threshold. We later move to finite-length iterative-decoding analysis and propose an edge-labeling algorithm for improved decoding performance. We then provide finite-length maximum-likelihood decoding analysis for both the standard non-binary random ensemble and LDPC ensembles. Finally, we demonstrate by simulations that the proposed edge-labeling algorithm improves finite-length decoding performance by orders of magnitude.
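The QMBC readout model is easy to state in code: each measurement step reveals one more bit of the stored symbol, starting from the most significant bit, and a symbol whose bits are not all read is an error (partial-erasure) event. A minimal sketch with an illustrative function name:

```python
def read_symbol(symbol, s, steps):
    """Model of the q-ary multi-bit channel readout (q = 2**s): `steps`
    measurements reveal the `steps` most significant bits of the symbol;
    unread positions are returned as None (erased)."""
    bits = [(symbol >> (s - 1 - k)) & 1 for k in range(s)]
    return [bits[k] if k < steps else None for k in range(s)]
```

For example, reading a 16-ary symbol (s = 4) for only two steps leaves its two least significant bits unknown, which is the error event the GF(q) LDPC codes are designed to correct.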

xecehim Jun 27 2017 15:03 UTC

It has been [published][1]

Kenneth Goodenough Jun 21 2017 12:48 UTC

Ah yes I see, thank you for the clarification!

Stefano Pirandola Jun 20 2017 13:26 UTC

Hi Kenneth, more precisely that plot is for a particular "Pauli-damping" channel, i.e., a qubit channel that is decomposable into a Pauli channel (1) and an amplitude damping channel (2). This "Pauli-damping" channel can be simulated by performing noisy teleportation over a resource state that corre

...(continued)
Kenneth Goodenough Jun 20 2017 12:47 UTC

Interesting work! I was wondering, how do the new upper bounds for the amplitude-damping channel in Fig. 2 compare to previous bounds?

Barbara Terhal Jun 20 2017 07:25 UTC

It would be good if this conflict on assigning priority and credit were peacefully resolved by the parties involved (I have no opinion on the matter).

Stefano Pirandola Jun 15 2017 05:32 UTC

The secret-key capacity of the pure-loss channel -log(1-t) was proven in [9], not in the follow-up work [13] (which appeared 4 months later). Ref. [13] found that this capacity is also a strong converse bound, which is Eq. (1) here. Same story for Eq. (4) that was proven in [9], not in [13]. Again t

...(continued)
Chris Ferrie Jun 09 2017 10:06 UTC

I have posted an open review of this paper here: https://github.com/csferrie/openreviews/blob/master/arxiv.1703.09835/arxiv.1703.09835.md

Eddie Smolansky May 26 2017 05:23 UTC

Updated summary [here](https://github.com/eddiesmo/papers).

# How they made the dataset
- automated filtering with yolo and landmark detection projects
- crowd source final filtering (AMT - give 50 face images to turks and ask which don't belong)
- quality control through s

...(continued)
Felix Leditzky May 24 2017 20:43 UTC

Yes, that's right, thanks!

For (5), you use the Cauchy-Schwarz inequality $\left| \operatorname{tr}(X^\dagger Y) \right| \leq \sqrt{\operatorname{tr}(X^\dagger X)} \sqrt{\operatorname{tr}(Y^\dagger Y)}$ for the Hilbert-Schmidt inner product $\langle X,Y\rangle := \operatorname{tr}(X^\dagger Y)$ wi

...(continued)
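The inequality quoted above is easy to sanity-check numerically for the Hilbert-Schmidt inner product; a quick sketch (not from the paper):

```python
import numpy as np

def hs_inner(X, Y):
    """Hilbert-Schmidt inner product <X, Y> = tr(X^dagger Y)."""
    return np.trace(X.conj().T @ Y)

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
# Cauchy-Schwarz: |tr(X^dag Y)| <= sqrt(tr(X^dag X)) * sqrt(tr(Y^dag Y))
lhs = abs(hs_inner(X, Y))
rhs = np.sqrt(hs_inner(X, X).real * hs_inner(Y, Y).real)
```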
Michael Tolan May 24 2017 20:27 UTC

Just reading over Eq (5) on P5 concerning the diamond norm.

Should the last $\sigma_1$ on the 4th line be replaced with a $\sigma_2$? I think I can see how the proof is working but not entirely certain.