# Top arXiv papers

• We demonstrate a two-dimensional grating magneto-optical trap (2D GMOT) with a single input beam and a planar diffraction grating in $^{87}$Rb. This configuration increases experimental access compared with a traditional 2D MOT. As configured in the paper, the output flux is several hundred million rubidium atoms/s at a mean velocity of 19.0 $\pm~0.2$ m/s. The velocity distribution has a 3.3 $\pm~1.7$ m/s standard deviation. We use the atomic beam from the 2D GMOT to demonstrate loading of a three-dimensional grating MOT (3D GMOT) with $2.02\times 10^8 \pm 3 \times 10^6$ atoms. Methods to improve the flux output are discussed.
• In this paper we study certain integrability properties of the supersymmetric sine-Gordon equation. We construct Lax pairs with their zero-curvature representations which are equivalent to the supersymmetric sine-Gordon equation. From the fermionic linear spectral problem, we derive coupled sets of super Riccati equations and the auto-Bäcklund transformation of the supersymmetric sine-Gordon equation. In addition, a detailed description of the associated Darboux transformation is presented and non-trivial super multisoliton solutions are constructed. These integrability properties allow us to provide new explicit geometric characterizations of the bosonic supersymmetric version of the Sym--Tafel formula for the immersion of surfaces in a Lie superalgebra. These characterizations are expressed only in terms of the independent bosonic and fermionic variables.
• We introduce and study a class of partition functions of integrable lattice models with triangular boundary. Using the $U_q(sl_2)$ $R$-matrix and a special class of triangular $K$-matrices, we first introduce an analogue of the wavefunctions of the integrable six-vertex model with triangular boundary. We give a characterization of the wavefunctions by extending our recent Izergin-Korepin analysis of the domain wall boundary partition function with triangular boundary. We determine the explicit form of the symmetric functions representing the wavefunctions by showing that they satisfy all the required properties.
• In this paper, we present a new method of measuring the Hubble parameter $H(z)$, making use of the anisotropy of the luminosity distance $d_{L}$ and the analysis of gravitational waves (GW) from neutron star (NS) binary systems. The method had never been put into practice before due to the inability to detect GW. LIGO's success in detecting GW from black hole (BH) binary mergers demonstrated the feasibility of this new method. We apply this method to several GW detection projects, including Advanced LIGO (Adv-LIGO), the Einstein Telescope (ET) and DECIGO, finding that the $H(z)$ from Adv-LIGO and ET has poor accuracy, while the $H(z)$ from DECIGO shows good accuracy. We use the error information of $H(z)$ from DECIGO to simulate $H(z)$ data at every 0.1 redshift span, and put the mock data into the forecasting of cosmological parameters. Compared with the available 38 observed $H(z)$ data points (OHD), the mock data show an obviously tighter constraint on cosmological parameters, and a concomitantly higher value of the Figure of Merit (FoM). For a 3-year observation by standard sirens of DECIGO, the FoM value is as high as 834.9. If a 10-year observation is launched, the FoM could reach 2783.1. For comparison, the FoM of the 38 actual observed $H(z)$ data points is 9.3. These improvements indicate that the new method has great potential for further cosmological constraints.
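The Figure of Merit comparison above can be illustrated with a toy computation. A common convention (an assumption here, not taken from the abstract) is FoM $\propto 1/\sqrt{\det C}$, where $C$ is the covariance matrix of two cosmological parameters; the covariance values below are purely hypothetical:

```python
import numpy as np

def figure_of_merit(cov):
    """FoM under the common 1/sqrt(det C) convention: a tighter
    parameter covariance (smaller determinant) gives a larger FoM."""
    return 1.0 / np.sqrt(np.linalg.det(cov))

# Hypothetical 2x2 parameter covariances, for illustration only:
loose = np.array([[0.04, 0.01], [0.01, 0.09]])       # OHD-like, weak constraint
tight = np.array([[0.001, 0.0002], [0.0002, 0.002]])  # mock-data-like, strong constraint

print(figure_of_merit(loose), figure_of_merit(tight))
```

The tighter covariance yields a much larger FoM, mirroring the mock-data versus OHD contrast reported above.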
• Study of the Abrikosov vortex motion in superconductors based on time-dependent Ginzburg-Landau equations reveals an opportunity to locally detect the values of the Aharonov-Bohm type curl-less vector potentials.
• We investigate the geodesic motion of massless test particles in the background of a noncommutative-geometry-inspired Schwarzschild black hole. The behaviour of the effective potential is analysed in the equatorial plane, and the possible motions of massless particles (i.e. photons) for different values of the impact parameter are discussed accordingly. We have also calculated the frequency shift of photons in this spacetime. Further, the mass parameter of a noncommutative-inspired Schwarzschild black hole is computed in terms of the measurable redshift of photons emitted by massive particles moving along circular geodesics in the equatorial plane. It is observed that the gravitational field of a noncommutative-inspired Schwarzschild black hole is more attractive than that of the Schwarzschild black hole in General Relativity.
• An implementation of a novel glass-based detector with fast response and a wide detection range is needed to increase the resolution of ultra-high-energy cosmic ray detection. Such a detector has been designed and built for the Horizon-T detector system at the Tien Shan high-altitude Science Station. The main characteristics, such as the design, the duration of the detector pulse, and the calibration of the single-particle response, are discussed.
• Using a (1+2)-dimensional boson-vortex duality between non-linear electrodynamics and a two-component compressible Bose-Einstein condensate (BEC) with spin-orbit (SO) coupling, we obtain generalised versions of the hydrodynamic continuity and Euler equations where the phase defect and non-defect degrees of freedom enter separately. We obtain the generalised Magnus force on vortices under SO coupling, and associate the linear confinement of vortices due to SO coupling with instanton fluctuations of the dual theory.
• Regular satellites of giant planets are formed by accretion of solid bodies in circumplanetary disks. Planetesimals that are moving on heliocentric orbits and are sufficiently large to be decoupled from the flow of the protoplanetary gas disk can be captured by gas drag from the circumplanetary disk. In the present work, we examine the distribution of captured planetesimals in circumplanetary disks using orbital integrations. We find that the number of captured planetesimals reaches an equilibrium state as a balance between continuous capture and orbital decay into the planet. The number of planetesimals captured into retrograde orbits is much smaller than those on prograde orbits, because the former ones experience strong headwind and spiral into the planet rapidly. We find that the surface number density of planetesimals at the current radial location of regular satellites can be significantly enhanced by gas drag capture, depending on the velocity dispersions of planetesimals and the width of the gap in the protoplanetary disk. Using a simple model, we also examine the ratio of the surface densities of dust and captured planetesimals in the circumplanetary disk, and find that solid material at the current location of regular satellites can be dominated by captured planetesimals when the velocity dispersion of planetesimals is rather small and a wide gap is not formed in the protoplanetary disk. In this case, captured planetesimals in such a region can grow by mutual collision before spiraling into the planet, and would contribute to the growth of regular satellites.
• Recent studies showed that the in-plane and inter-plane thermal conductivities of two-dimensional (2D) MoS2 are low, posing a significant challenge for heat management in MoS2-based electronic devices. To address this challenge, we design interfaces between MoS2 and graphene by fully utilizing graphene, a 2D material with ultra-high thermal conductivity. We first perform ab initio atomistic simulations to understand the bonding nature and structural stability of the interfaces. Our results show that the designed interfaces, which are found to be connected by strong covalent bonds between Mo and C atoms, are energetically stable. We then perform molecular dynamics simulations to investigate the interfacial thermal conductance. Surprisingly, the interfacial thermal conductance is found to be high, comparable to that of covalently bonded graphene-metal interfaces. Importantly, each interfacial Mo-C bond serves as an independent thermal channel, enabling the modulation of the interfacial thermal conductance by controlling the Mo vacancy concentration at the interface. The present work provides a viable route for heat management in MoS2-based electronic devices.
• Benchmark reactions involving molecular hydrogen, such as H$_2$+D or H$_2$+Cl, provide the ideal platforms to investigate the effect of Near Threshold Resonances (NTR) on scattering processes. Due to the small reduced mass of those systems, shape resonances due to particular partial waves can provide features at scattering energies up to a few Kelvins, reachable in recent experiments. We explore the effect of NTRs on elastic and inelastic scattering for higher partial waves $\ell$ in the case of H$_2$+Cl for $s$-wave and H$_2$+D for $p$-wave scattering, and find that NTRs lead to a different energy scaling of the cross sections as compared to the well known Wigner threshold regime. We give a theoretical analysis based on Jost functions for short range interaction potentials. To explore higher partial waves, we adopt a three channel model that incorporates all key ingredients, and explore how the NTR scaling is affected by $\ell$. A short discussion on the effect of the long-range form of the interaction potential is also provided.
• In high-speed railway (HSR) communication systems, distributed antennas are usually employed to support frequent handover and enhance the signal-to-noise ratio at the user equipment. In this case, dynamic time-domain power allocation and antenna selection (PAWAS) can be jointly optimized to improve system performance. This paper considers this problem in a simple setting where dynamic switching between multiple-input-multiple-output (MIMO) and single-input-multiple-output (SIMO) is allowed and exclusively utilized, while the channel states and traffic demand are taken into account. The channel states include sparse and rich scattering terrains, and the traffic patterns include delay-sensitive, delay-insensitive and hybrid. Some important results are obtained in theory. In sparse scattering terrains, for delay-sensitive traffic, PAWAS can be viewed as a generalization of channel inversion associated with transmit antenna selection. On the contrary, for delay-insensitive traffic, the power allocation with MIMO can be viewed as channel inversion, but with SIMO it is traditional water-filling. For hybrid traffic, PAWAS can be partitioned into delay-sensitive and delay-insensitive parts by some specific strategies. In rich scattering terrains, the corresponding PAWAS is derived via some amendments to the sparse-scattering case, and similar results are then presented.
• We study the behavior of entanglement between different degrees of freedom of scattering fermions, based on an exemplary QED scattering process $e^+e^-\longrightarrow\mu^+\mu^-$. The variation of entanglement entropy between the two fermions from the initial state to the final state is computed with respect to different entanglement between the ingoing particles. This variation of entanglement entropy is found to be proportional to an area quantity, the total cross section. We also study the spin-momentum and helicity-momentum entanglements within one particle in the aforementioned scattering process. Calculations of the relevant variations of mutual information in the same inertial frame reveal that, for a maximally entangled initial state, the scattering between the particles does not affect the degree of either of these entanglements of one particle in the final state. It is also found that an increasing degree of entanglement between the two ingoing particles restricts the generation of entanglement between the spin (helicity) and momentum of one outgoing particle. Moreover, the entanglement between spin and momentum within one particle in the final state is shown to always be stronger than that between helicity and momentum for a general initial entanglement state, implying significantly distinct properties of entanglement for the helicity and spin perceived by an inertial observer.
• This paper revisits polynomial residue codes with non-pairwise coprime moduli by presenting a new decoding method, called minimum degree-weighted distance decoding. This decoding is based on the degree-weighted distance and differs from the traditional minimum Hamming distance decoding. It is shown that of the two minimum-distance decoders, i.e., minimum degree-weighted distance decoding and minimum Hamming distance decoding, neither is absolutely stronger than the other; rather, they complement each other from different points of view.
• Based on unique decoding of the polynomial residue code with non-pairwise coprime moduli, a polynomial with degree less than that of the least common multiple (lcm) of all the moduli can be accurately reconstructed when the number of residue errors is less than half the minimum distance of the code. However, once the number of residue errors is beyond half the minimum distance of the code, the unique decoding may fail and lead to a large reconstruction error. In this paper, assuming that all the residues are allowed to have errors with small degrees, we consider how to reconstruct the polynomial as accurately as possible in the sense that a reconstructed polynomial is obtained with only the last $\tau$ number of coefficients being possibly erroneous, when the residues are affected by errors with degrees upper bounded by $\tau$. In this regard, we first propose a multi-level robust Chinese remainder theorem (CRT) for polynomials, namely, a trade-off between the dynamic range of the degree of the polynomial to be reconstructed and the residue error bound $\tau$ is formulated. Furthermore, a simple closed-form reconstruction algorithm is also proposed.
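The error-free baseline that the robust CRT above builds on can be sketched in a few lines. This is the classical Chinese remainder reconstruction for *pairwise coprime* polynomial moduli (the paper's multi-level robust CRT additionally handles non-coprime moduli and residues corrupted by errors of degree at most $\tau$); SymPy is assumed for the polynomial arithmetic:

```python
from sympy import symbols, gcdex, quo, rem, expand

x = symbols('x')

def poly_crt(residues, moduli):
    """Classical CRT: reconstruct f with deg(f) < deg(prod of moduli)
    from the residues f mod m_i, for pairwise coprime moduli m_i."""
    M = 1
    for m in moduli:
        M = expand(M * m)
    f = 0
    for r, m in zip(residues, moduli):
        Mi = quo(M, m, x)          # exact quotient M / m
        s, _, h = gcdex(Mi, m, x)  # s*Mi + t*m = h, with h a constant
        f += r * Mi * s / h        # r times (Mi * inverse of Mi mod m)
    return rem(expand(f), M, x)

# Reconstruct f(x) = 3x + 2 from its residues mod (x - 1) and (x + 1),
# i.e. from f(1) = 5 and f(-1) = -1:
print(poly_crt([5, -1], [x - 1, x + 1]))  # 3*x + 2
```

Once a residue error pushes the true polynomial outside this unique-decoding regime, the reconstruction above can be wildly wrong, which is the failure mode the multi-level robust CRT is designed to contain.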
• Let $A$ be a finite-dimensional algebra over an algebraically closed field $\Bbbk$. For any finite-dimensional $A$-module $M$ we give a general formula that computes the indecomposable decomposition of $M$ without decomposing it, for which we use the knowledge of AR-quivers that are already computed in many cases. The proof of the formula here is much simpler than that in a prior literature by Dowbor and Mróz. As an example we apply this formula to the Kronecker algebra $A$ and give an explicit formula to compute the indecomposable decomposition of $M$, which enables us to make a computer program.
• A Cayley graph for a group $G$ is CCA if every automorphism of the graph that preserves the edge-orbits under the regular representation of $G$ is an element of the normaliser of $G$. A group $G$ is then said to be CCA if every connected Cayley graph on $G$ is CCA. We show that a finite simple group is CCA if and only if it has no element of order 4. We also show that "many" 2-groups are non-CCA.
• Cross-validation is one of the most popular model selection methods in statistics and machine learning. Despite its wide applicability, traditional cross-validation methods tend to select overfitting models, unless the ratio between the training and testing sample sizes is much smaller than conventional choices. We argue that such an overfitting tendency of cross-validation is due to the ignorance of the uncertainty in the testing sample. Starting from this observation, we develop a new, statistically principled inference tool based on cross-validation that takes into account the uncertainty in the testing sample. This new method outputs a small set of highly competitive candidate models containing the best one with guaranteed probability. As a consequence, our method can achieve consistent variable selection in a classical linear regression setting, for which existing cross-validation methods require unconventional split ratios. We demonstrate the performance of the proposed method in several simulated and real data examples.
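The overfitting tendency described above can be seen in a minimal NumPy sketch: ordinary k-fold cross-validation often scores an overfitted model (the true predictor padded with irrelevant columns) nearly as well as the true model, so selection by CV score alone is fragile. The data-generating model, noise level and fold count here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_mse(X, y, k=5):
    """Plain k-fold cross-validation MSE for ordinary least squares."""
    idx = rng.permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((X[fold] @ beta - y[fold]) ** 2))
    return np.mean(errs)

n = 200
X_true = rng.normal(size=(n, 1))
y = 2.0 * X_true[:, 0] + rng.normal(scale=0.5, size=n)
X_over = np.hstack([X_true, rng.normal(size=(n, 9))])  # 9 irrelevant columns

print(cv_mse(X_true, y), cv_mse(X_over, y))  # typically very close
```

Because the two scores are typically within each other's sampling noise, a selector that ignores the testing-sample uncertainty can easily pick the larger model, which is the failure mode the proposed method addresses.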
• In this paper we show that the limiting distribution of the real and the imaginary part of the double Fourier transform of a stationary random field is almost surely an independent vector with Gaussian marginal distributions, whose variance is, up to a constant, the field's spectral density. The dependence structure of the random field is general and we do not impose any restrictions on the speed of convergence to zero of the covariances, or smoothness of the spectral density. The only condition required is that the variables are adapted to a commuting filtration and are regular in some sense. The results go beyond the Bernoulli fields and apply to both short range and long range dependence. They can be easily applied to derive the asymptotic behavior of the periodogram associated to the random field. The method of proof is based on new probabilistic methods involving martingale approximations and also on borrowed and new tools from harmonic analysis.
• In this paper we study the Cauchy problem for the semilinear damped wave equation for the sub-Laplacian on the Heisenberg group. In the case of the positive mass, we show the global in time well-posedness for small data for power like nonlinearities. We also obtain similar well-posedness results for the wave equations for Rockland operators on general graded Lie groups. In particular, this includes higher order operators on $\mathbb R^n$ and on the Heisenberg group, such as powers of the Laplacian or of the sub-Laplacian. In addition, we establish a new family of Gagliardo-Nirenberg inequalities on graded Lie groups that play a crucial role in the proof but which are also of interest on their own: if $G$ is a graded Lie group of homogeneous dimension $Q$ and $a>0$, $1<r<\frac{Q}{a},$ and $1\leq p\leq q\leq \frac{rQ}{Q-ar},$ then we have the following Gagliardo-Nirenberg type inequality $$\|u\|_{L^q(G)}\lesssim \|u\|_{\dot{L}_a^r(G)}^{s}\,\|u\|_{L^p(G)}^{1-s}$$ for $s=\left(\frac1p-\frac1q\right) \left(\frac{a}Q+\frac1p-\frac1r\right)^{-1}\in [0,1]$ provided that $\frac{a}Q+\frac1p-\frac1r\not=0$, where $\dot{L}_{a}^{r}$ is the homogeneous Sobolev space of order $a$ over $L^r$. If $\frac{a}Q+\frac1p-\frac1r=0$, we have $p=q=\frac{rQ}{Q-ar}$, and then the above inequality holds for any $0\leq s\leq 1$.
• We study a semi-linear version of the Skyrme system due to Adkins and Nappi. The objects in this system are maps from $(1+3)$-dimensional Minkowski space into the $3$-sphere and 1-forms on $\mathbb{R}^{1+3}$, coupled via a Lagrangian action. Under a co-rotational symmetry reduction we establish the existence, uniqueness, and unconditional asymptotic stability of a family of stationary solutions $Q_n$, indexed by the topological degree $n \in \mathbb{N} \cup \{0\}$ of the underlying map. We also prove that an arbitrarily large equivariant perturbation of $Q_n$ leads to a globally defined solution that scatters to $Q_n$ in infinite time as long as the critical norm for the solution remains bounded on the maximal interval of existence given by the local Cauchy theory. We remark that the evolution equations are super-critical with respect to the conserved energy.
• In this paper we show the validity, under certain geometric conditions, of Wheeler's thin sandwich conjecture for higher-dimensional theories of gravity. The results we present here are an extension of the results already shown by R. Bartnik and G. Fodor for the 3-dimensional case. In spite of the geometric restrictions needed for the proofs, we show that on any compact $n$-dimensional manifold, with $n$ greater than three, there is an open set in the space of solutions of the constraint equations where all these properties are satisfied.
• Let X be a closed symplectic manifold equipped with a Lagrangian torus fibration. A construction first considered by Kontsevich and Soibelman produces from this data a rigid analytic space, which can be considered as a variant of the T-dual introduced by Strominger, Yau, and Zaslow. We prove that the Fukaya category of X embeds fully faithfully in the derived category of coherent sheaves on the rigid analytic dual, under the technical assumption that the second homotopy group of X vanishes (all known examples satisfy this assumption). The main new tool is the construction and computation of Floer cohomology groups of Lagrangian fibres equipped with topologised infinite-rank local systems that correspond, under mirror symmetry, to the affinoid rings introduced by Tate, equipped with their natural topologies as Banach algebras.
• Great advances have been made in the study of ultra-high energy cosmic rays (UHECR) in the past two decades. These include the discovery of the spectral cut-off near 5 x 10^19 eV and complex structure at lower energies, as well as increasingly precise information about the composition of cosmic rays as a function of energy. Important improvements in techniques, including extensive surface detector arrays and high resolution air fluorescence detectors, have been instrumental in facilitating this progress. We discuss the status of the field, including the open questions about the nature of spectral structure, systematic issues related to our understanding of composition, and emerging evidence for anisotropy at the highest energies. We review prospects for upgraded and future observatories including Telescope Array, Pierre Auger and JEM-EUSO and other space-based proposals, and discuss promising new technologies based on radio emission from extensive air showers produced by UHECR.
• Let $M$ be a compact subset of a superreflexive Banach space. We prove a certain `weak$^\ast$-version' of Pełczyński's property (V) for the Banach space of Lipschitz functions on $M$. As a consequence, we show that its predual, the Lipschitz-free space $\mathcal{F}(M)$, is weakly sequentially complete.
• Recent years have seen growing interest in the streaming instability as a candidate mechanism to produce planetesimals. However, these investigations have been limited to small-scale simulations. We now present the results of a global protoplanetary disk evolution model that incorporates planetesimal formation by the streaming instability, along with viscous accretion, photoevaporation by EUV, FUV, and X-ray photons, dust evolution, the water ice line, and stratified turbulence. Our simulations produce massive (60-130 $M_\oplus$) planetesimal belts beyond 100 au and up to $\sim 20 M_\oplus$ of planetesimals in the middle regions (3-100 au). Our most comprehensive model forms 8 $M_\oplus$ of planetesimals inside 3 au, where they can give rise to terrestrial planets. The planetesimal mass formed in the inner disk depends critically on the timing of the formation of an inner cavity in the disk by high-energy photons. Our results show that the combination of photoevaporation and the streaming instability are efficient at converting the solid component of protoplanetary disks into planetesimals. Our model, however, does not form enough early planetesimals in the inner and middle regions of the disk to give rise to giant planets and super-Earths with gaseous envelopes. Additional processes such as particle pileups and mass loss driven by MHD winds may be needed to drive the formation of early planetesimal generations in the planet forming regions of protoplanetary disks.
• We reanalyse the cosmic microwave background (CMB) Cold Spot (CS) anomaly with particular focus on understanding the bias a mask (applied to remove Galactic and point-source contamination) may introduce. We measure the coldest spot, found by applying the Spherical Mexican Hat Wavelet (SMHW) transform, on 100,000 masked and unmasked simulated CMB maps. The coldest spot in masked maps is the same as in unmasked maps only 48% of the time, suggesting that false minima are more frequently measured in masked maps. Given the temperature profile of the CS, we estimate at 94% the probability that the CS is the coldest spot on the sky, making the comparison to the unmasked coldest spots appropriate. We find that the significance of the CS is approximately 1.9 sigma for an angular scale R=5 degrees of the SMHW (< 2 sigma for all R). Furthermore, the Integrated Sachs-Wolfe (ISW) effect contributes approximately 10% (a mean of approximately -10 microK, but consistent with zero) of the full profile of the coldest spots. This is consistent with recent LambdaCDM ISW reconstructions of line-of-sight voids.
• We study two-body decays of a new neutral pseudoscalar into gauge bosons within the context of the Littlest Higgs model. Concretely, the $\Phi^P \to WW, VV, gg$ processes induced at the one-loop level, with $V=\gamma, Z$, are considered. Since the branching ratios of the $\Phi^P \to VV$ decays turn out to be very suppressed, only the $\Phi^P \to WW, gg$ processes are thoroughly studied. The branching ratios for the $\Phi^P \to gg$ and $\Phi^P \to WW$ decays are of the order of $10^{-4}$ and $10^{-6}$, respectively, for $f$ around 2 TeV, where $f$ represents the global symmetry-breaking scale of the theory. The production cross section of the $\Phi^P$ boson via gluon fusion at the LHC is estimated.
• It is well known that a dense subgroup $G$ of the complex unitary group $U(d)$ cannot be amenable as a discrete group when $d>1$. When $d$ is large enough we give quantitative versions of this phenomenon in connection with certain estimates of random Fourier series on the compact group $\bar G$ that is the closure of $G$. Roughly, we show that if $\bar G$ covers a large enough part of $U(d)$ in the sense of metric entropy then $G$ cannot be amenable. The results are all based on a version of a classical theorem of Jordan that says that if $G$ is finite, or amenable as a discrete group, then $G$ contains an Abelian subgroup with index $e^{o(d^2)}$.
• We consider quantum, nondeterministic and probabilistic versions of the known computational model of Ordered Read-$k$-times Branching Programs, or Ordered Binary Decision Diagrams with repeated tests ($k$-QOBDD, $k$-NOBDD and $k$-POBDD). We show a width hierarchy for the complexity classes of Boolean functions computed by these models and discuss the relation between different variants of $k$-OBDD.
• How can we enable novice users to create effective task plans for collaborative robots? Must there be a tradeoff between generalizability and ease of use? To answer these questions, we conducted a user study with the CoSTAR system, which integrates perception and reasoning into a Behavior Tree-based task plan editor. In our study, we ask novice users to perform simple pick-and-place assembly tasks under varying perception and planning capabilities. Our study shows that users found Behavior Trees to be an effective way of specifying task plans. Furthermore, users were also able to more quickly, effectively, and generally author task plans with the addition of CoSTAR's planning, perception, and reasoning capabilities. Despite these improvements, concepts associated with these capabilities were rated by users as less usable, and our results suggest a direction for further refinement.
• Spin patterns of spiral galaxies can be broadly separated into galaxies with clockwise patterns and galaxies with counterclockwise spin patterns. While the differences between these patterns are visually noticeable, they are a matter of the perspective of the observer, and therefore in a sufficiently large universe no other differences are expected between galaxies with clockwise and counterclockwise spin patterns. Here large datasets of spiral galaxies separated by their spin patterns are used to show that spiral galaxies with clockwise spin patterns are photometrically different from spiral galaxies with counterclockwise patterns. That asymmetry changes based on the direction of observation, such that the observed asymmetry in one hemisphere is aligned with the inverse observed asymmetry in the opposite hemisphere. The results are consistent across different sky surveys (SDSS and PanSTARRS) and analysis methods, showing that the origin of the asymmetry is not a certain telescope system or a flaw in the analysis algorithms.
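The core statistical question above — whether an excess of one spin direction is larger than chance — reduces to testing clockwise/counterclockwise counts against a fair-coin null. A minimal sketch, using the normal approximation to the binomial and purely hypothetical counts (not the paper's numbers):

```python
import math

def asymmetry_significance(n_cw, n_ccw):
    """z-score for the null hypothesis that clockwise and
    counterclockwise spin patterns are equally likely (p = 1/2),
    via the normal approximation to Binomial(n, 1/2)."""
    n = n_cw + n_ccw
    return (n_cw - n / 2) / math.sqrt(n / 4)

# Hypothetical counts, for illustration only:
print(asymmetry_significance(10050, 9950))  # ≈ 0.7071, i.e. not significant
```

For large samples, even a small per-galaxy asymmetry yields a large z-score, which is why the survey-scale datasets mentioned above can distinguish a real photometric asymmetry from noise; the hemisphere-dependence check then guards against instrumental bias.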
• Let $p$ be a prime and let $K$ be a finite extension of $\mathbb{Q}_p$. Let $E/K$ be an elliptic curve with additive reduction. In this paper, we study the topological group structure of the set of points of good reduction of $E(K)$. In particular, if $K/\mathbb{Q}_p$ is unramified, we show how one can read off the topological group structure from the Weierstrass coefficients defining $E$.
• Dictionary learning and component analysis are part of one of the most well-studied and active research fields, at the intersection of signal and image processing, computer vision, and statistical machine learning. In dictionary learning, the current methods of choice are arguably K-SVD and its variants, which learn a dictionary (i.e., a decomposition) for sparse coding via Singular Value Decomposition. In robust component analysis, leading methods derive from Principal Component Pursuit (PCP), which recovers a low-rank matrix from sparse corruptions of unknown magnitude and support. While K-SVD is sensitive to the presence of noise and outliers in the training set, PCP does not provide a dictionary that respects the structure of the data (e.g., images), and requires expensive SVD computations when solved by convex relaxation. In this paper, we introduce a new robust decomposition of images by combining ideas from sparse dictionary learning and PCP. We propose a novel Kronecker-decomposable component analysis which is robust to gross corruption, can be used for low-rank modeling, and leverages separability to solve significantly smaller problems. We design an efficient learning algorithm by drawing links with a restricted form of tensor factorization. The effectiveness of the proposed approach is demonstrated on real-world applications, namely background subtraction and image denoising, by performing a thorough comparison with the current state of the art.
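The "expensive SVD computations" mentioned above come from the proximal operators used in convex-relaxation PCP solvers. A minimal NumPy sketch of the two operators — singular-value thresholding for the low-rank part and entrywise shrinkage for the sparse part — shown as an illustration of that bottleneck, not as the paper's Kronecker-decomposable algorithm:

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the prox operator of the nuclear
    norm. Each iteration of a convex PCP solver needs one full SVD,
    which is the costly step separable modeling aims to shrink."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Entrywise soft thresholding: the prox operator of the l1 norm,
    used to update the sparse-corruption component."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
print(np.linalg.matrix_rank(svt(M, 1.5)), np.linalg.matrix_rank(M))
```

Alternating these two operators (with suitable step sizes) recovers a low-rank plus sparse split of a corrupted matrix; the cost of `svt` on a full image matrix is what motivates solving significantly smaller Kronecker-factored problems instead.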
• Recently it has been understood that certain soft factorization theorems for scattering amplitudes can be written as Ward identities of new asymptotic symmetries. This relationship has been established for soft particles with spins $s > 0$, most notably for soft gravitons and photons. Here we study the remaining case of soft scalars. We show that a class of Yukawa-type theories, where a massless scalar couples to massive particles, have an infinite number of conserved charges. We study the charges in terms of asymptotic fields and explore their associated symmetries, pointing out similarities and differences with the $s>0$ case. Finally, we discuss an identification of the soft scalars studied in this work as Nambu--Goldstone dilatons associated to spontaneous breaking of scale invariance.
• Large gauge symmetries in Minkowski spacetime are often studied in two distinct regimes: either at asymptotic (past or future) times or at spatial infinity. By working in harmonic gauge, we provide a unified description of large gauge symmetries (and their associated charges) that applies to both regimes. At spatial infinity the charges are conserved and interpolate between those defined at the asymptotic past and future. This explains the equality of asymptotic past and future charges, as recently proposed in connection with Weinberg's soft photon theorem.
• We describe a first principles method to calculate scanning tunneling microscopy (STM) images, and compare the results to well-characterized experiments combining STM with atomic force microscopy (AFM). The theory is based on density functional theory (DFT) with a localized basis set, where the wave functions in the vacuum gap are computed by propagating the localized-basis wave functions into the gap using a real-space grid. Constant-height STM images are computed using Bardeen's approximation method, including averaging over the reciprocal space. We consider copper adatoms and single CO molecules adsorbed on Cu(111), scanned with a single-atom copper tip with and without CO functionalization. The calculated images agree with state-of-the-art experiments, where the atomic structure of the tip apex is determined by AFM. The comparison further allows for detailed interpretation of the STM images.
• The study of fluctuation-induced transport is concerned with the directed motion of particles on a substrate when subjected to a fluctuating external field. Work over the last two decades now provides precise clues on how the average transport depends on three fundamental aspects: the shape of the substrate, the correlations of the fluctuations, and the mass, geometry, interactions, and density of the particles. These three aspects, reviewed here, acquire additional relevance because the same notions apply to a bewildering variety of problems at very different scales, from the small nano- or micro-scale, where thermal fluctuation effects dominate, up to very large scales, including ubiquitous cooperative phenomena in granular materials.
• We show that if $X$ is a toric scheme over a regular commutative ring $k$ then the direct limit of the $K$-groups of $X$ taken over any infinite sequence of nontrivial dilations is homotopy invariant. This theorem was previously known for regular commutative rings containing a field. The affine case of our result was conjectured by Gubeladze. We prove analogous results when $k$ is replaced by an appropriate $K$-regular, not necessarily commutative $k$-algebra.
• GaN is a key material for lighting and power electronics. Yet, the carrier transport and ultrafast dynamics that are central in GaN devices are not completely understood. We present first-principles calculations of carrier dynamics in GaN, focusing on electron-phonon (e-ph) scattering and the cooling of hot carriers. We find that e-ph scattering is significantly faster for holes compared to electrons, and that for hot carriers with an initial 0.5$-$1 eV excess energy, holes take a significantly shorter time ($\sim$0.1 ps) to relax to the band edge compared to electrons, which take $\sim$1 ps. The asymmetry in the hot carrier dynamics is shown to originate from the valence band degeneracy, the heavier effective mass of holes compared to electrons, and the details of the coupling to different phonon modes in the valence and conduction bands. The ballistic mean free paths (MFPs) of electrons and holes also differ significantly. The slow cooling of hot electrons and their long MFPs (over 3 nm) are investigated as a possible cause of efficiency droop in GaN light emitting diodes. Taken together, our work provides microscopic insight into the carrier dynamics of GaN, and shows a computational approach to design novel lighting materials.
• Particle filters are a popular and flexible class of numerical algorithms to solve a large class of nonlinear filtering problems. However, standard particle filters with importance weights have been shown to require a sample size that increases exponentially with the dimension D of the state space in order to achieve a certain performance, which precludes their use in very high-dimensional filtering problems. Here, we focus on the dynamic aspect of this curse of dimensionality (COD) in continuous time filtering, which is caused by the degeneracy of importance weights over time. We show that the degeneracy occurs on a time-scale that decreases with increasing D. In order to soften the effects of weight degeneracy, most particle filters use particle resampling and improved proposal functions for the particle motion. We explain why neither of the two can prevent the COD in general. In order to address this fundamental problem, we investigate an existing filtering algorithm based on optimal feedback control that sidesteps the use of importance weights. We use numerical experiments to show that this Feedback Particle Filter (FPF) by Yang et al. (2013) does not exhibit a COD.
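The dimension dependence of weight degeneracy described above can be illustrated with a toy sketch (this is not the FPF of Yang et al.; the model and all parameters are illustrative): a single importance-weighting step against a Gaussian likelihood, where the effective sample size (ESS) collapses as the state dimension D grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def ess(log_w):
    """Effective sample size 1 / sum(w_i^2) of the normalized importance weights."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def one_step_ess(D, N=2000, obs_noise=0.5):
    """One importance-weighting step of a bootstrap-style filter in dimension D:
    particles from a standard-normal prior, weighted by a Gaussian likelihood
    centered on an observation at the origin (all parameters illustrative)."""
    particles = rng.normal(size=(N, D))
    log_w = -0.5 * np.sum(particles ** 2, axis=1) / obs_noise ** 2
    return ess(log_w)

ess_1d = one_step_ess(D=1)    # a large fraction of particles keep weight
ess_20d = one_step_ess(D=20)  # weights collapse onto a handful of particles
```

An ESS near N means the weighted ensemble is informative; an ESS near 1 is the degeneracy the abstract refers to, here visible after a single assimilation step once D is moderately large.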
• We compute the power spectrum at one-loop order in standard perturbation theory for the matter density field to which a standard Lagrangian baryon acoustic oscillation (BAO) reconstruction technique is applied. The BAO reconstruction method corrects the bulk motion associated with the gravitational evolution using the inverse Zel'dovich approximation (ZA) for the smoothed density field. We find that the overall amplitude of the one-loop contributions to the matter power spectrum substantially decreases after reconstruction. The reconstructed power spectrum thereby approaches the initial linear spectrum when the smoothed density field is close enough to linear, i.e., when the smoothing scale $R_s$ is larger than around $10\,h^{-1}$Mpc. For smaller $R_s$, however, the deviation from the linear spectrum becomes significant on large scales ($k < R_s^{-1}$) due to the nonlinearity of the smoothed density field, and the reconstruction is inaccurate. Compared with N-body simulations, we show that the reconstructed power spectrum at one-loop order agrees with the simulations better than the unreconstructed power spectrum does. We also calculate the tree-level bispectrum in standard perturbation theory to investigate non-Gaussianity in the reconstructed matter density field. We show that the amplitude of the bispectrum significantly decreases for small $k$ after reconstruction and that the tree-level bispectrum agrees well with N-body results in the weakly nonlinear regime.
• We carried out multiwavelength (0.7-5 cm), multiepoch (1994-2015) Very Large Array (VLA) observations toward the region enclosing the bright far-IR sources FIR 3 (HOPS 370) and FIR 4 (HOPS 108) in OMC-2. We report the detection of 10 radio sources, seven of them identified as young stellar objects. We image a well-collimated radio jet with a thermal free-free core (VLA 11) associated with the Class I intermediate-mass protostar HOPS 370. The jet presents several knots (VLA 12N, 12C, 12S) of non-thermal radio emission (likely synchrotron from shock-accelerated relativistic electrons) at distances of ~7,500-12,500 au from the protostar, in a region where other shock tracers have been previously identified. These knots are moving away from the HOPS 370 protostar at ~100 km/s. The Class 0 protostar HOPS 108, which itself is detected as an independent, kinematically decoupled radio source, falls in the path of these non-thermal radio knots. These results favor the previously proposed scenario where the formation of HOPS 108 has been triggered by the impact of the HOPS 370 outflow with a dense clump. However, HOPS 108 presents a large proper-motion velocity of ~30 km/s, similar to that of other runaway stars in Orion, whose origin would be puzzling within this scenario. Alternatively, an apparent proper motion could result from changes in the position of the centroid of the source due to blending with nearby extended emission, variations in the source shape, and/or opacity effects.
• Indoor localization and Location Based Services (LBS) can greatly benefit from the wide-scale proliferation of communication devices. The basic requirements of a system that can provide the aforementioned services are energy efficiency, scalability, low cost, wide reception range, high localization accuracy, and availability. Different technologies, such as WiFi, UWB, and RFID, have been leveraged to provide LBS and Proximity Based Services (PBS); however, they do not meet all of the aforementioned requirements. Apple's Bluetooth Low Energy (BLE) based iBeacon solution primarily intends to provide PBS. However, it suffers from poor proximity detection accuracy due to its reliance on the Received Signal Strength Indicator (RSSI), which is prone to multipath fading and drastic fluctuations in the indoor environment. Therefore, in this paper, we present our iBeacon based accurate proximity and indoor localization system. Our two algorithms, Server-Side Running Average (SRA) and Server-Side Kalman Filter (SKF), improve the proximity detection accuracy of iBeacons by 29% and 32% respectively, when compared with Apple's current moving average based approach. We also present our novel cascaded Kalman Filter-Particle Filter (KFPF) algorithm for indoor localization. Our cascaded filter approach uses a Kalman Filter (KF) to reduce the RSSI fluctuation and then inputs the filtered RSSI values into a Particle Filter (PF) to improve the accuracy of indoor localization. Our experimental results, obtained through experiments in a space replicating a real-world scenario, show that our cascaded filter approach outperforms the use of only a PF by 28.16% and 25.59% in 2-Dimensional (2D) and 3-Dimensional (3D) environments respectively, and achieves a localization error as low as 0.70 meters in the 2D environment and 0.947 meters in the 3D environment.
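The idea behind smoothing RSSI before proximity or position estimation can be sketched with a minimal one-dimensional Kalman filter. This is a generic sketch, not the paper's SKF: the process and measurement noise variances `q` and `r`, the beacon level, and the scatter are assumed values for illustration.

```python
import numpy as np

def kalman_smooth_rssi(rssi, q=0.008, r=4.0):
    """1-D Kalman filter for a (near-)static RSSI level.
    q: process noise variance, r: measurement noise variance (assumed values)."""
    x, p = rssi[0], 1.0
    out = []
    for z in rssi:
        p = p + q                # predict: state is a slowly drifting constant
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # correct with the new RSSI measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
true_rssi = -70.0                                   # hypothetical beacon level, dBm
noisy = true_rssi + 6.0 * rng.standard_normal(200)  # multipath-like scatter
filtered = kalman_smooth_rssi(noisy)
```

The filtered stream fluctuates far less than the raw RSSI, which is what makes a downstream distance model or particle filter usable; in the paper's cascade, an output like `filtered` would be fed to the PF stage.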
• We report on the design and construction of a high-energy photon polarimeter for measuring the degree of polarization of a linearly-polarized photon beam. The photon polarimeter uses the process of pair production on an atomic electron (triplet production). The azimuthal distribution of scattered atomic electrons following triplet production yields information regarding the degree of linear polarization of the incident photon beam. The polarimeter, operated in conjunction with a pair spectrometer, uses a silicon strip detector to measure the recoil electron distribution resulting from triplet photoproduction in a beryllium target foil. The analyzing power $\Sigma_A$ for the device using a 75 $\rm{\mu m}$ beryllium converter foil is about 0.2, with a relative systematic uncertainty in $\Sigma_A$ of 1.5%.
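A sketch of how an azimuthal modulation yields the beam polarization: assuming the recoil-electron azimuthal distribution follows $f(\phi) \propto 1 + P\,\Sigma_A \cos 2\phi$ (the sign and phase convention, and the numerical values of $P$ and $\Sigma_A$, are assumed here), the modulation amplitude can be estimated from the Fourier moment $2\langle\cos 2\phi\rangle$.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_phi(n, a):
    """Draw azimuthal angles from f(phi) ∝ 1 + a*cos(2*phi) by rejection sampling."""
    out = []
    while len(out) < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
        u = rng.uniform(0.0, 1.0 + abs(a), size=n)   # envelope = max of f
        out.extend(phi[u < 1.0 + a * np.cos(2.0 * phi)])
    return np.array(out[:n])

# Modulation amplitude a = P * Sigma_A (polarization times analyzing power).
P, sigma_A = 0.8, 0.2          # hypothetical values for illustration
phi = sample_phi(200_000, P * sigma_A)

# Under f, E[cos 2phi] = a/2, so 2*<cos 2phi> estimates the amplitude a.
a_hat = 2.0 * np.mean(np.cos(2.0 * phi))
P_hat = a_hat / sigma_A
```

With $\Sigma_A \approx 0.2$, a given relative uncertainty on the amplitude translates into a five-times-larger absolute uncertainty on $P$, which is why the 1.5% systematic on $\Sigma_A$ quoted above matters.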
• Applying Dixon's general equations of motion for extended bodies, we compute Papapetrou's equations for an extended test body in static and isotropic metrics. We incorporate the force and torque terms which involve multipole moments, beyond the dipole moment, from the energy-momentum tensor. We obtain vector-form equations for both the Corinaldesi-Papapetrou and Tulczyjew-Dixon spin supplementary conditions. An expanded effective mass, including interactions between the structure of the body and the gravitational field, is also found.
• Formation of dressed light-matter states in optical structures, manifested as Rabi splitting of the eigenenergies of a coupled system, is one of the key effects in quantum optics. In pursuing this regime with semiconductors, light is usually made to interact with excitons $-$ electrically neutral quasiparticles of semiconductors, meanwhile interactions with charged three-particle states $-$ trions $-$ have received little attention. Here, we report on strong interaction between plasmons in silver nanoprisms and charged excitons $-$ trions $-$ in monolayer tungsten disulphide (WS$_{2}$). We show that the plasmon-exciton interactions in this system can be efficiently tuned by controlling the charged versus neutral exciton contribution to the coupling process. In particular, we show that a stable trion state emerges and couples efficiently to the plasmon resonance at low temperature by forming three bright intermixed plasmon-exciton-trion polariton states. Our findings open up a possibility to exploit electrically charged trion polaritons $-$ previously unexplored mixed states of light and matter in nanoscale hybrid plasmonic systems.
• A classic result of Asplund and Grünbaum states that intersection graphs of axis-aligned rectangles in the plane are $\chi$-bounded. This theorem can be equivalently stated in terms of path-decompositions as follows: There exists a function $f:\mathbb{N}\to\mathbb{N}$ such that every graph that has two path-decompositions such that each bag of the first decomposition intersects each bag of the second in at most $k$ vertices has chromatic number at most $f(k)$. Recently, Dujmović, Joret, Morin, Norin, and Wood asked whether this remains true more generally for two tree-decompositions. In this note we provide a negative answer: There are graphs with arbitrarily large chromatic number for which one can find two tree-decompositions such that each bag of the first decomposition intersects each bag of the second in at most two vertices. Furthermore, this remains true even if one of the two decompositions is restricted to be a path-decomposition. This is shown using a construction of triangle-free graphs with unbounded chromatic number due to Burling, which we believe should be more widely known.
• We introduce the Suggest-and-Improve framework for general nonconvex quadratically constrained quadratic programs (QCQPs). Using this framework, we generalize a number of known methods and provide heuristics to get approximate solutions to QCQPs for which no specialized methods are available. We also introduce an open-source Python package QCQP, which implements the heuristics discussed in the paper.
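A minimal illustration of the Suggest-and-Improve pattern (a generic sketch, not the QCQP package's API): for the nonconvex QCQP $\min_x x^T P x$ subject to $x^T x = 1$, the suggest step proposes a cheap feasible point and the improve step runs shifted power iteration, a local method that preserves feasibility while never increasing the objective.

```python
import numpy as np

rng = np.random.default_rng(3)

def suggest(n):
    """Suggest: a cheap candidate -- here a random point on the unit sphere."""
    x = rng.standard_normal(n)
    return x / np.linalg.norm(x)

def improve(P, x, iters=2000):
    """Improve: shifted power iteration.  M = shift*I - P is positive definite,
    so each step keeps ||x|| = 1 and cannot increase the objective x^T P x."""
    shift = np.linalg.norm(P, 2) + 1.0
    for _ in range(iters):
        x = shift * x - P @ x
        x /= np.linalg.norm(x)
    return x

n = 8
A = rng.standard_normal((n, n))
P = (A + A.T) / 2.0            # symmetric (indefinite) objective matrix
x0 = suggest(n)
x_star = improve(P, x0)
obj0, obj = x0 @ P @ x0, x_star @ P @ x_star
```

For this particular problem the improve phase converges to the global optimum (the minimum eigenvector of $P$); for general QCQPs the framework only guarantees a feasible point whose objective is no worse than the suggestion.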
• Let $(\mathbf{B}, \|\cdot\|)$ be a real separable Banach space. Let $\varphi(\cdot)$ and $\psi(\cdot)$ be two continuous and increasing functions defined on $[0, \infty)$ such that $\varphi(0) = \psi(0) = 0$, $\lim_{t \rightarrow \infty} \varphi(t) = \infty$, and $\frac{\psi(\cdot)}{\varphi(\cdot)}$ is a nondecreasing function on $[0, \infty)$. Let $\{V_{n};~n \geq 1 \}$ be a sequence of independent and symmetric $\mathbf{B}$-valued random variables. In this note, we establish a probability inequality for sums of independent $\mathbf{B}$-valued random variables by showing that for every $n \geq 1$ and all $t \geq 0$, $\mathbb{P}\left(\left\|\sum_{i=1}^n V_i \right\| > t b_n \right) \leq 4\, \mathbb{P}\left(\left\|\sum_{i=1}^n \varphi\left(\psi^{-1}(\|V_i\|)\right) \frac{V_i}{\|V_i\|} \right\| > t a_n \right) + \sum_{i=1}^n \mathbb{P}\left(\|V_i\| > b_n \right),$ where $a_{n} = \varphi(n)$ and $b_{n} = \psi(n)$, $n \geq 1$. As an application of this inequality, we establish what we call a comparison theorem for the weak law of large numbers for independent and identically distributed $\mathbf{B}$-valued random variables.
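A Monte Carlo sanity check of the inequality in one concrete instance (illustrative only, not a proof): take $\mathbf{B} = \mathbb{R}$, $\varphi(t) = t$, $\psi(t) = t^2$ (so $\psi/\varphi$ is nondecreasing and $\varphi(\psi^{-1}(x)) = \sqrt{x}$), and symmetric standard Cauchy variables $V_i$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Instance: B = R, phi(t) = t, psi(t) = t**2, hence phi(psi^{-1}(x)) = sqrt(x),
# a_n = phi(n) = n, b_n = psi(n) = n**2; the V_i are symmetric standard Cauchy.
n, t, trials = 5, 0.1, 200_000
a_n, b_n = float(n), float(n) ** 2

V = rng.standard_cauchy(size=(trials, n))

# Left-hand side: P(|V_1 + ... + V_n| > t * b_n).
lhs = np.mean(np.abs(V.sum(axis=1)) > t * b_n)

# Right-hand side: the transformed sum uses phi(psi^{-1}(|V_i|)) * V_i/|V_i|
# = sign(V_i) * sqrt(|V_i|); the tail sum equals n * P(|V_1| > b_n) since iid.
transformed = (np.sign(V) * np.sqrt(np.abs(V))).sum(axis=1)
rhs = 4.0 * np.mean(np.abs(transformed) > t * a_n) \
      + n * np.mean(np.abs(V[:, 0]) > b_n)
```

In this instance the left-hand side is a nontrivial probability and the bound holds with ample slack, consistent with the theorem.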

gae spedalieri Mar 13 2017 14:13 UTC

1) Sorry but this is false.

1a) That analysis is specifically for reducing QECC protocol to an entanglement distillation protocol over certain class of discrete variable channels. Exactly as in BDSW96. Task of the protocol is changed in the reduction.

1b) The simulation is not via a general LOCC b

...(continued)
Siddhartha Das Mar 13 2017 13:22 UTC

We feel that we have cited and credited previous works appropriately in our paper. To clarify:

1) The LOCC simulation of a channel and the corresponding adaptive reduction can be found worked out in full generality in the 2012 Master's thesis of Muller-Hermes. We have cited the original paper BD

...(continued)
gae spedalieri Mar 13 2017 08:56 UTC

This is one of those papers where the contribution of previous literature is omitted and not fairly represented.

1- the LOCC simulation of quantum channels (not necessarily teleportation based) and the corresponding general reduction of adaptive protocols was developed in PLOB15 (https://arxiv.org/

...(continued)
Noon van der Silk Mar 08 2017 04:45 UTC

I feel that while the proliferation of GUNs is unquestionably a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent

...(continued)
Qian Wang Mar 07 2017 17:21 UTC

"To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics."

Christopher Chamberland Mar 02 2017 18:48 UTC

A good paper for learning about exRec's is this one https://arxiv.org/abs/quant-ph/0504218. Also, rigorous threshold lower bounds are obtained using an adversarial noise model approach.

Anirudh Krishna Mar 02 2017 18:40 UTC

Here's a link to a lecture from Dan Gottesman's course at PI about exRecs.
http://pirsa.org/displayFlash.php?id=07020028

You can find all the lectures here:
http://www.perimeterinstitute.ca/personal/dgottesman/QECC2007/index.html

Ben Criger Mar 02 2017 08:58 UTC

Good point, I wish I knew more about ExRecs.

Robin Blume-Kohout Feb 28 2017 09:55 UTC

I totally agree -- that part is confusing. It's not clear whether "arbitrary good precision ... using a limited amount of hardware" is supposed to mean that arbitrarily low error rates can be achieved with codes of fixed size (clearly wrong) or just that the resources required to achieve arbitraril

...(continued)
James Wootton Feb 28 2017 08:54 UTC

I think I was mostly reacting to where he tries to sell the importance of the work.

>Fault tolerant theorems show that an arbitrary good precision can be obtained using a limited amount of hardware...we unveil the role of an implicit assumption made in these mathematical theorems: the ability to

...(continued)