- We present the solution of a century-old problem known as Schur Number Five: What is the largest (natural) number $n$ such that there exists a five-coloring of the positive integers up to $n$ without a monochromatic solution of the equation $a + b = c$? We obtained the solution, $n = 160$, by encoding the problem into propositional logic and applying massively parallel satisfiability solving techniques to the resulting formula. We constructed and validated a proof of the solution to increase trust in the correctness of the multi-CPU-year computations. The proof is two petabytes in size and was certified using a formally verified proof checker, demonstrating that any result by satisfiability solvers---no matter how large---can now be validated using highly trustworthy systems.
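The underlying combinatorial question is easy to state in code. The following Python sketch computes small Schur numbers by backtracking search; it is purely illustrative (the paper's computation instead encodes the problem into propositional logic for SAT solvers), and the function name `schur` is our own:

```python
def schur(k):
    """Largest n admitting a k-coloring of 1..n with no monochromatic a+b=c,
    found by brute-force backtracking (feasible only for small k)."""
    best = 0

    def extend(colors):          # colors[i] is the color of the integer i+1
        nonlocal best
        best = max(best, len(colors))
        n = len(colors) + 1      # next integer to color
        for c in range(k):
            # coloring n with c is safe iff no a+b=n has a and b both colored c
            if any(colors[a - 1] == c and colors[n - a - 1] == c
                   for a in range(1, n // 2 + 1)):
                continue
            colors.append(c)
            extend(colors)
            colors.pop()

    extend([])
    return best
```

For one color the answer is 1, for two it is 4, and for three it is 13; the five-color value $n = 160$ reported in the abstract is far beyond this kind of brute force.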
- Nov 23 2017 cs.CR arXiv:1711.08075v1 Growing concern for individual privacy, driven by an increased public awareness of the degree to which many of our electronic activities are tracked by interested third parties (e.g. Google knows what I am thinking before I finish entering my search query), is driving the development of anonymizing technologies (e.g. Tor). The coming mass migration to IPv6 as the primary transport of Internet traffic promises to make one such technology, end-to-end host-based encryption, more readily available to the average user. In a world where end-to-end encryption is ubiquitous, what can replace the existing models for network intrusion detection? How can network administrators and operators, responsible for securing networks against hostile activity, protect a network they cannot see? In an encrypted world, signature-based event detection is unlikely to prove useful. To secure a network in such an environment, without trampling the privacy afforded to users by end-to-end encryption, our threat detection model needs to evolve from signature-based detection to a heuristic model that flags deviations from normal network-wide behavior for further investigation. In this paper we present such a heuristic model and test its effectiveness for detecting intrusions in an entirely encrypted network environment. Our results demonstrate the network intrusion detection system's ability to monitor a network carrying only host-to-host encrypted traffic. This work indicates that a broad perspective change is required: network security models need to evolve from endeavoring to define attack signatures to describing what the network looks like under normal conditions and searching for deviations from the norm.
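The "flag deviations from normal behavior" idea can be illustrated with a toy baseline model. The sketch below flags traffic samples that deviate strongly from a trailing-window baseline; it is a stand-in for the paper's heuristic model, and the function name, window size, and threshold are all our own assumptions:

```python
import statistics

def flag_anomalies(series, window=10, z_thresh=3.0):
    """Flag indices whose value deviates more than z_thresh standard
    deviations from the trailing-window baseline -- a toy stand-in for
    'normal network-wide behavior' (e.g. series = bytes/sec per interval)."""
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mu) / sigma > z_thresh:
            flags.append(i)
    return flags
```

Note that this kind of detector needs no visibility into packet payloads, which is the point in a fully encrypted network: only aggregate behavior is modeled.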
- Magnetic particle imaging (MPI) is a promising new in-vivo medical imaging modality in which distributions of super-paramagnetic nanoparticles are tracked based on their response in an applied magnetic field. In this paper we provide a mathematical analysis of the modeled MPI operator in the univariate situation. We provide a Hilbert space setup, in which the MPI operator is decomposed into simple building blocks and in which these building blocks are analyzed with respect to their mathematical properties. In turn, we obtain an analysis of the MPI forward operator and, in particular, of its ill-posedness properties. We further show that the singular values of the MPI core operator decay exponentially. We complement our analytic results by some numerical studies which, in particular, suggest a rapid decay of the singular values of the MPI operator.
- Nov 23 2017 cond-mat.str-el cond-mat.mtrl-sci arXiv:1711.08073v1 Using first-principles calculation methods, we reveal a series of phase transitions as a function of electron doping in single-layer 1T$'$-MoTe$_2$ and 1T$'$-WTe$_2$, which exhibit quantum spin Hall (QSH) edge states without doping. As doping increases, we show that a phonon-mediated superconducting phase appears first and is followed by a charge density wave (CDW) phase with a nonsymmorphic lattice symmetry. The newly found CDW phase exhibits Dirac or Weyl energy bands with spin-orbit coupling in the case of fractional band filling, and re-enters a topological insulating phase when the bands are fully filled. The robust resurgence of the QSH state coexisting with the CDW phase is shown to originate from band inversions induced by the nonsymmorphic lattice distortion through the strong electron-phonon interaction, suggesting a realization of various interfacial states between superconducting, density wave, and topological states on a two-dimensional crystal by doping alone.
- In the context of model uncertainty and selection, empirical Bayes procedures can have undesirable properties such as extreme estimates of inclusion probabilities (Scott and Berger, 2010) or inconsistency under the null model (Liang et al., 2008). To avoid these issues, we define empirical Bayes priors with constraints that ensure that the estimates of the hyperparameters are at least as "vague" as those of proper default priors. In our examples, we observe that constrained EB procedures are better behaved than their unconstrained counterparts and that the Bayesian Information Criterion (BIC) is similar to an intuitively appealing constrained EB procedure.
- Nov 23 2017 math.RA arXiv:1711.08071v1 We study the Zariski cancellation problem for noncommutative algebras that are not necessarily domains.
- Nov 23 2017 math.AP arXiv:1711.08070v1 In this paper we recall the contributions of Hermann Weyl and André Marchaud to the definition of the notion of fractional derivative. In addition, we discuss some relationships between the fractional Laplace operator and the Marchaud derivative, with a view toward generalizing these objects to different fields of mathematics.
- Nov 23 2017 math.DS arXiv:1711.08069v1 This paper focuses on the construction of periodic solutions of nonlinear beam equations on the $d$-dimensional tori. For a large set of frequencies, we demonstrate that an equivalent form of the nonlinear equations can be obtained by a para-differential conjugation. Given non-resonance conditions on each finite-dimensional subspace, it is shown that periodic solutions can be constructed for the block-diagonal equation by a classical iteration scheme.
- Policy optimization methods have shown great promise in solving complex reinforcement and imitation learning tasks. While model-free methods are broadly applicable, they often require many samples to optimize complex policies. Model-based methods greatly improve sample-efficiency but at the cost of poor generalization, requiring a carefully handcrafted model of the system dynamics for each task. Recently, hybrid methods have been successful in trading off applicability for improved sample-complexity. However, these have been limited to continuous action spaces. In this work, we present a new hybrid method based on an approximation of the dynamics as an expectation over the next state under the current policy. This relaxation allows us to derive a novel hybrid policy gradient estimator, combining score function and pathwise derivative estimators, that is applicable to discrete action spaces. We show significant gains in sample complexity, ranging between $1.7$ and $25\times$, when learning parameterized policies on Cart Pole, Acrobot, Mountain Car and Hand Mass. Our method is applicable to both discrete and continuous action spaces, when competing pathwise methods are limited to the latter.
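As background for the hybrid estimator mentioned above, the score-function (REINFORCE) half can be sketched in a few lines. The toy below estimates the gradient of an expected reward under a Bernoulli policy via the log-derivative trick; it illustrates only the generic score-function estimator, not the paper's hybrid method, and all names and parameters are illustrative:

```python
import math
import random

def score_function_grad(theta, f, n=100_000, seed=0):
    """Monte-Carlo (REINFORCE) estimate of d/dtheta E_{a ~ Bernoulli(p)}[f(a)],
    with p = sigmoid(theta).  Uses grad = E[f(a) * d log pi(a)/d theta],
    where for this parametrization d log pi(a)/d theta = a - p."""
    rng = random.Random(seed)
    p = 1.0 / (1.0 + math.exp(-theta))
    total = 0.0
    for _ in range(n):
        a = 1 if rng.random() < p else 0
        total += f(a) * (a - p)
    return total / n
```

For f(a) = a the exact gradient is p(1 − p); at theta = 0 this equals 0.25, which the Monte-Carlo estimate approaches as n grows. Score-function estimators like this work for discrete actions, whereas pathwise estimators require differentiating through the sampled action — the gap the paper's hybrid method addresses.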
- Identification and quantification of trace gas sources is a major challenge for understanding and regulating air quality and greenhouse gas emissions. Current approaches either provide continuous but localized monitoring, or quasi-instantaneous 'snapshot-in-time' regional monitoring. There is a need for emissions detection that provides both continuous and regional coverage, because sources and sinks can be episodic and spatially variable. We field deploy a dual frequency comb laser spectrometer for the first time, enabling an observing system that provides continuous detection of trace gas sources over multiple-square-kilometer regions. Field tests simulating methane emissions from oil and gas production demonstrate detection and quantification of a 1.6 g min^-1 source (approximate emissions from a small pneumatic valve) from a distance of 1 km, and the ability to discern two leaks among a field of many potential sources. The technology achieves the goal of detecting, quantifying, and attributing emissions sources continuously through time, over large areas, and at emissions rates ~1000x lower than current regional approaches. It therefore provides a useful tool for monitoring and mitigating undesirable sources and closes a major information gap in the atmospheric sciences.
- Hippocampal dentate granule cells are among the few neuronal cell types generated throughout adult life in mammals. In the normal brain, new granule cells are generated from progenitors in the subgranular zone and integrate in a typical fashion. During the development of epilepsy, granule cell integration is profoundly altered. The new cells migrate to ectopic locations and develop misoriented basal dendrites. Although it has been established that these abnormal cells are newly generated, it is not known whether they arise ubiquitously throughout the progenitor cell pool or are derived from a smaller number of bad actor progenitors. To explore this question, we conducted a clonal analysis study in mice expressing the Brainbow fluorescent protein reporter construct in dentate granule cell progenitors. Mice were examined 2 months after pilocarpine-induced status epilepticus, a treatment that leads to the development of epilepsy. Brain sections were rendered translucent so that entire hippocampi could be reconstructed and all fluorescently labeled cells identified. Our findings reveal that a small number of progenitors produce the majority of ectopic cells following status epilepticus, indicating that either the affected progenitors or their local microenvironments have become pathological. By contrast, granule cells with basal dendrites were equally distributed among clonal groups. This indicates that these progenitors can produce normal cells and suggests that global factors sporadically disrupt the dendritic development of some new cells. Together, these findings strongly predict that distinct mechanisms regulate different aspects of aberrant granule cell integration in the epileptic brain.
- The consequences of coupling magnetic and elastic degrees of freedom, where spins and deformations are carried by point-like objects subject to local interactions, are studied theoretically and by detailed numerical simulations. From the constrained Lagrangians we derive consistent equations of motion for the coupled dynamical variables. In order to probe the dynamics of such a system, we consider external perturbations, such as spin transfer torques for the magnetic part and homogeneous stresses for the elastic part, together with the corresponding damping terms. This approach is applied to the study of ultrafast switching processes in anti-ferromagnetic systems, which have recently attracted attention as candidates for anti-ferromagnetic spintronic devices. Our strategy is then checked in simple, but instructive, situations. We carried out numerical experiments to study, in particular, how the magnetostrictive coupling and external stresses affect the nature of the switching processes in a prototype anti-ferromagnetic material.
- In recent decades, the theory of first passage percolation has become a highly important area of probability theory. In this work, we observe what can be said about the corresponding structure if we forget about the probability measure defined on the product space of edges and simply consider topology in terms of residuality. We focus on interesting questions arising in the probabilistic setup that make sense in this setting, too. We will see that certain classical almost sure events, such as the existence of geodesics, have residual counterparts, while the notion of the limit shape or time constants becomes as chaotic as possible.
- Nov 23 2017 cs.DC arXiv:1711.08060v1 The biggest overhead for the instantiation of a virtual machine in a cloud infrastructure is the time spent in transferring the image of the virtual machine into the physical node that executes it. This overhead becomes larger for requests composed of several virtual machines to be started concurrently, and the illusion of flexibility and elasticity usually associated with the cloud computing model may vanish. This poses a problem for both the resource providers and the software developers, since tackling those overheads is not a trivial issue. In this work we implement and evaluate several improvements for the virtual machine image distribution problem in a cloud infrastructure and propose a method, based on BitTorrent and local caching of virtual machine images, that reduces the transfer time when large requests are made.
- Nov 23 2017 cond-mat.quant-gas physics.optics arXiv:1711.08059v1 We present and analyze a protocol in which polaritons in a non-coplanar optical cavity form fractional quantum Hall states. We model the formation of these states and present techniques for subsequently creating anyons and measuring their fractional exchange statistics. In this protocol, we use a rapid adiabatic passage scheme to sequentially add polaritons to the system, such that the system is coherently driven from the $n$- to the $(n+1)$-particle Laughlin state. Quasiholes are created by slowly moving local pinning potentials in from outside the cloud. They are braided by dragging the pinning centers around one another, and the resulting phases are measured interferometrically. The most technically challenging issue with implementing our procedure is that maintaining adiabaticity and coherence requires that the two-particle interaction energy $V_0$ be sufficiently large compared to the single-polariton decay rate $\gamma$: $V_0 /\gamma \gg 10 N^2 \ln N$, where $N$ is the number of particles in the target state. Current experiments have $V_0 /\gamma\sim 50$.
- We propose using cascaded classifiers for a keyword spotting (KWS) task on narrow-band (NB), 8 kHz audio acquired in non-IID environments --- a more challenging task than most state-of-the-art KWS systems face. We present a model that incorporates Deep Neural Networks (DNNs), cascading, multiple-feature representations, and multiple-instance learning. The cascaded classifiers handle the task's class imbalance and reduce power consumption on computationally-constrained devices via early termination. The KWS system achieves a false negative rate of 6% at an hourly false positive rate of 0.75.
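The early-termination idea behind a cascade is simple to sketch: run cheap stages first and reject as soon as one stage's score is too low, so most easy negatives never reach the expensive stages. This is a generic illustration of cascading, not the paper's DNN model; the stage functions and thresholds below are placeholders:

```python
def cascade_predict(x, stages):
    """Evaluate classifier stages in order; each stage is a
    (score_fn, threshold) pair.  Reject (return 0) at the first stage whose
    score falls below its threshold -- skipping the cost of later, heavier
    stages -- and accept (return 1) only if every stage passes."""
    for score_fn, threshold in stages:
        if score_fn(x) < threshold:
            return 0
    return 1
```

Ordering stages from cheapest to most expensive is what yields the power savings on constrained devices: the per-input cost is dominated by the first rejecting stage.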
- In this work, we consider the task of classifying binary positive-unlabeled (PU) data. Existing discriminative learning based PU models attempt to seek an optimal re-weighting strategy for the U data so that a decent decision boundary can be found. In contrast, we provide a totally new paradigm for attacking the binary PU task from the perspective of generative learning, by leveraging the powerful generative adversarial networks (GANs). Our generative positive-unlabeled (GPU) learning model is devised to express the P and N data distributions. It comprises three discriminators and two generators with different roles, producing both positive and negative samples that resemble those coming from the real training dataset. Even with rather limited labeled P data, our GPU framework is capable of capturing the underlying P and N data distributions with infinite realistic sample streams. In this way, an optimal classifier can be trained on the generated samples using very deep neural networks (DNNs). Moreover, a useful variant of GPU is also introduced for semi-supervised classification.
- Lightning induced radio emission has been observed on solar system planets. Lecavelier des Etangs et al. [2013] carried out radio transit observations of the exoplanet HAT-P-11b, and suggested a tentative detection of a radio signal. Here, we explore the possibility of the radio emission having been produced by lightning activity on the exoplanet, following and expanding the work of Hodosán et al. [2016a]. After a summary of our previous work [Hodosán et al. 2016a], we extend it with a parameter study. The lightning activity of the hypothetical storm is largely dependent on the radio spectral roll-off, $n$, and the flash duration, $\tau_\mathrm{fl}$. The best-case scenario would require a flash density of the same order of magnitude as can be found during volcanic eruptions on Earth. On average, flash densities $3.8 \times 10^6$ times larger than those of the Earth storms with the largest lightning activity are needed to produce the observed signal from HAT-P-11b. Combined with the results of Hodosán et al. [2016a] regarding the chemical effects of planet-wide thunderstorms, we conclude that future radio and infrared observations may lead to lightning detection on planets outside the solar system.
- Nov 23 2017 math.AP arXiv:1711.08047v1 For non-monotone single- and two-population time-dependent Mean-Field Game systems we obtain the existence of branches of solutions that exhibit oscillatory behaviour in time. The existence of such branches is derived using local and global bifurcation methods that rely on the analysis of the space-time Fourier coefficients of the linearized problem. Numerical analysis is performed on two different models to observe the oscillatory behaviour of solutions predicted by bifurcation theory, and to study further properties of branches far away from bifurcation points.
- C. G. Escudero, F. R. Faifer, A. V. Smith Castelli, J. C. Forte, L. A. Sesto, N. M. González, M. C. Scalia Nov 23 2017 astro-ph.GA arXiv:1711.08046v1 We used deep Gemini-South/GMOS g'r'i'z' images to study the globular cluster (GC) system of the massive elliptical galaxy NGC 1395, located in the Eridanus supergroup. The photometric analysis of the GC candidates reveals a clear colour bimodality, indicating the presence of "blue" and "red" GC subpopulations. While a negative radial colour gradient is detected in the projected spatial distribution of the red GCs, the blue GCs display a shallow colour gradient. The blue GCs also display a remarkably shallow and extended surface density profile, suggesting a significant accretion of low-mass satellites in the outer halo of the galaxy. In addition, the slope of the projected spatial distribution of the blue GCs in the outer regions of the galaxy is similar to that of the X-ray halo emission. Integrating the profile of the projected spatial distribution of the GCs up to 165 kpc, we estimated a total GC population of 6000$\pm$1100 and a specific frequency of $S_N$=7.4$\pm$1.4. Regarding NGC 1395 itself, the analysis of the deep Gemini/GMOS images shows a low surface brightness umbrella-like structure indicating at least one recent merger event. Through relations recently published in the literature, we obtained global parameters such as $M_\mathrm{stellar}=9.32\times10^{11}$ M$_\odot$ and $M_h=6.46\times10^{13}$ M$_\odot$. Using public spectroscopic data, we derive stellar population parameters of the central region of the galaxy via the full spectral fitting technique. We find that this region seems to be dominated by an old stellar population, in contrast to findings of young stellar populations in the literature.
- The technology market continues a rapid growth phase in which resource providers and Cloud Management Frameworks position themselves by offering ad-hoc solutions (in terms of management interfaces, information discovery, or billing) in an attempt to differentiate from competitors; as a result, these solutions remain incompatible with one another in more complex scenarios such as federated clouds. Grasping the interoperability problems present in current infrastructures is therefore a must, tackled by studying how existing and emerging standards could enhance user experience in the cloud ecosystem. In this paper we review the current open challenges in Infrastructure as a Service cloud interoperability and federation, and point to the potential standards that should alleviate these problems.
- Lightning and thunderclouds are the most dramatic natural particle accelerators on Earth. Relativistic electrons accelerated by electric fields therein emit bremsstrahlung gamma rays, which have been detected by ground observations, by airborne detectors, and as terrestrial gamma-ray flashes (TGFs) from space. The energy of the gamma rays is sufficiently high to potentially invoke atmospheric photonuclear reactions $^{14}$N$(\gamma, \mathrm{n})^{13}$N, which would produce neutrons and eventually positrons via beta-plus decay of the generated unstable radioactive isotopes, especially $^{13}$N. However, no clear observational evidence for the reaction has been reported to date. Here we report the first detection of neutron and positron signals from lightning with a ground observation. During a thunderstorm on 6 February 2017 in Japan, a TGF-like intense flash (within 1 ms) was detected at our monitoring sites 0.5-1.7 km away from the lightning. The subsequent initial burst quickly subsided with an exponential decay constant of 40-60 ms, followed by a prolonged line emission at about 0.511 megaelectronvolts (MeV), lasting for a minute. The observed decay timescale and spectral cutoff at about 10 MeV of the initial emission are well explained by de-excitation gamma rays from nuclei excited by neutron capture. The centre energy of the prolonged line emission corresponds to electron-positron annihilation, and hence is a conclusive indication of positrons produced after the lightning. Our detection of neutrons and positrons is unequivocal evidence that natural lightning triggers photonuclear reactions. No other natural event on Earth is known to trigger photonuclear reactions. This discovery places lightning as only the second known natural channel on Earth, after the atmospheric cosmic-ray interaction, in which isotopes such as $^{13}$C, $^{14}$C, and $^{15}$N are produced.
- We develop a comprehensive mathematical framework for polynomial jump-diffusions, which nest affine jump-diffusions and have broad applications in finance. We show that the polynomial property is preserved under exponentiation and subordination. We present a generic method for option pricing based on moment expansions. As an application, we introduce a large class of novel financial asset pricing models that are based on polynomial jump-diffusions.
- Nov 23 2017 cs.DS arXiv:1711.08041v1 In the Set Cover problem, the input is a ground set of $n$ elements and a collection of $m$ sets, and the goal is to find the smallest sub-collection of sets whose union is the entire ground set. In spite of extensive effort, the fastest algorithm known for the general case runs in time $O(mn2^n)$ [Fomin et al., WG 2004]. In 2012, as progress seemed to halt, Cygan et al. [TALG 2016] put forth the Set Cover Conjecture (SeCoCo), which asserts that for every fixed $\varepsilon>0$, no algorithm with runtime $2^{(1-\varepsilon)n} poly(m)$ can solve Set Cover, even if the input sets are of arbitrarily large constant size. We propose a weaker conjecture, which we call Log-SeCoCo, that is similar to SeCoCo but allows input sets of size $O(\log n)$. To support Log-SeCoCo, we show that its failure implies an algorithm faster than currently known for the famous Directed Hamiltonicity problem. Even though Directed Hamiltonicity has been studied extensively for over half a century, no algorithm significantly faster than $2^n poly(n)$ is known for it. In fact, we show a fine-grained reduction to Log-SeCoCo from a generalization of Directed Hamiltonicity, known as the nTree problem, which too can be solved in time $2^n poly(n)$ [Koutis and Williams, TALG 2016]. We further show an equivalence between solving the parameterized versions of Set Cover and of nTree significantly faster than their currently known runtimes. Finally, we show that even moderate runtime improvements for Set Cover with bounded-size sets would imply new algorithms for nTree and for Directed Hamiltonicity. Our technical contribution is to reinforce Log-SeCoCo (and arguably SeCoCo) by reductions from other famous problems with known algorithmic barriers, and we hope it will lead to more results in this vein, particularly reinforcing the Strong Exponential-Time Hypothesis (SETH) by reductions from other well-known problems.
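For intuition on the exponential-time regime the conjecture addresses, here is the classic dynamic program over subsets of the ground set, running in $O(m 2^n)$ time — close in spirit to the $O(mn2^n)$ bound cited above. The code and names are our own illustration, not from the paper:

```python
def min_set_cover(universe, sets):
    """Exact minimum Set Cover via dynamic programming over subsets of the
    ground set: dp[mask] = fewest sets whose union covers (at least) the
    elements in mask.  Runs in O(m * 2^n) -- exponential in n, as in the
    algorithms the Set Cover Conjecture concerns."""
    idx = {e: i for i, e in enumerate(universe)}
    masks = [sum(1 << idx[e] for e in s if e in idx) for s in sets]
    full = (1 << len(universe)) - 1
    INF = float("inf")
    dp = [INF] * (full + 1)
    dp[0] = 0
    for mask in range(full + 1):
        if dp[mask] == INF:
            continue
        for m in masks:            # try extending the cover by each set
            if dp[mask] + 1 < dp[mask | m]:
                dp[mask | m] = dp[mask] + 1
    return dp[full]
```

SeCoCo asserts that the $2^n$ factor in such algorithms cannot be improved to $2^{(1-\varepsilon)n}$ for any fixed $\varepsilon > 0$.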
- Nov 23 2017 cs.CV arXiv:1711.08040v1We present an approach for identifying the most walkable direction for navigation using a hand-held camera. Our approach extracts semantically rich contextual information from the scene using a custom encoder-decoder architecture for semantic segmentation and models the spatial and temporal behavior of objects in the scene using a spatio-temporal graph. The system learns to minimize a cost function over the spatial and temporal object attributes to identify the most walkable direction. We construct a new annotated navigation dataset collected using a hand-held mobile camera in an unconstrained outdoor environment, which includes challenging settings such as highly dynamic scenes, occlusion between objects, and distortions. Our system achieves an accuracy of 84% on predicting a safe direction. We also show that our custom segmentation network is both fast and accurate, achieving mIOU (mean intersection over union) scores of 81 and 44.7 on the PASCAL VOC and the PASCAL Context datasets, respectively, while running at about 21 frames per second.
- Alternating minimization heuristics seek to solve a (difficult) global optimization task through iteratively solving a sequence of (much easier) local optimization tasks on different parts (or blocks) of the input parameters. While popular and widely applicable, very few examples of this heuristic are rigorously shown to converge to optimality, and even fewer to do so efficiently. In this paper we present a general framework which is amenable to rigorous analysis, and expose its applicability. Its main feature is that the local optimization domains are each a group of invertible matrices, together naturally acting on tensors, and the optimization problem is minimizing the norm of an input tensor under this joint action. The solution of this optimization problem captures a basic problem in Invariant Theory, called the null-cone problem. This algebraic framework turns out to encompass natural computational problems in combinatorial optimization, algebra, analysis, quantum information theory, and geometric complexity theory. It includes and extends to high dimensions the recent advances on (2-dimensional) operator scaling. Our main result is a fully polynomial time approximation scheme for this general problem, which may be viewed as a multi-dimensional scaling algorithm. This directly leads to progress on some of the problems in the areas above, and a unified view of others. We explain how faster convergence of an algorithm for the same problem will allow resolving central open problems. Our main techniques come from Invariant Theory, and include its rich non-commutative duality theory, and new bounds on the bitsizes of coefficients of invariant polynomials. They enrich the algorithmic toolbox of this very computational field of mathematics, and are directly related to some challenges in geometric complexity theory (GCT).
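A minimal instance of the alternating-minimization pattern described above is rank-1 matrix approximation, where each block update is an easy least-squares solve. This is a generic illustration of the heuristic, not the tensor/null-cone algorithm of the paper; the function name and initialization are our own:

```python
def als_rank1(M, iters=50):
    """Alternating minimization toy: fit a rank-1 approximation M ~ u v^T by
    alternately solving the (easy) least-squares problem in u with v fixed,
    then in v with u fixed -- the block-coordinate pattern discussed above."""
    rows, cols = len(M), len(M[0])
    u = [1.0] * rows
    v = [1.0] * cols
    for _ in range(iters):
        vv = sum(x * x for x in v)
        # optimal u given v: u_i = <M_i, v> / <v, v>
        u = [sum(M[i][j] * v[j] for j in range(cols)) / vv for i in range(rows)]
        uu = sum(x * x for x in u)
        # optimal v given u: v_j = <M_:,j, u> / <u, u>
        v = [sum(M[i][j] * u[i] for i in range(rows)) / uu for j in range(cols)]
    return u, v
```

Each local step here is globally solvable, yet convergence guarantees for the joint problem are exactly the kind of question the paper's framework addresses in far greater generality.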
- Nov 23 2017 math.DS arXiv:1711.08038v1 We analyze a simple macroeconomic model in which rational inflation expectations are replaced by a boundedly rational, and genuinely sticky, response to changes in the actual inflation rate. The stickiness is introduced in a novel way using a mathematical operator that is amenable to rigorous analysis. We prove that, when exogenous noise is absent from the system, the unique equilibrium of the rational expectations model is replaced by an entire line segment of possible equilibria with the one chosen depending, in a deterministic way, upon the previous states of the system. The agents are sufficiently far-removed from the rational expectations paradigm that problems of indeterminacy do not arise. The response to exogenous noise is far more subtle than in a unique equilibrium model. After sufficiently small shocks the system will indeed revert to the same equilibrium but larger ones will move the system to a different one (at the same model parameters). The path to this new equilibrium may be very long with a highly unpredictable endpoint. At certain model parameters exogenously-triggered runaway inflation can occur. Finally, we analyze a variant model in which the same form of sticky response is introduced into the interest rate rule instead.
- T.M. Brown, J.M. Taylor, S. Cassisi, A.V. Sweigart, A. Bellini, L.R. Bedin, M. Salaris, A. Renzini, E. Dalessandro Nov 23 2017 astro-ph.SR arXiv:1711.08036v1 In the color-magnitude diagrams (CMDs) of globular clusters, when the locus of stars on the horizontal branch (HB) extends to hot temperatures, discontinuities are observed at colors corresponding to ~12,000 K and ~18,000 K. The former is the "Grundahl jump" that is associated with the onset of radiative levitation in the atmospheres of hot subdwarfs. The latter is the "Momany jump" that has remained unexplained. Using the Space Telescope Imaging Spectrograph on the Hubble Space Telescope, we have obtained ultraviolet and blue spectroscopy of six hot subdwarfs straddling the Momany jump in the massive globular cluster omega Cen. By comparison to model atmospheres and synthetic spectra, we find that the feature is due primarily to a decrease in atmospheric Fe for stars hotter than the feature, amplified by the temperature dependence of the Fe absorption at these effective temperatures.
- Nov 23 2017 math.NA arXiv:1711.08035v1 We present a configurable trajectory planning strategy on planar paths for offline definition of time-dependent C2 piecewise quintic feedrates. The most conservative formulation ensures chord tolerance, as well as prescribed bounds on the Cartesian components of velocity, acceleration and jerk. Since the less restrictive formulations of our strategy can usually still ensure all the desired bounds while simultaneously producing faster motions, the configurability feature is useful not only when reduced motion control is desired but also when full kinematic control has to be guaranteed. Our approach can be applied to any planar path with a piecewise sufficiently smooth parametric representation. When Pythagorean-hodograph spline curves are considered, the corresponding accurate and efficient CNC interpolator algorithms can be exploited.
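For a flavor of C2 quintic time profiles, here is the textbook "minimum-jerk" quintic time-scaling with zero velocity and acceleration at both endpoints; it is a standard example of the smoothness class involved, not the paper's configurable feedrate strategy, and the names are illustrative:

```python
def min_jerk(t, T=1.0):
    """Quintic (C2) time-scaling s(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5,
    tau = t/T.  It rises monotonically from 0 to 1 over [0, T] with zero
    velocity and zero acceleration at both endpoints -- the smoothness class
    used for piecewise-quintic feedrate profiles."""
    tau = t / T
    return 10 * tau**3 - 15 * tau**4 + 6 * tau**5
```

Composing a path parametrization with such a time-scaling yields a motion whose velocity and acceleration vanish smoothly at the segment boundaries, which is what makes piecewise-quintic feedrates attractive for CNC interpolation.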
- The non-Abelian tensor gauge fields take values in the extended Poincaré algebra. In order to define the invariant Lagrangian we introduce a vector variable in two alternative ways: through the transversal representation of the extended Poincaré algebra and through the path integral over the auxiliary vector field with the U(1) Abelian action. We demonstrate that this allows us to fix the unitary gauge and derive scattering amplitudes in the spinor representation.
- Nov 23 2017 physics.chem-ph arXiv:1711.08033v1The magnetic tuning of the low rotational levels in the Xsigma+(0,0,0), Api(0,0,0), and Bsigma+(0,0,0) electronic states of strontium hydroxide, SrOH, has been experimentally investigated using high resolution optical Zeeman spectroscopy of a cold molecular beam sample. The observed Zeeman shifts and splittings are successfully modeled using a traditional effective Hamiltonian approach to account for the interaction between the Api and Bsigma+ states. The determined magnetic g-factors for the Xsigma+, Api, and Bsigma+ states are compared to those predicted by perturbation theory. The dispersed fluorescence resulting from laser excitation of rotationally resolved branch features of the Bsigma+(0,0,0) <- Xsigma+(0,0,0) and Api(0,0,0) <- Xsigma+(0,0,0) transitions has been recorded and analyzed. The measured fluorescence branching ratios are compared with Franck-Condon calculations. The required bending motion wave functions are derived using a discrete variable representation (DVR) method. Implications for laser slowing and magneto-optical trapping experiments for SrOH are described.
- Nov 23 2017 q-bio.NC arXiv:1711.08032v1Continuous "bump" attractors are an established model of cortical working memory for continuous variables and can be implemented using various neuron and network models. Here, we develop a generalizable approach for the approximation of bump states of continuous attractor networks implemented in networks of both rate-based and spiking neurons. The method relies on a low-dimensional parametrization of the spatial shape of firing rates, making it possible to apply efficient numerical optimization methods. Using our theory, we can establish a mapping between network structure and attractor properties that allows the prediction of the effects of network parameters on the steady state firing rate profile and the existence of bumps, and, vice versa, the fine-tuning of a network to produce bumps of a given shape.
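The abstract does not give the paper's specific parametrization, but the core idea — fit a low-dimensional bump profile so that it is approximately a fixed point of the rate dynamics — can be sketched for a ring network of threshold-linear neurons. All parameters and the von Mises profile below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

N = 128
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
# translation-invariant connectivity: local excitation, broad inhibition
J0, J1, I_ext = -0.5, 2.0, 0.5
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) * (2 * np.pi / N)

def phi(x):
    # threshold-linear (ReLU) transfer function
    return np.maximum(x, 0.0)

def bump(params):
    # low-dimensional parametrization of the firing-rate profile:
    # a von Mises bump with amplitude a and concentration kappa
    a, kappa = params
    return a * np.exp(kappa * (np.cos(theta) - 1.0))

def residual(params):
    # squared error of the fixed-point condition r = phi(W r + I)
    r = bump(params)
    return np.sum((r - phi(W @ r + I_ext)) ** 2)

# fit the two shape parameters by direct numerical optimization
res = minimize(residual, x0=[1.0, 1.0], method="Nelder-Mead")
a_opt, kappa_opt = res.x
```

The optimizer works in the two-dimensional (amplitude, width) space rather than the full N-dimensional state space, which is the efficiency gain the abstract refers to.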
- We show that quantum curves arise in infinite families and have the structure of singular vectors of a relevant symmetry algebra. We analyze in detail the case of the hermitian one-matrix model with the underlying Virasoro algebra, and the super-eigenvalue model with the underlying super-Virasoro algebra. In the Virasoro case we relate singular vector structure of quantum curves to the topological recursion, and in the super-Virasoro case we introduce the notion of super-quantum curves. We also discuss the double quantum structure of the quantum curves and analyze specific examples of Gaussian and multi-Penner models.
- Nov 23 2017 stat.CO arXiv:1711.08030v1The global sensitivity analysis of time-dependent processes requires history-aware approaches. We develop for that purpose a variance-based method that leverages the correlation structure of the problems under study and employs surrogate models to accelerate the computations. The errors resulting from fixing unimportant uncertain parameters to their nominal values are analyzed through a priori estimates. We illustrate our approach on a harmonic oscillator example and on a nonlinear dynamic cholera model.
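For a sense of how a variance-based (Sobol-type) sensitivity index is computed in practice, here is a pick-freeze Monte Carlo sketch on a toy harmonic oscillator output at a fixed time. The model, parameter ranges, and estimator choice are illustrative, not the paper's surrogate-accelerated method:

```python
import numpy as np

rng = np.random.default_rng(0)

def oscillator(A, omega, t=1.0):
    # toy time-dependent model: displacement of a harmonic oscillator
    return A * np.cos(omega * t)

n = 200_000
# uncertain inputs: amplitude A and angular frequency omega,
# plus an independent copy of each for the pick-freeze estimator
A,  omega  = rng.uniform(0.5, 1.5, n), rng.uniform(1.0, 3.0, n)
A2, omega2 = rng.uniform(0.5, 1.5, n), rng.uniform(1.0, 3.0, n)

Y = oscillator(A, omega)
var_Y = Y.var()

# first-order Sobol indices: S_i = Cov(Y, Y_i) / Var(Y), where Y_i
# reuses input i but resamples all other inputs
S_A     = np.cov(Y, oscillator(A, omega2))[0, 1] / var_Y
S_omega = np.cov(Y, oscillator(A2, omega))[0, 1] / var_Y
```

Here the frequency dominates the output variance at t = 1; fixing the less important parameter (A) at its nominal value is exactly the kind of simplification whose error the paper bounds with a priori estimates.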
- Nov 23 2017 cs.PL arXiv:1711.08029v1We propose a new conflict-driven program synthesis technique that is capable of learning from past mistakes. Given a spurious program that violates the desired specification, our synthesis algorithm identifies the root cause of the conflict and learns new lemmas that can prevent similar mistakes in the future. Specifically, we introduce the notion of equivalence modulo conflict and show how this idea can be used to learn useful lemmas that allow the synthesizer to prune large parts of the search space. We have implemented a general-purpose CDCL-style program synthesizer called Neo and evaluate it in two different application domains, namely data wrangling in R and functional programming over lists. Our experiments demonstrate the substantial benefits of conflict-driven learning and show that Neo outperforms two state-of-the-art synthesis tools, Morpheus and Deepcoder, that target these respective domains.
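This is not Neo's actual algorithm, but the idea of equivalence modulo conflict can be illustrated with a toy enumerative synthesizer over a tiny DSL: when a candidate fails an example, every operator that produces the same wrong value on that input is pruned in one step. The DSL and specification below are invented for illustration:

```python
# toy enumerative synthesizer over a tiny DSL of unary integer ops
ops = {
    "add1":   lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}
examples = [(2, 4), (3, 9)]  # input/output specification

blocked = set()  # learned lemmas: names pruned by past conflicts

def synthesize():
    for name, f in ops.items():
        if name in blocked:
            continue  # pruned by an earlier lemma
        for x, y in examples:
            if f(x) != y:
                # conflict: block the whole class of programs that is
                # "equivalent modulo this conflict", i.e. every op
                # producing the same wrong value on the same input
                bad = f(x)
                blocked.update(n for n, g in ops.items() if g(x) == bad)
                break
        else:
            return name  # all examples satisfied
    return None
```

In this tiny DSL each conflict class happens to be small, but in a rich search space one learned lemma can eliminate exponentially many syntactically distinct yet behaviorally equivalent candidates, which is the source of Neo's speedup.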
- Nov 23 2017 cs.AI arXiv:1711.08028v1Humans possess an ability to abstractly reason about objects and their interactions, an ability not shared with state-of-the-art deep learning models. Relational networks, introduced by Santoro et al. (2017), add the capacity for relational reasoning to deep neural networks, but are limited in the complexity of the reasoning tasks they can address. We introduce recurrent relational networks which increase the suite of solvable tasks to those that require an order of magnitude more steps of relational reasoning. We use recurrent relational networks to solve Sudoku puzzles and achieve state-of-the-art results by solving 96.6% of the hardest Sudoku puzzles, where relational networks fail to solve any. We also apply our model to the bAbI textual QA dataset, solving 19/20 tasks, which is competitive with state-of-the-art sparse differentiable neural computers. The recurrent relational network is a general purpose module that can augment any neural network model with the capacity to do many-step relational reasoning.
- Nov 23 2017 cond-mat.stat-mech quant-ph arXiv:1711.08027v1In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as $1/\Delta^2$, where $\Delta$ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests there is no quantum advantage in using QAs w.r.t. quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model, where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving the door open for potential quantum speedup, even for stoquastic models. In this work, we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as $1/\Delta$, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain points at an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
- Nov 23 2017 gr-qc astro-ph.CO arXiv:1711.08026v1We analyze the evolution of a Friedmann-Robertson-Walker spacetime within the framework of $f(R)$ metric gravity using an exponential model. We show that $f(R)$ gravity may lead to a vanishing effective cosmological constant in the far future (i.e. $R\rightarrow 0$) and yet produce a transient accelerated expansion at present time with a potentially viable cosmological history. This is in contrast with several $f(R)$ models which, while viable, produce in general a non-vanishing effective cosmological constant asymptotically in time ($R\rightarrow 4\Lambda_{\rm eff}$). We also show that relativistic stars in asymptotically flat spacetimes can be supported within this framework without encountering any singularity, notably in the Ricci scalar $R$.
- Nov 23 2017 math.CO arXiv:1711.08025v1In the interest of studying formulas with reversal of high avoidability index, we find $n$-avoidance bases for formulas with reversal for $n\in\{1,2,3\}$. We demonstrate that there is a unique formula with reversal in each of these three bases of highest avoidability index $n+2$; these formulas are $xx$, $xyx\cdot y^R$, and $xyzx\cdot y^R\cdot z^R$, which belong to an infinite family of formulas with reversal that has been the subject of recent study by the authors.
- In this paper we survey recent developments in the classical theory of minimal surfaces in Euclidean spaces which have been obtained as applications of both classical and modern complex analytic methods; in particular, Oka theory, period dominating holomorphic sprays, gluing methods for holomorphic maps, and the Riemann-Hilbert boundary value problem. Emphasis is on results pertaining to the global theory of minimal surfaces, in particular, the Calabi-Yau problem, constructions of properly immersed and embedded minimal surfaces in $\mathbb{R}^n$ and in minimally convex domains of $\mathbb{R}^n$, results on the complex Gauss map, isotopies of conformal minimal immersions, and the analysis of the homotopy type of the space of all conformal minimal immersions from a given open Riemann surface.
- Precise theoretical predictions are vital for the interpretation of Standard Model measurements and facilitate conclusive searches for New Physics phenomena at the LHC. In this contribution I highlight some of the ongoing efforts in the framework of the Sherpa event generator to provide accurate and realistic simulations of Standard Model production processes. This includes the automated evaluation of NLO QCD and NLO EW corrections and their consistent consideration in the parton-shower evolution of scattering events. Particular emphasis is given to examples and applications involving the production of top quarks.
- Nov 23 2017 cond-mat.mtrl-sci arXiv:1711.08022v1Ferroelectrics with suitable band gaps have recently attracted attention as candidate solar absorbing materials for photovoltaics. The inversion symmetry breaking may promote the separation of photo-excited carriers and allow voltages higher than the band gap. However, these effects are not fully understood, in part because of a lack of suitable model systems for studying these effects in detail. Here, we report properties of ferroelectric Sn$_2$P$_2$S$_6$ and Sn$_2$P$_2$Se$_6$ using first principles calculations. Results are given for the electronic structure, carrier pocket shapes, optical absorption and transport. We find indirect band gaps of 2.20 eV and 1.55 eV, respectively, and favorable band structures for carrier transport, including both holes and electrons. Strong absorption is found above the direct gaps of 2.43 eV and 1.76 eV. Thus these compounds may serve as useful model systems for understanding photovoltaic effects in ferroelectric semiconductors.
- Nov 23 2017 physics.plasm-ph arXiv:1711.08021v1The electrostatic stability of electron-positron plasmas is investigated in the point-dipole and Z-pinch limits of dipole geometry. The kinetic dispersion relation for sub-bounce-frequency instabilities is derived and solved. For the zero-Debye-length case, the stability diagram is found to exhibit singular behavior. However, when the Debye length is non-zero, a fluid mode appears, which resolves the observed singularity, and also demonstrates that both the temperature and density gradients can drive instability. It is concluded that a finite Debye length is necessary to determine the stability boundaries in parameter space. Landau damping is investigated at scales well below the Debye length, where instability is absent.
- First-order methods have been popularly used for solving large-scale problems. However, many existing works only consider unconstrained problems or those with simple constraints. In this paper, we develop two first-order methods for constrained convex programs, for which the constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but employ different primal variable updates. The first method, at each iteration, performs a single proximal gradient step on the primal variable, and the second method is a block update version of the first one. For the first method, we establish its global iterate convergence as well as global sublinear and local linear convergence, and for the second method, we show a global sublinear convergence result in expectation. Numerical experiments are carried out on basis pursuit denoising and a convex quadratically constrained quadratic program to show the empirical performance of the proposed methods. Their numerical behaviors closely match the established theoretical results.
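The first method's structure (one gradient-type primal step per iteration, then the classical ALM multiplier update) can be sketched on a toy equality-constrained quadratic program; for a smooth objective the proximal gradient step reduces to a plain gradient step. The problem data, step size, and penalty below are illustrative:

```python
import numpy as np

# toy equality-constrained QP: minimize 0.5 * ||x||^2  s.t.  a @ x = 1
a = np.array([3.0, 4.0])
rho, step = 1.0, 0.05   # ALM penalty and primal step size
x, lam = np.zeros(2), 0.0

for _ in range(2000):
    # single gradient step on the augmented Lagrangian in x
    grad = x + (lam + rho * (a @ x - 1.0)) * a
    x = x - step * grad
    # multiplier update, exactly as in the classical ALM
    lam = lam + rho * (a @ x - 1.0)

# closed-form solution of this QP, for reference: x* = a / ||a||^2
x_star = a / (a @ a)
```

Unlike the exact ALM, the primal subproblem is never solved to optimality; a single cheap step per iteration suffices, which is the point of the paper's analysis.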
- Nov 23 2017 cs.HC arXiv:1711.08019v1Assessing and understanding intelligent agents is a difficult task for users that lack an AI background. A relatively new area, called "Explainable AI," is emerging to help address this problem, but little is known about how users would forage through information an explanation system might offer. To inform the development of Explainable AI systems, we conducted a formative study, using the lens of Information Foraging Theory, into how experienced users foraged in the domain of StarCraft to assess an agent. Our results showed that participants faced difficult foraging problems. These foraging problems caused participants to entirely miss events that were important to them, reluctantly choose to ignore actions they did not want to ignore, and bear high cognitive, navigation, and information costs to access the information they needed.
- We design new algorithms for the combinatorial pure exploration problem in the multi-armed bandit framework. In this problem, we are given K distributions and a collection of subsets $\mathcal{V} \subset 2^K$ of these distributions, and we would like to find the subset $v \in \mathcal{V}$ that has largest cumulative mean, while collecting, in a sequential fashion, as few samples from the distributions as possible. We study both the fixed budget and fixed confidence settings, and our algorithms essentially achieve state-of-the-art performance in all settings, improving on previous guarantees for structures like matchings and submatrices that have large augmenting sets. Moreover, our algorithms can be implemented efficiently whenever the decision set $\mathcal{V}$ admits linear optimization. Our analysis involves precise concentration-of-measure arguments and a new algorithm for linear programming with exponentially many constraints.
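The paper's algorithms are adaptive; as a point of reference, the problem itself can be illustrated with the naive non-adaptive baseline for the fixed-budget setting: sample every arm uniformly, then report the subset with the largest empirical cumulative mean. Arm means and the subset collection below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# K = 4 Bernoulli arms and a collection V of candidate subsets
means = np.array([0.2, 0.5, 0.7, 0.4])
V = [(0, 1), (1, 2), (2, 3), (0, 3)]

# naive fixed-budget baseline: spend the budget uniformly over arms
budget_per_arm = 5000
samples = rng.random((len(means), budget_per_arm)) < means[:, None]
emp = samples.mean(axis=1)  # empirical mean of each arm

# report the subset with the largest empirical cumulative mean
best = max(V, key=lambda v: sum(emp[i] for i in v))
```

Adaptive algorithms improve on this by concentrating samples on the arms that actually discriminate between competing subsets, which is where the paper's gains for structures with large augmenting sets come from.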
- Nov 23 2017 quant-ph arXiv:1711.08017v1This work describes the continuous-variable entanglement of the counter-propagating twin beams generated in a Mirrorless Optical Parametric Oscillator below threshold, encompassing both their quadrature and photon-number correlation. In the first case, a comparison with the single-pass co-propagating geometry outlines the huge difference of the bandwidth involved and a completely different stability of the two sources with respect to the phase-angle. In the second case, stimulated by the critical divergence of the correlation time evidenced by Corti et al., we address the issue of the temporal bandwidth of the intensity squeezing.
- Far-field speech recognition in noisy and reverberant conditions remains a challenging problem despite recent deep learning breakthroughs. This problem is commonly addressed by acquiring a speech signal from multiple microphones and performing beamforming over them. In this paper, we propose to use a recurrent neural network with long short-term memory (LSTM) architecture to adaptively estimate real-time beamforming filter coefficients to cope with non-stationary environmental noise and the dynamic nature of source and microphone positions, which results in a set of time-varying room impulse responses. The LSTM adaptive beamformer is jointly trained with a deep LSTM acoustic model to predict senone labels. Further, we use hidden units in the deep LSTM acoustic model to assist in predicting the beamforming filter coefficients. The proposed system achieves 7.97% absolute gain over baseline systems with no beamforming on the CHiME-3 real evaluation set.
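The adaptive LSTM beamformer itself requires a trained network, but the classical fixed delay-and-sum beamformer it generalizes can be sketched in a few lines: align the microphone signals by their (here, known integer) delays and average, so uncorrelated noise partially cancels. All signals and delays below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# clean source plus per-microphone delayed, noisy observations
n, delays = 4000, [0, 3, 7]
s = np.sin(2 * np.pi * 0.01 * np.arange(n + 10))
mics = [np.roll(s, d)[:n] + 0.5 * rng.standard_normal(n) for d in delays]

# delay-and-sum: undo each known delay, then average across microphones
aligned = [np.roll(x, -d) for x, d in zip(mics, delays)]
y = np.mean(aligned, axis=0)

def snr(est):
    # residual-based SNR (dB) against the clean reference signal
    ref = s[:n]
    return 10 * np.log10(np.sum(ref**2) / np.sum((est - ref)**2))
```

With M microphones and uncorrelated noise this gains roughly 10 log10(M) dB; the paper's contribution is estimating the filter coefficients adaptively when the delays and room responses are unknown and time-varying.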
- Nov 23 2017 physics.space-ph astro-ph.HE arXiv:1711.08015v1We have used new measurements of the B/C ratio in galactic cosmic rays at both low and high energies by the Voyager and AMS-2 spacecraft, respectively, along with propagation calculations using a truncated LBM, to examine the implications of these new measurements over an extended energy range from a few MeV/nuc to 1 TeV/nuc. We find that the predictions of both the truncated LBM and the Diffusive Reacceleration model in GALPROP agree with the Voyager and AMS-2 measurements of the B/C ratio to within +/- 10 percent throughout the entire energy range from 50 MeV/nuc to 1 TeV/nuc. The two propagation approaches also agree with each other to within +/- 10 percent or less throughout this energy range. In effect, a diffusion model, without significant additional acceleration, provides a match within +/- 10 percent to the combined data from Voyager 1 and AMS-2 on the B/C ratio from 50 MeV/nuc to 1 TeV/nuc. The B/C ratio below 50 MeV/nuc measured at V1 exceeds the predictions of both propagation models by as much as 3 sigma in the data measurement errors.
- Deep generative models learn a mapping from a low dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these models parameterize nonlinear manifolds in the data space. In this paper, we investigate the Riemannian geometry of these generated manifolds. First, we develop efficient algorithms for computing geodesic curves, which provide an intrinsic notion of distance between points on the manifold. Second, we develop an algorithm for parallel translation of a tangent vector along a path on the manifold. We show how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point. Our experiments on real image data show that the manifolds learned by deep generative models, while nonlinear, are surprisingly close to zero curvature. The practical implication is that linear paths in the latent space closely approximate geodesics on the generated manifold. However, further investigation into this phenomenon is warranted, to identify if there are other architectures or datasets where curvature plays a more prominent role. We believe that exploring the Riemannian geometry of deep generative models, using the tools developed in this paper, will be an important step in understanding the high-dimensional, nonlinear spaces these models learn.
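The geodesic computation described here can be sketched as discrete path-energy minimization: fix the endpoints in latent space, push a polyline through the decoder, and minimize the summed squared segment lengths in data space. The paraboloid map below stands in for a trained generator, and all values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def g(z):
    # toy "decoder": embeds 2-D latent points on a paraboloid in R^3
    z = np.atleast_2d(z)
    return np.column_stack([z[:, 0], z[:, 1], z[:, 0]**2 + z[:, 1]**2])

z_a, z_b = np.array([-1.0, 0.0]), np.array([1.0, 0.5])
T = 20  # number of segments in the discrete curve

def energy(interior):
    # discrete path energy: sum of ||g(z_{i+1}) - g(z_i)||^2 over segments
    zs = np.vstack([z_a, interior.reshape(T - 1, 2), z_b])
    d = np.diff(g(zs), axis=0)
    return np.sum(d ** 2)

# initialize with the straight line in latent space, then relax it
t = np.linspace(0, 1, T + 1)[1:-1, None]
init = ((1 - t) * z_a + t * z_b).ravel()

res = minimize(energy, init, method="L-BFGS-B")
geodesic_latent = np.vstack([z_a, res.x.reshape(T - 1, 2), z_b])
```

The gap between `energy(init)` and the optimized `res.fun` measures how far the straight latent path is from a geodesic; the paper's empirical finding is that for trained deep generative models this gap is surprisingly small.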


Zoltán Zimborás Nov 17 2017 07:59 UTC

Aram Harrow Nov 07 2017 08:52 UTC

I am not sure, but the title is great.

Noon van der Silk Nov 07 2017 05:13 UTC

I'm not against this idea; but what's the point? Clearly it's to provide some benefit to efficient implementation of particular procedures in Quil, but it'd be nice to see some detail of that, and how this might matter outside of Quil.

Gui-Lu Long Nov 06 2017 20:23 UTC

great!

Noon van der Silk Nov 01 2017 21:51 UTC

This is an awesome paper; great work! :)

Xiaodong Qi Oct 25 2017 19:55 UTC

Paper source repository is here https://github.com/CQuIC/NanofiberPaper2014

Comments can be submitted as an issue in the repository. Thanks!

Siddhartha Das Oct 06 2017 03:18 UTC

Here is a work in related direction: "Unification of Bell, Leggett-Garg and Kochen-Specker inequalities: Hybrid spatio-temporal inequalities", Europhysics Letters 104, 60006 (2013), which may be relevant to the discussions in your paper. [https://arxiv.org/abs/1308.0270]

Bin Shi Oct 05 2017 00:07 UTC

Welcome to give the comments for this paper!

Martin Henessey Oct 03 2017 01:48 UTC

I am comfortable with it. Good job.

Martin Henessey Oct 03 2017 01:40 UTC

Well done