- Apr 28 2017 cs.DC arXiv:1704.08273v1: Hardware accelerators have become a de facto standard for achieving high performance on current supercomputers, and there are indications that this trend will continue. Modern accelerators feature high-bandwidth memory next to the computing cores. For example, the Intel Knights Landing (KNL) processor is equipped with 16 GB of high-bandwidth memory (HBM) that works alongside conventional DRAM. Theoretically, HBM can provide 5x higher bandwidth than conventional DRAM. However, many factors impact the effective performance achieved by applications, including the application memory access pattern, the problem size, the threading level and the actual memory configuration. In this paper, we analyze the Intel KNL system and quantify the impact of the most important factors on application performance, using a set of applications representative of scientific and data-analytics workloads. Our results show that applications with regular memory access patterns benefit from MCDRAM, achieving up to 3x the performance obtained using only DRAM. On the contrary, applications with random memory access patterns are latency-bound and may suffer performance degradation when using only MCDRAM. For those applications, the use of additional hardware threads may help hide latency and achieve higher aggregate bandwidth when using HBM.
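A crude illustration of the kind of bandwidth measurement such a study relies on can be sketched in pure Python. This is a stand-in for STREAM-style benchmarks, not the paper's methodology; the buffer size, repetition count, and the idea of binding the process to MCDRAM or DDR4 with `numactl --membind` are illustrative assumptions:

```python
import time

def copy_bandwidth_gbs(n_bytes=256 * 1024 * 1024, reps=5):
    """Estimate effective memory-copy bandwidth by timing bytes -> bytearray copies.

    A crude stand-in for STREAM-style benchmarks used to compare DRAM vs.
    high-bandwidth memory; on a real KNL node in flat mode, the same process
    would be pinned to MCDRAM or DDR4 externally (e.g. with numactl --membind).
    """
    src = bytes(n_bytes)  # zero-filled source buffer
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        dst = bytearray(src)  # one full copy: reads n_bytes and writes n_bytes
        t1 = time.perf_counter()
        best = min(best, t1 - t0)
        del dst
    # count read + write traffic, report the best repetition in GB/s
    return 2 * n_bytes / best / 1e9
```

Random-access (pointer-chasing) workloads, by contrast, would need a latency benchmark rather than this streaming copy, which is why the two application classes in the abstract respond so differently to MCDRAM.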
- Apr 28 2017 physics.flu-dyn arXiv:1704.08272v1: Secondary shear flows transverse to the direction of a forward Poiseuille flow can be generated by using superhydrophobic channels with misaligned textured surfaces. Such a transverse shear can strongly enhance spreading of Brownian particles in the channel cross-section as compared to their normal diffusion. We provide simple scaling results to argue that one can induce an advective superdiffusion in such a flow, and that its subballistic or superballistic regimes can be attained depending on conditions. We then relate spreading at a given cross-section to a Péclet number and average slip velocities at superhydrophobic walls, and argue that maximal spreading corresponds to a crossover between subballistic and superballistic regimes. Simulations of spreading of the particle assembly in superhydrophobic channels validate our scaling analysis and allow us to deduce exact values of scaling prefactors. Our results may find application in passive microfluidic mixing.
- Apr 28 2017 cond-mat.str-el arXiv:1704.08271v1: The interplay of almost degenerate levels in quantum dots and molecular junctions with possibly different couplings to the reservoirs has led to many observable phenomena, such as the Fano effect, transmission phase slips and the SU(4) Kondo effect. Here we predict a dramatic repeated disappearance and reemergence of the SU(4) and anomalous SU(2) Kondo effects with increasing gate voltage. This phenomenon is attributed to the level occupation switching which has previously been invoked to explain the universal transmission phase slips in the conductance through a quantum dot. We use analytical arguments and numerical renormalization group calculations to explain the observations and discuss their experimental relevance and dependence on the physical parameters.
- The advancement of nanoscale electronics has been limited by energy dissipation challenges for over a decade. Such limitations could be particularly severe for two-dimensional (2D) semiconductors integrated with flexible substrates or multi-layered processors, both of which present critical thermal bottlenecks. To shed light on fundamental aspects of this problem, here we report the first direct measurement of spatially resolved temperature in functioning 2D monolayer MoS$_2$ transistors. Using Raman thermometry we simultaneously obtain temperature maps of the device channel and its substrate. This differential measurement reveals that the thermal boundary conductance (TBC) of the MoS$_2$ interface (14 $\pm$ 4 MW m$^{-2}$ K$^{-1}$) is an order of magnitude larger than previously thought, yet near the low end of known solid-solid interfaces. Our study also reveals unexpected insight into non-uniformities of the MoS$_2$ transistors (small bilayer regions), which do not cause significant self-heating, suggesting that such semiconductors are less sensitive to inhomogeneity than expected. These results provide key insights into energy dissipation of 2D semiconductors and pave the way for the future design of energy-efficient 2D electronics.
- Apr 28 2017 hep-th arXiv:1704.08269v1: We explore the behaviour of renormalized entanglement entropy in a variety of holographic models: non-conformal branes; the Witten model for QCD; UV conformal RG flows driven by explicit and spontaneous symmetry breaking; and Schrödinger geometries. Focussing on slab entangling regions, we find that the renormalized entanglement entropy captures features of the previously defined entropic c-function but also captures deep IR behaviour that is not seen by the c-function. In particular, in theories with symmetry breaking, the renormalized entanglement entropy saturates for large entangling regions to values that are controlled by the symmetry breaking parameters.
- Apr 28 2017 gr-qc arXiv:1704.08268v1: Gravitational waves encode invaluable information about the nature of the relatively unexplored extreme gravity regime, where the gravitational interaction is strong, non-linear and highly dynamical. Recent gravitational wave observations by advanced LIGO have provided the first glimpses into this regime, allowing for the extraction of new inferences on different aspects of theoretical physics. For example, these detections provide constraints on the mass of the graviton, Lorentz violation in the gravitational sector, the existence of large extra dimensions, the temporal variability of Newton's gravitational constant, and modified dispersion relations of gravitational waves. Many of these constraints, however, are not yet competitive with constraints obtained, for example, through Solar System observations or binary pulsar observations. In this paper, we study the degree to which theoretical physics inferences drawn from gravitational wave observations will strengthen with detections from future detectors. We consider future ground-based detectors, such as the LIGO-class expansions A+, Voyager, Cosmic Explorer and the Einstein Telescope, as well as various configurations of the space-based detector LISA. We find that space-based detectors will place constraints on General Relativity up to 12 orders of magnitude more stringently than current aLIGO bounds, but these space-based constraints are comparable to those obtained with the ground-based Cosmic Explorer or the Einstein Telescope. We also generically find that improvements in the instrument sensitivity band at low frequencies lead to large improvements in certain classes of constraints, while sensitivity improvements at high frequencies lead to more modest gains. These results strengthen the case for the development of future detectors, while providing additional information that could be useful in future design decisions.
- Apr 28 2017 hep-th cond-mat.quant-gas arXiv:1704.08267v1: We apply a recently developed effective string theory for vortex lines to the case of two-dimensional trapped superfluids. We do not assume a perturbative microscopic description for the superfluid, but only a gradient expansion for the long-distance hydrodynamical description and for the trapping potential. We compute the spatial dependence of the superfluid density and the orbital frequency and trajectory of an off-center vortex. Our results are fully relativistic, and in the non-relativistic limit reduce to known results based on the Gross-Pitaevskii model. In our formalism, the leading effect in the non-relativistic limit comes from a simple Feynman diagram in which the vortex exchanges a phonon with the trapping potential.
- Apr 28 2017 astro-ph.GA astro-ph.CO arXiv:1704.08266v1: We present $Suzaku$ off-center observations of two poor galaxy groups, NGC 3402 and NGC 5129, with temperatures below 1 keV. Through spectral decomposition, we measure their surface brightnesses and temperatures out to 330 and 680 times the critical density of the universe for NGC 3402 and NGC 5129, respectively. These quantities are consistent with extrapolations from existing inner measurements of the two groups. With the refined X-ray luminosities, both groups prefer $L_X-T$ relations without a break in the group regime. Furthermore, we measure the electron number densities and hydrostatic masses at these radii. We find that the electron number density profiles require three $\beta$ model components, with nearly flat slopes in the 3$^{rd}$ $\beta$ component for both groups. However, we find the effective slope in the outskirts to be $\beta_{out}$ = 0.59 and 0.49 for NGC 3402 and NGC 5129, respectively. Adding the gas mass measured from the X-ray data and stellar mass from group galaxy members, we measure baryon fractions of $f_b$ = 0.113 $\pm$ 0.013 and 0.091 $\pm$ 0.006 for NGC 3402 and NGC 5129, respectively. Combining other poor groups with well measured X-ray emission to the outskirts, we find an average baryon fraction of $f_{b,ave}$ = 0.100 $\pm$ 0.004 for X-ray bright groups with temperatures between 0.8$-$1.3 keV, extending existing constraints to lower mass systems.
- In the context of variable selection, ensemble learning has gained increasing interest due to its great potential to improve selection accuracy and to reduce false discovery rate. A novel ordering-based selective ensemble learning strategy is designed in this paper to obtain smaller but more accurate ensembles. In particular, a greedy sorting strategy is proposed to rearrange the order by which the members are included into the integration process. Through stopping the fusion process early, a smaller subensemble with higher selection accuracy can be obtained. More importantly, the sequential inclusion criterion reveals the fundamental strength-diversity trade-off among ensemble members. By taking stability selection (abbreviated as StabSel) as an example, some experiments are conducted with both simulated and real-world data to examine the performance of the novel algorithm. Experimental results demonstrate that pruned StabSel generally achieves higher selection accuracy and lower false discovery rates than StabSel and several other benchmark methods.
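The ordering-based pruning idea described above can be sketched as a toy greedy loop. This is not the paper's algorithm: the mean-score aggregation rule, the F1-based inclusion criterion, and the use of known ground truth (possible only for simulated data) are simplifying assumptions for illustration:

```python
def greedy_prune(member_scores, truth, threshold=0.5):
    """Toy sketch of ordering-based selective ensemble learning for variable selection.

    member_scores: list of dicts {variable: score in [0, 1]}, one per ensemble member.
    truth: set of truly relevant variables (known here because the data are
    assumed simulated; the paper's sequential inclusion criterion is more elaborate).
    Members are greedily appended in the order that most improves the F1 score of
    the aggregated selection; fusion stops early once no remaining member helps,
    yielding a smaller subensemble.
    """
    def f1(selected):
        if not selected or not truth:
            return 0.0
        tp = len(selected & truth)
        p, r = tp / len(selected), tp / len(truth)
        return 2 * p * r / (p + r) if p + r else 0.0

    def aggregate(members):
        # average each variable's score over the subensemble, then threshold
        variables = {v for m in members for v in m}
        return {v for v in variables
                if sum(m.get(v, 0.0) for m in members) / len(members) >= threshold}

    chosen, remaining, best = [], list(member_scores), 0.0
    while remaining:
        gain, idx = max((f1(aggregate(chosen + [m])), i)
                        for i, m in enumerate(remaining))
        if gain <= best:
            break  # early stopping: the strength-diversity trade-off has peaked
        best = gain
        chosen.append(remaining.pop(idx))
    return chosen, best
```

Stopping the fusion early is what produces the "smaller but more accurate" subensembles the abstract refers to: once the best candidate no longer raises the aggregate score, every further member would only dilute it.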
- Patricio Sanhueza, James M. Jackson, Qizhou Zhang, Andres E. Guzman, Xing Lu, Ian W. Stephens, Ke Wang, Ken'ichi Tatematsu. Apr 28 2017 astro-ph.GA arXiv:1704.08264v1: The Infrared Dark Cloud (IRDC) G028.23-00.19 hosts a massive (1,500 Msun), cold (12 K), and 3.6-70 um IR dark clump (MM1) that has the potential to form high-mass stars. We observed this prestellar clump candidate with the SMA (~3.5" resolution) and JVLA (~2.1" resolution) in order to characterize the early stages of high-mass star formation and to constrain theoretical models. Dust emission at 1.3 mm wavelength reveals 5 cores with masses <15 Msun. None of the cores currently have the mass reservoir to form a high-mass star in the prestellar phase. If the MM1 clump will ultimately form high-mass stars, its embedded cores must gather a significant amount of additional mass over time. No molecular outflows are detected in the CO (2-1) and SiO (5-4) transitions, suggesting that the SMA cores are starless. By using the NH3 (1,1) line, the velocity dispersion of the gas is determined to be transonic or mildly supersonic (DeltaV_nt/DeltaV_th ~1.1-1.8). The cores are not highly supersonic as some theories of high-mass star formation predict. The embedded cores are 4 to 7 times more massive than the clump thermal Jeans mass and the most massive core (SMA1) is 9 times less massive than the clump turbulent Jeans mass. These values indicate that neither thermal pressure nor turbulent pressure dominates the fragmentation of MM1. The low virial parameters of the cores (0.1-0.5) suggest that they are not in virial equilibrium, unless strong magnetic fields of ~1-2 mG are present. We discuss high-mass star formation scenarios in a context based on IRDC G028.23-00.19, a study case believed to represent the initial fragmentation of molecular clouds that will form high-mass stars.
- We analyze correlations between pairs of particle detectors quadratically coupled to a real scalar field. We find that, while a single quadratically coupled detector presents no divergences, when one considers pairs of detectors there emerge unanticipated persistent divergences (not regularizable via smooth switching or smearing) in the entanglement they acquire from the field. We have characterized such divergences, discussed whether a suitable regularization can allow for fair comparison of the entanglement harvesting ability of the quadratic and the linear couplings, and finally we have found a UV-safe quantifier of harvested correlations. Our results are relevant to future studies of the entanglement structure of the fermionic vacuum.
- Daniele Bertolini, Daniel Kolodrubetz, Duff Neill, Piotr Pietrulewicz, Iain W. Stewart, Frank J. Tackmann, Wouter J. Waalewijn. Apr 28 2017 hep-ph arXiv:1704.08262v1: We introduce a method to compute one-loop soft functions for exclusive $N$-jet processes at hadron colliders, allowing for different definitions of the algorithm that determines the jet regions and of the measurements in those regions. In particular, we generalize the $N$-jettiness hemisphere decomposition of ref.~\cite{Jouttenus:2011wh} in a manner that separates the dependence on the jet boundary from the observables measured inside the jet and beam regions. Results are given for several factorizable jet definitions, including anti-$k_T$, XCone, and other geometric partitionings. We calculate explicitly the soft functions for angularity measurements, including jet mass and jet broadening, in $pp \to L + 1$ jet and explore the differences for various jet vetoes and algorithms. This includes a consistent treatment of rapidity divergences when applicable. We also compute analytic results for these soft functions in an expansion for a small jet radius $R$. We find that the small-$R$ results, including corrections up to $\mathcal{O}(R^2)$, accurately capture the full behavior over a large range of $R$.
- We characterize the contribution from accreted material to the galactic discs of the Auriga Project, a set of high resolution magnetohydrodynamic cosmological simulations of late-type galaxies performed with the moving-mesh code AREPO. Our goal is to explore whether a significant accreted (or ex-situ) stellar component in the Milky Way disc could be hidden within the near-circular orbit population, which is strongly dominated by stars born in-situ. One third of our models shows a significant ex-situ disc but this fraction would be larger if constraints on orbital circularity were relaxed. Most of the ex-situ material ($\gtrsim 50\%$) comes from single massive satellites ($> 6 \times 10^{10}~M_{\odot}$). These satellites are accreted with a wide range of infall times and inclination angles (up to $85^{\circ}$). Ex-situ discs are thicker, older and more metal-poor than their in-situ counterparts. They show a flat median age profile, which differs from the negative gradient observed in the in-situ component. As a result, the likelihood of identifying an ex-situ disc in samples of old stars on near-circular orbits increases towards the outskirts of the disc. We show three examples that, in addition to ex-situ discs, have a strongly rotating dark matter component. Interestingly, two of these ex-situ stellar discs show an orbital circularity distribution that is consistent with that of the in-situ disc. Thus, they would not be detected in typical kinematic studies.
- Apr 28 2017 hep-ph arXiv:1704.08259v1: We consider interference between the Higgs signal and QCD background in $gg\rightarrow h \rightarrow \gamma\gamma$ and its effect on the on-shell Higgs rate. The existence of sizable strong phases leads to destructive interference of about 2% of the on-shell cross section in the Standard Model. This effect can be enhanced by physics beyond the Standard Model. In particular, since it scales differently from the usual rates, the presence of interference allows indirect limits to be placed on the Higgs width in a novel way, using on-shell rate measurements. Our study motivates further QCD calculations to reduce uncertainties. We discuss potential width-sensitive observables, using both total and differential rates, and find that the HL-LHC can potentially indirectly probe widths of order tens of MeV.
- Apr 28 2017 astro-ph.SR astro-ph.HE arXiv:1704.08260v1: Tight binaries of helium white dwarfs (He WDs) orbiting millisecond pulsars (MSPs) will eventually "merge" due to gravitational damping of the orbit. The outcome has been predicted to be the production of long-lived ultra-compact X-ray binaries (UCXBs), in which the WD transfers material to the accreting neutron star (NS). Here we present complete numerical computations, for the first time, of such stable mass transfer from a He WD to a NS. We have calculated a number of complete binary stellar evolution tracks, starting from pre-LMXB systems, and evolved these to detached MSP+WD systems and further on to UCXBs. The minimum orbital period is found to be as short as 5.6 minutes. We followed the subsequent widening of the systems until the donor stars become planets with a mass of ~0.005 Msun after roughly a Hubble time. Our models are able to explain the properties of observed UCXBs with high helium abundances and we can identify these sources on the ascending or descending branch in a diagram displaying mass-transfer rate vs. orbital period.
- We study the thermalization, injection, and acceleration of ions with different mass/charge ratios, $A/Z$, in non-relativistic collisionless shocks via hybrid (kinetic ions-fluid electrons) simulations. In general, ions thermalize to a post-shock temperature proportional to $A$. When diffusive shock acceleration is efficient, ions develop a non-thermal tail whose extent scales with $Z$ and whose normalization is enhanced as $(A/Z)^2$, so that incompletely-ionized heavy ions are preferentially accelerated. We discuss how these findings can explain observed heavy-ion enhancements in Galactic cosmic rays.
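The quoted scalings (post-shock temperature proportional to $A$, non-thermal tail normalization enhanced as $(A/Z)^2$) can be made concrete with a small worked example; the species list below and the proton-normalized units are illustrative assumptions, not values from the paper:

```python
def shock_scalings(species, t_p=1.0, n_p=1.0):
    """Relative post-shock temperature (proportional to mass number A) and
    non-thermal tail normalization (proportional to (A/Z)^2) for ions,
    normalized to protons (t_p, n_p). These follow the scalings quoted in
    the abstract; absolute values are illustrative only.
    """
    return {
        name: {"T": t_p * a, "tail_norm": n_p * (a / z) ** 2}
        for name, (a, z) in species.items()
    }

# fully ionized helium (A=4, Z=2) vs. protons: 4x hotter, tail enhanced 4x;
# an incompletely ionized iron ion (A=56, Z=10) gets a (5.6)^2 ~ 31x boost
out = shock_scalings({"p": (1, 1), "He2+": (4, 2), "Fe10+": (56, 10)})
```

The incompletely ionized heavy ion wins by far in tail normalization, which is the sense in which such ions are "preferentially accelerated" in the abstract.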
- We present an in-depth study of the non-equilibrium statistics of the irreversible work produced during sudden quenches in proximity of the structural linear-zigzag transition of ion Coulomb crystals in 1+1 dimensions. By employing both an analytical approach based on a harmonic expansion and numerical simulations, we show the divergence of the average irreversible work in proximity of the transition. We show that the non-analytic behaviour of the work fluctuations can be characterized in terms of the critical exponents of the quantum Ising chain. Thanks to the technological advancements in trapped ion experiments, our results can be readily verified.
- Freeke van de Voort, Eliot Quataert, Claude-André Faucher-Giguère, Dušan Kereš, Philip F. Hopkins, T. K. Chan, Robert Feldmann, Zachary Hafen. Apr 28 2017 astro-ph.GA astro-ph.CO arXiv:1704.08254v1: We quantify the gas-phase abundance of deuterium in cosmological zoom-in simulations from the Feedback In Realistic Environments project. The cosmic deuterium fraction decreases with time, because mass lost from stars is deuterium-free. At low metallicity, our simulations confirm that the deuterium abundance is very close to the primordial value. The deuterium abundance decreases towards higher metallicity, with very small scatter between the deuterium and oxygen abundance. We compare our simulations to existing high-redshift observations in order to determine a primordial deuterium fraction of (2.549 +/- 0.033) x 10^-5 and stress that future observations at higher metallicity can also be used to constrain this value. At fixed metallicity, the deuterium fraction decreases slightly with decreasing redshift, due to the increased importance of mass loss from intermediate-mass stars. We find that the evolution of the average deuterium fraction in a galaxy correlates with its star formation history. Our simulations are consistent with observations of the Milky Way's interstellar medium: the deuterium fraction at the solar circle is 83-92% of the primordial deuterium fraction. We use our simulations to make predictions for future observations. In particular, the deuterium abundance is lower at smaller galactocentric radii and in higher mass galaxies, showing that stellar mass loss is more important for fuelling star formation in these regimes (and can even dominate). Gas accreting onto galaxies has a deuterium fraction above that of the galaxies' interstellar medium, but below the primordial fraction, because it is a mix of gas accreting from the intergalactic medium and gas previously ejected or stripped from galaxies.
- Apr 28 2017 astro-ph.HE astro-ph.GA arXiv:1704.08255v1: The nature of ultraluminous X-ray sources (ULXs) -- off-nuclear extra-galactic sources with luminosity, assumed isotropic, $\gtrsim 10^{39}$ erg s$^{-1}$ -- is still debated. One possibility is that ULXs are stellar black holes accreting beyond the Eddington limit. This view has been recently reinforced by the discovery of ultrafast outflows at $\sim 0.1$-$0.2c$ in the high resolution spectra of a handful of ULXs, as predicted by models of supercritical accretion discs. Under the assumption that ULXs are powered by super-Eddington accretion onto black holes, we use the properties of the observed outflows to self-consistently constrain their masses and accretion rates. We find masses $\lesssim 100$ M$_{\odot}$ and typical accretion rates $\sim 10^{-5}$ M$_{\odot}$ yr$^{-1}$, i.e. $\approx 10$ times larger than the Eddington limit calculated with a radiative efficiency of 0.1. However, the emitted luminosity is only $\approx 10\%$ beyond the Eddington luminosity, because most of the energy released in the inner part of the accretion disc is used to accelerate the wind, which implies radiative efficiency $\sim 0.01$. Our results are consistent with a formation model where ULXs are black hole remnants of massive stars evolved in low-metallicity environments.
- Apr 28 2017 hep-ph astro-ph.CO arXiv:1704.08256v1: We propose a new thermal freeze-out mechanism for ultra-heavy dark matter. Dark matter coannihilates with a lighter unstable species, leading to an annihilation rate that is exponentially enhanced relative to standard WIMPs. This scenario destabilizes any potential dark matter candidate. In order to remain consistent with astrophysical observations, our proposal necessitates very long-lived states, motivating striking phenomenology associated with the late decays of ultra-heavy dark matter, potentially as massive as the scale of grand unified theories, $M_\text{GUT} \sim 10^{16}$ GeV.
- Apr 28 2017 cond-mat.stat-mech cond-mat.soft arXiv:1704.08257v1: Liquids relax extremely slowly on approaching the glass state. One explanation is that an entropy crisis, due to the rarefaction of available states, makes it increasingly arduous to reach equilibrium in that regime. Validating this scenario is challenging, because experiments offer limited resolution, while numerical studies lag more than eight orders of magnitude behind experimentally-relevant timescales. In this work we not only close the colossal gap between experiments and simulations but manage to create in-silico configurations that have no experimental analog yet. Deploying a range of computational tools, we obtain four estimates of their configurational entropy. These measurements consistently confirm that the steep entropy decrease observed in experiments is found also in simulations even beyond the experimental glass transition. Our numerical results thus open a new observational window into the physics of glasses and reinforce the relevance of an entropy crisis for understanding their formation.
- Apr 28 2017 astro-ph.HE hep-ph arXiv:1704.08258v1: A possible hint of dark matter annihilation has been found in Cuoco, Korsmeier and Krämer (2017) from an analysis of recent cosmic-ray antiproton data from AMS-02, taking into account cosmic-ray propagation uncertainties by fitting the dark matter and propagation parameters at the same time. Here, we extend this analysis to a wider class of annihilation channels. We find consistent hints of a dark matter signal with an annihilation cross-section close to the thermal value and with masses in the range between 40 and 130 GeV depending on the annihilation channel. Furthermore, we investigate to what extent the possible signal is compatible with the Galactic center gamma-ray excess and recent observations of dwarf satellite galaxies by performing a joint global fit including uncertainties in the dark matter density profile. As an example, we interpret our results in the framework of the Higgs portal model.
- Machine learning techniques are increasingly being applied to data analyses at the Large Hadron Collider, especially for discrimination of jets with different originating particles. Previous studies of the power of machine learning applied to jet physics have typically employed image recognition, natural language processing, or other algorithms that have been extensively developed in computer science. While these studies have demonstrated impressive discrimination power, often exceeding that of widely-used observables, they have been formulated in a non-constructive manner and it is not clear what additional information the machines are learning. In this paper, we study machine learning for jet physics constructively, mapping all of the information in a jet onto sets of observables that completely and minimally span N-body phase space. For concreteness, we study the application of machine learning to discrimination of boosted, hadronic decays of Z bosons from jets initiated by QCD processes. Our results demonstrate that the information in a jet that is useful for discriminating QCD jets from Z bosons is saturated by only considering observables that are sensitive to 4-body (8 dimensional) phase space.
- Apr 28 2017 hep-th cond-mat.str-el arXiv:1704.08250v1: We compute genus two partition functions in two dimensional conformal field theories at large central charge, focusing on surfaces that give the third Renyi entropy of two intervals. We compute this for generalized free theories and for symmetric orbifolds, and compare it to the result in pure gravity. We find a new phase transition if the theory contains a light operator of dimension $\Delta\leq0.19$. This means in particular that unlike the second Renyi entropy, the third one is no longer universal.
- We study the geometry of elliptic fibrations satisfying the conditions of Step 8 of Tate's algorithm. We call such geometries F$_4$-models, as the dual graph of their special fiber is the twisted affine Dynkin diagram $\widetilde{\text{F}}_4^t$. These geometries are used in string theory to model gauge theories with the exceptional Lie group F$_4$ on a smooth divisor $S$ of the base. Starting with a singular Weierstrass model of an F$_4$-model, we present a crepant resolution of its singularities. We study the fiber structure of this smooth elliptic fibration and identify the fibral divisors up to isomorphism as schemes over $S$. These are $\mathbb{P}^1$-bundles over $S$ or double covers of $\mathbb{P}^1$-bundles over $S$. We compute basic topological invariants such as the double and triple intersection numbers of the fibral divisors and the Euler characteristic of the F$_4$-model. In the case of Calabi-Yau threefolds, we compute the linear form induced by the second Chern class and the Hodge numbers. We also explore the meaning of these geometries for the physics of gauge theories in five and six-dimensional minimal supergravity theories with eight supercharges. We also introduce the notion of "frozen representations" and explore the role of the Stein factorization in the study of fibral divisors of elliptic fibrations.
- Apr 28 2017 physics.app-ph arXiv:1704.08630v1: There has been sustained interest in bifacial solar cell technology since the 1980s, with prospects of a 30-50% increase in the output power from a stand-alone single panel. Moreover, a vertical bifacial panel reduces dust accumulation and provides two output peaks during the day, with the second peak aligned to the peak electricity demand. Recent commercialization and anticipated growth of the bifacial panel market have encouraged a closer scrutiny of the integrated power output and economic viability of bifacial solar farms, where mutual shading will erode some of the anticipated energy gain associated with an isolated, single panel. Towards that goal, in this paper we focus on geography-specific optimizations of ground-mounted vertical bifacial solar farms for the entire world. For local irradiance, we combine the measured meteorological data with the clear-sky model. In addition, we consider the detailed effects of direct, diffuse, and albedo light. We assume the panel is configured into sub-strings with bypass-diodes. Based on calculated light collection and panel output, we analyze the optimum farm design for maximum yearly output at any given location in the world. Our results predict that, regardless of the geographical location, a vertical bifacial farm will yield 10-20% more energy than a traditional monofacial farm for a practical row-spacing of 2 m. With the prospect of an additional 5-20% energy gain from reduced soiling and tilt optimization, bifacial solar farms do offer a viable technology option for large scale solar energy generation.
- Apr 28 2017 q-bio.PE arXiv:1704.08590v1: In recent health policy papers, the social sciences insist that given the increased risks of heart diseases, diabetes, and other chronic conditions women face worldwide, biomedical research should step away from reducing female health to female reproductive health. Arguably the global women's health agenda goes much beyond reproductive concerns, but we contend that it is mistaken to conceptualize women's health as separate from women's evolved reproductive system. This paper elaborates on the evolutionary question 'why do women menstruate?' to review the support for the hypothesis that cyclical immunity is central to the modulation of health and diseases in female bodies. We conclude by outlining the possible implications of conceptualizing female health as cyclical for future research and women's lives.
- A central goal in cancer genomics is to identify the somatic alterations that underpin tumor initiation and progression. This task is challenging, as the mutational profiles of cancer genomes exhibit vast heterogeneity, with many alterations observed within each individual, few shared somatically mutated genes across individuals, and important roles in cancer for both frequently and infrequently mutated genes. While commonly mutated cancer genes are readily identifiable, those that are rarely mutated across samples are difficult to distinguish from the large numbers of other infrequently mutated genes. Here, we introduce a method that considers per-individual mutational profiles within the context of protein-protein interaction networks in order to identify small connected subnetworks of genes that, while not individually frequently mutated, comprise pathways that are perturbed across (i.e., "cover") a large fraction of the individuals. We devise a simple yet intuitive objective function that balances identifying a small subset of genes with covering a large fraction of individuals. We show how to solve this problem optimally using integer linear programming and also give a fast heuristic algorithm that works well in practice. We perform a large-scale evaluation of the resulting method, nCOP, on 6,038 TCGA tumor samples across 24 different cancer types. We demonstrate that nCOP is more effective at identifying cancer genes than both methods that do not utilize any network information and state-of-the-art network-based methods that aggregate mutational information across individuals. Overall, our work demonstrates the power of combining per-individual mutational information with interaction networks in order to uncover genes functionally relevant in cancers, in particular those that are less frequently mutated.
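As a rough illustration of the cover-style objective described above (a small connected gene set whose mutations cover many patients), here is a toy greedy sketch; the genes, network, mutation profiles, and trade-off parameter `alpha` are all invented, and nCOP's actual objective and ILP formulation differ in detail:

```python
# Toy greedy sketch of a cover-style objective: grow a connected gene set,
# adding a neighboring gene only while it covers enough new patients.
# All genes, edges, and mutation profiles below are invented.

edges = {("A", "B"), ("B", "C"), ("B", "D"), ("D", "E")}
neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

# patients (1..7) carrying a mutation in each gene
mutated_in = {"A": {1, 2}, "B": {3}, "C": {4, 5, 6}, "D": {2, 6}, "E": {7}}
patients = set().union(*mutated_in.values())
alpha = 0.5  # per-gene cost balanced against fractional coverage gain

def greedy_cover(seed):
    """Grow a connected subnetwork from `seed`, maximizing patient coverage."""
    chosen, covered = {seed}, set(mutated_in[seed])
    while True:
        frontier = set().union(*(neighbors[g] for g in chosen)) - chosen
        best = max(frontier, key=lambda g: len(mutated_in[g] - covered), default=None)
        if best is None:
            break
        # objective gain: new coverage minus the cost of adding one more gene
        gain = (len(mutated_in[best] - covered) - alpha) / len(patients)
        if gain <= 0:
            break
        chosen.add(best)
        covered |= mutated_in[best]
    return chosen, covered

genes, covered = greedy_cover("B")
print(sorted(genes), f"{len(covered)}/{len(patients)} patients covered")
```

The greedy stand-in captures the trade-off in the objective (gene-set size vs. patient coverage) but, unlike the ILP, carries no optimality guarantee.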
- Apr 28 2017 hep-ph arXiv:1704.08589v1A Reply to "Comment on `Finding the $0^{--}$ Glueball' " [arXiv:1702.06634] and comment on `Is the exotic $0^{--}$ glueball a pure gluon state?' [arXiv:1611.08698]
- The possible existence of deeply bound kaonic nuclear systems was proposed by Akaishi and Yamazaki (2002) more than a decade ago, based on an ansatz that the Lambda* = Lambda(1405) mass is 1405 MeV/c^2, where the Lambda* is a Kbar-N quasi-bound state decaying to Sigma-pi. Recently, a large number of data on photo-production of the Lambda(1405) in the gamma p to K+ pi0+- Sigma0-+ reaction were provided by the CLAS collaboration (Moriya et al. 2013), and the double-pole structure of the Lambda* has been intensively discussed in chiral dynamics analyses (Roca 2013; Mai 2015), whereas we show that a Lambda* mass of 1405 MeV/c^2 is deduced from the same CLAS data.
- T. Zhang, D. Guerin, F. Alibart, D. Vuillaume, K. Lmimouni, S. Lenfant, A. Yassin, M. Ocafrain, P. Blanchard, J. RoncaliApr 28 2017 physics.app-ph cond-mat.mes-hall arXiv:1704.08629v1We report on hybrid memristive devices made of a network of gold nanoparticles (10 nm diameter) functionalized by tailored 3,4(ethylenedioxy)thiophene (TEDOT) molecules, deposited between two planar electrodes with nanometer and micrometer gaps (100 nm to 10 um apart), and electropolymerized in situ to form a monolayer film of conjugated polymer with embedded gold nanoparticles (AuNPs). Electrical properties of these films exhibit two interesting behaviors: (i) a NDR (negative differential resistance) behavior with a peak/valley ratio up to 17, and (ii) a memory behavior with an ON/OFF current ratio of about 1E3 to 1E4. A careful study of the switching dynamics and programming voltage window is conducted demonstrating a non-volatile memory. The data retention of the ON and OFF states is stable (tested up to 24h), well controlled by the voltage and preserved when repeating the switching cycles (800 in this study). We demonstrate reconfigurable Boolean functions in multiterminal connected NP molecule devices.
- Apr 28 2017 cond-mat.mtrl-sci arXiv:1704.08682v1Progress in the development of coupled atomistic-continuum methods for simulations of critical dynamic material behavior has been hampered by a spurious wave reflection problem at the atomistic-continuum interface. This problem is mainly caused by the difference in material descriptions between the atomistic and continuum models, which results in a mismatch in phonon dispersion relations. In this work, we introduce a new method based on atomistic dynamics of lattice coupled with a concurrent atomistic-continuum method to enable a full phonon representation in the continuum description. This then permits the passage of short-wavelength, high-frequency phonon waves from the atomistic to continuum regions. The benchmark examples presented in this work demonstrate that the new scheme enables the passage of all allowable phonons through the atomistic-continuum interface; it also preserves the wave coherency and energy conservation after phonons transport across multiple atomistic-continuum interfaces. This work is the first step towards developing a concurrent atomistic-continuum simulation tool for non-equilibrium phonon-mediated thermal transport in materials with microstructural complexity.
- The analysis of observed time series from nonlinear systems is usually done by making a time-delay reconstruction to unfold the dynamics on a multi-dimensional state space. An important aspect of the analysis is the choice of the correct embedding dimension. The conventional procedure for this is either the method of false nearest neighbors or the saturation of some invariant measure, such as the correlation dimension. Here we examine this issue from a complex network perspective and propose a recurrence-network-based measure to determine the acceptable minimum embedding dimension to be used for such analysis. The measure proposed here is based on the well-known Kullback-Leibler divergence commonly used in information theory. We show that the measure is simple and direct to compute and gives accurate results for short time series. To show its significance in the analysis of practical data, we present the analysis of two EEG signals as examples.
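A minimal numerical sketch of the idea just described: build a time-delay embedding, form an eps-recurrence network, and compare degree distributions at successive dimensions with a KL divergence. The toy signal, threshold `eps`, and histogram binning are arbitrary choices for illustration, not the paper's:

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Time-delay reconstruction: row t is (x[t], x[t+tau], ..., x[t+(dim-1)tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_degrees(points, eps):
    """Degree sequence of the eps-recurrence network on the embedded points."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adj = (d < eps).astype(int) - np.eye(len(points), dtype=int)
    return adj.sum(axis=1)

def kl_divergence(p, q, smooth=1e-12):
    """Kullback-Leibler divergence between two (unnormalized) histograms."""
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + smooth) / (q + smooth))))

# toy signal: a noisy sine wave (illustrative only)
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 20.0, 200)) + 0.05 * rng.standard_normal(200)

# a small divergence between dimensions m and m+1 suggests the attractor
# is already unfolded at dimension m
for m in (1, 2, 3):
    deg_m = recurrence_degrees(delay_embed(x, m), eps=0.3)
    deg_n = recurrence_degrees(delay_embed(x, m + 1), eps=0.3)
    bins = np.histogram_bin_edges(np.concatenate([deg_m, deg_n]), bins=10)
    h_m, _ = np.histogram(deg_m, bins=bins, density=True)
    h_n, _ = np.histogram(deg_n, bins=bins, density=True)
    print(m, kl_divergence(h_m, h_n))
```

The recurrence-network property compared across dimensions (here, the degree distribution) is one plausible choice; the paper's specific network measure may differ.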
- Apr 28 2017 q-bio.PE cond-mat.stat-mech arXiv:1704.08583v1A new statistical physics model is introduced for describing the interaction of bacteria with anti-microbial drugs (AMDs) which we show can reproduce qualitative features of the emergence of single and double anti-microbial resistance (AMR) through natural selection. The model portrays a lattice inhabited by agents, the latter modelled by simple Ising perceptrons. Model parameters and outputs are based on actual biological and pharmacological quantities, opening the possibility of comparing our results to controlled in vitro experiments. The model is used to compare different protocols for fighting resistance. Memory effects described in the literature are observed for single and double-drug treatments. The simulations indicate an advantage in mixing drugs among the population compared to other protocols. We use the results to propose a new protocol, which we call mixed cycling, that outperforms all other protocols under the conditions represented by our model.
- Apr 28 2017 physics.chem-ph cond-mat.stat-mech arXiv:1704.08575v1The calculation of caloric properties such as heat capacity, Joule-Thomson coefficients and the speed of sound by classical force-field-based molecular simulation methodology has received scant attention in the literature, particularly for systems composed of complex molecules whose force fields (FFs) are characterized by a combination of intramolecular and intermolecular terms (referred to herein as "flexible FFs"). The calculation of a thermodynamic property for a system whose molecules are described by such a FF involves the calculation of the residual property prior to its addition to the corresponding ideal-gas (IG) property, the latter of which is separately calculated, either using thermochemical compilations or nowadays accurate quantum mechanical calculations. Although the simulation of a volumetric residual property proceeds by simply replacing the intermolecular FF in the rigid molecule case by the total (intramolecular plus intermolecular) FF, this is not the case for a caloric property. We discuss the methodology required in performing such calculations, and focus on the example of the molar heat capacity at constant pressure, $c_P$, one of the most important caloric properties. We also consider three approximations for the calculation procedure, and illustrate their consequences for the examples of the relatively simple molecule 2-propanol, ${\rm CH_3CH(OH)CH_3}$, and for monoethanolamine, ${\rm HO(CH_2)_2NH_2}$, an important fluid used in carbon capture.
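In symbols, the decomposition the authors describe is the standard split of the heat capacity into a separately computed ideal-gas part and a simulated residual part; the notation below is generic (not necessarily the paper's), and the fluctuation relation quoted is the textbook NPT expression:

```latex
% molar isobaric heat capacity: ideal-gas part (from thermochemical tables
% or quantum chemistry) plus a residual part (from molecular simulation)
c_P(T,P) \,=\, c_P^{\mathrm{ig}}(T) \,+\, c_P^{\mathrm{res}}(T,P)

% standard NPT fluctuation relation for the total heat capacity,
% with instantaneous enthalpy H = U + PV
C_P \,=\, \frac{\langle H^2\rangle - \langle H\rangle^2}{k_B T^2}
```

The subtlety the abstract points to is that for flexible force fields the simulated fluctuations must be combined with the ideal-gas part without double-counting intramolecular contributions, which is what the approximations discussed in the paper address.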
- Using random matrix ensembles mimicking weight matrices from deep and recurrent neural networks, we investigate how increasing connectivity leads to higher accuracy in learning, with a related measure on eigenvalue spectra. For this purpose, we quantify spectral ergodicity based on the Thirumalai-Mountain (TM) metric and the Kullback-Leibler (KL) divergence. As a case study, circular random matrix ensembles of different sizes, i.e., the circular unitary ensemble (CUE), the circular orthogonal ensemble (COE), and the circular symplectic ensemble (CSE), are generated. Eigenvalue spectra are computed, along with the approach to spectral ergodicity with increasing connectivity size. As a result, it is argued that the success of deep learning architectures can be conceptually attributed to spectral ergodicity, as this measure prominently decreases with increasing connectivity in surrogate weight matrices.
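As a hedged sketch of this setup (not the paper's code), one can sample CUE matrices via the QR trick and track a TM-style deviation of each matrix's spectral density from the ensemble average; the restriction to CUE, the ensemble sizes, and the phase-histogram binning are simplifications made here:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    """CUE sample: QR-decompose a complex Ginibre matrix and fix the phases."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def phase_density(mats, bins=20):
    """Eigenvalue-phase histogram, pooled over a list of matrices."""
    phases = np.concatenate([np.angle(np.linalg.eigvals(m)) for m in mats])
    hist, _ = np.histogram(phases, bins=bins, range=(-np.pi, np.pi), density=True)
    return hist

def tm_deviation(mats, bins=20):
    """Thirumalai-Mountain-style measure: mean squared deviation of each
    matrix's spectral density from the ensemble average (0 = fully ergodic)."""
    ensemble = phase_density(mats, bins)
    return float(np.mean([(phase_density([m], bins) - ensemble) ** 2 for m in mats]))

# larger matrices should sit closer to the ergodic (flat) CUE phase density
small = tm_deviation([haar_unitary(8) for _ in range(20)])
large = tm_deviation([haar_unitary(64) for _ in range(20)])
print(small, large)
```

The decrease of this deviation with matrix size is the "approach to spectral ergodicity" the abstract refers to; the paper's exact TM/KL construction may be normalized differently.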
- Apr 28 2017 physics.hist-ph physics.pop-ph arXiv:1704.08309v1 Around the year 2000, the centenary of Planck's thermal radiation formula awakened interest in the origins of quantum theory, traditionally traced back to Planck's lecture of 14 December 1900 at the Berlin Academy of Sciences. Many more accurate historical reconstructions, conducted under the stimulus of that anniversary, instead placed the birth date of quantum theory in March 1905, when Einstein advanced his light-quantum hypothesis. Both interpretations are still controversial, but science historians agree on one point: the emergence of quantum theory from a presumed "crisis" of classical physics is a myth with little adherence to historical truth. This article, written in Italian, was originally presented in connection with the celebration of the World Year of Physics 2005 with the aim of bringing these scholarly theses, already well known to specialists, to a wider audience.
- Apr 28 2017 cs.MS arXiv:1704.08579v1 This paper describes fast sorting techniques using the recent AVX-512 instruction set. Our implementations benefit from the latest possibilities offered by AVX-512 to vectorize a two-part hybrid algorithm: we sort the small arrays using a branch-free Bitonic variant, and we provide a vectorized partitioning kernel, the main component of the well-known Quicksort. Our algorithm sorts in place and is straightforward to implement thanks to the new instructions. We also show how an existing algorithm can be adapted and implemented with AVX-512. We report a performance study on the Intel KNL, where our approach is faster than the GNU C++ sort algorithm for any size, in both integer and double floating-point arithmetic, by a factor of 4 on average.
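The two-part hybrid can be sketched in scalar form. The following Python analogue is illustrative only: the size threshold, the median-of-three pivot rule, and the insertion-sort small-array kernel (standing in for the paper's vectorized branch-free Bitonic variant) are choices made here, not the paper's:

```python
# Scalar analogue of the two-part hybrid (no AVX-512): quicksort-style
# partitioning, switching to a small-array kernel below a size threshold.

SMALL = 16  # in the paper the cutoff is tied to what fits in SIMD registers

def small_sort(a, lo, hi):
    """Small-array kernel: insertion sort on a[lo..hi] (inclusive)."""
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def partition(a, lo, hi):
    """Hoare partition around a median-of-three pivot value; the paper's
    vectorized partitioning kernel plays this role using SIMD compares."""
    pivot = sorted((a[lo], a[(lo + hi) // 2], a[hi]))[1]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]

def hybrid_sort(a, lo=0, hi=None):
    """In-place sort: recurse via partitioning, finish small runs with the kernel."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo + 1 <= SMALL:
            small_sort(a, lo, hi)
            return
        p = partition(a, lo, hi)
        hybrid_sort(a, lo, p)  # production code would recurse on the smaller side
        lo = p + 1

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0] * 10
hybrid_sort(data)
print(data[:10])
```

The structure (partition down to a threshold, then a specialized small-array sorter) is what AVX-512 accelerates: the compress-store instructions vectorize the partition, and sorting networks like Bitonic map directly onto SIMD registers.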
- Apr 28 2017 q-bio.TO arXiv:1704.08307v1 Wear on total knee replacements (TKRs) is an important criterion for their performance characteristics. Numerical simulation of such wear has seen increasing attention in recent years. It has the potential to be much faster and less expensive than the in vitro tests in use today. While it is unlikely that in silico tests will replace actual physical tests in the foreseeable future, a judicious combination of both approaches can help make both implant design and pre-clinical testing quicker and more cost-effective. The challenge today for the design of simulation methods is to obtain results that convey quantitative information, and to do so quickly and reliably. This involves the choice of mathematical models as well as the numerical tools used to solve them. The correctness of the choice can only be validated by comparison with experimental results. In this paper we present finite element simulations of the wear in TKRs during the gait cycle standardized in the ISO 14243-1 document, used for compliance testing in several countries. As the ISO 14243-1 standard is precisely defined and publicly available, it can serve as an excellent benchmark for the comparison of wear simulation methods. Our novel contact algorithm works without Lagrange multipliers and penalty methods, achieving high stability and efficiency. We compare our simulation results with the experimental data from physical tests using two different actual TKRs, each test being performed three times. We can closely predict the total mass loss due to wear after five million gait cycles. We also observe a good match between the wear patterns seen in experiments and our simulation results.
- Apr 28 2017 stat.AP arXiv:1704.08299v1Historical studies of labor market outcomes frequently suffer from a lack of data on individual income. The occupational income score (OCCSCORE) is often used as an alternative measure of labor market outcomes, particularly in studies of the U.S. prior to 1950. While researchers have acknowledged that this approach introduces measurement error, no effort has been made to quantify its impact on inferences. Using modern Census data, we find that the use of OCCSCORE biases results towards zero and can frequently result in statistically significant coefficients of the wrong sign. We show that a simple adjustment to OCCSCORE can substantially reduce this bias. We illustrate our results using the 1915 Iowa State Census, a rare source of pre-1950 earnings data. Using OCCSCORE in this context yields an attenuated wage gap for blacks and a statistically significant wage gap of the wrong sign for women; our adjusted OCCSCORE eliminates almost all of this bias. We also examine how bias due to the use of OCCSCORE affects estimates of intergenerational mobility using linked data from the 1850-1910 Censuses.
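The attenuation mechanism described above is easy to demonstrate in a toy Monte Carlo: assign each worker the mean wage of their occupation (an OCCSCORE analogue) and the within-occupation wage gap vanishes from the estimate. All numbers below (50 occupations, a 0.20 log-wage gap) are invented, and this is an illustration of the measurement-error logic, not the paper's procedure or adjustment:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_occ = 20000, 50

occ = rng.integers(0, n_occ, n)     # occupation of each worker
female = rng.integers(0, 2, n)      # group indicator, independent of occupation here
base = rng.normal(3.0, 0.5, n_occ)  # occupation-level base log wage
# true log wage: occupation pay plus a within-occupation gap of -0.20
wage = base[occ] - 0.20 * female + rng.normal(0.0, 0.3, n)

# OCCSCORE analogue: every worker gets their occupation's mean wage, so all
# within-occupation variation (including the gap) is discarded
occ_mean = np.array([wage[occ == k].mean() for k in range(n_occ)])
occscore = occ_mean[occ]

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    return float(np.linalg.lstsq(X, y, rcond=None)[0][1])

print("true-wage gap:", slope(wage, female))      # close to the true -0.20
print("OCCSCORE gap: ", slope(occscore, female))  # attenuated toward zero
```

When group membership also correlates with occupational sorting, the occupation-score regression can even flip sign, which is the wrong-sign phenomenon the paper documents for women in the Iowa data.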
- Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem, as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the neural network weights. Conventional processing architectures are not well suited to simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrarily complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information, as opposed to the non-spiking, rate-coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU, demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms.

Evaluating gambles using dynamics

Thomas Klimpel Apr 20 2017 09:16 UTC

Veaceslav Molodiuc Apr 19 2017 07:26 UTC

http://ibiblio.org/e-notes/Chaos/intermit.htm

Zoltán Zimborás Apr 18 2017 09:47 UTC

...(continued)Great note. I really like the two end-sentences: "Of course, any given new approach to a hard and extensively studied problem has a very low probability to lead to a direct solution (some popular accounts may not have emphasized this to the degree we would have preferred). But arguably, this makes the

James Wootton Apr 18 2017 08:29 UTC

Interesting to start getting perspectives from actual end users. But this does focus massively on quantum annealing, rather than a 'true' universal and fault-tolerant QC.

Aram Harrow Apr 17 2017 13:45 UTC

It must feel good to get this one out there! :)

Planat Apr 14 2017 08:11 UTC

...(continued)First of all, thanks to all for helping to clarify some hidden points of our paper.

As you can see, the field norm generalizes the standard Hilbert-Schmidt norm.

It works for SIC [e.g. d=2, d=3 (the Hesse) and d=8 (the Hoggar)].The first non-trivial case is with d=4 when one needs to extend th

Robin Blume-Kohout Apr 14 2017 03:03 UTC

...(continued)Okay, I see the resolution to my confusion now (and admit that I was confused). Thanks to Michel, Marcus, Blake, and Steve!

Since I don't know the first thing about cyclotomic field norms... can anybody explain the utility of this norm, for this problem? I mean, just to be extreme, I could define

Steve Flammia Apr 13 2017 19:16 UTC

...(continued)Just to clarify Michel's earlier remark, the field norm for the cyclotomics defines the norm in which these vectors are equiangular, and then they will generally **not** be equiangular in the standard norm based on the Hilbert-Schmidt inner product. In the example that he quotes,

$$\|(7\pm 3 \sqrt{

Marcus Appleby Apr 13 2017 19:16 UTC

...(continued)I worded that badly, since you clearly have explained the sense in which you are using the word. I am wondering, however, how your definition relates to the usual one. Is it a generalization? Or just plain different? For instance, would a SIC be equiangular relative to your definition (using SI

Marcus Appleby Apr 13 2017 18:54 UTC

I am a little confused by this. As I use the term, lines are equiangular if and only if the "trace of the pairwise product of (distinct) projectors is constant". You seem to be using the word in a different sense. It might be helpful if you were to explain exactly what that sense is.