Top arXiv papers

• McVittie spacetimes embed the vacuum Schwarzschild(-(anti) de Sitter) spacetime in an isotropic FLRW background universe. We study the global structure of McVittie spacetimes with spatially non-flat FLRW backgrounds. This requires the extension of the definition of such spacetimes, previously given only for the flat and open cases, to the closed case. We revisit this definition and show how it gives rise to a unique spacetime (given the FLRW background, the mass parameter $M$ and the cosmological constant $\Lambda$) in the open and flat cases. In the closed case, an additional free function of the cosmic time arises. We derive some basic results on the metric, curvature and matter content of McVittie spacetimes and derive a representation of the line element that makes the study of their global properties possible. In the closed case (independently of the free function mentioned above), the spacetime is confined (at each instant of time) to a region bounded by a minimum and a maximum area radius, and is bounded either to the future or to the past by a scalar curvature singularity. This allowed region only exists when the background scale factor is above a certain minimum. In the open case, radial null geodesics originate in finite affine time in the past at a boundary formed by the union of the Big Bang singularity of the FLRW background and a non-singular hypersurface of varying causal character. Furthermore, in the case of eternally expanding open universes, we show that black holes are ubiquitous: ingoing radial null geodesics extend in finite affine time to a hypersurface that forms the boundary of the region from which photons can escape to future null infinity. We revisit the black hole interpretation of McVittie spacetimes in the spatially flat case, and show that this interpretation holds also in the case of a vanishing cosmological constant, contrary to a previous claim of ours.
• A high fraction of carbon bound in solid carbonaceous material is observed to exist in bodies formed in the cold outskirts of the solar nebula, while bodies in the terrestrial planets region contain nearly none. We study the fate of the carbonaceous material during the spiral-in of matter as the sun accretes matter from the solar nebula. From observational data on the composition of the dust component in comets and interplanetary dust particles, and from data on pyrolysis experiments, we construct a model for the composition of the pristine carbonaceous material in the outer parts of the solar nebula. We study the pyrolysis of the refractory and volatile organic component and the concomitant release of high-molecular-weight hydrocarbons under quiescent conditions of disk evolution where matter migrates inwards. We also study the decomposition and oxidation of the carbonaceous material during violent flash heating events, which are thought to be responsible for the formation of chondrules. It is found that the complex hydrocarbon components are removed from the solid disk matter at temperatures between 250 and 400 K, while the amorphous carbon component survives up to 1200 K. Without efficient carbon destruction during the flash heating associated with chondrule formation, the carbon abundance of terrestrial planets, except for Mercury, would not be as low as found in cosmochemical studies. Chondrule formation thus appears to be a process that is crucial for the carbon-poor composition of the material of terrestrial planets.
• We study three types of nonlocal nonlinear Schrödinger (NLS) equations obtained from the coupled NLS system of equations (AKNS equations) by using Ablowitz-Musslimani-type nonlocal reductions. By using the Hirota bilinear method we first find soliton solutions of the coupled NLS system of equations; then, using the reduction formulas, we find the soliton solutions of the standard and time ($T$)-, space ($S$)-, and space-time ($ST$)-reversal symmetric nonlocal NLS equations. We give examples for particular values of the parameters and plot the function $|q(t,x)|^2$ for the standard NLS equation and for each nonlocal NLS equation.
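For orientation, the Ablowitz-Musslimani-type reductions of the AKNS pair $(q,r)$ referred to above are commonly written as follows; this is a schematic sketch, and the sign/constant $k$ and the argument conventions vary between papers:

```latex
% Nonlocal reductions of the AKNS pair (q, r); k is a real constant.
r(t,x) = k\, q^{*}(t,-x)  \quad (S\text{-reversal symmetric}), \\
r(t,x) = k\, q^{*}(-t,x)  \quad (T\text{-reversal symmetric}), \\
r(t,x) = k\, q(-t,-x)     \quad (ST\text{-reversal symmetric}).
```

Substituting any of these into the AKNS system collapses the two coupled equations into a single nonlocal NLS equation for $q$.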
• Purpose: To develop a deep learning approach to digitally-stain optical coherence tomography (OCT) images of the optic nerve head (ONH). Methods: A horizontal B-scan was acquired through the center of the ONH using OCT (Spectralis) for 1 eye of each of 100 subjects (40 normal & 60 glaucoma). All images were enhanced using adaptive compensation. A custom deep learning network was then designed and trained with the compensated images to digitally stain (i.e. highlight) 6 tissue layers of the ONH. The accuracy of our algorithm was assessed (against manual segmentations) using the Dice coefficient, sensitivity, and specificity. We further studied how compensation and the number of training images affected the performance of our algorithm. Results: For images it had not yet assessed, our algorithm was able to digitally stain the retinal nerve fiber layer + prelamina, the retinal pigment epithelium, all other retinal layers, the choroid, and the peripapillary sclera and lamina cribrosa. For all tissues, the mean Dice coefficient was $0.84 \pm 0.03$, the mean sensitivity $0.92 \pm 0.03$, and the mean specificity $0.99 \pm 0.00$. Our algorithm performed significantly better when compensated images were used for training. Increasing the number of images (from 10 to 40) to train our algorithm did not significantly improve performance, except for the RPE. Conclusion: Our deep learning algorithm can simultaneously stain neural and connective tissues in ONH images. Our approach offers a framework to automatically measure multiple key structural parameters of the ONH that may be critical to improve glaucoma management.
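As a reminder of how the reported metrics are computed, here is a minimal sketch for a pair of binary masks; the per-tissue masks, thresholds and averaging used in the paper are not reproduced:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice coefficient, sensitivity and specificity for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # true positives
    tn = np.sum(~pred & ~truth)     # true negatives
    fp = np.sum(pred & ~truth)      # false positives
    fn = np.sum(~pred & truth)      # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

# Toy 4x4 masks overlapping on 3 pixels, each with 1 extra pixel.
pred = np.zeros((4, 4), dtype=int)
truth = np.zeros((4, 4), dtype=int)
pred[0, :2] = pred[1, 0] = pred[3, 3] = 1
truth[0, :2] = truth[1, 0] = truth[2, 2] = 1
dice, sens, spec = segmentation_metrics(pred, truth)  # Dice = 0.75
```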
• Falls are serious and costly for elderly people. The US Centers for Disease Control and Prevention reports that millions of people aged 65 and older fall at least once each year. About 20% of falls cause serious injuries such as hip fractures, broken bones or head injuries. The time it takes to respond to and treat a fallen person is crucial. In this paper we present a new, non-invasive system for detecting fallen people. Our approach uses only stereo camera data for passively sensing the environment. The key novelty is a human fall detector which uses a CNN-based human pose estimator in combination with stereo data to reconstruct the human pose and estimate the ground plane in 3D. We have tested our approach in different scenarios covering most activities elderly people might encounter living at home. Based on our extensive evaluations, our system shows high accuracy and almost no misclassifications. To make our results reproducible, the implementation will be made publicly available to the scientific community.
• This article presents an estimate of the proportion of homonyms in large-scale groups based on the distribution of first and last names in a subset of these groups. The estimate relies on a generalization of the "birthday paradox" problem. The main result is that, in societies such as France or the United States, identity collisions (based on first + last names) are frequent: the large majority of the population has at least one homonym. In smaller settings, however, homonymy is much less frequent: even though small groups of a few thousand people typically contain at least one pair of homonyms, only a few individuals have a homonym.
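The kind of birthday-paradox estimate behind such statements can be sketched as follows; this uses a simple Poisson approximation for non-uniform distributions, not the paper's exact estimator or real name data:

```python
import math

def p_collision_uniform(n, d):
    """Exact probability that among n people with names drawn uniformly
    from d possibilities, at least two share a name."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (d - i) / d
    return 1.0 - p_distinct

def p_collision_general(n, name_probs):
    """Poisson approximation for an arbitrary name distribution: the
    expected number of colliding pairs is C(n,2) * sum(p_i^2)."""
    coincidence = sum(p * p for p in name_probs)
    return 1.0 - math.exp(-n * (n - 1) / 2 * coincidence)

# Classic birthday problem: 23 people, 365 "names" -> just over 1/2.
p = p_collision_uniform(23, 365)  # ~0.507
```

For a skewed name distribution, `sum(p_i^2)` grows, so collisions become likelier than in the uniform case with the same number of names.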
• We discuss the structure of the rapidity divergences that are present in the soft factors of transverse momentum dependent (TMD) factorization theorems. To keep the discussion as general as possible, we consider soft factors for multi-parton scattering. We show that the rapidity divergences are the result of gluon exchanges with the distant transverse plane, and are structurally equivalent to ultraviolet divergences. This allows us to formulate and prove a renormalization theorem for rapidity divergences. The proof uses a conformal transformation which maps rapidity divergences to ultraviolet divergences. The theorem is the systematic form of the factorization of rapidity divergences, which is required for the definition of TMD parton distributions; in particular, the definition of multi-parton distributions is presented. The equivalence of ultraviolet and rapidity divergences leads to an exact relation between the soft and rapidity anomalous dimensions. Using this relation we derive the rapidity anomalous dimension at the three-loop order.
• Deep neural networks have become a primary tool for solving problems in many fields. They are also used for addressing information retrieval problems and show strong performance in several tasks. Training these models requires large, representative datasets and for most IR tasks, such data contains sensitive information from users. Privacy and confidentiality concerns prevent many data owners from sharing the data, thus today the research community can only benefit from research on large-scale datasets in a limited manner. In this paper, we discuss privacy preserving mimic learning, i.e., using predictions from a privacy preserving trained model instead of labels from the original sensitive training data as a supervision signal. We present the results of preliminary experiments in which we apply the idea of mimic learning and privacy preserving mimic learning for the task of document re-ranking as one of the core IR tasks. This research is a step toward laying the ground for enabling researchers from data-rich environments to share knowledge learned from actual users' data, which should facilitate research collaborations.
• We propose to use 2D monolayers possessing optical gaps and high exciton oscillator strength as an element of one-dimensional resonant photonic crystals. We demonstrate that such systems are promising for the creation of effective and compact delay units. In transition-metal-dichalcogenide-based structures where the frequencies of the Bragg and exciton resonances are close, a propagating short pulse can be slowed down by a few picoseconds while the pulse intensity decreases only by a factor of 2-5. This is realized at the frequency of the "slow" mode situated within the stopband. The pulse retardation and attenuation can be controlled by detuning the Bragg frequency from the exciton resonance frequency.
• It is widely accepted that black holes (BHs) with masses greater than a million solar masses (Msun) lurk at the centres of massive galaxies. The origins of such `supermassive' black holes (SMBHs) remain unknown (Djorgovski et al. 1999), while those of stellar-mass BHs are well-understood. One possible scenario is that intermediate-mass black holes (IMBHs), which are formed by the runaway coalescence of stars in young compact star clusters (Portegies Zwart et al. 1999), merge at the centre of a galaxy to form an SMBH (Ebisuzaki et al. 2001). Although many candidates for IMBHs have been proposed to date, none of them are accepted as definitive. Recently we discovered a peculiar molecular cloud, CO-0.40-0.22, with an extremely broad velocity width near the centre of our Milky Way galaxy. Based on the careful analysis of gas kinematics, we concluded that a compact object with a mass of ~10^5 Msun is lurking in this cloud (Oka et al. 2016). Here we report the detection of a point-like continuum source as well as a compact gas clump near the centre of CO-0.40-0.22. This point-like continuum source (CO-0.40-0.22*) has a wide-band spectrum consistent with 1/500 of the Galactic SMBH (Sgr A*) in luminosity. Numerical simulations around a point-like massive object reproduce the kinematics of dense molecular gas well, which suggests that CO-0.40-0.22* is the most promising candidate for an intermediate-mass black hole.
• [Jul 25 2017, astro-ph.GA, arXiv:1707.07602v1] Some highlights are given of IAU Symposium 334, Rediscovering our Galaxy, held in Potsdam in July 2017: from the fossil records of the first stars found in the halo (the carbon-enhanced metal-poor CEMP-no stars), to cosmological simulations presenting possible scenarios for the formation of the Milky Way, passing through chemo-dynamical models of its various components (thin and thick disks, box/peanut bulge, halo, etc.). The domain is experiencing (or will experience in the near future) huge improvements thanks to precise and accurate stellar ages provided by asteroseismology, precise stellar distances and kinematics (parallaxes and proper motions from Gaia), and big data from large surveys treated with deep learning algorithms.
• In this paper we propose a model to learn multimodal multilingual representations for matching images and sentences in different languages, with the aim of advancing multilingual versions of image search and image understanding. Our model learns a common representation for images and their descriptions in two different languages (which need not be parallel) by considering the image as a pivot between two languages. We introduce a new pairwise ranking loss function which can handle both symmetric and asymmetric similarity between the two modalities. We evaluate our models on image-description ranking for German and English, and on semantic textual similarity of image descriptions in English. In both cases we achieve state-of-the-art performance.
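A generic margin-based symmetric pairwise ranking loss over an image-sentence similarity matrix can be sketched as follows; this is a standard formulation for illustration, not the paper's exact loss, which also handles asymmetric similarity:

```python
import numpy as np

def symmetric_ranking_loss(scores, margin=0.1):
    """Margin-based pairwise ranking loss for a square image-sentence
    similarity matrix whose matching pairs lie on the diagonal."""
    n = scores.shape[0]
    pos = np.diag(scores)                          # matching-pair scores
    # penalise non-matching sentences ranked too close to each image ...
    cost_s = np.maximum(0.0, margin - pos[:, None] + scores)
    # ... and non-matching images ranked too close to each sentence
    cost_i = np.maximum(0.0, margin - pos[None, :] + scores)
    np.fill_diagonal(cost_s, 0.0)                  # skip the positive pair
    np.fill_diagonal(cost_i, 0.0)
    return (cost_s.sum() + cost_i.sum()) / n

# Well-separated pairs incur zero loss ...
loss_good = symmetric_ranking_loss(np.array([[0.9, 0.2], [0.1, 0.8]]))
# ... while confusable pairs are penalised.
loss_bad = symmetric_ranking_loss(np.array([[0.5, 0.6], [0.4, 0.5]]))
```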
• An atom interferometer using a Bose-Einstein condensate of $^{87}$Rb atoms is utilized for the measurement of magnetic field gradients. Composite optical pulses are used to construct a spatially symmetric Mach-Zehnder geometry. Using a biased interferometer we demonstrate the ability to measure small residual forces in our system at the position of the atoms. These are a residual magnetic field gradient of 15$\pm$2 mG/cm and an inertial acceleration of 0.08$\pm$0.02 m/s$^2$. Our method has important applications in the calibration of precision measurement devices and the reduction of systematic errors.
• The paper addresses novel dispersion properties of elastic flexural waves in periodic structures which possess rotational inertia. The structure is represented as a lattice, whose elementary links are formally defined as Rayleigh beams. Although in the quasi-static regime such beams respond similarly to the classical Euler-Bernoulli beams, as the frequency increases the dispersion of flexural waves possesses new interesting features. For a doubly periodic lattice, we pay special attention to degeneracies associated with so-called Dirac cones on the dispersion surfaces as well as directional anisotropy. Comparative analysis for Floquet-Bloch waves in periodic flexural lattices of different geometries is presented and accompanied by numerical simulations.
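For reference, the textbook Rayleigh beam equation and its dispersion relation, which exhibit the bounded high-frequency phase velocity that distinguishes Rayleigh from Euler-Bernoulli beams (notation assumed here: flexural rigidity $EI$, density $\rho$, cross-section area $A$, second moment of area $I$):

```latex
EI\,\frac{\partial^{4} w}{\partial x^{4}}
  - \rho I\,\frac{\partial^{4} w}{\partial x^{2}\,\partial t^{2}}
  + \rho A\,\frac{\partial^{2} w}{\partial t^{2}} = 0,
\qquad
\omega^{2} = \frac{EI\,k^{4}}{\rho A + \rho I\,k^{2}}.
```

Dropping the rotational-inertia term $\rho I\,\partial^{4}w/\partial x^{2}\partial t^{2}$ recovers the Euler-Bernoulli relation $\omega = k^{2}\sqrt{EI/\rho A}$, whose phase velocity grows without bound as $k \to \infty$.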
• Estimating parameters of Partial Differential Equations (PDEs) is of interest in a number of applications such as geophysical and medical imaging. Parameter estimation is commonly phrased as a PDE-constrained optimization problem that can be solved iteratively using gradient-based optimization. A computational bottleneck in such approaches is that the underlying PDEs need to be solved numerous times before the model is reconstructed with sufficient accuracy. One way to reduce this computational burden is by using Model Order Reduction (MOR) techniques such as the Multiscale Finite Volume Method (MSFV). In this paper, we apply MSFV for solving high-dimensional parameter estimation problems. Given a finite volume discretization of the PDE on a fine mesh, the MSFV method reduces the problem size by computing a parameter-dependent projection onto a nested coarse mesh. A novelty in our work is the integration of MSFV into a PDE-constrained optimization framework, which updates the reduced space in each iteration. We also present a computationally tractable way of differentiating the MOR solution that acknowledges the change of basis. As we demonstrate in our numerical experiments, our method leads to computational savings particularly for large-scale parameter estimation problems and can benefit from parallelization.
• In the present paper, we establish sufficient conditions for the existence and stability of solutions for a system of nonlinear implicit fractional differential equations. The main technique is the method of successive approximations. Finally, an illustrative example is given to show the applicability of our theoretical results.
• [Jul 25 2017, math.DG, arXiv:1707.07595v1] We prove in a direct, geometric way that for any compatible Riemannian metric on a Lie manifold the injectivity radius is positive.
• The political discourse in Western European countries such as Germany has recently seen a resurgence of the topic of refugees, fueled by an influx of refugees from various Middle Eastern and African countries. Even though the topic of refugees evidently plays a large role in the online and offline politics of the affected countries, the fact that protests against refugees stem from the right-wing political spectrum has led to the corresponding media being shared in a decentralized fashion, making an analysis of the underlying social and mediatic networks difficult. In order to contribute to the analysis of these processes, we present a quantitative study of the social media activities of a contemporary nationwide protest movement against local refugee housing in Germany, which organizes itself via dedicated Facebook pages per city. We analyse data from 136 such protest pages in 2015, containing more than 46,000 posts and more than one million interactions by more than 200,000 users. In order to learn about the patterns of communication and interaction among users of far-right social media sites and pages, we investigate the temporal characteristics of the social media activities of this protest movement, as well as the connectedness of the interactions of its participants. We find several activity metrics, such as the number of posts issued, discussion volume about crime and housing costs, negative polarity in comments, and user engagement, to peak in late 2015, coinciding with chancellor Angela Merkel's much-criticized decision of September 2015 to temporarily admit the entry of Syrian refugees to Germany. Furthermore, our evidence suggests a low degree of direct connectedness of participants in this movement (indicated, among other things, by a lack of geographical collaboration patterns), yet we encounter a strong affiliation of the pages' user base with far-right political parties.
• [Jul 25 2017, math.LO, arXiv:1707.07593v1] We are concerned with the problem of witnessing the Baire property of Borel and projective sets (assuming determinacy) through a sufficiently definable function in the codes. We prove that in the case of projective sets it is possible to achieve this for almost all codes using a continuous function. We also show that it is impossible to improve this to all codes, even if more complex functions in the codes are allowed. We also study the intermediate steps of the Borel hierarchy, and we give an estimate of the complexity of functions in the codes that verify the Baire property for all codes.
• The massive spread of fake news has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of digital misinformation and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots. Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users. Humans are vulnerable to this manipulation, retweeting bots that post false news. Successful sources of false and biased claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
• This work addresses the task of generating English sentences from Abstract Meaning Representation (AMR) graphs. To cope with this task, we transform each input AMR graph into a structure similar to a dependency tree and annotate it with syntactic information by applying various predefined actions to it. Subsequently, a sentence is obtained from this tree structure by visiting its nodes in a specific order. We train maximum entropy models to estimate the probability of each individual action and devise an algorithm that efficiently approximates the best sequence of actions to be applied. Using a substandard language model, our generator achieves a Bleu score of 27.4 on the LDC2014T12 test set, the best result reported so far without using silver standard annotations from another corpus as additional training data.
• A Riemannian manifold is said to be almost positively curved if the set of points at which all $2$-planes have positive sectional curvature is open and dense. We show that the Grassmannian of oriented $2$-planes in $\mathbb{R}^7$ admits a metric of almost positive curvature, giving the first example of an almost positively curved metric on an irreducible compact symmetric space of rank greater than $1$. The construction and verification rely on the Lie group $\mathbf{G}_2$ and the octonions, so do not obviously generalize to any other Grassmannians.
• In this paper we consider the Iteratively Regularized Gauss-Newton Method (IRGNM) in its classical Tikhonov version and in an Ivanov type version, where regularization is achieved by imposing bounds on the solution. We do so in a general Banach space setting and under a tangential cone condition, while convergence (without source conditions, thus without rates) has so far only been proven under stronger restrictions on the nonlinearity of the operator and/or on the spaces. Moreover, we provide a convergence result for the discretized problem with an appropriate control on the error and show how to provide the required error bounds by goal oriented weighted dual residual estimators. The results are illustrated for an inverse source problem for a nonlinear elliptic boundary value problem, for the cases of a measure valued and of an $L^\infty$ source. For the latter, we also provide numerical results with the Ivanov type IRGNM.
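A minimal Tikhonov-type IRGNM iteration in Euclidean space can be sketched as follows; this is only an illustration of the classical update formula on a toy problem — the paper's setting (Banach spaces, Ivanov-type constraints, discretization error control) is not captured here:

```python
import numpy as np

def irgnm(F, J, y, x0, alphas):
    """Tikhonov-type IRGNM: at each step solve the regularised
    linearisation  (J'J + a I) h = J'(y - F(x)) + a (x0 - x)."""
    x = x0.copy()
    for a in alphas:
        Jk = J(x)
        rhs = Jk.T @ (y - F(x)) + a * (x0 - x)
        x = x + np.linalg.solve(Jk.T @ Jk + a * np.eye(len(x)), rhs)
    return x

# Toy nonlinear forward operator F and its Jacobian J (hypothetical example).
F = lambda x: np.array([x[0] ** 2 + x[1], x[1] ** 2])
J = lambda x: np.array([[2 * x[0], 1.0], [0.0, 2 * x[1]]])
y = F(np.array([1.0, 2.0]))                       # exact data
alphas = [0.5 * 0.7 ** k for k in range(30)]      # geometrically decaying regularisation
x = irgnm(F, J, y, x0=np.array([0.8, 1.5]), alphas=alphas)
```

With exact data the decaying regularisation lets the iterates approach the true parameters; with noisy data the iteration would instead be stopped early by a discrepancy principle.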
• We show that the fourth integral cohomology of Conway's group $\mathrm{Co}_0$ is a cyclic group of order $24$, generated by the first fractional Pontryagin class of the $24$-dimensional representation.
• Foreground segmentation in video sequences is a classic topic in computer vision. Due to the lack of semantic and prior knowledge, it is difficult for existing methods to deal with sophisticated scenes well. Therefore, in this paper, we propose an end-to-end two-stage deep convolutional neural network (CNN) framework for foreground segmentation in video sequences. In the first stage, a convolutional encoder-decoder sub-network is employed to reconstruct the background images and encode rich prior knowledge of background scenes. In the second stage, the reconstructed background and current frame are input into a multi-channel fully-convolutional sub-network (MCFCN) for accurate foreground segmentation. In the two-stage CNN, the reconstruction loss and segmentation loss are jointly optimized. The background images and foreground objects are output simultaneously in an end-to-end way. Moreover, by incorporating the prior semantic knowledge of foreground and background in the pre-training process, our method could restrain the background noise and keep the integrity of foreground objects at the same time. Experiments on CDNet 2014 show that our method outperforms the state-of-the-art by 4.9%.
• Recent advances obtained in the field of near and sub-barrier heavy-ion fusion reactions are reviewed. Emphasis is given to the results obtained in the last decade, and focus will be mainly on the experimental work performed concerning the influence of transfer channels on fusion cross sections and the hindrance phenomenon far below the barrier. Indeed, early data of sub-barrier fusion taught us that cross sections may strongly depend on the low-energy collective modes of the colliding nuclei, and, possibly, on couplings to transfer channels. The coupled-channels (CC) model has been quite successful in the interpretation of the experimental evidences. Fusion barrier distributions often yield the fingerprint of the relevant coupled channels. Recent results obtained by using radioactive beams are reported. At deep sub-barrier energies, the slope of the excitation function in a semi-logarithmic plot keeps increasing in many cases and standard CC calculations over-predict the cross sections. This was named a hindrance phenomenon, and its physical origin is still a matter of debate. Recent theoretical developments suggest that this effect, at least partially, may be a consequence of the Pauli exclusion principle. The hindrance may have far-reaching consequences in astrophysics where fusion of light systems determines stellar evolution during the carbon and oxygen burning stages, and yields important information for exotic reactions that take place in the inner crust of accreting neutron stars.
• The radio-frequency surface resistance of niobium resonators is drastically reduced when nitrogen impurities are dissolved as interstitials in the material, conferring ultra-high Q-factors at medium values of accelerating field. This effect has been observed in both high- and low-temperature nitrogen treatments. The peculiar anti Q-slope observed in nitrogen-doped cavities, i.e. the increase of the Q-factor with increasing radio-frequency field, comes from the decrease of the BCS surface resistance component as a function of the field. This peculiar behavior has been considered a consequence of the interstitial nitrogen present in the niobium lattice after the doping treatment. The study presented here shows the field dependence of the BCS surface resistance of cavities with different resonant frequencies (650 MHz, 1.3 GHz, 2.6 GHz and 3.9 GHz), processed with different state-of-the-art surface treatments. These findings show for the first time that the anti Q-slope may be seen at high frequency even for clean niobium cavities, providing useful insight into the physics underlying the anti Q-slope effect.
• A graph with chromatic number $k$ is called $k$-chromatic. Using computational methods, we show that the smallest triangle-free 6-chromatic graphs have at least 32 and at most 40 vertices. We also determine the complete set of all triangle-free 5-chromatic graphs up to 23 vertices and all triangle-free 5-chromatic graphs on 24 vertices with maximum degree at most 7. This implies that Reed's conjecture holds for triangle-free graphs up to at least 24 vertices. Finally, we determine that the smallest regular triangle-free 5-chromatic graphs have 24 vertices.
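To make the terminology concrete, here is a brute-force chromatic-number check on a tiny example; the paper's computations, of course, rest on far more sophisticated exhaustive-generation methods:

```python
from itertools import product

def chromatic_number(n, edges):
    """Smallest k admitting a proper k-colouring of the graph on
    vertices 0..n-1 (brute force, fine only for tiny graphs)."""
    for k in range(1, n + 1):
        for colouring in product(range(k), repeat=n):
            if all(colouring[u] != colouring[v] for u, v in edges):
                return k
    return n

# The 5-cycle contains no triangle yet needs 3 colours.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
chi = chromatic_number(5, c5)  # 3
```

The smallest triangle-free 4-chromatic graph, the 11-vertex Grötzsch graph, could be verified the same way; for 5- and 6-chromatic cases the search space explodes, which is why the paper's bounds require serious computation.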
• We use numerical N-body hydrodynamical simulations with varying PopIII stellar models to investigate the possibility of detecting first star signatures with observations of high-redshift damped Ly$\alpha$ absorbers (DLAs). The simulations include atomic and molecular cooling, star formation, energy feedback and metal spreading due to the evolution of stars with a range of masses and metallicities. Different initial mass functions (IMFs) and corresponding metal-dependent yields and lifetimes are adopted to model primordial stellar populations. The DLAs in the simulations are selected according to either the local gas temperature (temperature selected) or the host mass (mass selected). We find that 3\% (40\%) of mass (temperature) selected high-$z$ ($z\ge5.5$) DLAs retain signatures of pollution from PopIII stars, independently of the first star model. Such DLAs have low halo mass ($<10^{9.6}\,\rm M_{\odot}$), metallicity ($<10^{-3}\,\rm Z_{\odot}$) and star formation rate ($<10^{-1.5}\,\rm M_{\odot}\,yr^{-1}$). Metal abundance ratios of DLAs imprinted in the spectra of QSOs can be useful tools to infer the properties of the polluting stellar generation and to constrain the first star mass ranges. Comparing the abundance ratios derived from our simulations to those observed in DLAs at $z\ge5$, we find that most of these DLAs are consistent within errors with enrichment dominated by PopII stars and strongly disfavor the pollution pattern of very massive first stars (i.e. 100~$\rm M_{\odot}$-500~$\rm M_{\odot}$). However, some of them could still result from the pollution of first stars in the mass range [0.1, 100]~$\rm M_{\odot}$. In particular, we find that the abundance ratios from SDSS J1202+3235 are consistent with those expected from PopIII enrichment dominated by massive (but not extreme) first stars.
• This paper is concerned with necessary and sufficient second-order conditions for finite-dimensional and infinite-dimensional constrained optimization problems. Using a suitably defined directional curvature functional for the admissible set, we derive no-gap second-order optimality conditions in an abstract functional analytic setting. Our theory not only covers those cases where the classical assumptions of polyhedricity or second-order regularity are satisfied but also allows us to study problems in the absence of these requirements. As a tangible example, we consider no-gap second-order conditions for bang-bang optimal control problems.
• Many statistical properties of X-ray aperiodic variability from accreting compact objects can be explained by the propagating fluctuations model applied to the accretion disc. The mass accretion rate fluctuations originate from variability of viscosity, which arises at every radius and causes local fluctuations of the density. The fluctuations diffuse through the disc and result in local variability of the mass accretion rate, which modulates the X-ray flux from the inner disc in the case of black holes, or from the surface in the case of neutron stars. A key role in the theoretical explanation of fast variability belongs to the description of the diffusion process. The propagation and evolution of the fluctuations is described by the diffusion equation, which can be solved by the method of Green functions. We implement Green functions in order to accurately describe the propagation of fluctuations in the disc. For the first time we consider both forward and backward propagation. We show that (i) viscous diffusion efficiently suppresses variability at time scales shorter than the viscous time, (ii) local fluctuations of viscosity affect the mass accretion rate variability both in the inner and the outer parts of the accretion disc, (iii) propagating fluctuations give rise not only to hard time lags as previously shown, but also produce soft lags at high frequency similar to those routinely attributed to reprocessing, (iv) deviation from the linear rms-flux relation is predicted for the case of very large initial perturbations. Our model naturally predicts bumpy power spectra.
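Point (i) — viscous diffusion suppressing variability on time scales shorter than the viscous time — can be illustrated with a toy calculation in which white noise is smoothed by a Gaussian stand-in for a diffusion Green function; the disc geometry and the actual Green functions of the viscous diffusion equation are not modelled here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t_visc = 4096, 20.0
noise = rng.standard_normal(n)     # local "viscosity" fluctuations

def diffuse(signal, width):
    """Smooth a time series with a Gaussian stand-in for the diffusion
    Green function of characteristic width `width` (in samples)."""
    w = np.arange(-4 * width, 4 * width + 1)
    g = np.exp(-w ** 2 / (2 * width ** 2))
    return np.convolve(signal, g / g.sum(), mode="same")

smoothed = diffuse(noise, t_visc)

# Compare power above the "viscous" frequency 1/t_visc before and after.
freq = np.fft.rfftfreq(n, d=1.0)
fast = freq > 1 / t_visc
p_in = np.abs(np.fft.rfft(noise)[fast]) ** 2
p_out = np.abs(np.fft.rfft(smoothed)[fast]) ** 2
suppression = p_out.sum() / p_in.sum()   # far below 1
```

Variability below the viscous frequency passes through almost unchanged, while the fast band is strongly damped — the qualitative behaviour the abstract describes.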
• We present a method allowing us to measure the spectral functions of non-interacting ultra-cold atoms in a three-dimensional disordered potential resulting from an optical speckle field. Varying the disorder strength by two orders of magnitude, we observe the crossover from the "quantum" perturbative regime of low disorder to the "classical" regime at higher disorder strength, and find an excellent agreement with numerical simulations. The experiment relies on the use of state-dependent disorder and the controlled transfer of atoms to create well-defined energy states. This opens new avenues for improved experimental investigations of three-dimensional Anderson localization.
• In this work we present the novel ASTRID method for investigating which attribute interactions classifiers exploit when making predictions. Attribute interactions in classification tasks mean that two or more attributes together provide stronger evidence for a particular class label. Knowledge of such interactions makes models more interpretable by revealing associations between attributes. This has applications, e.g., in pharmacovigilance to identify interactions between drugs or in bioinformatics to investigate associations between single nucleotide polymorphisms. We also show how the attribute partitioning found by ASTRID is related to a factorisation of the data-generating distribution, and empirically demonstrate the utility of the proposed method.
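A minimal example of what "attribute interaction" means (a toy XOR illustration, not the ASTRID method): each attribute alone carries no information about the class label, but the pair jointly determines it.

```python
import numpy as np
from collections import Counter

# Toy XOR data: y depends on the *interaction* of a and b, not on either alone.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 10_000)
b = rng.integers(0, 2, 10_000)
y = a ^ b

def best_accuracy(attrs, y):
    """Best achievable accuracy predicting y from the given attribute
    columns, via majority vote within each attribute-value combination."""
    counts = {}
    for key, label in zip(zip(*attrs), y):
        counts.setdefault(key, Counter())[label] += 1
    correct = sum(c.most_common(1)[0][1] for c in counts.values())
    return correct / len(y)

print(best_accuracy([a], y))      # ~0.5: 'a' alone is uninformative
print(best_accuracy([b], y))      # ~0.5: 'b' alone is uninformative
print(best_accuracy([a, b], y))   # 1.0: together they determine y
```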
• We interpret the support $\tau$-tilting complex of any gentle bound quiver as the non-kissing complex of walks on its blossoming quiver. Particularly relevant examples were previously studied for quivers defined by a subset of the grid or by a dissection of a polygon. We then focus on the case when the non-kissing complex is finite. We show that the graph of increasing flips on its facets is the Hasse diagram of a congruence-uniform lattice. Finally, we study its $\mathbf{g}$-vector fan and prove that it is the normal fan of a non-kissing associahedron.
• Ehrenborg, Govindaiah, Park, and Readdy recently introduced the van der Waerden complex, a pure simplicial complex whose facets correspond to arithmetic progressions. Using techniques from combinatorial commutative algebra, we classify when these pure simplicial complexes are vertex decomposable or not Cohen-Macaulay. As a corollary, we classify the van der Waerden complexes that are shellable.
• Massless particles in $n+1$ dimensions lead to massive particles in $n$ dimensions on Kaluza-Klein reduction. In string theory, wrapped branes lead to multiplets of massive particles in $n$ dimensions, in representations of a duality group $G$. By encoding the masses of these particles in auxiliary worldline scalars, also transforming under $G$, we write an action which resembles that for a massless particle on an extended spacetime. We associate this extended spacetime with that appearing in double field theory and exceptional field theory, and formulate a version of the action which is invariant under the generalised diffeomorphism symmetry of these theories. This provides a higher-dimensional perspective on the origin of mass and tension in string theory and M-theory. Finally, we consider the reduction of exceptional field theory on a twisted torus, which is known to give the massive IIA theory of Romans. In this case, our particle action leads naturally to the action for a D0 brane in massive IIA. Here an extra vector field is present on the worldline, whose origin in exceptional field theory is a vector field introduced to ensure invariance under generalised diffeomorphisms.
• We extend recent results on the Asymptotic Equipartition Property for the density of $n$ particles in $\beta$-ensembles, as $n$ tends to infinity. We prove the Large Deviation Principle of the log-density for a general potential and the mod-Gaussian convergence in the classical examples.
• In this paper we provide detailed information about the instability of equilibrium solutions of a nonlinear family of localized reaction-diffusion equations in dimension one. Moreover, we provide explicit formulas for the equilibrium solutions via a perturbation method, and we calculate the exact number of positive eigenvalues of the linear operator associated with the stability problem, which allows us to compute the dimension of the unstable manifold.
• Purpose: To apply tracer kinetic models as temporal constraints during reconstruction of under-sampled dynamic contrast enhanced (DCE) MRI. Methods: A library of concentration vs. time profiles is simulated for a range of physiological kinetic parameters. The library is reduced to a dictionary of temporal bases, where each profile is approximated by a sparse linear combination of the bases. Image reconstruction is formulated as estimation of concentration profiles and sparse model coefficients with a fixed sparsity level. Simulations are performed to evaluate the modeling error and the error statistics of kinetic parameter estimation in the presence of noise. Retrospective under-sampling experiments are performed on a brain tumor DCE digital reference object (DRO) at different signal-to-noise levels (SNR=20-40) at a (k-t) space under-sampling factor of R=20, and on 12 brain tumor in-vivo 3T datasets at R=20-40. The approach is compared against an existing compressed sensing based temporal finite-difference (tFD) reconstruction approach. Results: Simulations demonstrate that sparsity levels of 2 and 3 model the library profiles from the Patlak and extended Tofts-Kety (ETK) models, respectively. Noise sensitivity analysis showed equivalent kinetic parameter estimation error statistics from noisy concentration profiles and model-approximated profiles. DRO-based experiments showed good fidelity in the recovery of kinetic maps from 20-fold under-sampled data at SNRs between 10 and 30. In-vivo experiments demonstrated reduced bias and uncertainty in kinetic mapping with the proposed approach compared to tFD at R>=20. Conclusions: Tracer kinetic models can be applied as temporal constraints during DCE-MRI reconstruction, enabling more accurate reconstruction from under-sampled data. The approach is flexible, can use several kinetic models, and does not require tuning of regularization parameters.
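The library-to-dictionary step can be sketched in a few lines (a toy Patlak-only version; the arterial input function, parameter ranges, and SVD-based dictionary below are illustrative assumptions, not the paper's settings). Because Patlak profiles are linear combinations of two temporal shapes, a sparsity level of 2 reproduces the library essentially exactly, mirroring the paper's finding:

```python
import numpy as np

# Simulate a library of Patlak-model concentration curves, build temporal
# bases by SVD, and approximate each profile with K coefficients.
t = np.linspace(0, 5, 50)                        # minutes (toy time axis)
aif = t * np.exp(1 - t)                          # toy arterial input function
cum_aif = np.cumsum(aif) * (t[1] - t[0])

ktrans = np.linspace(0.01, 0.3, 30)              # 1/min (illustrative range)
vp = np.linspace(0.01, 0.1, 10)
library = np.array([k * cum_aif + v * aif for k in ktrans for v in vp])

# Temporal dictionary from the leading right singular vectors
U, s, Vt = np.linalg.svd(library, full_matrices=False)
K = 2                                            # sparsity level for Patlak
bases = Vt[:K]

# Approximate every library profile with K coefficients (orthonormal projection)
coeffs = library @ bases.T
approx = coeffs @ bases
rel_err = np.linalg.norm(library - approx) / np.linalg.norm(library)
print(f"relative modeling error with K={K}: {rel_err:.2e}")
```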
• The paper describes the CAp 2017 challenge, which concerns the problem of Named Entity Recognition (NER) for tweets written in French. We first present the data preparation steps we followed for constructing the dataset released in the framework of the challenge. We begin by demonstrating why NER for tweets is a challenging problem, especially when the number of entities increases. We detail the annotation process and the necessary decisions we made. We provide statistics on the inter-annotator agreement, and we conclude the data description part with examples and statistics for the data. We then describe the challenge itself, in which 8 teams participated, with a focus on the methods employed by the participants and the scores achieved in terms of the F$_1$ measure. Importantly, the constructed dataset, comprising $\sim$6,000 tweets annotated for 13 types of entities, which to the best of our knowledge is the first such dataset in French, is publicly available at http://cap2017.imag.fr/competition.html .
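For readers unfamiliar with the metric, entity-level F$_1$ can be computed as follows (a common convention using exact span-and-type matching; the challenge's precise matching rules may differ):

```python
# Entities are (start, end, type) spans; a prediction counts as correct only
# if both the span and the entity type match exactly.
def f1_score(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                          # exact matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [(0, 2, "person"), (5, 6, "location"), (9, 11, "org")]
pred = [(0, 2, "person"), (5, 6, "org"), (9, 11, "org")]
print(f1_score(gold, pred))   # 2 of 3 spans match with the correct type
```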
• Temporal imaging systems are outstanding tools for single-shot observation of optical signals that have irregular and ultrafast dynamics. They allow long time windows to be recorded with femtosecond resolution, and do not rely on complex algorithms. However, simultaneous recording of amplitude and phase remains an open challenge for these systems. Here we present a new heterodyne time-lens arrangement that efficiently records both the amplitude and phase of complex signals, while retaining the temporal resolution of classical time-lens systems ($\sim 200$ fs) and their field of view (tens of ps). Phase and time are encoded onto the two spatial dimensions of a camera. We demonstrate direct application of our heterodyne time lens to turbulent-like optical fields and optical rogue waves generated from nonlinear propagation of partially coherent waves inside optical fibres. We also show how this phase-sensitive time-lens system enables digital temporal holography to be performed with even higher temporal resolution (80 fs).
• Consideration of the entropy production in the creation of the CMB leads to a simple model of the evolution of the universe during this period, which suggests a connection between the small observed acceleration term and the early inflation of a closed universe. From this we find an unexpected relationship between the $\Omega$ parameters of cosmology and calculate the total volume of the universe.
• The progression of breast cancer can be quantified in lymph node whole-slide images (WSIs). We describe a novel method for effectively performing classification of whole-slide images and patient-level breast cancer grading. Our method uses a deep neural network to classify small patches and applies model averaging for boosting. In the first step, region-of-interest patches are determined and cropped automatically by color thresholding, and then classified by the deep neural network. The classification results are used to determine a slide-level class and are further aggregated to predict a patient-level grade. The fast processing speed of our method enables high-throughput image analysis.
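The pipeline can be sketched as follows (a minimal version assuming a grayscale slide; the intensity threshold, patch size, and stand-in classifier are hypothetical, not the paper's settings):

```python
import numpy as np

def tissue_patches(slide, patch=32, thresh=220):
    """Yield top-left corners of patches whose mean intensity is below
    `thresh`, i.e. not background (background is near-white in H&E slides)."""
    h, w = slide.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if slide[y:y + patch, x:x + patch].mean() < thresh:
                yield y, x

def slide_score(slide, classify, patch=32):
    """Average patch-level probabilities into a slide-level score."""
    probs = [classify(slide[y:y + patch, x:x + patch])
             for y, x in tissue_patches(slide, patch)]
    return float(np.mean(probs)) if probs else 0.0

# Toy slide: white background with one dark "tissue" region
slide = np.full((128, 128), 250, dtype=np.uint8)
slide[32:96, 32:96] = 100
fake_classifier = lambda p: 0.9        # stand-in for the CNN's tumour probability
print(slide_score(slide, fake_classifier))
```

In the real pipeline the stand-in classifier is replaced by the trained deep network, and slide-level classes are further aggregated to a patient-level grade.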
• This viewpoint relates to an article by Jorge Kurchan (1998 J. Phys. A: Math. Gen. 31, 3719) as part of a series of commentaries celebrating the most influential papers published in the J. Phys. series, which is celebrating its 50th anniversary.
• An opto-electro-mechanical system formed by a nanomembrane capacitively coupled to an LC resonator and to an optical interferometer has been recently employed for the high-sensitivity optical readout of rf signals [T. Bagci et al., Nature 507, 81 (2014)]. Here we propose and experimentally demonstrate how the bandwidth of this kind of transducer can be increased by controlling the interference between two electromechanical interaction pathways of a two-mode mechanical system. Using a $1 \times 1$ mm SiN membrane coated with a 27 nm Nb film, we achieve a sensitivity of 300 nV$/\sqrt{\mathrm{Hz}}$ over a bandwidth of 15 kHz at room temperature.
• John S. Bell is well known for the result now referred to simply as "Bell's theorem," which removed local hidden-variable theories from serious consideration in physics. Under these circumstances, if quantum theory is to serve as a truly fundamental theory, conceptual precision in its interpretation is not only even more desirable but paramount. John Bell was accordingly concerned about what he viewed as conceptual imprecision, from the physical point of view, in the standard approaches to the theory. He saw this as most acute in their treatment of measurement at the level of principle. Bell pointed out that this conceptual imprecision is reflected in the terminology of the theory, a great deal of which he deemed worthy of banishment from discussions of principle; for him, it corresponded to a set of what he saw as vague and, in some instances, outright destructive concepts. Here, I consider this critique of standard quantum measurement theory and some alternative treatments in which he saw greater conceptual precision, and make further suggestions as to how to proceed along the lines he advocated.
• We consider structure learning of linear Gaussian structural equation models with weak edges. Since the presence of weak edges can lead to a loss of edge orientations in the true underlying CPDAG, we define a new graphical object that can contain more edge orientations. We show that this object can be recovered from observational data under a type of strong faithfulness assumption. We present a new algorithm for this purpose, called aggregated greedy equivalence search (AGES), that aggregates the solution path of the greedy equivalence search (GES) algorithm for varying values of the penalty parameter. We prove consistency of AGES and demonstrate its performance in a simulation study and on single-cell data from Sachs et al. (2005). The algorithm will be made available in the R-package pcalg.
• We study the linear spatiotemporal stability of an infinite row of equal point vortices under symmetric confinement between parallel walls. This serves to model the secondary pairing instability in free shear layers, allowing us to study how confinement limits the growth of shear layers through vortex pairings. Using a geometric construction akin to a Legendre transform on the dispersion relation, we compute the growth rate of the instability in different reference frames as a function of frame velocity. This new approach is verified and complemented with numerical computations of the linear impulse response, fully characterizing the absolute/convective nature of the instability. As for the primary instability of parallel tanh profiles, we observe a range of confinement in which absolute instability is promoted. For a given parallel shear layer and channel width, the absolute/convective threshold of the pairing instability depends on the separation distance between consecutive vortices, which is physically determined by the wavelength selected by the previous (primary or pairing) instability. With counterflow and moderate to weak confinement, a small wavelength of the vortex row leads to absolute instability. In the present case, however, the result of the secondary pairing instability is to regenerate the flow with an increased wavelength, eventually leading to convective instability. This leads us to propose a wavelength selection criterion, according to which a spatially developing row of vortices in a free shear layer with counterflow can only occur if the distance between vortices is sufficiently large, in comparison to the channel width, so that the pairing instability is convective. The proposed wavelength selection mechanism can serve as a guideline for experimentally obtaining plane shear layers with counterflow, which has remained an experimental challenge.
• For the solution of the Poisson problem with an $L^\infty$ right-hand side $$\begin{cases} -\Delta u(x) = f(x) & \text{in } D, \\ u = 0 & \text{on } \partial D, \end{cases}$$ we derive an optimal estimate of the form $$\|u\|_\infty \le \|f\|_\infty\, \sigma_D(\|f\|_1/\|f\|_\infty),$$ where $\sigma_D$ is a modulus of continuity defined on the interval $[0, |D|]$ that depends only on the domain $D$. In the case when $f \ge 0$ in $D$, the inequality is optimal for any domain and for any values of $\|f\|_1$ and $\|f\|_\infty$. We also show that $$\sigma_D(t) \le \sigma_B(t) \quad \text{for } t \in [0, |D|],$$ where $B$ is a ball with $|B| = |D|$. Using this optimality property of $\sigma$, we derive Brezis-Gallouet-Wainger type inequalities on the $L^\infty$ norm of $u$ in terms of the $L^1$ and $L^\infty$ norms of $f$. The estimates have explicit coefficients depending on the space dimension $n$ and become equalities for a specific choice of $u$ when the domain $D$ is a ball. As an application we derive $L^\infty$-$L^1$ estimates on the $k$-th Laplace eigenfunction of the domain $D$.
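A quick consistency check in the extremal case (a sketch using the standard torsion-function formula; the general form of $\sigma_B$ is derived in the paper): if $f \equiv c > 0$ on the ball $B = B_R \subset \mathbb{R}^n$, then $$u(x) = \frac{c\,(R^2 - |x|^2)}{2n}, \qquad \|u\|_\infty = \frac{cR^2}{2n},$$ since $-\Delta u = c$. Here $\|f\|_\infty = c$ and $\|f\|_1/\|f\|_\infty = |B|$, so the estimate holds with equality provided $$\sigma_B(|B|) = \frac{R^2}{2n} = \frac{1}{2n}\left(\frac{|B|}{\omega_n}\right)^{2/n},$$ where $\omega_n$ denotes the volume of the unit ball in $\mathbb{R}^n$.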

Alvaro M. Alhambra Jul 24 2017 16:10 UTC

This paper has just been updated and we thought it would be a good
idea to advertise it here. It was originally submitted a year ago, and
it has now been essentially rewritten, with two new authors added.

We have fixed some of the original results and now we:
-Show how some fundamental theorem

...(continued)
gae Jul 21 2017 17:58 UTC

Dear Marco, indeed the description in those two papers is very general because they treat both DV and CV channels. However, things become "easier" and more specific if you restrict things to DVs. In this regard, let me point you at this paper https://arxiv.org/pdf/1706.05384.pdf , in particular to

...(continued)
Marco Piani Jul 21 2017 16:33 UTC

Is it really the case for the general definition of teleportation-covariant channel given in https://arxiv.org/abs/1609.02160 or https://arxiv.org/abs/1510.08863 ? I understand that special classes of teleportation-covariant channels are considered there, where what you say holds (that is, for pairs

...(continued)
gae Jul 21 2017 15:51 UTC

If two channels are teleportation-covariant and between Hilbert spaces with the same dimension, then the correction unitaries are exactly the same. For instance, for any pair of Pauli channels (not just a Pauli and the identity), the corrections are Pauli operators.

Marco Piani Jul 21 2017 15:36 UTC

Is it more precisely that the result holds for any pair of *jointly* teleportation-covariant channels? The definition of teleportation-covariant channel (according to what I see in https://arxiv.org/abs/1609.02160 ) is such that the covariance can be achieved with a unitary at the output that depend

...(continued)
gae Jul 21 2017 14:01 UTC

Thx Steve for pointing out this paper too, which is relevant as well. Let me just remark that the PRL mentioned in my previous comment [PRL 118, 100502 (2017), https://arxiv.org/abs/1609.02160 ] finds the result for any pair of teleportation-covariant channels (not just between a Pauli channel and t

...(continued)
Steve Flammia Jul 21 2017 13:43 UTC

Actually, there is even earlier work that shows this result. In [arXiv:1109.6887][1], Magesan, Gambetta, and Emerson showed that for any Pauli channel the diamond distance to the identity is equal to the trace distance between the associated Choi states. They prefer to phrase their results in terms

...(continued)
Stefano Pirandola Jul 21 2017 09:43 UTC

This is very interesting. In my reading list!

gae Jul 21 2017 09:00 UTC

In relation with the discussion at page 21 of this paper. Consider depolarizing channels (including the trivial case of the identity channel) which are teleportation covariant as in the definition Eq. (9) of https://arxiv.org/abs/1510.08863 [Nature Communications 8, 15043 (2017)]. The diamond norm b

...(continued)
Chris Ferrie Jul 18 2017 02:32 UTC

Since arXiv now supports supplementary material, we did not host the source externally. The easiest way to view the code is using https://nbviewer.jupyter.org: https://nbviewer.jupyter.org/urls/arxiv.org/src/1707.05088v1/anc/specdens-est.ipynb.

By the way, if you are having difficulty navigating

...(continued)