Top arXiv papers

  • PDF
    The success of the various secondary operations involved in the production of particulate products depends on the preceding primary operation, such as crystallisation, delivering particles of a desired size and shape, because these two properties determine how the particles behave in the secondary processes. Particle size and shape are very sensitive to the conditions of the crystallisation process, so control of these processes is essential. This control requires software tools that can effectively and efficiently process the sensor data captured in situ. However, these tools have various strengths and limitations depending on the process conditions and the nature of the particles. In this work, we employ wet milling of crystalline particles as a case study of a process which produces effects typical of crystallisation processes. We study some of the strengths and limitations of our previously introduced tools for estimating the particle size distribution (PSD) and the aspect ratio from chord length distribution (CLD) and imaging data. We find situations where the CLD tool works better than the imaging tool and vice versa. In general, however, the two tools complement each other and can therefore be employed in a suitable multi-objective optimisation approach to estimate PSD and aspect ratio.
  • PDF
    We study the confinement-deconfinement phase transition in a holographic soft-wall QCD model. By solving the Einstein-Maxwell-scalar system analytically, we obtain the phase structure of the black hole backgrounds. We then introduce probe open strings in these backgrounds to investigate the confinement-deconfinement phase transition through different open-string configurations at various temperatures and chemical potentials. Furthermore, we study the Wilson loop by calculating the minimal surface of the probe open-string world-sheet and obtain the Cornell potential in the confinement phase analytically.
  • PDF
    To shed light on the time evolution of local star formation episodes in M33, we study the association between 566 Giant Molecular Clouds (GMCs), identified through the CO (J=2-1) IRAM-all-disk survey, and 630 Young Stellar Cluster Candidates (YSCCs), selected via Spitzer-24~$\mu$m emission. The spatial correlation between YSCCs and GMCs is extremely strong, with a typical separation of 17~pc, less than half the CO(2--1) beamsize, illustrating the remarkable physical link between the two populations. GMCs and YSCCs follow the HI filaments, except in the outermost regions where the survey finds fewer GMCs than YSCCs, likely due to undetected, low CO-luminosity clouds. The GMCs have masses between 2$\times 10^4$ and 2$\times 10^6$ M$_\odot$ and are classified according to different cloud evolutionary stages: inactive clouds are 32$\%$ of the total, classified clouds with embedded and exposed star formation are 16$\%$ and 52$\%$ of the total respectively. Across the regular southern spiral arm, inactive clouds are preferentially located in the inner part of the arm, possibly suggesting a triggering of star formation as the cloud crosses the arm. Some YSCCs are embedded star-forming sites while the majority have GALEX-UV and H$\alpha$ counterparts with estimated cluster masses and ages. The distribution of the non-embedded YSCC ages peaks around 5~Myrs with only a few being as old as 8--10~Myrs. These age estimates together with the number of GMCs in the various evolutionary stages lead us to conclude that 14~Myrs is a typical lifetime of a GMC in M33, prior to cloud dispersal. The inactive and embedded phases are short, lasting about 4 and 2~Myrs respectively. This underlines that embedded YSCCs rapidly break out from the clouds and become partially visible in H$\alpha$ or UV long before cloud dispersal.
  • PDF
    The quantum dynamics of a system of Rb atoms, modeled by a V-type three-level system interacting with intense probe and pump pulses, are studied. The time-delay-dependent transient-absorption spectrum of an intense probe pulse is thus predicted, when this is preceded or followed by a strong pump pulse. Numerical results are interpreted in terms of an analytical model, which allows us to quantify the oscillating features of the resulting transient-absorption spectra in terms of the atomic populations and phases generated by the intense pulses. Strong-field-induced phases and their influence on the resulting transient-absorption spectra are thereby investigated for different values of pump and probe intensities and frequencies, focusing on the atomic properties which are encoded in the absorption line shapes for positive and negative time delays.
  • PDF
    A symmetry-preserving treatment of a vector-vector contact interaction is used to study charmed heavy-light mesons. The contact interaction is a representation of nonperturbative kernels used in the Dyson-Schwinger and Bethe-Salpeter equations of QCD. The Dyson-Schwinger equation is solved for the $u,\,d,\,s$ and $c$ quark propagators, and the bound-state Bethe-Salpeter amplitudes, which respect spacetime-translation invariance and the Ward-Green-Takahashi identities associated with global symmetries of QCD, are obtained and used to calculate the masses and electroweak decay constants of the pseudoscalar $\pi,\,K$, $D$ and $D_s$ and vector $\rho$, $K^*$, $D^*$, and $D^*_s$ mesons. The predictions of the model are in good agreement with available experimental and lattice QCD data.
  • PDF
    In this paper, we develop a new first-order method for composite non-convex minimization problems with simple constraints and an inexact oracle. The objective function is given as a sum of a "hard", possibly non-convex part and a "simple" convex part. Informally speaking, oracle inexactness means that, for the "hard" part, at any point we can approximately calculate the value of the function and construct a quadratic function which approximately bounds it from above. We give several examples of such inexactness: smooth non-convex functions with an inexact Hölder-continuous gradient, and functions given by an auxiliary uniformly concave maximization problem which can be solved only approximately. For the introduced class of problems, we propose a gradient-type method which allows the use of a different proximal setup to adapt to the geometry of the feasible set, adaptively chooses the controlled oracle error, and allows for an inexact proximal mapping. We provide a convergence rate for our method in terms of the norm of the generalized gradient mapping and show that, in the case of an inexact Hölder-continuous gradient, our method is universal with respect to the Hölder parameters of the problem. Finally, in a particular case, we show that a small value of the norm of the generalized gradient mapping at a point means that a necessary condition of local minimum approximately holds at that point.
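    A minimal sketch of a gradient-type step of the kind described above, simplified to an exact first-order oracle and the Euclidean proximal setup; the backtracking on L stands in for the adaptive control of the oracle error, and the stopping rule uses the norm of the generalized gradient mapping. Function names and constants are illustrative, not the paper's.

      import numpy as np

      def adaptive_composite_gradient(f, grad_f, prox_h, x0, L0=1.0, tol=1e-6, max_iter=500):
          """Gradient-type step for min f(x) + h(x): backtrack on L until the
          quadratic model upper-bounds f at the trial point (the role played
          by the inexact upper oracle), then take a proximal step."""
          x, L = x0.astype(float), L0
          for _ in range(max_iter):
              g = grad_f(x)
              while True:
                  y = prox_h(x - g / L, 1.0 / L)            # proximal/projected step
                  model = f(x) + g @ (y - x) + 0.5 * L * np.sum((y - x) ** 2)
                  if f(y) <= model + 1e-12:                 # upper bound holds
                      break
                  L *= 2.0                                  # tighten the model
              G = L * (x - y)                               # generalized gradient mapping
              x, L = y, max(L / 2.0, 1e-12)                 # let L decrease again
              if np.linalg.norm(G) < tol:
                  break
          return x

      # Example: lasso-type composite problem, h = l1-norm, prox = soft-thresholding.
      A, b, lam = np.random.randn(30, 10), np.random.randn(30), 0.1
      x = adaptive_composite_gradient(
          f=lambda x: 0.5 * np.sum((A @ x - b) ** 2),
          grad_f=lambda x: A.T @ (A @ x - b),
          prox_h=lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0),
          x0=np.zeros(10),
      )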
  • PDF
    The impact of non-equilibrium effects on the dynamics of heavy-ion collisions is investigated by comparing a non-equilibrium transport approach, Parton-Hadron-String-Dynamics (PHSD), to a 2D+1 viscous hydrodynamical model, which is based on the assumption of local equilibrium and conservation laws. Starting the hydrodynamical model from the same non-equilibrium initial condition as in PHSD, and using an equivalent lQCD Equation-of-State (EoS) and the same transport coefficients, i.e. the shear viscosity $\eta$ and the bulk viscosity $\zeta$, in the hydrodynamical model, we compare the time evolution of the system in terms of energy density, Fourier-transformed energy density, spatial and momentum eccentricities, and ellipticity, in order to quantify the traces of non-equilibrium phenomena. In addition, we also investigate the role of initial pre-equilibrium flow on the hydrodynamical evolution and demonstrate its importance for final-state observables. We find that, due to non-equilibrium effects, the event-by-event transport calculations show large fluctuations in the collective properties, while ensemble-averaged observables are close to the hydrodynamical results.
  • PDF
    In this work, we investigate an application of a Nash equilibrium seeking algorithm in a social network. In a networked game, each player (user) takes an action in response to the other players' actions in order to decrease (increase) his cost (profit) in the network. We assume that the players' cost functions do not necessarily depend on the actions of all players, which better mimics the rules of standard social media. A communication graph is defined for the game, through which players can share their information with their neighbors only. We assume that a player's communication neighbors necessarily affect his cost function, while the reverse is not always true. In this game, the players are only aware of their own cost functions and actions. Thus, each of them maintains an estimate of the others' actions and shares it with his neighbors to update his action and estimates.
  • PDF
    We prove the almost sure invariance principle for Hölder continuous observables on Young towers with exponential tails with rate $o(n^{1/p})$ for every $p$. As a part of our method, we show that Young towers can always be constructed to have zero distortion.
  • PDF
    Resonant Raman scattering is investigated in monolayer WS$_2$ at low temperature with the aid of an unconventional spectroscopy technique, $i.e.$, Raman scattering excitation (RSE). The RSE spectrum is constructed by sweeping the excitation energy while the detection energy is fixed in resonance with excitonic transitions related to neutral and/or charged excitons. We demonstrate that the shape of the RSE spectrum strongly depends on the selected detection energy. The out-going resonance with the neutral exciton leads to an extremely rich RSE spectrum displaying several Raman scattering features not reported so far, while no clear effect on the associated background photoluminescence is observed. Instead, a strong enhancement of the emission due to the negatively charged exciton is apparent when the out-going photons resonate with this exciton. The presented results show that RSE spectroscopy can be a useful technique to study electron-phonon interactions in thin layers of transition metal dichalcogenides.
  • PDF
    This paper deals with the theory of collisions between two ultracold particles with a special focus on molecules. It describes the general features of the scattering theory of two particles with internal structure, using a time-independent quantum formalism. It starts from the Schrödinger equation and introduces the experimental observables such as the differential or integral cross sections, and rate coefficients. Using a partial-wave expansion of the scattering wavefunction, the radial motion of the collision is described through a linear system of coupled equations, which is solved numerically. Using a matching procedure of the scattering wavefunction with its asymptotic form, the observables such as cross sections and rate coefficients are obtained from the extraction of the reactance, scattering and transition matrices. The example of the collision of two dipolar molecules in the presence of an electric field is presented, showing how dipolar interactions and collisions can be controlled.
  • PDF
    We study topological defects in anisotropic ferromagnets with competing interactions near the Lifshitz point. We show that skyrmions and bi-merons are stable in a large part of the phase diagram. We calculate skyrmion-skyrmion and meron-meron interactions and show that skyrmions attract each other and form ring-shaped bound states in a zero magnetic field. At the Lifshitz point merons carrying a fractional topological charge become deconfined. These results imply that unusual topological excitations may exist in weakly frustrated magnets with conventional crystal lattices.
  • PDF
    In general, having positive lower density and being piecewise syndetic are incomparable properties for subsets of $\mathbb{N}$. However, we show that for any $n\geq 1$, the $n$-fold product of a frequently hypercyclic operator $T$ on a separable $F$-space $X$ is piecewise syndetic hypercyclic. As a consequence, we prove that for any frequently hypercyclic operator $T$, any frequently hypercyclic vector $x$ and any non-empty open set $U$ of $X$, the recurrence set $\{n\geq 0: T^n x\in U\}$ turns out to have positive upper density and positive upper Banach density, and these two densities differ. Finally, we show that reiteratively hypercyclic weighted backward shifts on Fréchet sequence spaces are very close to syndetic transitive ones.
  • PDF
    Kademlia is a decentralized overlay network, up to now mainly used for highly scalable file sharing applications. Due to its distributed nature, it is free from single points of failure. Communication can happen over redundant network paths, which makes information distribution with Kademlia resilient against failing nodes and attacks. This makes it applicable to more scenarios than Internet file sharing. In this paper, we simulate Kademlia networks with varying parameters and analyze the number of node-disjoint paths in the network, and thereby the network connectivity. A high network connectivity is required for communication and system-wide adaptation even when some nodes or communication channels fail or get compromised by an attacker. With our results, we show the influence of these parameters on the connectivity and, therefore, the resilience against failing nodes and communication channels.
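    As a toy illustration of the quantity being measured (not the paper's simulator): by Menger's theorem, the number of node-disjoint s-t paths equals the s-t node connectivity, which networkx computes directly. A random regular graph stands in for the Kademlia overlay here, with the degree loosely playing the role of the bucket size k.

      import random
      import networkx as nx

      def disjoint_path_stats(n_nodes=128, degree=8, samples=50, seed=1):
          """Minimum and mean number of node-disjoint paths between sampled
          node pairs of a stand-in overlay graph."""
          G = nx.random_regular_graph(degree, n_nodes, seed=seed)
          rng = random.Random(seed)
          counts = []
          for _ in range(samples):
              s, t = rng.sample(list(G.nodes), 2)
              counts.append(nx.node_connectivity(G, s, t))  # node-disjoint s-t paths
          return min(counts), sum(counts) / len(counts)

      worst, mean = disjoint_path_stats()
      print(f"min disjoint paths: {worst}, mean: {mean:.1f}")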
  • PDF
    The Dold–Thom theorem states that for a sufficiently nice topological space, M, there is an isomorphism between the homotopy groups of the infinite symmetric product of M and the homology groups of M itself. The crux of most known proofs of this is to check that a certain map is a quasi-fibration. It is our goal to present a more direct proof of the Dold–Thom theorem which does not appeal to any such fact. The heart of our proof lies in the identification of the infinite symmetric product as an instance of factorization homology.
  • PDF
    We present an experiment in which a horizontal quasi-2D granular system with a fixed neighbor network is cyclically compressed and decompressed over 1000 cycles. We remove basal friction by floating the particles on a thin air cushion, so that particles only interact in-plane. As expected for a granular system, the applied load is not distributed uniformly, but is instead concentrated in force chains which form a network throughout the system. To visualize the structure of these networks, we use particles made from photoelastic material. The experimental setup and a new data-processing pipeline allow us to map out the evolution of the force network over the cyclic compressions. We characterize several statistical properties of the packing, including the probability density function of the contact force, and compare them with theoretical and numerical predictions from force network ensemble theory.
  • PDF
    We present a method to calculate, without making assumptions about the local dark matter velocity distribution, the maximal and minimal number of signal events in a direct detection experiment given a set of constraints from other direct detection experiments and/or neutrino telescopes. The method also allows one to determine the velocity distribution that optimizes the signal rates. We illustrate our method with three concrete applications: i) deriving a halo-independent upper limit on the cross section from a set of null results, ii) confronting, in a halo-independent way, a detection claim with a set of null results, and iii) assessing, in a halo-independent manner, the prospects for detection in a future experiment given a set of current null results.
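    After discretizing the velocity distribution into streams, the extremization described above becomes a linear program; the sketch below illustrates that idea with a made-up response matrix and limits (the physics enters through these coefficients, which are assumptions here).

      import numpy as np
      from scipy.optimize import linprog

      n_streams = 40
      rng = np.random.default_rng(0)
      response = rng.uniform(0.0, 5.0, size=(3, n_streams))  # events per stream, null experiments
      limits = np.array([10.0, 7.0, 12.0])                   # their upper limits on events
      target = rng.uniform(0.0, 5.0, size=n_streams)         # events per stream, future experiment

      # Maximal signal: maximize target @ f subject to response @ f <= limits,
      # f >= 0 and sum(f) = 1 (a normalized velocity distribution).
      res = linprog(-target, A_ub=response, b_ub=limits,
                    A_eq=np.ones((1, n_streams)), b_eq=[1.0], bounds=(0, None))
      print("maximal number of signal events:", -res.fun)    # res.x: optimizing f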
  • PDF
    We demonstrate a fiber source with the best performance from an ultrafast fiber oscillator to date. The ring-cavity Mamyshev oscillator produces 50-nJ and 40-fs pulses. The peak power is an order of magnitude higher than that of previous lasers with similar fiber mode area. This performance is achieved by designing the oscillator to support parabolic pulse formation which enables the management of unprecedented nonlinear phase shifts. Experimental results are limited by available pump power. Numerical simulations reveal key aspects of the pulse evolution, and realistically suggest that (after external compression) peak powers that approach 10 MW are possible from ordinary single-mode fiber. The combination of practical features such as environmental stability, established previously, with the performance described here make the Mamyshev oscillator extremely attractive for applications.
  • PDF
    In astrophysics, we often aim to estimate one or more parameters for each member object in a population and study the distribution of the fitted parameters across the population. In this paper, we develop novel methods that allow us to take advantage of existing software designed for such case-by-case analyses to simultaneously fit the parameters of the individual objects and the parameters that quantify their distribution across the population. Our methods are based on Bayesian hierarchical modelling, which is known to produce parameter estimators for the individual objects that are on average closer to their true values than estimators based on case-by-case analyses. We verify this in the context of estimating ages of Galactic halo white dwarfs (WDs) via a series of simulation studies. Finally, we deploy our new techniques on optical and near-infrared photometry of ten candidate halo WDs to obtain estimates of their ages along with an estimate of the mean age of Galactic halo WDs of [11.25, 12.96] Gyr. Although this sample is small, our technique lays the groundwork for large-scale studies using data from the Gaia mission.
  • PDF
    Bayesian shrinkage methods have generated a lot of recent interest as tools for high-dimensional regression and model selection. These methods naturally facilitate tractable uncertainty quantification and incorporation of prior information. A common feature of these models, including the Bayesian lasso, global-local shrinkage priors, and spike-and-slab priors, is that the corresponding priors on the regression coefficients can be expressed as a scale mixture of normals. While the three-step Gibbs sampler used to sample from the often intractable associated posterior density has been shown to be geometrically ergodic for several of these models (Khare and Hobert, 2013; Pal and Khare, 2014), it has been demonstrated recently that convergence of this sampler can still be quite slow in modern high-dimensional settings despite this apparent theoretical safeguard. We propose a new method to draw from the same posterior via a tractable two-step blocked Gibbs sampler. We demonstrate that our proposed two-step blocked sampler exhibits vastly superior convergence behavior compared to the original three-step sampler in high-dimensional regimes on both real and simulated data. We also provide a detailed theoretical underpinning to the new method in the context of the Bayesian lasso. First, we derive explicit upper bounds for the (geometric) rate of convergence. Furthermore, we demonstrate theoretically that while the original Bayesian lasso chain is not Hilbert-Schmidt, the proposed chain is trace class (and hence Hilbert-Schmidt). The trace class property has useful theoretical and practical implications. It implies that the corresponding Markov operator is compact, and its eigenvalues are summable. It also facilitates a rigorous comparison of the two-step blocked chain with "sandwich" algorithms which aim to improve performance of the two-step chain by inserting an inexpensive extra step.
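    A sketch of the blocking idea for the Bayesian lasso: given tau, draw sigma^2 with beta integrated out analytically, then beta given sigma^2, and finally refresh tau. The shapes and scales below follow the usual Park-Casella parameterisation with centred y and a flat prior on sigma^2; consult the paper for the exact chain it analyses.

      import numpy as np

      def blocked_gibbs_lasso(X, y, lam=1.0, n_iter=2000, seed=0):
          rng = np.random.default_rng(seed)
          n, p = X.shape
          tau2 = np.ones(p)
          XtX, Xty = X.T @ X, X.T @ y
          draws = np.empty((n_iter, p))
          for it in range(n_iter):
              # Step 1a: sigma2 | tau2, y (beta marginalized out analytically)
              A_inv = np.linalg.inv(XtX + np.diag(1.0 / tau2))
              scale = 0.5 * (y @ y - Xty @ A_inv @ Xty)
              sigma2 = scale / rng.gamma((n - 1) / 2.0)      # Inverse-Gamma draw
              # Step 1b: beta | sigma2, tau2, y
              beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
              # Step 2: 1/tau2_j | beta, sigma2 is Inverse-Gaussian
              mu = np.sqrt(lam**2 * sigma2 / beta**2)
              tau2 = 1.0 / rng.wald(mu, lam**2)
              draws[it] = beta
          return draws

      X = np.random.default_rng(1).standard_normal((50, 5))
      y = X @ np.array([2.0, 0.0, 0.0, -1.5, 0.0]) + 0.5 * np.random.default_rng(2).standard_normal(50)
      print(blocked_gibbs_lasso(X, y - y.mean(), n_iter=500)[100:].mean(axis=0))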
  • PDF
    We derive the trace and diffeomorphism anomalies of the Schrödinger field on the Newton-Cartan background in $2+1$ dimensions using Fujikawa's approach. The resulting trace anomaly contains terms which have the form of the $1+1$ and $3+1$ dimensional relativistic anomalies. We further determine the coefficients in this case and demonstrate that gravitational anomalies for this theory always arise in odd dimensions.
  • PDF
    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform PSF photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with a real observation of the same cluster carried out with a 2.2-meter optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.
  • PDF
    The observed evolution of the broad-band spectral energy distribution (SED) in the NS X-ray Nova Aql X-1 during the rise phase of a bright FRED-type outburst in 2013 can be understood in the framework of thermal emission from a non-stationary accretion disc whose radial temperature distribution transforms from a single-temperature blackbody emitting ring into the multi-colour irradiated accretion disc. The SED evolution during the hard-to-soft X-ray state transition looks curious, as it cannot be reproduced by the standard disc irradiation model with a single irradiation parameter for the NUV, Optical and NIR spectral bands. The NIR (NUV) band is correlated with soft (hard) X-ray flux changes during the state transition interval, respectively. In our interpretation, at the moment of the X-ray state transition the UV-emitting parts of the accretion disc are screened from direct X-ray illumination from the central source and are heated primarily by hard X-rays (E > 10 keV) scattered in the hot corona or wind possibly formed above the optically-thick outer accretion flow; the outer edge of the multi-colour disc, which emits in the Optical-NIR, can be heated primarily by direct X-ray illumination. We point out that future simultaneous multi-wavelength observations of X-ray Nova systems during the fast X-ray state transition interval are of great importance, as they can serve as an 'X-ray tomograph' to study physical conditions in the outer regions of the accretion flow. This can provide an effective tool to directly test the energy-dependent X-ray heating efficiency, vertical structure and accretion flow geometry in transient LMXBs.
  • PDF
    Many methods have been used to study decoherence in nanostructures. The Tsallis, Shannon and Gauss entropies have each been used separately to study decoherence; in this paper, we compare the results of these entropies in nanostructures. The linear combination operator and the unitary transformation were used to derive the spectrum of the magnetopolaron that strongly interacts with the LO phonons in the presence of an electric field, in the pseudoharmonic and delta quantum dots. Numerical results for the pseudoharmonic quantum dot revealed that: (i) the amplitude of the Gauss entropy is greater than that of the Tsallis entropy, which in turn is greater than that of the Shannon entropy; the Tsallis entropy is thus less significant in nanostructures than the Shannon and Gauss entropies; (ii) with an increase of the zero point, the Gauss entropy dominates the Shannon entropy, which in turn dominates the Tsallis entropy; this suggests that in nanostructures the Gauss entropy is more suitable for evaluating the average information in the system. For the delta quantum dot it was observed that (iii) when the Gauss entropy is considered, much information about the system is missed; the collapse-revival phenomenon in the Shannon entropy was observed in RbCl and GaAs delta quantum dots with enhancement of the delta parameter; with an increase in this parameter, the system in the case of CsI evolved coherently; with the Shannon and Tsallis entropies, information in the system is exchanged faster and coherently; (iv) the Shannon entropy is more significant because its amplitude outweighs the others as the delta dimension length is enhanced. The Tsallis entropy evolves as a wave bundle, oscillating periodically with an oscillation period that increases as the delta dimension length is improved.
  • PDF
    Many state-of-the-art methods have been proposed for infrared small target detection. They work well on images with homogeneous backgrounds and high-contrast targets. However, when facing highly heterogeneous backgrounds, they do not perform very well, mainly due to: 1) the existence of strong edges and other interfering components, and 2) not fully utilizing the priors. Inspired by this, we propose a novel method to exploit both local and non-local priors simultaneously. Firstly, we employ a new infrared patch-tensor (IPT) model to represent the image and preserve its spatial correlations. Exploiting the target sparse prior and the background non-local self-correlation prior, the target-background separation is modeled as a robust low-rank tensor recovery problem. Moreover, with the help of the structure tensor and the reweighting idea, we design an entry-wise, local-structure-adaptive and sparsity-enhancing weight to replace the globally constant weighting parameter. The decomposition is achieved via element-wise reweighted higher-order robust principal component analysis with an additional convergence condition suited to the practical situation of target detection. Extensive experiments demonstrate that our model outperforms other state-of-the-art methods, in particular for images with very dim targets and heavy clutter.
  • PDF
    We discuss the feasibility of detecting spin polarized electronic transitions with a vortex filter. This approach does not rely on the principal condition of the standard energy loss magnetic chiral dichroism (EMCD) technique, the precise alignment of the crystal, and thus paves the way for the application of EMCD to new classes of materials and problems. The dichroic signal strength in the L$_{2,3}$-edge of ferromagnetic cobalt is estimated on theoretical grounds. It is shown that magnetic dichroism can, in principle, be detected. However, as an experimental test shows, count rates are currently too low under standard conditions.
  • PDF
    We show that quandle coverings in the sense of Eisermann form a (regular epi)-reflective subcategory of the category of surjective quandle homomorphisms, both by using arguments coming from categorical Galois theory and by constructing concretely a centralization congruence. Moreover, we show that a similar result holds for normal quandle extensions.
  • PDF
    In this paper, we consider an equivariant Hopf bifurcation of relative periodic solutions from relative equilibria in systems of functional differential equations respecting $\Gamma \times S^1$-spatial symmetries. The existence of branches of relative periodic solutions together with their symmetric classification is established using the equivariant twisted $\Gamma\times S^1$-degree with one free parameter. As a case study, we consider a delay differential model of coupled identical passively mode-locked semiconductor lasers with the dihedral symmetry group $\Gamma=D_8$.
  • PDF
    We study a simple one-loop induced neutrino mass model that contains both bosonic and fermionic dark matter candidates and has the capacity to explain the muon $g-2$ anomaly. We perform a comprehensive analysis by taking into account the relevant constraints of charged lepton flavor violation, electric dipole moments, and neutrino oscillation data. We examine the constraints from lepton flavor-changing $Z$ boson decays at one-loop level, particularly when the involved couplings contribute to the muon $g-2$. It is found that $\text{BR}(Z\to \mu\tau)\simeq (10^{-7}$ - $10^{-6})$ while $\text{BR}(\tau\to\mu\gamma)\lesssim 10^{-11}$ in the fermionic dark matter scenario. The former can be probed by precision measurements of the $Z$ boson at future lepton colliders.
  • PDF
    We use the Sloan Digital Sky Survey Data Release 12, which is the largest available white dwarf catalog to date, to study the evolution of the kinematical properties of the population of white dwarfs in the Galactic disc. We derive masses, ages, photometric distances and radial velocities for all white dwarfs with hydrogen-rich atmospheres. For those stars for which proper motions from the USNO-B1 catalog are available the true three-dimensional components of the stellar space velocity are obtained. This subset of the original sample comprises 20,247 objects, making it the largest sample of white dwarfs with measured three-dimensional velocities. Furthermore, the volume probed by our sample is large, allowing us to obtain relevant kinematical information. In particular, our sample extends from a Galactocentric radial distance $R_{\rm G}=7.8$~kpc to 9.3~kpc, and vertical distances from the Galactic plane ranging from $Z=-0.5$~kpc to 0.5~kpc. We examine the mean components of the stellar three-dimensional velocities, as well as their dispersions with respect to the Galactocentric and vertical distances. We confirm the existence of a mean Galactocentric radial velocity gradient, $\partial\langle V_{\rm R}\rangle/\partial R_{\rm G}=-3\pm5$~km~s$^{-1}$~kpc$^{-1}$. We also confirm North-South differences in $\langle V_{\rm z}\rangle$. Specifically, we find that white dwarfs with $Z>0$ (in the North Galactic hemisphere) have $\langle V_{\rm z}\rangle<0$, while the reverse is true for white dwarfs with $Z<0$. The age-velocity dispersion relation derived from the present sample indicates that the Galactic population of white dwarfs may have experienced an additional source of heating, which adds to the secular evolution of the Galactic disc.
  • PDF
    We estimate the maximum-order complexity of a binary sequence in terms of its correlation measures. Roughly speaking, we show that any sequence with small correlation measure up to a sufficiently large order $k$ cannot have very small maximum-order complexity.
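    For concreteness, the maximum-order complexity under its standard definition is the smallest k for which some (possibly nonlinear) order-k feedback function reproduces the sequence, i.e. no length-k window is followed by two different symbols; a brute-force sketch:

      def max_order_complexity(s):
          """Smallest k such that every length-k window of s has a unique successor."""
          n = len(s)
          for k in range(1, n):
              succ, ok = {}, True
              for i in range(n - k):
                  w = tuple(s[i:i + k])
                  if succ.setdefault(w, s[i + k]) != s[i + k]:
                      ok = False
                      break
              if ok:
                  return k
          return n

      print(max_order_complexity([0, 1, 1, 0, 1, 0, 0, 1]))  # prefix of Thue-Morse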
  • PDF
    The passage of energetic ions through tissue initiates a series of physico-chemical events which lead to biodamage. The study of this scenario using a multiscale approach brought about the theoretical prediction of shock waves initiated by energy deposited within ion tracks. Several aspects of these waves are explored in this letter. The radial dose that sets their initial conditions is calculated using diffusion equations extended to include the effect of energetic $\delta$-electrons. The resulting shock waves are simulated by means of reactive classical molecular dynamics. The simulations predict a characteristic distribution of reactive species which may contribute significantly to biodamage, and also suggest experimental means to detect the shock waves.
  • PDF
    We consider two different conformal field theories with central charge c=7/10. One is the diagonal invariant minimal model, in which all fields have integer spins; the other is the local fermionic theory with superconformal symmetry, in which fields can have half-integer spin. We construct new conformal (but not topological or factorised) defects in the minimal model. We do this by first constructing defects in the fermionic model as boundary conditions in a fermionic theory of central charge c=7/5, using the folding trick as first proposed by Gang and Yamaguchi. We then act on these with interface defects to find the new conformal defects. As part of the construction, we find the topological defects in the fermionic theory and the interfaces between the fermionic theory and the minimal model. We also consider the simpler case of defects in the theory of a single free fermion, and interface defects between the Ising model and a single fermion, as a prelude to the calculations in the tricritical Ising model (TCIM).
  • PDF
    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice the high-dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting, and thus only non-standard, computationally intensive procedures based on simulating the marginal likelihood have been proposed so far. In this paper, we describe an efficient method of implementation by demonstrating how the high-dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high-dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and for when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
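    A toy version of the computational trick, under an assumed latent Gaussian structure: the high-dimensional integral over a patient's stochastic process reduces to an orthant probability of a multivariate normal, so a single CDF evaluation replaces nested quadrature. The thresholds and the Brownian-motion covariance below are invented for illustration.

      import numpy as np
      from scipy.stats import multivariate_normal

      times = np.array([0.5, 1.0, 2.0, 3.5])         # a patient's visit times
      a = np.array([0.2, -0.1, 0.4, 0.0])            # visit-specific thresholds
      cov = np.minimum.outer(times, times)           # Brownian-motion covariance
      # P(W_{t1} <= a_1, ..., W_{t4} <= a_4) in one call, no nested quadrature:
      p = multivariate_normal(mean=np.zeros(4), cov=cov).cdf(a)
      print(f"orthant probability: {p:.4f}")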
  • PDF
    In this Technical Design Report (TDR) we describe the LZ detector to be built at the Sanford Underground Research Facility (SURF). The LZ dark matter experiment is designed to achieve sensitivity to a WIMP-nucleon spin-independent cross section of $3\times10^{-48}$ square centimeters.
  • PDF
    We discuss a new method for unveiling the possible blazar AGN nature among the numerous population of Unassociated Gamma-ray Sources (UGS) in the Fermi catalogues. Our tool relies on the positional correspondence of the Fermi object with X-ray sources (mostly from Swift-XRT), correlated with other radio, IR and optical data in the field. We built a set of Spectral Energy Distribution (SED) templates representative of the various blazar classes, and we quantitatively compared them to the observed multi-wavelength flux density data for all Swift-XRT sources found within the Fermi error-box, taking advantage of some well-recognised regularities in the broad-band spectral properties of these objects. We tested the procedure by comparison with a few well-known blazars, and tested the chance of false positive recognition of UGS sources against known pulsars and other Galactic and extragalactic sources. Based on our spectral recognition tool, we find blazar candidate counterparts for 14 of the 183 2FGL UGSs selected at high galactic latitudes. Our tool also provides rough estimates of the redshift for the candidate blazars. In the few cases in which this has been possible (i.e. when the counterpart was a SDSS object), we verified that our estimate is consistent with the measured redshift. The estimated redshifts of the proposed UGS counterparts are larger, on average, than those of known Fermi blazars, a fact that might explain the lack of previous association or identification in published catalogues.
  • PDF
    Chiral domain walls of Néel type emerge in heterostructures that include heavy metal (HM) and ferromagnetic metal (FM) layers, owing to the Dzyaloshinskii-Moriya (DM) interaction at the HM/FM interface. In developing storage-class memories based on the current-induced motion of chiral domain walls, it remains to be seen how densely such domain walls can be packed together. Here we show that a universal short-range repulsion that scales with the strength of the DM interaction exists among chiral domain walls. The distance between two walls can be reduced with the application of an out-of-plane field, allowing the formation of coupled domain walls. Surprisingly, the current-driven velocity of such coupled walls is independent of the out-of-plane field, enabling manipulation of significantly compressed coupled domain walls using current pulses. Moreover, we find that a single current pulse with optimum amplitude can create a large number of closely spaced domain walls. These features allow current-induced generation and synchronous motion of highly packed chiral domain walls, a key feature essential for developing domain-wall-based storage devices.
  • PDF
    Hawkes processes capture self- and mutual-excitation between events in time-series data, where the arrival of one event makes future ones more likely to happen. Identification of the temporal covariance kernel can reveal the underlying structure and improve prediction of future events. In this paper, we present a new framework to represent time-series events with a composition of self-triggering kernels of Hawkes processes. That is, the input time-series events are decomposed into multiple Hawkes processes with heterogeneous kernels. Our automatic decomposition procedure is composed of three main steps: (1) discretized kernel estimation through the frequency-domain inversion equation associated with the covariance density, (2) greedy kernel decomposition through four base kernels and their combinations (addition and multiplication), and (3) automated report generation. We demonstrate that the new automatic decomposition procedure predicts future events better than the existing framework on real-world data.
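    A minimal sketch of the composition step, with illustrative base kernels (not necessarily the paper's four): candidate triggering kernels are built by adding and multiplying bases and plugged into the Hawkes conditional intensity.

      import numpy as np

      bases = {
          "exp":      lambda t, a=1.0: np.exp(-a * t),
          "powerlaw": lambda t, c=1.0, b=2.0: (t + c) ** (-b),
          "cosine":   lambda t, w=2.0: 0.5 * (1 + np.cos(w * t)) * (t < np.pi / w),
          "const":    lambda t, h=0.1, T=1.0: h * (t < T),
      }

      def add(k1, k2): return lambda t: k1(t) + k2(t)
      def mul(k1, k2): return lambda t: k1(t) * k2(t)

      def intensity(t, events, mu, kernel):
          """Hawkes conditional intensity: lambda(t) = mu + sum_{t_i < t} k(t - t_i)."""
          past = events[events < t]
          return mu + kernel(t - past).sum()

      events = np.array([0.3, 0.9, 1.1, 2.4])
      k = add(bases["exp"], mul(bases["const"], bases["cosine"]))  # composed kernel
      print(intensity(3.0, events, mu=0.2, kernel=k))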
  • PDF
    Assessing and improving the quality of data are fundamental challenges for data-intensive systems that have given rise to applications targeting transformation and cleaning of data. However, while schema design, data cleaning, and data migration are now reasonably well understood in isolation, not much attention has been given to the interplay between the tools addressing issues in these areas. We focus on the problem of determining whether the available data-processing procedures can be used together to bring about the desired quality of the given data. For instance, consider an organization introducing new data-analysis tasks. Depending on the tasks, it may be a priority to determine whether the data can be processed and transformed using the available data-processing tools to satisfy certain properties or quality assurances needed for the success of the task. Here, while the organization may control some of its tools, some other tools may be external or proprietary, with only basic information available on how they process data. The problem is then, how to decide which tools to apply, and in which order, to make the data ready for the new tasks? Toward addressing this problem, we develop a new framework that abstracts data-processing tools as black-box procedures with only some of the properties exposed, such as the applicability requirements, the parts of the data that the procedure modifies, and the conditions that the data satisfy once the procedure has been applied. We show how common tasks such as data cleaning and data migration are encapsulated into our framework and, as a proof of concept, we study basic properties of the framework for the case of procedures described by standard relational constraints. While reasoning in this framework may be computationally infeasible in general, we show that there exist well-behaved special cases with potential practical applications.
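    A minimal sketch of the black-box abstraction under our own (hypothetical) naming; the paper formalizes procedures via relational constraints, but the interface is the same in spirit: each tool exposes only an applicability test, the properties it may invalidate, and the properties it guarantees, and planning searches over tool orderings.

      from dataclasses import dataclass
      from itertools import permutations
      from typing import Callable

      @dataclass(frozen=True)
      class Procedure:
          name: str
          applicable: Callable[[frozenset], bool]  # precondition on known properties
          modifies: frozenset                      # properties it may invalidate
          ensures: frozenset                       # properties it establishes

      def run(props: frozenset, proc: Procedure) -> frozenset:
          return (props - proc.modifies) | proc.ensures

      def find_plan(procs, start: frozenset, goal: frozenset):
          """Brute-force search for an order of tools reaching the goal properties."""
          for order in permutations(procs):
              props = start
              for p in order:
                  if p.applicable(props):
                      props = run(props, p)
              if goal <= props:
                  return [p.name for p in order]
          return None

      dedup = Procedure("dedup", lambda s: True, frozenset(), frozenset({"unique_keys"}))
      migrate = Procedure("migrate", lambda s: "unique_keys" in s,
                          frozenset(), frozenset({"target_schema"}))
      print(find_plan([migrate, dedup], frozenset(), frozenset({"target_schema"})))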
  • PDF
    We revisit the problem of characterizing the eigenvalue distribution of the Dirichlet-Laplacian on bounded open sets $\Omega\subset\mathbb{R}$ with fractal boundaries. It is well known from the results of Lapidus and Pomerance [LapPo1] that the asymptotic second term of the eigenvalue counting function can be described in terms of the Minkowski content of the boundary of $\Omega$, provided it exists. He and Lapidus [HeLap2] discussed a remarkable extension of this characterization to sets $\Omega$ with boundaries that are not necessarily Minkowski measurable. They employed so-called generalized Minkowski contents given in terms of gauge functions more general than the usual power functions. The class of valid gauge functions in their theory is characterized by some technical conditions, the geometric meaning and necessity of which is not obvious. Therefore, it is not completely clear how general the approach is and which sets $\Omega$ are covered. Here we revisit these results and put them in the context of regularly varying functions. Using Karamata theory, it is possible to get rid of most of the technical conditions and simplify the proofs given by He and Lapidus, revealing thus even more of the beauty of their results. Further simplifications arise from characterization results for Minkowski contents obtained in [RW13]. We hope our new point of view on these spectral problems will initiate some further investigations of this beautiful theory.
  • PDF
    We investigate the formation of circumstellar disks and outflows subsequent to the collapse of molecular cloud cores with magnetic fields and turbulence. Numerical simulations are performed using adaptive mesh refinement to follow the evolution up to $\sim 1000$~yr after the formation of a protostar. In the simulations, circumstellar disks are formed around the protostars; those in magnetized models are considerably smaller than those in nonmagnetized models, but their size increases with time. Models with a stronger magnetic field tend to produce smaller disks. During the evolution in the magnetized models, the mass ratio of the disk to the protostar remains approximately constant at $\sim 1-10$\%. The circumstellar disks are aligned according to their angular momentum, while the outflows accelerate along the magnetic field on the $10-100$~au scale; this produces disks that are misaligned with the outflows. The outflows are classified into two types: a magneto-centrifugal wind and a spiral flow. In the latter, because of the geometry, the axis of rotation is misaligned with the magnetic field. The magnetic field has an internal structure in the cloud cores, which also causes misalignment between the outflows and the magnetic field on the scale of the cloud core. The distribution of the angular momentum vectors in a core also has a non-monotonic internal structure. This should create time-dependent accretion of angular momentum onto the circumstellar disk. Therefore, the circumstellar disks are expected to change their orientation as well as their size over the long-term evolution.
  • PDF
    El Nino is probably the most influential climate phenomenon on interannual time scales. It affects the global climate system and is associated with natural disasters and serious consequences in many aspects of human life. However, forecasts of the onset and, in particular, the magnitude of El Nino are still not accurate at lead times of more than half a year. Here, we introduce a new forecasting index based on network links representing the similarity of low-frequency temporal temperature anomaly variations between different sites in the El Nino 3.4 region. We find that significant upward trends and peaks in this index forecast with high accuracy both the onset and the magnitude of El Nino approximately one year ahead. The forecasting procedure we developed improves in particular the prediction of the magnitude of El Nino, and is validated on several datasets, some more than a century long.
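    An illustrative computation of a cross-correlation link strength of the kind used in climate networks, on synthetic data; the paper's index is built from low-frequency anomaly similarities between sites in the Nino 3.4 region, so the normalization and lag choices below are assumptions.

      import numpy as np

      def link_strength(x, y, max_lag=200):
          """Peak of the lagged cross-correlation, normalized by its mean and
          standard deviation across lags (a common climate-network choice)."""
          x = (x - x.mean()) / x.std()
          y = (y - y.mean()) / y.std()
          cc = np.array([np.mean(x[max_lag + l:len(x) - max_lag + l] *
                                 y[max_lag:len(y) - max_lag])
                         for l in range(-max_lag, max_lag + 1)])
          return (np.abs(cc).max() - np.abs(cc).mean()) / np.abs(cc).std()

      rng = np.random.default_rng(0)
      anomalies = rng.standard_normal((5, 1000))   # series at 5 hypothetical grid nodes
      index = np.mean([link_strength(anomalies[i], anomalies[j])
                       for i in range(5) for j in range(i + 1, 5)])
      print(f"network index: {index:.2f}")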
  • PDF
    In this paper, we introduce a new heterogeneous fast multipole method (H-FMM) for the 2-D Helmholtz equation in layered media. To illustrate the main algorithmic ideas, we focus on the case of two and three layers in this work. The key compression step in the H-FMM is based on the fact that the multipole expansion for sources of the free-space Green's function can also be used to compress the far field of sources of the layered-media or domain Green's function, and a similar result holds for the translation operators for the multipole and local expansions. A rigorous error analysis is given via an image representation of the Sommerfeld spectral form of the domain Green's function. As a result, in the H-FMM algorithm, both the "multipole-to-multipole" and "local-to-local" translation operators are the same as those in the free-space case, allowing easy adaptation of existing free-space FMMs. All the spatially variant information of the domain Green's function is collected into the "multipole-to-local" translations, and the FMM therefore becomes "heterogeneous". The compressed representation further reduces the cost of evaluating the domain Green's function when computing the local direct interactions. Preliminary numerical experiments are presented to demonstrate the efficiency and accuracy of the algorithm, with much improved performance over some existing methods for inhomogeneous media. Furthermore, due to the equivalence between the complex-line image representation and the Sommerfeld integral representation of the layered-media Green's function, the new algorithm can be generalized to multi-layered media with minor modifications; details of the compression formulas, translation operators, and bookkeeping strategies will be addressed in a subsequent paper.
  • PDF
    This paper continues the previous studies in two papers of Huang-Yin [HY3-4] on the flattening problem of a CR singular point of real codimension two sitting in a submanifold in ${\mathbb C}^{n+1}$ with $n+1\ge 3$, whose CR points are non-minimal. Partially based on the geometric approach initiated in [HY3] and a formal theory approach used in [HY4], we are able to provide a very general flattening theorem for a non-degenerate CR singular point. As an application, we provide a solution to the local complex Plateau problem and obtain the analyticity of the local hull of holomorphy near a real analytic definite CR singular point in a general setting.
  • PDF
    A physically plausible Lemaître-Tolman-Bondi collapse in the marginally bound case is considered. By "physically plausible" we mean that the corresponding metric is ${\cal C}^1$ matched at the collapsing star surface and, further, that its intrinsic energy is, as it should be, stationary and finite. It is proved for this Lemaître-Tolman-Bondi collapse that, for some parameter values, its intrinsic central singularity is globally naked, thus violating the cosmic censorship conjecture, with, for each direction, one photon, or perhaps a pencil of photons, leaving the singularity and reaching null infinity. Our result is discussed in relation to some other cases in the current literature on the subject in which some of the central singularities are globally naked too.
  • PDF
    The Atacama Large Millimeter/submilimeter Array (ALMA) recently revealed a set of nearly concentric gaps in the protoplanetary disk surrounding the young star HL Tau. If these are carved by forming gas giants, this provides the first set of orbital initial conditions for planets as they emerge from their birth disks. Using N-body integrations, we have followed the evolution of the system for 5 Gyr to explore the possible outcomes. We find that HL Tau initial conditions scaled down to the size of typically observed exoplanet orbits naturally produce several populations in the observed exoplanet sample. First, for a plausible range of planetary masses, we can match the observed eccentricity distribution of dynamically excited radial velocity giant planets with eccentricities $>$ 0.2. Second, we roughly obtain the observed rate of hot Jupiters around FGK stars. Finally, we obtain a large efficiency of planetary ejections of $\approx 2$ per HL Tau-like system, but the small fraction of stars observed to host giant planets makes it hard to match the rate of free-floating planets inferred from microlensing observations. In view of upcoming GAIA results, we also provide predictions for the expected mutual inclination distribution, which is significantly broader than the absolute inclination distributions typically considered by previous studies.
  • PDF
    We present a single-domain Galerkin-Collocation method to calculate puncture initial data sets for single and binary black holes, in either the trumpet or wormhole geometries. The combination of aspects of the Galerkin and Collocation methods, together with the adoption of spherical coordinates in all cases, proves to be very effective. We have proposed a unified expression for the conformal factor to describe both trumpet and spinning black holes. In particular, for the spinning trumpet black holes, we have exhibited the deformation of the limit surface due to the spin, from a sphere to an oblate spheroid. We have also revisited the energy content of the trumpet and wormhole puncture data sets. The algorithm can be extended to describe binary black holes.
  • PDF
    The strong and radiative decay properties of the low-lying $\Omega_c$ states are studied in a constituent quark model. We find that the $\Omega_c$ states newly observed by the LHCb Collaboration fit well into the expected decay patterns. Thus, their spin-parities can possibly be assigned as follows: (i) The $\Omega_c(3000)$ and $\Omega_c(3090)$ can be assigned to the two $J^P=1/2^-$ states, $|^2P_{\lambda}\frac{1}{2}^-\rangle$ and $|^4P_{\lambda}\frac{1}{2}^-\rangle$, respectively. (ii) The $\Omega_c(3050)$ most likely corresponds to a $J^P=3/2^-$ state, i.e. either $|^2P_{\lambda}\frac{3}{2}^-\rangle$ or $|^4P_{\lambda}\frac{3}{2}^-\rangle$. (iii) The $\Omega_c(3066)$ can be assigned to the $|^4P_{\lambda}\frac{5}{2}^-\rangle$ state with $J^P=5/2^-$. (iv) The $\Omega_c(3119)$ might correspond to one of the two $2S$ states of the first radial excitations, i.e. $|2^2S_{\lambda\lambda}\frac{1}{2}^+\rangle$ or $|2^4S_{\lambda\lambda}\frac{3}{2}^+\rangle$.
  • PDF
    In this article, a fast numerical algorithm for pricing discretely monitored double barrier options is presented. According to the Black-Scholes model, the price of the option at each monitoring date can be evaluated by a recursive formula based on the heat equation solution. These recursive solutions are approximated using Legendre multiwavelets as orthonormal basis functions and expressed in operational matrix form. The most important feature of this method is that its CPU time is nearly invariant as the number of monitoring dates increases. In addition, the rate of convergence of the presented algorithm is obtained. The numerical results verify the validity and efficiency of the method.
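    For orientation, the plain backward recursion that such methods accelerate, written with brute-force quadrature on a log-price grid; the paper instead carries out this step in a Legendre-multiwavelet basis via operational matrices, which makes the cost nearly independent of the number of monitoring dates. All parameters below are illustrative.

      import numpy as np

      def db_knockout_call(S0, K, L, U, r, sigma, T, m_dates, n_grid=400):
          """Discretely monitored double-barrier knock-out call under
          Black-Scholes, with naive rectangle-rule quadrature."""
          dt = T / m_dates
          x = np.linspace(np.log(L), np.log(U), n_grid)  # grid inside the barriers
          dx = x[1] - x[0]
          v = np.maximum(np.exp(x) - K, 0.0)             # payoff at maturity
          drift, var = (r - 0.5 * sigma**2) * dt, sigma**2 * dt
          # dens[i, j]: Gaussian log-price transition density from x[i] to x[j]
          dens = np.exp(-(x[None, :] - x[:, None] - drift) ** 2 / (2 * var))
          dens /= np.sqrt(2 * np.pi * var)
          for _ in range(m_dates):
              v = np.exp(-r * dt) * (dens @ v) * dx      # one monitoring date back
          return np.interp(np.log(S0), x, v) if L < S0 < U else 0.0

      print(db_knockout_call(S0=100.0, K=100.0, L=80.0, U=120.0,
                             r=0.05, sigma=0.25, T=0.5, m_dates=125))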
  • PDF
    I propose a scenario where the majority of the progenitors of type IIb supernovae (SNe IIb) lose most of their hydrogen-rich envelope during a grazing envelope evolution (GEE). In the GEE, the orbital radius of the binary system is about equal to the radius of the giant star, and the more compact companion accretes mass through an accretion disk. The accretion disk is assumed to launch two opposite jets that efficiently remove gas from the envelope along the orbit of the companion. The efficient envelope removal by jets prevents the binary system from entering a common envelope evolution, at least for part of the time. The GEE might be continuous or intermittent. I crudely estimate the total GEE time period to be of the order of hundreds of years for a continuous GEE, and up to a few tens of thousands of years for an intermittent GEE. The key new point is that the removal of envelope gas by jets during the GEE prevents the system from entering a common envelope evolution, and thereby substantially increases the volume of the stellar binary parameter space that leads to SNe IIb, both toward lower secondary masses and toward closer orbital separations.

Recent comments

Laura Mančinska Mar 28 2017 13:09 UTC

Great result!

For those familiar with I_3322, William here gives an example of a nonlocal game exhibiting a behaviour that many of us suspected (but couldn't prove) to be possessed by I_3322.

gae spedalieri Mar 13 2017 14:13 UTC

1) Sorry but this is false.

1a) That analysis is specifically for reducing QECC protocol to an entanglement distillation protocol over certain class of discrete variable channels. Exactly as in BDSW96. Task of the protocol is changed in the reduction.

1b) The simulation is not via a general LOCC b

...(continued)
Siddhartha Das Mar 13 2017 13:22 UTC

We feel that we have cited and credited previous works appropriately in our paper. To clarify:

1) The LOCC simulation of a channel and the corresponding adaptive reduction can be found worked out in full generality in the 2012 Master's thesis of Muller-Hermes. We have cited the original paper BD

...(continued)
gae spedalieri Mar 13 2017 08:56 UTC

This is one of those papers where the contribution of previous literature is omitted and not fairly represented.

1- the LOCC simulation of quantum channels (not necessarily teleportation based) and the corresponding general reduction of adaptive protocols was developed in PLOB15 (https://arxiv.org/

...(continued)
Noon van der Silk Mar 08 2017 04:45 UTC

I feel that while the proliferation of GUNs is unquestionably a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent

...(continued)
Qian Wang Mar 07 2017 17:21 UTC

"To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics."
Can anyone explain a bit about this?

Christopher Chamberland Mar 02 2017 18:48 UTC

A good paper for learning about exRec's is this one https://arxiv.org/abs/quant-ph/0504218. Also, rigorous threshold lower bounds are obtained using an adversarial noise model approach.

Anirudh Krishna Mar 02 2017 18:40 UTC

Here's a link to a lecture from Dan Gottesman's course at PI about exRecs.
http://pirsa.org/displayFlash.php?id=07020028

You can find all the lectures here:
http://www.perimeterinstitute.ca/personal/dgottesman/QECC2007/index.html

Ben Criger Mar 02 2017 08:58 UTC

Good point, I wish I knew more about ExRecs.

Robin Blume-Kohout Feb 28 2017 09:55 UTC

I totally agree -- that part is confusing. It's not clear whether "arbitrary good precision ... using a limited amount of hardware" is supposed to mean that arbitrarily low error rates can be achieved with codes of fixed size (clearly wrong) or just that the resources required to achieve arbitraril

...(continued)