# Top arXiv papers

• Non-thermal radio emission, due to synchrotron radiation, arises in the colliding-wind region of massive binaries and has so far been observed at centimetre wavelengths. At millimetre wavelengths, the stellar winds and the colliding-wind region emit more thermal free-free radiation, and any non-thermal contribution is expected to be difficult or impossible to detect. We aim to determine whether the material in the colliding-wind region contributes substantially to the observed millimetre fluxes of a colliding-wind binary, and to distinguish the synchrotron emission from the free-free emission. We monitored the massive binary Cyg OB2 #8A at 3 mm with the NOrthern Extended Millimeter Array (NOEMA) interferometer of the Institut de Radioastronomie Millimétrique (IRAM). The data were collected in 14 separate observing runs (in 2014 and 2016) and provide good coverage of the orbital period. The observed millimetre fluxes range between 1.1 and 2.3 mJy and show phase-locked variability, clearly indicating that a large part of the emission is due to the colliding-wind region. A simple synchrotron model gives fluxes of the correct order of magnitude, but with a maximum that is phase-shifted with respect to the observations. Qualitatively, this phase shift can be explained by our neglect of the effect of orbital motion on the shape of the colliding-wind region. A model using only free-free emission explains the observations only slightly worse. Additionally, on the map of our observations we also detect the O6.5 III star Cyg OB2 #8B, for which we determine a 3 mm flux of 0.21 ± 0.033 mJy. The question of whether synchrotron radiation or free-free emission dominates the millimetre fluxes of Cyg OB2 #8A remains open. More detailed modelling of this system, based on solving the hydrodynamical equations, is required to give a definite answer.
• In this paper we first present a Birman-Murakami-Wenzl type algebra for every Coxeter system of rank 2 (corresponding to dihedral groups). We prove that these algebras are semisimple for generic parameters and carry natural cellular structures, and we classify their irreducible representations. Among them there is one that serves as a generalization of the Lawrence-Krammer representation, with a quite neat shape and the "correct" dimension. We conjecture that these are isomorphic to the generalized Lawrence-Krammer representations defined by I. Marin as monodromy of certain KZ connections. We prove that these representations are irreducible for generic parameters, and find a quite neat invariant bilinear form on them. Based on the above constructions for rank 2, we introduce a Birman-Murakami-Wenzl type algebra for an arbitrary Coxeter system. For every Coxeter system, the introduced algebra is a quotient of the group algebra of the Artin group (associated with this Coxeter system) and has the corresponding Hecke algebra as a quotient. The simple generators of the Artin group have degree-3 annihilating polynomials in this algebra.
• Aug 17 2017 math.DS arXiv:1708.04832v1
Suppose $X$ is a finite discrete space with at least two elements, $\Gamma$ is a nonempty countable set, and consider a self-map $\varphi:\Gamma\to\Gamma$. We prove that the generalized shift $\sigma_\varphi:X^\Gamma\to X^\Gamma$ with $\sigma_\varphi((x_\alpha)_{\alpha\in\Gamma})=(x_{\varphi(\alpha)})_{\alpha\in\Gamma}$ (for $(x_\alpha)_{\alpha\in\Gamma}\in X^\Gamma$) is: $\bullet$ distributionally chaotic (uniform, type 1, type 2) if and only if $\varphi:\Gamma\to\Gamma$ has at least one non-quasi-periodic point, $\bullet$ $\omega$-chaotic if and only if $\varphi:\Gamma\to\Gamma$ has at least one non-quasi-periodic point, $\bullet$ densely distributionally chaotic if and only if $\varphi:\Gamma\to\Gamma$ does not have any periodic point, $\bullet$ transitively distributionally chaotic if and only if $\varphi:\Gamma\to\Gamma$ is one-to-one without any periodic point. We complete the text with counterexamples.
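Since the abstract above gives $\sigma_\varphi$ by an explicit formula, the map is easy to experiment with on a finite index set. A minimal sketch, under the simplifying assumption that $\Gamma = \{0,\dots,n-1\}$ (the paper allows any countable $\Gamma$); all names are ours:

```python
def generalized_shift(phi, x):
    """Generalized shift sigma_phi: position alpha of the output reads
    coordinate phi(alpha) of the input, i.e. (x_alpha) -> (x_{phi(alpha)})."""
    return tuple(x[phi(alpha)] for alpha in range(len(x)))

# Example: Gamma = {0,1,2,3}, X = {0,1}, phi(alpha) = (alpha + 1) mod 4,
# which recovers the classical cyclic shift as a special case.
phi = lambda a: (a + 1) % 4
x = (0, 1, 1, 0)
print(generalized_shift(phi, x))  # (1, 1, 0, 0)
```

Note that when $\varphi$ is not injective, the output repeats coordinates, which is exactly the regime where the dichotomies of the theorem become interesting.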
• Sco X-1 is the brightest persistent X-ray source in the sky. It is generally believed that Sco X-1 is a low-mass X-ray binary containing a neutron star accreting from a low-mass donor star, where mass transfer is driven by magnetic braking. However, the mass transfer rate predicted by the standard magnetic braking model is at least one order of magnitude lower than the one inferred from the X-ray luminosity. In this work, we investigate whether this source could have evolved from an intermediate-mass X-ray binary containing an Ap/Bp star with a relatively strong magnetic field of 300 - 1000 G. The coupling between the magnetic field and an irradiation-driven wind induced by the X-ray flux from the accretor can yield strong magnetic braking, which could give rise to a relatively high mass transfer rate. According to the observed orbital period, mass transfer rate, mass ratio, and donor star spectral type, the progenitor of Sco X-1 should be an intermediate-mass X-ray binary containing a 1.6 $-$ 1.8 $\rm M_{\odot}$ Ap/Bp donor star in a 1.3 $-$ 1.5 day orbit. Therefore, we propose that anomalous magnetic braking of Ap/Bp stars provides an alternative evolutionary channel for a subset of luminous X-ray sources.
• This study deals with the problem of pricing compound options when the underlying asset follows a mixed fractional Brownian motion with jumps. An analytic formula for compound options is derived under the risk neutral measure. Then, these results are applied to value extendible options. Moreover, some special cases of the formula are discussed and numerical results are provided.
• We provide sufficient conditions on an initial curve for the area-preserving and the length-preserving curvature flows of curves in a plane to develop a singularity at some finite time or converge to an $m$-fold circle as time goes to infinity. For the area-preserving flow, the positivity of the enclosed algebraic area determines whether the curvature blows up in finite time or not, while for the length-preserving flow, it is the positivity of an energy associated with the initial curve that plays such a role.
• The optimal parameters for nuclear excitation by electron capture in plasma environments generated by the interaction of ultra-strong optical lasers with solid matter are investigated theoretically. As a case study we consider a 4.85 keV nuclear transition starting from the long-lived $^{93\mathrm{m}}$Mo isomer that can lead to the release of the stored 2.4 MeV excitation energy. We find that due to the complex plasma dynamics, the nuclear excitation rate and the actual number of excited nuclei do not reach their maximum at the same laser parameters. The nuclear excitation achievable with a high-power optical laser is up to six orders of magnitude larger than the values predicted for direct resonant or secondary nuclear excitation at the x-ray free electron laser. Our results show that the experimental observation of the nuclear excitation of $^{93\mathrm{m}}$Mo and the subsequent release of stored energy should be possible at laser facilities available today.
• We carry out a numerical investigation of the asymptotic expansion of the so-called Wright function ${}_p\Psi_q(z)$ (a generalised hypergeometric function) in the case when exponentially small terms are present. This situation is covered by two theorems of Wright and Braaksma. We demonstrate that a more precise understanding of the behaviour of ${}_p\Psi_q(z)$ is obtained by taking into account the Stokes phenomenon.
• The instantaneous underdetermined audio source separation problem with $K$ sensors and $L$ sources (where $K < L$) has been addressed by many different approaches, provided the sources remain quite distinct in the virtual positioning space spanned by the sensors. This problem can be tackled as a directional clustering problem along the source position angles in the mixture. The use of Generalised Directional Laplacian Densities (DLD) in the MDCT domain for underdetermined source separation has been proposed before. Here, we derive weighted mixtures of DLDs in a sparser representation of the data in the STFT domain to perform separation. The proposed approach yields improved results compared to our previous offering and compares favourably with the state of the art.
• We present in this paper a generic and parameter-free algorithm to efficiently build a wide variety of optical components, such as mirrors or lenses, that satisfy some light energy constraints. In all of our problems, one is given a collimated or point light source and a desired illumination after reflection or refraction, and the goal is to design the geometry of a mirror or lens which transports exactly the light emitted by the source onto the target. We first propose a general framework and show that eight different optical component design problems amount to solving a Light Energy Conservation equation that involves the computation of Visibility diagrams. We show that these diagrams all have the same structure and can be obtained by intersecting a 3D Power Diagram with a planar or spherical domain. This allows us to propose an efficient and fully generic algorithm capable of solving the eight optical component design problems. Our solutions can satisfy design constraints such as convexity or concavity and are always graphs over the plane or the sphere. We show the effectiveness of our algorithm on numerous simulated examples.
• Hydroxymethylcytosine (5hmC) methylation is known to be a possible epigenetic mark impacting genome stability. In this paper, we address the existing 5hmC measure $\Delta \beta$ and discuss its properties both analytically and empirically on real data. Then we introduce several alternative hydroxymethylation measures and compare their properties with those of $\Delta \beta$. All results will be illustrated by means of real data analyses.
• We consider how breakdown of the quasistatic approximation for attractors can lead to rate-dependent tipping, where a qualitative change in tracking/tipping behaviour of trajectories can be characterised in terms of a critical rate. Associated with rate-dependent tipping (where tracking of a branch of quasistatic attractors breaks down) we find a new phenomenon for attractors that are not simply equilibria: partial tipping of the pullback attractor where certain phases of the periodic attractor tip and others track the quasistatic attractor. For a specific model system with a parameter shift between two asymptotically autonomous systems with periodic attractors we characterise thresholds of rate-dependent tipping to partial and total tipping. We show these thresholds can be found in terms of certain periodic-to-periodic (PtoP) and periodic-to-equilibrium (PtoE) connections that we determine using Lin's method for an augmented system.
• We used data from the Fermi Large Area Telescope obtained during the last 7 years, spanning two passages of $\eta$ Carinae at periastron, and compared them with the predictions of particle acceleration in hydrodynamic simulations. Two emission components can be distinguished. The low-energy component cuts off below 10 GeV and its flux, modulated by the orbital motion, varies by a factor of less than 2. Short-term variability occurs at periastron. The flux of the high-energy component varies by a factor of 3-4, but differently during the two periastrons. The variability observed at low energy, including some of its details, and that observed at high energy during the first half of the observations match the predictions of the simulation, assuming a surface magnetic field in the range 0.4-1 kG. The high-energy component and the thermal X-ray emission were weaker than expected around the second periastron, suggesting a modification of the wind density in the inner wind collision zone. Diffusive shock acceleration in the complex geometry of the wind collision zone of $\eta$ Carinae provides a convincing match to the observations and new diagnostic tools to probe the geometry and energetics of the system.
• Estimating the tail index parameter is one of the primary objectives in extreme value theory. For heavy-tailed distributions, the Hill estimator is the most popular way to estimate the tail index parameter. Recent works have aimed to improve the Hill estimator using different methods, for example bootstrap or the Kolmogorov-Smirnov metric. These methods are asymptotically consistent, but for tail index $\xi >1$ the estimates fail to approach the theoretical value at realistic sample sizes. In this paper, we introduce a new empirical method which can estimate high tail index parameters well and might also be useful for relatively small sample sizes.
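For context, the classical Hill estimator that the abstract takes as its baseline can be sketched in a few lines. This is the textbook estimator, not the paper's new empirical method, and the function name is ours:

```python
import math

def hill_estimator(sample, k):
    """Hill estimate of the tail index from the k largest order statistics:
    the average log-excess over the (k+1)-th largest observation."""
    xs = sorted(sample, reverse=True)   # descending order statistics
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
```

The well-known practical difficulty is the choice of k: too few order statistics gives high variance, too many biases the estimate, which is part of why improvements such as those the abstract surveys are sought.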
• The Continuous Spontaneous Localization (CSL) model is the best known and most studied among collapse models, which modify quantum mechanics and identify fundamental reasons behind the unobservability of quantum superpositions at the macroscopic scale. Although several tests have been performed during the last decade, to date the CSL parameter space still exhibits a vast unexplored region. Here, we study and propose a previously unattempted non-interferometric test aimed at filling this gap. We show that the angular momentum diffusion predicted by CSL heavily constrains the parametric values of the model when applied to a macroscopic cylinder, eventually allowing us to cover the unexplored region of the parameter space.
• We present new viscosity measurements of a synthetic silicate system considered an analogue for the lava erupted on the surface of Mercury. In particular, we focus on the northern volcanic plains (NVP), which correspond to the largest lava flows on Mercury and possibly in the Solar System. High-temperature viscosity measurements were performed at both superliquidus (up to 1736 K) and subliquidus conditions (1569-1502 K) to constrain the viscosity variations as a function of crystallinity (from 0 to 28\%) and shear rate (from 0.1 to 5 s$^{-1}$). Melt viscosity shows moderate variations (4-16 Pa s) in the temperature range of 1736-1600 K. Experiments performed below the liquidus temperature show an increase in viscosity as the shear rate decreases from 5 to 0.1 s$^{-1}$, resulting in shear-thinning behavior, with a decrease in viscosity of 1 log unit. The low viscosity of the studied composition may explain the ability of NVP lavas to cover long distances, on the order of hundreds of kilometers, in a turbulent flow regime. Using our experimental data, we estimate that lava flows with thicknesses of 1, 5, and 10 m are likely to have velocities of 4.8, 6.5, and 7.2 m/s, respectively, on a 5 degree ground slope. Numerical modeling incorporating both the heat loss of the lavas and their possible crystallization during emplacement allows us to infer that high effusion rates (>10,000 m$^3$/s) are necessary to cover the large distances indicated by satellite data from the MErcury Surface, Space ENvironment, GEochemistry, and Ranging spacecraft.
• Symmetry breaking is a fundamental concept in condensed matter physics whose presence often heralds new phases of matter. For instance, the breaking of time reversal symmetry is traditionally linked to magnetic phases in a material, while the breaking of gauge symmetry can lead to superfluidity/superconductivity. Nematic phases are phases in which rotational symmetry is broken while maintaining translational symmetry, and are traditionally associated with liquid crystals. Electronic nematic states, where the orthogonal in-plane crystal directions have different electronic properties, have garnered a great deal of attention after their discovery in Sr$_3$Ru$_2$O$_7$, multiple iron based superconductors, and in the superconducting state of CuBiSe. Here we demonstrate the existence of an electronic nematic phase in the two-dimensional carrier gas that forms at the (111) LaAlO$_3$ (LAO)/SrTiO$_3$ (STO) interface, which onsets at low temperatures and is tunable by an electric field.
• The computation of light scattering by the superposition T-matrix scheme has been so far restricted to systems made of particles that are either sparsely distributed or of near-spherical shape. In this work, we extend the range of applicability of the T-matrix method by accounting for the coupling of scattered fields between highly non-spherical particles in close vicinity. This is achieved using an alternative formulation of the translation operator for spherical vector wave functions, based on a plane wave expansion of the particle's scattered electromagnetic field. The accuracy and versatility of the present approach is demonstrated by simulating arbitrarily oriented and densely packed spheroids, for both dielectric and metallic particles.
• Liquid marbles are microlitre droplets of liquid, encapsulated by self-organised hydrophobic particles at the liquid/air interface. They offer an efficient approach for manipulating liquid droplets and compartmentalising reactions in droplets. Digital fluidic devices employing liquid marbles might benefit from having embedded computing circuits without electronics and moving mechanical parts (apart from the marbles). We present an experimental implementation of a collision gate with liquid marbles. Mechanics of the gate follows principles of Margolus' soft-sphere collision gate. Boolean values of the inputs are given by the absence (FALSE) or presence (TRUE) of a liquid marble. There are three outputs: two outputs are trajectories of undisturbed marbles (they only report TRUE when just one marble is present at one of the inputs), one output is represented by trajectories of colliding marbles (when two marbles collide they lose their horizontal momentum and fall), this output reports TRUE only when two marbles are present at inputs. Thus the gate implements AND and AND-NOT logical functions. We speculate that by merging trajectories representing AND-NOT output into a single channel one can produce a one-bit half-adder. Potential design of a one-bit full-adder is discussed, and the synthesis of both a pure nickel metal and hybrid nickel/polymer liquid marble is reported.
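The gate logic described above can be captured by a small truth-table model. The encoding (booleans for marble presence, a tuple for the three outputs) is our own illustration, not part of the paper:

```python
def collision_gate(a, b):
    """Soft-sphere collision gate: a, b mark marble presence at the inputs.
    Returns (A passes undisturbed, B passes undisturbed, collision output)."""
    return (a and not b,   # TRUE only when just marble A is present
            b and not a,   # TRUE only when just marble B is present
            a and b)       # two marbles collide and fall: logical AND

# Full truth table: the collision output fires only for (True, True).
for a in (False, True):
    for b in (False, True):
        print(a, b, collision_gate(a, b))
```

Merging the two undisturbed-trajectory outputs into one channel, as the abstract speculates, would give XOR alongside AND, i.e. the sum and carry of a half-adder.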
• Aug 17 2017 cs.AI arXiv:1708.04806v1
This paper continues research on a new cognitive model. In particular, it considers the neural binding structure of an earlier paper. To help with this, the paper describes some new methods in the areas of image processing and high-level behaviour simulation. The work is all based on earlier research by the author, and the new additions are intended to fit in with the overall design. For image processing, a grid-like structure is used with 'full linking'. Each cell in the classifier grid stores a list of all other cells it gets associated with, and this is used as the learned image that new input is compared to. For the behaviour metric, a new prediction equation is suggested, as part of a simulation, that uses feedback and history to dynamically determine its course of action. While the new methods are from widely different topics, both can be compared with the binary-analog type of interface that is the main focus of the paper. It is suggested that the simplest linking between a tree and an ensemble can explain neural binding and variable signal strengths.
• The local electronic and magnetic properties of superconducting FeSe have been investigated by K$\beta$ x-ray emission and simultaneous x-ray absorption spectroscopy at the Fe K-edge under extreme conditions at high pressure and low temperature. Our results indicate a sluggish decrease of the local Fe spin moment under pressure up to 6-7 GPa, in line with previous reports, followed by a sudden increase at higher pressure that had gone unnoticed until now. The magnetic surge coincides with an abrupt change of the Fe local structure adopting a more centrosymmetric environment, as observed by the decrease of the Fe-K pre-edge region intensity and confirmed by simulations of the absorption spectra.
• We prove that Gabor systems generated by certain scaled B-splines can be considered as perturbations of the Gabor systems generated by the Gaussian, with a deviation within an arbitrarily small tolerance whenever the order $N$ of the B-spline is sufficiently large. As a consequence we show that for any choice of translation/modulation parameters $a,b>0$ with $ab<1,$ the scaled version of $B_N$ generates Gabor frames for $N$ sufficiently large. Considering the Gabor frame decomposition generated by the Gaussian and a dual window, the results lead to estimates of the deviation from perfect reconstruction that arises when the Gaussian is replaced by a scaled B-spline, or when the dual window of the Gaussian is replaced by certain explicitly given and compactly supported linear combinations of the B-splines. In particular, this leads to a family of approximate dual windows of a very simple form, yielding "almost perfect reconstruction" within any desired error tolerance whenever the product $ab$ is sufficiently small. In contrast, the known (exact) dual windows have a very complicated form. A similar analysis is sketched with the scaled B-splines replaced by certain truncations of the Gaussian. As a consequence of the approach we prove (mostly known) convergence results for the considered scaled B-splines to the Gaussian in the $L^p$-spaces, in the time domain as well as in the frequency domain.
• We prove Bergman's theorem on centralizers by using generic matrices and Kontsevich's quantization method. For any field $\textbf{k}$ of positive characteristic, let $A=\textbf{k} \langle x_1,\dots,x_s\rangle$ be the free associative algebra; then the centralizer $\mathcal{C}(f)$ of any nontrivial element $f\in A\backslash \textbf{k}$ is a ring of polynomials in a single variable. We also prove that $A$ has no commutative subalgebra of transcendence degree $\geq 2$.
• Stochastic gradient descent (SGD) is a popular stochastic optimization method in machine learning. Traditional parallel SGD algorithms, e.g., SimuParallel SGD, often require all nodes to have the same performance or to consume equal quantities of data. However, these requirements are difficult to satisfy when the parallel SGD algorithms run in a heterogeneous computing environment; low-performance nodes will exert a negative influence on the final result. In this paper, we propose an algorithm called weighted parallel SGD (WP-SGD). WP-SGD combines weighted model parameters from different nodes in the system to produce the final output. WP-SGD makes use of the reduction in standard deviation to compensate for the loss from the inconsistency in performance of nodes in the cluster, which means that WP-SGD does not require that all nodes consume equal quantities of data. We also analyze the theoretical feasibility of running two other parallel SGD algorithms combined with WP-SGD in a heterogeneous environment. The experimental results show that WP-SGD significantly outperforms the traditional parallel SGD algorithms on distributed training systems with an unbalanced workload.
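The core combination step of WP-SGD — weighting per-node parameters rather than averaging them uniformly — can be sketched as follows. The weighting rule here is a placeholder of our own, not the paper's exact scheme:

```python
def combine_weighted(params_per_node, weights):
    """Weighted average of per-node parameter vectors: nodes that processed
    more data (or run faster) get a larger say in the final model."""
    total = sum(weights)
    dim = len(params_per_node[0])
    return [sum(w * p[i] for w, p in zip(weights, params_per_node)) / total
            for i in range(dim)]

# Example: a fast node (weight 3) and a slow node (weight 1).
print(combine_weighted([[1.0, 2.0], [5.0, 6.0]], [3.0, 1.0]))  # [2.0, 3.0]
```

Uniform weights recover the SimuParallel-SGD-style plain average; skewed weights are what lets the method tolerate an unbalanced workload.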
• Aug 17 2017 math.NT arXiv:1708.04800v1
Let $\mathbb{K}$ be a number field of degree $k$ and let $\mathcal{O}$ be an order in $\mathbb{K}$. A generalized number system over $\mathcal{O}$ (GNS for short) is a pair $(p,\mathcal{D})$ where $p \in \mathcal{O}[x]$ is monic and $\mathcal{D}\subset\mathcal{O}$ is a complete residue system modulo $p(0)$. If each $a \in \mathcal{O}[x]$ admits a representation of the form $a \equiv \sum_{j =0}^{\ell-1} d_j x^j \pmod{p}$ with $\ell\in\mathbb{N}$ and $d_0,\ldots, d_{\ell-1}\in\mathcal{D}$ then the GNS $(p,\mathcal{D})$ is said to have the finiteness property. Using fundamental domains $\mathcal{F}$ of the action of $\mathbb{Z}^k$ on $\mathbb{R}^k$ we define classes $\mathcal{G}_\mathcal{F} := \{ (p, D_\mathcal{F}) \;:\; p \in \mathcal{O}[x] \}$ of GNS whose digit sets $D_\mathcal{F}$ are defined in terms of $\mathcal{F}$ in a natural way. We are able to prove general results on the finiteness property of GNS in $\mathcal{G}_\mathcal{F}$ by giving an abstract version of the well-known "dominant condition" on the absolute coefficient $p(0)$ of $p$. In particular, depending on mild conditions on the topology of $\mathcal{F}$ we characterize the finiteness property of $(p(x\pm m), D_\mathcal{F})$ for fixed $p$ and large $m\in\mathbb{N}$. Using our new theory, we are able to give general results on the connection between power integral bases of number fields and GNS.
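The simplest instance of the finiteness property is the familiar one: $\mathcal{O} = \mathbb{Z}$, $p(x) = x - b$, and $\mathcal{D} = \{0, \dots, b-1\}$, where the digit representation reduces to ordinary base-$b$ expansion. A toy sketch of that special case only (the general GNS setting of the abstract is much richer):

```python
def gns_digits(a, b):
    """Digits d_0, ..., d_{l-1} with a = sum d_j * b**j, for the GNS
    (p(x) = x - b, D = {0, ..., b-1}) over the integers; assumes a >= 0."""
    digits = []
    while a != 0:
        d = a % b          # the unique digit in the complete residue system mod b
        digits.append(d)
        a = (a - d) // b   # the "division by p" step of the representation
    return digits          # least significant digit first

print(gns_digits(13, 2))  # [1, 0, 1, 1], i.e. 13 = 1 + 0*2 + 4 + 8
```

The finiteness property asks exactly when such a loop terminates for every element, which is what the "dominant condition" on $p(0)$ controls in the general setting.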
• Recent technological advancements have led to the generation of huge amounts of data over the web, such as text, image, audio and video. Most of this data is high dimensional and sparse, e.g., the bag-of-words representation used for representing text. Often, an efficient search for similar data points needs to be performed in many applications like clustering, nearest neighbour search, ranking and indexing. Even though there have been significant increases in computational power, a simple brute-force similarity search on such datasets is inefficient and at times impossible. Thus, it is desirable to get a compressed representation which preserves the similarity between data points. In this work, we consider the data points as sets and use Jaccard similarity as the similarity measure. Compression techniques are generally evaluated on the following parameters: 1) the randomness required for compression, 2) the time required for compression, 3) the dimension of the data after compression, and 4) the space required to store the compressed data. Ideally, the compressed representation of the data should be such that the similarity between each pair of data points is preserved, while keeping the time and randomness required for compression as low as possible. We show that the compression technique suggested by Pratap and Kulkarni also works well for Jaccard similarity. We present a theoretical proof of the same and complement it with rigorous experimentation on synthetic as well as real-world datasets. We also compare our results with the state-of-the-art "min-wise independent permutation", and show that our compression algorithm achieves almost equal accuracy while significantly reducing the compression time and the randomness.
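For readers unfamiliar with the baseline mentioned above, min-wise hashing estimates Jaccard similarity from short signatures. A minimal sketch of that baseline (the hash construction is ours, chosen for brevity; this is not the Pratap-Kulkarni compression scheme the paper studies):

```python
def minhash_signature(s, seeds):
    """One minimum per seeded hash function; for well-behaved hashes, the
    probability that two signatures agree at a coordinate approximates the
    Jaccard similarity of the sets. Integer elements keep Python's built-in
    hash deterministic across runs."""
    return [min(hash((seed, x)) for x in s) for seed in seeds]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing coordinates between two signatures."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

The cost parameters the abstract lists map directly onto this sketch: the seeds are the randomness, the double loop is the compression time, and the signature length is the compressed dimension.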
• In a software system it is possible to quantify the amount of information that is leaked or corrupted by analysing the flows of information present in the source code. In a cyber-physical system, information flows are present not only at the digital level, but also at the physical level, and between the two levels. In this work, we provide a methodology to formally analyse a cyber-physical system composite model (combining physics and control) using an information flow-theoretic approach. We use this approach to quantify the level of vulnerability of a system with respect to attackers with different capabilities. We illustrate our approach by means of a water distribution case study.
• Aug 17 2017 cs.SY arXiv:1708.04797v1
In this paper we consider the problem of computing control invariant sets for linear controlled systems with constraints on the input and the states. We focus in particular on the complexity of the computation of the N-step operator, given by the Minkowski addition of sets, which is the basis of many of the iterative procedures for obtaining control invariant sets. Set inclusion conditions for control invariance are presented that involve the N-step sets and are posed in the form of linear programming problems. These conditions are employed in algorithms based on LP problems that allow one to overcome the complexity limitation inherent in set addition and can be applied also to high-dimensional systems. The efficiency and scalability of the method are illustrated by computing, in less than two seconds, an approximation of the maximal control invariant set, based on the 15-step operator, for a system whose state and input dimensions are 20 and 10, respectively.
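The Minkowski addition at the heart of the N-step operator is simple to state. A tiny illustration on finite 2-D point sets (the paper of course works with polytopes described by linear inequalities, not point clouds):

```python
def minkowski_sum(A, B):
    """Minkowski sum of two finite sets of 2-D points: all pairwise sums.
    The multiplicative blow-up in the number of elements mirrors the
    complexity growth that motivates the paper's LP-based alternative."""
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

# Horizontal unit segment + vertical unit segment = corners of the unit square.
print(minkowski_sum({(0, 0), (1, 0)}, {(0, 0), (0, 1)}))
```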
• In this paper, we generalize a source generative model in a state-of-the-art blind source separation (BSS), independent low-rank matrix analysis (ILRMA). ILRMA is a unified method of frequency-domain independent component analysis and nonnegative matrix factorization and can provide better performance for audio BSS tasks. To further improve the performance and stability of the separation, we introduce an isotropic complex Student's $t$-distribution as a source generative model, which includes the isotropic complex Gaussian distribution used in conventional ILRMA. Experiments are conducted using both music and speech BSS tasks, and the results show the validity of the proposed method.
• In this work, we prove the existence of a local convex solution to the degenerate Hessian equation.
• Although there is an extensive statistical literature showing the disadvantages of discretizing continuous variables, categorization is a common practice in clinical research that results in substantial loss of information. A large collection of methods in cancer phase I clinical trial design establishes the dose of a new agent as a discrete variable. A noteworthy exception is the Escalation With Overdose Control (EWOC) design, where doses can be defined either as continuous or as a grid of discrete doses. A Monte Carlo simulation study was performed to compare the operating characteristics of continuous and discrete dose EWOC designs. Four equally spaced grids with different interval lengths were considered. The loss of information was measured by several operating characteristics that are easier for clinicians to interpret, in addition to the usual statistical measures of bias and mean squared error. Based on the simulations, if there is not enough knowledge about the true maximum tolerated dose (MTD), as commonly happens in phase I clinical trials, the continuous dose scheme emerges as an attractive option.
• Charge symmetry breaking (CSB) is particularly strong in the A=4 mirror hypernuclei $_{\Lambda}^4$H--$_{\Lambda}^4$He. Recent four-body no-core shell model calculations that confront this CSB by introducing $\Lambda$-$\Sigma^0$ mixing to leading-order chiral effective field theory hyperon-nucleon potentials are reviewed, and a shell-model approach to CSB in p-shell $\Lambda$ hypernuclei is outlined.
• In this study we evaluated human-robot collaboration models in an integrated human-robot operational system. An integrated work cell which includes a robotic arm working collaboratively with a human worker was specially designed for executing a real-time assembly task. Eighty industrial engineering students aged 22-27 participated in experiments in which timing and sensor based models were compared to an adaptive model developed within this framework. Performance measures included total assembly time and total idle time. The results showed conclusively that the adaptive system improved the examined parameters and provided an improvement of 7% in total assembly time and 60% in total idle time when compared to timing and sensory based models.
• Cu(pyz)(NO$_3$)$_2$ is a quasi one-dimensional molecular antiferromagnet that exhibits three-dimensional long-range magnetic order below $T_N = 110$ mK due to the presence of weak inter-chain exchange couplings. Here we compare calculations of the three largest exchange coupling constants in this system using two techniques based on plane-wave basis-set density functional theory: (i) a dimer fragment approach and (ii) an approach using periodic boundary conditions. The calculated values of the large intrachain coupling constant are found to be consistent with experiment, showing the expected level of variation between different techniques and implementations. However, the interchain coupling constants are found to be smaller than the current limits on the resolution of the calculations. This is due to the computational limitations on convergence of absolute energy differences with respect to basis set, which are larger than the inter-chain couplings themselves. Our results imply that errors resulting from such limitations are inherent in the evaluation of small exchange constants in systems of this sort, and that many previously reported results should therefore be treated with caution.
• We present a simple closed-form expression for the condition for coherent perfect absorption, derived through an electromagnetic wave treatment and interface boundary conditions. Apart from providing physical insight, this expression can be used to estimate the values of the various parameters required for observing coherent perfect absorption in a given medium characterized by a complex dielectric constant. The results of the theoretical expression are found to be in good agreement with those obtained through numerical simulations.
• The effective Lagrangian formalism provides a way to study new physics effects at the electroweak scale. We study Higgs pair production via vector-boson fusion (VBF) at the Large Hadron Collider within the framework of effective field theory. The effects of the dimension-six operators involved in VBF Higgs pair production, particularly $\mathcal{O}_{\Phi,2}$ and $\mathcal{O}_{\Phi,3}$, which are relevant to the triple Higgs self-coupling, on the integrated cross section and various kinematic distributions are investigated. We find that the distributions of the Higgs pair invariant mass, Higgs transverse momentum, and rapidity are significantly altered by the operators $\mathcal{O}_{\Phi,2}$ and $\mathcal{O}_{\Phi,3}$. These features are helpful in disentangling the contributions of the operators $\mathcal{O}_{\Phi,2}$ and $\mathcal{O}_{\Phi,3}$ to the triple Higgs self-coupling. We also provide the 5$\sigma$ discovery and 3$\sigma$ exclusion limits for the coefficients of $\mathcal{O}_{\Phi,2}$ and $\mathcal{O}_{\Phi,3}$ from measuring the VBF Higgs pair production process, including the sequential $H \rightarrow b\bar{b}$ decays, at the $14~ {\rm TeV}$ LHC.
• We discuss to what extent the local techniques of resolution of singularities over fields of characteristic zero can be applied to improve singularities in general. For certain interesting classes of singularities, this leads to an embedded resolution via blow-ups in regular centers. We illustrate this for generic determinantal varieties. The article is partially expository and is addressed to non-experts who aim to construct resolutions for other special classes of singularities in positive or mixed characteristic.
• Aug 17 2017 math.OC arXiv:1708.04783v1
We investigate a projection-free method, namely conditional gradient sliding (CGS), on batched, stochastic, and finite-sum non-convex problems. CGS is a smart combination of Nesterov's accelerated gradient method and the Frank-Wolfe (FW) method, and outperforms FW in the convex setting by saving gradient computations. However, the study of CGS in the non-convex setting is limited. In this paper, we propose non-convex conditional gradient sliding (NCGS), which surpasses the non-convex Frank-Wolfe method in the batched, stochastic, and finite-sum settings.
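The projection-free ingredient shared by FW and CGS is a linear minimization oracle in place of a projection. A minimal sketch (illustrative, not the paper's algorithm) of plain Frank-Wolfe over the $\ell_1$-ball, where the oracle has a closed-form solution as a signed, scaled coordinate vector:

```python
import numpy as np

def frank_wolfe_l1(grad_f, dim, radius=1.0, steps=1000):
    # Projection-free optimization over the l1-ball of the given radius.
    # The linear minimization oracle min_{||s||_1 <= r} <g, s> is solved
    # in closed form: pick the largest-magnitude gradient coordinate.
    x = np.zeros(dim)
    for t in range(steps):
        g = grad_f(x)
        i = int(np.argmax(np.abs(g)))
        s = np.zeros(dim)
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2)          # classical diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# Minimize f(x) = ||x - c||^2 for a target c inside the ball.
c = np.array([0.5, -0.3])
x = frank_wolfe_l1(lambda x: 2.0 * (x - c), dim=2)
```

CGS accelerates this scheme by running the oracle inside an inexact Nesterov update, which is what saves gradient computations relative to plain FW in the convex case.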
• Thompson sampling has impressive empirical performance for many multi-armed bandit problems. But current algorithms for Thompson sampling only work in the case of conjugate priors, since they require inferring the posterior, which is often computationally intractable when the prior is not conjugate. In this paper, we propose a novel algorithm for Thompson sampling that only requires drawing samples from a tractable distribution, so our algorithm is efficient even when the prior is non-conjugate. To do this, we reformulate Thompson sampling as an optimization problem via the Gumbel-Max trick. We then construct a set of random variables whose member with the highest mean we aim to identify, and solve this with techniques from best arm identification.
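The Gumbel-Max trick mentioned above can be stated concretely: adding i.i.d. Gumbel(0,1) noise to log-probabilities and taking the argmax yields an exact sample from the corresponding categorical distribution, recasting sampling as optimization. A minimal sketch of the trick itself (the setup is illustrative, not the paper's full algorithm):

```python
import numpy as np

def gumbel_max_sample(log_probs, rng):
    # argmax_k (log p_k + G_k), with G_k ~ Gumbel(0, 1) i.i.d., is an
    # exact draw from Categorical(p): sampling recast as optimization.
    g = rng.gumbel(size=len(log_probs))
    return int(np.argmax(np.asarray(log_probs) + g))

# Empirical frequencies should recover p.
rng = np.random.default_rng(0)
p = np.array([0.2, 0.5, 0.3])
draws = [gumbel_max_sample(np.log(p), rng) for _ in range(20000)]
freqs = np.bincount(draws, minlength=3) / len(draws)
```

Note that only unnormalized log-probabilities are needed, since a shared additive constant never changes the argmax; this is what makes the reformulation attractive when the posterior is intractable to normalize.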
• We use meromorphic quadratic differentials with higher order poles to parametrize the Teichmüller space of crowned hyperbolic surfaces. Such a surface is obtained by uniformizing a compact Riemann surface with marked points on its boundary components, and has non-compact ends with boundary cusps. This extends Wolf's parametrization of the Teichmüller space of a closed surface using holomorphic quadratic differentials. Our proof involves showing the existence of a harmonic map from a punctured Riemann surface to a crowned hyperbolic surface, with prescribed principal parts of its Hopf differential, which determine the geometry of the map near the punctures.
• Template-directed replication of nucleic acids is at the essence of all living beings and a major milestone for any origin of life scenario. We here present an idealized model of prebiotic sequence replication, where binary polymers act as templates for their autocatalytic replication, thereby serving as each other's reactants and products in an intertwined molecular ecology. Our model demonstrates how autocatalysis alters the qualitative and quantitative system dynamics in counter-intuitive ways. Most notably, numerical simulations reveal a very strong intrinsic selection mechanism that favours the appearance of a few population structures with highly ordered and repetitive sequence patterns when starting from a pool of monomers. We demonstrate both analytically and through simulation how this "selection of the dullest" is caused by continued symmetry breaking through random fluctuations in the transient dynamics that are amplified by autocatalysis and eventually propagate to the population level. The impact of these observations on related prebiotic mathematical models is discussed.
• Let $G$ be a finite group. In this paper, we study $G$-categories equipped with an ordinary symmetric monoidal structure, together with a set of specified norm maps. We give an example and explain how the Hill-Hopkins-Ravenel norm functors arise from it, and then we generalize the Kelly-Mac Lane coherence theorem to include the present structures. As an application, we obtain finite presentations of $N_\infty$-$G$-categories for any $G$-indexing system.
• Aug 17 2017 math.DG arXiv:1708.04775v1
The standard Laplace operator is a generalization of the Hodge Laplace operator on differential forms to arbitrary geometric vector bundles; alternatively, it can be seen as a generalization of the Casimir operator acting on sections of homogeneous vector bundles over symmetric spaces to general Riemannian manifolds. Stressing the functorial aspects of the standard Laplace operator $\Delta$ with respect to the category of geometric vector bundles, we show that the standard Laplace operator commutes not only with all homomorphisms, but also with a large class of natural first order differential operators between geometric vector bundles. Several examples are included to highlight the conclusions of this article.
• A novel method and protocol for establishing common secrecy between two users based on physical parameters is proposed. The four physical parameters of the users are their clock frequencies, their relative clock phases, and the distance between them. The proposed protocol is backed by a theoretical model for the measurements, and estimators are proposed to estimate the secret physical parameters. The physically exchanged parameters are shown to be secure by virtue of their non-observability to adversaries. Under a simplified analysis based on a testbed setting, it is shown that 38 bits of common secrecy can be derived from one run of the proposed protocol. The proposed method is also robust against various kinds of active timing attacks and actively impersonating adversaries.
• Aug 17 2017 math.CO cs.CG arXiv:1708.04773v1
This paper studies questions about duality between crossings and non-crossings in graph drawings via the notions of thickness and antithickness. The "thickness" of a graph $G$ is the minimum integer $k$ such that in some drawing of $G$, the edges can be partitioned into $k$ noncrossing subgraphs. The "antithickness" of a graph $G$ is the minimum integer $k$ such that in some drawing of $G$, the edges can be partitioned into $k$ thrackles, where a "thrackle" is a set of edges, each pair of which intersect exactly once. So thickness is a measure of how close a graph is to being planar, whereas antithickness is a measure of how close a graph is to being a thrackle. This paper explores the relationship between the thickness and antithickness of a graph, under various graph drawing models, with an emphasis on extremal questions.
• We developed a sharp-interface level-set approach for two-phase immiscible flow with moving contact lines. The Cox-Voinov model is used to describe the moving contact line. A piecewise linear interface method is used to construct the signed distance function and to implement the contact angle boundary condition. Spurious currents are studied for static and moving fluid interfaces, and convergence behavior is demonstrated. Pressure and the surface tension force are balanced up to machine precision for static parabolic interfaces, while the velocity error decreases steadily with grid refinement when the interface is advected in a uniform flow field. The moving contact line problem is studied and validated through comparison with theory and experiments for an advancing interface and for capillary rise in a tube.
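The Cox-Voinov model referred to above relates the dynamic contact angle to the contact-line speed through the capillary number. A sketch, assuming the standard hydrodynamic form $\theta_d^3 = \theta_e^3 + 9\,\mathrm{Ca}\,\ln(L/\lambda)$ with an illustrative macro/micro cutoff ratio (the numbers are not from the paper):

```python
import math

def cox_voinov_angle(theta_e, ca, cutoff_ratio=1e4):
    # Dynamic contact angle (radians) from the Cox-Voinov law:
    #   theta_d^3 = theta_e^3 + 9 * Ca * ln(L / lambda),
    # where theta_e is the equilibrium angle, Ca the capillary number,
    # and cutoff_ratio = L / lambda the macroscopic/microscopic
    # length-scale ratio (an assumed, geometry-dependent parameter).
    return (theta_e ** 3 + 9.0 * ca * math.log(cutoff_ratio)) ** (1.0 / 3.0)

# Illustrative: equilibrium angle of 60 degrees, advancing at Ca = 0.005,
# yields a dynamic angle somewhat above 60 degrees.
theta_d = cox_voinov_angle(math.radians(60.0), ca=0.005)
```

In a level-set implementation, a relation of this kind supplies the angle imposed by the contact-angle boundary condition as a function of the local interface speed.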
• Let $(R,\mathfrak{m})$ be a local Noetherian ring with residue field $k$. While much is known about the generating sets of reductions of ideals of $R$ if $k$ is infinite, the case in which $k$ is finite is less well understood. We investigate the existence (or lack thereof) of proper reductions of an ideal of $R$ and the number of generators needed for a reduction in the case that $k$ is a finite field. When $R$ is one-dimensional, we give a formula for the smallest integer $n$ for which every ideal has an $n$-generated reduction. It follows that in a one-dimensional local Noetherian ring every ideal has a principal reduction if and only if the number of maximal ideals in the normalization of the reduced quotient of $R$ is at most $|k|$. In higher dimensions, we show that for any positive integer $n$, there exists an ideal of $R$ that does not have an $n$-generated reduction, and that if $n \geq \dim R$ this ideal can be chosen to be $\mathfrak{m}$-primary.

Māris Ozols Aug 03 2017 09:34 UTC

If I'm not mistaken, what you describe here is equivalent to the [QR decomposition][1]. The matrices $R_{ij}$ that act non-trivially only in a two-dimensional subspace are known as [Givens rotations][2]. The fact that any $n \times n$ unitary can be decomposed as a sequence of Givens rotations is ex

...(continued)
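To make the comment's point concrete, here is a minimal sketch of QR factorization by Givens rotations, where each $2\times 2$ rotation zeroes one subdiagonal entry (illustrative code, not taken from the paper under discussion):

```python
import numpy as np

def givens(a, b):
    # Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0].
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def qr_by_givens(A):
    # Reduce A to upper-triangular R by a sequence of Givens rotations,
    # accumulating them into an orthogonal Q so that A = Q @ R.
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            c, s = givens(A[i - 1, j], A[i, j])
            G = np.eye(m)                       # rotation in rows i-1, i
            G[[i - 1, i], [i - 1, i]] = c
            G[i - 1, i], G[i, i - 1] = s, -s
            A = G @ A                           # zeroes entry A[i, j]
            Q = Q @ G.T
    return Q, A

Q, R = qr_by_givens([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```

Since each rotation is unitary and acts non-trivially only on a two-dimensional subspace, chaining them as above is exactly the decomposition of a unitary (here, orthogonal) factor into Givens rotations that the comment describes.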
gae Jul 26 2017 21:19 UTC

For those interested in the literature on teleportation simulation of quantum channels, a detailed and *comprehensive* review is provided in Supplementary Note 8 of https://images.nature.com/original/nature-assets/ncomms/2017/170426/ncomms15043/extref/ncomms15043-s1.pdf
The note describes well the t

...(continued)
Maciej Malinowski Jul 26 2017 15:56 UTC

In what sense is the ground state for large detuning ordered and antiferromagnetic? I understand that there is symmetry breaking, but other than that, what is the fundamental difference between ground states for large negative and large positive detunings? It seems to be they both exhibit some order

...(continued)
Stefano Pirandola Jul 26 2017 15:28 UTC

The performance of the memory assisted MDI-QKD with "quasi-EPR" sources is remarkable. It improves the key rate by 5 orders of magnitude over the PLOB bound at about 600 km (take a look at Figure 4).

Māris Ozols Jul 26 2017 11:07 UTC

Conway's list still has four other $1000 problems left:

https://oeis.org/A248380/a248380.pdf

SHUAI ZHANG Jul 26 2017 00:20 UTC

I am still working on improving this survey. If you have any suggestions, questions or find any mistakes, please do not hesitate to contact me: shuai.zhang@student.unsw.edu.au.

Alvaro M. Alhambra Jul 24 2017 16:10 UTC

This paper has just been updated and we thought it would be a good idea to advertise it here. It was originally submitted a year ago, and it has now been essentially rewritten, with two new authors added.

We have fixed some of the original results and now we:
-Show how some fundamental theorem

...(continued)
Steve Flammia Jul 21 2017 13:43 UTC

Actually, there is even earlier work that shows this result. In [arXiv:1109.6887][1], Magesan, Gambetta, and Emerson showed that for any Pauli channel the diamond distance to the identity is equal to the trace distance between the associated Choi states. They prefer to phrase their results in terms

...(continued)
Stefano Pirandola Jul 21 2017 09:43 UTC

This is very interesting. In my reading list!