# Top arXiv papers

• We present a halo-independent determination of the unmodulated signal corresponding to the DAMA modulation if interpreted as due to dark matter weakly interacting massive particles (WIMPs). First we show how a modulated signal gives information on the WIMP velocity distribution function in the Galactic rest frame, from which the unmodulated signal follows. Then we perform a mathematically-sound profile likelihood analysis in which we profile the likelihood over a continuum of nuisance parameters (namely, the WIMP velocity distribution). As a first application of the method, which is very general and valid for any class of velocity distributions, we restrict the analysis to velocity distributions that are isotropic in the Galactic frame. In this way we obtain halo-independent maximum-likelihood estimates and confidence intervals for the DAMA unmodulated signal. We find that the estimated unmodulated signal is in line with expectations for a WIMP-induced modulation and is compatible with the DAMA background rate. Specifically, for the isotropic case we find that the modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.
• This paper studies physical layer security in a wireless ad hoc network with numerous legitimate transmitter-receiver pairs and eavesdroppers. A hybrid full-/half-duplex receiver deployment strategy is proposed to secure legitimate transmissions, by letting a fraction of legitimate receivers work in the full-duplex (FD) mode, sending jamming signals to confuse eavesdroppers while receiving their information, and letting the other receivers work in the half-duplex mode, simply receiving their desired signals. The objective of this paper is to properly choose the fraction of FD receivers to achieve the optimal network security performance. Both accurate expressions and tractable approximations for the connection outage probability and the secrecy outage probability of an arbitrary legitimate link are derived, based on which the area secure link number, the network-wide secrecy throughput and the network-wide secrecy energy efficiency are optimized, respectively. Various insights into the optimal fraction are further developed, and its closed-form expressions are also derived under perfect self-interference cancellation or in a dense network. It is concluded that the fraction of FD receivers triggers a non-trivial trade-off between reliability and secrecy, and that the proposed strategy can significantly enhance the network security performance.
• The edit distance between two rooted ordered trees with $n$ nodes labeled from an alphabet~$\Sigma$ is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. Tree edit distance is a well-known generalization of string edit distance. The fastest known algorithm for tree edit distance runs in cubic $O(n^3)$ time and is based on a dynamic programming solution similar to that for string edit distance. In this paper we show that a truly subcubic $O(n^{3-\varepsilon})$ time algorithm for tree edit distance is unlikely: For $|\Sigma| = \Omega(n)$, a truly subcubic algorithm for tree edit distance implies a truly subcubic algorithm for the all pairs shortest paths problem. For $|\Sigma| = O(1)$, a truly subcubic algorithm for tree edit distance implies an $O(n^{k-\varepsilon})$ algorithm for finding a maximum weight $k$-clique. Thus, while in terms of upper bounds string edit distance and tree edit distance are highly related, in terms of lower bounds string edit distance exhibits the hardness of the strong exponential time hypothesis [Backurs, Indyk STOC'15] whereas tree edit distance exhibits the hardness of all pairs shortest paths. Our result provides a matching conditional lower bound for one of the last remaining classic dynamic programming problems.
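The string edit distance dynamic program that tree edit distance generalizes can be sketched as follows. This is the classical quadratic baseline for strings only, shown for illustration; it is not the tree algorithm or any of the reductions from the paper:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(|a|*|b|) dynamic program for string edit distance
    with unit-cost insertions, deletions, and relabelings."""
    n, m = len(a), len(b)
    # dp[i][j] = minimum cost of transforming a[:i] into b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(m + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1  # relabel cost
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete a[i-1]
                           dp[i][j - 1] + 1,        # insert b[j-1]
                           dp[i - 1][j - 1] + sub)  # match/relabel
    return dp[n][m]

assert edit_distance("kitten", "sitting") == 3
```

The tree version must additionally decide how forests decompose, which is where the extra factor of $n$ (and, per the paper, the APSP-like hardness) comes from.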
• We consider the Cauchy problem for the damped wave equation under the initial state that the sum of an initial position and an initial velocity vanishes. When the initial position is non-zero, non-negative and compactly supported, we study the large time behavior of the spatial null, critical, maximum and minimum sets of the solution. The spatial null set becomes a smooth hyper-surface homeomorphic to a sphere after a large enough time. The spatial critical set has at least three points after a large enough time. The set of spatial maximum points escapes from the convex hull of the support of the initial position. The set of spatial minimum points consists of one point after a large time, and the unique spatial minimum point converges to the centroid of the initial position at time infinity.
• We investigate topologically enhanced localization and optical switching in the one-dimensional (1D) periodically driven Shockley model, theoretically and numerically. Transport properties of the model, arranged as a 1D photonic array of waveguides, are discussed. We find that a light beam propagating in such an array can be well localized under both periodic and open boundary conditions, thanks to the zero-energy and edge states that depend on the topological structure of the quasi-energy. Topological protection of the localization due to the edge state is demonstrated, based on which an optical switch with high efficiency is proposed.
• Existing strategies for finite-armed stochastic bandits mostly depend on a parameter of scale that must be known in advance. Sometimes this is in the form of a bound on the payoffs, or the knowledge of a variance or subgaussian parameter. The notable exceptions are the analyses of Gaussian bandits with unknown mean and variance by Cowan and Katehakis [2015] and of uniform distributions with unknown support [Cowan and Katehakis, 2015]. The results derived in these specialised cases are generalised here to the non-parametric setup, where the learner knows only a bound on the kurtosis of the noise, which is a scale-free measure of the extremity of outliers.
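Why a kurtosis bound can replace a known scale parameter is easy to verify numerically: kurtosis is invariant under affine rescaling, unlike the variance. A minimal sketch (this only illustrates the scale-free property, not the bandit strategy itself):

```python
import random

def kurtosis(xs):
    """Sample kurtosis E[(X-mu)^4] / sigma^4: a scale-free measure
    of the extremity of outliers."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 / m2 ** 2

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10000)]
rescaled = [5.0 * x + 3.0 for x in data]  # unknown scale and shift

# The fourth and squared-second central moments both pick up a factor
# of 5**4 under the rescaling, so their ratio is unchanged.
assert abs(kurtosis(data) - kurtosis(rescaled)) < 1e-6
```

For the Gaussian sample above the kurtosis is close to 3, its population value, regardless of the unknown scale.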
• We consider previous models of Timed, Probabilistic and Stochastic Timed Automata; we introduce our model of Timed Automata with Polynomial Delay, and we characterize the expressiveness of these models relative to each other.
• This paper introduces a Continuously Variable Series Reactor (CVSR) to the transmission expansion planning (TEP) problem. The CVSR is a FACTS-like device which has the capability of controlling the overall impedance of the transmission line. However, the cost of the CVSR is about one tenth of a similar rated FACTS device which potentially allows large numbers of devices to be installed. The multi-stage TEP with the CVSR considering the $N-1$ security constraints is formulated as a mixed integer linear programming model. The nonlinear part of the power flow introduced by the variable reactance is linearized by a reformulation technique. To reduce the computational burden for a practical large scale system, a decomposition approach is proposed. The detailed simulation results on the IEEE 24-bus and a more practical Polish 2383-bus system demonstrate the effectiveness of the approach. Moreover, the appropriately allocated CVSRs add flexibility to the TEP problem and allow reduced planning costs. Although the proposed decomposition approach cannot guarantee global optimality, a high level picture of how the network can be planned reliably and economically considering CVSR is achieved.
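The kind of reformulation used to linearize a product of a binary decision and a bounded continuous variable can be sketched with the standard big-M envelope. This is shown for illustration of the general technique and is not necessarily the paper's exact power-flow model:

```python
def product_envelope(b, x, L, U):
    """Linear constraints exactly representing z = b * x for binary b
    and a continuous x with L <= x <= U (standard big-M reformulation):
        L*b <= z <= U*b
        x - U*(1-b) <= z <= x - L*(1-b)
    Returns the interval of z values satisfying all four constraints."""
    lo = max(L * b, x - U * (1 - b))
    hi = min(U * b, x - L * (1 - b))
    return lo, hi

# With b = 1 the constraints pin z = x; with b = 0 they pin z = 0,
# so the nonlinear product never appears in the model.
assert product_envelope(1, 0.3, -1.0, 1.0) == (0.3, 0.3)
assert product_envelope(0, 0.3, -1.0, 1.0) == (0.0, 0.0)
```

In a MILP such as the TEP formulation, each such product is replaced by a fresh variable `z` plus these four linear inequalities.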
• In the discrete modelling of biological networks, Boolean variables are mainly used, and many tools and theorems are available for the analysis of Boolean models. However, multilevel variables are often required to account for threshold effects, to which knowledge of the Boolean case does not generalise straightforwardly. This motivated the development of conversion methods from multilevel to Boolean models. In particular, Van Ham's method has been shown to yield one-to-one, neighbour- and regulation-preserving dynamics, making it the de facto standard approach to the problem. However, Van Ham's method has several drawbacks: most notably, it introduces vast regions of "non-admissible" states that have no counterpart in the original multilevel model. This raises special difficulties for the analysis of interaction between variables and circuit functionality, which is believed to be central to the understanding of dynamic properties of logical models. Here, we propose a new multilevel-to-Boolean conversion method, with software implementation. Contrary to Van Ham's, our method does not yield a one-to-one transposition of multilevel trajectories; however, it maps each and every Boolean state to a specific multilevel state, thus getting rid of the non-admissible regions, at the expense of (apparently) more complicated, "parallel" trajectories. One of the prominent features of our method is that it preserves dynamics and interaction of variables in a certain manner. We apply our method to construct a new Boolean counter-example to the conjecture that a local negative circuit is necessary to generate sustained oscillations. Although other Boolean counter-examples have been found recently, this result illustrates the general relevance of our method for the study of multilevel logical models.
• Using 1-D non-Local-Thermodynamic-Equilibrium time-dependent radiative-transfer simulations, we study the ejecta properties required to match the early and late-time photometric and spectroscopic properties of supernovae (SNe) associated with long-duration gamma-ray bursts (LGRBs). To match the short rise time, narrow light curve peak, and extremely broad spectral lines of SN1998bw requires a model with <3Msun ejecta but a high explosion energy of a few 1e52erg and 0.5Msun of Ni56. However, the relatively high luminosity, the presence of narrow spectral lines of intermediate mass elements, and the low ionization at the nebular stage are matched with a more standard C-rich Wolf-Rayet (WR) star explosion, with an ejecta of >10Msun, an explosion energy >1e51erg, and only 0.1Msun of Ni56. As the two models are mutually exclusive, the breaking of spherical symmetry is essential to match the early/late photometric/spectroscopic properties of SN1998bw. This conclusion confirms the notion that the ejecta of SN1998bw is aspherical on large scales. More generally, with asphericity, the energetics and Ni56 mass of LGRB/SNe are reduced and their ejecta mass is increased, favoring a massive fast-rotating Wolf-Rayet star progenitor. Contrary to persisting claims in favor of the proto-magnetar model for LGRB/SNe, such progenitor/ejecta properties are compatible with collapsar formation. Hence, ejecta properties of LGRB/SNe inferred from 1D radiative-transfer modeling are fundamentally flawed.
• Identifying palindromes in sequences has been an interesting line of research in combinatorics on words and also in computational biology, after the discovery of the relation of palindromes in the DNA sequence with the HIV virus. Efficient algorithms for the factorization of sequences into palindromes and maximal palindromes have been devised in recent years. We extend these studies by allowing gaps in decompositions and errors in palindromes, and also by imposing a lower bound on the length of acceptable palindromes. We first present an algorithm for obtaining a palindromic decomposition of a string of length $n$ with the minimal total gap length in $O(ng \log n)$ time and $O(ng)$ space, where $g$ is the number of allowed gaps in the decomposition. We then consider a decomposition of the string into maximal $\delta$-palindromes (i.e. palindromes with $\delta$ errors under the edit or Hamming distance) and $g$ allowed gaps. We present an algorithm to obtain such a decomposition with the minimal total gap length in $O(n(g + \delta))$ time and $O(ng)$ space.
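As a baseline for the decompositions discussed above, a minimum palindromic factorization (no gaps, no errors, no length bound) can be computed with a simple quadratic dynamic program; the paper's algorithms are strictly more general and faster, so this is only an illustrative sketch:

```python
def min_palindromic_factors(s: str) -> int:
    """Minimum number of palindromic factors whose concatenation is s,
    via a simple O(n^2) dynamic program."""
    n = len(s)
    # is_pal[i][j]: s[i..j] is a palindrome (filled by expanding inward)
    is_pal = [[False] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(i, n):
            if s[i] == s[j] and (j - i < 2 or is_pal[i + 1][j - 1]):
                is_pal[i][j] = True
    INF = float("inf")
    dp = [INF] * (n + 1)  # dp[i] = min factors covering s[:i]
    dp[0] = 0
    for i in range(1, n + 1):
        for j in range(i):
            if is_pal[j][i - 1]:
                dp[i] = min(dp[i], dp[j] + 1)
    return dp[n]

assert min_palindromic_factors("abacdc") == 2  # "aba" + "cdc"
```

Allowing gaps and $\delta$ errors amounts to relaxing the `is_pal` predicate and adding a gap state to `dp`, which is where the extra machinery of the paper comes in.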
• Ambiguity and noise in natural language instructions create a significant barrier to adopting autonomous systems in safety-critical workflows involving humans and machines. In this paper, we propose to build on recent advances in electrophysiological monitoring methods and augmented reality technologies to develop alternative modes of communication between humans and robots involved in large-scale proximal collaborative tasks. We will first introduce augmented reality techniques for projecting a robot's intentions to its human teammate, who can interact with these cues to engage in real-time collaborative plan execution with the robot. We will then look at how electroencephalographic (EEG) feedback can be used to monitor human responses both to discrete events and to longer-term affective states during the execution of a plan. These signals can be used by a learning agent, a.k.a. an affective robot, to modify its policy. We will present an end-to-end system capable of demonstrating these modalities of interaction. We hope that the proposed system will inspire research on augmenting human-robot interactions with alternative forms of communication in the interests of safety, productivity, and fluency of teaming, particularly in engineered settings such as factory floors or assembly lines, where the use of such wearables can be enforced.
• We present a detailed analysis of the orbital stability of the Pais-Uhlenbeck oscillator, using Lie-Deprit series and Hamiltonian normal form theories. In particular, we explicitly describe the reduced phase space for this Hamiltonian system and give a proof for the existence of stable orbits for a certain class of self-interaction, found numerically in previous works.
• In the classic cake-cutting problem (Steinhaus, 1948), a heterogeneous resource has to be divided among n agents with different valuations in a proportional way --- giving each agent a piece with a value of at least 1/n of the total. In many applications, such as dividing a land-estate or a time-interval, it is also important that the pieces are connected. We propose two additional requirements: resource-monotonicity (RM) and population-monotonicity (PM). When either the cake or the set of agents changes and the cake is re-divided using the same rule, the utility of all remaining agents must change in the same direction. Classic cake-cutting protocols are neither RM nor PM. Moreover, we prove that no Pareto-optimal proportional division rule can be either RM or PM. Motivated by this negative result, we search for division rules that are weakly-Pareto-optimal --- no other division is strictly better for all agents. We present two such rules. The relative-equitable rule, which assigns the maximum possible relative value equal for all agents, is proportional and PM. The so-called rightmost-mark rule, which is an improved version of the Cut and Choose protocol, is proportional and RM for two agents.
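The two-agent rightmost-mark idea can be sketched numerically as follows. The allocation rule here is a simplified reading of the description above (cut at the rightmost half-value mark; the marker takes the right piece), and the helper names are illustrative; the paper's exact rule may differ in details:

```python
def piece_value(density, a, b, n=2000):
    """An agent's value of the interval [a, b]: midpoint-rule integral
    of the agent's valuation density."""
    if b <= a:
        return 0.0
    h = (b - a) / n
    return sum(density(a + (k + 0.5) * h) for k in range(n)) * h

def half_mark(density):
    """The mark x where [0, x] is worth exactly half the whole cake
    to this agent (found by bisection on [0, 1])."""
    total = piece_value(density, 0.0, 1.0)
    a, b = 0.0, 1.0
    for _ in range(50):
        m = 0.5 * (a + b)
        if piece_value(density, 0.0, m) < 0.5 * total:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

def rightmost_mark_divide(d1, d2):
    """Cut at the rightmost of the two half-value marks; the agent who
    made that mark takes [cut, 1], the other agent takes [0, cut]."""
    x1, x2 = half_mark(d1), half_mark(d2)
    cut = max(x1, x2)
    right_taker = 1 if x1 >= x2 else 2
    return cut, right_taker

# Agent 1 values the cake uniformly; agent 2 prefers the right end.
cut, right_taker = rightmost_mark_divide(lambda x: 1.0, lambda x: 2.0 * x)
assert right_taker == 2
# Both connected pieces are proportional (worth >= 1/2 to their owner):
assert piece_value(lambda x: 1.0, 0.0, cut) >= 0.5 - 1e-6
assert piece_value(lambda x: 2.0 * x, cut, 1.0) >= 0.5 - 1e-3
```

Proportionality holds because the right piece is worth exactly half to its taker, while the left piece contains the other agent's own half-mark interval and is therefore worth at least half to them.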
• Examples of "enhanced ultraviolet cancellations" with no known standard-symmetry explanation have been found in a variety of supergravity theories. By examining one- and two-loop examples in four- and five-dimensional half-maximal supergravity, we argue that enhanced cancellations in general cannot be exhibited prior to integration. In light of this, we explore reorganizations of integrands into parts that are manifestly finite and parts that have poor power counting but integrate to zero due to integral identities. At two loops we find that in the large loop-momentum limit the required integral identities follow from Lorentz and SL(2) relabeling symmetry. We carry out a nontrivial check at four loops showing that the identities generated in this way are a complete set. We propose that at $L$ loops the combination of Lorentz and SL($L$) symmetry is sufficient for displaying enhanced cancellations when they happen, whenever the theory is known to be ultraviolet finite up to $(L-1)$ loops.
• This work is motivated by the search for new materials for detecting ionizing radiation. Ternary alkaline-earth halide systems doped with rare-earth ions are promising scintillators, showing high efficiency and energy resolution. Some aspects of crystal growth and data on the structural and luminescence properties of BaBrI and BaClI doped with low concentrations of $\mathrm{Eu^{2+}}$ ions are reported. The crystals are grown by the vertical Bridgman method in sealed quartz ampoules. New crystallographic data for the BaClI single crystal, obtained by the single-crystal X-ray diffraction method, are presented in this paper. Emission, excitation and optical absorption spectra, as well as luminescence decay kinetics, are studied under excitation by X-ray, vacuum-ultraviolet and ultraviolet radiation. The energies of the first 4f-5d transition in $\mathrm{Eu^{2+}}$ and the band gaps of the crystals have been obtained. We have calculated the electronic band structure of the crystals using density functional theory as implemented in an *ab initio* code. The calculated band gap energies are in accord with the experimental estimates. The energy gap between the occupied Eu$^{2+}$ 4f level and the top of the valence band is predicted. In addition, the positions of lanthanide energy levels relative to the valence band have been constructed using the chemical shift model.
• Fringes often appear in CCD frames, especially when a thin CCD chip and an R or I filter are used. 88 CCD frames of the two open clusters NGC 2324 and NGC 1664, taken with a Johnson I filter on the 2.4-m telescope at Yunnan Observatory, are used to study the impact of fringes on the astrometry and photometry of stars. A technique proposed by Snodgrass & Carry is applied to remove the fringes in each CCD frame, and an appraisal of this technique is performed to estimate the fringes' effects on the astrometry and photometry of stars. Our results show that the astrometric and photometric precision of stars can be improved effectively after the removal of fringes, especially for faint stars.
• We survey the recent progress in the study of heat kernels for a class of non-symmetric non-local operators. We focus on the existence and sharp two-sided estimates of the heat kernels and their connection to jump diffusions.
• In the first section of the present work, we introduce the concept of pseudocomplementation for semirings and prove semiring versions of some known results from lattice theory. We also introduce semirings with pc-functions and prove some interesting results for minimal prime ideals of such semirings. In the second section, some classical results for minimal prime ideals in ring theory are generalized in the context of semiring theory.
• The doctrine of double effect ($\mathcal{DDE}$) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate $\mathcal{DDE}$. We briefly present $\mathcal{DDE}$, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: One can use it to build $\mathcal{DDE}$-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is $\mathcal{DDE}$-compliant, by applying a $\mathcal{DDE}$ layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the $\mathcal{DDE}$ layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by presenting initial work on how one can apply our $\mathcal{DDE}$ layer to the STRIPS-style planning model, and to a modified POMDP model.
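The verification mode described above can be caricatured with a toy check of simplified $\mathcal{DDE}$ conditions over an action description. The field names and the three conditions below are illustrative assumptions for the sketch, not the paper's deontic cognitive event calculus:

```python
def dde_permissible(action):
    """Toy check of three (simplified) double-effect conditions on a
    STRIPS-like action description given as a dict."""
    # 1. The action itself is not intrinsically forbidden.
    if action["forbidden"]:
        return False
    # 2. The harm is a side effect, never the means to the good effect.
    if action["harm_is_means"]:
        return False
    # 3. Proportionality: the good effect outweighs the harm.
    return action["good_utility"] > action["harm_utility"]

# Classic trolley-style contrast: diverting the trolley (harm as side
# effect) passes the check; pushing a bystander (harm as means) fails.
divert = {"forbidden": False, "harm_is_means": False,
          "good_utility": 5, "harm_utility": 1}
push = {"forbidden": False, "harm_is_means": True,
        "good_utility": 5, "harm_utility": 1}
assert dde_permissible(divert)
assert not dde_permissible(push)
```

In the paper's setting these conditions are stated in a first-order modal logic over intentions and effects; the dict fields here stand in for the "few parameters" an underlying system would need to expose.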
• Experiments using nuclei to probe new physics beyond the Standard Model, such as neutrinoless $\beta\beta$ decay searches testing whether neutrinos are their own antiparticle, and direct detection experiments aiming to identify the nature of dark matter, require accurate nuclear physics input for optimizing their discovery potential and for a correct interpretation of their results. This demands a detailed knowledge of the nuclear structure relevant for these processes. For instance, neutrinoless $\beta\beta$ decay nuclear matrix elements are very sensitive to the nuclear correlations in the initial and final nuclei, and the spin-dependent nuclear structure factors of dark matter scattering depend on the subtle distribution of the nuclear spin among all nucleons. In addition, nucleons are composite and strongly interacting, which implies that many-nucleon processes are necessary for a correct description of nuclei and their interactions. It is thus crucial that theoretical studies and experimental analyses consider $\beta$ decays and dark matter interactions with a coupling to two nucleons, called two-nucleon currents.
• In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE (LASSO+LSE) method, which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have gained insights into learning in a rat engaged in a non-spatial memory task.
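The two-step idea (sparse selection, then least-squares refitting restricted to the selected support) can be illustrated on a scalar AR(2) toy problem. Hard-thresholding below is a crude stand-in for the LASSO step, and everything here is a sketch under stated assumptions rather than the paper's LASSLE method:

```python
import random

def ols2(X, y):
    """Closed-form least squares with two predictors (normal equations)."""
    s11 = sum(a * a for a, _ in X)
    s12 = sum(a * b for a, b in X)
    s22 = sum(b * b for _, b in X)
    r1 = sum(a * t for (a, _), t in zip(X, y))
    r2 = sum(b * t for (_, b), t in zip(X, y))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

random.seed(1)
# Toy scalar AR(2): y_t = 0.6 * y_{t-1} + 0.0 * y_{t-2} + noise,
# i.e. the lag-2 coefficient is truly zero.
y = [0.0, 0.0]
for _ in range(5000):
    y.append(0.6 * y[-1] + random.gauss(0.0, 0.1))

X = [(y[t - 1], y[t - 2]) for t in range(2, len(y))]
targets = [y[t] for t in range(2, len(y))]

# Step 1 (sparsity): estimate, then hard-threshold small coefficients,
# a crude stand-in for the LASSO selection step.
a1, a2 = ols2(X, targets)
support = [abs(a1) > 0.1, abs(a2) > 0.1]  # expect [True, False]

# Step 2 (refit): ordinary least squares restricted to the selected
# support, which reduces the shrinkage bias of the first step.
refit = (sum(x * t for (x, _), t in zip(X, targets))
         / sum(x * x for x, _ in X))
```

In the paper the same scheme is applied per equation of a high-order, high-dimensional VAR, with a genuine LASSO in step one and constrained least squares in step two.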
• We consider the question of how geometric structures of a Deligne-Mumford stack affect its Gromov-Witten invariants. The two geometric structures studied here are *gerbes* and *root constructions*. In both cases, we explain conjectures on Gromov-Witten theory for these stacks and survey some recent progress on these conjectures.
• Estimating output changes from input changes is the main task in causal analysis. In previous work, input and output Self-Organizing Maps (SOMs) were associated for causal analysis of multivariate and nonlinear data. Based on the association, a weight distribution of the output conditional on a given input was obtained over the output map space. Such a weighted SOM pattern of the output changes when the input changes. In order to analyze the change, it is important to measure the difference between the patterns. Many methods have been proposed for measuring the dissimilarity of patterns; however, measuring how the patterns change remains a major challenge. In this paper, we propose a visualization approach that simplifies the comparison of differences in terms of pattern properties. Using this approach, the change can be analyzed by integrating colors and star glyph shapes representing the property dissimilarity. Ecological data is used to demonstrate the usefulness of our approach, and the experimental results show that our approach provides the change information effectively.
• In this article, we reformulate the cobordism map of embedded contact homology, which is induced by an exact symplectic cobordism and defined as a direct limit of homomorphisms called filtered ECH cobordism maps. The filtered ECH cobordism map is defined by counting embedded holomorphic curves with zero ECH index, and we prove via Seiberg-Witten theory that it is independent of the almost complex structure. Moreover, our definition is in fact equivalent to the existing one.
• The dynamics of a mosquito population depends heavily on climatic variables such as temperature and precipitation. Since climate change models predict that global warming will impact the frequency and intensity of rainfall, it is important to understand how these variables affect mosquito populations. We present a model of the dynamics of a *Culex quinquefasciatus* mosquito population that incorporates the effect of rainfall, and use it to study the influence of the number of rainy days and the mean monthly precipitation on the maximum yearly abundance of mosquitoes $M_{max}$. Additionally, using a fracturing process, we investigate the influence of the variability in daily rainfall on $M_{max}$. We find that, given a constant value of monthly precipitation, there is an optimum number of rainy days for which $M_{max}$ is a maximum. On the other hand, we show that increasing daily rainfall variability reduces the dependence of $M_{max}$ on the number of rainy days, leading also to a higher abundance of mosquitoes for the case of low mean monthly precipitation. Finally, we explore the effect of the rainfall in the months preceding the wettest season, and we find that a regime with high precipitation throughout the year and higher variability tends to slightly advance the time at which the peak mosquito abundance occurs, but could significantly change the total mosquito abundance in a year.
• Morphological transformations of amphiphilic AB diblock copolymers in mixtures of a common solvent (S1) and a selective solvent (S2) for the B block are studied using the simulated annealing method. We focus on the morphological transformation depending on the fraction of the selective solvent $C_{S2}$, the concentration of the polymer $C_P$, and the polymer-solvent interactions $\epsilon_{ij}$ ($i$ = A, B; $j$ = S1, S2). Morphology diagrams are constructed as functions of $C_P$, $C_{S2}$, and/or $\epsilon_{AS2}$. The copolymer morphological sequence dissolved -> sphere -> rod -> ring/cage -> vesicle is obtained upon increasing $C_{S2}$ at a fixed $C_P$. This morphology sequence is consistent with previous experimental observations. It is found that the selectivity of the selective solvent affects the self-assembled microstructure significantly. In particular, when the interaction $\epsilon_{BS2}$ is negative, aggregates of stacked lamellae dominate the diagram. The mechanisms of aggregate transformation and the formation of stacked lamellar aggregates are discussed by analyzing variations of the average contact numbers of the A or B monomers with monomers and with molecules of the two types of solvent, as well as the mean square end-to-end distances of chains. It is found that the basic morphological sequence of spheres to rods to vesicles and the stacked lamellar aggregates result from competition between the interfacial energy and the chain conformational entropy. Analysis of the vesicle structure reveals that the vesicle size increases with increasing $C_P$ or with decreasing $C_{S2}$, but remains almost unchanged with variations in $\epsilon_{AS2}$.
• A light stop around the weak scale is a hopeful messenger of natural supersymmetry (SUSY), but it has not shown up at the current stage of the LHC. This situation raises questions about the fate of natural SUSY. Actually, a relatively light stop can easily be hidden in a compressed spectrum, with a mild mass degeneracy between the stop and the neutralino plus a top quark. Searching for such a stop at the LHC is a challenge. On the other hand, by the argument of natural SUSY, other members of the stop sector, including a heavier stop $\tilde{t}_2$ and a lighter sbottom $\tilde{b}_1$ (both assumed to be left-handed-like), are also supposed to be relatively light, so searching for them provides an alternative way to probe natural SUSY with a compressed spectrum. In this paper we consider quasi-natural SUSY, which tolerates relatively heavy colored partners near the TeV scale, with a moderately large mass gap between the heavier members and the lightest stop. Then $W/Z/h$ bosons, as companions of $\tilde{t}_2$ and $\tilde{b}_1$ decaying into $\tilde{t}_1$, are generically well boosted, and they, along with other visible particles from the $\tilde{t}_1$ decay, are good probes of the compressed SUSY. We find that the resulting search strategy with boosted bosons can have better sensitivity than those utilizing multi-leptons.
• We theoretically study the interplay between bulk Weyl electrons and magnetic topological defects, including magnetic domains, domain walls, and $\mathbb{Z}_6$ vortex lines, in the antiferromagnetic Weyl semimetals Mn$_3$Sn and Mn$_3$Ge with negative vector chirality. We argue that these materials possess a hierarchy of energy scales which allows a description of the spin structure and spin dynamics using an XY model with $\mathbb{Z}_6$ anisotropy. We propose a dynamical equation of motion for the XY order parameter, which implies the presence of $\mathbb{Z}_6$ vortex lines, the double-domain pattern in the presence of magnetic fields, and the ability to control domains with current. We also introduce a minimal electronic model which allows efficient calculation of the electronic structure in the antiferromagnetic configuration, unveiling Fermi arcs at domain walls and sharp quasi-bound states at $\mathbb{Z}_6$ vortices. Moreover, we show how these materials may allow electronic-based imaging of the antiferromagnetic microstructure, and propose a possible device based on the domain-dependent anomalous Hall effect.
• The Dirac semimetal phase found in Cd$_{3}$As$_{2}$ is protected by a $C_{4}$ rotational symmetry derived from a corkscrew arrangement of systematic Cd vacancies in its complicated crystal structure. It is therefore surprising that no microscopic observation, direct or indirect, of these systematic vacancies has so far been described. To this end, we revisit the cleaved (112) surface of Cd$_{3}$As$_{2}$ using a combined approach of scanning tunneling microscopy and *ab initio* calculations. We determine the exact position of the (112) plane at which Cd$_{3}$As$_{2}$ naturally cleaves, and describe in detail a structural periodicity found at the reconstructed surface, consistent with that expected to arise from the systematic Cd vacancies. This reconciles the current state of microscopic surface observations with those of crystallographic and theoretical models, and demonstrates that this vacancy superstructure, central to the preservation of the Dirac semimetal phase, survives the cleavage process and retains order at the surface.
• We prove that for $n\geq 2$, the Lipschitz homotopy group $\pi_{n+1}^{\rm lip}(\mathbb{H}^n)$ of the Heisenberg group $\mathbb{H}^n$ is nontrivial.
• The Baumslag-Solitar group is an example of an HNN extension. Spielberg showed that it has a natural positive cone, and that it is then a quasi-lattice ordered group in the sense of Nica. We give conditions for an HNN extension of a quasi-lattice ordered group $(G,P)$ to be quasi-lattice ordered. In that case, if $(G,P)$ is amenable as a quasi-lattice ordered group, then so is the HNN extension.
• In the last few decades, the development of miniature biological sensors that can detect and measure different phenomena at the nanoscale has led to transformative disease diagnosis and treatment techniques. Among others, biofunctional Raman nanoparticles have been utilized in vitro and in vivo for multiplexed diagnosis and detection of different biological agents. However, existing solutions require the use of bulky lasers to excite the nanoparticles and similarly bulky and expensive spectrometers to measure the scattered Raman signals, which limit the practicality and applications of this nano-biosensing technique. In addition, due to the high path loss of the intra-body environment, the received signals are usually very weak, which hampers the accuracy of the measurements. In this paper, the concept of cooperative Raman spectrum reconstruction for real-time in vivo nano-biosensing is presented for the first time. The fundamental idea is to replace the single excitation and measurement points (i.e., the laser and the spectrometer, respectively) by a network of interconnected nano-devices that can simultaneously excite and measure nano-biosensing particles. More specifically, in the proposed system a large number of nanosensors jointly and distributively collect the Raman response of nano-biofunctional nanoparticles (NBPs) traveling through the blood vessels. This paper presents a detailed description of the sensing system and, more importantly, proves its feasibility, by utilizing accurate models of optical signal propagation in intra-body environment and low-complexity estimation algorithms. The numerical results show that with a certain density of NBPs, the reconstructed Raman spectrum can be recovered and utilized to accurately extract the targeting intra-body information.
• We present WPaxos, a multileader wide area network (WAN) Paxos protocol that achieves low-latency, high-throughput consensus across WAN deployments. WPaxos dynamically partitions the global object-space across multiple concurrent leaders that are deployed strategically using flexible quorums. This partitioning and emphasis on local operations allow our protocol to significantly outperform leaderless approaches, such as EPaxos, while maintaining the same consistency guarantees. Unlike statically partitioned multiple Paxos deployments, WPaxos adapts dynamically to changing access locality through adaptive object stealing. The ability to react quickly to changing access locality not only speeds up the protocol, but also enables support for mini-transactions. We implemented WPaxos and evaluated it across WAN deployments using the benchmarks introduced in the EPaxos work. Our results show that WPaxos achieves up to 18 times lower average request latency and 65 times lower median latency than EPaxos due to the reduction in WAN communication.
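The flexible-quorums idea underlying WPaxos relaxes the classic "two majorities" rule: a phase-1 (leader election) quorum and a phase-2 (replication) quorum only need to intersect each other. A minimal brute-force sketch of the cardinality version of this condition (|Q1| + |Q2| > N, as in FPaxos; WPaxos's actual grid quorums are more structured):

```python
from itertools import combinations

def quorums_intersect(n_nodes, q1_size, q2_size):
    """Check exhaustively that every size-q1 subset of nodes
    intersects every size-q2 subset."""
    nodes = range(n_nodes)
    return all(set(a) & set(b)
               for a in combinations(nodes, q1_size)
               for b in combinations(nodes, q2_size))

# Pigeonhole: q1 + q2 > n guarantees intersection; q1 + q2 <= n does not.
assert quorums_intersect(n_nodes=5, q1_size=4, q2_size=2)       # 4 + 2 > 5
assert not quorums_intersect(n_nodes=5, q1_size=2, q2_size=3)   # 2 + 3 = 5
print("ok")
```

Shrinking the phase-2 quorum (at the cost of a larger phase-1 quorum) is what lets a WAN leader commit with nearby replicas only, cutting cross-region round trips.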
• We construct a form of the swallowtail singularity in R^3 using coordinate transformations on the source and isometries on the target. As an application, we classify the configurations of asymptotic curves and characteristic curves near a swallowtail.
• We study the topological configurations of the lines of principal curvature, the asymptotic and characteristic curves on a cuspidal edge, in the domain of a parametrization of this surface as well as on the surface itself. Such configurations are determined by the 3-jets of a parametrization of the surface.
• One of the challenges of analyzing, testing and debugging Android apps is that the potential execution orders of callbacks are missing from the apps' source code. However, bugs, vulnerabilities and refactoring transformations have been found to be related to callback sequences. Existing work on control flow analysis of Android apps has mainly focused on analyzing GUI events. GUI events, although a key part of determining the control flow of Android apps, do not offer a complete picture. Our observation is that, orthogonal to GUI events, Android API calls also play an important role in determining the order of callbacks. In the past, such control flow information has been modeled manually. This paper presents a complementary solution for constructing program paths for Android apps. We propose a specification technique, called Predicate Callback Summary (PCS), that represents the callback control flow information (including callback sequences as well as the conditions under which the callbacks are invoked) in Android API methods, and we develop static analysis techniques to automatically compute and apply such summaries to construct apps' callback sequences. Our experiments show that by applying PCSs, we are able to construct Android apps' control flow graphs, including inter-callback relations, and also to detect infeasible paths involving multiple callbacks. Such control flow information can help program analysis and testing tools report more precise results. Our detailed experimental data is available at: http://goo.gl/NBPrKs
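The core of a PCS is a mapping from guard predicates to the callback sequences an API method triggers. A hypothetical sketch of such a structure (the class name, predicates, and the `AlertDialog.show` entries below are illustrative, not the paper's actual summary format):

```python
from dataclasses import dataclass, field

@dataclass
class PCS:
    """Hypothetical sketch of a Predicate Callback Summary: for one Android
    API method, pair each guard predicate with the callbacks it triggers."""
    api_method: str
    entries: list = field(default_factory=list)  # (predicate, [callback, ...])

    def callbacks_under(self, state: dict):
        """Concatenate the callback sequences whose predicates hold in `state`."""
        seq = []
        for predicate, callbacks in self.entries:
            if predicate(state):
                seq.extend(callbacks)
        return seq

# Toy summary: which callbacks fire from a dialog's show() depends on
# whether a cancel listener was registered (predicates are illustrative).
pcs = PCS("AlertDialog.show", [
    (lambda s: True, ["onShow"]),
    (lambda s: s.get("cancel_listener"), ["onCancel"]),
])
print(pcs.callbacks_under({"cancel_listener": True}))   # ['onShow', 'onCancel']
print(pcs.callbacks_under({}))                          # ['onShow']
```

A static analysis would evaluate the predicates symbolically against the app's state rather than on a concrete dictionary, pruning callback orders whose guards cannot hold — which is how infeasible inter-callback paths get detected.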
• We search for type 1 AGNs among emission-line galaxies that are typically classified as type 2 AGNs based on emission line flux ratios when a broad component in the H$\alpha$ line profile is not properly investigated. Using ~24,000 type 2 AGNs at z $<$ 0.1 initially selected from Sloan Digital Sky Survey Data Release 7 by Bae et al. (2014), we identify a sample of 611 type 1 AGNs based on the spectral fitting results and visual inspection. These hidden type 1 AGNs have relatively low luminosity, with a mean broad H$\alpha$ luminosity log L$_{\rm H\alpha}$ $=$ 40.73$\pm$0.32 erg s$^{-1}$, and low Eddington ratio, with a mean log L$_{\rm bol}$/L$_{\rm Edd}$ $=$ $-$2.04$\pm$0.34, while they do follow the black hole mass - stellar velocity dispersion relation defined by the inactive galaxies and the reverberation-mapped type 1 AGNs. We investigate ionized gas outflows based on the [OIII] $\lambda$5007 kinematics, which show relatively high velocity dispersion and velocity shift, indicating that the line-of-sight velocity and velocity dispersion of the ionized gas in these type 1 AGNs are on average larger than those of type 2 AGNs.
• Coded caching is a technique that reduces the load during peak traffic times in a wireless network system. The placement delivery array (PDA for short) was first introduced by Yan et al. and can be used to design coded caching schemes. In this paper, we prove some lower bounds for PDAs on the number of elements and some lower bounds on the number of columns. We also give some constructions of optimal PDAs.
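A PDA's defining conditions can be checked mechanically. A small sketch verifying them (paraphrased from Yan et al.: '*' entries mark cached content, integers index multicast transmissions) on the classic 3-user example:

```python
def is_pda(P, F, K, Z, S):
    """Check the defining conditions of an (F, K, Z, S) placement delivery array:
    (C1) every column contains exactly Z stars;
    (C2) the integers appearing are exactly 1..S;
    (C3) if P[j1][k1] == P[j2][k2] == s for distinct cells, the cells lie in
         distinct rows and columns, and both 'opposite corner' cells
         P[j1][k2] and P[j2][k1] are stars."""
    star = "*"
    if any(sum(P[j][k] == star for j in range(F)) != Z for k in range(K)):
        return False
    ints = {P[j][k] for j in range(F) for k in range(K)} - {star}
    if ints != set(range(1, S + 1)):
        return False
    cells = [(j, k) for j in range(F) for k in range(K) if P[j][k] != star]
    for i, (j1, k1) in enumerate(cells):
        for (j2, k2) in cells[i + 1:]:
            if P[j1][k1] == P[j2][k2]:
                if j1 == j2 or k1 == k2:
                    return False
                if P[j1][k2] != star or P[j2][k1] != star:
                    return False
    return True

# 3 users, cache ratio 1/3: F = 3 packets per file, Z = 1 star per column,
# S = 3 transmissions (corresponds to the Maddah-Ali--Niesen scheme).
P = [["*", 1, 2],
     [1, "*", 3],
     [2, 3, "*"]]
print(is_pda(P, F=3, K=3, Z=1, S=3))  # True
```

Condition (C3) is what makes every transmission decodable: each user involved in transmission s already caches the other users' requested packets, so it can cancel them out.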
• Using a variational Monte Carlo method, we study the competition between strong electron-electron and electron-phonon interactions in the ground state of the Holstein-Hubbard model on a square lattice. At half filling, an extended intermediate metallic or weakly superconducting (SC) phase emerges, sandwiched between antiferromagnetic (AF) and charge order (CO) insulating phases. Upon carrier doping of the CO insulator, the SC order dramatically increases for strong electron-phonon couplings, but is largely hampered by wide phase separation (PS) regions. Superconductivity is optimized at the border of the PS region.
• This paper is devoted to distributed optimization problems using continuous algorithms with nonuniform convex constraint sets and nonuniform step-sizes for general differentiable convex objective functions. The communication graphs are not required to be strongly connected at any time, the gradients of the local objective functions are not required to be bounded when their independent variables tend to infinity, and the constraint sets are not required to be bounded. For continuous-time multi-agent systems, a distributed continuous-time algorithm is first introduced for the case where the step-sizes are nonuniform. Then, a distributed continuous-time algorithm is introduced for the case where nonuniform step-sizes and nonuniform convex constraint sets coexist. It is shown that all agents reach a consensus while minimizing the team objective function, in both the absence and the presence of nonuniform convex constraint sets, even when the constraint sets are unbounded. After that, the obtained results are extended to discrete-time multi-agent systems, both for the case without constraints and for the case where each agent remains in a corresponding convex constraint set and the constraint sets might be unbounded. To ensure that all agents remain in a bounded region, a switching mechanism is introduced into the algorithms. It is shown that the distributed optimization problems can be solved, even though discretizing the algorithms might cause the agents' convergence to deviate from the minimization of the objective functions. Numerical examples are included to illustrate the obtained theoretical results.
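A toy discrete-time sketch in the spirit of this class of algorithms: each agent mixes with its neighbors, takes a local gradient step with its own (nonuniform) step-size, and projects onto its constraint set. The objective functions, mixing weights, and constraint intervals below are illustrative choices, not the paper's exact algorithm or assumptions.

```python
import numpy as np

# Agent i minimizes f_i(x) = (x - c_i)^2 over x in X_i = [lo_i, hi_i];
# the team objective is sum_i f_i over the intersection of the X_i.
c = np.array([0.0, 2.0, 4.0, 6.0])             # local minimizers
lo, hi = np.array([-10.0] * 4), np.array([10.0] * 4)
W = np.array([[0.5, 0.5, 0.0, 0.0],            # doubly stochastic mixing matrix
              [0.5, 0.25, 0.25, 0.0],          # (path-graph communication)
              [0.0, 0.25, 0.25, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
alpha = np.array([0.05, 0.06, 0.06, 0.05])     # nonuniform step-size gains

x = np.array([5.0, -3.0, 8.0, 0.0])            # arbitrary initial states
for k in range(2000):
    step = alpha / np.sqrt(k + 1)              # diminishing step-sizes
    grad = 2.0 * (x - c)                       # local gradients only
    x = np.clip(W @ x - step * grad, lo, hi)   # mix, descend, project

print(x.round(2))
```

With these symmetric gains the agents reach consensus near the team minimizer (here x* = 3, the mean of the c_i); with generic nonuniform gains the discrete-time limit is a weighted minimizer instead, which is exactly the discretization-induced deviation the paper's switching mechanism addresses.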
• In this paper, a distributed optimization problem with general differentiable convex objective functions is studied for single-integrator and double-integrator multi-agent systems. Two distributed adaptive optimization algorithms are introduced, which use relative information to construct the gains of the interaction terms. The analysis is based on Lyapunov functions, analysis of the system solutions, and the convexity of the local objective functions. It is shown that if the gradients of the convex objective functions are continuous, the team convex objective function can be minimized as time evolves for both single-integrator and double-integrator multi-agent systems. Numerical examples are included to illustrate the obtained theoretical results.
• Number counts observations available with new surveys such as the Euclid mission will be an important source of information about the metric of the Universe. We compute the low-redshift expansion for the energy density and the density contrast using an exact spherically symmetric solution in the presence of a cosmological constant. At low redshift the expansion is more precise than the linear perturbation theory prediction. We then use the local expansion to reconstruct the monopole component of the metric from the monopole of the density contrast. We test the inversion method using numerical calculations and find good agreement within the regime of validity of the redshift expansion. The method could be applied to observational data to reconstruct the metric of the local Universe with a level of precision higher than that achievable using perturbation theory.
• In three-dimensional spacetime with a negative cosmological constant, general relativity can be written as two copies of SO$(2,1)$ Chern-Simons theory. On a manifold with boundary, the Chern-Simons theory induces a conformal field theory, a WZW theory, on the boundary. In this paper, it is shown that with suitable boundary conditions for the BTZ black hole, the WZW theory reduces to a massless scalar field on the horizon.
• We theoretically and experimentally investigate colloid-oil-water-interface interactions of charged, sterically stabilized, poly(methyl-methacrylate) colloidal particles dispersed in a low-polar oil (dielectric constant $\epsilon=5-10$) that is in contact with an adjacent water phase. In this model system, the colloidal particles cannot penetrate the oil-water interface due to repulsive van der Waals forces with the interface whereas the multiple salts that are dissolved in the oil are free to partition into the water phase. The sign and magnitude of the Donnan potential and/or the particle charge is affected by these salt concentrations such that the effective interaction potential can be highly tuned. Both the equilibrium effective colloid-interface interactions and the ion dynamics, within a Poisson-Nernst-Planck theory, are explored, and compared to experimental observations.
• In this paper, we estimate the shifted convolution sum $\sum_{n\geqslant 1}\lambda_1(1,n)\lambda_2(n+h)V\big(\frac{n}{X}\big)$, where $V$ is a smooth function with support in $[1,2]$, $1\leqslant|h|\leqslant X$, and $\lambda_1(1,n)$ and $\lambda_2(n)$ are the $n$-th Fourier coefficients of $SL(3,\mathbf{Z})$ and $SL(2,\mathbf{Z})$ Hecke-Maass cusp forms, respectively. We prove an upper bound $O(X^{\frac{21}{22}+\varepsilon})$, improving a recent result of Munshi.
• We show the existence of a finite group $G$ having an irreducible character $\chi$ with Frobenius-Schur indicator $\nu_2(\chi){=}{+}1$ such that $\chi^2$ has an irreducible constituent $\varphi$ with $\nu_2(\varphi){=}{-}1$. This provides counterexamples to the positivity conjecture in rational CFT and a conjecture of Zhenghan Wang about pivotal fusion categories.
• We introduce and classify the objects that appear in the title of the paper. (math.AG, arXiv:1703.08889, Mar 28 2017)
• An optimized interatomic potential has been constructed for silicon using a modified Tersoff model. The potential reproduces a wide range of properties of Si and improves over existing potentials with respect to point defect structures and energies, surface energies and reconstructions, thermal expansion, melting temperature and other properties. The proposed potential is compared with three other potentials from the literature. The potentials demonstrate reasonable agreement with first-principles binding energies of small Si clusters as well as single-layer and bilayer silicenes. The four potentials are used to evaluate the thermal stability of free-standing silicenes in the form of nano-ribbons, nano-flakes and nano-tubes. While single-layer silicene is mechanically stable at zero Kelvin, it is predicted to become unstable and collapse at room temperature. By contrast, the bilayer silicene demonstrates a larger bending rigidity and remains stable at and even above room temperature. The results suggest that bilayer silicene might exist in a free-standing form at ambient conditions.
• We present a framework to calculate large deviations for nonlinear functions of independent random variables supported on compact sets in Banach spaces, extending the result of Chatterjee and Dembo [6]. Previous research on nonlinear large deviations has focused only on random variables supported on $\{-1,+1\}^{n}$, a small subset of the random objects commonly studied, so it is of natural interest to develop the corresponding theory for random variables with general distributions. Since our results put fewer constraints on the random variables, they offer considerable flexibility in application. To show this, we provide examples with continuous and high-dimensional random variables. Our framework can also be used to verify the mathematical rigor of the mean-field approximation method; as a demonstration, we verify the mean-field approximation for a class of spin vector models.

Laura Mančinska Mar 28 2017 13:09 UTC

Great result!

For those familiar with I_3322, William here gives an example of a nonlocal game exhibiting a behaviour that many of us suspected (but couldn't prove) to be possessed by I_3322.

gae spedalieri Mar 13 2017 14:13 UTC

1) Sorry but this is false.

1a) That analysis is specifically for reducing QECC protocol to an entanglement distillation protocol over certain class of discrete variable channels. Exactly as in BDSW96. Task of the protocol is changed in the reduction.

1b) The simulation is not via a general LOCC b

...(continued)
Siddhartha Das Mar 13 2017 13:22 UTC

We feel that we have cited and credited previous works appropriately in our paper. To clarify:

1) The LOCC simulation of a channel and the corresponding adaptive reduction can be found worked out in full generality in the 2012 Master's thesis of Muller-Hermes. We have cited the original paper BD

...(continued)
gae spedalieri Mar 13 2017 08:56 UTC

This is one of those papers where the contribution of previous literature is omitted and not fairly represented.

1- the LOCC simulation of quantum channels (not necessarily teleportation based) and the corresponding general reduction of adaptive protocols was developed in PLOB15 (https://arxiv.org/

...(continued)
Noon van der Silk Mar 08 2017 04:45 UTC

I feel that while the proliferation of GUNs is unquestionably a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent

...(continued)
Qian Wang Mar 07 2017 17:21 UTC

"To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics."

Christopher Chamberland Mar 02 2017 18:48 UTC

A good paper for learning about exRec's is this one https://arxiv.org/abs/quant-ph/0504218. Also, rigorous threshold lower bounds are obtained using an adversarial noise model approach.

Anirudh Krishna Mar 02 2017 18:40 UTC

Here's a link to a lecture from Dan Gottesman's course at PI about exRecs.
http://pirsa.org/displayFlash.php?id=07020028

You can find all the lectures here:
http://www.perimeterinstitute.ca/personal/dgottesman/QECC2007/index.html

Ben Criger Mar 02 2017 08:58 UTC

Good point, I wish I knew more about ExRecs.

Robin Blume-Kohout Feb 28 2017 09:55 UTC

I totally agree -- that part is confusing. It's not clear whether "arbitrary good precision ... using a limited amount of hardware" is supposed to mean that arbitrarily low error rates can be achieved with codes of fixed size (clearly wrong) or just that the resources required to achieve arbitraril

...(continued)