# Top arXiv papers

• Interpretability has become an important issue as machine learning is increasingly used to inform consequential decisions. We propose an approach for interpreting a blackbox model by extracting a decision tree that approximates the model. Our model extraction algorithm avoids overfitting by leveraging blackbox model access to actively sample new training points. We prove that as the number of samples goes to infinity, the decision tree learned using our algorithm converges to the exact greedy decision tree. In our evaluation, we use our algorithm to interpret random forests and neural nets trained on several datasets from the UCI Machine Learning Repository, as well as control policies learned for three classical reinforcement learning problems. We show that our algorithm improves over a baseline based on CART on every problem instance. Furthermore, we show how an interpretation generated by our approach can be used to understand and debug these models.
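The core loop of such a distillation approach can be illustrated in miniature: actively sample fresh inputs, label them with the blackbox, and greedily fit a split. Everything below (`extract_stump`, the toy `blackbox`, the uniform `sampler`) is a hypothetical simplification for illustration; the paper's algorithm grows a full greedy tree with a convergence guarantee, not a single stump.

```python
import numpy as np

def extract_stump(blackbox, sample_inputs, n_samples=2000, seed=0):
    """Distill a blackbox binary classifier into a depth-1 decision
    stump: actively draw new inputs, label them by querying the
    blackbox, then greedily pick the best single split."""
    rng = np.random.default_rng(seed)
    X = sample_inputs(rng, n_samples)          # actively sample new points
    y = blackbox(X)                            # query blackbox for labels
    best = None
    for j in range(X.shape[1]):                # greedy split search
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 17)):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            # accuracy if each side predicts its majority label
            acc = (max(left.sum(), len(left) - left.sum())
                   + max(right.sum(), len(right) - right.sum())) / len(y)
            if best is None or acc > best[0]:
                best = (acc, j, t,
                        int(left.mean() > 0.5), int(right.mean() > 0.5))
    return best  # (train accuracy, feature, threshold, left label, right label)

# toy blackbox: label is 1 iff feature 0 exceeds 0.5
bb = lambda X: (X[:, 0] > 0.5).astype(int)
sampler = lambda rng, n: rng.uniform(0, 1, size=(n, 3))
acc, feat, thr, left_lab, right_lab = extract_stump(bb, sampler)
```

Because the stump is refit on actively sampled, blackbox-labelled points rather than the original training set, it approximates the model, not the data, which is the distinction the abstract draws.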
• Selective classification techniques (also known as the reject option) have not yet been considered in the context of deep neural networks (DNNs). These techniques can potentially significantly improve DNN prediction performance by trading off coverage. In this paper we propose a method to construct a selective classifier given a trained neural network. Our method allows a user to set a desired risk level. At test time, the classifier rejects instances as needed to guarantee the desired risk (with high probability). Empirical results over CIFAR and ImageNet convincingly demonstrate the viability of our method, which opens up possibilities for operating DNNs in mission-critical applications. For example, using our method an unprecedented 2% error in top-5 ImageNet classification can be guaranteed with probability 99.9% and almost 60% test coverage.
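A bare-bones version of the risk/coverage trade-off can be sketched as a confidence threshold chosen on held-out data. This is only the empirical core of the idea; the paper's procedure additionally provides the high-probability guarantee via a tail bound, which is omitted here.

```python
import numpy as np

def select_threshold(confidence, correct, target_risk):
    """Pick a confidence threshold whose accepted set has empirical
    error <= target_risk, maximising coverage (plain empirical
    version; no high-probability correction applied)."""
    order = np.argsort(-confidence)            # most confident first
    conf, corr = confidence[order], correct[order]
    errors = np.cumsum(1 - corr)               # errors among accepted prefix
    risks = errors / np.arange(1, len(corr) + 1)
    ok = np.where(risks <= target_risk)[0]
    if len(ok) == 0:
        return np.inf, 0.0                     # reject everything
    k = ok[-1] + 1                             # largest qualifying prefix
    return conf[k - 1], k / len(corr)          # (threshold, coverage)

rng = np.random.default_rng(0)
conf = rng.uniform(size=1000)
correct = (rng.uniform(size=1000) < conf).astype(float)  # calibrated toy model
thr, coverage = select_threshold(conf, correct, target_risk=0.1)
```

At test time one would then accept an instance only if its confidence exceeds `thr`, rejecting the rest.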
• We introduce the Prediction Advantage (PA), a novel performance measure for prediction functions under any loss function (e.g., classification or regression). The PA is defined as the performance advantage relative to the Bayesian risk restricted to knowing only the distribution of the labels. We derive the PA for well-known loss functions, including 0/1 loss, cross-entropy loss, absolute loss, and squared loss. In the latter case, the PA is identical to the well-known R-squared measure, widely used in statistics. The use of the PA ensures meaningful quantification of prediction performance, which is not guaranteed, for example, when dealing with noisy imbalanced classification problems. We argue that among several known alternative performance measures, PA is the best (and only) quantity ensuring meaningfulness for all noise and imbalance levels.
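For the squared-loss case the definition can be written down directly: the label-only Bayes predictor is the label mean, so the PA reduces to the familiar R-squared. A minimal sketch (the function name is mine, not the paper's):

```python
import numpy as np

def prediction_advantage_sq(y_true, y_pred):
    """PA under squared loss: advantage over the best predictor that
    knows only the label distribution (the label mean). This is
    exactly R-squared, as noted in the abstract."""
    risk = np.mean((y_true - y_pred) ** 2)
    bayes_label_only = np.mean((y_true - y_true.mean()) ** 2)  # label variance
    return 1.0 - risk / bayes_label_only

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(scale=0.5, size=500)
pa = prediction_advantage_sq(y, 2.0 * x)                 # informative predictor
pa0 = prediction_advantage_sq(y, np.full(500, y.mean())) # label-only predictor
```

A predictor no better than the label mean scores PA = 0 regardless of noise or imbalance, which is the "meaningfulness" property the abstract emphasises.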
• Real-time prediction of clinical interventions remains a challenge within intensive care units (ICUs). This task is complicated by data sources that are noisy, sparse, and heterogeneous, and by outcomes that are imbalanced. In this paper, we integrate data from all available ICU sources (vitals, labs, notes, demographics) and focus on learning rich representations of this data to predict the onset and weaning of multiple invasive interventions. In particular, we compare both long short-term memory networks (LSTM) and convolutional neural networks (CNN) for prediction of five intervention tasks: invasive ventilation, non-invasive ventilation, vasopressors, colloid boluses, and crystalloid boluses. Our predictions are made in a forward-facing manner to enable "real-time" performance, and with a six-hour gap time to support clinically actionable planning. We achieve state-of-the-art results on our predictive tasks using deep architectures. We explore the use of feature occlusion to interpret LSTM models, and compare this to the interpretability gained from examining inputs that maximally activate CNN outputs. We show that our models significantly outperform baselines in intervention prediction, and provide insight into model learning, which is crucial for the adoption of such models in practice.
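Feature occlusion, mentioned above as an interpretability tool, can be illustrated generically: perturb one input feature at a time and measure how much the model's output moves. The toy linear `model` and the mean-imputation choice below are assumptions for illustration, not the paper's clinical setup.

```python
import numpy as np

def occlusion_importance(model, X):
    """Feature-occlusion saliency: replace one feature at a time with
    its mean and record the average absolute change in model output."""
    base = model(X)
    scores = []
    for j in range(X.shape[1]):
        Xo = X.copy()
        Xo[:, j] = X[:, j].mean()              # occlude feature j
        scores.append(np.mean(np.abs(model(Xo) - base)))
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
model = lambda X: X @ np.array([3.0, 0.0, 1.0, 0.0])  # toy "trained" model
imp = occlusion_importance(model, X)
```

Features the model ignores get a score of zero, so ranking `imp` recovers which inputs actually drive the predictions.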
• We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example. We take a Bayesian approach to the problem and propose a general framework that learns both the target classification problem and the unknown abstention pattern at the same time. As specific instances of the framework, we develop two useful greedy algorithms with theoretical guarantees: they respectively achieve the ${(1-\frac{1}{e})}$ factor approximation of the optimal expected or worst-case value of a useful utility function. Our experiments show the algorithms perform well in various practical scenarios.
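The $(1-\frac{1}{e})$ factor is the classic guarantee for greedy maximisation of a monotone submodular objective, which can be shown on the textbook maximum-coverage example. This is the generic template behind such guarantees, not the paper's specific utility function or abstention model.

```python
def greedy_max_coverage(sets, k):
    """Greedy selection for monotone submodular maximisation
    (maximum coverage): repeatedly pick the set with the largest
    marginal gain. The greedy value is guaranteed to be at least
    (1 - 1/e) of the optimum."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered))  # marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, 2)
```

Here greedy first takes the 4-element set, then the set adding three new elements, covering all seven items in two picks.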
• Recurrent Neural Network architectures excel at processing sequences by modelling dependencies over different timescales. The recently introduced Recurrent Weighted Average (RWA) unit captures long-term dependencies far better than an LSTM on several challenging tasks. The RWA achieves this by applying attention to each input and computing a weighted average over the full history of its computations. Unfortunately, the RWA cannot change the attention it has assigned to previous timesteps, and so struggles with carrying out consecutive tasks or tasks with changing requirements. We present the Recurrent Discounted Attention (RDA) unit, which builds on the RWA by additionally allowing the discounting of the past. We empirically compare our model to RWA, LSTM and GRU units on several challenging tasks. On tasks with a single output the RWA, RDA and GRU units learn much more quickly than the LSTM, and with better performance. On the multiple sequence copy task our RDA unit learns the task three times as quickly as the LSTM or GRU units, while the RWA fails to learn at all. On the Wikipedia character prediction task the LSTM performs best, but is followed closely by our RDA unit. Overall, our RDA unit performs well and is sample efficient on a large variety of sequence tasks.
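The discounting idea can be isolated from the learned gating: keep a running attention-weighted average, but decay the accumulated numerator and normaliser by a factor gamma before each update. This is a stripped-down sketch of the mechanism, with the attention weights given rather than learned.

```python
import numpy as np

def discounted_attention_average(z, a, gamma):
    """Running attention-weighted average with exponential discounting
    of the past (z: values, a: attention weights, gamma: discount).
    gamma = 1 recovers the undiscounted RWA-style average."""
    n = d = 0.0
    out = []
    for z_t, a_t in zip(z, a):
        n = gamma * n + a_t * z_t              # discounted numerator
        d = gamma * d + a_t                    # discounted normaliser
        out.append(n / d)
    return np.array(out)

z = np.array([1.0, 1.0, 5.0, 5.0])
h_rwa = discounted_attention_average(z, np.ones(4), gamma=1.0)  # full history
h_rda = discounted_attention_average(z, np.ones(4), gamma=0.5)  # discounted
```

When the input regime changes from 1 to 5 midway, the discounted average tracks the new value faster than the full-history average, which is the abstract's point about "tasks with changing requirements".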
• This paper introduces a new architectural framework, known as input fast-forwarding, that can enhance the performance of deep networks. The main idea is to incorporate a parallel path that sends representations of input values forward to deeper network layers. This scheme is substantially different from "deep supervision", in which the loss layer is re-introduced to earlier layers. The parallel path provided by fast-forwarding enhances the training process in two ways. First, it enables the individual layers to combine higher-level information (from the standard processing path) with lower-level information (from the fast-forward path). Second, this new architecture reduces the problem of vanishing gradients substantially because the fast-forwarding path provides a shorter route for gradient backpropagation. In order to evaluate the utility of the proposed technique, a Fast-Forward Network (FFNet), with 20 convolutional layers along with parallel fast-forward paths, has been created and tested. The paper presents empirical results that demonstrate improved learning capacity of FFNet due to fast-forwarding, as compared to GoogLeNet (with deep supervision) and CaffeNet, which are 4x and 18x larger in size, respectively. All of the source code and deep learning models described in this paper will be made available to the entire research community.
• We study transient behaviour in the dynamics of complex systems described by a set of non-linear ODEs. The destabilizing nature of transient trajectories is discussed, along with its connection to the eigenvalue-based linearization procedure. The complexity is realized as a random matrix drawn from a modified May-Wigner model. Based on the initial response of the system, we identify a novel stable-transient regime. We calculate exact abundances of typical and extreme transient trajectories, finding both Gaussian and Tracy-Widom distributions known from extreme value statistics. We identify the degrees of freedom driving transient behaviour as connected to the eigenvectors and encoded in a non-orthogonality matrix $T_0$. We accordingly extend the May-Wigner model to contain a phase in which typical transient trajectories are present. An exact norm of the trajectory is obtained in the vanishing $T_0$ limit, where it describes a normal matrix.
• (arXiv:1705.08514v1, cs.CY, May 25 2017) Future health ecosystems demand the integration of emerging data technology with an increased focus on preventive medicine. Cybernetics extracts the full potential of data to serve the spectrum of health care, from acute to chronic problems. Building actionable cybernetic navigation tools can greatly empower optimal health decisions, especially by quantifying lifestyle and environmental data. This data-to-decisions transformation is powered by intuitive event analysis to offer the best semantic abstraction of dynamic living systems. Achieving the goal of preventive health systems in the cybernetic model occurs through the flow of several components. From personalized models we can predict health status using perpetual sensing and data streams. Given these predictions, we give precise recommendations to best suit the prediction for that individual. To enact these recommendations we use persuasive technology in order to deliver and execute targeted interventions.
• Asynchronous-parallel algorithms have the potential to vastly speed up computation by eliminating costly synchronization. However, our understanding of these algorithms is limited because existing convergence analyses of asynchronous (block) coordinate descent algorithms rest on somewhat unrealistic assumptions. In particular, the age of the shared optimization variables being used to update a block is assumed to be independent of the block being updated. Also, it is assumed that the updates are applied to randomly chosen blocks. In this paper, we argue that these assumptions either fail to hold or imply less efficient implementations. We then prove the convergence of asynchronous-parallel block coordinate descent under more realistic assumptions; in particular, always without the independence assumption. The analysis permits both deterministic (essentially cyclic) and random rules for block choices. Because a bound on the asynchronous delays may or may not be available, we establish convergence for both bounded and unbounded delays. The analysis also covers nonconvex, weakly convex, and strongly convex functions. We construct Lyapunov functions that directly model both objective progress and delays, so delays are not treated as errors or noise. A continuous-time ODE is provided to explain the construction at a high level.
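A minimal synchronous sketch of the underlying iteration, using the deterministic cyclic block rule on a least-squares objective (single-coordinate blocks); the asynchrony, delays, and Lyapunov analysis are precisely what the paper adds on top of this baseline:

```python
import numpy as np

def cyclic_bcd(A, b, x, sweeps):
    """Deterministic-cyclic block coordinate descent on
    0.5*||Ax - b||^2, with each 'block' a single coordinate and an
    exact minimisation in that coordinate at every step."""
    for _ in range(sweeps):
        for j in range(len(x)):                # fixed cyclic block rule
            r = b - A @ x
            g = -A[:, j] @ r                   # partial gradient w.r.t. x[j]
            x[j] -= g / (A[:, j] @ A[:, j])    # exact minimisation in block j
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cyclic_bcd(A, b, np.zeros(2), sweeps=100)  # converges to the LS solution
```

In the asynchronous setting each update would instead read a possibly stale copy of `x`; the paper's contribution is proving convergence without assuming that staleness is independent of the block chosen.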
• (arXiv:1705.08502v1, cs.CY, May 25 2017) I point to a deep and unjustly ignored relation between culture and computation. I first establish interpretations of Piaget's and Vygotsky's theories of child development in the language of theoretical computer science. Using these interpretations, I argue that the two different possible routes to Piagetian disequilibrium -- a tendency to overaccommodate, and a tendency to overassimilate -- are equivalent to the two distinct cultural tendencies, collectivism and individualism. I argue that this simple characterization of overaccommodation versus overassimilation provides a satisfying explanation as to why the two cultural tendencies differ in the way they empirically do. All such notions are grounded on a firm mathematical framework for those who prefer the computable, and grounded on my personal history for those who prefer the uncomputable.
• This paper presents a human gait data collection for analysis and activity recognition, consisting of continuous recordings of combined activities, such as walking, running, taking stairs up and down, sitting down, and so on; the recorded data are segmented and annotated. Data were collected from a body sensor network consisting of six wearable inertial sensors (accelerometer and gyroscope) located on the right and left thighs, shins, and feet. Additionally, two electromyography sensors were used on the quadriceps (front thigh) to measure muscle activity. This database can be used not only for activity recognition but also for studying how activities are performed and how the parts of the legs move relative to each other. Therefore, the data can be used (a) to perform health-care-related studies, such as in walking rehabilitation or Parkinson's disease recognition, (b) in virtual reality and gaming for simulating humanoid motion, or (c) in humanoid robotics to model humanoid walking. This dataset is the first of its kind to provide data about human gait in such great detail. The database is available free of charge at https://github.com/romanchereshnev/HuGaDB.
• Security surveillance is one of the most important issues in smart cities, especially in an era of terrorism. Deploying a number of (video) cameras is a common surveillance approach. Given the ever-present flow of vehicles through metropolises, exploiting vehicle traffic to design camera placement strategies could potentially facilitate security surveillance. This article constitutes the first effort toward building the linkage between vehicle traffic and security surveillance, which is a critical problem for smart cities. We expect our study to influence decision making on surveillance camera placement, and to foster more research into principled approaches to security surveillance that benefit our physical-world life.
• A closed four dimensional manifold cannot possess a non-flat Ricci soliton metric with arbitrarily small $L^2$-norm of the curvature. In this paper, we localize this fact in the case of shrinking Ricci solitons by proving an $\varepsilon$-regularity theorem, thus confirming a conjecture of Cheeger-Tian. As applications, we will also derive structural results concerning the degeneration of the metrics on a family of complete non-compact four dimensional shrinking Ricci solitons *without* a uniform entropy lower bound. In the appendix, we provide a detailed account of the equivariant good chopping theorem when collapsing with locally bounded curvature happens.
• (arXiv:1705.08884v1, cs.CY, May 25 2017) In 2002, the European Union (EU) introduced the ePrivacy Directive to regulate the usage of online tracking technologies. Its aim is to make tracking mechanisms explicit while increasing privacy awareness in users. It mandates that websites ask for explicit consent before using any kind of profiling methodology, e.g., cookies. Since 2013 the Directive has been mandatory, and most European websites now embed a "Cookie Bar" to explicitly ask for the user's consent. To the best of our knowledge, no study has focused on checking whether a website respects the Directive. To this end, we engineer CookieCheck, a simple tool that makes this check automatic. We use it to run a measurement campaign on more than 35,000 websites. The results paint a dramatic picture: 65% of websites do not respect the Directive and install tracking cookies before the user is even offered the accept button. In short, we testify to the failure of the ePrivacy Directive. Among the causes, we identify the absence of rules enabling systematic auditing procedures, the lack of tools for the deputed agencies to verify its implementation, and the technical difficulties of webmasters in implementing it.
• Understanding the flow of incompressible fluids through porous media plays a crucial role in many technological applications such as enhanced oil recovery and geological carbon-dioxide sequestration. Several natural and synthetic porous materials exhibit multiple pore-networks, and the classical Darcy equations are not adequate to describe the flow of fluids in these porous materials. Mathematical models that adequately describe the flow of fluids in porous media with multiple pore-networks have been proposed in the literature. But these models are analytically intractable, especially for realistic problems. In this paper, a stabilized mixed four-field finite element formulation is presented to study the flow of an incompressible fluid in porous media exhibiting double porosity/permeability. The stabilization terms and the stabilization parameters are derived in a mathematically consistent manner. Under the proposed mixed formulation, equal-order interpolation for all the field variables, which is computationally the most convenient, is shown to be stable. A systematic error analysis is also performed on the resulting stabilized weak formulation. Several representative problems, patch tests and numerical convergence analyses are performed to illustrate the performance and convergence behavior of the proposed mixed formulation in the discrete setting. The accuracy of numerical solutions is assessed using the mathematical properties satisfied by the solutions of the double porosity/permeability model (e.g., the reciprocal relation). Moreover, it is shown that the proposed framework performs well under transient conditions and can capture well-known fluid-mechanical instabilities such as viscous fingering.
• Graph bootstrap percolation is a variation of bootstrap percolation introduced by Bollobás. Let $H$ be a graph. Edges are added to an initial graph $G=(V,E)$ if they are in a copy of $H$ minus an edge, until no further edges can be added. If eventually the complete graph on $V$ is obtained, $G$ is said to $H$-percolate. We identify the sharp threshold for $K_4$-percolation on the Erdős-Rényi graph ${\cal G}_{n,p}$. This refines a result of Balogh, Bollobás and Morris, which bounds the threshold up to multiplicative constants.
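The percolation process itself is easy to state operationally for $H = K_4$: an edge $(u,v)$ is added whenever $u$ and $v$ share two common neighbours that are themselves adjacent (so the new edge completes a copy of $K_4$). A direct sketch of this closure, on small graphs only:

```python
from itertools import combinations

def k4_percolates(n, edges):
    """Run K4-bootstrap percolation on vertex set {0,...,n-1}:
    repeatedly add any edge that would complete a copy of K4
    (i.e. whose endpoints have two adjacent common neighbours),
    and report whether the complete graph is reached."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u, v in combinations(range(n), 2):
            if v in adj[u]:
                continue
            common = adj[u] & adj[v]
            # need two common neighbours that are themselves adjacent
            if any(y in adj[x] for x, y in combinations(sorted(common), 2)):
                adj[u].add(v); adj[v].add(u)
                changed = True
    return all(len(adj[v]) == n - 1 for v in range(n))
```

For instance, $K_4$ minus one edge percolates immediately, while the 4-cycle $C_4$ adds no edges at all; the paper's result locates the critical $p$ at which ${\cal G}_{n,p}$ crosses from the second behaviour to the first.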
• We develop the asymptotic behavior for the solutions to the stationary Navier-Stokes equation in the exterior domain of the 2D hyperbolic space. More precisely, given the finite Dirichlet norm of the velocity, we show the velocity decays to $0$ at infinity. We also address the decay rate for the vorticity and the behavior of the pressure.
• Specific heat and magnetization measurements of the compound [Cu2(apyhist)2Cl2](ClO4)2, where apyhist = (4-imidazolyl)ethylene-2-amino-1-ethylpyridine, were used to identify a magnetic-field-induced long-range antiferromagnetic ordered phase at low temperatures (T < 0.36 K) and magnetic fields (1.6 T < H < 5.3 T). This system consists of a Schiff base copper(II) complex, containing chloro-bridges between adjacent copper ions in a dinuclear arrangement, with an antiferromagnetic intradimer interaction |Jintra|/kB = 3.65 K linked by an antiferromagnetic coupling |Jinter|z/kB = 2.7 K. The magnetic-field-induced ordering behavior was analyzed using the mean field approximation and Monte Carlo simulation results. The obtained physical properties of the system are consistent with the description of the ordered phase as a Bose-Einstein Condensation (BEC) of magnetic excitations. We present the phase diagram of this compound, which shows one of the lowest critical magnetic fields among all known members of the family of BEC quantum magnets.
• Here we report small-angle neutron scattering (SANS) measurements and theoretical modeling of U$_3$Al$_2$Ge$_3$. Analysis of the SANS data reveals a phase transition to sinusoidally modulated magnetic order, at $T_N=63$~K to be second order, and a Lifshitz phase transition to ferromagnetic order at $T_c=48$~K. Within the sinusoidally modulated magnetic phase ($T_c < T < T_N$), we uncover a dramatic change in the ordering wave-vector as a function of temperature. These observations all indicate that U$_3$Al$_2$Ge$_3$ is a close realization of the three-dimensional Axial Next-Nearest-Neighbor Ising model, a prototypical framework for describing commensurate to incommensurate phase transitions in frustrated magnets.
• Let $M_g$ be the moduli space of smooth genus $g$ curves. We define a notion of Chow groups of $M_g$ with coefficients in a representation of $Sp(2g)$, and we define a subgroup of tautological classes in these Chow groups with twisted coefficients. Studying the tautological groups of $M_g$ with twisted coefficients is equivalent to studying the tautological rings of all fibered powers $C_g^n$ of the universal curve $C_g \to M_g$ simultaneously. By taking the direct sum over all irreducible representations of the symplectic group in fixed genus, one obtains the structure of a twisted commutative algebra on the tautological classes. We obtain some structural results for this twisted commutative algebra, and we are able to calculate it explicitly when $g \leq 4$. Thus we determine $R^\ast(C_g^n)$ completely, for all $n$, in these genera. We also give some applications to the Faber conjecture.
• We used the Gemini Multi-Object Spectrograph (GMOS) in Integral Field Unit mode to map the stellar population, emission-line flux distributions and gas kinematics in the inner kpc of NGC 5044. From the stellar population synthesis we found that the continuum emission is dominated by old, high-metallicity stars ($\sim$13 Gyr, 2.5 Z$_\odot$). In addition, its nuclear emission is diluted by a non-thermal component, which we attribute to the presence of a weak active galactic nucleus (AGN). We also report for the first time a broad component (FWHM $\sim$ 3000 km s$^{-1}$) in the H$\alpha$ emission line in the nuclear region of NGC 5044. Using emission-line ratio diagnostic diagrams, we found that two dominant ionization processes coexist: while the nuclear region (inner 200 pc) is ionized by a low-luminosity AGN, the filamentary structures are consistent with being excited by shocks. The H$\alpha$ velocity field shows evidence of a rotating disk, with a velocity amplitude of $\sim$240 km s$^{-1}$ at $\sim$136 pc from the nucleus. Assuming Keplerian rotation, we estimated that the mass inside this radius is $1.9\times10^9$ $M_{\odot}$, in agreement with the value obtained through the M-$\sigma$ relation, $M_{SMBH}=1.8\pm1.6\times10^{9}M_{\odot}$. Modelling the ionized gas velocity field by a rotating disk component plus inflows towards the nucleus along the filamentary structures, we obtain a mass inflow rate of $\sim$0.4 M$_\odot$. This inflow rate is enough to power the central AGN in NGC 5044.
• We study the invariance of stochastic differential equations under random diffeomorphisms, and establish the determining equations for random Lie-point symmetries of stochastic differential equations, in both Itô and Stratonovich form. We also discuss relations with previous results in the literature.
• We consider the initial value problem to the Isobe-Kakinuma model for water waves and the structure of the model. The Isobe-Kakinuma model is the Euler-Lagrange equations for an approximate Lagrangian which is derived from Luke's Lagrangian for water waves by approximating the velocity potential in the Lagrangian. The Isobe-Kakinuma model is a system of second order partial differential equations and is classified into a system of nonlinear dispersive equations. Since the hypersurface $t=0$ is characteristic for the Isobe-Kakinuma model, the initial data have to be restricted in an infinite dimensional manifold for the existence of the solution. Under this necessary condition and a sign condition, which corresponds to a generalized Rayleigh-Taylor sign condition for water waves, on the initial data, we show that the initial value problem is solvable locally in time in Sobolev spaces. We also discuss the linear dispersion relation to the model.
• Based on the BCS model with an external pair potential formulated in K.V. Grigorishin, arXiv:1605.07080, an analogous model with electron-phonon coupling and Coulomb coupling is proposed. The generalized Eliashberg equations in the regime of renormalization of the order parameter are obtained. High-temperature asymptotics and the influence of the Coulomb pseudopotential on them are investigated: as in the BCS model, the order parameter asymptotically tends to zero as temperature rises, but accounting for the Coulomb pseudopotential leads to the existence of a critical temperature. An effective Ginzburg-Landau theory is formulated for this model, in which the temperature dependencies near $T_{c}$ of the basic characteristics of a superconductor (coherence length, magnetic penetration depth, GL parameter, the thermodynamical critical field, and the first and second critical fields) reduce to the temperature dependencies of the ordinary GL theory, after the BCS model with the external pair potential.
• The increasing integration of distributed energy resources (DERs) calls for new planning and operational tools. However, such tools depend on system topology and line parameters, which may be missing or inaccurate in distribution grids. With abundant data, one idea is to use linear regression to find line parameters, based on which the topology can be identified. Unfortunately, the linear regression method is accurate only if there is no noise in both the input measurements (e.g., voltage magnitude and phase angle) and output measurements (e.g., active and reactive power). For topology estimation, even with a small error in measurements, the regression-based method is incapable of finding the topology using non-zero line parameters with a proper metric. To model input and output measurement errors simultaneously, we propose the errors-in-variables (EIV) model in a maximum likelihood estimation (MLE) framework for joint line parameter and topology estimation. While directly solving the problem is NP-hard, we successfully adapt the problem into a generalized low-rank approximation problem via variable transformation and noise decorrelation. For accurate topology estimation, we let it interact with parameter estimation in a fashion similar to expectation-maximization in machine learning. The proposed PaToPa approach does not require a radial network setting and works for mesh networks. We demonstrate the superior accuracy of our method on IEEE test cases with actual feeder data from Southern California Edison.
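The errors-in-variables flavour can be illustrated on a single line parameter: when both input and output measurements are noisy, total least squares (read off the smallest singular vector of the stacked data) stays consistent where ordinary regression is biased. This is a standard one-dimensional TLS sketch, not the paper's joint PaToPa algorithm, and all names below are mine.

```python
import numpy as np

def tls_slope(x, y):
    """Errors-in-variables fit of y ~ g*x by total least squares:
    the smallest right singular vector of [x y] gives the direction
    of least variance, i.e. the normal to the fitted line."""
    _, _, Vt = np.linalg.svd(np.column_stack([x, y]), full_matrices=False)
    v = Vt[-1]                                 # normal vector (g, -1) up to sign
    return -v[0] / v[1]

rng = np.random.default_rng(0)
x_true = rng.normal(size=400)
g_true = 1.7                                      # hypothetical line parameter
x = x_true + 0.1 * rng.normal(size=400)           # noisy input measurement
y = g_true * x_true + 0.1 * rng.normal(size=400)  # noisy output measurement
g_hat = tls_slope(x, y)
```

The rank-reduction step here (dropping the smallest singular value) is the one-parameter analogue of the generalized low-rank approximation the abstract refers to.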
• We calculate the A.C. optical response to circularly polarized light of a Weyl semimetal (WSM) with varying amounts of tilt of the Dirac cones. Both type-I and type-II (overtilted) WSMs are considered in a continuum model with broken time-reversal (TR) symmetry. The Weyl nodes appear in pairs of equal energy but opposite momentum and chirality. For type-I, the responses of a particular node to right-hand (RHP) and left-hand (LHP) polarized light are distinct only in a limited range of photon energy $\Omega$, $\frac{2}{1+C_{2}/v}<\frac{\Omega}{\mu}<\frac{2}{1-C_{2}/v}$, with $\mu$ the chemical potential and $C_{2}$ the tilt associated with the positive-chirality node, assuming the two nodes are oppositely tilted. For the overtilted case (type-II) the same lower bound applies but there is no upper bound. If the tilt is reversed, the RHP and LHP responses are also reversed. We present corresponding results for the Hall angle.
• We use a Kubo formalism to calculate both A.C. conductivity and D.C. transport properties of a dirty nodal loop semimetal. The optical conductivity as a function of photon energy $\Omega$, exhibits an extended flat background $\sigma^{BG}$ as in graphene provided the scattering rate $\Gamma$ is small as compared to the radius of the nodal ring $b$ (in energy units). Modifications to the constant background arise for $\Omega\le \Gamma$ and the minimum D.C. conductivity $\sigma^{DC}$ which is approached as $\Omega^2/\Gamma^2$ as $\Omega\rightarrow0$, is found to be proportional to $\frac{\sqrt{\Gamma^2+b^2}}{v_{F}}$ with $v_{F}$ the Fermi velocity. For $b=0$ we recover the known three-dimensional point node Dirac result $\sigma^{DC}\sim \frac{\Gamma}{v_{F}}$ while for $b>\Gamma$, $\sigma^{DC}$ becomes independent of $\Gamma$ (universal) and the ratio $\frac{\sigma^{DC}}{\sigma^{BG}}=\frac{8}{\pi^2}$ where all reference to material parameters has dropped out. As $b$ is reduced and becomes of the order $\Gamma$, the flat background is lost as the optical response evolves towards that of a three-dimensional point node Dirac semimetal which is linear in $\Omega$ for the clean limit. For finite $\Gamma$ there are modifications from linearity in the photon region $\Omega\le \Gamma$. When the chemical potential $\mu$ (temperature $T$) is nonzero the D.C. conductivity increases as $\mu^2/\Gamma^2$($T^2/\Gamma^2$) for $\mu/\Gamma$ $(T/\Gamma)\le 1$. For larger values of $\mu>\Gamma$ away from the nodal region the conductivity shows a Drude like contribution about $\Omega\approxeq 0$ which is followed by a dip in the Pauli blocked region $\Omega \le 2\mu$ after which it increases to merge with the flat background (two-dimensional graphene like) for $\mu< b$ and to the quasilinear (three-dimensional point node Dirac) law for $\mu> b$.
• Growing interest in automatic speaker verification (ASV) systems has led to significant quality improvement of spoofing attacks on them. Many research works confirm that despite the low equal error rate (EER), ASV systems are still vulnerable to spoofing attacks. In this work we overview different acoustic feature spaces and classifiers to determine reliable and robust countermeasures against spoofing attacks. We compared several spoofing detection systems, presented so far, on the development and evaluation datasets of the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015. Experimental results presented in this paper demonstrate that the combined use of magnitude and phase information provides a substantial contribution to the efficiency of spoofing detection systems. Wavelet-based features also show impressive results in terms of equal error rate. In our overview we compare spoofing-detection performance for systems based on different classifiers. Comparison results demonstrate that the linear SVM classifier outperforms the conventional GMM approach. However, many researchers, inspired by the great success of deep neural network (DNN) approaches in automatic speech recognition, have applied DNNs to the spoofing detection task and obtained quite low EER for known and unknown types of spoofing attacks.
• In this paper, we compare two different accretion mechanisms of dark matter particles onto a canonical neutron star with $M=1.4~M_{\odot}$ and $R=10~{\rm km}$, and show the effects of dark matter heating on the surface temperature of the star. The Bondi accretion of dark matter by neutron stars should be taken into account, rather than the accretion mechanism of Kouvaris (2008), once the dark matter density is higher than $\sim3.81~\rm GeV/cm^3$. Based on the Bondi accretion mechanism and the heating from dark matter annihilation, the surface temperature plateau of the star can appear at $\sim 10^{6.5}$ years and reach $\sim 1.12\times10^5$ K for a dark matter density of $3.81~\rm GeV/cm^3$, which is one order of magnitude higher than in the case of Kouvaris (2008) with a dark matter density of $30~\rm GeV/cm^3$.
• Combined HERA data on charm production in deep-inelastic scattering have previously been used to determine the charm-quark running mass $m_c(m_c)$ in the MSbar renormalisation scheme. Here, the same data are used as a function of the photon virtuality $Q^2$ to evaluate the charm-quark running mass at different scales to one-loop order, in the context of a next-to-leading order QCD analysis. The scale dependence of the mass is found to be consistent with QCD expectations.
• We show Fujita's spectrum conjecture for $\epsilon$-log canonical pairs and Fujita's log spectrum conjecture for log canonical pairs. Then, we generalize the pseudo-effective threshold of a single divisor to multiple divisors and establish the analogous finiteness and the DCC properties.
• The improved phantom cell is a new scenario recently introduced to enhance the capacity of heterogeneous networks (HetNets). The main trait of this scenario is that, besides maximizing total network capacity in both indoor and outdoor environments, it claims to reduce the number of handovers compared to conventional scenarios. In this paper, after a comprehensive review of the improved phantom cell structure, an appropriate algorithm is introduced for the handover procedure of this scenario. To reduce the number of handovers, the proposed algorithm considers various parameters such as the received signal-to-interference-plus-noise ratio (SINR) at the user equipment (UE), users' access conditions to the phantom cells, and a user's staying time in the target cell based on its velocity. Theoretical analyses and simulation results show that, by applying the suggested algorithm, the improved phantom cell structure performs much better than conventional HetNets in terms of the number of handovers.
• We prove that for Anosov maps $f$ of the $3$-torus, if the Lyapunov exponents of absolutely continuous measures in every direction are equal to the geometric growth rates of the invariant foliations, then $f$ is $C^1$ conjugate to its linear part.
• The transition matrix of a graph $G$ corresponding to the adjacency matrix $A$ is defined by $H(t):=\exp{\left(-itA\right)},$ where $t\in\mathbb{R}$. The graph is said to exhibit pretty good state transfer between a pair of vertices $u$ and $v$ if there exists a sequence $\left\lbrace t_k\right\rbrace$ of real numbers such that $\lim\limits_{k\rightarrow\infty} H(t_k) {\bf e}_u=\gamma {\bf e}_v$, where $\gamma$ is a complex number of unit modulus. We classify some circulant graphs as exhibiting or not exhibiting pretty good state transfer. This generalizes several pre-existing results on circulant graphs admitting pretty good state transfer.
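The transition matrix in this abstract is directly computable. As a minimal illustration (not taken from the paper), the sketch below builds $H(t)=\exp(-itA)$ for the 4-cycle $C_4$, a circulant graph that is known to admit *perfect* state transfer between antipodal vertices at $t=\pi/2$ — a special case of the pretty good state transfer studied above:

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4 (a circulant graph), vertices 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def transition_matrix(A, t):
    """H(t) = exp(-i t A), computed via the spectral decomposition of A."""
    w, U = np.linalg.eigh(A)                       # A is symmetric
    return U @ np.diag(np.exp(-1j * t * w)) @ U.conj().T

# C4 admits perfect state transfer between antipodal vertices 0 and 2
# at t = pi/2: H(pi/2) e_0 = gamma * e_2 with |gamma| = 1.
H = transition_matrix(A, np.pi / 2)
print(abs(H[2, 0]))   # -> 1.0 (up to rounding)
```

For pretty good (rather than perfect) state transfer one would instead look for a sequence of times $t_k$ along which $|H(t_k)_{v,u}|$ approaches 1 without ever reaching it.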
• This paper presents the Speech Technology Center (STC) replay attack detection systems proposed for the Automatic Speaker Verification Spoofing and Countermeasures Challenge 2017. In this study we focus on a comparison of different spoofing detection approaches: GMM-based methods, high-level feature extraction with a simple classifier, and deep learning frameworks. Experiments performed on the development and evaluation parts of the challenge dataset demonstrate the stable efficiency of deep learning approaches under changing acoustic conditions. At the same time, an SVM classifier with high-level features contributed substantially to the efficiency of the resulting STC systems, according to the fusion system results.
• We prove a stochastic averaging theorem for stochastic differential equations in which the slow and the fast variables interact. The approximate Markov fast motion is a family of Markov processes with generators ${\mathcal L}_x$. The theorem is proved under the assumption that ${\mathcal L}_x$ satisfies Hörmander's bracket conditions, or more generally that ${\mathcal L}_x$ is a family of Fredholm operators with sub-elliptic estimates. On the other hand, a conservation law of a dynamical system can be used as a tool for separating the scales in singular perturbation problems. We discuss a number of motivating examples from mathematical physics and from geometry where we use non-linear conservation laws to deduce slow-fast systems of stochastic differential equations.
• Medium effects on the production of high-$p_{\rm T}$ particles in nucleus-nucleus (AA) collisions are generally quantified by the nuclear modification factor ($R_{\rm AA}$), defined to be unity in the absence of nuclear effects. Modeling particle production including a nucleon-nucleon impact parameter dependence, we demonstrate that $R_{\rm AA}$ at midrapidity in peripheral AA collisions can be significantly affected by event selection and geometry biases. Even without jet quenching and shadowing, these biases cause an apparent suppression of $R_{\rm AA}$ in peripheral collisions, and they are relevant for all types of hard probes and all collision energies. Our studies indicate that calculations of jet quenching in peripheral AA collisions should account for these biases, or else they will overestimate the relevance of parton energy loss. Similarly, expectations of parton energy loss in light-heavy collision systems based on comparison with the apparent suppression seen in peripheral $R_{\rm AA}$ should be revised. Our interpretation of the peripheral $R_{\rm AA}$ data would unify observations for lighter collision systems and lower energies, where significant values of elliptic flow are observed despite the absence of strong jet quenching.
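For reference, the conventional definition of the nuclear modification factor used in the abstract above (standard notation, not restated in the abstract itself) is

```latex
R_{\rm AA}(p_{\rm T}) \;=\;
\frac{1}{\langle N_{\rm coll}\rangle}\,
\frac{{\rm d}N_{\rm AA}/{\rm d}p_{\rm T}}{{\rm d}N_{pp}/{\rm d}p_{\rm T}},
```

where $\langle N_{\rm coll}\rangle$ is the average number of binary nucleon-nucleon collisions in the selected centrality class; if AA collisions were an incoherent superposition of $pp$ collisions, $R_{\rm AA}=1$, which is why biases in estimating $\langle N_{\rm coll}\rangle$ for peripheral events can mimic suppression.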
• We investigate the expected distance to the power $b$ between two identical general random processes. As an application to sensor networks, we derive the optimal transportation cost to the power $b>0$ of the maximal random bicolored matching.
• This paper extends the results from arXiv:1702.04569 about sharp $A_2$-$A_\infty$ estimates with matrix weights to the non-homogeneous situation.
• Observations of the solar butterfly diagram from sunspot records suggest persistent fluctuations in parity, away from the overall, approximately dipolar structure. We use a simple mean-field dynamo model with a solar-like rotation law, and perturb the $\alpha$-effect. We find that the parity of the magnetic field with respect to the rotational equator can demonstrate what we describe as resonant behaviour, while the magnetic energy behaves in a more or less expected way. We discuss possible applications of these phenomena in the context of various deviations of the solar magnetic field from dipolar symmetry, as reported from analysis of archival sunspot data. We deduce that our model produces fluctuations in field parity, and hence in the butterfly diagram, that are consistent with observed fluctuations in solar behaviour.
• Holonomic quantum computation is a quantum computation strategy that promises some built-in noise-resilience features. Here, we propose a scheme for nonadiabatic holonomic quantum computation with nitrogen-vacancy center electron spins, which are characterized by fast quantum gates and long qubit coherence times. By varying the detuning, amplitudes and phase difference of the lasers applied to a nitrogen-vacancy center, one can directly realize an arbitrary single-qubit holonomic quantum gate on the spin. Meanwhile, with the help of an optical whispering-gallery cavity, a nontrivial two-qubit holonomic quantum gate can also be induced. The distinct merit of the present scheme is that all the geometric quantum gates are obtained by all-optical manipulation of the solid-state spins. Therefore, our scheme opens the possibility of robust quantum computation on solid-state spins in an all-optical way.
• This paper introduces an explicit residual-based a posteriori error analysis for the symmetric mixed finite element method in linear elasticity after Arnold-Winther with pointwise symmetric and H(div)-conforming stress approximation. As opposed to a previous publication, the residual-based a posteriori error estimator of this paper is reliable, efficient, and truly explicit in that it solely depends on the symmetric stress and needs neither any additional information on some skew-symmetric part of the gradient nor any efficient approximation thereof. Hence it is straightforward to implement in an adaptive mesh-refining algorithm, which is obligatory in practical computations. Numerical experiments verify the proven reliability and efficiency of the new a posteriori error estimator and illustrate the improved convergence rate in comparison to uniform mesh-refining. A higher convergence rate for piecewise affine data is observed in the L2 stress error and is reproduced in non-smooth situations by the adaptive mesh-refining strategy.
• This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function $f$ in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a non-linear transformation between the joint feature/label space distributions of the two domains, $\mathcal{P}_s$ and $\mathcal{P}_t$. We propose a solution to this problem with optimal transport, which allows us to recover an estimated target distribution $\mathcal{P}^f_t=(X,f(X))$ by simultaneously optimizing the optimal coupling and $f$. We show that our method corresponds to the minimization of a bound on the target error, and we provide an efficient algorithmic solution for which convergence is proved. The versatility of our approach, in terms of both hypothesis classes and loss functions, is demonstrated on real-world classification and regression problems, on which we reach or surpass state-of-the-art results.
• We report the results of a first experimental search for lepton number violation by four units in the neutrinoless quadruple-$\beta$ decay of $^{150}$Nd using a total exposure of $0.19$ kg$\cdot$y recorded with the NEMO-3 detector at the Modane Underground Laboratory (LSM). We find no evidence of this decay and set lower limits on the half-life in the range $T_{1/2}>(1.1-3.2)\times10^{21}$ y at the $90\%$ CL, depending on the model used for the kinematic distributions of the emitted electrons.
• In this paper, we study the $\mu$-ordinary locus of a Shimura variety with parahoric level structure. Under the axioms in \cite{HR}, we show that the $\mu$-ordinary locus is a union of certain maximal Ekedahl-Kottwitz-Oort-Rapoport strata introduced in \cite{HR}, and we give criteria for the density of the $\mu$-ordinary locus.
• We use the LHCb data on the forward production of $D$ mesons at 7, 13 and 5 TeV to make a direct determination of the gluon distribution, $xg$, at NLO in the $10^{-5} \lesssim x \lesssim 10^{-4}$ region. We first use a simple parametrization of the gluon density in each of the four transverse momentum intervals of the detected $D$ mesons. Second, we use a double-log parametrization to make a combined fit to all the LHCb $D$ meson data. The values found for $xg$ in the above $x$ domain are of similar magnitude to (or a bit larger than) the central values obtained by extrapolating the gluons from the global PDF analyses into this small-$x$ region. In contrast, however, we find that $xg$ has only a weak dependence on $x$.
• In this paper, we design robust and efficient block preconditioners for the two-field formulation of Biot's consolidation model, where stabilized finite-element discretizations are used. The proposed block preconditioners are based on the well-posedness of the discrete linear systems. Block diagonal (norm-equivalent) and block triangular preconditioners are developed, and we prove that these methods are robust with respect to both physical and discretization parameters. Numerical results are presented to support the theoretical results.
• A Grand-canonical Monte-Carlo simulation method extended to simulate a mixture of salts is presented. Due to the charge neutrality requirement of electrolyte solutions, ions must be added to or removed from the system in groups. This leads to some complications compared to regular grand-canonical simulation. Here, a recipe for the simulation of electrolyte solutions of salt mixtures is presented. It is then implemented to simulate solutions of 1:1, 2:1 and 2:2 salts, or their mixtures, at different concentrations using the primitive ion model. The osmotic pressures of the electrolyte solutions are calculated and shown to depend linearly on the salt concentrations within the concentration range simulated. We also show that, at the same concentration of divalent anions, the presence of divalent cations makes it easier to insert monovalent cations into the system. This can explain some quantitative differences observed in experiments on MgCl$_2$ and MgSO$_4$ salt mixtures.
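The grouped insertion/removal idea in this abstract can be illustrated with a deliberately simplified sketch (mine, not the paper's): a 1:1 salt in the ideal-gas limit, where the interaction energy change is set to zero so only the combinatorial grand-canonical prefactors remain. The activity `z` is an assumed parameter absorbing the chemical potential and de Broglie factors; in the primitive model one would add the Coulomb plus hard-core energy change of the inserted/removed pair to the acceptance rule.

```python
import random

random.seed(1)
V = 100.0        # volume (arbitrary units)
z = 0.5          # activity per ion (assumed parameter)
n_plus = n_minus = 0

# Pair moves: a cation and an anion are inserted or removed TOGETHER,
# so the net charge of the box is zero after every move.  The acceptance
# ratio is the product of the standard single-particle GCMC factors;
# exp(-beta*dU) is omitted here because dU = 0 in this ideal-gas sketch.
for step in range(200_000):
    if random.random() < 0.5:                           # attempt pair insertion
        acc = (z * V / (n_plus + 1)) * (z * V / (n_minus + 1))
        if random.random() < min(1.0, acc):
            n_plus += 1
            n_minus += 1
    elif n_plus > 0:                                    # attempt pair removal
        acc = (n_plus / (z * V)) * (n_minus / (z * V))
        if random.random() < min(1.0, acc):
            n_plus -= 1
            n_minus -= 1
    assert n_plus == n_minus                            # neutrality preserved

print(n_plus, n_minus)
```

For a 2:1 salt the neutral group would instead be one divalent cation plus two monovalent anions, with the corresponding three-factor acceptance ratio.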
• We explain the properties and clarify the meaning of quantum weak values using only the basic notions of elementary quantum mechanics.

Felix Leditzky May 24 2017 20:43 UTC

Yes, that's right, thanks!

For (5), you use the Cauchy-Schwarz inequality $\left| \operatorname{tr}(X^\dagger Y) \right| \leq \sqrt{\operatorname{tr}(X^\dagger X)} \sqrt{\operatorname{tr}(Y^\dagger Y)}$ for the Hilbert-Schmidt inner product $\langle X,Y\rangle := \operatorname{tr}(X^\dagger Y)$ wi

...(continued)
Michael Tolan May 24 2017 20:27 UTC

Just reading over Eq (5) on P5 concerning the diamond norm.

Should the last $\sigma_1$ on the 4th line be replaced with a $\sigma_2$? I think I can see how the proof is working but not entirely certain.

Noon van der Silk May 23 2017 11:15 UTC

I think this thread has reached its end.

I've locked further comments, and I hope that the quantum computing community can thoughtfully find an approach to language that is inclusive to all and recognises the diverse background of all researchers, current and future.

...(continued)
Varun Narasimhachar May 23 2017 02:14 UTC

While I would never want to antagonize my peers or to allow myself to assume they were acting irrationally, I do share your concerns to an extent. I worry about the association of social justice and inclusivity with linguistic engineering, virtual lynching, censorship, etc. (the latter phenomena sta

...(continued)
Aram Harrow May 23 2017 01:30 UTC

I think you are just complaining about issues that arise from living with other people in the same society. If you disagree with their values, well, then some of them might have a negative opinion about you. If you express yourself in an aggressive way, and use words like "lynch" to mean having pe

...(continued)
Steve Flammia May 23 2017 01:04 UTC

I agree with Noon that the discussion is becoming largely off topic for SciRate, but that it might still be of interest to the community to discuss this. I invite people to post thoughtful and respectful comments over at [my earlier Quantum Pontiff post][1]. Further comments here on SciRate will be

...(continued)
Noon van der Silk May 23 2017 00:59 UTC

I've moderated a few comments on this post because I believe it has gone past useful discussion, and I'll continue to remove comments that I believe don't add anything of substantial value.

Thanks.

Aram Harrow May 22 2017 23:13 UTC

The problem with your argument is that no one is forcing anyone to say anything, or banning anything.

If the terms really were offensive or exclusionary or had other bad side effects, then it's reasonable to discuss as a community whether to keep them, and possibly decide to stop using them. Ther

...(continued)
stan May 22 2017 22:53 UTC

Fair enough. At the end of the day I think most of us are concerned with the strength of the result not the particular language used to describe it.

VeteranVandal May 22 2017 22:41 UTC

But how obvious is ancilla? To me it is not even remotely obvious (nor clear as a term, but as the literature used it so much, I see such word in much the same way as I see auxiliary, in fact - now if you want to take offense with auxiliary, what can I say? I won't invent words just to please you).

...(continued)