# Top arXiv papers

• Jan 19 2018 quant-ph arXiv:1801.06121v1
Randomized benchmarking provides a tool for obtaining precise quantitative estimates of the average error rate of a physical quantum channel. Here we define real randomized benchmarking, which enables a separate determination of the average error rate in the real and complex parts of the channel. This provides more fine-grained information about average error rates with approximately the same cost as the standard protocol. The protocol requires only averaging over the real Clifford group, a subgroup of the full Clifford group, and makes use of the fact that it forms an orthogonal 2-design. Our results are therefore especially useful when considering quantum computations on rebits (or real encodings of complex computations), in which case the real Clifford group now plays the role of the complex Clifford group when studying stabilizer circuits.
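Standard randomized benchmarking, of which the real variant above is a refinement, extracts the average error rate by fitting the survival probability to the decay model $p(m) = A f^m + B$. A minimal sketch of that fit on synthetic data (toy numbers of my own, with the offset $B$ assumed known so the fit stays log-linear; a full RB analysis fits $A$, $f$, $B$ jointly):

```python
import numpy as np

# Standard RB decay model: survival probability p(m) = A * f^m + B,
# where m is the Clifford sequence length and f the depolarizing parameter.
def rb_model(m, A, f, B):
    return A * f**m + B

# Synthetic single-qubit data (d = 2) with a known decay f = 0.98.
rng = np.random.default_rng(0)
lengths = np.arange(1, 101, 5)
data = rb_model(lengths, 0.5, 0.98, 0.5) + rng.normal(0, 1e-3, lengths.size)

# Log-linear fit of p(m) - B, assuming B = 1/d = 0.5 is known
# (a simplification for illustration).
f_est = np.exp(np.polyfit(lengths, np.log(data - 0.5), 1)[0])

# Average error rate r = (d - 1) * (1 - f) / d, here with d = 2.
r_est = (2 - 1) * (1 - f_est) / 2
```

The real-RB protocol of the abstract would produce two such decays, one for the real and one for the complex part of the channel.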
• We explore the possibility of efficient classical simulation of linear optics experiments under the effect of particle losses. Specifically, we investigate the canonical boson sampling scenario in which an $n$-particle Fock input state propagates through a linear-optical network and is subsequently measured by particle-number detectors in the $m$ output modes. We examine two models of losses. In the first model a fixed number of particles is lost. We prove that in this scenario the output statistics can be well approximated by an efficient classical simulation, provided that the number of photons left grows slower than $\sqrt{n}$. In the second loss model, every time a photon passes through a beamsplitter in the network, it has some probability of being lost. For this model the relevant parameter is $s$, the smallest number of beamsplitters that any photon traverses as it propagates through the network. We prove that it is possible to approximately simulate the output statistics already if $s$ grows logarithmically with $m$, regardless of the geometry of the network. The latter result is obtained by proving that it is always possible to commute $s$ layers of uniform losses to the input of the network regardless of its geometry, which could be a result of independent interest. We believe that our findings put strong limitations on future experimental realizations of quantum computational supremacy proposals based on boson sampling.
• We extend the concept of strange correlators, defined for symmetry-protected phases in [You et al., Phys. Rev. Lett. 112, 247202 (2014)], to topological phases of matter by taking the inner product between string-net ground states and product states. The resulting two-dimensional partition functions are shown to be either critical or symmetry broken, as the corresponding transfer matrices inherit all matrix product operator symmetries of the string-net states. For the case of critical systems, those non-local matrix product operator symmetries are the lattice remnants of topological conformal defects in the field theory description. Following [Aasen et al., J. Phys. A 49, 354001 (2016)], we argue that the different conformal boundary conditions can be obtained by applying the strange correlator concept to the different topological sectors of the string-net obtained from Ocneanu's tube algebra. This is demonstrated by calculating the conformal field theory spectra on the lattice in the different topological sectors for the Fibonacci/hard-hexagon and Ising string-net. Additionally, we provide a complementary perspective on symmetry-preserving real-space renormalization by showing how known tensor network renormalization methods can be understood as the approximate truncation of an exactly coarse-grained strange correlator.
• We introduce a driven-dissipative two-mode bosonic system whose reservoir causes simultaneous loss of two photons in each mode and whose steady states are superpositions of pair-coherent/Barut-Girardello coherent states. We show how quantum information encoded in a steady-state subspace of this system is exponentially immune to phase drifts (cavity dephasing) in both modes. Additionally, it is possible to protect information from arbitrary photon loss in either (but not simultaneously both) of the modes by continuously monitoring the difference between the expected photon numbers of the logical states. Despite employing more resources, the two-mode scheme enjoys two advantages over its one-mode counterpart with regards to implementation using current circuit QED technology. First, monitoring the photon number difference can be done without turning off the currently implementable dissipative stabilizing process. Second, a lower average photon number per mode is required to enjoy a level of protection at least as good as that of the cat-codes. We discuss circuit QED proposals to stabilize the code states, perform gates, and protect against photon loss via either active syndrome measurement or an autonomous procedure. We introduce quasiprobability distributions allowing us to represent two-mode states of fixed photon number difference in a two-dimensional complex plane, instead of the full four-dimensional two-mode phase space. The two-mode codes are generalized to multiple modes in an extension of the stabilizer formalism to non-diagonalizable stabilizers. The $M$-mode codes can protect against either arbitrary photon losses in up to $M-1$ modes or arbitrary losses or gains in any one mode.
• This is a very brief introduction to quantum computing and quantum information theory, primarily aimed at geometers. Beyond basic definitions and examples, I emphasize aspects of interest to geometers, especially connections with asymptotic representation theory. Proofs of most statements can be found in standard references.
• In this paper, we consider a general stochastic optimization problem which is often at the core of supervised learning, such as deep learning and linear classification. We consider a standard stochastic gradient descent (SGD) method with a fixed, large step size and propose a novel assumption on the objective function, under which this method achieves improved convergence rates (to a neighborhood of the optimal solutions). We then empirically demonstrate that these assumptions hold for logistic regression and standard deep neural networks on classical data sets. Thus our analysis helps to explain when efficient behavior can be expected from the SGD method in training classification models and deep neural networks.
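As a toy illustration of the setting (not the paper's analysis or its assumptions), here is SGD with a fixed, fairly large step size on a separable logistic-regression problem:

```python
import numpy as np

# SGD with a fixed step size on a separable logistic-regression toy
# problem; illustrative only, not the paper's construction.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X @ np.array([1.0, -2.0]) > 0).astype(float)  # labels in {0, 1}

w = np.zeros(2)
step = 0.5  # fixed, fairly large step size
for epoch in range(50):
    for i in rng.permutation(len(X)):
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        w -= step * (p - y[i]) * X[i]  # stochastic gradient of the log-loss

# training accuracy after 50 epochs
acc = np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == (y == 1))
```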
• An efficient variational approach is developed to solve in- and out-of-equilibrium problems of generic quantum spin-impurity systems. Employing the discrete symmetry hidden in spin-impurity models, we present a new canonical transformation that completely decouples the impurity and bath degrees of freedom. Combining it with Gaussian states, we present a family of many-body states to efficiently encode nontrivial impurity-bath correlations. We demonstrate its successful application to the anisotropic and two-lead Kondo models by studying their spatiotemporal dynamics, universal nonperturbative scaling and transport phenomena, and compare to other analytical and numerical approaches. In particular, we apply our method to study new types of nonequilibrium phenomena that have not been studied by other methods, such as long-time crossover in the ferromagnetic easy-plane Kondo model. Our results can be tested in experiments with mesoscopic electron systems and ultracold atoms in optical lattices.
• We show that braidings on a fusion category $\mathcal{C}$ correspond to certain fusion subcategories of the center of $\mathcal{C}$ transversal to the canonical Lagrangian algebra. This allows us to classify braidings on non-degenerate and group-theoretical fusion categories.
• Jan 19 2018 cs.LG arXiv:1801.05927v1
In this paper we try to organize machine teaching as a coherent set of ideas. Each idea is presented as varying along a dimension. The collection of dimensions then form the problem space of machine teaching, such that existing teaching problems can be characterized in this space. We hope this organization allows us to gain deeper understanding of individual teaching problems, discover connections among them, and identify gaps in the field.
• We construct examples of translationally invariant solvable models of strongly-correlated metals, composed of lattices of Sachdev-Ye-Kitaev dots with identical local interactions. These models display crossovers as a function of temperature into regimes with local quantum criticality and marginal-Fermi liquid behavior. In the marginal Fermi liquid regime, the dc resistivity increases linearly with temperature over a broad range of temperatures. By generalizing the form of interactions, we also construct examples of non-Fermi liquids with critical Fermi-surfaces. The self energy has a singular frequency dependence, but lacks momentum dependence, reminiscent of a dynamical mean field theory-like behavior but in dimensions $d<\infty$. In the low temperature and strong-coupling limit, a heavy Fermi liquid is formed. The critical Fermi-surface in the non-Fermi liquid regime gives rise to quantum oscillations in the magnetization as a function of an external magnetic field in the absence of quasiparticle excitations. We discuss the implications of these results for local quantum criticality and for fundamental bounds on relaxation rates. Drawing on the lessons from these models, we formulate conjectures on coarse grained descriptions of a class of intermediate scale non-fermi liquid behavior in generic correlated metals.
• We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states, and the fidelity deviation is their standard deviation, which can be regarded as quantifying the fluctuation, or universality, of the fidelity. In the analysis, we find the condition to optimize both measures under a noisy quantum channel---we here consider the so-called Werner channel. To characterize our results, we introduce a two-dimensional space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point parametrized by the channel noise. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and the Bell inequality violation.
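For orientation: the textbook relation between the singlet fraction $F$ of the resource state and the average teleportation fidelity is $\bar f = (2F + 1)/3$ (the paper's Werner-channel parametrization may differ from this standard form):

```python
# Textbook relation: average teleportation fidelity from the singlet
# fraction F of the shared resource state.
def avg_fidelity(F):
    return (2 * F + 1) / 3

# A perfect singlet (F = 1) gives unit fidelity; F = 1/2 gives the
# classical limit 2/3, below which entanglement offers no advantage.
f_perfect = avg_fidelity(1.0)
f_classical = avg_fidelity(0.5)
```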
• Plain recurrent networks greatly suffer from the vanishing gradient problem while Gated Neural Networks (GNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs. This paper shows how we can address this problem in a plain recurrent network by analyzing the gating mechanisms in GNNs. We propose a novel network called the Recurrent Identity Network (RIN) which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates. We compare this model with IRNNs and LSTMs on multiple sequence modeling benchmarks. The RINs demonstrate competitive performance and converge faster in all tasks. Notably, small RIN models produce 12%--67% higher accuracy on the Sequential and Permuted MNIST datasets and reach state-of-the-art performance on the bAbI question answering dataset.
• To implement convolutional neural networks (CNN) in hardware, the state-of-the-art CNN accelerators pipeline computation and data transfer stages using an off-chip memory and simultaneously execute them on the same timeline. However, since a large amount of feature maps generated during the operation should be transmitted to the off-chip memory, the pipeline stage length is determined by the off-chip data transfer stage. Fusion architectures that can fuse multiple layers have been proposed to solve this problem, but applications such as super-resolution (SR) require a large amount of on-chip memory because of the high resolution of the feature maps. In this paper, we propose a novel on-chip CNN accelerator for SR to optimize the CNN dataflow in the on-chip memory. First, the convolution loop optimization technique is proposed to prevent using a frame buffer. Second, we develop a combined convolutional layer processor to reduce the buffer size used to store the feature maps. Third, we explore how to perform low-cost multiply-and-accumulate operations in the deconvolutional layer used in SR. Finally, we propose a two-stage quantization algorithm to select the optimized hardware size for the limited number of DSPs to implement the on-chip CNN accelerator. We evaluate our proposed accelerator with FSRCNN, one of the most popular CNN-based SR algorithms. Experimental results show that the proposed accelerator requires 9.21 ms to produce an output image at 2560x1440 pixel resolution, which is 36 times faster than the conventional method. In addition, we reduce the on-chip memory usage and DSP usage by factors of 4 and 1.44, respectively, compared to conventional methods.
• We obtain analytic solutions to various models of dissipation of the quantum harmonic oscillator, employing a simple method based on the Fourier transform of the Wigner function, and study the driven open quantum harmonic oscillator as an exemplification. The environmental models we use are based on optical master equations for zero- and finite-temperature baths, whose open dynamics are described by a Lindblad master equation; we also use the Caldeira-Leggett model for the high-temperature limit, in both the underdamped and the overdamped case. Under the Wigner Fourier transform, or chord function as it has been called, solving the dynamics of the open oscillator becomes particularly simple, in the sense that the dynamics of the system reduce to the application of an evolution matrix related to the damped motion of the oscillator.
• Factor analysis, a classical multivariate statistical technique is popularly used as a fundamental tool for dimensionality reduction in statistics, econometrics and data science. Estimation is often carried out via the Maximum Likelihood (ML) principle, which seeks to maximize the likelihood under the assumption that the positive definite covariance matrix can be decomposed as the sum of a low rank positive semidefinite matrix and a diagonal matrix with nonnegative entries. This leads to a challenging rank constrained nonconvex optimization problem. We reformulate the low rank ML Factor Analysis problem as a nonlinear nonsmooth semidefinite optimization problem, study various structural properties of this reformulation and propose fast and scalable algorithms based on difference of convex (DC) optimization. Our approach has computational guarantees, gracefully scales to large problems, is applicable to situations where the sample covariance matrix is rank deficient and adapts to variants of the ML problem with additional constraints on the problem parameters. Our numerical experiments demonstrate the significant usefulness of our approach over existing state-of-the-art approaches.
• Nowadays, a major challenge in machine learning is the `Big~Data' challenge: because of the large number of data points, the large number of features in each data point, or both, the training of models has become very slow. The training time has two major components: the time to access the data and the time to process the data. In this paper, we propose one possible solution to big data problems in machine learning. The focus is on reducing the training time by reducing the data access time, using systematic sampling and cyclic/sequential sampling to select mini-batches from the dataset. To prove the effectiveness of the proposed sampling techniques, we use Empirical Risk Minimization, a commonly used machine learning problem, in the strongly convex and smooth case. The problem is solved using SAG, SAGA, SVRG, SAAG-II and MBSGD (mini-batched SGD), each with two step-size determination techniques, namely a constant step size and the backtracking line search method. Theoretical results prove the same convergence, in expectation, for systematic sampling, cyclic sampling and the widely used random sampling technique. Experimental results with benchmark datasets prove the efficacy of the proposed sampling techniques.
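The exact sampling schemes are defined in the paper; under one plausible reading, the three mini-batch selectors can be sketched as follows (cyclic walks the data in order, systematic picks a random start and a fixed stride, random permutes everything):

```python
import numpy as np

# Three mini-batch selection schemes (a hedged sketch, not the paper's
# exact definitions): cyclic/sequential, systematic, and random.
def cyclic_batch(n, b, t):
    """Batch t of size b, walking the data in its stored order."""
    return np.arange(t * b, (t + 1) * b) % n

def systematic_batch(n, b, rng):
    """Random start, then indices spaced by a fixed stride n // b."""
    start = rng.integers(n)
    stride = n // b
    return (start + stride * np.arange(b)) % n

def random_batch(n, b, rng):
    """The usual uniformly random mini-batch without replacement."""
    return rng.choice(n, size=b, replace=False)

rng = np.random.default_rng(0)
n, b = 100, 10
for batch in (cyclic_batch(n, b, t=3),
              systematic_batch(n, b, rng),
              random_batch(n, b, rng)):
    assert len(np.unique(batch)) == b  # b distinct indices each time
```

The first two need no shuffling pass over the data, which is where the claimed reduction in data access time comes from.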
• Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? how is a network trained? what is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the art software on a large scale image classification problem. We finish with references to the current literature.
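The article's demo is in MATLAB; for readers without it, here is a rough Python analogue (a toy construction of my own, not the article's code): a one-hidden-layer network trained with the stochastic gradient method on a small 2D classification task.

```python
import numpy as np

# Minimal one-hidden-layer network trained by SGD on an XOR-like task;
# a sketch in the spirit of the article's MATLAB demo, not its code.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # label = sign of the product

W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for step in range(20000):
    i = rng.integers(len(X))           # one random sample per step (SGD)
    h = np.tanh(X[i] @ W1 + b1)        # hidden layer
    p = sigmoid(h @ W2 + b2)           # output probability
    g = p - y[i]                       # cross-entropy gradient at the output
    W2 -= lr * np.outer(h, g); b2 -= lr * g
    gh = (g * W2[:, 0]) * (1 - h**2)   # backprop through tanh
    W1 -= lr * np.outer(X[i], gh); b1 -= lr * gh

h = np.tanh(X @ W1 + b1)
acc = np.mean((sigmoid(h @ W2 + b2)[:, 0] > 0.5) == (y == 1))
```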
• Rapport, the close and harmonious relationship in which interaction partners are "in sync" with each other, was shown to result in smoother social interactions, improved collaboration, and improved interpersonal outcomes. In this work, we are the first to investigate automatic prediction of low rapport during natural interactions within small groups. This task is challenging given that rapport only manifests in subtle non-verbal signals that are, in addition, subject to influences of group dynamics as well as inter-personal idiosyncrasies. We record videos of unscripted discussions of three to four people using a multi-view camera system and microphones. We analyse a rich set of non-verbal signals for rapport detection, namely facial expressions, hand motion, gaze, speaker turns, and speech prosody. Using facial features, we can detect low rapport with an average precision of 0.7 (chance level at 0.25), while incorporating prior knowledge of participants' personalities can even achieve early prediction without a drop in performance. We further provide a detailed analysis of different feature sets and the amount of information contained in different temporal segments of the interactions.
• Quantum spin liquids (QSL) are exotic phases of matter that host fractionalized excitations. Since the underlying physics is rooted in long-range quantum entanglement, local probes are hardly capable of characterizing them, whereas quantum entanglement can serve as a diagnostic tool due to its non-locality. The kagome antiferromagnetic Heisenberg model is one of the most studied and experimentally relevant models for QSL, but its solution remains under debate. Here, we perform a numerical Aharonov-Bohm experiment on this model and uncover universal features of the entanglement entropy. By means of the density-matrix renormalization group, we reveal the entanglement signatures of emergent Dirac spinons, which are the fractionalized excitations of the QSL. This scheme provides qualitative insights into the nature of kagome QSL, and can be used to study other quantum states of matter. As a concrete example, we also benchmark our methods on an interacting quantum critical point between a Dirac semimetal and a charge ordered phase.
• Training a task-completion dialogue agent with real users via reinforcement learning (RL) could be prohibitively expensive, because it requires many interactions with users. One alternative is to resort to a user simulator, while the discrepancy between simulated and real users makes the learned policy unreliable in practice. This paper addresses these challenges by integrating planning into the dialogue policy learning based on the Dyna-Q framework, and provides a more sample-efficient approach to learn the dialogue policies. The proposed agent consists of a planner, trained online with limited real user experience, that can generate large amounts of simulated experience to supplement the limited real user experience, and a policy model trained on these hybrid experiences. The effectiveness of our approach is validated on a movie-booking task in both a simulation setting and a human-in-the-loop setting.
• Magnetic reconnection in curved spacetime is studied by adopting a general relativistic magnetohydrodynamic model that retains collisionless effects for both electron-ion and pair plasmas. A simple generalization of the standard Sweet-Parker model allows us to obtain the first order effects of the gravitational field of a rotating black hole. It is shown that the black hole rotation acts to increase the length of azimuthal reconnection layers, in itself leading to a decrease of the reconnection rate. However, when coupled to collisionless thermal-inertial effects, the net reconnection rate is enhanced with respect to what would happen in a purely collisional plasma, due to a broadening of the reconnection layer. These findings identify an underlying interaction between gravity and collisionless magnetic reconnection in the vicinity of compact objects.
• While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence). Factorization Machines provide a possible approach to learning element-wise interaction for recommender systems, but they are not directly applicable to our task due to the inability to model contexts and word sequences. In this work, we develop two Position-aware Factorization Machines which consider word interaction, context and position information. Such information is jointly encoded in a set of sentiment-oriented word interaction (SWI) vectors. Compared to traditional word embeddings, SWI vectors explicitly capture sentiment-oriented word interaction and simplify the parameter learning. Experimental results show that while they have comparable performance with state-of-the-art methods for document-level classification, they benefit the snippet/sentence-level sentiment analysis.
• In the two-component Fermi gas with contact interactions a pseudogap regime, in which pairing correlations are present without superfluidity, can exist at temperatures between the superfluid critical temperature $T_c$ and a temperature $T^* > T_c$. However, the existence of a pseudogap in the unitary limit of infinite scattering length is debated. To help address this issue, we have used finite-temperature auxiliary-field quantum Monte Carlo (AFMC) methods to study the thermodynamics of the spin-balanced homogeneous unitary Fermi gas on a lattice. We present results for the thermal energy, heat capacity, condensate fraction, a model-independent pairing gap, and spin susceptibility, and compare them to experimental data when available. Our model space consists of the complete first Brillouin zone of the lattice, and our calculations are performed in the canonical ensemble of fixed particle number. We find that the energy-staggering pairing gap vanishes above $T_c$ and that $T^*$ at unitarity, as determined from the spin susceptibility, is lower than previously reported in AFMC simulations.
• Isothermal incompressible two-phase flows in a capillary are modeled with and without phase transition in the presence of gravity, employing Darcy's law for the velocity field. It is shown that the resulting systems are thermodynamically consistent in the sense that the available energy is a strict Lyapunov functional. In both cases, the equilibria with flat interface are identified. It is shown that the problems are well-posed in an $L_p$-setting and generate local semiflows in the proper state manifolds. The main result concerns the stability of equilibria with flat interface, i.e. the Rayleigh-Taylor instability.
• Jan 19 2018 cond-mat.soft arXiv:1801.06150v1
There are two main classes of physics-based models for two-dimensional cellular materials: packings of repulsive disks and the vertex model. These models have several disadvantages. For example, disk interactions are typically a function of particle overlap, yet the model assumes that the disks remain circular during overlap. The shapes of the cells can vary in the vertex model; however, the packing fraction is fixed at $\phi=1$. Here, we describe the deformable particle model (DPM), where each particle is a polygon composed of a large number of vertices. The total energy includes three terms: two quadratic terms to penalize deviations from the preferred particle area $a_0$ and perimeter $p_0$, and a repulsive interaction between DPM polygons that penalizes overlaps. We performed simulations to study the onset of jamming in packings of DPM polygons as a function of asphericity, ${\cal A} = p_0^2/4\pi a_0$. We show that the packing fraction at jamming onset $\phi_J({\cal A})$ grows with increasing ${\cal A}$, reaching confluence at ${\cal A} = {\cal A}^* \approx 1.16$, where ${\cal A}^*$ corresponds to the value at which DPM polygons completely fill the cells obtained from a surface-Voronoi tessellation. Further, we show that DPM polygons develop invaginations for ${\cal A} > {\cal A}^*$ with excess perimeter that grows linearly with ${\cal A}-{\cal A}^*$. We confirm that packings of DPM polygons are solid-like over the full range of ${\cal A}$ by showing that the shear modulus is nonzero.
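The asphericity ${\cal A} = p_0^2/4\pi a_0$ from the abstract is easy to evaluate for any polygon; as a check of the normalization, ${\cal A} = 1$ for a circle and ${\cal A} \approx 1.10$ for a regular hexagon (the regular-polygon example is my own illustration, not from the paper):

```python
import numpy as np

# Asphericity A = p^2 / (4*pi*a) for a polygon given its vertex
# coordinates; A = 1 for a circle and grows as the shape departs
# from circular.
def asphericity(verts):
    x, y = verts[:, 0], verts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    p = np.sum(np.hypot(xn - x, yn - y))    # perimeter
    a = 0.5 * abs(np.sum(x * yn - xn * y))  # shoelace area
    return p**2 / (4 * np.pi * a)

def regular_polygon(n, r=1.0):
    t = 2 * np.pi * np.arange(n) / n
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

A_hex = asphericity(regular_polygon(6))      # (6/pi) tan(pi/6) ~ 1.103
A_many = asphericity(regular_polygon(1000))  # -> 1 as n grows
```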
• Transfer learning has revolutionized computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Fine-tuned Language Models (FitLaM), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a state-of-the-art language model. Our method significantly outperforms the state-of-the-art on five text classification tasks, reducing the error by 18-24% on the majority of datasets. We open-source our pretrained models and code to enable adoption by the community.
• Unsupervised word translation from non-parallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training time and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Our simple linear method is able to achieve better or equal performance to recent state-of-the-art deep adversarial approaches and typically does a little better than the supervised baseline. Our method is also efficient, easy to parallelize and interpretable.
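Under one plausible reading of the abstract, the two ingredients are (1) a whitening step so the two word-vector sets share second moments, and (2) an orthogonal-Procrustes step that refines the alignment given the current word matches; real pipelines iterate (2) with re-matching. A toy sketch with a known ground-truth rotation:

```python
import numpy as np

# Toy sketch (my own construction): Y is a rotated copy of X, the
# rotation playing the role of the unknown translation map.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))  # hidden "translation"
Y = X @ Q

def whiten(Z):
    """Linear map making the sample covariance the identity."""
    Zc = Z - Z.mean(0)
    evals, evecs = np.linalg.eigh(np.cov(Zc.T))
    return Zc @ evecs / np.sqrt(evals)

# (1) after whitening, both sets have identical second moments
Cw = np.cov(whiten(Y).T)

# (2) orthogonal Procrustes: best rotation X -> Y for matched pairs
U, _, Vt = np.linalg.svd(X.T @ Y)
R = U @ Vt
err = np.linalg.norm(X @ R - Y)  # recovers Q up to numerical error
```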
• We propose to search for biosignatures in the spectra of reflected light from about 100 Earth-sized planets that are already known to be orbiting in their habitable zones (HZ). For a sample of G and K type hosts, most of these planets will be between 25 and 50 milli-arcsec (mas) from their host star and 1 billion to 10 billion times fainter. To separate the planet's image from that of its host star at the wavelength (763nm) of the oxygen biosignature we need a telescope with an aperture of 16 metres. Furthermore, the intensity of the light from the host star at the position in the image of the exoplanet must be suppressed otherwise the exoplanet will be lost in the glare. This presents huge technical challenges. The Earth's atmosphere is turbulent which makes it impossible to achieve the required contrast from the ground at 763nm. The telescope therefore needs to be in space and to fit the telescope in the rocket fairing it must be a factor of 4 or more times smaller when folded than when operational. To obtain spectroscopy of the planet's biosignature at 763nm we need to use an integral field spectrometer (IFS) with a field of view (FOV) of 1000 x 1000 milli-arcsec (mas) and a spectral resolution of 100. This is a device that simultaneously takes many pictures of the exoplanet each at a slightly different wavelength which are then recorded as a data cube with two spatial dimensions and one wavelength dimension. In every data cube wavelength slice, the background light from the host star at the location of the planet image must be minimised. This is achieved via a coronagraph which blocks the light from the host star and active/adaptive optics techniques which continuously maintain very high accuracy optical alignment to make the images as sharp as possible. These are the technical challenges to be addressed in a design study.
• The satellites of Jupiter are thought to form in a circumplanetary disc. Here we address their formation and orbital evolution with a population synthesis approach, by varying the dust-to-gas ratio, the disc dispersal timescale and the dust refilling timescale. The circumplanetary disc initial conditions (density and temperature) are directly drawn from the results of 3D radiative hydrodynamical simulations. The disc evolution is taken into account within the population synthesis. The satellitesimals were assumed to grow via streaming instability. We find that the moons form fast, often within $10^4$ years, due to the short orbital timescales in the circumplanetary disc. They form in sequence, and many are lost into the planet due to fast type I migration, polluting Jupiter's envelope with typically $0.3$ Earth-masses of metals, and up to $10$ Earth-masses in some cases. The last generation of moons can form very late in the evolution of the giant planet, when the disc has already lost more than $99\%$ of its mass. The late circumplanetary disc is cold enough to sustain water ice; hence, not surprisingly, $85\%$ of the moon population has an icy composition. The distribution of satellite masses peaks slightly above Galilean masses, up until a few Earth-masses, in a regime which is observable with the current instrumentation around Jupiter-analog exoplanets orbiting $1$ AU away from their host stars. We also find that systems with Galilean-like masses occur in $20\%$ of the cases and they are more likely when discs have long dispersal timescales and high dust-to-gas ratios.
• Jan 19 2018 nlin.PS arXiv:1801.06086v1
We present a theoretical study of extreme events occurring in phononic lattices. In particular, we focus on the formation of rogue or freak waves, which are characterized by their localization in both spatial and temporal domains. We consider two examples. The first one is the prototypical nonlinear mass-spring system in the form of a homogeneous Fermi-Pasta-Ulam-Tsingou (FPUT) lattice with a polynomial potential. By deriving an approximation based on the nonlinear Schroedinger (NLS) equation, we are able to initialize the FPUT model using a suitably transformed Peregrine soliton solution of the NLS, obtaining dynamics that resemble a rogue wave on the FPUT lattice. We also show that Gaussian initial data can lead to dynamics featuring rogue waves for sufficiently wide Gaussians. The second example is a diatomic granular crystal exhibiting rogue-wave-like dynamics, which we also obtain through an NLS reduction and numerical simulations. The granular crystal (a chain of particles that interact elastically) is a widely studied system that lends itself to experimental studies. This study serves to illustrate the potential of such dynamical lattices towards the experimental observation of acoustic rogue waves.
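The Peregrine soliton mentioned above has a closed form; in one common NLS normalization, $i u_t + \tfrac{1}{2} u_{xx} + |u|^2 u = 0$, it reads $u(x,t) = \left[1 - \tfrac{4(1 + 2it)}{1 + 4x^2 + 4t^2}\right] e^{it}$, with the hallmark rogue-wave amplitude of three times the background at its peak:

```python
import numpy as np

# Peregrine soliton of the focusing NLS (one common normalization):
# u(x,t) = [1 - 4(1 + 2it)/(1 + 4x^2 + 4t^2)] e^{it}.
# Its amplitude peaks at exactly 3x the unit background at (0, 0).
def peregrine(x, t):
    return (1 - 4 * (1 + 2j * t) / (1 + 4 * x**2 + 4 * t**2)) * np.exp(1j * t)

x = np.linspace(-10, 10, 2001)
amp = np.abs(peregrine(x, 0.0))
peak = amp.max()        # 3.0, attained at x = 0
background = amp[0]     # ~1 far from the localized core
```

This profile (suitably transformed, as the abstract describes) is what seeds the FPUT lattice dynamics.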
• Mediation analysis seeks to infer how much of the effect of an exposure on an outcome can be attributed to specific pathways via intermediate variables or mediators. This requires identification of so-called path-specific effects. These express how a change in exposure affects those intermediate variables (along certain pathways), and how the resulting changes in those variables in turn affect the outcome (along subsequent pathways). However, unlike the identification of total effects, adjustment for confounding is insufficient for the identification of path-specific effects, because their magnitude is also determined by the extent to which individuals who experience large exposure effects on the mediator tend to experience relatively small or large mediator effects on the outcome. This chapter therefore provides an accessible review of identification strategies under general nonparametric structural equation models (with possibly unmeasured variables) which rule out certain such dependencies. In particular, it is shown which path-specific effects can be identified under such models, and how this can be done.
• Let $S$ be an excellent regular scheme and let $X$ be a scheme separated and of finite type over $S$. Let $K_c(X, \mathbb{F}_{\lambda})$ be the Grothendieck ring of $\mathbb{F}_{\lambda}$-constructible sheaves on $X$, where $\mathbb{F}_{\lambda}$ is the finite field with $\lambda$ elements. Given an index set $I$ and for certain $\mathbb{Q}$-vector subspaces $V\subset \prod_{i\in I}\mathbb{Q}_{\lambda_i}$, we define wildly compatible systems of virtual constructible sheaves on $X$. The main result is that for $\dim S \leq 1$, wildly compatible systems are preserved by Grothendieck's six operations and Verdier's duality, with the further assumption that $V$ is a sub-algebra for derived Hom and tensor product. Finally, when $X$ is a curve over a finite field we prove that all $\ell$-adic compatible systems give wildly compatible systems.
• Spatiotemporal patterns such as traveling waves are frequently observed in recordings of neural activity. The mechanisms underlying the generation of such patterns are largely unknown. Previous studies have investigated the existence and uniqueness of different types of waves or bumps of activity using neural-field models, phenomenological coarse-grained descriptions of neural-network dynamics. But it remains unclear how these insights can be transferred to more biologically realistic networks of spiking neurons, where individual neurons fire irregularly. Here, we employ mean-field theory to reduce a microscopic model of leaky integrate-and-fire (LIF) neurons with distance-dependent connectivity to an effective neural-field model. In contrast to existing phenomenological descriptions, the dynamics in this neural-field model depends on the mean and the variance in the synaptic input, both determining the amplitude and the temporal structure of the resulting effective coupling kernel. For the neural-field model we derive conditions for the existence of spatial and temporal oscillations and periodic traveling waves using linear stability analysis. We first prove that periodic traveling waves cannot occur in a single homogeneous population of neurons, irrespective of the form of distance dependence of the connection probability. Compatible with the architecture of cortical neural networks, traveling waves emerge in two-population networks of excitatory and inhibitory neurons as a combination of delay-induced temporal oscillations and spatial oscillations due to distance-dependent connectivity profiles. Finally, we demonstrate quantitative agreement between predictions of the analytically tractable neural-field model and numerical simulations of both networks of nonlinear rate-based units and networks of LIF neurons.
• Jan 19 2018 gr-qc math.DG arXiv:1801.06037v1
We consider a class of globally hyperbolic space-times with "expanding singularities". Under suitable assumptions we show that no $C^0$-extensions across a compact boundary exist, while the boundary must be null wherever differentiable (which is almost everywhere) in the non-compact case.
• One of the important features of quantum mechanics is that non-orthogonal quantum states cannot be perfectly discriminated. A fundamental problem in quantum mechanics is therefore to design optimal measurements for discriminating a collection of non-orthogonal quantum states. We prove that the geometric coherence of a quantum state equals the minimal error probability of discriminating a set of linearly independent pure states, which provides an operational interpretation for geometric coherence. Moreover, the closest incoherent states are given in terms of the corresponding optimal von Neumann measurements. Based on this idea, explicit expressions for the geometric coherence are given for a class of states. Conversely, we show that any discrimination task for a collection of linearly independent pure states can also be regarded as the problem of calculating the geometric coherence of a quantum state, and the optimal measurement can be obtained through the corresponding closest incoherent state.
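Both quantities appearing in this result can be checked numerically in simple cases. The sketch below uses only two standard facts — the Helstrom bound for two pure states, and the closed form of the geometric coherence for a pure state — and does not reproduce the paper's general construction:

```python
import numpy as np

def helstrom_error(psi1, psi2, p=0.5):
    """Minimum error probability for discriminating two pure states with
    priors p and 1-p: (1/2)(1 - || p|psi1><psi1| - (1-p)|psi2><psi2| ||_1)."""
    rho = p * np.outer(psi1, psi1.conj()) - (1 - p) * np.outer(psi2, psi2.conj())
    return 0.5 * (1.0 - np.abs(np.linalg.eigvalsh(rho)).sum())

def geometric_coherence_pure(psi):
    """For a pure state the closest incoherent state (in fidelity) is the basis
    state with the largest overlap, so C_g = 1 - max_i |<i|psi>|^2."""
    return 1.0 - np.max(np.abs(psi) ** 2)

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # maximally coherent qubit state
zero = np.array([1.0, 0.0])
c_g = geometric_coherence_pure(plus)       # 0.5
p_err = helstrom_error(zero, plus)         # (1/2)(1 - sqrt(1/2)) ≈ 0.1464
```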
• We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations. The representations change significantly when translation and part-of-speech decoders are added. The more decoders a model employs, the better it clusters sentences according to their syntactic similarity, as the representation space becomes less entangled. We explore the structure of the representation space by interpolating between sentences, which yields interesting pseudo-English sentences, many of which have recognizable syntactic structure. Lastly, we point out an interesting property of our models: The difference-vector between two sentences can be added to change a third sentence with similar features in a meaningful way.
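The interpolation and difference-vector operations are plain vector arithmetic on the learned representations. A minimal sketch with hypothetical embeddings (a real system would obtain these from the trained encoder and decode the results back into sentences):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 128-dimensional sentence embeddings for three sentences.
v_a, v_b, v_c = rng.normal(size=(3, 128))

# Interpolation: points on the line between two sentence representations,
# each of which would be fed to the decoder to yield a pseudo-sentence.
alphas = np.linspace(0.0, 1.0, 5)
interpolants = [(1 - a) * v_a + a * v_b for a in alphas]

# Difference-vector transfer: apply the (b - a) feature direction to a third sentence.
v_c_shifted = v_c + (v_b - v_a)
```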
• The article addresses the possibility of implementing spin network states on an adiabatic quantum computer. The discussion focuses on the application of currently available technologies and analyzes the concrete example of the D-Wave machine. A class of simple spin network states which can be implemented on the Chimera graph architecture of the D-Wave quantum processor is introduced. However, extension beyond the currently available quantum processor topologies is required to simulate more sophisticated spin network states, which may inspire the development of new generations of adiabatic quantum computers. The possibility of simulating Loop Quantum Gravity is discussed, and a method of solving a graph-non-changing scalar (Hamiltonian) constraint with the use of adiabatic quantum computations is proposed.
• Photoacoustic imaging (PAI) is an emerging biomedical imaging modality that combines the high contrast of optical imaging with the high resolution of ultrasound (US) imaging. When a short laser pulse illuminates the tissue being imaged, the tissue emits US waves, and the detected waves can be used to reconstruct the optical absorption distribution. Since the receive side of PA imaging consists of US waves, many beamforming algorithms from US imaging can be applied to PA imaging. Delay-and-Sum (DAS) is the most common beamforming algorithm in US imaging. However, the DAS beamformer leads to low-resolution images with a large contribution from off-axis signals. To address these problems, Delay-Multiply-and-Sum (DMAS), originally used as a reconstruction algorithm in confocal microwave imaging for breast cancer detection, was introduced for US imaging. DMAS was subsequently used in PA imaging systems, where it was shown to enhance resolution and reduce sidelobes. However, in the presence of high noise levels the reconstructed image still suffers from a large noise contribution. In this paper, a modified version of the DMAS beamforming algorithm is proposed, based on applying DAS inside the expanded DMAS formula. The quantitative and qualitative results show that the proposed method yields further noise reduction and resolution enhancement at the expense of some contrast degradation. For the simulation, a two-point target along with lateral variation at two imaging depths is employed and evaluated under a high noise level in the imaging medium. Compared to DMAS, the proposed algorithm reduces the lateral valley by about 19 dB, yielding a more clearly resolved two-point target. Moreover, sidelobe levels are reduced by about 25 dB.
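For readers unfamiliar with the two beamformers, a minimal sketch of baseline DAS and DMAS on pre-delayed channel data follows. It uses the standard signed-square-root formulation of DMAS and a pairwise-product identity; it is not the modified algorithm proposed in the paper:

```python
import numpy as np

def das(delayed):
    """Delay-and-Sum. `delayed` has shape (n_elements, n_samples); the
    per-element focusing delays are assumed already applied."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Baseline Delay-Multiply-and-Sum: signed square roots restore the
    signal dimensionality before the pairwise products are summed."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = s.sum(axis=0)
    # sum_{i<j} s_i s_j = ((sum_i s_i)^2 - sum_i s_i^2) / 2, avoiding the O(M^2) loop
    return 0.5 * (total**2 - (s**2).sum(axis=0))
```

For two channels with values 4 and 9 at one sample, DAS gives 13 while DMAS gives sign-aware sqrt(4)*sqrt(9) = 6, which is what suppresses incoherent (off-axis) contributions in practice.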
• A Hamburger moment sequence $(s_n)$ is characterized by positive definiteness of the infinite Hankel matrix $\mathcal{H}=\{s_{m+n}\}$. In the indeterminate case, where different measures have the same moments, there exists an infinite symmetric matrix $\mathcal{A}=\{a_{j,k}\}$ given by the reproducing kernel $K(z,w)=\sum_{n=0}^{\infty}P_n(z)P_n(w)=\sum_{j,k=0}^{\infty}a_{j,k}z^j w^k$, defined in terms of the orthonormal polynomials $P_n(z)$. We say that the matrix product $\mathcal{A}\mathcal{H}$ is absolutely convergent if all elements of $\mathcal{A}\mathcal{H}$ are defined by absolutely convergent series. We study the question whether the matrix product $\mathcal{A}\mathcal{H}$ is absolutely convergent and yields the identity matrix $\mathcal{I}$. We prove that this is always the case when the moment problem is symmetric and $xP_n(x)=b_nP_{n+1}(x)+b_{n-1}P_{n-1}(x)$ for a sequence $(b_n)$ such that $b_{n-1}/b_n\le q<1$ for $n$ sufficiently large, hence in particular for eventually log-convex sequences $(b_n)$ such that $\sum 1/b_n<\infty$. It also holds for certain eventually log-concave sequences $(b_n)$ including $b_n=(n+1)^c$, $c>3/2$. The latter is based on new estimates for the symmetrized version of a cubic birth-and-death process studied by Valent and co-authors, and in this case $\mathcal{A}\mathcal{H}$ is not absolutely convergent. The general results of the paper are based on a study of two scale-invariant sequences $(U_n)$ and $(V_n)$. Here, $U_n=s_{2n}/(b_0b_1\cdots b_{n-1})^2$ is defined for any moment problem, while $V_n=b_0b_1\cdots b_{n-1}c_n$ only makes sense for indeterminate moment problems, because $c_n:=\sqrt{a_{n,n}}$. A major result is that $(U_n)$ and $(V_n)$ are bounded for symmetric indeterminate moment problems for which $b_{n-1}/b_n\le q<1$ for $n$ sufficiently large.
• Users' visual attention is highly fragmented during mobile interactions but the erratic nature of these attention shifts currently limits attentive user interfaces to adapt after the fact, i.e. after shifts have already happened, thereby severely limiting the adaptation capabilities and user experience. To address these limitations, we study attention forecasting -- the challenging task of predicting whether users' overt visual attention (gaze) will shift between a mobile device and environment in the near future or how long users' attention will stay in a given location. To facilitate the development and evaluation of methods for attention forecasting, we present a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each (more than 90 hours in total). As a first step towards a fully-fledged attention forecasting interface, we further propose a proof-of-concept method that uses device-integrated sensors and body-worn cameras to encode rich information on device usage and users' visual scene. We demonstrate the feasibility of forecasting bidirectional attention shifts between the device and the environment as well as for predicting the first and total attention span on the device and environment using our method. We further study the impact of different sensors and feature sets on performance and discuss the significant potential but also remaining challenges of forecasting user attention during mobile interactions.
• We extend results of Colliot-Thélène and Raskind on the $\mathcal{K}_2$-cohomology of smooth projective varieties over a separably closed field $k$ to the étale motivic cohomology of smooth, not necessarily projective, varieties over $k$. Some consequences are drawn, such as the degeneration of the Bloch-Lichtenbaum spectral sequence for any field containing $k$.
• We review recent developments on short uniform random walks, with a focus on their connection to (zeta) Mahler measures and the modular parametrisation of the density functions. Furthermore, we extend available "probabilistic" techniques to cover a variation of random walks and reduce some three-variable Mahler measures, which are conjectured to evaluate in terms of $L$-values of modular forms, to hypergeometric form.
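The basic objects here are the moments $W_n(s) = \mathbb{E}\,|\sum_{k=1}^n e^{i\theta_k}|^s$ of an $n$-step uniform random walk in the plane. A quick Monte Carlo sanity check (a sketch, not the paper's analytic machinery) of the classical value $W_2(1) = 4/\pi$:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1_000_000
# Two unit steps with independent uniform random directions in the plane.
theta = rng.uniform(0.0, 2.0 * np.pi, size=(2, n_samples))
dist = np.abs(np.exp(1j * theta).sum(axis=0))
mean_dist = dist.mean()    # converges to W_2(1) = 4/pi ≈ 1.2732
```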
• It is well known that the classical and Sobolev wave front sets have been extended to non-equivalent global versions by means of the short-time Fourier transform. In this very short paper we give complete characterisations of the initial wave front sets via the short-time Fourier transform.
• Suppose that $\Omega \subset \mathbb{R}^{n+1}$, $n \ge 2$, is an open set satisfying the corkscrew condition with an $n$-dimensional ADR boundary, $\partial \Omega$. In this note, we show that if harmonic functions are $\varepsilon$-approximable in $L^p$ for any $p > n/(n-1)$, then $\partial \Omega$ is uniformly rectifiable. Combining our results with those in [HT] (Hofmann-Tapiola) gives us a new characterization of uniform rectifiability which complements the recent results in [HMM] (Hofmann-Martell-Mayboroda), [GMT] (Garnett-Mourgoglou-Tolsa) and [AGMT] (Azzam-Garnett-Mourgoglou-Tolsa).
• In wireless communications, cooperative communication (CC) technology promises performance gains compared to traditional Single-Input Single-Output (SISO) techniques; the CC technique is therefore one of the candidates for 5G networks. In the Decode-and-Forward (DF) relaying scheme, one of the CC techniques, the determination of the threshold value at the relay plays a key role in system performance and power usage. In this paper, we propose predicting the optimal threshold values for the best relay selection scheme in cooperative communications using Artificial Neural Networks (ANNs), for the first time in the literature. The average link qualities and the number of relays are used as inputs for predicting the optimal threshold values with two ANNs: Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks. The MLP network predicts the optimal threshold value better than the RBF network when the same number of neurons is used in the hidden layer of both networks. Moreover, the optimal threshold values obtained using ANNs are verified against the optimal threshold values obtained numerically from the closed-form expression derived for the system. The results show that the optimal threshold values obtained by ANNs in the best relay selection scheme provide minimum Bit-Error-Rate (BER) by reducing the probability of error propagation. Also, for the same BER performance goal, prediction of optimal threshold values provides 2 dB less power usage, which is a significant gain in terms of green communication.
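A minimal sketch of the regression setup: a one-hidden-layer MLP trained by gradient descent to map (scaled) link quality and relay count to a threshold value. The target function below is purely synthetic; in the paper the training targets would come from the derived closed-form expression:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data (hypothetical): average SNR in dB and relay count.
snr = rng.uniform(0.0, 20.0, size=(500, 1))
n_relays = rng.integers(1, 6, size=(500, 1)).astype(float)
X = np.hstack([snr / 20.0, n_relays / 5.0])     # crude feature scaling
y = 0.5 * X[:, :1] + 0.2 * X[:, 1:] + 0.1       # made-up threshold curve

# One hidden layer of 16 tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.2

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses = []
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared-error loss.
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = X.T @ g_h;    gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

An RBF network would replace the tanh hidden layer with Gaussian radial units; the comparison in the paper is between these two architectures at equal hidden-layer size.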
• Fluid queues are mathematical models frequently used in stochastic modelling. Their stationary distributions involve a key matrix recording the conditional probabilities of returning to an initial level from above, often known in the literature as the matrix $\Psi$. Here, we present a probabilistic interpretation of the family of algorithms known as doubling, which are currently the most effective algorithms for computing the return probability matrix $\Psi$. To this end, we first revisit the links described in [ram99, soares02] between fluid queues and Quasi-Birth-Death processes; in particular, we give new probabilistic interpretations for these connections. We generalize this framework to give a probabilistic meaning to the initial step of doubling algorithms, and also include an interpretation for their iterative step. Our work is the first probabilistic interpretation available for doubling algorithms.
• Privacy-preserving data splitting is a technique that aims to protect data privacy by storing different fragments of data in different locations. In this work we give a new combinatorial formulation of the data splitting problem: data attributes must be split into fragments in a way that satisfies certain combinatorial properties derived from processing and privacy constraints. Using this formulation, we develop new combinatorial and algebraic techniques to obtain solutions to the data splitting problem. We present an algebraic method which builds an optimal data splitting solution by using Gröbner bases. Since this method is not efficient in general, we also develop a greedy algorithm for finding solutions that are not necessarily of minimal size.
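A greedy strategy of the kind described can be sketched as follows. Each privacy constraint is modeled as a set of attributes that must not be co-located, and each attribute is placed into the first fragment that leaves every constraint unbroken. The attribute names and constraint sets below are hypothetical, and — as the abstract notes for the greedy approach — the result is not guaranteed to use a minimal number of fragments:

```python
def greedy_split(attributes, constraints):
    """Greedy sketch of privacy-preserving data splitting. `constraints` is a
    list of sets of attributes that must not end up together in one fragment.
    Returns a list of fragments (sets of attributes)."""
    fragments = []
    for attr in attributes:
        placed = False
        for frag in fragments:
            candidate = frag | {attr}
            # Keep the fragment only if no constraint becomes fully contained.
            if not any(c <= candidate for c in constraints):
                frag.add(attr)
                placed = True
                break
        if not placed:
            fragments.append({attr})
    return fragments

attrs = ["name", "zip", "birth", "diagnosis", "salary"]
cons = [{"name", "diagnosis"}, {"zip", "birth", "salary"}]
frags = greedy_split(attrs, cons)   # e.g. [{name, zip, birth}, {diagnosis, salary}]
```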
• Tools for quadrotor trajectory design have enabled single videographers to create complex aerial video shots that previously required dedicated hardware and several operators. We build on this prior work by studying film-makers' working practices, which informed a system design that brings expert workflows closer to end-users. For this purpose, we propose WYFIWYG, a new quadrotor camera tool which (i) allows a video to be designed solely by specifying its frames, (ii) encourages exploration of the scene prior to filming and (iii) allows a camera target to be framed continuously according to compositional intentions. Furthermore, we propose extensions to an existing algorithm, generating more intuitive angular camera motions and producing spatially and temporally smooth trajectories. Finally, we conduct a user study where we evaluate how end-users work with current videography tools. We conclude by summarizing the findings of our work as implications for the design of UIs and algorithms of quadrotor camera tools.
• Computer-aided early diagnosis of Alzheimer's Disease (AD) and its prodromal form, Mild Cognitive Impairment (MCI), has been the subject of extensive research in recent years. Some recent studies have shown promising results in AD and MCI determination using structural and functional Magnetic Resonance Imaging (sMRI, fMRI), Positron Emission Tomography (PET) and Diffusion Tensor Imaging (DTI) modalities. Furthermore, the fusion of imaging modalities in a supervised machine learning framework has emerged as a promising direction of research. In this paper we first review major trends in automatic classification methods, such as feature-extraction-based methods as well as deep learning approaches in medical image analysis, applied to the field of Alzheimer's Disease diagnostics. We then propose our own algorithm for Alzheimer's Disease diagnostics, based on a convolutional neural network and the fusion of sMRI and DTI modalities on a hippocampal ROI, using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (http://adni.loni.usc.edu). Comparison with a single-modality approach shows promising results. We also propose our own method of data augmentation for balancing classes of different sizes, and analyze the impact of the ROI size on the classification results.
• The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability. Formal verification can address these concerns by guaranteeing that a deep learning system operates as intended, but the state-of-the-art is limited to small systems. In this work-in-progress report we give an overview of our work on mitigating this difficulty, by pursuing two complementary directions: devising scalable verification techniques, and identifying design choices that result in deep learning systems that are more amenable to verification.

Ludovico Lami Jan 19 2018 00:08 UTC

Very nice work, congratulations! I just want to point out that the "index of separability" had already been defined in arXiv:1411.2517, where it was called "entanglement-breaking index" and studied in some detail. The channels that have a finite index of separability had been dubbed "entanglement-sa

...(continued)
Blake Stacey Jan 17 2018 20:06 UTC

Eq. (14) defines the sum negativity as $\sum_u |W_u| - 1$, but there should be an overall factor of $1/2$ (see arXiv:1307.7171, definition 10). For both the Strange states and the Norrell states, the sum negativity should be $1/3$: The Strange states (a.

...(continued)
serfati philippe Jan 11 2018 18:49 UTC

on (x,t) localized regularities for the nd euler eqs, my "Presque-localité de l'équation d'Euler incompressible sur Rn et domaines de propagation non lineaire et semi lineaire" 1995 (https://www.researchgate.net/profile/Philippe_Serfati) and eg "Local regularity criterion of the Beale-Kato-Majda typ

...(continued)
serfati philippe Jan 08 2018 11:27 UTC

.Approximated positive vortex dirac for euler2d, (very) weak L°° initial bounds and non-trivial x-diameters of t-expansions = about 3/ "Euler evolution of a concentrated vortex in planar bounded domains" Daomin Cao, Guodong Wang (Submitted on 5 Jan 2018) https://arxiv.org/abs/1801.01629v1, see my 2

...(continued)
Steve Flammia Dec 18 2017 20:59 UTC

It splits into even and odd cases, actually. I was originally sloppy about the distinction between integer and polynomial division, but it's fixed now. There is a little room left in the case $d=3$ now though, but it's still proven in every other dimension.

Aram Harrow Dec 18 2017 19:30 UTC

whoa, awesome! But why do you get that $d^3-d$ must be a divisor instead of $(d^3-d)/2$?

Steve Flammia Dec 17 2017 20:25 UTC

The following observation resolves in the affirmative a decade-old open conjecture from this paper, except for $d=3$.

The Conjecture asks if any unitary 2-design must have cardinality at least $d^4 - d^2$, a value which is achievable by a Clifford group. This is true for any group unitary 2-design

...(continued)
Andrew W Simmons Dec 14 2017 11:40 UTC

Hi Māris, you might well be right! Stabiliser QM with more qubits, I think, is also a good candidate for further investigation to see if we can close the gap a bit more between the analytical upper bound and the example-based lower bound.