# Top arXiv papers

• This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in understanding the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. We therefore start by explaining two techniques for dealing with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques, a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains, and it answers a question from matrix analysis that had been open since 1973, namely whether Lieb's triple matrix inequality can be extended to more than three matrices. Finally, we carefully discuss the properties of approximate quantum Markov chains and their implications.
• We show that DNF formulae can be quantum PAC-learned in polynomial time under product distributions using a quantum example oracle. The best known classical algorithm (without access to membership queries) runs in superpolynomial time. Our result extends the work of Bshouty and Jackson (1998), who proved that DNF formulae are efficiently learnable under the uniform distribution using a quantum example oracle. Our proof is based on a new quantum algorithm that efficiently samples the coefficients of a $\mu$-biased Fourier transform.
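The $\mu$-biased Fourier coefficients mentioned above can be illustrated classically. Below is a minimal Monte Carlo sketch (not the paper's quantum algorithm, whose speedup comes from the quantum example oracle); the basis functions $\phi_S$ and the product distribution are standard, but the concrete test function and parameters are our own illustrative choices.

```python
import random

def mu_biased_phi(x, S, mu):
    # phi_S(x) = prod_{i in S} (x_i - mu_i) / sqrt(1 - mu_i^2)
    val = 1.0
    for i in S:
        val *= (x[i] - mu[i]) / (1.0 - mu[i] ** 2) ** 0.5
    return val

def estimate_coefficient(f, S, mu, n_samples=20000, seed=0):
    """Monte Carlo estimate of the mu-biased Fourier coefficient
    E_x[f(x) * phi_S(x)] under the product distribution with E[x_i] = mu_i."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # draw x_i in {-1, +1} with P(x_i = 1) = (1 + mu_i)/2, so E[x_i] = mu_i
        x = [1 if rng.random() < (1 + m) / 2 else -1 for m in mu]
        total += f(x) * mu_biased_phi(x, S, mu)
    return total / n_samples
```

For example, for $f(x) = x_0$ and $\mu_0 = 1/2$, the coefficient on $S = \{0\}$ equals $\sqrt{1 - \mu_0^2} \approx 0.866$, which the sampler recovers.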
• In [arXiv:1712.03219], a strongly (pointwise) converging sequence of quantum channels was exhibited that cannot be represented as a reduction of a sequence of unitary channels strongly converging to a unitary channel. In this work we give a simple characterization of the sequences of quantum channels that do admit such a representation. The corresponding convergence is called $*$-strong convergence, since it relates to the convergence of selective Stinespring isometries of quantum channels in the $*$-strong operator topology. Some properties of the $*$-strong convergence of quantum channels are considered.
• Many fractional quantum Hall states can be expressed as a correlator of a given conformal field theory used to describe their edge physics. As a consequence, these states admit an economical representation as exact Matrix Product States (MPS), which has been extensively studied for systems without spin or other internal degrees of freedom. In that case, the correlators are built from a single electronic operator, which is primary with respect to the underlying conformal field theory. We generalize this construction to the archetype of Abelian multicomponent fractional quantum Hall wavefunctions, the Halperin states. The latter can be written as conformal blocks involving multiple electronic operators, and we explicitly derive their exact MPS representation. In particular, we deal with the caveat of the full wavefunction symmetry and show that any additional SU(2) symmetry is preserved by the natural MPS truncation scheme provided by the conformal dimension. We use our method to characterize the topological order of the Halperin states by extracting the topological entanglement entropy. We also evaluate their bulk correlation lengths, which are compared to plasma analogy arguments.
• We study a one-dimensional system of strongly-correlated bosons interacting with a dynamical lattice. A minimal model describing the latter is provided by extending the standard Bose-Hubbard Hamiltonian to include extra degrees of freedom on the bonds of the lattice. We show that this model is capable of reproducing phenomena similar to those present in usual fermion-phonon models. In particular, we discover a bosonic analog of the Peierls transition, where the translational symmetry of the underlying lattice is spontaneously broken. The latter provides a dynamical mechanism to obtain a topological insulator in the presence of interactions, analogous to the Su-Schrieffer-Heeger (SSH) model for electrons. We numerically characterize the phase diagram of the model, which includes different types of bond order waves and topological solitons. Finally, we study the possibility of implementing the model experimentally using atomic systems.
• Non-Markovian quantum effects are typically observed in systems interacting with structured reservoirs. Discrete-time quantum walks are a prime example of such systems, in which quantum memory arises from the controlled interaction between the coin and position degrees of freedom. Here we show that the information backflow that quantifies memory effects can be enhanced when the particle is subjected to uncorrelated static or dynamic disorder. The presence of disorder in the system leads to localization effects in one-dimensional quantum walks. We show that it is possible to infer the nature of localization in position space by monitoring the information backflow in the reduced system. Further, we study other useful properties of the reduced system, such as entanglement and interference, and their connection to quantum non-Markovianity.
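For readers unfamiliar with the setup, here is a minimal sketch of a one-dimensional discrete-time quantum walk with a Hadamard coin and optional static phase disorder. The disorder model (a single random phase per lattice site, applied after the coin) is a simplifying assumption for illustration, not necessarily the one used in the paper.

```python
import cmath
import random

def dtqw(steps, disorder=0.0, seed=0):
    """1-D discrete-time quantum walk with a Hadamard coin and optional
    static phase disorder of strength `disorder` (in units of 2*pi).
    Returns the position probability distribution after `steps` steps."""
    rng = random.Random(seed)
    n = 2 * steps + 1                      # lattice sites -steps..steps
    h = 1 / 2 ** 0.5
    # static random phase per site (all equal to 1 when disorder == 0)
    phase = [cmath.exp(2j * cmath.pi * disorder * (rng.random() - 0.5))
             for _ in range(n)]
    # initial state: particle at the origin, symmetric coin state
    psi = [[0j, 0j] for _ in range(n)]
    psi[steps] = [h + 0j, 1j * h]
    for _ in range(steps):
        new = [[0j, 0j] for _ in range(n)]
        for x in range(n):
            up, dn = psi[x]
            cu, cd = h * (up + dn), h * (up - dn)  # Hadamard coin
            cu *= phase[x]; cd *= phase[x]         # site-dependent phase
            if x > 0:
                new[x - 1][0] += cu                # coin 0 steps left
            if x < n - 1:
                new[x + 1][1] += cd                # coin 1 steps right
        psi = new
    return [abs(a) ** 2 + abs(b) ** 2 for a, b in psi]
```

Each step is unitary, so the position distribution stays normalized with or without disorder; comparing `dtqw(t)` and `dtqw(t, disorder=0.5)` shows the spreading suppressed by the random phases.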
• Optical cavities are one of the best ways to increase atom-light coupling and will be a key ingredient for future quantum technologies that rely on light-matter interfaces. We demonstrate that traveling-wave "ring" cavities can achieve a greatly reduced mode waist $w$, leading to larger atom-cavity coupling strength, relative to conventional standing-wave cavities for given mirror separation and stability. Additionally, ring cavities can achieve arbitrary transverse-mode spacing simultaneously with the large mode-waist reductions. Following these principles, we build a parabolic atom-ring cavity system that achieves strong collective coupling $NC = 15(1)$ between $N=10^3$ Rb atoms and a ring cavity with a single-atom cooperativity $C$ that is a factor of $35(5)$ times greater than what could be achieved with a near-confocal standing-wave cavity with the same mirror separation and finesse. By using parabolic mirrors, we eliminate astigmatism, which can otherwise preclude stable operation, and increase optical access to the atoms. Cavities based on these principles, with enhanced coupling and large mirror separation, will be particularly useful for achieving strong coupling with ions, Rydberg atoms, or other strongly interacting particles, which often have undesirable interactions with nearby surfaces.
• We analyze the tailored coupled-cluster (TCC) method, a multi-reference formalism that combines the single-reference coupled-cluster (CC) approach with a full configuration interaction (FCI) solution covering the static correlation. This covers in particular the highly efficient coupled-cluster method tailored by tensor-network states (TNS-TCC). For statically correlated systems, we introduce the conceptually new CAS-ext-gap assumption for multi-reference problems, which replaces the HOMO-LUMO gap assumption that is unreasonable in this setting. We characterize the TCC function and show local strong monotonicity and Lipschitz continuity, so that Zarantonello's theorem yields locally unique solutions fulfilling a quasi-optimal error bound for the TCC method. We perform an energy error analysis revealing the mathematical complexity of the TCC method. Due to the basis-splitting nature of the TCC formalism, the error decomposes into several parts. Using the Aubin-Nitsche duality method, we derive a quadratic (Newton-type) error bound valid for the linear-tensor-network TCC scheme DMRG-TCC and other TNS-TCC methods.
• The 1-D Anderson model possesses a completely localized spectrum of eigenstates for all values of the disorder. We consider the effect of projecting the Hamiltonian to a truncated Hilbert space, destroying time reversal symmetry. We analyze the ensuing eigenstates using different measures such as inverse participation ratio and sample-averaged moments of the position operator. In addition, we examine amplitude fluctuations in detail to detect the possibility of multifractal behavior (characteristic of mobility edges) that may arise as a result of the truncation procedure.
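The inverse participation ratio used above as a localization measure is straightforward to compute; a minimal sketch with two limiting example states (our own, for illustration):

```python
def ipr(psi):
    """Inverse participation ratio: sum_x |psi_x|^4 / (sum_x |psi_x|^2)^2.
    Equals 1 for a state localized on a single site and 1/N for a state
    spread uniformly over N sites."""
    norm2 = sum(abs(a) ** 2 for a in psi)
    return sum(abs(a) ** 4 for a in psi) / norm2 ** 2

N = 100
localized = [1.0 if x == N // 2 else 0.0 for x in range(N)]  # delta peak
extended = [1.0 / N ** 0.5] * N                              # plane-wave-like
```

Sample-averaging `ipr` over eigenstates and disorder realizations, as a function of energy, is the standard way such truncation effects on localization are diagnosed.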
• The core problem in optimal control theory applied to quantum systems is to determine the temporal shape of an applied field that maximizes the expectation value of some physical observable. The functional that maps the control field to a given value of the observable defines a Quantum Control Landscape (QCL). Studying the topological and structural features of these landscapes is of critical importance for understanding the process of finding the optimal fields required to effectively control the system, especially when external constraints are placed on both the field $\epsilon(t)$ and the available control duration $T$. In this work we analyze the rich structure of the QCL of the paradigmatic Landau-Zener two-level model, studying several features of the optimized solutions, such as their abundance, spatial distribution and fidelities. We also inspect the optimization trajectories in parameter space. We are able to rationalize several geometrical and topological aspects of the QCL of this simple model and the effects produced by the constraints. Our study opens the door to a deeper understanding of the QCL of general quantum systems.
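As a toy illustration of exploring such a landscape (not the paper's methodology), one can propagate a two-level system under a piecewise-constant field and search for pulses maximizing a state-transfer fidelity. The Hamiltonian parametrization $H(t) = \Delta\,\sigma_z/2 + \epsilon(t)\,\sigma_x/2$, the field bounds, and the random-search optimizer are all our own assumptions:

```python
import math
import random

def propagate(eps_list, delta, dt):
    """Evolve a two-level system under H(t) = delta*sz/2 + eps(t)*sx/2 with
    piecewise-constant eps, using the closed form for a traceless 2x2 H:
    exp(-i(a*sz + b*sx)dt) = cos(w dt) I - i sin(w dt)(a*sz + b*sx)/w,
    with w = sqrt(a^2 + b^2). Starts in |0>."""
    psi = [1 + 0j, 0j]
    for eps in eps_list:
        a, b = delta / 2, eps / 2
        w = math.sqrt(a * a + b * b)
        c, s = math.cos(w * dt), math.sin(w * dt)
        u00 = c - 1j * s * a / w
        u01 = -1j * s * b / w          # also equals u10
        u11 = c + 1j * s * a / w
        psi = [u00 * psi[0] + u01 * psi[1], u01 * psi[0] + u11 * psi[1]]
    return psi

def transfer_fidelity(eps_list, delta=1.0, dt=0.5):
    psi = propagate(eps_list, delta, dt)
    return abs(psi[1]) ** 2            # population transferred to |1>

def random_search(n_segments=8, n_trials=2000, seed=0):
    """Naive landscape exploration: sample random bounded pulses and keep
    the best; real studies use gradient-based optimizers."""
    rng = random.Random(seed)
    best, best_eps = 0.0, None
    for _ in range(n_trials):
        eps = [rng.uniform(-4, 4) for _ in range(n_segments)]
        f = transfer_fidelity(eps)
        if f > best:
            best, best_eps = f, eps
    return best, best_eps
```

Tightening the field bound or shortening the total duration `n_segments * dt` is exactly the kind of constraint under which the landscape structure studied in the paper becomes nontrivial.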
• Uniquely among the sciences, quantum cryptography has driven both foundational research and practical real-life applications. We review the progress of quantum cryptography in the last decade, covering quantum key distribution and other applications.
• In this paper we study continuous parametrized families of dissipative flows, that is, flows having a global attractor. The main motivation for this study comes from the observation that, in general, global attractors are not robust, in the sense that small perturbations of the flow can destroy their globality. We give a necessary and sufficient condition for a global attractor to be continued to a global attractor. We also study, using shape-theoretical methods and the Conley index, the bifurcation from global to non-global attractors.
• Let f be a local diffeomorphism between real Banach spaces. We prove that if the locally Lipschitz functional F(x) = 1/2 |f(x) - y|^2 satisfies the Chang Palais-Smale condition for all y in the target space of f, then f is a norm-coercive global diffeomorphism. We also give a version of this fact for a weighted Chang Palais-Smale condition. Finally, we study the relationship of this criterion to some classical global inversion conditions.
• We propose the Roe C*-algebra from coarse geometry as a model for topological phases of disordered materials. We explain the robustness of this C*-algebra and formulate the bulk-edge correspondence in this framework. We describe the map from the K-theory of the group C*-algebra of Z^d to the K-theory of the Roe C*-algebra, both for real and complex K-theory.
• An $n \times n$ matrix $A$ with real entries is said to be Schur stable if all the eigenvalues of $A$ are inside the open unit disc. We investigate the structure of linear maps on $M_n(\mathbb{R})$ that preserve the collection $\mathcal{S}$ of Schur stable matrices. We prove that if $L$ is a linear map such that $L(\mathcal{S}) \subseteq \mathcal{S}$, then $\rho(L)$ (the spectral radius of $L$) is at most $1$ and when $L(\mathcal{S}) = \mathcal{S}$, we have $\rho(L) = 1$. In the latter case, the map $L$ preserves the spectral radius function and using this, we characterize such maps on both $M_n(\mathbb{R})$ as well as on $\mathcal{S}^n$.
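The definition of Schur stability is easy to check numerically; a minimal $2 \times 2$ sketch (our own illustration, not from the paper), using the characteristic polynomial to get the eigenvalues:

```python
import cmath

def is_schur_stable_2x2(A):
    """A real matrix is Schur stable iff all eigenvalues lie in the open
    unit disc; for 2x2 the eigenvalues are the roots of
    lambda^2 - tr(A)*lambda + det(A) = 0."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)        # handles complex pairs
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return abs(lam1) < 1 and abs(lam2) < 1
```

For instance, the scaled rotation `[[0.3, -0.4], [0.4, 0.3]]` has eigenvalues $0.3 \pm 0.4i$ of modulus $0.5$ and is Schur stable, while any matrix with an eigenvalue of modulus $\geq 1$ is not.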
• We provide a characterization of $\mathrm{BMO}$ in terms of endpoint boundedness of commutators of singular integrals. In particular, in one dimension, we show that $\|b\|_{\mathrm{BMO}}\eqsim B$, where $B$ is the best constant in the endpoint $L\log L$ modular estimate for the commutator $[H,b]$. We provide a similar characterization of the space $\mathrm{BMO}$ in terms of endpoint boundedness of higher order commutators of the Hilbert transform. In higher dimension we give the corresponding characterization of $\mathrm{BMO}$ in terms of the first order commutators of the Riesz transforms. We also show that these characterizations can be given in terms of commutators of more general singular integral operators of convolution type.
• The Gram spectrahedron $\text{Gram}(f)$ of a form $f$ with real coefficients parametrizes the sum of squares decompositions of $f$, modulo orthogonal equivalence. For $f$ a sufficiently general positive binary form of arbitrary degree, we show that $\text{Gram}(f)$ has extreme points of all ranks in the Pataki range. This is the first example of a family of spectrahedra of arbitrarily large dimensions with this property. We also calculate the dimension of the set of rank $r$ extreme points, for any $r$. Moreover, we determine the pairs of rank two extreme points for which the connecting line segment is an edge of $\text{Gram}(f)$.
• Stochastic growth processes in dimension $(2+1)$ were conjectured by D. Wolf, on the basis of renormalization-group arguments, to fall into two distinct universality classes, according to whether the Hessian $H_\rho$ of the speed of growth $v(\rho)$ as a function of the average slope $\rho$ satisfies $\det H_\rho>0$ ("isotropic KPZ class") or $\det H_\rho\le 0$ ("anisotropic KPZ (AKPZ)" class). The former is characterized by strictly positive growth and roughness exponents, while in the AKPZ class fluctuations are logarithmic in time and space. It is natural to ask (a) if one can exhibit interesting growth models with "rigid" stationary states, i.e., with $O(1)$ fluctuations (instead of logarithmically or power-like growing, as in Wolf's picture) and (b) what new phenomena arise when $v(\cdot)$ is not smooth, so that $H_\rho$ is not defined. The two questions are actually related and here we provide an answer to both, in a specific framework. We define a $(2+1)$-dimensional interface growth process, based on the so-called shuffling algorithm for domino tilings. The stationary, non-reversible measures are translation-invariant Gibbs measures on perfect matchings of $\mathbb Z^2$, with $2$-periodic weights. If $\rho\ne0$, fluctuations are known to grow logarithmically in space and to behave like a two-dimensional GFF. We prove that fluctuations grow at most logarithmically in time and that $\det H_\rho<0$: the model belongs to the AKPZ class. When $\rho=0$, instead, the stationary state is "rigid", with correlations uniformly bounded in space and time; correspondingly, $v(\cdot)$ is not differentiable at $\rho=0$ and we extract the singularity of the eigenvalues of $H_\rho$ for $\rho\sim 0$.
• In a previous paper we derived equivalence relations for pseudo-Wronskian determinants of Hermite polynomials. In this paper we obtain the analogous result for Laguerre and Jacobi polynomials. The equivalence formulas are richer in this case since rational Darboux transformations can be defined for four families of seed functions, as opposed to only two families in the Hermite case. The pseudo-Wronskian determinants of Laguerre and Jacobi type will thus depend on two Maya diagrams, while Hermite pseudo-Wronskians depend on just one Maya diagram. We show that these equivalence relations can be interpreted as the general transcription of shape invariance and specific discrete symmetries acting on the parameters of the isotonic oscillator and Darboux-Pöschl-Teller potential.
• We show how the iterative decoding threshold of tailbiting spatially coupled (SC) low-density parity-check (LDPC) code ensembles can be improved over the binary input additive white Gaussian noise channel by allowing the use of different transmission energies for the codeword bits. We refer to the proposed approach as energy shaping. We focus on the special case where the transmission energy of a bit is selected among two values, and where a contiguous portion of the codeword is transmitted with the largest one. Given these constraints, an optimal energy boosting policy is derived by means of protograph extrinsic information transfer analysis. We show that the threshold of tailbiting SC-LDPC code ensembles can be made close to that of terminated code ensembles while avoiding the rate loss (due to termination). The analysis is complemented by Monte Carlo simulations, which confirm the viability of the approach.
• We study the Boussinesq approximation for rapidly rotating stably-stratified fluids in a three dimensional infinite layer with either stress-free or periodic boundary conditions in the vertical direction. For initial conditions satisfying a certain quasi-geostrophic smallness condition, we use dispersive estimates and the large rotation limit to prove global-in-time existence of solutions. We then use self-similar variable techniques to show that the barotropic vorticity converges to an Oseen vortex, while other components decay to zero. We finally use algebraically weighted spaces to determine leading order asymptotics. In particular we show that the barotropic vorticity approaches the Oseen vortex with algebraic rate while the barotropic vertical velocity and thermal fluctuations go to zero as Gaussians whose amplitudes oscillate in opposite phase of each other while decaying with an algebraic rate.
• (Feb 16 2018, hep-th, arXiv:1802.05362v1) We study the large source asymptotics of the generating functional in quantum field theory using the holographic renormalization group, and draw comparisons with the asymptotics of the Hopf characteristic function in fractal geometry. Based on the asymptotic behavior, we find a correspondence relating the Weyl anomaly and the fractal dimension of the Euclidean path integral measure. We are led to propose an equivalence between the logarithmic ultraviolet divergence of the Shannon entropy of this measure and the integrated Weyl anomaly, reminiscent of a known relation between logarithmic divergences of entanglement entropy and a central charge. It follows that the information dimension associated with the Euclidean path integral measure satisfies a c-theorem.
• We introduce a join construction as a way of completing the description of the relative conormal space of a function, and then apply a recent result of the second author to deduce a numerical criterion for the A_f condition for the case when the function has non-vanishing derivative at the origin.
• We point out the existence of a new general relativistic contribution to the perihelion advance of Mercury that, while smaller than the contributions arising from the solar quadrupole moment and angular momentum, is 100 times larger than the second-post-Newtonian contribution. It arises in part from relativistic "cross-terms" in the post-Newtonian equations of motion between Mercury's interaction with the Sun and with the other planets, and in part from an interaction between Mercury's motion and the gravitomagnetic field of the moving planets. At a few parts in $10^6$ of the leading general relativistic precession of 42.98 arcseconds per century, these effects are likely to be detectable by the BepiColombo mission to place and track two orbiters around Mercury, scheduled for launch around 2018.
• We determine constraints on spatially-flat tilted dynamical dark energy XCDM and $\phi$CDM inflation models by analyzing Planck 2015 cosmic microwave background (CMB) anisotropy data and baryon acoustic oscillation (BAO) distance measurements. XCDM is a simple and widely used but physically inconsistent parameterization of dynamical dark energy, while the $\phi$CDM model is a physically consistent one in which a scalar field $\phi$ with an inverse power-law potential energy density powers the currently accelerating cosmological expansion. Both these models have one additional parameter compared to standard $\Lambda$CDM and both better fit the TT + lowP + lensing + BAO data than does the standard tilted flat-$\Lambda$CDM model, with $\Delta \chi^2 = -1.26\ (-1.60)$ for the XCDM ($\phi$CDM) model relative to the $\Lambda$CDM model. While this is a 1.1$\sigma$ (1.3$\sigma$) improvement over standard $\Lambda$CDM and so not significant, dynamical dark energy models cannot be ruled out. In addition, both dynamical dark energy models reduce the tension between the Planck 2015 CMB anisotropy and the weak lensing $\sigma_8$ constraints.
• In this paper, we show that gravity can emerge from an effective field theory, obtained by tracing out the fermionic system from an interacting quantum field theory, when we impose the condition that the field equations must be Cauchy predictable. The source of the gravitational field can be identified with the quantum interactions that existed in the interacting QFT. This relation is very similar to the ER=EPR conjecture and relies strongly on the fact that the emergence of a classical theory depends on the underlying quantum processes and interactions. We consider two concrete examples: one where initially there is no gravity, and another where gravity is present. The latter case results in first-order corrections to Einstein's equations and immediately reproduces well-known results such as effective event horizons and gravitational birefringence.
• We generalize Banaszczyk's seminal tail bound for the Gaussian mass of a lattice to a wide class of test functions. We therefore obtain quite general transference bounds, as well as bounds on the number of lattice points contained in certain bodies. As example applications, we bound the lattice kissing number in $\ell_p$ norms by $e^{(n+ o(n))/p}$ for $0 < p \leq 2$, and also give a proof of a new transference bound in the $\ell_1$ norm.
• Gravitational lensing deflects the paths of cosmic infrared background (CIB) photons, leaving a measurable imprint on CIB maps. The resulting statistical anisotropy can be used to reconstruct the matter distribution out to the redshifts of CIB sources. To this end, we generalize the CMB lensing quadratic estimator to any weakly non-Gaussian source field, by deriving the optimal lensing weights. We point out the additional noise and bias caused by the non-Gaussianity and the `self-lensing' of the source field. We propose methods to reduce, subtract or model these non-Gaussianities. We show that CIB lensing should be detectable with Planck data, and detectable at high significance for future CMB experiments like CCAT-Prime. The CIB thus constitutes a new source image for lensing studies, providing constraints on the amplitude of structure at intermediate redshifts between galaxies and the CMB. CIB lensing measurements will also give valuable information on the star formation history in the universe, constraining CIB halo models beyond the CIB power spectrum. By laying out a detailed treatment of lens reconstruction from a weakly non-Gaussian source field, this work constitutes a stepping stone towards lens reconstruction from continuum or line intensity mapping data, such as the Lyman-alpha emission, absorption, and the 21cm radiation.
• We prove that given a free group $\mathbb{F}$ of finite rank $\geq 3$ and two exponentially growing outer automorphisms $\psi$ and $\phi$ with dual lamination pairs $\Lambda^\pm_\psi$ and $\Lambda^\pm_\phi$ associated to them, and given a free factor system $\mathcal{F}$ with co-edge number $\geq 2$, with $\phi, \psi$ each preserving $\mathcal{F}$, such that the pair $(\phi, \Lambda^\pm_\phi), (\psi, \Lambda^\pm_\psi)$ is independent relative to $\mathcal{F}$, then there exists $M\geq 1$ such that for all integers $m,n \geq M$, the group $\langle \phi^m, \psi^n \rangle$ is a free group of rank 2, all of whose non-trivial elements, except perhaps the powers of $\phi, \psi$ and their conjugates, are fully irreducible relative to $\mathcal{F}$ with a lamination pair which fills relative to $\mathcal{F}$. In addition, if both $\Lambda^\pm_\phi, \Lambda^\pm_\psi$ are non-geometric, then this lamination pair is also non-geometric. We also prove that the extension groups induced by such subgroups will be relatively hyperbolic under some natural conditions.
• In this paper we initiate the study of $\aleph_0$-categorical semigroups, where a countable semigroup is $\aleph_0$-categorical if it is defined up to isomorphism by its first order theory. We show that $\aleph_0$-categoricity transfers to certain important substructures such as maximal subgroups and principal factors. Conversely, we consider when $\aleph_0$-categoricity is implied by the $\aleph_0$-categoricity of the substructures. We examine the relationship between $\aleph_0$-categoricity and a number of semigroup and monoid constructions, namely direct sums, 0-direct unions, semidirect products and $\mathcal{P}$-semigroups. As a corollary, we determine the $\aleph_0$-categoricity of an $E$-unitary inverse semigroup with finite semilattice of idempotents in terms of that of the maximal group homomorphic image.
• (Feb 16 2018, math.AG, arXiv:1802.05702v1) We prove a universal property for blow-ups in regularly immersed subschemes, based on a notion we call "virtual effective Cartier divisor". We also construct blow-ups of regular closed immersions in derived algebraic geometry.
• Generative adversarial networks (GANs) learn a deep generative model that is able to synthesise novel, high-dimensional data samples. New data samples are synthesised by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an "inverse model", a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pre-trained GAN. Using our proposed inversion technique, we are able to identify which attributes of a dataset a trained GAN is able to model, and to quantify GAN performance based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image datasets. We provide code for all of our experiments: https://github.com/ToniCreswell/InvertingGAN.
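The inversion idea, minimising a reconstruction loss over latent codes, can be sketched with a toy linear "generator" standing in for a trained GAN; the real method optimizes through a deep network, so the map `G`, its gradient, and all parameters below are illustrative assumptions only:

```python
import random

# Toy linear "generator" G(z) = W z; a real GAN is a deep network, so this
# is only a stand-in to illustrate the optimisation loop.
W = [[2.0, 0.0], [0.0, 0.5], [1.0, 1.0]]   # maps z in R^2 to x in R^3

def G(z):
    return [sum(wij * zj for wij, zj in zip(row, z)) for row in W]

def grad_z(z, x):
    # gradient of L(z) = ||G(z) - x||^2, which here is 2 W^T (G(z) - x)
    r = [gi - xi for gi, xi in zip(G(z), x)]
    return [2 * sum(W[i][j] * r[i] for i in range(len(W)))
            for j in range(len(z))]

def invert(x, z_dim=2, steps=500, lr=0.05, seed=0):
    """Recover a latent code for x by gradient descent on the
    reconstruction loss, the core idea behind GAN inversion."""
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in range(z_dim)]     # sample from the prior
    for _ in range(steps):
        z = [zi - lr * gi for zi, gi in zip(z, grad_z(z, x))]
    return z
```

Because this toy loss is a strictly convex quadratic, `invert(G([1.0, -2.0]))` recovers the latent code exactly; with a deep generator the loss is non-convex and the recovered code depends on the initialization.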
• We are concerned with robust and accurate forecasting of multiphase flow rates in wells and pipelines during oil and gas production. In practice, the possibility to physically measure the rates is often limited; besides, it is desirable to estimate future values of multiphase rates based on the previous behavior of the system. In this work, we demonstrate that a Long Short-Term Memory (LSTM) recurrent neural network is able not only to accurately estimate the multiphase rates at the current time (i.e., act as a virtual flow meter), but also to forecast the rates for a sequence of future time instants. For a synthetic severe slugging case, LSTM forecasts compare favorably with the results of hydrodynamical modeling. LSTM results for a realistic noisy dataset of a variable rate well test show that the model can also successfully forecast multiphase rates for a system with changing flow patterns.
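For reference, the LSTM cell recurrence underlying such a model can be written in a few lines. This is a single forward step with one common weight layout (a single matrix acting on the concatenated hidden state and input), not the paper's trained network:

```python
import math

def lstm_cell(x, h, c, W, b):
    """One LSTM step: gates i, f, o and candidate g computed from [h; x]
    (weight matrix W of shape 4H x (H+X), bias b of length 4H), then
    c' = f*c + i*g  and  h' = o*tanh(c')."""
    H = len(h)
    concat = h + x
    z = [sum(wi * v for wi, v in zip(row, concat)) + bi
         for row, bi in zip(W, b)]

    def sig(t):
        return 1.0 / (1.0 + math.exp(-t))

    i = [sig(t) for t in z[:H]]            # input gate
    f = [sig(t) for t in z[H:2 * H]]       # forget gate
    o = [sig(t) for t in z[2 * H:3 * H]]   # output gate
    g = [math.tanh(t) for t in z[3 * H:]]  # candidate cell values
    c_new = [fi * ci + ii * gi for fi, ci, ii, gi in zip(f, c, i, g)]
    h_new = [oi * math.tanh(ci) for oi, ci in zip(o, c_new)]
    return h_new, c_new
```

In a forecasting setup like the one described, past rate measurements are fed in one step at a time and the final hidden state is mapped (by a learned output layer) to the predicted future rates.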
• We study quantum transport after an inhomogeneous quantum quench in the presence of a localised defect. We focus on free fermions on a one-dimensional lattice with a hopping defect and use an initial state with different densities on the left and right half of the system. By analytically deriving and numerically verifying the asymptotics of particle density and current at large times and distances, we demonstrate how the defect obstructs particle transport, resulting in partial preservation of the initial density difference between the two sides and in reduced steady state current in comparison with the defectless case. Our analytical results are exactly reproduced by a semiclassical treatment and generalised to an arbitrary non-interacting particle-conserving defect.
• The Polaron measure is defined as the transformed path measure $$\widehat{\mathbb P}_{\alpha,T}= Z_{\alpha,T}^{-1}\,\exp\bigg\{\frac{\alpha}{2}\int_{-T}^{T}\int_{-T}^{T}\frac{e^{-|t-s|}}{|\omega(t)-\omega(s)|} \,\mathrm{d} s \,\mathrm{d} t\bigg\}\,\mathrm{d}\mathbb P$$ with respect to $\mathbb P$, which governs the law of the increments of three-dimensional Brownian motion on a finite interval $[-T,T]$; here $Z_{\alpha,T}$ is the partition function (normalizing constant) and $\alpha>0$ is a constant. The Polaron measure reflects a self-attractive interaction. According to a conjecture of Pekar that was proved in [DV83], $$\gamma=\lim_{\alpha\to\infty}\frac{1}{\alpha^2}\bigg[\lim_{T\to\infty}\frac{\log Z_{\alpha,T}}{2T}\bigg]$$ exists and has a variational formula. In this article we show that for sufficiently small $\alpha>0$, the limit ${\widehat{\mathbb P}}_{\alpha}=\lim_{T\to\infty}\widehat{\mathbb P}_{\alpha,T}$ exists and identify it explicitly. As a corollary we deduce the central limit theorem for $\frac{1}{\sqrt{2T}}(\omega(T)-\omega(-T))$ under $\widehat{\mathbb P}_{\alpha,T}$ and obtain an expression for the limiting variance.
• Clinical notes are text documents that are created by clinicians for each patient encounter. They are typically accompanied by medical codes, which describe the diagnosis and treatment. Annotating these codes is labor intensive and error prone; furthermore, the connection between the codes and the text is not annotated, obscuring the reasons and details behind specific diagnoses and treatments. We present an attentional convolutional network that predicts medical codes from clinical text. Our method aggregates information across the document using a convolutional neural network, and uses an attention mechanism to select the most relevant segments for each of the thousands of possible codes. The method is accurate, achieving precision @ 8 of 0.7 and a Micro-F1 of 0.52, which are both significantly better than the prior state of the art. Furthermore, through an interpretability evaluation by a physician, we show that the attention mechanism identifies meaningful explanations for each code assignment.
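The per-label attention step described above can be sketched as follows: each label has a query vector, attention weights over the document positions come from a softmax of dot products, and the attended vector feeds that label's classifier. Shapes and names here are illustrative, not the paper's exact architecture:

```python
import math

def softmax(scores):
    m = max(scores)                       # subtract max for stability
    e = [math.exp(s - m) for s in scores]
    z = sum(e)
    return [v / z for v in e]

def per_label_attention(H, U):
    """H: list of T feature vectors (e.g. CNN outputs over the document);
    U: list of label query vectors. Returns one attended document vector
    per label, each a softmax-weighted sum of the positions in H."""
    out = []
    for u in U:
        a = softmax([sum(ui * hi for ui, hi in zip(u, h)) for h in H])
        v = [sum(at * h[k] for at, h in zip(a, H)) for k in range(len(H[0]))]
        out.append(v)
    return out
```

Because each label attends independently, the weights `a` directly indicate which text segments drove each code assignment, which is what makes the physician-facing interpretability evaluation possible.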
• Many text classification tasks are known to be highly domain-dependent. Unfortunately, the availability of training data can vary drastically across domains. Worse still, for some domains there may not be any annotated data at all. In this work, we propose a multinomial adversarial network (MAN) to tackle this real-world problem of multidomain text classification (MDTC). We provide theoretical justifications for the MAN framework, proving that different instances of MANs are essentially minimizers of various f-divergence metrics (Ali and Silvey, 1966) among multiple probability distributions. MANs are thus a theoretically sound generalization of traditional adversarial networks that discriminate over two distributions. More specifically, for the MDTC task, MAN learns features that are invariant across multiple domains by resorting to its ability to reduce the divergence among the feature distributions of each domain. We present experimental results showing that MANs significantly outperform the prior art on the MDTC task. We also show that MANs achieve state-of-the-art performance for domains with no labeled data.
• Many platforms are characterized by the fact that future user arrivals are likely to have preferences similar to users who were satisfied in the past. In other words, arrivals exhibit *positive externalities*. We study multiarmed bandit (MAB) problems with positive externalities. Our model has a finite number of arms, and users are distinguished by the arm(s) they prefer. We model positive externalities by assuming that the preferred arms of future arrivals are self-reinforcing based on the experiences of past users. We show that classical algorithms such as UCB, which are optimal in the classical MAB setting, may exhibit linear regret in the context of positive externalities. We provide an algorithm which achieves optimal regret and show that such optimal regret exhibits substantially different structure from that observed in the standard MAB setting.
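For context, the classical UCB1 baseline that can fail under positive externalities looks like this. This is a generic sketch of the standard algorithm (with stationary rewards, where it is near-optimal), not the paper's proposed algorithm:

```python
import math
import random

def ucb1(pull, n_arms, horizon, seed=0):
    """Classical UCB1: pull each arm once, then always pick the arm
    maximising empirical mean + sqrt(2 ln t / n_pulls). `pull(a, rng)`
    returns a reward in [0, 1]. Returns the per-arm pull counts."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(horizon):
        if t < n_arms:
            a = t                          # initial round-robin
        else:
            a = max(range(n_arms),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t + 1) / counts[i]))
        counts[a] += 1
        sums[a] += pull(a, rng)
    return counts
```

With stationary Bernoulli arms the exploration bonus shrinks and the best arm dominates; the paper's point is that when early pulls reshape which users arrive next, this same rule can lock in a suboptimal arm and incur linear regret.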
• In our previous work [Y. Angelopoulos, S. Aretakis, and D. Gajic, Late-time asymptotics for the wave equation on spherically symmetric stationary backgrounds, in Advances in Mathematics 323 (2018), 529-621] we showed that the coefficient in the precise leading-order late-time asymptotics for solutions to the wave equation with smooth, compactly supported initial data on Schwarzschild backgrounds is proportional to the time-inverted Newman-Penrose constant (TINP), that is, the Newman-Penrose constant of the associated time integral. The time integral (and hence the TINP constant) is canonically defined in the domain of dependence of any Cauchy hypersurface along which the stationary Killing field is non-vanishing. As a result, an explicit expression of the late-time polynomial tails was obtained in terms of initial data on Cauchy hypersurfaces intersecting the future event horizon to the future of the bifurcation sphere. In this paper, we extend the above result to Cauchy hypersurfaces intersecting the bifurcation sphere via a novel geometric interpretation of the TINP constant in terms of a modified gradient flux on Cauchy hypersurfaces. We show, without appealing to the time integral construction, that a general conservation law holds for these gradient fluxes. This allows us to express the TINP constant in terms of initial data on Cauchy hypersurfaces for which the time integral construction breaks down.
• There is increasing interest in the electronic properties of few-layer graphene, as it offers a platform to study electronic interactions: the dispersion of its bands can be tuned with the number and stacking of layers in combination with an electric field. However, electronic interactions become important only in very clean devices, and so far trilayer graphene experiments have been understood within the non-interacting electron picture. Here, we report evidence of strong electronic interactions and quantum Hall ferromagnetism (QHF) in ABA-stacked trilayer graphene (ABA-TLG). Thanks to the high mobility of our device ($\sim$500,000 cm$^2$V$^{-1}$s$^{-1}$, higher than in previous studies), we resolve all symmetry-broken states and find that Landau level (LL) gaps are enhanced by interactions, an aspect explained by our self-consistent Hartree-Fock (H-F) calculations. Moreover, we observe hysteresis as a function of filling factor ($\nu$) and spikes in the longitudinal resistance which, together, signal the formation of QHF states at low magnetic field.
• Predicting how a proposed cancer treatment will affect a given tumor can be cast as a machine learning problem, but the complexity of biological systems, the number of potentially relevant genomic and clinical features, and the lack of very large scale patient data repositories make this a unique challenge. "Pure data" approaches to this problem are underpowered to detect combinatorially complex interactions and are bound to uncover false correlations despite statistical precautions taken (1). To investigate this setting, we propose a method to integrate simulations, a strong form of prior knowledge, into machine learning, a combination which to date has been largely unexplored. The results of multiple simulations (under various uncertainty scenarios) are used to compute similarity measures between every pair of samples: sample pairs are given a high similarity score if they behave similarly under a wide range of simulation parameters. These similarity values, rather than the original high dimensional feature data, are used to train kernelized machine learning algorithms such as support vector machines, thus handling the curse-of-dimensionality that typically affects genomic machine learning. Using four synthetic datasets of complex systems--three biological models and one network flow optimization model--we demonstrate that when the number of training samples is small compared to the number of features, the simulation kernel approach dominates over no-prior-knowledge methods. In addition to biology and medicine, this approach should be applicable to other disciplines, such as weather forecasting, financial markets, and agricultural management, where predictive models are sought and informative yet approximate simulations are available. The Python SimKern software, the models (in MATLAB, Octave, and R), and the datasets are made freely available at https://github.com/davidcraft/SimKern.
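The kernel construction above can be sketched in a few lines: run every sample through many simulation settings, score each pair of samples by how often they behave alike, and classify in the resulting similarity space. The discretized simulation outputs and the kernel nearest-mean classifier below are simplifications for illustration (the released SimKern software supports richer similarity measures and kernelized SVMs):

```python
import numpy as np

def simulation_kernel(sim_outputs):
    """sim_outputs: (n_samples, n_simulations) array whose entry [i, s]
    is the discretized behavior of sample i under simulation setting s.
    The similarity of a pair is the fraction of settings under which the
    two samples behave identically."""
    n = sim_outputs.shape[0]
    K = np.zeros((n, n))
    for i in range(n):
        K[i] = (sim_outputs[i] == sim_outputs).mean(axis=1)
    return K

def kernel_nearest_mean(K, y_train, train_idx, test_idx):
    """Classify each test sample by the class whose training samples it
    is most similar to on average -- a minimal stand-in for training a
    kernelized SVM on the precomputed similarity matrix."""
    preds = []
    for j in test_idx:
        scores = {c: K[j, train_idx[y_train == c]].mean()
                  for c in np.unique(y_train)}
        preds.append(max(scores, key=scores.get))
    return np.array(preds)
```

Because the learner only ever sees the n-by-n similarity matrix, the original feature dimension never enters the training problem, which is the point of the approach when samples are few and features are many.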
• Sensing is the process of deriving signals from the environment that allows artificial systems to interact with the physical world. Shannon's theorem specifies the maximum rate at which information can be acquired, but this upper bound is hard to achieve in many man-made systems. Biological visual systems, on the other hand, have highly efficient signal representation and processing mechanisms that allow precise sensing. In this work, we argue that redundancy is one of the critical characteristics underlying such superior performance. We show the architectural advantages of redundant sensing, including correction of mismatch error and significant precision enhancement. As a proof-of-concept demonstration, we have designed a heuristic-based analog-to-digital converter, a zero-dimensional quantizer. Monte Carlo simulations with the error probability distribution known a priori show that performance approaching the Shannon limit is feasible. In actual measurements without knowledge of the error distribution, we observe at least 2 bits of extra precision. The results may also help explain biological processes, including the dominance of binocular vision, the functional roles of fixational eye movements, and the structural mechanisms allowing hyperacuity.
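One classical way redundancy corrects mismatch error in a converter is sub-radix-2 weighting: each weight is small enough that later decisions can recover from an earlier wrong one. The greedy quantizer below is a generic textbook-style illustration of that redundancy argument, not the paper's heuristic ADC design:

```python
import numpy as np

def redundant_sar_quantize(x, weights):
    """Greedy successive-approximation quantization with redundant,
    descending (sub-radix-2) weights: because each weight is less than
    the sum of all smaller ones, an early decision error can still be
    corrected by later steps. Returns the bit decisions and the
    reconstructed value."""
    bits, acc = [], 0.0
    for w in weights:
        b = 1 if acc + w <= x else 0  # take the weight if it still fits
        bits.append(b)
        acc += b * w
    return bits, acc
```

With a radix below 2, the final reconstruction error stays on the order of the smallest weight even when the physical weights deviate from their nominal values, which is the mismatch-tolerance property the abstract refers to.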
• Antiferromagnetic MnPt exhibits a spin reorientation transition (SRT) as a function of temperature, and off-stoichiometric Mn-Pt alloys also display SRTs as a function of concentration. The magnetocrystalline anisotropy in these alloys is studied using first-principles calculations based on the coherent potential approximation and the disordered local moment method. The anisotropy is fairly small and sensitive to the variations in composition and temperature due to the cancellation of large contributions from different parts of the Brillouin zone. Concentration and temperature-driven SRTs are found in reasonable agreement with experimental data. Contributions from specific band-structure features are identified and used to explain the origin of the SRTs.
• We prove a number of statistical properties of Hecke coefficients for unitary cuspidal representations on $\operatorname{GL}(2)$ over number fields (unconditionally) and on $\operatorname{GL}(n)$ over number fields (conditionally, either assuming the Ramanujan conjecture, or the functoriality of $\pi\otimes\pi^\vee$). Using partial bounds on Hecke coefficients, properties of Rankin-Selberg $L$-functions, and instances of Langlands functoriality, we obtain bounds on the set of places where (linear combinations of) Hecke coefficients are bounded above (or below). We furthermore prove a number of consequences: we obtain an improved answer to a question of Serre about the occurrence of large Hecke eigenvalues of Maass forms ($|a_p|>1$ for density at least $0.00135$ set of primes), we prove the existence of negative Hecke coefficients over arbitrary number fields, and we obtain distributional results on the Hecke coefficients $a_v$ when $v$ varies in certain congruence or Galois classes. E.g., if $E$ is an elliptic curve without CM we show that $a_p(E)<0$ for a density $\geq \frac{1}{8}$ of primes $p\equiv a\pmod{n}$, or density $\geq \frac{1}{16}$ of primes of the form $p=m^2+27n^2$.
• We introduce a multiscale formalism which combines a time-dependent nonequilibrium Green function (TD-NEGF) algorithm, scaling linearly in the number of time steps and describing conduction electrons quantum-mechanically in the presence of time-dependent fields of arbitrary strength or frequency, with a classical description of the dynamics of local magnetic moments based on the Landau-Lifshitz-Gilbert (LLG) equation. Our TD-NEGF+LLG approach can be applied to a variety of problems where current-driven spin torque induces the dynamics of magnetic moments as the key resource for next-generation spintronics. Previous approaches to such nonequilibrium many-body systems have neglected the noncommutativity of the quantum Hamiltonian of conduction electrons at different times and, therefore, the impact of time-dependent magnetic moments on the electrons, which can pump spin and charge currents that, in turn, self-consistently affect the dynamics of the magnetic moments themselves. Using a magnetic domain wall (DW) as an example, we predict that its motion pumps time-dependent spin and charge currents (on top of the injected DC currents driving the DW motion); converting these spin currents into an AC voltage via the inverse spin Hall effect offers a tool to precisely track the DW position along a magnetic nanowire.
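The classical half of such a scheme, the LLG dynamics of a single local moment, can be sketched as one explicit integration step. The quantum TD-NEGF torque input is omitted here (the effective field B would carry it in the full scheme), and the gamma and alpha values are illustrative:

```python
import numpy as np

def llg_step(m, B, dt, gamma=1.0, alpha=0.1):
    """One explicit step of the Landau-Lifshitz-Gilbert equation in its
    Landau-Lifshitz form,
        dm/dt = -gamma/(1+alpha^2) * (m x B + alpha * m x (m x B)),
    where m is a unit magnetic moment, B the effective field, and alpha
    the Gilbert damping. The moment is renormalized after the step to
    preserve |m| = 1."""
    pre = -gamma / (1.0 + alpha ** 2)
    mxB = np.cross(m, B)
    dm = pre * (mxB + alpha * np.cross(m, mxB))
    m_new = m + dt * dm
    return m_new / np.linalg.norm(m_new)
```

Iterating this step makes the moment precess around B while the damping term relaxes it toward the field direction; in the full TD-NEGF+LLG loop, the spin torque computed from the electrons would be fed back into B at every step.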
• Old and novel finite difference schemes, using the Backward Differentiation Formula (BDF), are studied for the approximation of one-dimensional nonlinear diffusion equations with an obstacle term, of the form $\min(v_t - a(t,x) v_{xx} + b(t,x) v_x + r(t,x) v,\; v - \varphi(t,x)) = f(t,x)$. A new unconditional stability result is obtained for one of the schemes, which is second-order consistent in both space and time. Our study considers in particular the "generic" case when $v_{xx}$ is bounded but $x \rightarrow v_{xx}(t,x)$ may have isolated discontinuities. Numerical examples show second-order convergence in both space and time, unconditionally on the ratio of the mesh steps. An $L^2$ error estimate of order $\frac{1}{2}$ is furthermore obtained. An application to the American option problem in mathematical finance is given and used throughout the paper. A Crank-Nicolson finite difference scheme is also revisited to better explain its behavior, which may switch from second order to first order depending on the mesh parameters. In the analysis, an equivalence of the obstacle equation with a Hamilton-Jacobi-Bellman equation is also discussed in the case when there is no time dependency in the coefficients. We also consider two academic problems with explicit solutions for parabolic equations with an obstacle term, in order to study the relevance of the proposed schemes.
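A simplified time step for this kind of obstacle problem is backward Euler for the linear part followed by projection onto the obstacle. This splitting is only first-order accurate and is not the BDF2 scheme analyzed in the paper, but it shows how the $\min(\cdot,\, v - \varphi)$ structure is enforced numerically (constant coefficients a, b, r and Dirichlet boundaries are assumed):

```python
import numpy as np

def obstacle_step(v, phi, a, b, r, f, dx, dt):
    """One implicit (backward Euler) step for
        v_t - a v_xx + b v_x + r v = f
    on a uniform grid with fixed Dirichlet boundary values, followed by
    projection onto the obstacle v >= phi. Central differences are used
    for v_x and v_xx."""
    n = len(v)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = -a * dt / dx**2 - b * dt / (2 * dx)
        A[i, i]     = 1 + 2 * a * dt / dx**2 + r * dt
        A[i, i + 1] = -a * dt / dx**2 + b * dt / (2 * dx)
    A[0, 0] = A[-1, -1] = 1.0  # keep boundary values fixed
    rhs = v + dt * f
    rhs[0], rhs[-1] = v[0], v[-1]
    v_new = np.linalg.solve(A, rhs)
    return np.maximum(v_new, phi)  # enforce the obstacle constraint
```

For the American option problem mentioned above, phi would be the payoff function; the projection step is what keeps the discrete solution above the early-exercise barrier.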
• We introduce a novel generative formulation of deep probabilistic models implementing "soft" constraints on the dynamics of the functions they can model. In particular we develop a flexible methodological framework where the modeled functions and derivatives of a given order are subject to inequality or equality constraints. We characterize the posterior distribution over model and constraint parameters through stochastic variational inference techniques. As a result, the proposed approach allows for accurate and scalable uncertainty quantification of predictions and parameters. We demonstrate the application of equality constraints in the challenging problem of parameter inference in ordinary differential equation models, while we showcase the application of inequality constraints on monotonic regression on count data. The proposed approach is extensively tested in several experimental settings, leading to highly competitive results in challenging modeling applications, while offering high expressiveness, flexibility and scalability.
• We experimentally demonstrate, for the first time, DDoS mitigation in QKD-based networks utilizing a software-defined network application. Successful quantum-secured link allocation is achieved after a DDoS attack, based on real-time monitoring of quantum parameters.
• In order to avoid unacceptable $\mu$-distortions inconsistent with observational data on the Cosmic Microwave Background, Primordial Black Holes (PBHs) must be less massive than $10^{12} M_{\odot}$, only slightly above the highest black hole mass yet observed. This proximity leads us to posit that all supermassive black holes originate as PBHs.
• We present solutions of magnetized accretion flows onto a compact object with a hard surface, such as a neutron star. The magnetic field of the central star is assumed to be dipolar, with the magnetic axis aligned with the rotation axis of the star. We use an equation of state for the accreting fluid in which the adiabatic index depends on the temperature and composition of the flow, and we include cooling processes such as bremsstrahlung and cyclotron cooling. We find all possible accretion solutions. All accretion solutions terminate with a shock very near the stellar surface, and the height of this primary shock does not vary much with either the spin period or the Bernoulli parameter of the flow, although the strength of the shock may vary with the period. For a moderately rotating central star, multiple sonic points may form in the flow, and therefore a second shock, far away from the stellar surface, may also form. However, the second shock is much weaker than the primary one near the surface. We find that if the rotation period is below a certain value $P_{\min}$, then multiple critical points or multiple shocks are not possible, and $P_{\min}$ depends upon the composition of the flow. We also find that cooling dominates after the shock and that the cyclotron and bremsstrahlung cooling processes should be included to obtain a consistent accretion solution.

serfati philippe Feb 16 2018 10:57 UTC

+On (3 and more, 2008-13-14..) papers of bourgain etal (and their numerous descendants) on =1/ (t-static) illposednesses for the nd incompressible euler equations (and nse) and +- critical spaces, see possible counterexamples constructed on my nd shear flows, pressureless (shockless) solutions of in

...(continued)
serfati philippe Feb 15 2018 13:29 UTC

on transport and continuity equations with regular speed out of an hypersurface, and on it, having 2 relative normal components with the same punctual sign (possibly varying) and better unexpected results on solutions and jacobians etc, see (https://www.researchgate.net/profile/Philippe_Serfati), pa

...(continued)
Beni Yoshida Feb 13 2018 19:53 UTC

This is not a direct answer to your question, but may give some intuition to formulate the problem in a more precise language. (And I simplify the discussion drastically). Consider a static slice of an empty AdS space (just a hyperbolic space) and imagine an operator which creates a particle at some

...(continued)
Abhinav Deshpande Feb 10 2018 15:42 UTC

I see. Yes, the epsilon ball issue seems to be a thorny one in the prevalent definition, since the gate complexity to reach a target state from any of a fixed set of initial states depends on epsilon, and not in a very nice way (I imagine that it's all riddled with discontinuities). It would be inte

...(continued)
Elizabeth Crosson Feb 10 2018 05:49 UTC

Thanks for the correction Abhinav, indeed I meant that the complexity of |psi(t)> grows linearly with t.

Producing an arbitrary state |phi> exactly is also too demanding for the circuit model, by the well-known argument that given any finite set of gates, the set of states that can be reached i

...(continued)