# Top arXiv papers

• We study symmetry-enriched topological order in two-dimensional tensor network states by using graded matrix product operator algebras to represent symmetry-induced domain walls. A close connection to the theory of graded unitary fusion categories is established. Tensor network representations of the topological defect superselection sectors are constructed for all domain walls. The emergent symmetry-enriched topological order is extracted from these representations, including the symmetry action on the underlying anyons. Dual phase transitions, induced by gauging a global symmetry and by condensing a bosonic subtheory, are analyzed, and the relationship between the topological orders on either side of the transition is derived. Several examples are worked through explicitly.
• We theoretically and experimentally investigate a strong uncertainty relation valid for any $n$ unitary operators, which implies the standard uncertainty relation as a special case, and which can be written in terms of geometric phases. It is saturated by every pure state of any $n$-dimensional quantum system, generates a tight overlap uncertainty relation for the transition probabilities of any $n+1$ pure states, and gives an upper bound for the out-of-time-order correlation function. We test these uncertainty relations experimentally for photonic polarisation qubits, including the minimum uncertainty states of the overlap uncertainty relation, via interferometric measurements of generalised geometric phases.
• Contextuality is a necessary resource for universal quantum computation and non-contextual quantum mechanics can be simulated efficiently by classical computers in many cases. Orders of Planck's constant, $\hbar$, can also be used to characterize the classical-quantum divide by expanding quantities of interest in powers of $\hbar$: all orders higher than $\hbar^0$ can be interpreted as quantum corrections to the order $\hbar^0$ term. We show that contextual measurements in finite-dimensional systems have formulations within the Wigner-Weyl-Moyal (WWM) formalism that require higher than order $\hbar^0$ terms to be included in order to violate the classical bounds on their expectation values. As a result, we show that contextuality as a resource is equivalent to orders of $\hbar$ as a resource within the WWM formalism. This explains why qubits can only exhibit state-independent contextuality under Pauli observables as in the Peres-Mermin square while odd-dimensional qudits can also exhibit state-dependent contextuality. In particular, we find that qubit Pauli observables lack an order $\hbar^0$ contribution in their Weyl symbol and so exhibit contextuality regardless of the state being measured. On the other hand, odd-dimensional qudit observables generally possess non-zero order $\hbar^0$ terms, and higher, in their WWM formulation, and so exhibit contextuality depending on the state measured: odd-dimensional qudit states that exhibit measurement contextuality have an order $\hbar^1$ contribution that allows for the violation of classical bounds while states that do not exhibit measurement contextuality have insufficiently large order $\hbar^1$ contributions.
• We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman's construction of fault tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of $\alpha$-percolation: for a random subset $W$ of vertices of a given graph, we consider the size of the largest connected $\alpha$-subset of $W$, where $X$ is an $\alpha$-subset of $W$ if $|X \cap W| \geq \alpha |X|$.
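The $\alpha$-subset notion above is concrete enough to illustrate directly. Below is a hedged brute-force sketch (Python; all function names are hypothetical) that computes the size of the largest connected $\alpha$-subset of a small path graph, the quantity the percolation analysis controls. The paper's actual setting of random $W$ on expander graphs is not reproduced here.

```python
from itertools import combinations

def is_connected(S, edges):
    """Check that the vertex set S induces a connected subgraph."""
    S = set(S)
    if not S:
        return False
    start = next(iter(S))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for a, b in edges:
            for u, w in ((a, b), (b, a)):
                if u == v and w in S and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen == S

def largest_connected_alpha_subset(W, vertices, edges, alpha):
    """Brute-force the size of the largest connected X with |X ∩ W| >= alpha|X|."""
    best = 0
    for r in range(1, len(vertices) + 1):
        for X in combinations(vertices, r):
            Xs = set(X)
            if is_connected(Xs, edges) and len(Xs & W) >= alpha * len(Xs):
                best = r  # sizes are scanned in increasing order
    return best

# Path graph 0-1-2-3-4; W marks the "corrupted" vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
```

For $\alpha = 1/2$ the whole path qualifies as an $\alpha$-subset of $W = \{0, 1, 3\}$, while for $\alpha = 0.9$ only small subsets heavily contained in $W$ survive.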
• This review article is devoted to the interplay between frustrated magnetism and quantum critical phenomena, covering both theoretical concepts and ideas as well as recent experimental developments in correlated-electron materials. The first part deals with local-moment magnetism in Mott insulators and the second part with frustration in metallic systems. In both cases, frustration can either induce exotic phases accompanied by exotic quantum critical points or lead to conventional ordering with unconventional crossover phenomena. In addition, the competition of multiple phases inherent to frustrated systems can lead to multi-criticality.
• In this paper certain Chow weight structures on the "big" triangulated motivic categories $DM_R^{eff}\subset DM_R$ are defined in terms of motives of all smooth varieties over the base field. This definition allows us to study basic properties of these weight structures without applying resolution of singularities; thus we do not have to assume that the coefficient ring $R$ contains $1/p$ in the case where the characteristic $p$ of the base field is positive. Moreover, in the case where $R$ satisfies the latter assumption our weight structures are "compatible" with the weight structures that were defined in previous papers in terms of Chow motives. The results of this article yield a certain Chow-weight filtration (also) on the $p$-adic cohomology of motives and smooth varieties.
• (Nov 23 2017, cs.NA, arXiv:1711.08453v1) In this paper, we derive a family of fast and stable algorithms for multiplying and inverting $n \times n$ Pascal matrices that run in $O(n \log^2 n)$ time and are closely related to De Casteljau's algorithm for Bézier curve evaluation. These algorithms use a recursive factorization of the triangular Pascal matrices and improve upon the cripplingly unstable $O(n \log n)$ fast Fourier transform-based algorithms, which involve a Toeplitz matrix factorization. We conduct numerical experiments that establish the speed and stability of our algorithm, as well as the poor performance of the Toeplitz factorization algorithm. As an example, we show how our formulation relates to Bézier curve evaluation.
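For orientation, a minimal Python sketch of the objects involved: the lower-triangular Pascal matrix and its closed-form signed inverse, $(P^{-1})_{ij} = (-1)^{i+j}\binom{i}{j}$. This is exact integer arithmetic for small $n$, not the paper's fast $O(n \log^2 n)$ algorithm, and the helper names are hypothetical.

```python
from math import comb

def pascal_lower(n):
    """Lower-triangular Pascal matrix: P[i][j] = C(i, j) for j <= i."""
    return [[comb(i, j) if j <= i else 0 for j in range(n)] for i in range(n)]

def pascal_lower_inv(n):
    """Closed-form inverse: alternating-sign binomial coefficients."""
    return [[(-1) ** (i + j) * comb(i, j) if j <= i else 0 for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    """Plain O(n^3) matrix product, enough to sanity-check the inverse."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

Multiplying `pascal_lower(n)` by `pascal_lower_inv(n)` returns the identity exactly, since all entries are integers.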
• Owing to data-intensive large-scale applications, distributed computation systems have gained significant recent interest, due to their ability to run such tasks over a large number of commodity nodes in a time-efficient manner. One of the major bottlenecks that adversely impacts time efficiency is the computational heterogeneity of distributed nodes, which often limits the task completion time to that of the slowest worker. In this paper, we first present a lower bound on the expected computation time based on the work-conservation principle. We then present our work exchange approach to combat the latency problem, in which faster workers can be reassigned leftover computations that were originally assigned to slower workers. We present two variations of the work exchange approach: a) when the computational heterogeneity is known a priori; and b) when heterogeneity is unknown and is estimated in an online manner while assigning tasks to distributed workers. As a baseline, we also present and analyze an optimized Maximum Distance Separable (MDS) coded distributed computation scheme over heterogeneous nodes. Simulation results compare the proposed work exchange approach, the baseline MDS coded scheme, and the lower bound obtained via the work-conservation principle. We show that the work exchange scheme achieves a computation time very close to the lower bound, with limited coordination and communication overhead, even when knowledge of the heterogeneity levels is not available.
• The estimation of optimal treatment regimes is of considerable interest to precision medicine. In this work, we propose a causal $k$-nearest neighbor method to estimate the optimal treatment regime. The method is rooted in the framework of causal inference, and estimates the causal treatment effects within the nearest neighborhood. Although the method is simple, it possesses nice theoretical properties. We show that the causal $k$-nearest neighbor regime is universally consistent: it will eventually learn the optimal treatment regime as the sample size increases. We also establish its convergence rate. However, the causal $k$-nearest neighbor regime may suffer from the curse of dimensionality, i.e., its performance deteriorates as the dimensionality increases. To alleviate this problem, we develop an adaptive causal $k$-nearest neighbor method that performs metric selection and variable selection simultaneously. The performance of the proposed methods is illustrated in simulation studies and in an analysis of a chronic depression clinical trial.
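The core idea admits a short hedged sketch (Python; hypothetical names, not the authors' implementation): estimate the local causal effect at a point as the difference between the mean outcomes of the $k$ nearest treated and the $k$ nearest control units, and recommend the treatment whose local mean outcome is larger.

```python
import math
import random

def causal_knn_regime(x, data, k=5):
    """Recommend treatment 1 if the local causal effect estimate at x is
    positive, else 0.  data: list of (features, treatment in {0,1}, outcome)."""
    def knn_mean(arm):
        pts = sorted((p for p in data if p[1] == arm),
                     key=lambda p: math.dist(x, p[0]))[:k]
        return sum(p[2] for p in pts) / len(pts)
    effect = knn_mean(1) - knn_mean(0)  # treated minus control, locally
    return 1 if effect > 0 else 0

# Synthetic trial: treatment helps only when the covariate exceeds 0.5.
random.seed(0)
data = []
for _ in range(400):
    x = [random.random()]
    t = random.randint(0, 1)
    y = t * (1.0 if x[0] > 0.5 else -1.0) + random.gauss(0, 0.1)
    data.append((x, t, y))
```

On this toy trial the estimated regime matches the true rule away from the decision boundary at 0.5.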
• We study relativistic hydrodynamics with chiral anomaly and dynamical electromagnetic fields, Chiral MagnetoHydroDynamics (CMHD). We formulate the CMHD as a low-energy effective theory based on a derivative expansion. We demonstrate that the modification of ordinary MagnetoHydroDynamics (MHD) due to the chiral anomaly can be obtained from the second law of thermodynamics and is tied to the chiral magnetic effect with the universal coefficient. When the axial charge imbalance becomes larger than a critical value, a new type of collective gapless excitation appears in the CMHD as a result of the interplay among magnetic field, flow velocity, and chiral anomaly; we call it the "Chiral MagnetoHelical Mode" (CMHM). These modes carry definite magnetic and fluid helicities and will either grow exponentially or dissipate in time, depending on the relative sign between their helicity and the axial charge density. The presence of exponentially growing CMHM indicates a hydrodynamic instability.
• The four-loop Sudakov form factor in maximal super Yang-Mills theory is analysed in detail. It is shown explicitly how to construct a basis of integrals that have a uniformly transcendental expansion in the dimensional regularisation parameter, further elucidating the number-theoretic properties of Feynman integrals. The physical form factor is expressed in this basis for arbitrary colour factor. In the nonplanar sector the required integrals are integrated numerically using a mix of sector-decomposition and Mellin-Barnes representation methods. Both the cusp as well as the collinear anomalous dimension are computed. The results show explicitly the violation of quadratic Casimir scaling at the four-loop order. A thorough analysis concerning the reliability of reported numerical uncertainties is carried out.
• Eigenvector-based centrality measures are among the most popular centrality measures in network science. The underlying idea is intuitive and the mathematical description is extremely simple in the framework of standard, mono-layer networks. Moreover, several efficient computational tools are available for their computation. Moving up in dimensionality, several efforts have been made in the past to describe an eigenvector-based centrality measure that generalizes Bonacich index to the case of multiplex networks. In this work, we propose a new definition of eigenvector centrality that relies on the Perron eigenvector of a multi-homogeneous map defined in terms of the tensor describing the network. We prove that existence and uniqueness of such centrality are guaranteed under very mild assumptions on the multiplex network. Extensive numerical studies are proposed to test the newly introduced centrality measure and to compare it to other existing eigenvector-based centralities.
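As a point of reference, the classical Bonacich index that this work generalizes is the Perron eigenvector of the adjacency matrix, computable by power iteration. The sketch below (Python; hypothetical names) handles only the mono-layer case; the paper's multi-homogeneous map on the multiplex tensor is beyond this illustration.

```python
def perron_vector(A, iters=200):
    """Power iteration for the Perron eigenvector of a nonnegative,
    irreducible matrix A (list of rows), normalized to sum to 1."""
    n = len(A)
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)
        x = [v / s for v in y]
    return x

# Triangle 0-1-2 with a pendant vertex 3 attached to 1:
# vertex 1 should come out as the most central.
A = [[0, 1, 1, 0],
     [1, 0, 1, 1],
     [1, 1, 0, 0],
     [0, 1, 0, 0]]
centrality = perron_vector(A)
```

The odd cycle makes the matrix primitive, so the iteration converges; on a bipartite graph one would shift by the identity first.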
• We present an image-based VIrtual Try-On Network (VITON) without using 3D information in any form, which seamlessly transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy. Conditioned upon a new clothing-agnostic yet descriptive person representation, our framework first generates a coarse synthesized image with the target clothing item overlaid on that same person in the same pose. We further enhance the initial blurry clothing area with a refinement network. The network is trained to learn how much detail to utilize from the target clothing item, and where to apply it to the person, in order to synthesize a photo-realistic image in which the target item deforms naturally with clear visual patterns. Experiments on our newly collected Zalando dataset demonstrate its promise in the image-based virtual try-on task over state-of-the-art generative models.
• We study faster algorithms for producing the minimum degree ordering used to speed up Gaussian elimination. This ordering is based on viewing the non-zero elements of a symmetric positive definite matrix as edges of an undirected graph, and aims at reducing the additional non-zeros (fill) in the matrix by repeatedly removing the vertex of minimum degree. It is one of the most widely used primitives for pre-processing sparse matrices in scientific computing. Our result is in part motivated by the observation that sub-quadratic time algorithms for finding min-degree orderings are unlikely, assuming the strong exponential time hypothesis (SETH). This provides justification for the lack of provably efficient algorithms for generating such orderings, and leads us to study speedups via degree-restricted algorithms as well as approximations. Our two main results are: (1) an algorithm that produces a min-degree ordering whose maximum degree is bounded by $\Delta$ in $O(m \Delta \log^3{n})$ time, and (2) an algorithm that finds a $(1 + \epsilon)$-approximate marginal min-degree ordering in $O(m \log^{5}n\, \epsilon^{-2})$ time. Both of our algorithms rely on a host of randomization tools related to the $\ell_0$-estimator of [Cohen 97]. A key technical issue for the final nearly-linear time algorithm is the dependence of the removed vertices on the randomness in the data structures. To address this, we provide a method for generating a pseudo-deterministic access sequence, which then allows the incorporation of data structures that only work under the oblivious adversary model.
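For concreteness, here is a quadratic-time reference implementation of the min-degree ordering itself (Python; hypothetical names): repeatedly eliminate a minimum-degree vertex and add fill edges among its remaining neighbours. The paper's contribution is doing this approximately in near-linear time, which this sketch does not attempt.

```python
def min_degree_ordering(adj):
    """Naive minimum-degree ordering.  adj: dict mapping vertex -> set of
    neighbours.  At each step, eliminate a vertex of minimum degree and
    connect its remaining neighbours into a clique (fill edges)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # defensive copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # vertex of minimum degree
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u in nbrs:            # fill: neighbours of v become a clique
            for w in nbrs:
                if u != w:
                    adj[u].add(w)
        order.append(v)
    return order
```

On a star graph the leaves (degree 1) are eliminated before the centre, and no fill is ever created; on a cycle, the first elimination adds a chord.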
• In this paper, we develop the theory of Perelman's $W$-functional on manifolds with isolated conical singularities. In particular, we show that the infimum of the $W$-functional over a certain weighted Sobolev space on manifolds with isolated conical singularities is finite, and that the minimizer exists, provided the scalar curvature satisfies a certain condition near the singularities. We also obtain an asymptotic order for the minimizer near the singularities.
• We propose a Las Vegas transformation of Markov Chain Monte Carlo (MCMC) estimators of Restricted Boltzmann Machines (RBMs). We denote our approach Markov Chain Las Vegas (MCLV). MCLV gives statistical guarantees in exchange for random running times. MCLV uses a stopping set built from the training data and a maximum number of Markov chain steps K (referred to as MCLV-K). We present an MCLV-K gradient estimator (LVS-K) for RBMs and explore the correspondence and differences between LVS-K and Contrastive Divergence (CD-K), with LVS-K significantly outperforming CD-K in training RBMs on the MNIST dataset, indicating that MCLV is a promising direction for learning generative models.
• We consider the possibility that the primordial curvature perturbation is direction-dependent. To first order this is parameterised by a quadrupolar modulation of the power spectrum and results in statistical anisotropy of the cosmic microwave background, which can be quantified using the bipolar spherical harmonic representation. We compute these for the Planck Release 2 SMICA map and use them to infer the quadrupole modulation of the primordial power spectrum which, going beyond previous work, we allow to be scale-dependent. Uncertainties are estimated from Planck FFP9 simulations. Consistent with the Planck collaboration's findings, we find no evidence for a constant quadrupole modulation, nor one scaling with wave number as a power law. However our non-parametric reconstruction suggests several spectral features. When a constant quadrupole modulation is fitted to data limited to the wave number range $0.005 \leq k/\mathrm{Mpc}^{-1} \leq 0.008$, we find that its preferred direction is aligned with the cosmic hemispherical asymmetry. To determine the statistical significance we construct two different test statistics and test them on our reconstructions from data, against reconstructions of realisations of noise only. With a test statistic sensitive only to the amplitude of the modulation, the reconstructions are unusual at $2.5\sigma$ significance in the full wave number range, but at $2.2\sigma$ when limited to the intermediate wave number range $0.008 \leq k/\mathrm{Mpc}^{-1} \leq 0.074$. With the second test statistic, sensitive also to direction, the reconstructions are unusual with $4.6\sigma$ significance, dropping to $2.7 \sigma$ for the intermediate wave number range. Our approach is easily generalised to include other data sets such as polarisation, large-scale structure and forthcoming 21-cm line observations which will enable these anomalies to be investigated further.
• We numerically performed wave dynamical simulations based on the Maxwell-Bloch (MB) model for a quadrupole-deformed microcavity laser with spatially selective pumping. We demonstrate the appearance of an asymmetric lasing mode whose spatial pattern violates both the x- and y-axes mirror symmetries of the cavity. Dynamical simulations revealed that a lasing mode consisting of a clockwise or counterclockwise rotating-wave component is a stable stationary solution of the MB model. From the results of a passive-cavity mode analysis, we interpret these asymmetric rotating-wave lasing modes by the locking of four nearly degenerate passive-cavity modes. For comparison, we carried out simulations for a uniform pumping case and found a different locking rule for the nearly degenerate modes. Our results demonstrate a nonlinear dynamical mechanism for the formation of a lasing mode that adjusts its pattern to a pumped area.
• (Nov 23 2017, math.SP, arXiv:1711.08439v1) We investigate the spectrum of the three-dimensional Dirichlet Laplacian in a prototypal infinite polyhedral layer, formed by three perpendicular quarter-plane walls of constant width joining each other. Such a domain contains six edges and two corners. It is a canonical example of what is called a non-smooth conical layer, and we name it after Fichera because, near the non-convex corner, it coincides with the famous Fichera cube that illustrates the interaction between edge and corner singularities. We show that the essential spectrum of the Laplacian on such a domain is a half-line and we characterize its minimum as the first eigenvalue of the two-dimensional Laplacian on a broken guide. By a Born-Oppenheimer type strategy, we also prove that its discrete spectrum is finite and that a lower bound is given by the ground state of a special Sturm-Liouville operator. By finite element computations, we exhibit exactly one eigenvalue below the essential spectrum threshold, leaving a relative gap of 3%. We extend these results to a variant of the Fichera layer with rounded edges (for which we find a very small relative gap of 0.5%), and to a three-dimensional cross where the three walls are thickened full planes.
• The massive galaxy cluster "El Gordo" (ACT-CL J0102--4915) is a rare merging system with a high collision speed suggested by multi-wavelength observations and theoretical modeling. Zhang et al. (2015) propose two types of mergers, a nearly head-on merger and an off-axis merger with a large impact parameter, to reproduce most of the observational features of the cluster, by using numerical simulations. The different merger configurations of the two models result in different gas motion in the simulated clusters. In this paper, we predict the kinetic Sunyaev-Zel'dovich (kSZ) effect, the relativistic correction of the thermal Sunyaev-Zel'dovich (tSZ) effect, and the X-ray spectrum of this cluster, based on the two proposed models. We find that (1) the amplitudes of the kSZ effect resulting from the two models are both on the order of $\Delta T/T\sim10^{-5}$, but their morphologies are different, tracing the different line-of-sight velocity distributions of the systems; (2) the relativistic correction of the tSZ effect around $240 {\rm\,GHz}$ can possibly be used to constrain the temperature of the hot electrons heated by the shocks; and (3) the shift between the X-ray spectral lines emitted from different regions of the cluster can be significantly different in the two models. The shift and the line broadening can be up to $\sim 25{\rm\,eV}$ and $50{\rm\,eV}$, respectively. We expect that future observations of the kSZ effect and the X-ray spectral lines (e.g., by ALMA, XARM) will provide a strong constraint on the gas motion and the merger configuration of ACT-CL J0102--4915.
• The linear dilaton geometry in five dimensions, rediscovered recently in the continuum limit of the clockwork model, may offer a solution to the hierarchy problem which is qualitatively different from other extra-dimensional scenarios and leads to distinctive signatures at the LHC. We discuss the structure of the theory, in particular aspects of naturalness and UV completion, and then explore its phenomenology, suggesting novel strategies for experimental searches. In particular, we propose to analyze the diphoton and dilepton invariant mass spectra in Fourier space in order to identify an approximately periodic structure of resonant peaks. Among other signals, we highlight displaced decays from resonantly-produced long-lived states and high-multiplicity final states from cascade decays of excited gravitons.
• (Nov 23 2017, math.CO cs.CG math.GT, arXiv:1711.08436v1) We prove that for every $d\geq 2$, deciding if a pure, $d$-dimensional, simplicial complex is shellable is NP-hard, hence NP-complete. This resolves a question raised, e.g., by Danaraj and Klee in 1978. Our reduction also yields that for every $d \ge 2$ and $k \ge 0$, deciding if a pure, $d$-dimensional, simplicial complex is $k$-decomposable is NP-hard. For $d \ge 3$, both problems remain NP-hard when restricted to contractible pure $d$-dimensional complexes.
• Pasterski, Shao and Strominger have recently proposed that massless scattering amplitudes can be mapped to correlators on the celestial sphere at infinity via a Mellin transform. We apply this prescription to arbitrary $n$-point tree-level gluon amplitudes. The Mellin transforms of MHV amplitudes are given by generalized hypergeometric functions on the Grassmannian $Gr(4,n)$, while generic non-MHV amplitudes are given by more complicated Gelfand $A$-hypergeometric functions.
• Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, $\Sigma m_\nu$, through Bayesian inference. Because these constraints depend on the choice for the prior probability $\pi(\Sigma m_\nu)$, we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix $M_\nu$, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over $M_\nu$, and by including the known squared mass splittings, we predict a theoretical probability distribution over $\Sigma m_\nu$ that we interpret as a Bayesian prior probability $\pi(\Sigma m_\nu)$. We find that $\pi(\Sigma m_\nu)$ peaks close to the smallest $\Sigma m_\nu$ allowed by the measured mass splittings, roughly $0.06 \, {\rm eV}$ ($0.1 \, {\rm eV}$) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors $\pi(\Sigma m_\nu)$ allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. We present fitting functions for $\pi(\Sigma m_\nu)$, which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
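The eigenvalue-repulsion mechanism invoked here can be illustrated with a toy Monte Carlo (Python; this is a generic $2\times 2$ GOE-style example, not the paper's $3\times 3$ neutrino mass matrices or its actual priors): for a random real symmetric matrix $\begin{pmatrix} a & b \\ b & d \end{pmatrix}$, the eigenvalue gap $\sqrt{(a-d)^2 + 4b^2}$ vanishes only when two independent quantities vanish simultaneously, so nearly degenerate spectra are strongly suppressed.

```python
import math
import random

def eigenvalue_gap_2x2(rng):
    """Gap between the two eigenvalues of [[a, b], [b, d]] with
    Gaussian entries; closed form: sqrt((a - d)^2 + 4 b^2)."""
    a, d = rng.gauss(0, 1), rng.gauss(0, 1)
    b = rng.gauss(0, 1 / math.sqrt(2))
    return math.sqrt((a - d) ** 2 + 4 * b ** 2)

rng = random.Random(1)
gaps = [eigenvalue_gap_2x2(rng) for _ in range(20000)]
frac_small = sum(g < 0.1 for g in gaps) / len(gaps)
# near-degenerate spectra are rare: frac_small is well below a percent
```

In the paper, the same repulsion is what pushes the prior $\pi(\Sigma m_\nu)$ toward the smallest sum allowed by the measured mass splittings.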
• Since the discovery of the first extrasolar planet more than twenty years ago, we have discovered more than three thousand planets orbiting stars other than the Sun. Current observational instruments (on board the Hubble Space Telescope, Spitzer, and on ground-based facilities) allowed the scientific community to obtain important information on the physical and chemical properties of these planets. However, for a more in-depth characterisation of these worlds, more powerful telescopes are needed. Thanks to the high sensitivity of their instruments, the next generation of space observatories (e.g. James Webb Space Telescope, ARIEL) will provide observations of unprecedented quality, allowing us to extract far more information than what was previously possible. Such high quality observations will provide constraints on theoretical models of exoplanet atmospheres and lead to a greater understanding of the physics and chemistry. Important modelling efforts have been carried out during the past few years, showing that numerous parameters and processes (such as the element abundances, temperature, mixing, etc.) are likely to affect the atmospheric composition of exoplanets and subsequently the observable spectra. In this manuscript, we review the different parameters that can influence the molecular composition of exoplanet atmospheres. We also consider future developments that are necessary to improve atmospheric models, driven by the need to interpret the available observations, and show how ARIEL is going to improve our view and characterisation of exoplanet atmospheres.
• We develop a Mellin transform framework which allows us to simultaneously analyze the four known exactly solvable 1+1 dimensional lattice polymer models: the log-gamma, strict-weak, beta, and inverse-beta models. Using this framework we prove the conjectured fluctuation exponents of the free energy and the polymer path for the stationary point-to-point versions of these four models. The fluctuation exponent for the polymer path was previously unproved for the strict-weak, beta, and inverse-beta models.
• We consider generalizations of classical function spaces obtained by requiring that a function holomorphic in $\Omega$ satisfy some property when we approach, from within $\Omega$, not the whole boundary, but only a part of it. These spaces, endowed with their natural topology, are Fréchet spaces. We prove some generic non-extendability results in such spaces, as well as generic nowhere differentiability on the corresponding part of the boundary of $\Omega$.
• Many new physics scenarios beyond the Standard Model often necessitate the existence of a (light) neutral scalar $H$, which might couple to the charged leptons in a flavor violating way, while evading all existing constraints. We show that such scalars could be effectively produced at future lepton colliders, either on-shell or off-shell depending on their mass, and induce lepton flavor violating (LFV) signals, i.e. $e^+ e^- \to \ell_\alpha^\pm \ell_\beta^\mp (+H)$ with $\alpha\neq \beta$. We find that a large parameter space of the scalar mass and the LFV couplings can be probed, well beyond the current low-energy constraints in the lepton sector. In particular, a scalar-loop induced explanation of the longstanding muon $g-2$ anomaly can be directly tested in the on-shell mode.
• Controlling femtosecond optical pulses with temporal precision better than one cycle of the carrier field has a profound impact on measuring and manipulating interactions between light and matter. We explore pulses that are carved from a continuous-wave laser via electro-optic modulation and realize the regime of sub-cycle optical control without a mode-locked resonator. Our ultrafast source, with a repetition rate of 10 GHz, is derived from an optical-cavity-stabilized laser and a microwave-cavity-stabilized electronic oscillator. Sub-cycle timing jitter of the pulse train is achieved by coherently linking the laser and oscillator through carrier-envelope phase stabilization enabled by a photonic-chip supercontinuum that spans up to 1.9 octaves across the near infrared. Moreover, the techniques we report are relevant for other ultrafast lasers with repetition rates up to 30 GHz and may allow stable few-cycle pulses to be produced by a wider range of sources.
• The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events, which so far have often computed probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein-Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
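A block-maximum estimator of this general kind can be sketched as follows (Python; Euler-Maruyama simulation of the Ornstein-Uhlenbeck process; function names hypothetical, and none of the paper's rare-event algorithms are used). With $q$ the fraction of blocks of duration $T_b$ whose maximum exceeds the threshold $a$, a standard estimate is $r(a) = -T_b / \ln(1 - q)$.

```python
import math
import random

def simulate_ou(n_steps, dt=0.01, theta=1.0, sigma=1.0, seed=0):
    """Euler-Maruyama discretization of dX = -theta X dt + sigma dW."""
    rng = random.Random(seed)
    x, traj = 0.0, []
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        traj.append(x)
    return traj

def return_time(traj, dt, block_len, threshold):
    """Block-maximum estimate r(a) = -T_b / ln(1 - q), with q the
    fraction of blocks whose maximum exceeds the threshold a."""
    blocks = [traj[i:i + block_len] for i in range(0, len(traj), block_len)]
    q = sum(max(b) > threshold for b in blocks) / len(blocks)
    if q == 0.0:
        return float('inf')   # threshold never reached in this sample
    if q == 1.0:
        return block_len * dt  # crude floor: every block exceeds
    return -block_len * dt / math.log(1.0 - q)
```

Return times grow steeply with the threshold, which is exactly what makes direct estimation for rare events so expensive and motivates the rare-event algorithms discussed in the abstract.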
• We present a general framework for the information backflow (IB) approach to Markovianity that not only includes a large number, if not all, of the IB prescriptions proposed so far, but is also equivalent to CP-divisibility for invertible evolutions. Following the common approach of IB, where monotonic decay of some physical property is taken as the definition of Markovianity, we propose, within our framework, a general description of what should be called a 'proper physicality quantifier' for defining Markovianity. We elucidate different properties of our framework and use it to show that the generalized trace-distance measure in dimension $2$, and the quantum mutual information for invertible dynamics in any dimension, serve as sufficient criteria for IB-Markovianity for a number of prescriptions suggested earlier in the literature.
• Given a matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ and a vector $b \in\mathbb{R}^{n}$, we show how to compute an $\epsilon$-approximate solution to the regression problem $\min_{x\in\mathbb{R}^{d}}\frac{1}{2} \|\mathbf{A} x - b\|_{2}^{2}$ in time $\tilde{O} ((n+\sqrt{d\cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$ where $\kappa_{\text{sum}}=\mathrm{tr}\left(\mathbf{A}^{\top}\mathbf{A}\right)/\lambda_{\min}(\mathbf{A}^{\top}\mathbf{A})$ and $s$ is the maximum number of non-zero entries in a row of $\mathbf{A}$. Our algorithm improves upon the previous best running time of $\tilde{O} ((n+\sqrt{n \cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$. We achieve our result through a careful combination of leverage score sampling techniques, proximal point methods, and accelerated coordinate descent. Our method not only matches the performance of previous methods, but further improves whenever leverage scores of rows are small (up to polylogarithmic factors). We also provide a non-linear generalization of these results that improves the running time for solving a broader class of ERM problems.
• For variational problems with $O(N)$-symmetry, the existence of several geometrically distinct solutions has been shown in previous articles by means of a group-theoretic approach. This was done by a careful choice of a family of subgroups $H_i \subset O(N)$ such that the fixed-point subspaces $E^{H_i} \subset E$ of the action in a corresponding functional space are linearly independent, then restricting the problem to each $E^{H_i}$ and using the Palais symmetry principle. In this work we give a thorough explanation of this approach, showing a correspondence between the equivalence classes of such subgroups, partial orthogonal flags in $\mathbb{R}^N$, and unordered partitions of the number $N$. By showing that the spaces of functions invariant with respect to different classes of groups are linearly independent, we prove that the number of series of geometrically distinct solutions obtained in this way grows exponentially in $N$, in contrast to the logarithmic and linear growth of earlier papers.
• We develop a general theory for the existence of extremal Kähler metrics of Poincaré type in the sense of Auvray, defined on the complement of a toric divisor of a polarized toric variety. In the case when the divisor is smooth, we obtain a list of necessary conditions which must be satisfied for such a metric to exist. Using the explicit methods of Apostolov-Calderbank-Gauduchon together with the computational approach of Sektnan, we show that on a Hirzebruch complex surface the necessary conditions are also sufficient. In particular, on such a complex surface the complement of the infinity section admits an extremal Kähler metric of Poincaré type whereas the complement of a fibre admits a complete ambitoric extremal Kähler metric which is not of Poincaré type.
• We prove \emph{optimal} improvements of the Hardy inequality on the hyperbolic space. Here, optimal means that the resulting operator is critical in the sense of [J. Funct. Anal. 266 (2014), pp. 4422-89], namely the associated inequality cannot be further improved. Such inequalities arise from more general, \emph{optimal} ones valid for the operator $P_{\lambda}:= -\Delta_{\mathbb{H}^N} - \lambda$, where $0 \leq \lambda \leq \lambda_{1}(\mathbb{H}^N)$ and $\lambda_{1}(\mathbb{H}^N)$ is the bottom of the $L^2$ spectrum of $-\Delta_{\mathbb{H}^N}$, a problem that had been studied in [J. Funct. Anal. 272 (2017), pp. 1661-1703] only for the operator $P_{\lambda_{1}(\mathbb{H}^N)}$. A different, critical and new inequality on $\mathbb{H}^N$, locally of Hardy type, is also shown. Such results are in fact more general, since they are shown on general Cartan-Hadamard manifolds under curvature assumptions, possibly depending on the point. Existence/nonexistence of extremals for the related Hardy-Poincaré inequalities is also proved using a concentration-compactness technique and a Liouville comparison theorem. As applications of our inequalities we obtain an improved Rellich inequality and derive a quantitative version of the Heisenberg-Pauli-Weyl uncertainty principle for the operator $P_\lambda$.
• In my master thesis, I investigated the chiral magnetic effect (CME) in the context of holography, focusing in particular on the impact of the chiral anomaly on transport properties and non-equilibrium behaviour in response to a holographic quench. Concretely, I considered a $U(1)_\text{A}\times U(1)_\text{V}$ Einstein-Maxwell bottom-up model consisting of two massless gauge fields, coupled by a Chern-Simons term, in five-dimensional AdS spacetime. The two gauge fields provide a time-dependent electric field and a static magnetic field parallel to it. As the response of the system to the quench, I investigated the electromagnetic current in the direction of the magnetic field, which is generated due to the CME. In the first part of the thesis, I characterised the initial response of the system, in a fixed Schwarzschild-AdS background, subjected to a 'fast' quench. The corresponding hyperbolic PDE is solved by means of a fully spectral code in space as well as in time; this was the first application of a fully spectral code within holography. In the case of 'fast' quenches, the system exhibits a universal scaling behaviour, independent of external parameters such as the strength of the anomaly and of the magnetic field. Depending on the quench and the external parameters, the late-time behaviour of the system shows in some cases long-lived oscillations in the current. Furthermore, I computed the quasi-normal modes of the system, including the backreaction of the matter fields on the background metric. It turns out that the long-lived oscillations appear only in the presence of the anomaly and can be traced back to the presence of Landau levels in the system. The results of my master thesis were partly published in arXiv:1607.06817; however, the thesis contains many interesting, so far unpublished, results and can be viewed as an extended version of the paper.
• Feature selection plays a critical role in data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that strike an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.
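The core Relief weight update that the survey builds on can be sketched in a few lines. This is a minimal binary-class version in the spirit of Kira and Rendell's original algorithm, not code from the paper: the names and the squared-distance choice are ours, and features are assumed pre-scaled to [0, 1].

```python
import random

def relief(X, y, n_iter=None, seed=0):
    """Minimal binary-class Relief sketch.

    A feature's weight rises when it separates an instance from its
    nearest miss (other class) more than from its nearest hit (same
    class) -- which is why Relief is sensitive to interactions without
    ever scoring feature combinations explicitly.
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    n_iter = n_iter or n
    W = [0.0] * d

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(n_iter):
        i = rng.randrange(n)
        hit = min((j for j in range(n) if j != i and y[j] == y[i]),
                  key=lambda j: dist(X[i], X[j]))
        miss = min((j for j in range(n) if y[j] != y[i]),
                   key=lambda j: dist(X[i], X[j]))
        for f in range(d):
            W[f] += (abs(X[i][f] - X[miss][f]) -
                     abs(X[i][f] - X[hit][f])) / n_iter
    return W
```

Positive weights mark informative features, negative weights mark noise; ReliefF and the later descendants surveyed here refine the neighbor selection and handle multi-class, noisy, and regression data.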
• We investigate two closely related nonparametric hypothesis testing problems. In the first problem (the existence problem), we test whether a testing data stream is generated by one of a set of composite distributions. In the second problem (the association problem), we test which one of multiple distributions generates a testing data stream. We assume that some distributions in the set are unknown, with only training sequences generated by the corresponding distributions available. For both problems, we construct generalized likelihood (GL) tests and characterize the error exponents of the maximum error probabilities. For the existence problem, we show that the error exponent is mainly captured by the Chernoff information between the set of composite distributions and the alternative distributions. For the association problem, we show that the error exponent is captured by the minimum Chernoff information between each pair of distributions, as well as by the KL divergences between the approximated distributions (via training sequences) and the true distributions. We also show that the ratio between the lengths of the training and testing sequences plays an important role in determining the error decay rate.
• The purpose of the present paper is to develop the inverse scattering transform for the nonlocal semi-discrete nonlinear Schrödinger equation (known as the Ablowitz-Ladik equation) with PT-symmetry. This includes: the eigenfunctions (Jost solutions) of the associated Lax pair, the scattering data, and the fundamental analytic solutions. In addition, the paper studies the spectral properties of the associated discrete Lax operator. Based on the formulated (additive) Riemann-Hilbert problem, the 1- and 2-soliton solutions for the nonlocal Ablowitz-Ladik equation are derived. Finally, the completeness relation for the associated Jost solutions is proved. Based on this, the expansion formula over the complete set of Jost solutions is derived. This allows one to interpret the inverse scattering transform as a generalised Fourier transform.
• Variational inequalities are an important mathematical tool for modelling free boundary problems that arise in different application areas. Due to the intricate nonsmooth structure of the resulting models, their analysis and optimization is a difficult task that has drawn the attention of researchers for several decades. In this paper we focus on a class of variational inequalities of the second kind, with a twofold purpose. First, we aim at giving a glance at some of the most prominent applications of these types of variational inequalities in mechanics, and the related analytical and numerical difficulties. Second, we consider optimal control problems constrained by these variational inequalities and provide a thorough discussion on the existence of Lagrange multipliers and the different types of optimality systems that can be derived for the characterization of local minima. The article ends with a discussion of the main challenges and future perspectives of this important problem class.
• Recurrent Backpropagation and Equilibrium Propagation are algorithms for fixed point recurrent neural networks which differ in their second phase. In the first phase, both algorithms converge to a fixed point which corresponds to the configuration where the prediction is made. In the second phase, Recurrent Backpropagation computes error derivatives whereas Equilibrium Propagation relaxes to another nearby fixed point. In this work we establish a close connection between these two algorithms. We show that, at every moment in the second phase, the temporal derivatives of the neural activities in Equilibrium Propagation are equal to the error derivatives computed iteratively in Recurrent Backpropagation. This work shows that it is not required to have a special network for the computation of error derivatives, and gives support to the hypothesis that, in biological neural networks, temporal derivatives of neural activities may code for error signals.
• For the first time we develop the gauge invariance of the supersymmetric Grassmannian sigma model $G(M,N)$. It is richer than its purely bosonic submodel, and we show how to use it to reduce some constant-curvature holomorphic solutions of the model to simpler expressions.
• We establish a finiteness property of the quantum K-ring of the complete flag manifold.
• Effective utilization of photovoltaic (PV) plants requires global solar radiation (GSR) forecasting models that are robust to weather variability. Random weather turbulence, coupled with the assumptions of the clear-sky model suggested by Hottel, poses significant challenges to parametric and non-parametric models in estimating the GSR conversion rate. Moreover, a decent GSR estimate requires a costly high-tech radiometer and expert-dependent instrument handling and measurements, which are subjective. We therefore develop a computer-aided monitoring (CAM) system that evaluates PV plant operation feasibility by employing smart grid past data analytics and deep learning. Our algorithm, SolarisNet, is a 6-layer deep neural network trained on data collected at two weather stations located near the Kalyani meteorological site, West Bengal, India. The daily GSR prediction performance of SolarisNet outperforms the existing state of the art, and we discuss its efficacy in inferring insights from past GSR data to comprehend daily and seasonal GSR variability, along with its competence for short-term forecasting.
• Word embeddings use vectors to represent words such that the geometry between vectors captures semantic relationships between the words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding can be leveraged to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 years of text data with the U.S. Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures global social shifts -- e.g., the women's movement in the 1960s and Asian immigration into the U.S. -- and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embeddings opens up a powerful new intersection between machine learning and quantitative social science.
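One simple association measure of the kind used in such analyses can be sketched as follows. This is a generic cosine-similarity score, not necessarily the paper's exact metric, and the vectors in the test are made up for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u)) *
           math.sqrt(sum(b * b for b in v)))
    return num / den

def bias_score(target, group_a, group_b):
    """Mean cosine similarity of a target word vector (e.g. an
    occupation) to group-A word vectors minus its mean similarity to
    group-B vectors.  Tracking this score across embeddings trained on
    different decades quantifies how the association shifted over time.
    """
    sim_a = sum(cosine(target, g) for g in group_a) / len(group_a)
    sim_b = sum(cosine(target, g) for g in group_b) / len(group_b)
    return sim_a - sim_b
```

A positive score means the target word sits closer to group A in the embedding geometry; comparing scores between a 1910s and a 1990s embedding is what turns the static measure into a temporal one.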
• An orthogonally equivariant estimator for the covariance matrix is proposed that is valid when the dimension $p$ is larger than the sample size $n$. Equivariance under orthogonal transformations is a less restrictive assumption than structural assumptions on the true covariance matrix. It reduces the problem of estimation of the covariance matrix to that of estimation of its eigenvalues. In this paper, the eigenvalue estimates are obtained from an adjusted likelihood function derived by approximating the integral over the eigenvectors of the sample covariance matrix, which is a challenging problem in its own right. Comparisons with two well-known orthogonally equivariant estimators are given, which are based on Monte-Carlo risk estimates for simulated data and misclassification errors in a real data analysis.
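The equivariant reduction to eigenvalue estimation can be illustrated in the 2x2 case. This is a sketch only: simple linear shrinkage of the eigenvalues stands in for the paper's likelihood-based adjustment, which is considerably more involved, and the function name is ours.

```python
import math

def equivariant_estimate_2x2(S, alpha=0.5):
    """Orthogonally equivariant covariance estimate for a 2x2 sample
    covariance S: keep the sample eigenvectors, replace the eigenvalues.

    Shrinking the eigenvalues toward their mean is used here purely for
    illustration; any rule mapping sample eigenvalues to estimates
    yields an equivariant estimator.
    """
    a, b, c = S[0][0], S[0][1], S[1][1]
    half_tr = (a + c) / 2.0
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lam = [half_tr + disc, half_tr - disc]      # sample eigenvalues
    # Eigenvector for the larger eigenvalue (diagonal case handled).
    if b != 0.0:
        vx, vy = b, lam[0] - a
    else:
        vx, vy = (1.0, 0.0) if a >= c else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    v1 = (vx / norm, vy / norm)
    v2 = (-v1[1], v1[0])                        # orthogonal complement
    mean = (lam[0] + lam[1]) / 2.0
    shrunk = [(1 - alpha) * l + alpha * mean for l in lam]
    # Reassemble sum_i shrunk_i * v_i v_i^T.
    est = [[0.0, 0.0], [0.0, 0.0]]
    for l, v in zip(shrunk, (v1, v2)):
        for i in range(2):
            for j in range(2):
                est[i][j] += l * v[i] * v[j]
    return est
```

Because only the eigenvalues are altered, the estimate commutes with orthogonal rotations of the data, which is exactly the equivariance property the abstract describes.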
• Nov 23 2017 math.GR arXiv:1711.08410v1
We introduce a class of countable groups by some abstract group-theoretic conditions. It includes linear groups with finite amenable radical and finitely generated residually finite groups with some non-vanishing $\ell^2$-Betti numbers that are not virtually a product of two infinite groups. Further, it includes acylindrically hyperbolic groups. For any group $\Gamma$ in this class we determine the general structure of its possible lattice embeddings, i.e. of all compactly generated, locally compact groups that contain $\Gamma$ as a lattice. This leads to a precise description of possible non-uniform lattice embeddings of groups in this class. Further applications include the determination of possible lattice embeddings of fundamental groups of closed manifolds with pinched negative curvature.
• Nov 23 2017 math.LO arXiv:1711.08409v1
In this paper we introduce the notion of involutive filters of pseudo-hoops and emphasize their role in probability theory on these structures. A characterization of involutive pseudo-hoops is given and their properties are investigated. We give characterizations of involutive filters of a bounded pseudo-hoop, and we prove that in the case of bounded Wajsberg pseudo-hoops the notions of fantastic and involutive filters coincide. One of the main results consists of proving that a normal filter $F$ of a bounded pseudo-hoop $A$ is involutive if and only if $A/F$ is an involutive pseudo-hoop. It is also proved that any Boolean filter of a bounded Wajsberg pseudo-hoop is involutive. The notions of state operators and state-morphism operators on pseudo-hoops are introduced and the relationship between these operators is investigated. For a bounded Wajsberg pseudo-hoop we prove that the kernel of any state operator is an involutive filter.
• Hybrid analog and digital beamforming is a promising candidate for large-scale mmWave MIMO systems because of its ability to significantly reduce the hardware complexity of conventional fully-digital beamforming schemes while being capable of approaching their performance. Most of the prior work on hybrid beamforming considers narrowband channels; however, broadband systems such as mmWave systems are frequency-selective. In broadband systems, it is desirable to design a common analog beamformer for the entire band while employing different digital beamformers in different frequency sub-bands. This paper considers hybrid beamforming design for systems with OFDM modulation. First, for an SU-MIMO system where the hybrid beamforming architecture is employed at both the transmitter and the receiver, we show that hybrid beamforming with a small number of RF chains can asymptotically approach the performance of fully-digital beamforming for a sufficiently large number of transceiver antennas, due to the sparse nature of mmWave channels. For systems with a practical number of antennas, we then propose a unified heuristic design for two different hybrid beamforming structures, the fully-connected and the partially-connected structures, to maximize the overall spectral efficiency of a mmWave MIMO system. Numerical results show that the proposed algorithm outperforms existing hybrid beamforming methods, and that for the fully-connected architecture it can achieve spectral efficiency very close to that of optimal fully-digital beamforming but with far fewer RF chains. Second, for the MU-MISO case, we propose a heuristic hybrid precoding design to maximize the weighted sum rate in the downlink, and show numerically that the proposed algorithm with a practical number of RF chains can already approach the performance of fully-digital beamforming.
• After shortly analyzing data relevant to fission hindrance of odd-A nuclei and high-$K$ isomers in superheavy (SH) region we point out the inconsistency of current fission theory and propose an approach based on the instanton formalism. A few results of this method, simplified by replacing selfconsistency by elements of the macro-micro model, are given to illustrate its features.

Zoltán Zimborás Nov 17 2017 07:59 UTC

Interesting title for a work on Mourre theory for Floquet Hamiltonians.
I wonder how this slipped through the prereview process in arXiv.

Aram Harrow Nov 07 2017 08:52 UTC

I am not sure, but the title is great.

Noon van der Silk Nov 07 2017 05:13 UTC

I'm not against this idea; but what's the point? Clearly it's to provide some benefit to efficient implementation of particular procedures in Quil, but it'd be nice to see some detail of that, and how this might matter outside of Quil.

Noon van der Silk Nov 01 2017 21:51 UTC

This is an awesome paper; great work! :)

Xiaodong Qi Oct 25 2017 19:55 UTC

Paper source repository is here https://github.com/CQuIC/NanofiberPaper2014
Comments can be submitted as an issue in the repository. Thanks!

Siddhartha Das Oct 06 2017 03:18 UTC

Here is a work in related direction: "Unification of Bell, Leggett-Garg and Kochen-Specker inequalities: Hybrid spatio-temporal inequalities", Europhysics Letters 104, 60006 (2013), which may be relevant to the discussions in your paper. [https://arxiv.org/abs/1308.0270]

Bin Shi Oct 05 2017 00:07 UTC

Welcome to give the comments for this paper!