# Top arXiv papers

• The No Low-Energy Trivial States (NLTS) conjecture of Freedman and Hastings (Quantum Information and Computation 2014), which asserts the existence of local Hamiltonians whose low-energy states cannot be generated by constant-depth quantum circuits, identifies a fundamental obstacle to resolving the quantum PCP conjecture. Progress towards the NLTS conjecture was made by Eldar and Harrow (Foundations of Computer Science 2017), who proved a closely related theorem called No Low-Error Trivial States (NLETS). In this paper, we give a much simpler proof of the NLETS theorem, and use the same technique to establish superpolynomial circuit size lower bounds for noisy ground states of local Hamiltonians (assuming $\mathsf{QCMA} \neq \mathsf{QMA}$), resolving an open question of Eldar and Harrow. We discuss the new light our results cast on the relationship between NLTS and NLETS. Finally, our techniques imply the existence of *approximate* quantum low-weight check (qLWC) codes with linear rate, linear distance, and constant weight checks. These codes are similar to quantum LDPC codes except (1) each particle may participate in a large number of checks, and (2) errors only need to be corrected up to fidelity $1 - 1/\mathsf{poly}(n)$. This stands in contrast to the best-known stabilizer LDPC codes due to Freedman, Meyer, and Luo which achieve a distance of $O(\sqrt{n \log n})$. The principal technique used in our results is to leverage the Feynman-Kitaev clock construction to approximately embed a subspace of states defined by a circuit as the ground space of a local Hamiltonian.
• We derive an attainable bound on the precision of quantum state estimation for finite dimensional systems, providing a construction for the asymptotically optimal measurement. Our results hold under an assumption called local asymptotic covariance, which is weaker than unbiasedness or local unbiasedness. The derivation is based on an analysis of the limiting distribution of the estimator's deviation from the true value of the parameter, and takes advantage of quantum local asymptotic normality, a duality between sequences of identically prepared states and Gaussian states of continuous variable systems. We first prove our results for the mean square error of a special class of models, called D-invariant, and then extend the results to arbitrary models, generic cost functions, and global state estimation, where the unknown parameter is not restricted to a local neighbourhood of the true value. The extension includes a treatment of nuisance parameters, namely parameters that are not of interest to the experimenter but nevertheless affect the estimation. As an illustration of the general approach, we provide the optimal estimation strategies for the joint measurement of two qubit observables, for the estimation of qubit states in the presence of amplitude damping noise, and for noisy multiphase estimation.
• We study the problem of computing the $p\rightarrow q$ norm of a matrix $A \in \mathbb{R}^{m \times n}$, defined as $\|A\|_{p\rightarrow q} := \max_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\|Ax\|_q}{\|x\|_p}$. This problem generalizes the spectral norm of a matrix ($p=q=2$) and the Grothendieck problem ($p=\infty$, $q=1$), and has been widely studied in various regimes. When $p \geq q$, the problem exhibits a dichotomy: constant-factor approximation algorithms are known if $2 \in [q,p]$, and the problem is hard to approximate within almost polynomial factors when $2 \notin [q,p]$. The regime $p < q$, known as *hypercontractive* norms, is particularly significant for various applications but much less well understood. The case with $p = 2$ and $q > 2$ was studied by [Barak et al., STOC'12], who gave sub-exponential algorithms for a promise version of the problem (which captures small-set expansion) and also proved hardness of approximation results based on the Exponential Time Hypothesis. However, no NP-hardness of approximation is known for these problems for any $p < q$. We study the hardness of approximating matrix norms in both of the above cases and prove the following results: - We show that for any $1 < p < q < \infty$ with $2 \notin [p,q]$, $\|A\|_{p\rightarrow q}$ is hard to approximate within $2^{O(\log^{1-\epsilon}\!n)}$ assuming $\mathsf{NP} \not\subseteq \mathsf{BPTIME}(2^{\log^{O(1)}\!n})$. This suggests that, similar to the case of $p \geq q$, the hypercontractive setting may be qualitatively different when $2$ does not lie between $p$ and $q$. - For all $p \geq q$ with $2 \in [q,p]$, we show that $\|A\|_{p\rightarrow q}$ is hard to approximate within any factor better than $1/(\gamma_{p^*} \cdot \gamma_q)$, where for any $r$, $\gamma_r$ denotes the $r$-th norm of a Gaussian, and $p^*$ is the dual norm of $p$.
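To make the quantity concrete, here is a minimal numerical sketch (an illustrative heuristic, not one of the algorithms studied in the paper) that lower-bounds $\|A\|_{p\to q}$ by random search over directions:

```python
import numpy as np

def pq_norm_lower_bound(A, p, q, iters=2000, seed=0):
    """Heuristic lower bound on ||A||_{p->q} = max_{x != 0} ||Ax||_q / ||x||_p,
    obtained by sampling random directions; exact computation is the hard
    problem the paper studies."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(iters):
        x = rng.standard_normal(A.shape[1])
        best = max(best, np.linalg.norm(A @ x, ord=q) / np.linalg.norm(x, ord=p))
    return best

A = np.array([[2.0, 0.0], [0.0, 1.0]])
# For p = q = 2 the p->q norm is the spectral norm, which equals 2 here.
print(pq_norm_lower_bound(A, 2, 2))
```

Random search only certifies a lower bound; the hardness results above concern certifying the maximum itself.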
• This paper introduces a novel measure-theoretic learning theory to analyze generalization behaviors of practical interest. The proposed learning theory has the following abilities: 1) to utilize the qualities of each learned representation on the path from raw inputs to outputs in representation learning, 2) to guarantee good generalization errors possibly with arbitrarily rich hypothesis spaces (e.g., arbitrarily large capacity and Rademacher complexity) and non-stable/non-robust learning algorithms, and 3) to clearly distinguish each individual problem instance from each other. Our generalization bounds are relative to a representation of the data, and hold true even if the representation is learned. We discuss several consequences of our results on deep learning, one-shot learning and curriculum learning. Unlike statistical learning theory, the proposed learning theory analyzes each problem instance individually via measure theory, rather than a set of problem instances via statistics. Because of the differences in the assumptions and the objectives, the proposed learning theory is meant to be complementary to previous learning theory and is not designed to compete with it.
• We study a quantum Stirling cycle which extracts work using quantized energy levels. The work and the efficiency of the engine depend on the length of the potential well, and the Carnot efficiency is achieved in a low temperature limiting case. We show that the lack of information about the position of the particle inside the potential well can be converted into useful work without resorting to any measurement. In the low temperature limit, we calculate the amount of work extractable from distinguishable particles, fermions and bosons.
• We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. $\mathsf{AC^0}$ tampering functions), our codes have codeword length $n = k^{1+o(1)}$ for a $k$-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length $2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains $n\leq k^{1+\epsilon}$), and extending our result beyond this would require separating $\mathsf{P}$ from $\mathsf{NC^1}$. We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction.
• Visibility of singlet-triplet qubit readout is reduced to almost zero in large magnetic field gradients due to relaxation processes. Here we present a new readout technique that is robust against relaxation and allows for measurement when previously studied methods fail. This technique maps the qubit onto spin states that are immune to relaxation using a spin dependent electron tunneling process between the qubit and the lead. We probe this readout's performance as a function of magnetic field gradient and applied magnetic field, and optimize the pulse applied to the qubit through experiment and simulation.
• Bhat characterizes the family of linear maps defined on $B(\mathcal{H})$ which preserve unitary conjugation. We generalize this idea and study maps with a similar equivariance property on finite-dimensional matrix algebras. We show that maps with the equivariance property are significant for studying $k$-positivity of linear maps defined on finite-dimensional matrix algebras. Choi showed that $n$-positivity is different from $(n-1)$-positivity for linear maps defined on $n \times n$ matrix algebras. In this paper, we present a parametric family of linear maps $\Phi_{\alpha, \beta,n} : M_{n}(\mathbb{C}) \rightarrow M_{n^{2}}(\mathbb{C})$ and study the properties of positivity, complete positivity, decomposability, etc. We determine values of the parameters $\alpha$ and $\beta$ for which the family of maps $\Phi_{\alpha, \beta,n}$ is positive for any natural number $n \geq 3$. We focus on the case $n=3$, that is, $\Phi_{\alpha, \beta,3}$, and study the properties of $2$-positivity, complete positivity and decomposability. In particular, we give values of the parameters $\alpha$ and $\beta$ for which the family of maps $\Phi_{\alpha, \beta,3}$ is $2$-positive and not completely positive.
• We develop a theory to describe dynamics of a nonstationary open quantum system interacting with a hybrid environment, which includes high-frequency and low-frequency noise components. One part of the system-bath interaction is treated in a perturbative manner, whereas the other part is considered exactly. This approach allows us to derive a set of master equations where the relaxation rates are expressed as convolutions of the Bloch-Redfield and Marcus formulas. Our theory enables analysis of systems that have extremely small energy gaps in the presence of a realistic environment. We apply the theory to an example of the 16-qubit quantum annealing problem with dangling qubits and show qualitative agreement with experimental results.
• We show a global existence result of weak solutions for a class of generalized Surface Quasi-Geostrophic equation in the inviscid case. We also prove the global regularity of such solutions for the equation with slightly supercritical dissipation, which turns out to correspond to a logarithmically supercritical diffusion due to the singular nature of the velocity. Our last result is the eventual regularity in the supercritical cases for such weak solutions. The main idea in the proof of the existence part is based on suitable commutator estimates along with a careful cutting into low/high frequencies and inner/outer spatial scales to pass to the limit; while the proof of both the global regularity result and the eventual regularity for the supercritical diffusion are essentially based on the use of the so-called modulus of continuity method.
• In this paper, we provide a sufficient condition of the energy equality for the incompressible Navier-Stokes equations in bounded domains.
• We give a maximal independent set (MIS) algorithm that runs in $O(\log \log \Delta)$ rounds in the congested clique model, where $\Delta$ is the maximum degree of the input graph. This improves upon the $O(\frac{\log(\Delta) \cdot \log \log \Delta}{\sqrt{\log n}} + \log \log \Delta )$ rounds algorithm of [Ghaffari, PODC '17], where $n$ is the number of vertices of the input graph. In the first stage of our algorithm, we simulate the first $O(\frac{n}{\text{poly} \log n})$ iterations of the sequential random order Greedy algorithm for MIS in the congested clique model in $O(\log \log \Delta)$ rounds. This thins out the input graph relatively quickly: After this stage, the maximum degree of the residual graph is poly-logarithmic. In the second stage, we run the MIS algorithm of [Ghaffari, PODC '17] on the residual graph, which completes in $O(\log \log \Delta)$ rounds on graphs of poly-logarithmic degree.
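The sequential process that the first stage simulates is easy to state; here is a minimal sketch of random-order greedy MIS (the centralized algorithm itself, not the congested-clique simulation):

```python
import random

def random_order_greedy_mis(adj, seed=0):
    """Sequential random-order greedy MIS: visit vertices in a uniformly
    random order, adding each vertex that has no neighbor already in the set.
    adj maps each vertex to its set of neighbors."""
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)
    mis, blocked = set(), set()
    for v in order:
        if v not in blocked:
            mis.add(v)
            blocked.update(adj[v])  # neighbors of v can never join the MIS
    return mis

# 5-cycle: every maximal independent set has size 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(random_order_greedy_mis(cycle))
```

The paper's contribution is simulating the first $O(n/\mathrm{poly}\log n)$ iterations of this loop in only $O(\log\log\Delta)$ congested-clique rounds.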
• The floating structure problem describes the interaction between surface water waves and a floating body, generally a boat or a wave energy converter. As shown by Lannes in [18] the equations for the fluid motion can be reduced to a set of two evolution equations on the surface elevation and the horizontal discharge. The presence of the object is accounted for by a constraint on the discharge under the object; the pressure exerted by the fluid on this object is then the Lagrange multiplier associated to this constraint. Our goal in this paper is to prove the well-posedness of this fluid-structure interaction problem in the shallow water approximation under the assumption that the flow is axisymmetric without swirl. We write the fluid equations as a quasilinear hyperbolic mixed initial boundary value problem and the solid equation as a second order ODE coupled to the fluid equations. Finally we prove the local in time well-posedness for this coupled problem, provided some compatibility conditions on the initial data are satisfied.
• We study the homotopy category $\mathrm{hmf}(R,W)$ of matrix factorizations of non-zero elements $W\in R^\times$, where $R$ is an elementary divisor domain. When $R$ has prime elements and $W$ factors into a square-free element $W_0$ and a finite product of primes of multiplicity greater than one and which do not divide $W_0$, we show that $\mathrm{hmf}(R,W)$ is triangle-equivalent with an orthogonal sum of the triangulated categories of singularities $\mathrm{D}_{\mathrm{sing}}(A_n(p))$ of the local Artinian rings $A_n(p)=R/\langle p^n\rangle$, where $p$ runs over the prime divisors of $W$ of order $n\geq 2$. This result holds even when $R$ is not Noetherian. The triangulated categories $\mathrm{D}_{\mathrm{sing}}(A_n(p))$ are Krull-Schmidt and we describe them explicitly. We also study the cocycle category $\mathrm{zmf}(R,W)$, showing that it is additively generated by elementary matrix factorizations. Finally, we discuss a few classes of examples.
• We study the Plateau problem with a lower dimensional obstacle in $\mathbb{R}^n$. Intuitively, in $\mathbb{R}^3$ this corresponds to a soap film (spanning a given contour) that is pushed from below by a "vertical" 2D half-space (or some smooth deformation of it). We establish almost optimal $C^{1,1/2-}$ estimates for the solutions near points on the free boundary of the contact set in dimension $n = 3$. The $C^{1,1/2-}$ estimates follow from an $\varepsilon$-regularity result for minimal surfaces with thin obstacles (valid in any dimension $n$) in the spirit of De Giorgi's improvement of flatness. To prove it, we follow Savin's small perturbations method. A nontrivial difficulty in using Savin's approach with minimal surfaces with thin obstacles is that near a typical contact point the solution consists of two smooth surfaces that intersect transversally, and hence it is not very flat at small scales. Via a new "dichotomy approach" based on barrier arguments we are able to overcome this difficulty and prove the desired result.
• We introduce the notion of a *complex cell*, a complexification of the cells/cylinders used in real tame geometry. Complex cells are equipped with a natural notion of holomorphic extension, and the hyperbolic geometry of a cell within its extension provides the class of complex cells with a rich geometric function theory absent in the real case. We use this to prove a complex analog of the cellular decomposition theorem of real tame geometry. In the algebraic case we show that the complexity of such decompositions depends polynomially on the degrees of the equations involved. Using this theory, we sharpen the Yomdin-Gromov algebraic lemma on $C^r$-smooth parametrizations of semialgebraic sets: we show that the number of $C^r$ charts can be taken to be polynomial in the smoothness order $r$ and in the complexity of the set. The algebraic lemma was initially invented in the work of Yomdin and Gromov to produce estimates for the topological entropy of $C^\infty$ maps. Combined with work of Burguet, Liao and Yang, our refined version establishes an optimal sharpening of these estimates for *analytic* maps, in the form of tight bounds on the tail entropy and volume growth. This settles a conjecture of Yomdin who proved the same result in dimension two in 1991. A self-contained proof of these estimates using the refined algebraic lemma is given in an appendix by Yomdin. The algebraic lemma has more recently been used in the study of rational points on algebraic and transcendental varieties. We use the theory of complex cells in these two directions. In the algebraic context we prove a sharpening of a result of Heath-Brown on interpolating rational points in algebraic varieties. In the transcendental context we prove an interpolation result for (unrestricted) logarithmic images of subanalytic sets.
• Secure communication is of paramount importance in modern society. Asymmetric cryptography methods such as the widely used RSA method allow secure exchange of information between parties who have not shared secret keys. However, the existing asymmetric cryptographic schemes rely on unproven mathematical assumptions for security. Further, the digital keys used in their implementation are susceptible to copying that might remain unnoticed. Here we introduce a secure communication method that overcomes these two limitations by employing Physical Unclonable Keys (PUKs). Using optical PUKs realized in opaque scattering materials and employing off-the-shelf equipment, we transmit messages in an error-corrected way. Information is transmitted as patterned wavefronts of few-photon wavepackets which can be successfully decrypted only with the receiver's PUK. The security of PUK-Enabled Asymmetric Communication (PEAC) is not based on any stored secret but on the hardness of distinguishing between different few-photon wavefronts.
• (Feb 22 2018, math.MG math.FA, arXiv:1802.07563v1) It is proved that the classical Laplace transform is a continuous valuation which is positively GL$(n)$ covariant and logarithmic translation covariant. Conversely, these properties turn out to be sufficient to characterize this transform.
• (Feb 22 2018, math.MG, arXiv:1802.07561v1) For $1 \leq p < \infty$, Ludwig, Haberl and Parapatits classified $L_p$ Minkowski valuations intertwining the special linear group with additional conditions such as homogeneity and continuity. In this paper, a complete classification of $L_p$ Minkowski valuations intertwining the special linear group on polytopes, without any additional conditions, is established for $p \geq 1$, including $p = \infty$. For $n=3$ and $p=1$, there exist valuations not mentioned before.
• (Feb 22 2018, math.MG, arXiv:1802.07559v1) In this article, a classification of continuous, linearly intertwining, symmetric $L_p$-Blaschke ($p>1$) valuations is established as an extension of Haberl's work on Blaschke valuations. More precisely, we show that for dimensions $n\geq 3$, the only continuous, linearly intertwining, normalized symmetric $L_p$-Blaschke valuation is the normalized $L_p$-curvature image operator, while for dimension $n = 2$, a rotated normalized $L_p$-curvature image operator is the only additional one. One of the advantages of our approach is that we deal with normalized symmetric $L_p$-Blaschke valuations, which makes it possible to handle the case $p=n$. The cases where $p \neq n$ are also discussed by studying the relations between symmetric $L_p$-Blaschke valuations and normalized ones.
• We consider some of Jonathan and Peter Borweins' contributions to the high-precision computation of $\pi$ and the elementary functions, with particular reference to their book "Pi and the AGM" (Wiley, 1987). Here "AGM" is the arithmetic-geometric mean of Gauss and Legendre. Because the AGM converges quadratically, it can be combined with fast multiplication algorithms to give fast algorithms for the $n$-bit computation of $\pi$, and more generally the elementary functions. These algorithms run in almost linear time $O(M(n)\log n)$, where $M(n)$ is the time for $n$-bit multiplication. We outline some of the results and algorithms given in Pi and the AGM, and present some related (but new) results. In particular, we improve the published error bounds for some quadratically and quartically convergent algorithms for $\pi$, such as the Gauss-Legendre algorithm. We show that an iteration of the Borwein-Borwein quartic algorithm for $\pi$ is equivalent to two iterations of the Gauss-Legendre quadratic algorithm for $\pi$, in the sense that they produce exactly the same sequence of approximations to $\pi$ if performed using exact arithmetic.
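The Gauss-Legendre iteration discussed above is short enough to sketch; in double precision, a handful of quadratically convergent steps already exhaust the available digits:

```python
import math

def gauss_legendre_pi(iterations=4):
    """AGM-based Gauss-Legendre iteration for pi: the number of correct
    digits roughly doubles at each step."""
    a, b = 1.0, 1.0 / math.sqrt(2.0)
    t, p = 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2.0        # arithmetic mean
        b = math.sqrt(a * b)          # geometric mean
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2.0 * p
    return (a + b) ** 2 / (4.0 * t)

print(gauss_legendre_pi())  # agrees with math.pi to ~1e-15
```

In exact arithmetic, one iteration of the Borwein-Borwein quartic algorithm would correspond to two of these quadratic steps, which is the equivalence the abstract refers to.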
• In this paper we prove that any $C^{1,\alpha}$ curve in $\mathbb{R}^n$, with $\alpha \in (\frac{1}{2},1]$, is the solution of the gradient flow equation for some $C^1$ convex function $f$, if and only if it is strongly self-contracted.
• We consider eigenvalues of the Dirichlet-to-Neumann operator for the Laplacian in a domain (or manifold) with edges and establish the asymptotics of the eigenvalue counting function $\mathsf{N}(\lambda)= \kappa_0\lambda^d + O(\lambda^{d-1})$ as $\lambda\to+\infty$, where $d$ is the dimension of the boundary. Further, in certain cases we establish the two-term asymptotics $\mathsf{N}(\lambda)= \kappa_0\lambda^d+\kappa_1\lambda^{d-1}+o(\lambda^{d-1})$ as $\lambda\to+\infty$. We also establish improved asymptotics for Riesz means.
• We propose a Global-Local optimization algorithm for quantum control that combines standard local search methodologies with evolutionary algorithms. This allows us to find faster solutions to a set of problems relating to ultracold control of Bose-Einstein condensates.
• We propose a quantum optimal control algorithm that performs a gradient descent in a reduced basis, named GRadient Optimization Using Parametrization (GROUP). We compare this optimization algorithm to other state-of-the-art algorithms in quantum control, namely Gradient Ascent Pulse Engineering (GRAPE), Krotov's method, and Nelder-Mead using the Chopped Random Basis (CRAB). We find that GROUP converges much faster than Nelder-Mead with CRAB and achieves better results than GRAPE and Krotov's method on the control problem presented here.
• We consider the set $\mathrm{MC}_d$ of monic centered polynomials of one complex variable with degree $d \geq 2$, and study the map $\widehat{\Phi}_d:\mathrm{MC}_d\to \widetilde{\Lambda}_d \subset \mathbb{C}^d / \mathfrak{S}_d$ which maps each $f \in \mathrm{MC}_d$ to its unordered collection of fixed-point multipliers. We give an explicit formula for counting the number of elements of each fiber $\widehat{\Phi}_d^{-1}\left(\bar{\lambda}\right)$ for every $\bar{\lambda} \in \widetilde{\Lambda}_d$. This formula contains no induction process, and is a drastic improvement of our previous result which gave a rather long algorithm with some induction processes for counting the number of elements of each fiber.
• Eigenvalues of higher multiplicity of the Nonlinear Fourier Transform (NFT) are considered for information transmission over fiber optic channels. The effects of phase, time or frequency shifts on this generalized NFT are derived, as well as an expression for the signal energy. These relations are used to design transmit signals and numerical algorithms to compute the direct and inverse NFTs, and to numerically demonstrate communication using a soliton with one double eigenvalue.
• We prove an abstract compactness result for gradient flow lines of a non-local unregularized gradient flow equation on a scale Hilbert space. This is the first step towards Floer theory on scale Hilbert spaces.
• (Feb 22 2018, cs.LG, arXiv:1802.07427v1) In the large-scale multiclass setting, assigning labels often consists of answering multiple questions to drill down through a hierarchy of classes. Here, the labor required per annotation scales with the number of questions asked. We propose active learning with partial feedback. In this setup, the learner asks the annotator if a chosen example belongs to a (possibly composite) chosen class. The answer eliminates some classes, leaving the agent with a partial label. Success requires (i) a sampling strategy to choose (example, class) pairs, and (ii) learning from partial labels. Experiments on the TinyImageNet dataset demonstrate that our most effective method achieves a 26% relative improvement (8.1% absolute) in top-1 classification accuracy for a 250k (or 30%) binary question budget, compared to a naive baseline. Our work may also impact traditional data annotation. For example, our best method fully annotates TinyImageNet with only 482k (with EDC though, ERC is 491) binary questions (vs 827k for the naive method).
• Let $V$ be a linear subspace of $M_n(\mathbb{C})$ which contains the identity matrix and is stable under Hermitian transpose. A "quantum $k$-clique" for $V$ is a rank $k$ orthogonal projection $P \in M_n(\mathbb{C})$ for which $\dim(PVP) = k^2$, and a "quantum $k$-anticlique" is a rank $k$ orthogonal projection for which $\dim(PVP) = 1$. We give upper and lower bounds both for the largest dimension of $V$ which would ensure the existence of a quantum $k$-anticlique, and for the smallest dimension of $V$ which would ensure the existence of a quantum $k$-clique.
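The central quantity $\dim(PVP)$ is easy to compute numerically; here is a small illustrative check (a toy example, not one of the paper's constructions):

```python
import numpy as np

def compressed_dim(P, V_basis):
    """dim(P V P): the rank of the span of the compressions {P M P}
    for M ranging over a basis of the operator space V."""
    flat = np.array([(P @ M @ P).ravel() for M in V_basis])
    return int(np.linalg.matrix_rank(flat))

n = 3
# V = diagonal matrices in M_3(C): contains I and is closed under dagger.
V_basis = [np.diag(np.eye(n)[i]).astype(complex) for i in range(n)]
# Rank-1 projection onto e_0: it compresses every element of V to a
# scalar multiple of itself, so dim(PVP) = 1 -- a quantum 1-anticlique.
P = np.zeros((n, n), dtype=complex)
P[0, 0] = 1.0
print(compressed_dim(P, V_basis))  # 1
```

Any rank-1 projection is trivially a quantum 1-anticlique; the interesting regime in the paper is $k \geq 2$, where the dimension of $V$ starts to matter.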
• We theoretically investigate a near-quantum-limited parametric amplifier based on the non-linear dynamics of quasiparticles flowing through a superconductor-insulator-superconductor junction. Photon-assisted tunneling, resulting from the combination of dc- and ac-voltage bias, gives rise to a strong parametric interaction for the electromagnetic modes reflected by the junction coupled to a transmission line. We show phase-sensitive and phase-preserving amplification, together with single- and two-mode squeezing. For an aluminum junction pumped at 8 GHz, we predict narrow-band phase-sensitive amplification of microwave signals to more than 30 dB, and broadband phase-preserving amplification larger than 15 dB over a 4 GHz band. We also predict single-mode squeezing reaching -20 dB and two-mode squeezing of -14 dB over a 4 GHz band. A key feature is that the device parameters can be tuned in situ by the applied dc- and ac-voltage biases.
• In this note, we provide a simple derivation of expressions for the restricted partition function and its polynomial part. Our proof relies on elementary algebra on rational functions and a lemma that expresses the polynomial part as an average of the partition function.
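For concreteness, the restricted partition function itself (the number of ways to write $n$ as a nonnegative integer combination of a fixed list of part sizes) can be computed by a standard recurrence; a minimal sketch:

```python
def restricted_partition(n, parts):
    """Number of representations of n as a nonnegative integer combination
    of the given part sizes (order-independent), via the standard
    coin-counting dynamic-programming recurrence."""
    ways = [1] + [0] * n
    for a in parts:
        for s in range(a, n + 1):
            ways[s] += ways[s - a]  # extend each smaller sum by part a
    return ways[n]

# Partitions of 6 with parts in {1, 2, 3}: there are 7 of them.
print(restricted_partition(6, [1, 2, 3]))  # 7
```

The note's subject is closed-form expressions for this function and its polynomial part; the recurrence above is just the definition made executable.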
• We establish connections between the problem of learning a two-layers neural network with good generalization error and tensor decomposition. We consider a model with input $\boldsymbol x \in \mathbb R^d$, $r$ hidden units with weights $\{\boldsymbol w_i\}_{1\le i \le r}$ and output $y\in \mathbb R$, i.e., $y=\sum_{i=1}^r \sigma(\left \langle \boldsymbol x, \boldsymbol w_i\right \rangle)$, where $\sigma$ denotes the activation function. First, we show that, if we cannot learn the weights $\{\boldsymbol w_i\}_{1\le i \le r}$ accurately, then the neural network does not generalize well. More specifically, the generalization error is close to that of a trivial predictor with access only to the norm of the input. This result holds for any activation function, and it requires that the weights are roughly isotropic and the input distribution is Gaussian, which is a typical assumption in the theoretical literature. Then, we show that the problem of learning the weights $\{\boldsymbol w_i\}_{1\le i \le r}$ is at least as hard as the problem of tensor decomposition. This result holds for any input distribution and assumes that the activation function is a polynomial whose degree is related to the order of the tensor to be decomposed. By putting everything together, we prove that learning a two-layers neural network that generalizes well is at least as hard as tensor decomposition. It has been observed that neural network models with more parameters than training samples often generalize well, even if the problem is highly underdetermined. This means that the learning algorithm does not estimate the weights accurately and yet is able to yield a good generalization error. This paper shows that such a phenomenon cannot occur when the input distribution is Gaussian and the weights are roughly isotropic. We also provide numerical evidence supporting our theoretical findings.
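The teacher model in the abstract is simple to write down; here is a sketch of the data-generating process, with $\tanh$ as one illustrative choice of activation and the Gaussian-input, roughly-isotropic-weight assumptions the result uses:

```python
import numpy as np

def two_layer_teacher(X, W, sigma=np.tanh):
    """y = sum_i sigma(<x, w_i>) for inputs X of shape (N, d)
    and hidden weights W of shape (r, d)."""
    return sigma(X @ W.T).sum(axis=1)

rng = np.random.default_rng(0)
d, r, N = 5, 3, 1000
W = rng.standard_normal((r, d)) / np.sqrt(d)  # roughly isotropic weights
X = rng.standard_normal((N, d))               # Gaussian input distribution
y = two_layer_teacher(X, W)
print(y.shape)  # (1000,)
```

The paper's hardness result says that recovering $W$ from such $(X, y)$ pairs, well enough to generalize, is at least as hard as decomposing a related tensor.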
• (Feb 22 2018, physics.comp-ph quant-ph, arXiv:1802.07259v1) We combine the multigrid (MG) method with state-of-the-art concepts from the variational formulation of the numerical renormalization group. The resulting MG renormalization (MGR) method is a natural generalization of the MG method for solving partial differential equations. When the solution on a grid of $N$ points is sought, our MGR method has a computational cost scaling as $\mathcal{O}(\log(N))$, as opposed to $\mathcal{O}(N)$ for the best standard MG method. Therefore MGR can exponentially speed up standard MG computations. To illustrate our method, we develop a novel algorithm for the ground state computation of the nonlinear Schrödinger equation. Our algorithm acts variationally on tensor products and updates the tensors one after another by solving a local nonlinear optimization problem. We compare several different methods for the nonlinear tensor update and find that the Newton method is the most efficient as well as precise. The combination of MGR with our nonlinear ground state algorithm produces accurate results for the nonlinear Schrödinger equation on $N = 10^{18}$ grid points in three spatial dimensions.
• The relic abundance of heavy stable particles charged under a confining gauge group can be depleted by a second stage of annihilations near the deconfinement temperature. This proceeds via the formation of quarkonia-like states, in which the heavy pair subsequently annihilates. The size of the quarkonium formation cross section was the subject of some debate. We estimate this cross section in a simple toy model. The dominant process can be viewed as a rearrangement of the heavy and light quarks, leading to a geometric cross section of hadronic size. In contrast, processes in which only the heavy constituents are involved lead to mass-suppressed cross sections. These results apply to any scenario with bound states of sizes much larger than their inverse mass, such as U(1) models with charged particles of different masses, and can be used to construct ultra-heavy dark-matter models with masses above the naïve unitarity bound. They are also relevant for the cosmology of any stable colored relic.
• Accumulation of energy by reactive elements is limited by the amplitude of time-harmonic external sources. In the steady-state regime, all incident power is fully reflected back to the source, and the stored energy does not increase in time, although the external source continuously supplies energy. Here, we show that this claim is not true if the reactive element is time-varying: time-varying lossless loads of a transmission line, or lossless metasurfaces, can accumulate electromagnetic energy supplied by a time-harmonic source continuously in time, without any theoretical limit. We analytically derive the required time dependence of the load reactance and show that it can in principle be realized as a series connection of mixers and filters. Furthermore, we prove that by properly designing time-varying LC circuits one can arbitrarily engineer the time dependence of the current in a circuit fed by a given time-harmonic source. As an example, we theoretically demonstrate a circuit with a linearly increasing current through the inductor. Such LC circuits can accumulate huge energy from both the time-harmonic external source and the pump which works by varying the circuit elements in time. Finally, we discuss how this stored energy can be released in the form of a time-compressed pulse.
• We compute the Euclidean correlators of the stress tensor in pure $SU(3)$ Yang-Mills theory at finite temperature at zero and finite spatial momenta with lattice simulations. We perform continuum extrapolations using $N_\tau=10,12,16,20$ lattices with renormalized anisotropy 2. We use these correlators to estimate the shear viscosity of the gluon plasma in the deconfined phase. For $T=1.5T_c$ we obtain $\eta/s=0.17(2)$.
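For orientation, such lattice extractions relate the zero-momentum Euclidean correlator of the off-diagonal stress tensor component to its spectral function $\rho$, and read off $\eta$ via a Kubo formula. In one common convention (normalizations and factors of $\pi$ vary between papers, and the paper's own convention may differ):

$$G(\tau) = \int_0^\infty d\omega\, \rho(\omega)\, \frac{\cosh\!\left[\omega\left(\tau - \frac{1}{2T}\right)\right]}{\sinh\!\left(\frac{\omega}{2T}\right)}, \qquad \eta = \pi \lim_{\omega \to 0} \frac{\rho(\omega)}{\omega}.$$

The difficulty is the ill-posed inversion of $G(\tau)$, measured at a finite number of $\tau$ values, for $\rho(\omega)$; hence the result is quoted as an estimate.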
• We demonstrate that summing up series of Feynman diagrams can yield unbiased, accurate results for strongly correlated fermions even when the convergence radius vanishes. We consider the unitary Fermi gas, a model of non-relativistic fermions in three-dimensional continuous space. Diagrams are built from partially dressed or fully dressed propagators of single particles and pairs. The series is resummed by a conformal-Borel transformation that incorporates the large-order behavior and the analytic structure in the Borel plane, which are found by the instanton approach. We report highly accurate numerical results for the equation of state in the normal unpolarized regime, and reconcile experimental data with the theoretically conjectured fourth virial coefficient.
• Topological data analysis (TDA) provides a growing body of tools for computing geometric and topological information about spaces from a finite sample of points. We present a new adaptive algorithm for finding provably dense samples of points on real algebraic varieties given a set of defining polynomials. The algorithm utilizes methods from numerical algebraic geometry to give formal guarantees about the density of the sampling and it also employs geometric heuristics to minimize the size of the sample. As TDA methods consume significant computational resources that scale poorly in the number of sample points, our sampling minimization makes applying TDA methods more feasible. We demonstrate our algorithm on several examples.
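A stripped-down version of the sampling step can be sketched as follows (a generic Newton projection onto a hand-picked toy variety; the paper's algorithm uses numerical algebraic geometry and comes with density guarantees that this sketch lacks):

```python
import math
import random

# Minimal sketch: sample a real variety f = 0 by drawing random ambient
# points and Newton-projecting each one onto the variety along grad f.
# Toy variety (assumed for illustration): the unit circle x^2 + y^2 = 1.
def f(p):
    x, y = p
    return x * x + y * y - 1.0

def grad_f(p):
    x, y = p
    return (2 * x, 2 * y)

def project(p, iters=50):
    # Newton step for the underdetermined system: p <- p - f(p) g / |g|^2.
    for _ in range(iters):
        gx, gy = grad_f(p)
        g2 = gx * gx + gy * gy
        v = f(p)
        p = (p[0] - v * gx / g2, p[1] - v * gy / g2)
    return p

random.seed(0)
sample = [project((random.uniform(-2, 2), random.uniform(-2, 2)))
          for _ in range(200)]
assert all(abs(f(p)) < 1e-10 for p in sample)
```

Such a naive sampler gives no formal density guarantee and wastes points where the variety is oversampled, which is exactly what the paper's adaptive algorithm is designed to fix.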
• The roles played by learning and memorization represent an important topic in deep learning research. Recent work on this subject has shown that the optimization behavior of DNNs trained on shuffled labels is qualitatively different from that of DNNs trained on real labels. Here, we propose a novel permutation approach that can differentiate memorization from learning in deep neural networks (DNNs) trained as usual (i.e., using the real labels to guide the learning, rather than shuffled labels). The evaluation of whether the DNN has learned and/or memorized happens in a separate step, where we compare the predictive performance of a shallow classifier trained on the features learned by the DNN against multiple instances of the same classifier trained on the same input but with shuffled labels as outputs. By evaluating these shallow classifiers on validation sets that share structure with the training set, we are able to tell learning apart from memorization. Application of our permutation approach to multi-layer perceptrons and convolutional neural networks trained on image data corroborated many findings from other groups. Most importantly, our illustrations also uncovered interesting dynamic patterns in how DNNs memorize over increasing numbers of training epochs, and support the surprising result that DNNs are still able to learn, rather than only memorize, when trained with pure Gaussian noise as input.
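The comparison at the heart of the approach can be sketched in a few lines (a simplified stand-in: synthetic features and a nearest-centroid classifier instead of DNN features and the paper's shallow classifier): train the same classifier once with real labels and once with shuffled labels, and compare held-out accuracy.

```python
import random

# Sketch of the real-vs-shuffled-label comparison (simplified stand-in for
# the paper's shallow-classifier-on-DNN-features setup).
random.seed(1)

def make_data(n):
    # Two classes; first coordinate is informative, second is pure noise.
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = (random.gauss(2.0 * y, 1.0), random.gauss(0.0, 1.0))
        data.append((x, y))
    return data

def centroid_fit(data):
    # Nearest-centroid "shallow classifier": one centroid per class.
    return {c: tuple(sum(v) / len(v) for v in
                     zip(*[x for x, y in data if y == c]))
            for c in (0, 1)}

def accuracy(cents, data):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = sum(1 for x, y in data
               if min(cents, key=lambda c: dist2(x, cents[c])) == y)
    return hits / len(data)

train, val = make_data(400), make_data(400)
real_acc = accuracy(centroid_fit(train), val)
shuffled = [(x, random.choice((0, 1))) for x, _ in train]
perm_acc = accuracy(centroid_fit(shuffled), val)
# Real labels generalize to validation data; shuffled labels hover near chance.
print(real_acc, perm_acc)
```

A large gap between the two validation accuracies signals learning; a classifier that fits shuffled labels on training data yet stays at chance on validation data has only memorized.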
• Two new techniques are introduced into the theory of the domination game. The cutting lemma bounds the game domination number of a partially dominated graph in terms of the game domination number of a suitably modified partially dominated graph. The union lemma bounds the S-game domination number of a disjoint union of paths using appropriate weighting functions. Using these tools, a conjecture asserting that the so-called three-legged spiders are game domination critical graphs is proved. An extended cutting lemma is also derived, and all game domination critical trees on 18, 19, and 20 vertices are listed.
• Phosphate fertilizers were first implicated by Schroeder and Balassa in 1963 for increasing the Cd concentration in cultivated soils and crops. This suggestion has become a part of the accepted paradigm on soil toxicity. Consequently, stringent fertilizer control programs to monitor Cd have been launched. Attempts to link Cd toxicity and fertilizers to chronic diseases, sometimes with good evidence but mostly on less certain data, are frequent. A re-assessment of this "accepted" paradigm is timely, given the larger body of data available today. The data show that both the input and output of Cd per hectare from fertilizers are negligibly small compared to the total amount of Cd per hectare usually present in the soil itself. It is shown that claimed correlations between fertilizer input and cadmium accumulation in crops are not robust. Alternative scenarios that explain the data are examined. Thus, soil acidulation on fertilizer loading and the effects of magnesium, zinc, and fluoride ions contained in fertilizers are considered using recent Cd$^{2+}$, Mg$^{2+}$ and F$^-$ ion-association theories. The protective role of ions like Zn, Se, and Fe is emphasized, and the question of cadmium toxicity in the presence of other ions is considered. These help to clarify and rectify difficulties found in the standard point of view. This analysis does not modify the accepted views on Cd contamination by airborne delivery, smoking, and industrial activity.
• Semidefinite programming can be considered over any real closed field, including fields of Puiseux series equipped with their nonarchimedean valuation. Nonarchimedean semidefinite programs encode parametric families of classical semidefinite programs, for sufficiently large values of the parameter. Recently, a correspondence has been established between nonarchimedean semidefinite programs and stochastic mean payoff games with perfect information. This correspondence relies on tropical geometry. It allows one to solve generic nonarchimedean semidefinite feasibility problems of large scale by means of stochastic game algorithms. In this paper, we show that the mean payoff of these games can be interpreted as a condition number for the corresponding nonarchimedean feasibility problems. This number measures how close a feasible instance is to being infeasible, and vice versa. We show that it coincides with the maximal radius, in Hilbert's projective metric, of a ball included in the feasible set. The geometric interpretation of the condition number relies in particular on a duality theorem for tropical semidefinite feasibility programs. Then, we bound the complexity of the feasibility problem in terms of the condition number. We finally give explicit bounds for this condition number in terms of the characteristics of the stochastic game. As a consequence, we show that the simplest algorithm for deciding whether a stochastic mean payoff game is winning, namely value iteration, has pseudopolynomial complexity when the number of random positions is fixed.
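Value iteration itself is simple enough to sketch on a toy mean payoff game (this two-node instance is an illustrative assumption, not from the paper): iterate the Shapley-type operator and read off the mean payoff from the linear growth of the value vector.

```python
# Value iteration for a mean payoff game with maximizer ('max'),
# minimizer ('min'), and uniform-random ('rand') positions.
# Each edge is (payoff, successor); v_k(i)/k converges to the mean payoff.
# Toy instance (assumed for illustration): looping at A pays 1 per step,
# and the A -> B -> A cycle pays 0 then 2 on average, so the value is 1.
game = {
    'A': ('max', [(1.0, 'A'), (0.0, 'B')]),
    'B': ('rand', [(4.0, 'A'), (0.0, 'A')]),
}

def step(v):
    new = {}
    for node, (kind, edges) in game.items():
        vals = [p + v[dst] for p, dst in edges]
        if kind == 'max':
            new[node] = max(vals)
        elif kind == 'min':
            new[node] = min(vals)
        else:                          # expectation at random positions
            new[node] = sum(vals) / len(vals)
    return new

v = {n: 0.0 for n in game}
K = 200
for _ in range(K):
    v = step(v)
print(v['A'] / K, v['B'] / K)          # both approach the mean payoff 1
```

The pseudopolynomial bound discussed in the abstract concerns how large $K$ must be before $v_K/K$ pins down the exact value, which is where the condition number enters.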
• The electromagnetic multipole moments of the open-flavor $Z_{\bar cq}$ states are investigated by assuming a diquark-antidiquark picture for their internal structure and the quantum numbers $J^{PC} = 1^{+-}$ for their spin-parity. In particular, their magnetic and quadrupole moments are extracted in the framework of light-cone QCD sum rules with the help of the photon distribution amplitudes. The electromagnetic multipole moments of the open-flavor $Z_{\bar cq}$ states are important dynamical observables, which encode valuable information on their underlying structure. The magnetic moments obtained for different structures are sizable and could be measured in future experiments. We obtain very small values for the quadrupole moments of the $Z_{\bar cq}$ states, indicating a nonspherical charge distribution.
• Feb 22 2018 cs.GR arXiv:1802.07710v1
Medical visualization is the use of computers to create 3D images from medical imaging data sets; almost all surgery and cancer treatment in the developed world relies on it. Volume visualization techniques include iso-surface visualization, mesh visualization, and point cloud visualization; these techniques have revolutionized medicine. Much of modern medicine relies on the 3D imaging made possible by magnetic resonance imaging (MRI) scanners, functional magnetic resonance imaging (fMRI) scanners, positron emission tomography (PET) scanners, ultrasound (US) scanners, X-ray scanners, bio-marker microscopy imaging scanners, and computed tomography (CT) scanners, which build 3D images out of 2D slices. The primary goal of this report is the application-oriented optimization of existing volume rendering methods to provide interactive frame rates. Techniques are presented for traditional alpha-blending rendering, surface-shaded display, maximum intensity projection (MIP), and fast previewing with fully interactive parameter control. Different preprocessing strategies are proposed for interactive iso-surface rendering and fast previewing, such as the well-known marching cubes algorithm.
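Two of the projection modes named above are easy to sketch on a synthetic volume (random data standing in for a CT/MRI scan; the opacity transfer function is an illustrative assumption, and this is not the report's optimized renderer):

```python
import numpy as np

# Synthetic scalar volume in [0, 1); rays run along axis 0.
rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64)).astype(np.float32)

# Maximum intensity projection (MIP): brightest sample along each ray.
mip = vol.max(axis=0)

# Traditional alpha-blending, front-to-back compositing:
#   C_out = C_in + (1 - A_in) * a * c,   A_out = A_in + (1 - A_in) * a
alpha = 0.1 * vol                    # toy transfer function: opacity ~ density
color = np.zeros((64, 64), np.float32)
acc_a = np.zeros((64, 64), np.float32)
for k in range(vol.shape[0]):
    color += (1 - acc_a) * alpha[k] * vol[k]
    acc_a += (1 - acc_a) * alpha[k]

assert mip.shape == color.shape == (64, 64)
```

Front-to-back compositing also enables the classic early-ray-termination optimization (stop once `acc_a` is close to 1), one of the interactivity tricks such reports rely on.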
• In an antiferromagnet (AF) with uniaxial anisotropy, spin-up and spin-down magnon excitations coexist and form an internal degree of freedom. A magnon spin current can be thermally generated near an exchange-coupled ferromagnet (F)/AF interface, where the degeneracy is lifted. Here we investigate thermal magnon spin transport in an F/AF/F heterostructure. We find that a sufficiently large temperature gradient can switch the downstream F via magnonic spin-transfer torque if it is initially antiparallel to the upstream F. Reciprocal switching from the parallel to the antiparallel state can be achieved by reversing the temperature gradient. The threshold temperature gradient decreases with increasing interfacial exchange coupling and increasing temperature.
• The anisotropy of galaxy clustering in redshift space has long been used to probe the rate of growth of cosmological perturbations. However, if galaxies are aligned by large-scale tidal fields, then a sample with an orientation-dependent selection effect has an additional anisotropy imprinted onto its correlation function. We use the LOWZ and CMASS catalogs of SDSS-III BOSS Data Release 12 to divide galaxies into two sub-samples based on their offset from the Fundamental Plane, which should be correlated with orientation. These sub-samples must trace the same underlying cosmology, but have opposite orientation-dependent selection effects. We measure the clustering parameters of each sub-sample and compare them in order to calculate the dimensionless parameter $B$, a measure of how strongly galaxies are aligned by gravitational tidal fields. We find that for CMASS (LOWZ), the measured $B$ is $-0.024 \pm 0.015$ ($-0.030 \pm 0.016$). This result can be compared to the theoretical predictions of Hirata (2009), who argued that since galaxy formation physics does not depend on the direction of the observer, the same intrinsic alignment parameters that describe galaxy-ellipticity correlations should also describe intrinsic alignments in the radial direction. We find that the ratio of observed to theoretical values is $0.51\pm 0.32$ ($0.77\pm0.41$) for CMASS (LOWZ). We combine the results to obtain a total ${\rm {Obs}/{Theory}} = 0.61\pm 0.26$. This measurement constitutes evidence (between 2 and 3$\sigma$) for radial intrinsic alignments, and is consistent with theoretical expectations ($<2\sigma$ difference).
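The quoted combination is consistent with a standard inverse-variance average of the two sub-sample ratios (a back-of-the-envelope check under the assumption of independent Gaussian errors; the paper's own combination may account for covariances, which would explain the slightly larger quoted uncertainty):

```python
import math

# Inverse-variance combination of the two Obs/Theory ratios quoted above.
measurements = [(0.51, 0.32),   # CMASS: ratio, sigma
                (0.77, 0.41)]   # LOWZ
w = [1.0 / s ** 2 for _, s in measurements]
mean = sum(wi * m for wi, (m, _) in zip(w, measurements)) / sum(w)
sigma = math.sqrt(1.0 / sum(w))
print(round(mean, 2), round(sigma, 2))   # ~0.61 and ~0.25
```

This reproduces the central value $0.61$ and an uncertainty close to the quoted $0.26$.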
• In this paper we investigate the dynamical behavior of the fractional differential system associated with the 5D Maxwell-Bloch model, formulated in terms of fractional Caputo derivatives.
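For reference, the Caputo derivative of order $\alpha \in (0,1)$ used in such fractional systems is

$${}^{C}\!D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau,$$

which reduces to the ordinary derivative as $\alpha \to 1$ and, unlike the Riemann-Liouville form, annihilates constants.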
• Given an involution on a rational homology 3-sphere $Y$ with quotient the $3$-sphere, we prove a formula for the Lefschetz number of the map induced by this involution in the reduced monopole Floer homology. This formula is motivated by a variant of Witten's conjecture relating the Donaldson and Seiberg--Witten invariants of 4-manifolds. A key ingredient is a skein-theoretic argument, making use of an exact triangle in monopole Floer homology, that computes the Lefschetz number in terms of the Murasugi signature of the branch set and the sum of Frøyshov invariants associated to spin structures on $Y$. We discuss various applications of our formula in gauge theory, knot theory, contact geometry, and 4-dimensional topology.
• Discrete quantum feedback control consists of dynamics steered according to the information acquired by a previous measurement. Energy fluctuations along such dynamics satisfy generalized fluctuation relations, which are useful tools to study the thermodynamics of systems far from equilibrium. Due to the practical challenge of assessing energy fluctuations in the quantum scenario, the experimental verification of detailed fluctuation relations in the presence of feedback control remains elusive. We present a feasible method to experimentally verify detailed fluctuation relations for discrete feedback-controlled quantum dynamics. Two new detailed fluctuation relations are developed and employed. The method is based on a quantum interferometric strategy that allows for the verification of fluctuation relations in the presence of feedback control. An analytical example illustrating the applicability of the method is discussed. The comprehensive technique introduced here can be experimentally implemented at the microscale with current technology in a variety of experimental platforms.
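For context (this is the well-known Sagawa-Ueda integral relation, quoted here as a representative example rather than either of the two new detailed relations derived in the paper), feedback modifies the Jarzynski equality to

$$\left\langle e^{-\beta (W - \Delta F) - I} \right\rangle = 1,$$

where $I$ is the stochastic mutual information acquired by the measurement; by Jensen's inequality this implies $\langle W \rangle \geq \Delta F - k_B T \langle I \rangle$, i.e., feedback can extract work beyond the conventional second-law bound.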

Beni Yoshida Feb 13 2018 19:53 UTC

This is not a direct answer to your question, but may give some intuition to formulate the problem in a more precise language. (And I simplify the discussion drastically). Consider a static slice of an empty AdS space (just a hyperbolic space) and imagine an operator which creates a particle at some

...(continued)
Abhinav Deshpande Feb 10 2018 15:42 UTC

I see. Yes, the epsilon ball issue seems to be a thorny one in the prevalent definition, since the gate complexity to reach a target state from any of a fixed set of initial states depends on epsilon, and not in a very nice way (I imagine that it's all riddled with discontinuities). It would be inte

...(continued)
Elizabeth Crosson Feb 10 2018 05:49 UTC

Thanks for the correction Abhinav, indeed I meant that the complexity of |psi(t)> grows linearly with t.

Producing an arbitrary state |phi> exactly is also too demanding for the circuit model, by the well-known argument that given any finite set of gates, the set of states that can be reached i

...(continued)
Abhinav Deshpande Feb 09 2018 20:21 UTC

Elizabeth, interesting comment! Did you mean to say that the complexity of $U(t)$ increases linearly with $t$ as opposed to exponentially?

Also, I'm confused about your definition. First, let us assume that the initial state is well defined and is $|\psi(0)\rangle$.
If you define the complexit

...(continued)
Elizabeth Crosson Feb 08 2018 04:27 UTC

The complexity of a state depends on the dynamics that one is allowed to use to generate the state. If we restrict the dynamics to be "evolving according a specific Hamiltonian H" then we immediately have that the complexity of U(t) = exp(i H t) grows exponentially with t, up until recurrences that

...(continued)
Danial Dervovic Feb 05 2018 15:03 UTC

Thank you Māris for the extremely well thought-out and articulated points here.

I think this very clearly highlights the need to think explicitly about the precompute time if using the lifting to directly simulate the quantum walk, amongst other things.

I wish to give a well-considered respons

...(continued)
Michael A. Sherbon Feb 02 2018 15:56 UTC

Good general review on the Golden Ratio and Fibonacci ... in physics; more examples are provided in the paper “Fine-Structure Constant from Golden Ratio Geometry.” Specifically,

\alpha^{-1}\simeq\frac{360}{\phi^{2}}-\frac{2}{\phi^{3}}+\frac{\mathit{A^{2}}}{K\phi^{4}}-\frac{\mathit{A^{\math

...(continued)
Māris Ozols Feb 01 2018 17:53 UTC

This paper considers the problem of using "lifted" Markov chains to simulate the mixing of coined quantum walks. The Markov chain has to approximately (in the total variational distance) sample from the distribution obtained by running the quantum walk for a randomly chosen time $t \in [0,T]$ follow

...(continued)
Johnnie Gray Feb 01 2018 12:59 UTC

Thought I'd just comment here that we've rather significantly updated this paper.

wenling yang Jan 30 2018 19:08 UTC