Top arXiv papers

  • PDF
    We show a nearly quadratic separation between deterministic communication complexity and the logarithm of the partition number, which is essentially optimal. This improves upon a recent power 1.5 separation of Göös, Pitassi, and Watson (FOCS 2015). In query complexity, we establish a nearly quadratic separation between deterministic (and even randomized) query complexity and subcube partition complexity, which is also essentially optimal. We also establish a nearly power 1.5 separation between quantum query complexity and subcube partition complexity, the first superlinear separation between the two measures. Lastly, we show a quadratic separation between quantum query complexity and one-sided subcube partition complexity. Our query complexity separations use the recent cheat sheet framework of Aaronson, Ben-David, and the author. Our query functions are built up in stages by alternating function composition with the cheat sheet construction. The communication complexity separation follows from lifting the query separation to communication complexity.
  • PDF
    A Non-Abelian Thermal State (NATS), the thermal state of a system that exchanges heat and non-commuting charges with other systems, can be derived from the Principle of Maximum Entropy, even though the charges fail to commute with each other. To what extent this state has physical significance, and whether it aligns with other notions of the thermal state (such as complete passivity and equilibrium considerations), has been questioned. We show that the NATS is the thermal state, deriving its form via multiple strategies. We derive its form from the microcanonical state of the system-and-bath composite, by introducing the notion of an approximate microcanonical subspace. This gives plausibility to the notion that typical evolution laws will have the NATS as a local equilibrium point. We also show that the form of the NATS can be derived from a resource theory in which the state is completely passive. Finally, we prove a zeroth law and a family of second laws for thermodynamics with noncommuting conserved charges.
  • PDF
    Quantum computers are poised to radically outperform their classical counterparts by manipulating coherent quantum systems. A realistic quantum computer will experience errors due to the environment and imperfect control. When these errors are even partially coherent, they present a major obstacle to achieving robust computation. Here, we propose a method for introducing independent random single-qubit gates into the logical circuit in such a way that the effective logical circuit remains unchanged. We prove that this randomization tailors the noise into stochastic Pauli errors, leading to dramatic reductions in worst-case and cumulative error rates, while introducing little or no experimental overhead. Moreover, we prove that our technique is robust to variation in the errors over the gate sets, and we numerically illustrate the dramatic reductions in worst-case error that are achievable. Given such tailored noise, gates with significantly lower fidelity are sufficient to achieve fault-tolerant quantum computation, and, importantly, the worst-case error rate of the tailored noise can be directly and efficiently measured through randomized benchmarking experiments. Remarkably, our method enables the realization of fault-tolerant quantum computation under the error rates observed in recent experiments.
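The core idea can be illustrated on a single qubit. The sketch below is a minimal toy, not the authors' construction: it twirls a coherent over-rotation error over the Pauli group and checks that the averaged channel acts as a stochastic Pauli channel, exactly the "tailoring" described above.

```python
import numpy as np

# Single-qubit Pauli operators
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

def coherent_error(theta):
    """A coherent over-rotation about X: V = exp(-i theta X / 2)."""
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

def twirl(V, rho):
    """Average the error channel over random Pauli frames:
    rho -> (1/4) sum_P  P (V (P rho P) V†) P."""
    out = np.zeros_like(rho)
    for P in PAULIS:
        out += P @ (V @ (P @ rho @ P) @ V.conj().T) @ P
    return out / 4

theta = 0.3
V = coherent_error(theta)
rho = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])  # a test state

twirled = twirl(V, rho)
# The twirled channel is a stochastic Pauli channel: apply X with
# probability sin^2(theta/2), identity otherwise.
p = np.sin(theta / 2) ** 2
pauli_channel = (1 - p) * rho + p * (X @ rho @ X)
print(np.allclose(twirled, pauli_channel))  # → True
```

The coherent amplitude of the error is averaged away; only the diagonal (stochastic) part survives, which is why worst-case error rates drop.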
  • PDF
    We consider a generalisation of thermodynamics that deals with multiple conserved quantities at the level of individual quantum systems. Each conserved quantity, which, importantly, need not commute with the rest, can be extracted and stored in its own battery. Unlike in standard thermodynamics, where the second law constrains how much of the conserved quantity (energy) can be extracted, here, on the contrary, there is no limit on how much of any individual conserved quantity can be extracted. However, other conserved quantities must be supplied, and the second law constrains the combination of extractable quantities and the trade-offs between them which are allowed. We present explicit protocols which allow us to perform arbitrarily good trade-offs and extract arbitrarily good combinations of conserved quantities from individual quantum systems.
  • PDF
    We present an exact quantum algorithm for solving the Exact Satisfiability problem, which is known to belong to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments (if any solution exists) of the Exact Satisfiability problem, while the second part performs a quantum search in that restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solutions exist) or to count the total number of valid assignments. The worst-case query complexity is bounded by $O(\sqrt{2^{n-M^{\prime}}})$ and $O(2^{n-M^{\prime}})$, respectively, where $n$ is the number of variables and $M^{\prime}$ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. When compared to heuristic techniques, the proposed quantum algorithm is faster than the classical WalkSAT heuristic and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The quantum algorithm that we propose can be straightforwardly extended to the generalized version of the Exact Satisfiability known as Occupation problems. The general version of the algorithm is presented and analyzed.
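As a loose illustration of where the $2^{n-M'}$ factor comes from: the exactly-one-true constraints of Exact SAT have a linear mod-2 shadow, and the number $M'$ of linearly independent constraints shrinks the search space from $2^n$ to $2^{n-M'}$. The sketch below (a simplification of the paper's subspace characterization, with a made-up clause matrix) computes such a rank over GF(2).

```python
def gf2_rank(rows):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination,
    with each row packed into an integer bitmask."""
    rows = [int("".join(map(str, r)), 2) for r in rows]
    rank = 0
    while rows:
        pivot = max(rows)
        rows.remove(pivot)
        if pivot == 0:
            continue
        rank += 1
        msb = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> msb) & 1 else r for r in rows]
    return rank

# Toy instance: n = 5 variables, each row marks the variables
# appearing in one "exactly one of these is true" clause.
clauses = [
    [1, 1, 0, 0, 0],   # exactly one of x1, x2
    [0, 1, 1, 0, 0],   # exactly one of x2, x3
    [1, 0, 1, 0, 0],   # dependent mod 2 on the two rows above
    [0, 0, 0, 1, 1],   # exactly one of x4, x5
]
n = 5
M_prime = gf2_rank(clauses)
print(M_prime)  # → 3
# Grover-type search over the remaining subspace then costs
# O(sqrt(2**(n - M_prime))) queries, as in the bound above.
```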
  • PDF
    The theory of the inflationary multiverse changes the way we think about our place in the world. According to its most popular version, our world may consist of infinitely many exponentially large parts, exhibiting different sets of low-energy laws of physics. Since these parts are extremely large, the interior of each of them behaves as if it were a separate universe, practically unaffected by the rest of the world. This picture, combined with the theory of eternal inflation and anthropic considerations, may help to solve many difficult problems of modern physics, including the cosmological constant problem. In this article I will briefly describe this theory and provide links to some hard-to-find papers written during the first few years of the development of the inflationary multiverse scenario.
  • PDF
    The holographic principle has taught us that, as far as their entropy content is concerned, black holes in $(3+1)$-dimensional curved spacetimes behave as ordinary thermodynamic systems in flat $(2+1)$-dimensional spacetimes. In this essay we point out that the opposite behavior can also be observed in black-hole physics. To show this we study the quantum Hawking evaporation of near-extremal Reissner-Nordström black holes. We first point out that the black-hole radiation spectrum departs from the familiar radiation spectrum of genuine $(3+1)$-dimensional perfect black-body emitters. In particular, the would-be black-body thermal spectrum is distorted by the curvature potential which surrounds the black hole and effectively blocks the emission of low-energy quanta. Taking into account the energy-dependent gray-body factors which quantify the imprint of the passage of the emitted radiation quanta through the black-hole curvature potential, we reveal that the $(3+1)$-dimensional black holes effectively behave as perfect black-body emitters in a flat $(9+1)$-dimensional spacetime.
  • PDF
    We present a simple generalisation of the $\Lambda$CDM model which, on the one hand, reaches very good agreement with present-day experimental data and, on the other, provides an internal inflationary mechanism. It is based on Palatini modified gravity with a quadratic Starobinsky term and a generalized Chaplygin gas as a matter source, providing, besides the current accelerated expansion, an epoch of endogenous inflation driven by a type III freeze singularity. It follows from our statistical analysis that astronomical data favour a negative value of the parameter coupling the quadratic term to the Einstein-Hilbert Lagrangian and that, as a consequence, a bounce is preferred to an initial Big-Bang singularity.
  • PDF
    A tanglegram consists of two binary rooted trees with the same number of leaves and a perfect matching between the leaves of the trees. We show that the two halves of a random tanglegram essentially look like two independently chosen random plane binary trees. This fact is used to derive a number of results on the shape of random tanglegrams, including theorems on the number of cherries and, more generally, occurrences of subtrees, the root branches, the number of automorphisms, and the height. For each of these, we obtain limiting probabilities or distributions. Finally, we investigate the number of matched cherries, for which the limiting distribution is identified as well.
  • PDF
    Creativity, together with the bringing of ideas to fruition, is essential for progress. Today the evolution from an idea to its application can be facilitated by the implementation of Fabrication Laboratories, or FabLabs, which provide affordable digital tools for prototyping. FabLabs aimed at scientific research and invention are now starting to be established inside universities and research centers. We review the setting up of the ICTP Scientific FabLab in Trieste, Italy, and propose to replicate this class of multi-purpose workplaces within academia as a support for science, education and development world-wide.
  • PDF
    We investigate the dynamics of BPS vortices in the presence of magnetic impurities taking the form of axially-symmetric localised lumps and delta-functions. We present numerical results for vortices on flat space, as well as exact results for vortices on hyperbolic space in the presence of delta-function impurities. In fact, delta-function impurities of appropriate strength can be captured within the moduli space approximation by keeping one or more of the vortices fixed. We also show that previous work on vortices on the 2-sphere extends naturally to the inclusion of delta-function impurities.
  • PDF
    Dirac sought an interpretation of mathematical formalism in terms of physical entities and Einstein insisted that physics should describe "the real states of the real systems". While Bell inequalities put into question the reality of states, modern device-independent approaches do away with the idea of entities: physics is not built of physical systems. Focusing on the correlations between operationally defined inputs and outputs, device-independent methods promote a view more distant from conventional theory than Einstein's 'principle theories' were from 'constructive theories'. On the examples of indefinite causal orders and almost quantum correlations, we ask a puzzling question: if physical theory is not about systems, then what is it about? The answer given by the device-independent models is that physics is about languages. In moving away from the information-theoretic reconstructions of quantum theory, this answer marks a new conceptual development in the foundations of physics.
  • PDF
    Bundle gerbes are simple examples of higher geometric structures that show their utility in dealing with topological subtleties of physical theories. I review a recent construction of torsion topological invariants for condensed matter systems via equivariant bundle gerbes. The construction covers static and periodically driven systems with time reversal invariance in 2 and 3 space dimensions. It involves refinements of geometry of gerbes that are discussed in the first lecture, the second one being devoted to the applications to topological insulators.
  • PDF
    Single photon sources (SPSs) are a fundamental building block for optical implementations of quantum information protocols. Among SPSs, multiple-crystal heralded single photon sources seem to offer the best compromise between a high pair production rate and few multiple-photon events. In this work, we study their performance in a practical quantum key distribution experiment by evaluating the achievable key rates. The analysis focuses on the two different schemes, symmetric and asymmetric, proposed for the practical implementation of heralded single photon sources, with attention to the performance of their composing elements. The analysis is based on the protocol proposed by Bennett and Brassard in 1984 and on its improvement exploiting the decoy state technique. Finally, a simple way of exploiting the post-selection mechanism for a passive, one-decoy-state scheme is evaluated.
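For a sense of how such key rates are evaluated, here is a hedged sketch of a standard GLLP/decoy-style lower bound on the secret key rate (not necessarily the exact formula used in this work); all operating-point numbers are hypothetical.

```python
from math import log2

def h2(x):
    """Binary entropy function."""
    if x <= 0 or x >= 1:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def key_rate(Q_mu, E_mu, Q_1, e_1, q=0.5, f_ec=1.16):
    """GLLP-style lower bound on the secret key rate per signal,
    as used in decoy-state BB84 analyses:
        R >= q * ( -Q_mu * f_ec * h2(E_mu) + Q_1 * (1 - h2(e_1)) )
    Q_mu, E_mu : overall gain and QBER of the signal states;
    Q_1, e_1   : single-photon gain and error rate (bounded via decoys);
    q          : sifting factor; f_ec: error-correction inefficiency."""
    return q * (-Q_mu * f_ec * h2(E_mu) + Q_1 * (1 - h2(e_1)))

# Illustrative (hypothetical) operating point:
R = key_rate(Q_mu=1e-3, E_mu=0.02, Q_1=8e-4, e_1=0.03)
print(R > 0)  # positive key rate at low QBER → True
```

Raising the QBER toward ~11% drives the bound negative, which is the familiar cut-off for BB84-style protocols.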
  • PDF
    We propose Neural Enquirer, a neural network architecture that executes a SQL-like query on a knowledge-base (KB) for answers. Basically, Neural Enquirer finds the distributed representation of a query and then executes it on knowledge-base tables to obtain the answer as one of the values in the tables. Unlike similar efforts in end-to-end training of semantic parsers, Neural Enquirer is fully neuralized: it not only gives distributed representations of the query and the knowledge-base, but also realizes the execution of compositional queries as a series of differentiable operations, with intermediate results (consisting of annotations of the tables at different levels) saved on multiple layers of memory. Neural Enquirer can be trained with gradient descent, whereby not only the parameters of the controlling and semantic parsing components, but also the embeddings of the tables and query words, can be learned from scratch. The training can be done in an end-to-end fashion, but it can also take stronger guidance, e.g., step-by-step supervision for complicated queries, and benefit from it. Neural Enquirer is one step towards building neural network systems which seek to understand language by executing it on real-world data. Our experiments show that Neural Enquirer can learn to execute fairly complicated queries on tables with rich structures.
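A toy picture of "differentiable execution" (my illustration, not the paper's architecture): discrete row selection is replaced by soft attention over embedded keys, so the returned value is differentiable in the query and trainable by gradient descent.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Soft row selection over an embedded table instead of a discrete
# lookup. Embeddings are illustrative (near-orthogonal keys plus
# noise), not the paper's learned representations.
rng = np.random.default_rng(0)
keys = np.eye(4) + 0.1 * rng.normal(size=(4, 4))  # embedded row keys
values = np.array([3.0, 1.0, 4.0, 1.5])           # queried column's cells

query = 10.0 * keys[2]              # a query aimed at row 2 (sharpened)
attention = softmax(keys @ query)   # soft, differentiable row choice
answer = attention @ values         # weighted "answer" value
print(attention.argmax())           # row receiving the highest weight
```

Stacking several such attention steps, with the intermediate weights kept in memory, is the flavor of compositional execution the abstract describes.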
  • PDF
    Most human behaviors consist of multiple parts, steps, or subtasks. These structures guide our action planning and execution, but when we observe others, the latent structure of their actions is typically unobservable, and must be inferred in order to learn new skills by demonstration, or to assist others in completing their tasks. For example, an assistant who has learned the subgoal structure of a colleague's task can more rapidly recognize and support their actions as they unfold. Here we model how humans infer subgoals from observations of complex action sequences using a nonparametric Bayesian model, which assumes that observed actions are generated by approximately rational planning over unknown subgoal sequences. We test this model with a behavioral experiment in which humans observed different series of goal-directed actions, and inferred both the number and composition of the subgoal sequences associated with each goal. The Bayesian model predicts human subgoal inferences with high accuracy, and significantly better than several alternative models and straightforward heuristics. Motivated by this result, we simulate how learning and inference of subgoals can improve performance in an artificial user assistance task. The Bayesian model learns the correct subgoals from fewer observations, and better assists users by more rapidly and accurately inferring the goal of their actions than alternative approaches.
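A drastically simplified stand-in for this kind of inference (my toy, not the paper's nonparametric model) can still show the key effect: among all segmentations of an observed trajectory, the one that places a subgoal switch at the turning point wins the posterior. The noisy-rational likelihood and geometric prior below are illustrative choices.

```python
from itertools import combinations

# Observed 1-D trajectory: the agent walks out, then back.
positions = [0, 1, 2, 3, 2, 1, 0]
p, gamma = 0.9, 0.3   # rationality of steps, penalty per extra subgoal

def score(breaks):
    """Unnormalized posterior of a segmentation (given by interior
    break indices); each segment's subgoal is its last position."""
    bounds = [0] + list(breaks) + [len(positions) - 1]
    like = 1.0
    for a, b in zip(bounds, bounds[1:]):
        goal = positions[b]
        for t in range(a, b):
            toward = abs(positions[t + 1] - goal) < abs(positions[t] - goal)
            like *= p if toward else 1 - p   # noisy-rational step model
    return like * gamma ** (len(bounds) - 1)  # geometric prior on #subgoals

interior = range(1, len(positions) - 1)
segmentations = [bs for k in range(len(positions) - 1)
                 for bs in combinations(interior, k)]
best = max(segmentations, key=score)
print(best)  # → (3,): a single subgoal switch, at the turning point
```

Every step of the best segmentation is "rational" toward its segment's subgoal, while the one-subgoal explanation must label half the steps as noise.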
  • PDF
    Probabilistic numerical methods aim to model numerical error as a source of epistemic uncertainty that is subject to probabilistic analysis and reasoning, enabling the principled propagation of numerical uncertainty through a computational pipeline. In this paper we focus on numerical methods for integration. We present probabilistic (Bayesian) versions of both Markov chain and Quasi Monte Carlo methods for integration and provide rigorous theoretical guarantees for convergence rates, in both posterior mean and posterior contraction. The performance of probabilistic integrators is guaranteed to be no worse than non-probabilistic integrators and is, in many cases, asymptotically superior. These probabilistic integrators therefore enjoy the "best of both worlds", leveraging the sampling efficiency of advanced Monte Carlo methods whilst being equipped with valid probabilistic models for uncertainty quantification. Several applications and illustrations are provided, including examples from computer vision and system modelling using non-linear differential equations. A survey of open challenges in probabilistic integration is provided.
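One building block, Bayesian quadrature, can be sketched in a few lines: a GP prior on the integrand turns the integral into a Gaussian random variable whose posterior mean is a weighted sum of function evaluations. The kernel, lengthscale, and node placement below are illustrative choices, not the paper's.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(x1, x2, ell):
    """RBF (Gaussian) kernel matrix."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

def kernel_mean(x, ell, a=0.0, b=1.0):
    """z_i = ∫_a^b k(t, x_i) dt for the RBF kernel (closed form)."""
    c = sqrt(2) * ell
    return np.array([ell * sqrt(pi / 2) *
                     (erf((b - xi) / c) - erf((a - xi) / c)) for xi in x])

f = lambda x: x ** 2                 # integrand; true integral on [0,1] is 1/3
nodes = np.linspace(0, 1, 20)
ell, jitter = 0.1, 1e-8              # lengthscale; jitter for conditioning

K = rbf(nodes, nodes, ell) + jitter * np.eye(len(nodes))
z = kernel_mean(nodes, ell)
posterior_mean = z @ np.linalg.solve(K, f(nodes))   # BQ estimate of ∫f
print(posterior_mean)
```

The same linear solve also yields a posterior variance, which is the "valid probabilistic model for uncertainty quantification" the abstract refers to.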
  • PDF
    It is a well established and understood fact that photons propagating in free space interact with the gravitational field, leading to well-known effects such as gravitational redshift or gravitational lensing. While these phenomena might give an impression that photons in free space have a sort of mass, this impression falls short upon considering their dispersion relation. In this letter we show that, unlike in free space, when photons are brought to a stop within a planar cavity they acquire a mass that cannot be distinguished from that of a solid-state body freely moving in a two-dimensional space, from both the inertial and the gravitational point of view.
  • PDF
    The purpose of this paper is to propose a continuum micromechanics model for the simulation of uniaxial compressive and tensile tests on lime-based mortars, in order to predict their stiffness, compressive and tensile strengths, and tensile fracture energy. In tension, we adopt an incremental strain-controlled form of the Mori-Tanaka scheme with a damageable matrix phase, while a simple $J_2$ yield criterion is employed in compression. To reproduce the behavior of lime-based mortars correctly, the scheme must take into account shrinkage cracking among aggregates. This phenomenon is introduced into the model via penny-shaped cracks, whose density is estimated on the basis of a particle size distribution combined with the results of finite element analyses of a single crack formation between two spherical inclusions. Our predictions show a good agreement with experimental data and explain the advantages of compliant crushed brick fragments, often encountered in ancient mortars, over stiff sand particles. The validated model provides a reliable tool for optimizing the composition of modern lime-based mortars with applications in conservation and restoration of architectural heritage.
  • PDF
    In this article we explore a certain definition of "alternate quantization" for the critical O(N) model. We elaborate on a prescription to evaluate the Renyi entropy of the alternately quantized critical O(N) model. We show that there exist new saddles of the q-Renyi free energy functional corresponding to putting certain combinations of the Kaluza-Klein modes into alternate quantization. This leads us to an analysis aimed at determining the true state of the theory by ascertaining the global minimum among these saddle points.
  • PDF
    We call a group FJ if it satisfies the $K$- and $L$-theoretic Farrell-Jones conjecture with coefficients in $\mathbb Z$. We show that if $G$ is FJ, then the simple Borel conjecture (in dimensions $\ge 5$) holds for every group of the form $G\rtimes\mathbb Z$. If in addition $Wh(G\times \mathbb Z)=0$, which is true for all known torsion free FJ groups, then the bordism Borel conjecture (in dimensions $n\ge 5$) holds for $G\rtimes\mathbb Z$. We also show that if the $L$-theoretic Farrell-Jones conjecture with coefficients in $\mathbb Z$ holds for a torsion free group $G$, then the Novikov conjecture holds for any repeated semi-direct product $\big(((G\rtimes\mathbb Z)\rtimes\mathbb Z)\cdots\big)\rtimes\mathbb Z$. One of the key ingredients in proving these rigidity results is another main result, which says that if a torsion free group $G$ satisfies the $L$-theoretic Farrell-Jones conjecture with coefficients in $\mathbb Z$, then any semi-direct product $G\rtimes\mathbb Z$ also satisfies the $L$-theoretic Farrell-Jones conjecture with coefficients in $\mathbb Z$. We also obtain an obstruction for the corresponding statement to hold for the $K$-theoretic Farrell-Jones conjecture.
  • PDF
    A search for narrow resonances in proton-proton collisions at sqrt(s) = 13 TeV is presented. The invariant mass distribution of the two leading jets is measured with the CMS detector using a data set corresponding to an integrated luminosity of 2.4 inverse femtobarns. The highest observed dijet mass is 6.1 TeV. The distribution is smooth and no evidence for resonant particles is observed. Upper limits at 95% confidence level are set on the production cross section for narrow resonances with masses above 1.5 TeV. When interpreted in the context of specific models, the limits exclude string resonances with masses below 7.0 TeV, scalar diquarks below 6.0 TeV, axigluons and colorons below 5.1 TeV, excited quarks below 5.0 TeV, color-octet scalars below 3.1 TeV, and W' bosons below 2.6 TeV. These results significantly extend previously published limits.
  • PDF
    The peanosphere construction of Duplantier, Miller, and Sheffield provides a means of representing a $\gamma$-Liouville quantum gravity (LQG) surface, $\gamma \in (0,2)$, decorated with a space-filling form of Schramm's SLE$_\kappa$, $\kappa = 16/\gamma^2 \in (4,\infty)$, denoted $\eta$, as a gluing of a pair of trees which are encoded by a correlated two-dimensional Brownian motion $Z$. We prove a KPZ-type formula which relates the Hausdorff dimension of any Borel subset $A$ of the range of $\eta$ which can be defined as a function of $\eta$ (modulo time parameterization) to the Hausdorff dimension of the corresponding time set $\eta^{-1}(A)$. This result serves to reduce the problem of computing the Hausdorff dimension of any set associated with an SLE, CLE, or related processes in the interior of a domain to the problem of computing the Hausdorff dimension of a certain set associated with a Brownian motion. For many natural examples, the associated Brownian motion set is well-known. As corollaries, we obtain new proofs of the Hausdorff dimensions of the SLE$_\kappa$ curve for $\kappa \not=4$; the double points and cut points of SLE$_\kappa$ for $\kappa >4$; and the intersection of two flow lines of a Gaussian free field. We also obtain the Hausdorff dimension of the set of $m$-tuple points of space-filling SLE$_\kappa$ for $\kappa>4$ and $m \geq 3$ by computing the Hausdorff dimension of the so-called $(m-2)$-tuple $\pi/2$-cone times of a correlated planar Brownian motion.
  • PDF
    The discovery of the Higgs boson by the LHC and the measurement of its mass at around 125 GeV, taken together with the absence of signals of physics beyond the standard model, make it possible that we might live in a metastable electroweak vacuum. Intriguingly, we seem to be very close to the boundary of stability and this near-criticality makes our vacuum extremely long-lived. In this talk I describe the state-of-the-art calculation leading to these results, explaining what are the ingredients and assumptions that enter in it, with special emphasis on the role of the top mass. I also discuss possible implications of this metastability for physics beyond the standard model and comment on the possible impact of physics at the Planck scale on near-criticality.
  • PDF
    For current fluctuations in non-equilibrium steady states of Markovian processes, we derive four different universal bounds valid beyond the Gaussian regime. Different variants of these bounds apply to either the entropy change or any individual current, e.g., the rate of substrate consumption in a chemical reaction or the electron current in an electronic device. The bounds vary with respect to their degree of universality and tightness. A universal parabolic bound on the generating function of an arbitrary current depends solely on the average entropy production. A second, stronger bound requires knowledge both of the thermodynamic forces that drive the system and of the topology of the network of states. These two bounds are conjectures based on extensive numerics. An exponential bound that depends only on the average entropy production and the average number of transitions per time is rigorously proved. This bound has no obvious relation to the parabolic bound but it is typically tighter further away from equilibrium. An asymptotic bound that depends on the specific transition rates and becomes tight for large fluctuations is also derived. This bound allows for the prediction of the asymptotic growth of the generating function. Even though our results are restricted to networks with a finite number of states, we show that the parabolic bound is also valid for three paradigmatic examples of driven diffusive systems for which the generating function can be calculated using the additivity principle. Our bounds provide a new general class of constraints for nonequilibrium systems.
  • PDF
    Near a black hole, differential rotation of a magnetized accretion disk is thought to produce an instability that amplifies weak magnetic fields, driving accretion and outflow. These magnetic fields would naturally give rise to the observed synchrotron emission in galaxy cores and to the formation of relativistic jets, but no observations to date have been able to resolve the expected horizon-scale magnetic-field structure. We report interferometric observations at 1.3-millimeter wavelength that spatially resolve the linearly polarized emission from the Galactic Center supermassive black hole, Sagittarius A*. We have found evidence for partially ordered fields near the event horizon, on scales of ~6 Schwarzschild radii, and we have detected and localized the intra-hour variability associated with these fields.
  • PDF
    Removing strong outbursts from multiwavelength light curves of the blazar Mrk 421, we construct outburstless time series for this system. A model-independent power spectrum light curve analysis in the optical, hard X-ray and gamma-rays of this outburstless state shows clear evidence for a periodicity of ≈400 days. A subsequent full maximum likelihood analysis fitting an eclipse model confirms a periodicity of 387.16 days. The power spectrum of the signal in the outburstless state of the source does not follow a flicker noise behaviour, and so the system producing it is not self-organised. This means that the periodicity is not produced by any internal physical processes associated with the central engine. The simplest physical mechanism to which this periodicity could be ascribed is a dynamical effect produced by an orbiting supermassive black hole companion of mass ∼10^7 M_⊙ eclipsing the central black hole, which has a mass ∼10^8 M_⊙. The optimal model restricts the physics of the eclipsing binary black hole candidate system to have an eclipse fraction of 0.36, occurring over approximately 30% of the orbital period.
  • PDF
    This paper proposes a novel algorithm to optimally size and place storage in low voltage (LV) networks, based on a linearized multiperiod optimal power flow method which we call forward backward sweep optimal power flow (FBS-OPF). We show that this method has good convergence properties, that its solution deviates only slightly from the optimum, and that it makes the storage sizing and placement problem tractable for longer investment horizons. We demonstrate the usefulness of our method by assessing the economic viability of distributed and centralized storage in LV grids with a high photovoltaic (PV) penetration. As a main result, we find that, for the CIGRE LV test grid, distributed storage configurations are preferable, since they allow for less PV curtailment due to grid constraints.
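The forward-backward sweep underlying FBS-OPF is the classical load-flow iteration for radial feeders: a backward sweep accumulates branch currents from the loads, a forward sweep updates voltages from the slack bus. A minimal sketch on a hypothetical 3-bus feeder (illustrative per-unit values, not the paper's test grid):

```python
import numpy as np

z = [0.02 + 0.01j, 0.02 + 0.01j]   # branch impedances: bus 0-1, bus 1-2
S = [0.10 + 0.02j, 0.15 + 0.03j]   # complex power demands at buses 1, 2
V_slack = 1.0 + 0.0j

V = np.full(3, V_slack)            # flat start
for _ in range(50):
    # Backward sweep: branch currents accumulated from the feeder end.
    I_load = np.conj(np.array(S) / V[1:])
    I_branch = np.array([I_load[0] + I_load[1],  # branch 0-1 carries both
                         I_load[1]])             # branch 1-2
    # Forward sweep: voltages updated from the slack bus outward.
    V_new = V.copy()
    V_new[1] = V_slack - z[0] * I_branch[0]
    V_new[2] = V_new[1] - z[1] * I_branch[1]
    converged = np.max(np.abs(V_new - V)) < 1e-10
    V = V_new
    if converged:
        break

print(np.round(np.abs(V), 4))  # voltage magnitudes drop along the feeder
```

Each sweep pair is cheap and the fixed point is the load-flow solution, which is what makes a linearization of this iteration attractive inside a multiperiod optimization.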
  • PDF
    We perform global linear stability analysis and idealized numerical simulations in global thermal balance to understand the condensation of cold gas from hot/virial atmospheres (coronae), in particular the intracluster medium (ICM). We pay particular attention to geometry (e.g., spherical versus plane-parallel) and the nature of the gravitational potential. Global linear analysis gives a similar value for the fastest growing thermal instability modes in spherical and Cartesian geometries. Simulations and observations suggest that cooling in halos critically depends on the ratio of the cooling time to the free-fall time ($t_{cool}/t_{ff}$). Extended cold gas condenses out of the ICM only if this ratio is smaller than a threshold value close to 10. Previous works highlighted the difference between the nature of cold gas condensation in spherical and plane-parallel atmospheres; namely, cold gas condensation appeared easier in spherical atmospheres. This apparent difference due to geometry arises because the previous plane-parallel simulations focussed on in situ condensation of multiphase gas but spherical simulations studied condensation anywhere in the box. Contrary to previous claims, our nonlinear simulations show that there are only minor differences in cold gas condensation, either in situ or anywhere, for different geometries. The amount of cold gas condensing depends on the shape of the gravitational potential well; gas has more time to condense if gravitational acceleration decreases toward the center. In our idealized simulations with heating balancing cooling in each layer, there can be significant mass/energy/momentum transfer across layers that can trigger condensation and drive $t_{cool}/t_{ff}$ far beyond the critical value close to 10. Triggered condensation is very prominent in plane-parallel simulations, in which a large amount of cold gas condenses out.
  • PDF
    Measurements are reported of high frequency cross-spectra of signals from the Fermilab Holometer, a pair of co-located 39 m, high power Michelson interferometers. The instrument obtains differential position sensitivity to cross-correlated signals far exceeding any previous measurement in a broad frequency band extending to the 3.8 MHz inverse light crossing time of the apparatus. A model of universal exotic spatial shear correlations that matches the Planck scale holographic information bound of space-time position states is excluded to $4.6\sigma$ significance.
  • PDF
    In this paper we present a general convex optimization approach for solving high-dimensional tensor regression problems under low-dimensional structural assumptions. We consider using convex and weakly decomposable regularizers, assuming that the underlying tensor lies in an unknown low-dimensional subspace. Within our framework, we derive general risk bounds of the resulting estimate under fairly general dependence structure among covariates. Our framework leads to upper bounds in terms of two very simple quantities: the Gaussian width of a convex set in tensor space, and the intrinsic dimension of the low-dimensional tensor subspace. These general bounds provide useful upper bounds on rates of convergence for a number of fundamental statistical models of interest, including multi-response regression, vector auto-regressive models, low-rank tensor models and pairwise interaction models. Moreover, in many of these settings we prove that the resulting estimates are minimax optimal.
  • PDF
    We present a recalibration of the Sloan Digital Sky Survey (SDSS) photometry with new flat fields and zero points derived from Pan-STARRS1 (PS1). Using PSF photometry of 60 million stars with $16 < r < 20$, we derive a model of amplifier gain and flat-field corrections with per-run RMS residuals of 3 millimagnitudes (mmag) in $griz$ bands and 15 mmag in $u$ band. The new photometric zero points are adjusted to leave the median in the Galactic North unchanged for compatibility with previous SDSS work. We also identify transient non-photometric periods in SDSS ("contrails") based on photometric deviations co-temporal in SDSS bands. The recalibrated stellar PSF photometry of SDSS and PS1 has an RMS difference of 9, 7, 7, and 8 mmag in $griz$, respectively, when averaged over $15'$ regions.
  • PDF
    Attractor neural networks are an important theoretical framework for modeling memory function in the hippocampus and the cortex. In these models, memories are stored in the plastic recurrent connections of neural populations in the form of "attractor states". The maximal information capacity for conventional abstract attractor networks with unconstrained connections is 2 bits/synapse. However, an unconstrained synapse can store an infinite number of bits in a noiseless theoretical scenario: a capacity that conventional attractor networks cannot achieve. Here, I propose a hierarchical attractor network that can achieve an ultra-high information capacity. The network has two layers: a visible layer with $N_v$ neurons and a hidden layer with $N_h$ neurons. The visible-to-hidden connections are set at random and kept fixed during the training phase, in which the memory patterns are stored as fixed points of the network dynamics. The hidden-to-visible connections, initially normally distributed, are learned via a local, online learning rule called the Three-Threshold Learning Rule; there are no within-layer connections. Simulations suggest that the maximal information capacity grows exponentially with the expansion ratio $N_h/N_v$. As a first-order approximation to understand the mechanism providing the high capacity, I simulated a naive mean-field approximation (nMFA) of the network. The exponential increase was captured by the nMFA, revealing that a key underlying factor is the correlation between the hidden and the visible units. Additionally, at maximal capacity, the degree of symmetry of the connectivity between the hidden and the visible neurons increases with the expansion ratio. These results highlight the role of hierarchical architecture in remarkably increasing the performance of information storage in attractor networks.
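    The two-layer architecture can be sketched in a few lines: a fixed random visible-to-hidden projection, learnable hidden-to-visible weights, and patterns stored as fixed points of the round-trip dynamics. This is a minimal stdlib-only sketch; the plain perceptron rule below is a simple stand-in for the paper's Three-Threshold Learning Rule, and the layer sizes and learning rate are illustrative.

```python
import random

random.seed(0)
N_v, N_h, P = 16, 64, 3    # visible size, hidden size, number of stored patterns

sign = lambda x: 1 if x >= 0 else -1

# Fixed random visible-to-hidden connections (never trained)
W_vh = [[random.choice((-1, 1)) for _ in range(N_v)] for _ in range(N_h)]
# Learnable hidden-to-visible connections, initially normally distributed
W_hv = [[random.gauss(0, 0.1) for _ in range(N_h)] for _ in range(N_v)]

patterns = [[random.choice((-1, 1)) for _ in range(N_v)] for _ in range(P)]

def hidden(v):
    return [sign(sum(w * x for w, x in zip(row, v))) for row in W_vh]

def visible(h):
    return [sign(sum(w * x for w, x in zip(row, h))) for row in W_hv]

# Train W_hv so every pattern becomes a fixed point of v -> hidden -> visible.
for _ in range(50):
    for p in patterns:
        h = hidden(p)
        for i in range(N_v):
            if sign(sum(w * x for w, x in zip(W_hv[i], h))) != p[i]:
                for j in range(N_h):
                    W_hv[i][j] += 0.1 * p[i] * h[j]

# All stored patterns are now fixed points of the two-layer dynamics.
assert all(visible(hidden(p)) == p for p in patterns)
```

Because each visible neuron only has to classify the high-dimensional hidden representations, the random expansion ($N_h > N_v$) is what makes the stored patterns easy to pin down as fixed points.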
  • PDF
    Spatially resolved studies of high redshift galaxies, an essential insight into galaxy formation processes, have been mostly limited to stacking or unusually bright objects. We present here the study of a typical (L$^{*}$, M$_\star$ = 6 $\times 10^9$ $M_\odot$) young lensed galaxy at $z=3.5$, observed with MUSE, for which we obtain 2D resolved spatial information of Ly$\alpha$ and, for the first time, of CIII] emission. The exceptional signal-to-noise of the data reveals UV emission and absorption lines rarely seen at these redshifts, allowing us to derive important physical properties (T$_e\sim$15600 K, n$_e\sim$300 cm$^{-3}$, covering fraction f$_c\sim0.4$) using multiple diagnostics. Inferred stellar and gas-phase metallicities point towards a low metallicity object (Z$_{\mathrm{stellar}} \sim 0.07$ Z$_\odot$ and Z$_{\mathrm{ISM}}$ $<$ 0.16 Z$_\odot$). The Ly$\alpha$ emission extends over $\sim$10 kpc across the galaxy and presents a very uniform spectral profile, showing only a small velocity shift which is unrelated to the intrinsic kinematics of the nebular emission. The Ly$\alpha$ extension is $\sim$4 times larger than the continuum emission, and makes this object comparable to low-mass LAEs at low redshift, and more compact than the Lyman-break galaxies and Ly$\alpha$ emitters usually studied at high redshift. We model the Ly$\alpha$ line and surface brightness profile using a radiative transfer code in an expanding gas shell, finding that this model provides a good description of both observables.
  • PDF
    Several characterizations of umbilic points of submanifolds in arbitrary Riemannian and Lorentzian manifolds are given. As a consequence, we obtain new characterizations of spheres in the Euclidean space and of hyperbolic spaces in the Lorentz-Minkowski space.
  • PDF
    We investigate the effect of a strong magnetic field on a three-dimensional smectic A liquid crystal. We identify a critical field above which the uniform layered state loses stability; this is associated with the onset of layer undulations. In a previous work, García-Cervera and Joo considered the two-dimensional case and analyzed the transition to the undulated state via a simple bifurcation. In dimension n=3 the situation is more delicate because the first eigenvalue of the corresponding linearized problem is not simple. We overcome the difficulties inherent in this higher-dimensional setting by identifying the irreducible representations for natural actions on the functional that take into account the invariances of the problem, thus reducing the bifurcation analysis to a subspace with symmetries. We are able to describe at least two bifurcation branches, one of which is stable, highlighting the richer landscape of energy-critical states in the three-dimensional setting. Finally, we analyze a reduced two-dimensional problem, assuming the magnetic field is very strong, and relate it to a previously studied model in micromagnetics, from which we deduce the periodicity of minimizers.
  • PDF
    We investigate sections of arithmetic fundamental groups of hyperbolic curves over function fields. As a consequence we prove that the anabelian section conjecture of Grothendieck holds over all finitely generated fields over $\Bbb Q$ if it holds over all number fields, under the condition of finiteness (of the $\ell$-primary parts) of certain Shafarevich-Tate groups. We also prove that if the section conjecture holds over all number fields then it holds over all finitely generated fields for curves which are defined over a number field.
  • PDF
    In this short note, by combining the work of Amiot-Iyama-Reiten and Thanhoffer de Volcsey-Van den Bergh on Cohen-Macaulay modules with the previous work of the author on orbit categories, we compute the (nonconnective) algebraic K-theory with coefficients of cyclic quotient singularities.
  • PDF
    The Third Reference Catalogue of Bright Galaxies (RC3) is a reasonably complete listing of 23,011 nearby, large, bright galaxies. Using the final imaging data release from the Sloan Digital Sky Survey, we generate scientifically calibrated FITS mosaics with the Montage program for all SDSS imaging bands for all RC3 galaxies that lie within the survey footprint. We further combine the SDSS g, r, and i band FITS mosaics for these galaxies to create color-composite images with the STIFF program. We generalized this software framework to make FITS mosaics and color-composite images for an arbitrary catalog and imaging data set. Due to positional inaccuracies inherent in the RC3 catalog, we employ a recursive algorithm in our mosaicking pipeline that first determines the correct location for each galaxy and subsequently applies the mosaicking procedure. As an additional test of this new software pipeline, and to obtain mosaic images of a larger sample of RC3 galaxies, we also applied the pipeline to photographic data taken by the Second Palomar Observatory Sky Survey with $B_J$, $R_F$, and $I_N$ plates. We publicly release all generated data, accessible via a web search form, as well as the software pipeline, to enable others to make galaxy mosaics from other catalogs or surveys.
  • PDF
    For the quadratic Poincaré gauge theory of gravity (PG) we consider the FLRW cosmologies using an isotropic Bianchi representation. The cosmologies considered here are fully general: all the even- and odd-parity terms of the quadratic PG with their respective scalar and pseudoscalar parameters are allowed, with no a priori restrictions on their values. With the aid of a manifestly homogeneous and isotropic representation, an effective Lagrangian gives the second-order dynamical equations for the gauge potentials. An equivalent set of first-order equations for the observables is presented. The generic behavior of physical solutions is discussed and illustrated using numerical simulations.
  • PDF
    In this paper we prove a refined version of Uchida's theorem on isomorphisms between absolute Galois groups of global fields in positive characteristics, where one "ignores" the information provided by a "small" set of primes.
  • PDF
    Electron shelving gives rise to bright and dark periods in the resonance fluorescence of a three-level atom. The corresponding incoherent spectrum contains a very narrow inelastic peak on top of a two-level-like spectrum. Using the theories of balanced and conditional homodyne detection we study ensemble-averaged phase-dependent fluctuations of intermittent resonance fluorescence. The sharp peak is found only in the spectra of the squeezed quadrature. In balanced homodyne detection that peak is positive, which greatly reduces the squeezing, as also seen in its variance. In conditional homodyne detection (CHD), for weak to moderate laser intensity, the peak is negative, enhancing the squeezing; for strong fields the sidebands become dispersive and, together with the positive sharp peak, dominate the spectrum. The latter effect is due to non-negligible third-order fluctuations produced by the atom-laser nonlinearity and the increased steady-state population of the shelving state. A simple mathematical approach allows us to obtain accurate analytical results.
  • PDF
    We reanalyze the dataset collected during 1998-2003 by the low-energy-threshold (10 GeV) neutrino telescope NT200 in Lake Baikal, searching for a neutrino signal from dark matter annihilations near the center of the Milky Way. Two different approaches are used in the present analysis: counting events in cones around the direction towards the Galactic Center, and the maximum likelihood method. We assume that the dark matter particles annihilate dominantly through one of the annihilation channels $b\bar{b}$, $W^+W^-$, $\tau^+\tau^-$, $\mu^+\mu^-$ or $\nu\bar{\nu}$. No significant excess of events towards the Galactic Center over the expected background of atmospheric origin is found, and we derive 90% CL upper limits on the annihilation cross section of dark matter.
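    The cone-counting approach reduces to a Poisson counting experiment: given the observed events in a cone and the expected atmospheric background, one derives a 90% CL upper limit on the signal, which then translates into a limit on the annihilation cross section. A minimal stdlib-only sketch of the classical Poisson upper limit (the event counts below are hypothetical, and the paper's full analysis also uses a maximum-likelihood method):

```python
from math import exp, factorial

def pois_cdf(n, mu):
    # P(N <= n) for a Poisson variable with mean mu
    return sum(mu ** k * exp(-mu) / factorial(k) for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.90, step=1e-3):
    # Smallest signal s with P(N <= n_obs | bkg + s) <= 1 - cl
    s = 0.0
    while pois_cdf(n_obs, bkg + s) > 1 - cl:
        s += step
    return s

# Hypothetical cone: 5 observed events on an expected background of 5
limit = upper_limit(5, 5.0)   # ~4.3 signal events at 90% CL
```

For zero observed events and zero background this reproduces the familiar 90% CL limit of about 2.3 events.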
  • PDF
    The structural human connectome (i.e.\ the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to $\sim 10^6$ nodes and $\sim 10^8$ edges. The model that in general describes the observed degrees best is a three-parameter generalized Weibull (also known as a stretched exponential) distribution. Thus the degree distribution is heavy-tailed, but not scale-free. We also calculate the topological (graph) dimension $D$ and the small-world coefficient $\sigma$ of these networks. While $\sigma$ suggests a small-world topology, we find that $D < 4$, showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
  • PDF
    Recent SDN-based solutions give cloud providers the opportunity to extend their "as-a-service" model with the offer of complete network virtualization. They provide tenants with the freedom to specify the network topologies and addressing schemes of their choosing, while guaranteeing the required level of isolation among them. These platforms, however, have been targeting the datacenter of a single cloud provider with full control over the infrastructure. This paper extends this concept further by supporting the creation of virtual networks that span across several datacenters, which may belong to distinct cloud providers, while including private facilities owned by the tenant. In order to achieve this, we introduce a new network layer above the existing cloud hypervisors, affording the necessary level of control over the communications while hiding the heterogeneity of the clouds. The benefits of this approach are various, such as enabling finer decisions on where to place the virtual machines (e.g., to fulfill legal requirements), avoiding single points of failure, and potentially decreasing costs. Although our focus in the paper is on architecture design, we also present experimental results of a first prototype of the proposed solution.
  • PDF
    In this paper, we propose a reachable-set-based collision avoidance algorithm for unmanned aerial vehicles (UAVs). UAVs have been deployed for agricultural research and management, surveillance, sensor coverage for threat detection, and disaster search-and-rescue operations. It is essential for the aircraft to have on-board collision avoidance capability to guarantee safety. Instead of the traditional approach of collision avoidance between trajectories, we propose a collision avoidance scheme based on reachable sets and tubes. We then formulate the problem as a convex optimization problem seeking suitable control constraint sets for the participating aircraft. We apply the approach to a case study of a two-quadrotor collision avoidance scenario.
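    The core idea can be sketched in one dimension: each vehicle's reachable positions under a bounded control form a tube, and collision avoidance amounts to choosing control constraint sets whose tubes stay disjoint over the horizon. A minimal sketch under a toy double-integrator model with zero initial velocity (the model, numbers, and function names are illustrative, not the paper's formulation):

```python
def reachable_interval(x0, u_max, T):
    # Positions reachable from x0 within time T under |u| <= u_max
    # (double integrator, zero initial velocity): half-width u_max*T^2/2.
    return (x0 - 0.5 * u_max * T * T, x0 + 0.5 * u_max * T * T)

def tubes_disjoint(x1, u1, x2, u2, T):
    lo1, hi1 = reachable_interval(x1, u1, T)
    lo2, hi2 = reachable_interval(x2, u2, T)
    return hi1 < lo2 or hi2 < lo1

# With full control authority the reachable tubes overlap...
assert not tubes_disjoint(0.0, 2.0, 10.0, 2.0, T=3.0)
# ...but shrinking each vehicle's control constraint set restores separation.
assert tubes_disjoint(0.0, 1.0, 10.0, 1.0, T=3.0)
```

Finding the least-restrictive such control sets is what the paper casts as a convex optimization problem.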
  • PDF
    When the Euler equations for shallow water are taken to the next order, beyond KdV, $\eta^2$ is no longer an invariant. (It would seem that $\eta$ is the only one.) However, two adiabatic invariants akin to $\eta^2$ can be found. Here we present and test them. When the KdV expansion parameters are zero, $\eta^2$ is recovered from both adiabatic invariants.
  • PDF
    Nonlocal gravity is the recent classical nonlocal generalization of Einstein's theory of gravitation in which the past history of the gravitational field is taken into account. In this theory, nonlocality appears to simulate dark matter. The virial theorem for the Newtonian regime of nonlocal gravity theory is derived and its consequences for "isolated" astronomical systems in virial equilibrium at the present epoch are investigated. In particular, for a sufficiently isolated nearby galaxy in virial equilibrium, the galaxy's baryonic diameter---namely, the diameter of the smallest sphere that completely surrounds the baryonic system at the present time---is predicted to be larger than the effective dark matter fraction times a universal length that is the basic nonlocality length scale of about 3 kpc.
  • PDF
    Recent works on zero-shot learning make use of side information such as visual attributes or natural language semantics to define the relations between output visual classes and then use these relationships to draw inference on new unseen classes at test time. In a novel extension to this idea, we propose the use of visual prototypical concepts as side information. For most real-world visual object categories, it may be difficult to establish a unique prototype. However, in cases such as traffic signs, brand logos, flags, and even natural language characters, these prototypical templates are available and can be leveraged for an improved recognition performance. The present work proposes a way to incorporate this prototypical information in a deep learning framework. Using prototypes as prior information, the deepnet pipeline learns the input image projections into the prototypical embedding space subject to minimization of the final classification loss. Based on our experiments with two different datasets of traffic signs and brand logos, prototypical embeddings incorporated in a conventional convolutional neural network improve the recognition performance. Recognition accuracy on the Belga logo dataset is especially noteworthy and establishes a new state-of-the-art. In zero-shot learning scenarios, the same system can be directly deployed to draw inference on unseen classes by simply adding the prototypical information for these new classes at test time. Thus, unlike earlier approaches, testing on seen and unseen classes is handled using the same pipeline, and the system can be tuned for a trade-off of seen and unseen class performance as per task requirement. Comparison with one of the latest works in the zero-shot learning domain yields top results on the two datasets mentioned above.
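    The test-time mechanism described above can be sketched as nearest-prototype classification in the embedding space: an unseen class is handled by simply adding its prototype. This is a toy stdlib-only sketch with hypothetical 2-D embeddings and class names; in the paper the embedding itself is learned by a convolutional network.

```python
# Prototype embeddings for the seen classes (toy 2-D vectors)
prototypes = {"stop": (1.0, 0.0), "yield": (0.0, 1.0)}

def classify(embedding):
    # Nearest prototype by squared Euclidean distance
    return min(prototypes,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(embedding, prototypes[c])))

assert classify((0.9, 0.1)) == "stop"

# Zero-shot: register a new class at test time by adding its prototype only.
prototypes["no-entry"] = (-1.0, 0.0)
assert classify((-0.8, 0.2)) == "no-entry"
```

Because seen and unseen classes go through the same nearest-prototype rule, no retraining or separate pipeline is needed for the new classes.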
  • PDF
    The Borwein conjecture asserts that for any positive integers $n$ and $k$, the coefficient $a_{3k}$ of $q^{3k}$ in the expansion of $\prod_{j=0}^n (1-q^{3j+1})(1-q^{3j+2})$ is nonnegative. In this note we prove that for any $k\leq n$, $$a_{3k}+a_{3(n+1)+3k}+\cdots+a_{3n(n+1)+3k}>0.$$
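    Both the conjectured sign pattern and the note's inequality are easy to check computationally for small $n$; a stdlib-only sketch:

```python
def borwein_coeffs(n):
    # Coefficients of prod_{j=0}^{n} (1 - q^{3j+1})(1 - q^{3j+2})
    poly = [1]
    for j in range(n + 1):
        for e in (3 * j + 1, 3 * j + 2):
            new = poly + [0] * e
            for i, c in enumerate(poly):
                new[i + e] -= c   # multiply by (1 - q^e)
            poly = new
    return poly

for n in range(1, 6):
    a = borwein_coeffs(n)
    # Borwein conjecture: every coefficient a_{3k} is nonnegative.
    assert all(a[i] >= 0 for i in range(0, len(a), 3))
    # The note's inequality: a_{3k} + a_{3(n+1)+3k} + ... + a_{3n(n+1)+3k} > 0.
    for k in range(n + 1):
        assert sum(a[3 * j * (n + 1) + 3 * k] for j in range(n + 1)) > 0
```

For $n=1$, for instance, the product $(1-q)(1-q^2)(1-q^4)(1-q^5)$ has coefficients $1, -1, -1, 1, -1, 0, 2, 0, -1, 1, -1, -1, 1$, so $a_0, a_3, a_6, a_9, a_{12} = 1, 1, 2, 1, 1$ are all nonnegative.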

Recent comments

Kenneth Goodenough Dec 01 2015 09:38 UTC

Thank you very much for your comment, Hari. Currently we don't have the analytical form of the bound from Pirandola et al. to compare with our results. However, judging by the graph in their paper it is clear that their bound is tighter than our bound for all eta for the case of n = 1. We do expect

...(continued)
Hari Krovi Nov 30 2015 20:26 UTC

Very nice results. I was wondering how your improvement to Takeoka et al for the thermal noise channel compares to the improvement of Pirandola et al (which uses relative entropy of entanglement - ref 34). Sorry if I missed it in your paper.

Mile Gu Nov 20 2015 05:04 UTC

Good question! There shouldn't be any contradiction with the correspondence principle. The reason here is that the quantum models are built to simulate the output behaviour of macroscopic, classical systems, and are not necessarily macroscopic themselves. When we compare quantum and classical comple

...(continued)
hong Nov 20 2015 00:40 UTC

Interesting results. But, just wondering, does it contradict the correspondence principle?

Marco Tomamichel Nov 17 2015 21:05 UTC

Thanks for pointing this out, this is an unintended omission and we will certainly fix it. I thought Koashi was first to use entropic uncertainty relations for QKD but apparently I was wrong.

Raul Garcia-Patron Nov 17 2015 14:42 UTC

Nice work, congratulations!
Please correct me if I am wrong, but there seems to be an important reference missing in the manuscript, the 2003 paper by Frederic Grosshans and Nicolas Cerf using uncertainty relations to prove the security of individual attacks against CV-QKD: arXiv:quant-ph/0311006

Marco Tomamichel Nov 12 2015 06:07 UTC

Okay, so my scite should not be considered as an endorsement. The only interesting part of this paper is Table I and II (minus the caption, which is wrong).

Chris Ferrie Nov 12 2015 05:36 UTC

Feels a bit like numerology, but the simple point that the setting choices are far from uniform is worrisome.

Marco Tomamichel Nov 12 2015 05:13 UTC

And looking forward to the response as well!

Tom Wong Nov 09 2015 11:12 UTC

This resolves an open problem of whether the procedure of Emms et al (2006), which is based on quantum walks, can distinguish all non-isomorphic strongly regular graphs. Their conclusion: no, because they came up with an example where the procedure fails.