# Top arXiv papers

• We construct a linear system non-local game which can be played perfectly using a limit of finite-dimensional quantum strategies, but which cannot be played perfectly on any finite-dimensional Hilbert space, or even with any tensor-product strategy. In particular, this shows that the set of (tensor-product) quantum correlations is not closed. The constructed non-local game provides another counterexample to the "middle" Tsirelson problem, with a shorter proof than our previous paper (though at the loss of the universal embedding theorem). We also show that it is undecidable to determine if a linear system game can be played perfectly with a limit of finite-dimensional quantum strategies.
• Matrix Product Vectors form the appropriate framework to study and classify one-dimensional quantum systems. In this work, we develop the structure theory of Matrix Product Unitary operators (MPUs), which appear, e.g., in the description of time evolutions of one-dimensional systems. We prove that all MPUs have a strict causal cone, making them Quantum Cellular Automata (QCAs), and derive a canonical form for MPUs which relates different MPU representations of the same unitary through a local gauge. We use this canonical form to prove an Index Theorem for MPUs which gives the precise conditions under which two MPUs are adiabatically connected, providing an alternative derivation to that of [Commun. Math. Phys. 310, 419 (2012), arXiv:0910.3675] for QCAs. We also discuss the effect of symmetries on the MPU classification. In particular, we characterize the tensors corresponding to MPUs that are invariant under conjugation, time reversal, or transposition. In the first case, we give a full characterization of all equivalence classes. Finally, we give several examples of MPUs possessing different symmetries.
• We demonstrate that small quantum memories, realized via quantum error correction in multi-qubit devices, can benefit substantially from choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10,000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code under realistic experimental noise, and without any increase in experimental complexity, as we demonstrate by comparison using the error model observed in a recent 7-qubit trapped-ion experiment.
• In this thesis we study properties of open quantum dissipative evolutions of spin systems on lattices described by Lindblad generators, in a particular regime that we denote rapid mixing. We consider dissipative evolutions with a unique fixed point, and which compress the whole space of input states into increasingly small neighborhoods of the fixed point. The time scale at which this compression takes place, or in other words the time we have to wait for any input state to become almost indistinguishable from the fixed point, is called the mixing time of the process. Rapid mixing is a condition on the scaling of this mixing time with the system size: if it is logarithmic, then we have rapid mixing. The main contribution of this thesis is to show that rapid mixing has profound implications for the corresponding system: it is stable against external perturbations and its fixed point satisfies an area law for mutual information.
• Mar 27 2017 quant-ph arXiv:1703.08508v1
We give an arguably simpler and more direct proof of a recent result by Miller, Jain and Shi, who proved device-independent security of a protocol for quantum key distribution in which the devices can be used in parallel. Our proof combines existing results on immunization (Kempe et al., SICOMP 2011) and parallel repetition (Bavarian et al., STOC 2017) of entangled games.
• We quantify the usefulness of a bipartite quantum state in the ancilla-assisted channel discrimination of arbitrary quantum channels, formally defining a worst-case-scenario channel discrimination power for bipartite quantum states. We show that such a quantifier is deeply connected with the operator Schmidt decomposition of the state. We compute the channel discrimination power exactly for pure states, and provide upper and lower bounds for general mixed states. We show that highly entangled states can outperform any state that passes the realignment criterion for separability. Furthermore, while unentangled states can also be used in ancilla-assisted channel discrimination, we show that the channel discrimination power of a state is bounded by its quantum discord.
• The appearance of negative terms in quasiprobability representations of quantum theory is known to be inevitable, and, due to its equivalence with the onset of contextuality, of central interest in quantum computation and information. Until recently, however, nothing was known about how much negativity is necessary in a quasiprobability representation. Zhu proved that the upper and lower bounds with respect to one type of negativity measure are saturated by quasiprobability representations which are in one-to-one correspondence with the elusive symmetric informationally complete quantum measurements (SICs). We define a family of negativity measures which includes Zhu's as a special case and consider another member of the family which we call "sum negativity." We prove a sufficient condition for local maxima in sum negativity and find exact global maxima in dimensions $3$ and $4$. Notably, we find that Zhu's result on the SICs does not generally extend to sum negativity, although the analogous result does hold in dimension $4$. Finally, the Hoggar lines in dimension $8$ make an appearance in a conjecture on sum negativity.
• Recent theoretical and experimental studies have suggested that quantum Monte Carlo (QMC) simulation can behave similarly to quantum annealing (QA). The theoretical analysis was based on calculating transition rates between local minima, in the large spin limit using the Wentzel-Kramers-Brillouin (WKB) approximation, for highly symmetric systems of ferromagnetically coupled qubits. The transition rate was observed to scale identically in QMC and in incoherent quantum tunneling, implying that there might be no quantum advantage of QA compared to QMC other than a prefactor. Quantum annealing is believed to provide quantum advantage through large-scale superposition and entanglement and not just incoherent tunneling. Even for incoherent tunneling, the scaling similarity with QMC observed above does not hold in general. Here, we compare incoherent tunneling and QMC escape using perturbation theory, which has much wider validity than the WKB approximation. We show that the two do not scale the same way when there are multiple homotopy-inequivalent paths for tunneling. We demonstrate through examples that frustration can generate an exponential number of tunneling paths, which under certain conditions can lead to an exponential advantage for incoherent tunneling over classical QMC escape. We provide analytical and numerical evidence for such an advantage and show that it holds beyond perturbation theory.
• We analyse the control of Majorana zero-energy states by mapping the fermionic system onto a chain of Ising spins. Although the topological protection is lost for the Ising chain, the properties of this system provide added insight into the nature of the quantum states. By controlling the local magnetic field, the Ising chain can be separated into topological and non-topological parts. In this paper we propose (topologically non-protected) schemes which allow performing the braiding operation, and in fact also more general rotations. We consider a T-junction geometry, but we also propose a protocol for a strictly one-dimensional setup. Both setups rely on an extra spin-1/2 coupler included either in the T-junction, or as part of the chain such that it controls one of the Ising links. Depending on the quantum state of the coupler, this link can be either ferromagnetic or antiferromagnetic. The coupler can be manipulated once the topological parts of the chain hosting the Majorana fermions are moved far away. Our scheme overcomes limitations which are a consequence of the 1D character of the Jordan-Wigner transformation. We also propose an experimental implementation of our scheme based on a chain of flux qubits with a design providing the needed control fields.
• We develop a general theory to estimate magnetic field gradients in quantum metrology. We consider a system of $N$ particles distributed on a line whose internal degrees of freedom interact with a magnetic field. Classically, gradient estimation is based on precise measurements of the magnetic field at two different locations, performed with two independent groups of particles. This approach, however, is sensitive to fluctuations of the offset field determining the level splitting of the ions and therefore suffers from collective dephasing, so we concentrate on states which are insensitive to these fluctuations. States from the decoherence-free subspace (DFS) allow one to measure the gradient directly, without estimating the magnetic field. We use the framework of quantum metrology to assess the maximal precision of gradient estimation. We find that states from the DFS achieve the highest measurement sensitivity, as quantified by the quantum Fisher information, and we find measurements saturating the quantum Cramér-Rao bound.
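For context, the bound referenced in this abstract is the standard quantum Cramér-Rao bound; in the generic notation below (not the paper's own), $\nu$ is the number of independent repetitions, $F_Q$ the quantum Fisher information, and $b_1$ the gradient parameter being estimated:

```latex
% Quantum Cramér-Rao bound for estimating the gradient parameter b_1
% from \nu repetitions; F_Q is the quantum Fisher information of the
% probe state \varrho with respect to the generator H_{b_1}.
(\Delta b_1)^2 \ \ge\ \frac{1}{\nu\, F_Q\!\left[\varrho,\, H_{b_1}\right]}
```

States maximizing $F_Q$ (here, certain DFS states) therefore saturate the best achievable precision.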
• We consider sequences of random quantum channels defined using the Stinespring formula with Haar-distributed random orthogonal matrices. For any fixed sequence of input states, we study the asymptotic eigenvalue distribution of the outputs through tensor powers of random channels. We show that the input states achieving minimum output entropy are tensor products of maximally entangled states (Bell states) when the tensor power is even. This phenomenon is completely different from the one for random quantum channels constructed from Haar-distributed random unitary matrices, which leads us to formulate some conjectures about the regularized minimum output entropy.
• Quantum tunneling is ubiquitous across different fields, from quantum chemical reactions and magnetic materials to quantum simulators and quantum computers. While simulating the real-time quantum dynamics of tunneling is infeasible for high-dimensional systems, quantum tunneling also shows up in quantum Monte Carlo (QMC) simulations, which scale polynomially with system size. Here we extend recent results obtained for quantum spin models [Phys. Rev. Lett. 117, 180402 (2016)], and study high-dimensional continuous variable models for proton transfer reactions. We demonstrate that QMC simulations efficiently recover ground state tunneling rates due to the existence of an instanton path, which always connects the reactant state with the product. We discuss the implications of our results in the context of quantum chemical reactions and quantum annealing, where quantum tunneling is expected to be a valuable resource for solving combinatorial optimization problems.
• The existence or absence of non-analytic cusps in the Loschmidt-echo return rate is traditionally employed to distinguish between a regular dynamical phase (regular cusps) and a trivial phase (no cusps) in quantum spin chains after a global quench. However, numerical evidence in a recent study [J. C. Halimeh and V. Zauner-Stauber, arXiv:1610.02019] suggests that instead of the trivial phase a distinct anomalous dynamical phase characterized by a novel type of non-analytic cusps occurs in the one-dimensional transverse-field Ising model when interactions are sufficiently long-range. Using an analytic semiclassical approach and exact diagonalization, we show that this anomalous phase also arises in the fully-connected case of infinite-range interactions, and we discuss its defining signature. Our results show that the transition from the regular to the anomalous dynamical phase coincides with Z2-symmetry breaking in the infinite-time limit, thereby showing a connection between two different concepts of dynamical criticality. Our work further expands the dynamical phase diagram of long-range interacting quantum spin chains, and can be tested experimentally in ion-trap setups and ultracold atoms in optical cavities, where interactions are inherently long-range.
• We study gradient magnetometry with an ensemble of atoms with arbitrary spin. We consider the case of a very general spatial probability distribution function. We calculate precision bounds for estimating the gradient of the magnetic field based on the quantum Fisher information. For quantum states that are invariant under homogeneous magnetic fields, we need to measure a single observable to estimate the gradient. On the other hand, for states that are sensitive to homogeneous fields, measurements of two observables are needed, as the homogeneous field must also be estimated. This leads to a two-parameter estimation problem. We present a method to calculate precision bounds for gradient estimation with a chain of atoms or with two spatially separated atomic ensembles feeling different magnetic fields. We also consider a single atomic ensemble with an arbitrary density profile, in which the atoms cannot be addressed individually, which is a very relevant case for experiments. Our model can even take into account correlations between particle positions.
• We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb-Liniger model, 1+1D quantum chromodynamics, as well as Landau-Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.
• We address the dynamics of a bosonic system coupled to either a bosonic or a magnetic environment, and derive a set of sufficient conditions that allow one to describe the dynamics in terms of the effective interaction with a classical fluctuating field. We find that for short interaction times the dynamics of the open system is described by a Gaussian noise map for several different interaction models, independently of the temperature of the environment. More generally, our results indicate that quantum environments may be described by classical fields whenever global symmetries lead to the definition of environmental operators that remain well defined when increasing the size of the environment.
• In the task of assisted coherence distillation via a set of operations X, where X is either local incoherent operations and classical communication (LICC), local quantum-incoherent operations and classical communication (LQICC), separable incoherent operations (SI), or separable quantum-incoherent operations (SQI), two parties, namely Alice and Bob, share many copies of a bipartite joint state. The aim of the process is to generate the maximal possible coherence on Bob's subsystem. In this paper, we investigate the assisted coherence distillation of some special mixed states: states with vanishing basis-dependent discord, and Werner states. We show that all four sets of operations are equivalent for assisted coherence distillation whenever Alice and Bob share one of these mixed quantum states. Moreover, we prove that the assisted coherence distillation of the former can reach the upper bound, namely the QI relative entropy, while that of the latter cannot. We also present a sufficient condition under which the assistance of Alice via the set of operations X cannot help Bob improve his distillable coherence: namely, that the state shared by Alice and Bob has vanishing basis-dependent discord.
• Quantum samplers are believed capable of sampling efficiently from distributions that are classically hard to sample from. We consider a sampler inspired by the Ising model. It is nonadaptive and therefore experimentally amenable. Under a plausible average-case hardness conjecture, classical sampling up to additive errors from this model is known to be hard. We present a trap-based verification scheme for quantum supremacy that only requires the verifier to prepare single-qubit states. The verification is done on the same model as the original sampler, a square lattice, with only a constant factor overhead. We next revamp our verification scheme to operate in the presence of noise by emulating a fault-tolerant procedure without correcting on-line for the errors, thus keeping the model non-adaptive, but verifying supremacy fault-tolerantly. We show that classically sampling up to additive errors is likely hard in our revamped scheme. Our results are applicable to more general sampling problems, such as the Instantaneous Quantum Polynomial-time (IQP) computation model. They should also assist near-term attempts at experimentally demonstrating quantum supremacy and guide long-term ones.
• I will briefly discuss three cosmological models built upon three distinct quantum gravity proposals. I will first highlight the cosmological role of a vector field in the framework of a string/brane cosmological model. I will then present the resolution of the big bang singularity and the occurrence of an early era of accelerated expansion of a geometric origin, in the framework of group field theory condensate cosmology. I will then summarise results from an extended gravitational model based on non-commutative spectral geometry, a model that offers a purely geometric explanation for the standard model of particle physics.
• This paper proposes a crowd counting method. Crowd counting is difficult because of the large appearance changes of targets caused by density and scale changes. Conventional crowd counting methods generally utilize a single predictor (e.g., a regression or multi-class classifier). However, a single predictor cannot count targets with large appearance changes well. In this paper, we propose to predict the number of targets using multiple CNNs, each specialized to a specific appearance, which are adaptively selected according to the appearance of a test image. By integrating the selected CNNs, the proposed method is robust to large appearance changes. In experiments, we confirm that the proposed method counts crowds with lower error than a single CNN or an integration of CNNs with fixed weights. Moreover, we confirm that each predictor automatically specializes to a specific appearance.
• Recently, deep learning (DL) methods have been introduced very successfully into human activity recognition (HAR) scenarios in ubiquitous and wearable computing. In particular, the prospect of overcoming the need for manual feature design, combined with superior classification capabilities, renders deep neural networks very attractive for real-life HAR applications. Even though DL-based approaches now outperform the state of the art in a number of recognition tasks in the field, substantial challenges remain. Most prominently, issues with real-life datasets, typically including imbalanced data and problematic data quality, still limit the effectiveness of activity recognition using wearables. In this paper we tackle such challenges through ensembles of deep Long Short-Term Memory (LSTM) networks. We have developed modified training procedures for LSTM networks and combine sets of diverse LSTM learners into classifier collectives. We demonstrate, both formally and empirically, that ensembles of deep LSTM learners outperform individual LSTM networks. Through an extensive experimental evaluation on three standard benchmarks (Opportunity, PAMAP2, Skoda) we demonstrate the excellent recognition capabilities of our approach and its potential for real-life applications of human activity recognition.
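The final fusion step of such classifier collectives can be sketched in a few lines. This is a generic score-averaging sketch with stubbed-in network outputs; the numbers and the fusion rule are illustrative, not the paper's exact scheme:

```python
import numpy as np

def fuse_ensemble(prob_stacks):
    """Combine per-model class-probability matrices of shape
    (samples, classes) by averaging the scores across models,
    then take the argmax per sample."""
    probs = np.mean(np.stack(prob_stacks), axis=0)
    return probs.argmax(axis=1)

# Three stub "LSTM" outputs for 2 samples and 3 activity classes.
m1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
m2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
m3 = np.array([[0.7, 0.2, 0.1], [0.2, 0.2, 0.6]])
print(fuse_ensemble([m1, m2, m3]))  # predicted class per sample
```

Score-level averaging is only one possible combination rule; majority voting over per-model argmax predictions is an equally common alternative.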
• In applications of Einstein gravity one replaces the quantum-mechanical energy-momentum tensor of sources such as the degenerate electrons in a white dwarf or the black-body photons in the microwave background by c-number matrix elements. And not only that, one ignores the zero-point fluctuations in these sources by only retaining the normal-ordered parts of those matrix elements. There is no apparent justification for this procedure, and we show that it is precisely this procedure that leads to the cosmological constant problem. We suggest that solving the problem requires that gravity be treated just as quantum-mechanically as the sources to which it couples, and show that one can then solve the cosmological constant problem if one replaces Einstein gravity by the fully quantum-mechanically consistent conformal gravity theory.
• Hyperpolarisation at room temperature is one of the most important research fields for improving liquid, gas, or nanoparticle tracers for Magnetic Resonance Imaging (MRI) in medical applications. In this paper we use nuclear magnetic resonance (NMR) to investigate the hyperpolarisation effect of negatively charged nitrogen-vacancy (NV) centres on carbon-13 nuclei and their spin diffusion in a diamond single crystal close to the excited-state level anti-crossing (ESLAC) around 50 mT. Whereas the electron spin of the NV centre can easily be polarized into its m = 0 ground state at room temperature simply by irradiation with green light, swapping the NV electron spin polarization to carbon-13 nuclei is a complex task. We found that the coupling between the polarized NV electron spin, the electron spin of a substitutional nitrogen impurity (P1) as well as its nitrogen-14 nucleus, and the carbon-13 nuclear spin has to be considered. Here we show that, by optimizing this procedure, in about two minutes one can obtain a signal-to-noise ratio corresponding to a 23-hour standard measurement without hyperpolarisation accumulating 460 single scans. Furthermore, we were able to identify several polarisation peaks of different sign at different magnetic fields within a region of a few tens of gauss. Most of the peaks can be attributed to a coupling of the NV centres to nearby P1 centres. We present a new theoretical model, framed as cross polarisation in a four-spin dynamic model, in good agreement with our experimental data. The results demonstrate the opportunities and power, as well as the limitations, of hyperpolarisation in diamond via NV centres. We expect that the current work may have a significant impact on future applications.
• This work presents a study on the extraction and analysis of a set of 101 categories of eye movement features from three types of eye movement events: fixations, saccades, and post-saccadic oscillations. The eye movements were recorded during a reading task. For categories of features with multiple instances in a recording, we extract corresponding feature subtypes by calculating descriptive statistics on the distributions of these instances. A unified framework of detailed descriptions and mathematical formulas is provided for the extraction of the feature set. The analysis of feature values is performed using a large database of eye movement recordings from a normative population of 298 subjects. We demonstrate the central tendency and overall variability of feature values over the experimental population, and, more importantly, we quantify the test-retest reliability (repeatability) of each separate feature. The described methods and analysis can provide valuable tools in fields exploring eye movements, such as behavioral studies, attention and cognition research, medical research, biometric recognition, and human-computer interaction.
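The subtype extraction described above, collapsing the distribution of a feature's instances within one recording into descriptive statistics, can be sketched as follows. The particular statistics and stub values here are illustrative, not the paper's exact set:

```python
import statistics

def feature_subtypes(values):
    """Collapse the per-recording distribution of a feature's instances
    (e.g. all fixation durations in one recording) into scalar subtypes."""
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "std": statistics.pstdev(values),  # population standard deviation
        "min": min(values),
        "max": max(values),
    }

# Stub fixation durations (ms) from one recording.
durations = [180, 220, 200, 260, 240]
print(feature_subtypes(durations))
```

Each subtype then becomes one scalar feature per recording, which is what test-retest reliability is computed on.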
• Zero-shot learning (ZSL) endows a computer vision system with the inferential capability to recognize instances of a new category that it has never seen before. Two fundamental challenges are visual-semantic embedding and domain adaptation, arising in the cross-modality learning and unseen class prediction steps, respectively. To address both challenges, this paper presents two corresponding methods, named Adaptive STructural Embedding (ASTE) and Self-PAced Selective Strategy (SPASS). Specifically, ASTE formulates the visual-semantic interactions in a latent structural SVM framework, adaptively adjusting the slack variables to embody the differing reliability among training instances. In this way, reliable instances incur small penalties, whereas less reliable instances incur more severe penalties, ensuring a more discriminative embedding. On the other hand, SPASS offers a framework to alleviate the domain shift problem in ZSL by exploiting the unseen data in an easy-to-hard fashion. In particular, SPASS borrows the idea of self-paced learning, iteratively selecting unseen instances from reliable to less reliable so as to gradually adapt knowledge from the seen domain to the unseen domain. Subsequently, by combining SPASS and ASTE, we present a self-paced Transductive ASTE (TASTE) method that progressively reinforces the classification capacity. Extensive experiments on three benchmark datasets (i.e., AwA, CUB, and aPY) demonstrate the superiority of ASTE and TASTE. Furthermore, we also propose a fast training (FT) strategy to improve the efficiency of most existing ZSL methods. The FT strategy is surprisingly simple yet general enough to speed up the training time of most existing methods by 4 to 300 times while maintaining their previous performance.
• As an important and challenging problem in computer vision, zero-shot learning (ZSL) aims at automatically recognizing instances from unseen object classes without training data. To address this problem, ZSL is usually carried out in the following two aspects: 1) capturing the domain distribution connections between seen-class and unseen-class data; and 2) modeling the semantic interactions between the image feature space and the label embedding space. Motivated by these observations, we propose a bidirectional-mapping-based semantic relationship modeling scheme that seeks cross-modal knowledge transfer by simultaneously projecting the image features and label embeddings into a common latent space. Namely, we have a bidirectional connection relationship from the image feature space to the latent space as well as from the label embedding space to the latent space. To deal with the domain shift problem, we further present a transductive learning approach that formulates the class prediction problem as an iterative refining process, where the object classification capacity is progressively reinforced through bootstrapping-based model updating over highly reliable instances. Experimental results on three benchmark datasets (AwA, CUB and SUN) demonstrate the effectiveness of the proposed approach against state-of-the-art approaches.
• Mar 28 2017 quant-ph arXiv:1703.08700v1
We establish the equivalence between the loss of coherence due to mixing in a quantum system and the loss of information after performing a projective measurement. Subsequently, it is demonstrated that the quantum discord, a measure of correlation for the bipartite system $\rho_{Alice\leftarrow Bob}$, is identical to the minimum difference (over all projectors $|i\rangle\langle i|$) between the local coherence (an LQICC monotone) on Bob's side and the coherence of the reduced density matrix $\rho^B$.
• Mar 27 2017 cs.NE arXiv:1703.08535v1
Grammatical Evolution (GE) is a population-based evolutionary algorithm, where a formal grammar is used in the genotype to phenotype mapping process. PonyGE2 is an open source implementation of GE in Python, developed at UCD's Natural Computing Research and Applications group. It is intended as an advertisement and a starting-point for those new to GE, a reference for students and researchers, a rapid-prototyping medium for our own experiments, and a Python workout. As well as providing the characteristic genotype to phenotype mapping of GE, a search algorithm engine is also provided. A number of sample problems and tutorials on how to use and adapt PonyGE2 have been developed.
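This is not PonyGE2's actual API, but the genotype-to-phenotype mapping that GE (and hence PonyGE2) is built around can be sketched with the standard modulo rule over a toy grammar; the grammar and genome below are illustrative:

```python
# Minimal sketch of GE's genotype-to-phenotype mapping (modulo rule).
# Toy grammar: an <expr> is either a binary expression, "x", or "1".
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["*"]],
}

def map_genotype(genome, start="<expr>", max_wraps=2):
    """Expand the leftmost non-terminal repeatedly, choosing each
    production with (codon % number_of_productions); wrap around the
    genome if codons run out, up to max_wraps times."""
    derivation, i, wraps = [start], 0, 0
    while any(s in GRAMMAR for s in derivation):
        if i == len(genome):
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError("mapping failed: out of codons")
        pos = next(k for k, s in enumerate(derivation) if s in GRAMMAR)
        rules = GRAMMAR[derivation[pos]]
        derivation[pos:pos + 1] = rules[genome[i] % len(rules)]
        i += 1
    return "".join(derivation)

print(map_genotype([0, 1, 0, 2, 1]))  # → "x+1"
```

The search algorithm then evolves the integer genomes, while fitness is always evaluated on the mapped phenotypes.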
• Recently, Naruse presented a beautiful cancellation-free hook-length formula for skew shapes. The formula involves a sum over objects called *excited diagrams*, and the term corresponding to each excited diagram has hook lengths in the denominator, like the classical hook-length formula due to Frame, Robinson and Thrall. In this paper, we present a simple bijection that proves an equivalent recursive version of Naruse's result, in the same way that the celebrated hook-walk proof due to Greene, Nijenhuis and Wilf gives a bijective (or probabilistic) proof of the hook-length formula for ordinary shapes. In particular, we also give a new bijective proof of the classical hook-length formula, quite different from the known proofs.
• The last two decades have seen the emergence and steady development of tangible user interfaces. While most of these interfaces are applied for input - with output still on traditional computer screens - the goal of programmable matter and actuated shape-changing materials is to directly use the physical objects for visual or tangible feedback. Advances in material sciences and flexible display technologies are investigated to enable such reconfigurable physical objects. While existing solutions aim to make physical objects more controllable via the digital world, we propose an approach where holograms (virtual objects) in a mixed reality environment are augmented with physical variables such as shape, texture or temperature. Support for mobility forms an important contribution of the proposed solution, since it enables users to move freely within and across environments. Furthermore, our augmented virtual objects can co-exist in a single environment with programmable matter and other actuated shape-changing solutions. The future potential of the proposed approach is illustrated in two usage scenarios and we hope that the presentation of our work in progress on a novel way to realise tangible holograms will foster some lively discussions in the CHI community.
• Estimating distributions of labels associated with nodes (e.g., number of connections or citizenship of users in a social network) in large graphs via sampling is a vital part of the study of complex networks. Due to their low cost, random walks (RWs) have been proposed as an attractive solution to this task. Most RW methods assume either that the network is undirected or that walkers can traverse edges regardless of their direction. Some RW methods have been designed for directed networks where edges coming into a node are not directly observable. In this work, we propose Directed Unbiased Frontier Sampling (DUFS), a sampling method based on a large number of coordinated walkers, each starting from a node chosen uniformly at random. It is applicable to directed networks with invisible incoming edges because it constructs, in real time, an undirected graph consistent with the walkers' trajectories, and because it uses random jumps that prevent walkers from being trapped. DUFS generalizes previous RW methods and is suited to undirected networks and to directed networks regardless of in-edge visibility. We also propose an improved estimator of vertex label distributions which combines information from the initial walker locations with subsequent RW observations. We evaluate DUFS against other RW methods, investigating the impact of its parameters on estimation accuracy and providing practical guidelines for choosing them. In estimating out-degree distributions, DUFS yields significantly better estimates of the head of the distribution than other methods, while matching or exceeding their accuracy on the tail. Last, we show that DUFS outperforms uniform vertex sampling (VS) when estimating label distributions of the top 10% largest-degree nodes, even when vertex sampling has the same cost as RW steps.
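DUFS itself involves coordinated walkers and real-time graph construction, but the underlying idea of correcting a random walk's degree bias, re-weighting each visit by 1/degree so that the walk effectively samples nodes uniformly, can be sketched on a toy undirected graph. Everything here is illustrative and is the classical RW estimator, not the DUFS algorithm:

```python
import random

def rw_label_distribution(adj, labels, steps, start, seed=0):
    """Estimate a node-label distribution from a simple random walk.
    The walk's stationary distribution is proportional to degree, so
    each visit is weighted by 1/degree to undo that bias."""
    rng = random.Random(seed)
    weights = {}
    node = start
    for _ in range(steps):
        w = 1.0 / len(adj[node])
        weights[labels[node]] = weights.get(labels[node], 0.0) + w
        node = rng.choice(adj[node])  # move to a uniform random neighbor
    total = sum(weights.values())
    return {lab: w / total for lab, w in weights.items()}

# Toy undirected graph: a hub (node 0) with three leaves, two connected.
adj = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2]}
labels = {0: "hub", 1: "leaf", 2: "leaf", 3: "leaf"}
print(rw_label_distribution(adj, labels, steps=10000, start=0))
```

With enough steps the estimate approaches the uniform-over-nodes answer (leaf mass 3/4), which is what an unbiased estimator should recover.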
• Semantic segmentation has been a long-standing challenging task in computer vision. It aims at assigning a label to each image pixel and needs a significant amount of pixel-level annotated data, which is often unavailable. To address this lack, in this paper, we leverage, on one hand, a massive amount of available unlabeled or weakly labeled data, and on the other hand, non-real images created through Generative Adversarial Networks. In particular, we propose a semi-supervised framework, based on Generative Adversarial Networks (GANs), which consists of a generator network to provide extra training examples to a multi-class classifier, acting as discriminator in the GAN framework, that assigns each sample a label y from the K possible classes or marks it as a fake sample (extra class). The underlying idea is that adding large amounts of fake visual data forces real samples to be close in the feature space, enabling a bottom-up clustering process, which, in turn, improves multi-class pixel classification. To ensure higher quality of generated images for GANs with consequently improved pixel classification, we extend the above framework by adding weakly annotated data, i.e., we provide class-level information to the generator. We tested our approaches on several challenging benchmark visual datasets, i.e. PASCAL, SiftFlow, Stanford and CamVid, achieving competitive performance compared to state-of-the-art semantic segmentation methods.
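The K+1-class discriminator objective can be sketched with plain numpy (a simplified stand-in for the pixel-wise GAN loss; names and shapes are illustrative):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def discriminator_loss(logits_real, labels_real, logits_fake, num_classes):
    """K+1-class discriminator loss: real samples get one of K class
    labels; generated samples are pushed into the extra fake class
    (index K). Logits have K+1 columns."""
    p_real = softmax(logits_real)
    p_fake = softmax(logits_fake)
    # cross-entropy on labelled real samples
    loss_real = -np.mean(np.log(p_real[np.arange(len(labels_real)), labels_real]))
    # generated samples should be assigned the fake class
    loss_fake = -np.mean(np.log(p_fake[:, num_classes]))
    return loss_real + loss_fake
```

In the segmentation setting this loss would be applied per pixel, and the generator is trained adversarially against it.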
• A strained graphene monolayer is shown to operate as a highly efficient quantum heat engine delivering maximum power. The efficiency and power of the proposed device exceed those of recent proposals. The reason for these excellent characteristics is that strain enables complete valley separation in transmittance through the device, implying that increasing strain leads to a very high Seebeck coefficient as well as lower conductance. In addition, since time-reversal symmetry is unbroken in our system, the proposed strained graphene quantum heat engine can also act as a high-performance refrigerator.
• The Horndeski Lagrangian brings together all possible interactions between gravity and a scalar field that yield second-order field equations in four-dimensional spacetime. As originally proposed, it only addresses phenomenology without torsion, which is a non-Riemannian feature of geometry. Since torsion can potentially affect interesting phenomena such as gravitational waves and early Universe inflation, in this paper we allow torsion to exist and propagate within the Horndeski framework. To achieve this goal, we cast the Horndeski Lagrangian in Cartan's first-order formalism, and introduce wave operators designed to act covariantly on p-form fields that carry Lorentz indices. We find that nonminimal couplings and second-order derivatives of the scalar field in the Lagrangian are indeed generic sources of torsion. Metric perturbations couple to the background torsion and new torsional modes appear. These may be detected via gravitational waves but not through Yang-Mills gauge bosons.
• In visual question answering (VQA), an algorithm must answer text-based questions about images. While multiple datasets for VQA have been created since late 2014, they all have flaws in both their content and the way algorithms are evaluated on them. As a result, evaluation scores are inflated and predominantly determined by answering easier questions, making it difficult to compare different methods. In this paper, we analyze existing VQA algorithms using a new dataset. It contains over 1.6 million questions organized into 12 different categories. We also introduce questions that are meaningless for a given image to force a VQA system to reason about image content. We propose new evaluation schemes that compensate for over-represented question-types and make it easier to study the strengths and weaknesses of algorithms. We analyze the performance of both baseline and state-of-the-art VQA models, including multi-modal compact bilinear pooling (MCB), neural module networks, and recurrent answering units. Our experiments establish how attention helps certain categories more than others, determine which models work better than others, and explain how simple models (e.g. MLP) can surpass more complex models (MCB) by simply learning to answer large, easy question categories.
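One such compensation scheme is simply the arithmetic mean of per-question-type accuracies, which stops large, easy categories from dominating the score (a sketch of the idea; the paper's exact metrics may differ):

```python
from collections import defaultdict

def mean_per_type_accuracy(records):
    """Arithmetic mean of per-category accuracies.

    `records` is a list of (question_type, correct) pairs. Unlike raw
    accuracy, over-represented question types contribute no more than
    rare ones."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for qtype, correct in records:
        totals[qtype] += 1
        hits[qtype] += bool(correct)
    return sum(hits[t] / totals[t] for t in totals) / len(totals)
```

For example, a model that answers 90 easy questions correctly and 10 hard ones incorrectly has raw accuracy 0.9 but mean per-type accuracy 0.5.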
• Multicellular chemotaxis can occur via individually chemotaxing cells that are physically coupled. Alternatively, it can emerge collectively, from cells chemotaxing differently in a group than they would individually. We find that while the speeds of these two mechanisms are roughly the same, the precision of emergent chemotaxis is higher than that of individual-based chemotaxis for one-dimensional cell chains and two-dimensional cell sheets, but not three-dimensional cell clusters. We describe the physical origins of these results, discuss their biological implications, and show how they can be tested using common experimental measures such as the chemotactic index.
• Mar 29 2017 cs.HC cs.IR arXiv:1703.09662v1
Different users can use a given Internet application in many different ways. The ability to record detailed event logs of user in-application activity allows us to discover ways in which the application is being used. This enables personalization and also leads to important insights with actionable business and product outcomes. Here we study the problem of user session categorization, where the goal is to automatically discover categories/classes of user in-session behavior using event logs, and then consistently categorize each user session into the discovered classes. We develop a three-stage approach which uses clustering to discover categories of sessions, then builds classifiers to classify new sessions into the discovered categories, and finally performs daily classification in a distributed pipeline. An important innovation of our approach is selecting a set of events as long-tail features, and replacing them with a new feature that is less sensitive to product experimentation and logging changes. This allows for robust and stable identification of session types even though the underlying application is constantly changing. We deploy the approach to Pinterest and demonstrate its effectiveness. We discover insights that have consequences for product monetization, growth, and design. Our solution classifies millions of user sessions daily and leads to actionable insights.
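The long-tail feature replacement can be sketched as follows (event names and the threshold are hypothetical; the deployed pipeline is distributed and more elaborate):

```python
from collections import Counter

def collapse_long_tail(sessions, min_count=5):
    """Replace rarely-seen event types with a single aggregate feature.

    Event types occurring fewer than `min_count` times across all
    sessions are mapped to one 'long_tail' feature, so session features
    stay stable under product experimentation and logging changes."""
    counts = Counter(e for s in sessions for e in s)
    head = {e for e, c in counts.items() if c >= min_count}
    return [[e if e in head else "long_tail" for e in s] for s in sessions]
```

The collapsed event sequences would then be featurized for the clustering and classification stages.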
• In this article, we present an orthogonal basis expansion method for solving stochastic differential equations with a path-independent solution of the form $X_{t}=\phi(t,W_{t})$. For this purpose, we define a Hilbert space and construct an orthogonal basis for this inner product space with the aid of 2D-Hermite polynomials. Expanding $X_{t}$ in this orthogonal basis, the method is implemented and the expansion coefficients are obtained by solving a system of nonlinear integro-differential equations. The strength of the method is that the expectation and variance of the solution are computed directly from these coefficients. Finally, numerical results demonstrate its validity and efficiency in comparison with other numerical methods.
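Schematically, and in our own notation (the paper builds its basis functions from 2D-Hermite polynomials; the exact construction may differ), the truncated expansion reads:

```latex
% Generic orthogonal-basis sketch in our notation, not the paper's.
X_t = \phi(t, W_t) \approx \sum_{i,j=0}^{N} a_{ij}\, e_{ij}(t, W_t),
\qquad a_{ij} = \langle \phi,\, e_{ij} \rangle .
```

By orthogonality, the mean and variance of $X_t$ then follow directly from the coefficients $a_{ij}$, which is the computational advantage claimed in the abstract.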
• Mar 29 2017 math.DG arXiv:1703.09629v1
This note gives sufficient conditions (isothermic or totally nonisothermic) for an immersion of a compact surface to have no Bonnet mate.
• Existing RNN-based approaches for action recognition from depth sequences require either skeleton joints or hand-crafted depth features as inputs. An end-to-end approach, mapping from raw depth maps to action classes, is non-trivial to design because: 1) a single-channel map lacks texture and thus weakens the discriminative power; 2) the set of depth training data is relatively small. To address these challenges, we propose to learn an RNN driven by privileged information (PI) in three steps: An encoder is pre-trained to learn a joint embedding of depth appearance and PI (i.e. skeleton joints). The learned embedding layers are then tuned in the learning step, aiming to optimize the network by exploiting PI in the form of a multi-task loss. However, exploiting PI as a secondary task provides little help in improving the performance of the primary task (i.e. classification) due to the gap between them. Finally, a bridging matrix is defined to connect the two tasks by discovering latent PI in the refining step. Our PI-based classification loss maintains consistency between latent PI and the predicted distribution. The latent PI and the network are iteratively estimated and updated in an expectation-maximization procedure. The proposed learning process provides greater discriminative power to model subtle depth differences, while helping avoid overfitting the scarce training data. Our experiments show significant performance gains over state-of-the-art methods on three public benchmark datasets and our newly collected Blanket dataset.
• Classical higher-order logic, when utilized as a meta-logic in which various other (classical and non-classical) logics can be shallowly embedded, is well suited for realising a universal logic reasoning approach. Universal logic reasoning in turn, as envisioned already by Leibniz, may support the rigorous formalisation and deep logical analysis of rational arguments within machines. A respective universal logic reasoning framework is described and a range of exemplary applications are discussed. In the future, universal logic reasoning in combination with appropriate, controlled forms of rational argumentation may serve as a communication layer between humans and intelligent machines.
• When learning to use an Application Programming Interface (API), programmers need to understand the inputs and outputs (I/O) of the API functions. Current documentation tools automatically document the static information of I/O, such as parameter types and names. What is missing from these tools is dynamic information, such as I/O examples---actual valid values of inputs that produce certain outputs. In this paper, we demonstrate Docio, a prototype toolset we built to generate I/O examples. Docio logs I/O values when API functions are executed, for example in running test suites. Then, Docio puts I/O values into API documents as I/O examples. Docio has three programs: 1) funcWatch, which collects I/O values when API developers run test suites, 2) ioSelect, which selects one I/O example from a set of I/O values, and 3) ioPresent, which embeds the I/O examples into documents. In a preliminary evaluation, we used Docio to generate four hundred I/O examples for three C libraries: ffmpeg, libssh, and protobuf-c. Docio is open-source and available at: http://www3.nd.edu/~sjiang1/docio/
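The funcWatch idea of recording actual I/O values during test-suite runs can be sketched as a decorator (`func_watch`, `IO_LOG`, and `clamp` are our illustrative names; Docio itself targets C libraries, so the real tool is implemented differently):

```python
import functools

IO_LOG = {}

def func_watch(fn):
    """Record actual input/output values whenever the wrapped API
    function runs, e.g. during a test-suite execution. The collected
    values can later be selected and embedded into documentation."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        IO_LOG.setdefault(fn.__name__, []).append((args, kwargs, result))
        return result
    return wrapper

@func_watch
def clamp(x, lo, hi):
    """Toy API function whose I/O we want documented."""
    return max(lo, min(x, hi))
```

An ioSelect-style step would then pick one representative `(args, kwargs, result)` tuple per function, and an ioPresent-style step would render it into the docs.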
• Committing to a version control system means submitting a software change to the system. Each commit can have a message to describe the submission. Several approaches have been proposed to automatically generate the content of such messages. However, the quality of the automatically generated messages falls far short of what humans write. In studying the differences between auto-generated and human-written messages, we found that 82% of the human-written messages have only one sentence, while the automatically generated messages often have multiple lines. Furthermore, we found that the commit messages often begin with a verb followed by a direct object. This finding inspired us to use a "verb+object" format in this paper to generate short commit summaries. We split the approach into two parts: verb generation and object generation. As our first try, we trained a classifier to classify a diff to a verb. We are seeking feedback from the community before we continue to work on generating direct objects for the commits.
• The phase dependence of the cavity quantum dynamics in a driven equidistant three-level ladder-type system found in a quantum well structure with perpendicular transition dipoles is investigated in the good cavity limit. The pumping laser phases are directly transferred to the superposed amplitudes of the cavity-quantum-well interaction. Their phase difference may be tuned in order to obtain destructive quantum interferences. Therefore, the cavity field vanishes although the emitter continues to be pumped.
• In a model of the late-time cosmic acceleration within the framework of generalized Proca theories, there exists a de Sitter attractor preceded by the dark energy equation of state $w_{\rm DE}=-1-s$, where $s$ is a positive constant. We run the Markov-Chain-Monte-Carlo code to confront the model with the observational data of Cosmic Microwave Background (CMB), baryon acoustic oscillations, supernovae type Ia, and local measurements of the Hubble expansion rate for the background cosmological solutions and obtain the bound $s=0.254^{+0.118}_{-0.097}$ at 95% confidence level (CL). The existence of the additional parameter $s$, beyond those in the $\Lambda$-Cold-Dark-Matter ($\Lambda$CDM) model, makes it possible to reduce the tension in the Hubble constant $H_0$ between the CMB and the low-redshift measurements. Including the cosmic growth data of redshift-space distortions in the galaxy power spectrum and taking into account no-ghost and stability conditions of cosmological perturbations, we find that the bound on $s$ is shifted to $s=0.16^{+0.08}_{-0.08}$ (95% CL) and hence the model with $s>0$ is still favored over the $\Lambda$CDM model. Apart from the quantities $s$, $H_0$ and today's matter density parameter $\Omega_{m0}$, the constraints on other model parameters associated with perturbations are less stringent, reflecting the fact that there are different sets of parameters that give rise to similar cosmic expansion and growth histories.
• Mar 29 2017 cs.CV arXiv:1703.09554v1
Convolutional networks reach top quality in pixel-level object tracking but require a large amount of training data (1k ~ 10k) to deliver such results. We propose a new training strategy which achieves state-of-the-art results across three evaluation datasets while using 20x ~ 100x less annotated data than competing methods. Instead of using large training sets hoping to generalize across domains, we generate in-domain training data using the provided annotation on the first frame of each video to synthesize ("lucid dream") plausible future video frames. In-domain per-video training data allows us to train high quality appearance- and motion-based models, as well as tune the post-processing stage. This approach allows us to reach competitive results even when training from only a single annotated frame, without ImageNet pre-training. Our results indicate that using a larger training set is not automatically better, and that for the tracking task a smaller training set that is closer to the target domain is more effective. This changes the mindset regarding how many training samples and general "objectness" knowledge are required for the object tracking task.
• While humor has been historically studied from a psychological, cognitive and linguistic standpoint, its study from a computational perspective is an area yet to be explored in Computational Linguistics. There exist some previous works, but a characterization of humor that allows its automatic recognition and generation is far from being specified. In this work we build a crowdsourced corpus of labeled tweets, annotated according to their humor value, letting the annotators subjectively decide which are humorous. A humor classifier for Spanish tweets is assembled based on supervised learning, reaching a precision of 84% and a recall of 69%.
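The reported figures are standard precision and recall over the "humorous" class; for reference, a minimal computation (label names are illustrative):

```python
def precision_recall(y_true, y_pred, positive="humorous"):
    """Precision and recall for the positive class.

    Precision = TP / (TP + FP): of the tweets labeled humorous, how
    many really are. Recall = TP / (TP + FN): of the truly humorous
    tweets, how many were found."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A precision of 84% with recall of 69% thus means the classifier is conservative: most tweets it flags are humorous, but it misses about a third of the humorous ones.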
• In recent years, the performance of face verification systems has significantly improved using deep convolutional neural networks (DCNNs). A typical pipeline for face verification includes training a deep network for subject classification with softmax loss, using the penultimate layer output as the feature descriptor, and generating a cosine similarity score given a pair of face images. The softmax loss function does not optimize the features to have higher similarity score for positive pairs and lower similarity score for negative pairs, which leads to a performance gap. In this paper, we add an L2-constraint to the feature descriptors which restricts them to lie on a hypersphere of a fixed radius. This module can be easily implemented using existing deep learning frameworks. We show that integrating this simple step in the training pipeline significantly boosts the performance of face verification. Specifically, we achieve state-of-the-art results on the challenging IJB-A dataset, achieving True Accept Rates of 0.863 and 0.910 at False Accept Rates 0.0001 and 0.001 respectively on the face verification protocol.
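The L2-constraint amounts to projecting feature descriptors onto a hypersphere of a fixed radius before the softmax; a minimal numpy sketch (the radius value is illustrative, not the paper's tuned one):

```python
import numpy as np

def l2_constrain(features, alpha=24.0):
    """Scale each feature row to lie on a hypersphere of radius `alpha`.

    Inserted between the penultimate layer and the softmax, this forces
    the loss to depend only on feature direction, aligning training with
    the cosine-similarity scoring used at verification time."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return alpha * features / np.maximum(norms, 1e-12)

def cosine_similarity(a, b):
    """Verification score for a pair of face descriptors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Note that cosine similarity is invariant to the radius, so the constraint changes what the training loss optimizes without changing the scoring rule.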
• Mar 29 2017 cs.CV arXiv:1703.09474v1
Person re-identification (re-id) aims to match people across non-overlapping camera views. So far the RGB-based appearance is widely used in most existing works. However, when people appear under extreme illumination or change clothes, RGB appearance-based re-id methods tend to fail. To overcome this problem, we propose to exploit depth information to provide more invariant body shape and skeleton information regardless of illumination and color change. More specifically, we exploit the depth voxel covariance descriptor and further propose a locally rotation invariant depth shape descriptor called the Eigen-depth feature to describe pedestrian body shape. We prove that the distance between any two covariance matrices on the Riemannian manifold is equivalent to the Euclidean distance between the corresponding Eigen-depth features. Furthermore, we propose a kernelized implicit feature transfer scheme to estimate the Eigen-depth feature implicitly from an RGB image when depth information is not available. We find that combining the estimated depth features with RGB-based appearance features can sometimes help to better reduce visual ambiguities of appearance features caused by illumination and similar clothes. The effectiveness of our models was validated on publicly available depth pedestrian datasets in comparison with related person re-identification methods.
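The covariance-to-Euclidean correspondence can be illustrated under the log-Euclidean metric, where the Euclidean distance between vectorised matrix logarithms equals the Riemannian distance between SPD covariance matrices (a simplified stand-in for intuition; the paper's Eigen-depth construction may differ):

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix,
    computed via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_feature(C):
    """Vectorised matrix log: Euclidean distances between these
    features equal log-Euclidean Riemannian distances between the
    underlying SPD covariance matrices."""
    return logm_spd(C).ravel()
```

Mapping covariance descriptors into such a flat feature space is what lets standard Euclidean machinery (and kernel methods) operate on them directly.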
• Users like sharing personal photos with others through social media. At the same time, they might want to make automatic identification in such photos difficult or even impossible. Classic obfuscation methods such as blurring are not only unpleasant but also not as effective as one would expect. Recent studies on adversarial image perturbations (AIP) suggest that it is possible to confuse recognition systems effectively without unpleasant artifacts. However, in the presence of countermeasures against AIPs, it is unclear how effective AIPs would be, particularly when the choice of countermeasure is unknown. Game theory provides tools for studying the interaction between agents with uncertainties in their strategies. We introduce a general game-theoretical framework for the user-recogniser dynamics, and present a case study that involves current state-of-the-art AIP and person recognition techniques. We derive the optimal strategy for the user that assures an upper bound on the recognition rate independent of the recogniser's countermeasure.
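The guaranteed upper bound corresponds to the minimax value of a zero-sum game between user and recogniser; a toy 2x2 sketch by grid enumeration (illustrative only, not the paper's solution method):

```python
from fractions import Fraction

def minimax_value_2x2(R):
    """Approximate minimax value of a 2x2 zero-sum game.

    R[i][j] is the recognition rate when the user plays strategy i and
    the recogniser plays countermeasure j; the user minimises, the
    recogniser maximises. We enumerate the user's mixed strategies on
    a grid and return the best guaranteed worst case."""
    best = None
    for k in range(101):
        p = Fraction(k, 100)  # probability of user strategy 0
        worst = max(p * R[0][j] + (1 - p) * R[1][j] for j in range(2))
        best = worst if best is None else min(best, worst)
    return float(best)
```

For the matching-pennies-like matrix `[[1, 0], [0, 1]]`, mixing both user strategies equally caps the recognition rate at 0.5 no matter which countermeasure is chosen.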

Laura Mančinska Mar 28 2017 13:09 UTC

Great result!

For those familiar with I_3322, William here gives an example of a nonlocal game exhibiting a behaviour that many of us suspected (but couldn't prove) to be possessed by I_3322.

gae spedalieri Mar 13 2017 14:13 UTC

1) Sorry but this is false.

1a) That analysis is specifically for reducing QECC protocol to an entanglement distillation protocol over certain class of discrete variable channels. Exactly as in BDSW96. Task of the protocol is changed in the reduction.

1b) The simulation is not via a general LOCC b

...(continued)
Siddhartha Das Mar 13 2017 13:22 UTC

We feel that we have cited and credited previous works appropriately in our paper. To clarify:

1) The LOCC simulation of a channel and the corresponding adaptive reduction can be found worked out in full generality in the 2012 Master's thesis of Muller-Hermes. We have cited the original paper BD

...(continued)
gae spedalieri Mar 13 2017 08:56 UTC

This is one of those papers where the contribution of previous literature is omitted and not fairly represented.

1- the LOCC simulation of quantum channels (not necessarily teleportation based) and the corresponding general reduction of adaptive protocols was developed in PLOB15 (https://arxiv.org/

...(continued)
Noon van der Silk Mar 08 2017 04:45 UTC

I feel that while the proliferation of GUNs is unquestionably a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent

...(continued)
Qian Wang Mar 07 2017 17:21 UTC

"To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics."

Christopher Chamberland Mar 02 2017 18:48 UTC

A good paper for learning about exRec's is this one https://arxiv.org/abs/quant-ph/0504218. Also, rigorous threshold lower bounds are obtained using an adversarial noise model approach.

Anirudh Krishna Mar 02 2017 18:40 UTC

Here's a link to a lecture from Dan Gottesman's course at PI about exRecs.
http://pirsa.org/displayFlash.php?id=07020028

You can find all the lectures here:
http://www.perimeterinstitute.ca/personal/dgottesman/QECC2007/index.html

Ben Criger Mar 02 2017 08:58 UTC

Good point, I wish I knew more about ExRecs.

Robin Blume-Kohout Feb 28 2017 09:55 UTC

I totally agree -- that part is confusing. It's not clear whether "arbitrary good precision ... using a limited amount of hardware" is supposed to mean that arbitrarily low error rates can be achieved with codes of fixed size (clearly wrong) or just that the resources required to achieve arbitraril

...(continued)