# Top arXiv papers

• The existence of a positive log-Sobolev constant implies a bound on the mixing time of a quantum dissipative evolution under the Markov approximation. For classical spin systems, such a constant was proven to exist, under the assumption of a mixing condition on the Gibbs measure associated with their dynamics, via a quasi-factorization of the entropy in terms of the conditional entropy on some sub-$\sigma$-algebras. In this work we analyze analogous quasi-factorization results in the quantum case. To that end, we define the quantum conditional relative entropy and prove several quasi-factorization results for it. As an illustration of their potential, we use one of them to obtain a positive log-Sobolev constant for the heat-bath dynamics with product fixed point.
• Apr 26 2018 quant-ph arXiv:1804.09486v1
We study the performance of quantum error correction (QEC) on a system undergoing open-system (OS) dynamics. The noise on the system originates from a joint quantum channel on the system-bath composite, a framework that includes and interpolates between the commonly used system-only quantum noise channel model and the system-bath Hamiltonian noise model. We derive the perfect OSQEC conditions, with QEC recovery only on the system and not the inaccessible bath. When the noise is only approximately correctable, the generic case of interest, we quantify the performance of OSQEC using worst-case fidelity. We find that the leading deviation from unit fidelity after recovery is quadratic in the uncorrectable part, a result reminiscent of past work on approximate QEC for system-only noise, although the approach here requires the use of different techniques than in past work.
• We study the problem of transmission of classical messages through a quantum channel in several network scenarios in the one-shot setting. We consider both the entanglement assisted and unassisted cases for the point-to-point quantum channel, quantum multiple-access channel, quantum channel with state and the quantum broadcast channel. We show that it is possible to near-optimally characterize the amount of communication that can be transmitted in these scenarios, using the position-based decoding strategy introduced in a prior work [Anshu, Jain and Warsi, 2017]. In the process, we provide a short and elementary proof of the converse for entanglement-assisted quantum channel coding in terms of the quantum hypothesis testing divergence (obtained earlier in [Matthews and Wehner, 2014]). Our proof has the additional utility that it naturally extends to the various network scenarios mentioned above. Furthermore, none of our achievability results require a simultaneous decoding strategy, the existence of which is an important open question in quantum Shannon theory.
• The ability to distill quantum coherence is key for the implementation of quantum technologies; however, such a task cannot always be accomplished with certainty. Here we develop a general framework of probabilistic distillation of quantum coherence, characterizing the maximal probability of success in the operational task of extracting maximally coherent states in a one-shot setting. We investigate distillation under different classes of free operations, highlighting differences in their capabilities and establishing their fundamental limitations in state transformations. We first provide a geometric interpretation for the maximal success probability, showing that under maximally incoherent operations (MIO) and dephasing-covariant incoherent operations (DIO) the problem can be further simplified into efficiently computable semidefinite programs. Exploiting these results, we find that DIO and its subset of strictly incoherent operations (SIO) have equal power in probabilistic distillation of coherence from pure input states, while MIO are strictly stronger. We prove a fundamental no-go result: distilling coherence from any full-rank state is impossible even probabilistically. We then present a phenomenon which prohibits any trade-off between the maximal success probability and the distillation fidelity beyond a certain threshold. Finally, we consider probabilistic distillation assisted by a catalyst and demonstrate, with specific examples, its superiority to the deterministic case.
• Major obstacles to an all-optical implementation of scalable quantum communication are photon losses during transmission and the probabilistic nature of Bell measurements, which cause exponential scaling in time and resources with distance. To overcome these obstacles, whereas conventional quantum repeaters require matter-based operations with long-lived quantum memories, recent proposals have employed multiple photons encoded in entanglement, providing an alternative route to scalability. In pursuing global-scale quantum networks, the naturally arising questions are then (i) whether any ultimate limit exists in all-optical implementations, and (ii) whether and how it can be achieved (if the limit exists). Motivated by these questions, we here address the fundamental limits of the efficiency and loss tolerance of the Bell measurement, restricted not by protocols but by the laws of physics, i.e., linear optics and the no-cloning theorem. We then propose a Bell measurement scheme with linear optics and multiple photons, which enables one to reach both fundamental limits: one set by linear optics and the other by the no-cloning theorem. Remarkably, the quantum repeater based on our scheme allows one to achieve fast and efficient quantum communication over arbitrarily long distances, outperforming previous all-photonic and matter-based protocols. Our work provides a fundamental building block for all-optical scalable quantum networks with current optical technologies.
• In the problem of adaptive compressed sensing, one wants to estimate an approximately $k$-sparse vector $x\in\mathbb{R}^n$ from $m$ linear measurements $A_1 x, A_2 x,\ldots, A_m x$, where $A_i$ can be chosen based on the outcomes $A_1 x,\ldots, A_{i-1} x$ of previous measurements. The goal is to output a vector $\hat{x}$ for which $$\|x-\hat{x}\|_p \le C \cdot \min_{k\text{-sparse } x'} \|x-x'\|_q$$ with probability at least $2/3$, where $C > 0$ is an approximation factor. Indyk, Price and Woodruff (FOCS'11) gave an algorithm for $p=q=2$ and $C = 1+\epsilon$ with $O((k/\epsilon) \log\log (n/k))$ measurements and $O(\log^*(k) \log\log (n))$ rounds of adaptivity. We first improve their bounds, obtaining a scheme with $O(k \cdot \log\log (n/k) + (k/\epsilon) \cdot \log\log(1/\epsilon))$ measurements and $O(\log^*(k) \log\log (n))$ rounds, as well as a scheme with $O((k/\epsilon) \cdot \log\log (n\log (n/k)))$ measurements and an optimal $O(\log\log (n))$ rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for $(p,p)$ for every $0 < p < 2$. We show that the improvement from $O(k \log(n/k))$ measurements to $O(k \log \log (n/k))$ measurements in the adaptive setting can persist with a better $\epsilon$-dependence for other values of $p$ and $q$. For example, when $(p,q) = (1,1)$, we obtain $O(\frac{k}{\sqrt{\epsilon}} \cdot \log \log n \cdot \log^3 (\frac{1}{\epsilon}))$ measurements.
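For concreteness, the benchmark term on the right-hand side of the guarantee, the best $k$-sparse approximation error $\min_{k\text{-sparse } x'} \|x-x'\|_q$, is computed exactly by keeping the $k$ largest-magnitude entries of $x$. A minimal numpy sketch (the function name and test vector are our own, not from the paper):

```python
import numpy as np

def best_k_sparse_error(x, k, q=2):
    """Error of the best k-sparse approximation: min over k-sparse x' of ||x - x'||_q.
    The minimizer keeps the k largest-magnitude entries of x, so the error is
    the q-norm of everything else."""
    residual = x.copy()
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest-magnitude entries
    residual[idx] = 0.0                # zero them out; what remains is the tail
    return np.linalg.norm(residual, ord=q)

# A vector that is approximately 2-sparse plus small noise.
rng = np.random.default_rng(0)
x = np.zeros(100)
x[[3, 40]] = [10.0, -7.0]
x += 0.01 * rng.standard_normal(100)

err = best_k_sparse_error(x, k=2)
# Any scheme with approximation factor C = 1 + eps must output xhat with
# ||x - xhat||_2 <= (1 + eps) * err.
print(err)
```
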
• Apr 26 2018 quant-ph arXiv:1804.09467v1
The resource theory of coherence studies the operational value of superpositions in quantum technologies. A key question in this theory concerns the efficiency of manipulation and interconversion of this resource. Here we solve this question completely for mixed states of qubits by determining the optimal probabilities for mixed state conversions via stochastic incoherent operations. This implies new lower bounds on the asymptotic state conversion rate between mixed single-qubit states which in some cases is proven to be tight. Furthermore, we obtain the minimal distillable coherence for given coherence cost among all single-qubit states, which sheds new light on the irreversibility of coherence theory.
• We discuss the following variant of the standard minimum error state discrimination problem: Alice picks the state she sends to Bob from one of several disjoint state ensembles, and she communicates the chosen ensemble to him only at a later time. Two different scenarios then arise: either Bob is allowed to arrange his measurement set-up after Alice has announced the chosen ensemble, or he is forced to perform the measurement before Alice's announcement. In the latter case, he can only post-process his measurement outcome when Alice's extra information becomes available. We compare the optimal guessing probabilities in the two scenarios, and we prove that they are the same if and only if there exist compatible optimal measurements for all of Alice's state ensembles. When this is the case, post-processing any of the corresponding joint measurements is Bob's optimal strategy in the post-measurement information scenario. Furthermore, we establish a connection between discrimination with post-measurement information and standard state discrimination. By means of this connection, and exploiting the presence of symmetries, we are able to compute the various guessing probabilities in many concrete examples.
• In this paper, we explore quantum interference (QI) in molecular conductance from the point of view of graph theory and walks on lattices. By virtue of the Cayley-Hamilton theorem for characteristic polynomials and the Coulson-Rushbrooke pairing theorem for alternant hydrocarbons, it is possible to derive a finite series expansion of the Green's function for electron transmission in terms of the odd powers of the vertex adjacency matrix or Hückel matrix. This means that only odd-length walks on a molecular graph contribute to the conductivity through a molecule. Thus, if there are only even-length walks between two atoms, quantum interference is expected to occur in the electron transport between them. However, even if there are only odd-length walks between two atoms, a situation may come about where the contributions of some odd-length walks to the QI are canceled by others, leading to another class of quantum interference. For non-alternant hydrocarbons, the finite Green's function expansion may include both even and odd powers. Nevertheless, QI can in some circumstances come about for non-alternants, from the cancellation of odd- and even-length walk terms. We report some progress, but not a complete resolution, on the problem of understanding the coefficients in the expansion of the Green's function in a power series of the adjacency matrix, these coefficients being behind the cancellations that we have mentioned. We also introduce a perturbation theory for transmission, as well as some potentially useful infinite power series expansions of the Green's function.
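The walk-counting statement above is easy to check numerically: the $(i,j)$ entry of $A^n$ counts walks of length $n$ between vertices $i$ and $j$, and in a bipartite (alternant) graph, vertices in the same partition are connected only by even-length walks. A minimal numpy sketch (the 4-site path graph standing in for butadiene is our own toy choice):

```python
import numpy as np

# Adjacency matrix of a 4-site path graph (a simple alternant system):
# atoms 1-2-3-4, with bonds only between consecutive atoms.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

# (A^n)[i, j] counts walks of length n from atom i to atom j.
for n in range(1, 6):
    An = np.linalg.matrix_power(A, n)
    print(n, An[0, 3], An[0, 2])

# Atoms 1 and 4 sit in opposite partitions of this bipartite graph, so every
# walk between them has odd length; atoms 1 and 3 share a partition, so only
# even-length walks connect them, the situation where QI is expected.
```
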
• We study properties of heavy-light-heavy three-point functions in two-dimensional CFTs by using the modular invariance of two-point functions on a torus. We show that our result is non-trivially consistent with the condition of ETH (Eigenstate Thermalization Hypothesis). We also study the open-closed duality of cylinder amplitudes and derive behaviors of disk one-point functions.
• We establish nonclassicality of light as a resource for quantum metrology that is strictly quantifiable based on a quantum resource theory. We first demonstrate that every multimode pure state with negativity in the Glauber-Sudarshan P distribution provides metrological enhancement over all classical states in parameter estimation with respect to a collective quadrature operator. We then show that this metrological power serves as a measure of nonclassicality based on a quantum resource theory for multimode optical states, where the measure is a monotone under linear optics elements including phase shifters, beam splitters, and displacement operators. Our study further implies that nonclassicality quantified by the metrological power is identical to the degree of macroscopic superpositions, namely, quantum macroscopicity.
• A way to encode acceleration directly into fields has recently been proposed, thus establishing a new kind of fields, the accelerated fields. The definition of accelerated fields points to the quantization of space and time, analogously to the way quantities like energy and momentum are quantized in usual quantum field theories. The Unruh effect has been studied in connection with quantum field theory in curved spacetime, and it is described by recruiting a uniformly accelerated observer. In this work, as a first attempt to demonstrate the utility of accelerated fields, we present an alternative way to derive the Unruh effect. We show, by studying quantum field theory on quantum spacetime, that the Unruh effect can be obtained without changing the reference frame. Thus, in the framework of accelerated fields, the observational confirmation of the Unruh effect could be assigned to the existence of quantum properties of spacetime.
• We examine the entanglement properties of a system that represents two driven microwave cavities each optomechanically coupled to two separate driven optical cavities which are connected by a single-mode optical fiber. The results suggest that it may be possible to achieve near-maximal entanglement of the microwave cavities, thus allowing a teleportation scheme to enable interactions for hybrid quantum computing of superconducting qubits with optical interconnects.
• Novel view synthesis is an important problem in computer vision and graphics. Over the years a large number of solutions have been put forward to solve the problem. However, the large-baseline novel view synthesis problem is far from being "solved". Recent works have attempted to use Convolutional Neural Networks (CNNs) to solve view synthesis tasks. Due to the difficulty of learning scene geometry and interpreting camera motion, CNNs are often unable to generate realistic novel views. In this paper, we present a novel view synthesis approach based on stereo-vision and CNNs that decomposes the problem into two sub-tasks: view dependent geometry estimation and texture inpainting. Both tasks are structured prediction problems that can be effectively learned with CNNs. Experiments on the KITTI Odometry dataset show that our approach is more accurate and significantly faster than the current state-of-the-art. The code and supplementary material will be made publicly available. Results can be found here: https://youtu.be/5pzS9jc-5t0.
• We present new relations for scattering amplitudes of color ordered gluons, massive quarks and scalars minimally coupled to gravity. Tree-level amplitudes of arbitrary matter and gluon multiplicities involving one graviton are reduced to partial amplitudes in QCD or scalar QCD. The obtained relations are a direct generalization of the recently found Einstein-Yang-Mills relations. The proof of the new relation employs a simple diagrammatic argument trading the graviton-matter couplings for an 'upgrade' of a gluon coupling with a color-kinematic replacement rule enforced. The use of the Melia-Johansson-Ochirov color basis is a key element of the reduction. We comment on the generalization to multiple gravitons in the single color trace case.
• Nonreciprocal devices such as isolators and circulators are necessary to protect sensitive apparatus from unwanted noise. Recently, a variety of alternatives were proposed to replace ferrite-based commercial technologies, with the motivation to be integrated with microwave superconducting quantum circuits. Here, we review isolators realized with microwave optomechanical circuits and present a gyrator-based picture to develop an intuition on the origin of nonreciprocity in these systems. Such nonreciprocal optomechanical schemes show promise as they can be extended to circulators and directional amplifiers, with perspectives to reach the quantum limit in terms of added noise.
• Quantum anomalies lead to finite expectation values that defy the apparent symmetries of a system. These anomalies are at the heart of topological effects in fundamental, electronic, photonic and ultracold atomic systems, where they result in a unique response to external fields but generally escape a more direct observation. Here, we implement an optical-network realization of a topological discrete-time quantum walk (DTQW), which we design so that such an anomaly can be observed directly in the unique circular polarization of a topological midgap state. This feature arises in a single-step protocol that combines a chiral symmetry with a previously unexplored unitary version of supersymmetry. Having experimental access to the position and coin state of the walker, we perform a full polarization tomography and provide evidence for the predicted anomaly of the midgap states. This approach opens the prospect to distill topological states dynamically for classical and quantum information applications.
• Convolutional Neural Networks (CNNs) are restricted by their massive computation and high storage. Parameter pruning is a promising approach for CNN compression and acceleration, which aims at eliminating redundant model parameters with tolerable performance loss. Despite its effectiveness, existing regularization-based parameter pruning methods usually assign a fixed regularization parameter to all weights, which neglects the fact that different weights may have different importance to the CNN. To solve this problem, we propose a theoretically sound regularization-based pruning method that incrementally assigns different regularization parameters to different weights based on their importance to the network. On AlexNet and VGG-16, our method achieves 4x theoretical speedup with similar accuracy compared with the baselines. For ResNet-50, the proposed method also achieves 2x acceleration and only suffers 0.1% top-5 accuracy loss.
• Graph states are the backbone of measurement-based continuous-variable quantum computation. However, experimental realisations of these states induce Gaussian measurement statistics for the field quadratures, which poses a barrier to obtain a genuine quantum advantage. In this letter, we propose mode-selective photon addition and subtraction as viable and experimentally feasible pathways to introduce non-Gaussian features in such continuous-variable graph states. In particular, we investigate how the non-Gaussian properties spread among the vertices of the graph, which allows us to show the degree of control that is achievable in this approach.
• Most existing methods of semantic segmentation still suffer from two kinds of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundaries distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance of 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on the Cityscapes dataset.
• We undertake experimental detection of the entanglement present in arbitrary three-qubit pure quantum states on an NMR quantum information processor. Measurements of only four observables suffice to experimentally differentiate between the six classes of states which are inequivalent under stochastic local operation and classical communication (SLOCC). The experimental realization is achieved by mapping the desired observables onto Pauli $z$-operators of a single qubit, which is directly amenable to measurement. The detection scheme is applied to known entangled states as well as to states randomly generated using a generic scheme that can construct all possible three-qubit states. The results are substantiated via direct full quantum state tomography as well as via negativity calculations, and the comparison suggests that the protocol is indeed successful in detecting tripartite entanglement without requiring any *a priori* information about the states.
• An ideal software system in computer graphics should be a combination of innovative ideas, solid software engineering and rapid development. However, in reality these requirements are seldom met simultaneously. In this paper, we present early results on an open-source library named Taichi (http://taichi.graphics), which alleviates this practical issue by providing an accessible, portable, extensible, and high-performance infrastructure that is reusable and tailored for computer graphics. As a case study, we share our experience in building a novel physical simulation system using Taichi.
• We describe a DNN for fine-grained action classification and video captioning. It gives state-of-the-art performance on the challenging Something-Something dataset, with over 220,000 videos and 174 fine-grained actions. Classification and captioning on this dataset are challenging because of the subtle differences between actions, the use of thousands of different objects, and the diversity of captions penned by crowd actors. The model architecture shares features for classification and captioning, and is trained end-to-end. It performs much better than the existing classification benchmark for Something-Something, with impressive fine-grained results, and it yields a strong baseline on the new Something-Something captioning task. Our results reveal that there is a strong correlation between the degree of detail in the task and the ability of the learned features to transfer to other tasks.
• This paper considers stochastic optimization problems for a large class of objective functions, including convex and continuous submodular. Stochastic proximal gradient methods have been widely used to solve such problems; however, their applicability remains limited when the problem dimension is large and the projection onto a convex set is costly. Instead, stochastic conditional gradient methods are proposed as an alternative solution relying on (i) approximating gradients via a simple averaging technique requiring a single stochastic gradient evaluation per iteration; (ii) solving a linear program to compute the descent/ascent direction. The averaging technique reduces the noise of gradient approximations as time progresses, and replacing the projection step of proximal methods with a linear program lowers the computational complexity of each iteration. We show that under convexity and smoothness assumptions, our proposed method converges to the optimal objective function value at a sublinear rate of $O(1/t^{1/3})$. Further, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a $((1-1/e)OPT-\epsilon)$ guarantee with $O(1/\epsilon^3)$ stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. Additionally, we obtain a $((1/e)OPT-\epsilon)$ guarantee after using $O(1/\epsilon^3)$ stochastic gradients for the case that the objective function is continuous DR-submodular but non-monotone and the constraint set is down-closed. By using stochastic continuous optimization as an interface, we provide the first $(1-1/e)$ tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a matroid constraint and a $(1/e)$ approximation guarantee for the non-monotone case.
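A minimal sketch of the two ingredients named above, gradient averaging plus a linear-minimization step in place of a projection, on a toy quadratic over the probability simplex (the objective, noise level, and step schedules here are our own illustrative choices, not the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimize f(x) = ||x - b||^2 / 2 over the probability simplex, using only
# noisy gradient samples: (i) keep a running average of stochastic gradients,
# (ii) take a conditional-gradient step toward a simplex vertex instead of
# projecting.
b = np.array([0.1, 0.6, 0.3])

def noisy_grad(x):
    return (x - b) + 0.1 * rng.standard_normal(x.shape)

x = np.ones(3) / 3            # start at the simplex center
d = np.zeros(3)               # averaged gradient estimate

for t in range(1, 2001):
    rho = 1.0 / t ** (2 / 3)  # averaging weight, decaying over time
    d = (1 - rho) * d + rho * noisy_grad(x)
    # Linear minimization over the simplex: the minimizing vertex is the
    # coordinate with the smallest averaged-gradient entry.
    v = np.zeros(3)
    v[np.argmin(d)] = 1.0
    gamma = 2.0 / (t + 2)     # standard conditional-gradient step size
    x = (1 - gamma) * x + gamma * v

print(x)  # should approach b = (0.1, 0.6, 0.3)
```

Note that every iterate is a convex combination of simplex vertices, so feasibility is maintained for free, which is exactly why the projection step can be dropped.
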
• In this paper we deal with circulant matrices and with matrices partitioned into $n$-by-$n$ circulant blocks, and introduce spectral results concerning this class of matrices. The problem of finding lists of complex numbers corresponding to a set of eigenvalues of a nonnegative block matrix with circulant blocks is treated. Throughout the paper, we call a list realizable if its elements are the eigenvalues of a nonnegative matrix. Guo's index $\lambda_0$ of a realizable list is the minimum spectral radius such that the list (up to the initial spectral radius) together with $\lambda_0$ is realizable. Guo's index of block circulant matrices with circulant blocks is obtained, and in consequence, necessary and sufficient conditions concerning the NIEP (Nonnegative Inverse Eigenvalue Problem) for the realizability of some spectra are given.
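The spectral structure that makes circulant matrices tractable here is classical: the eigenvalues of an $n \times n$ circulant matrix are exactly the DFT of its first row, $\lambda_j = \sum_k c_k \, e^{-2\pi i jk/n}$. A minimal numpy check (the particular nonnegative first row is our own example):

```python
import numpy as np

# Eigenvalues of a circulant matrix come in closed form as the DFT of its
# first row. Build the circulant matrix explicitly and compare.
c = np.array([4.0, 1.0, 0.0, 1.0])          # nonnegative, symmetric first row
n = len(c)
C = np.array([np.roll(c, k) for k in range(n)])  # row k is c shifted by k

eig_dft = np.fft.fft(c)                      # eigenvalues via the DFT formula
eig_num = np.linalg.eigvals(C)               # numerical eigenvalues for comparison

print(np.sort(eig_dft.real))                 # real spectrum: symmetric first row
print(np.sort(eig_num.real))
```
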
• Quantum error-correction will be essential for realizing the full potential of large-scale quantum information processing devices. Fundamental to its experimental realization is the repetitive detection of errors via projective measurements of quantum correlations among qubits, and correction using conditional feedback. Performing these tasks repeatedly requires a system in which measurement and feedback decision times are short compared to qubit coherence times, where the measurement reproduces faithfully the desired projection, and for which the measurement process has no detrimental effect on the ability to perform further operations. Here we demonstrate up to 50 sequential measurements of correlations between two beryllium-ion qubits using a calcium ion ancilla, and implement feedback which allows us to stabilize two-qubit subspaces as well as Bell states. Multi-qubit mixed-species gates are used to transfer information from qubits to the ancilla, enabling quantum state detection with negligible crosstalk to the stored qubits. Heating of the ion motion during detection is mitigated using sympathetic recooling. A key element of the experimental system is a powerful classical control system, which features flexible in-sequence processing to implement feedback control. The methods employed here provide a number of essential ingredients for scaling trapped-ion quantum computing, and provide new opportunities for quantum state control and entanglement-enhanced quantum metrology.
• Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17]. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Currently available methods of computing such a bound are either time-consuming or deliver bounds too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin and Fast-Lip) that are able to certify non-trivial lower bounds of minimum distortions, by bounding the ReLU units with appropriate linear functions (Fast-Lin), or by bounding the local Lipschitz constant (Fast-Lip). Experiments show that (1) our proposed methods deliver bounds close to the exact minimum distortion found by Reluplex (the gap is 2-3X) in small MNIST networks, while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to methods based on solving linear programming problems, but our algorithms are 33-14,000 times faster; (3) our method is capable of handling large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that, in fact, there is no polynomial time algorithm that can approximately find the minimum $\ell_1$ adversarial distortion of a ReLU network with a $0.99\ln n$ approximation ratio unless $\mathsf{NP}$=$\mathsf{P}$, where $n$ is the number of neurons in the network.
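As a rough illustration of the certified-bound idea (plain interval bound propagation, a much simpler relative of Fast-Lin, which tightens such bounds with per-neuron linear relaxations; all names and the toy network below are our own):

```python
import numpy as np

def interval_bounds(weights, biases, x, eps):
    """Propagate elementwise lower/upper bounds of an l_inf ball of radius eps
    around input x through a fully-connected ReLU network. Any perturbed input
    within the ball is guaranteed to produce outputs inside [lo, hi]."""
    lo, hi = x - eps, x + eps
    for W, b in zip(weights, biases):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        # Positive weights pass lower bounds to lower bounds; negative weights
        # swap them.
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        lo, hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)  # ReLU
    return lo, hi

rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((8, 8))]
biases = [np.zeros(8), np.zeros(8)]
x = rng.standard_normal(4)

lo, hi = interval_bounds(weights, biases, x, eps=0.01)
assert np.all(lo <= hi)
print(hi - lo)   # width of the certified output interval
```

If a bound on the true-class margin computed this way stays positive over the whole ball, no perturbation of size eps can flip the prediction, which is the sense in which such bounds certify a lower bound on the minimum adversarial distortion.
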
• Despite the recent popularity of word embedding methods, there is only a small body of work exploring the limitations of these representations. In this paper, we consider one aspect of embedding spaces, namely their stability. We show that even relatively high frequency words (100-200 occurrences) are often unstable. We provide empirical evidence for how various factors contribute to the stability of word embeddings, and we analyze the effects of stability on downstream tasks.
• Apr 26 2018 cs.CV arXiv:1804.09691v1
Face recognition (FR) is one of the most extensively investigated problems in computer vision. Significant progress in FR has been made due to the recent introduction of larger scale FR challenges, particularly with constrained social media web images, e.g. high-resolution photos of celebrity faces taken by professional photo-journalists. However, the more challenging FR in unconstrained and low-resolution surveillance images remains largely under-studied. To facilitate more studies on developing FR models that are effective and robust for low-resolution surveillance facial images, we introduce a new Surveillance Face Recognition Challenge, which we call the QMUL-SurvFace benchmark. To the best of our knowledge, this new benchmark is the largest and, more importantly, the only true surveillance FR benchmark, in which low-resolution images are not synthesised by artificial under-sampling of native high-resolution images. The challenge contains 463,507 face images of 15,573 distinct identities captured in real-world uncooperative surveillance scenes over wide space and time. As a consequence, it presents an extremely challenging FR benchmark. We benchmark FR performance on this challenge using five representative deep learning face recognition models, in comparison to existing benchmarks. We show that the current state of the art is still far from satisfactory for the under-investigated surveillance FR problem in practical forensic scenarios. Face recognition is generally more difficult in the open-set setting typical of surveillance scenarios, owing to the large number of non-target people (distractors) appearing in open scenes. This is evident on the new Surveillance FR Challenge: the top-performing CentreFace deep learning FR model on the MegaFace benchmark achieves only a 13.2% success rate (at Rank-20) at a 10% false alarm rate.
• The large scale structure (LSS) of the universe is generated by the linear Gaussian density modes, which are evolved into the observed nonlinear LSS. The posterior surface of the modes is convex in the linear regime, leading to a unique global maximum (MAP), but this is no longer guaranteed in the nonlinear regime. In this paper we investigate the nature of the posterior surface using the recently developed MAP reconstruction method, with a simplified but realistic N-body simulation as the forward model. The reconstruction method uses optimization with analytic gradients from back-propagation through the simulation. For low noise cases we recover the initial conditions well into the nonlinear regime ($k\sim 1$ h/Mpc) nearly perfectly. We show that the large scale modes can be recovered more precisely than the linear expectation, which we argue is a consequence of nonlinear mode coupling. For noise levels achievable with current and planned LSS surveys the reconstruction cannot recover very small scales due to noise. We see some evidence of non-convexity, especially on smaller scales, where the mapping becomes non-injective: several very different initial conditions can lead to the same near-perfect final data reconstruction. We investigate these phenomena further using a 1-d toy gravity model, where many well separated local maxima are found to have identical data likelihood but differ in the prior. We also show that in 1-d the prior favors some solutions over the true solution, though we find no clear evidence of this in 3-d. Our main conclusion is that on very small scales and for very low noise the posterior surface is multi-modal and the global maximum may be unreachable with standard methods, while for realistic noise levels in the context of current and next generation LSS surveys the MAP optimization method is likely to be nearly optimal.
• In this Letter, we point out that the distinguishing feature of the magnetic Penrose process (MPP) is its super-high efficiency, exceeding $100\%$ (established in the mid-1980s for discrete particle accretion), of electromagnetic extraction of the rotational energy of a rotating black hole for a magnetic field of milligauss order. A similar process, also driven by the electromagnetic field, is the Blandford-Znajek mechanism (BZ), which may be envisaged as the high-magnetic-field limit of MPP, as it requires a threshold magnetic field of order $10^4$ G. Recent simulation studies of fully relativistic magnetohydrodynamic flows have borne out the super-high-efficiency signature of the process in the high-magnetic-field regime, viz. BZ. We make a clear prediction that similar simulation studies of MHD flows in the low-magnetic-field regime, where BZ would be inoperative, would also exhibit super efficiency.
• This paper describes our approach to the Disguised Faces in the Wild (DFW) 2018 challenge. The task is to verify the identity of a person among disguised and impostor images. Given the importance of face verification, it is essential to compare methods on a common platform. Our approach is based on the VGG-Face architecture paired with a contrastive loss based on a cosine distance metric. To augment the dataset, we source additional data from the internet. The experiments show the effectiveness of the approach on the DFW data. We show that adding extra data with noisy labels to the DFW dataset also helps increase the generalization performance of the network. The proposed network achieves a 27.13% absolute increase in accuracy over the DFW baseline.
• Query auto-completion is a search engine feature whereby the system suggests completed queries as the user types. Recently, the use of a recurrent neural network language model was suggested as a method of generating query completions. We show how an adaptable language model can be used to generate personalized completions and how the model can use online updating to make predictions for users not seen during training. The personalized predictions are significantly better than a baseline that uses no user information.
• Apr 26 2018 cs.IT math.IT arXiv:1804.09657v1
This paper considers a sequence of random variables generated according to a common distribution. The distribution might undergo periods of transient change at an unknown set of time instants, referred to as change-points. The objective is to sequentially collect measurements from the sequence and design a dynamic decision rule for the quickest identification of one change-point in real time while, in parallel, controlling the rate of false alarms. This setting differs from conventional change-point detection settings, in which there exists at most one change-point that can be either persistent or transient. The problem is considered under the minimax setting with a constraint on the false alarm rate before the first change occurs. It is proved that the Shewhart test achieves exact optimality under worst-case change-points and worst-case data realizations. Numerical evaluations are also provided to assess the performance of the characterized decision rule.
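The Shewhart test mentioned above can be sketched in a few lines: an alarm is raised at the first sample whose single-observation log-likelihood ratio meets a threshold. The Gaussian pre-/post-change distributions and the specific threshold below are illustrative assumptions, not the paper's setting.

```python
def shewhart_test(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=2.0):
    """Shewhart-type test (sketch): alarm at the first sample whose
    log-likelihood ratio of post-change N(mu1, sigma^2) vs. pre-change
    N(mu0, sigma^2) meets or exceeds the threshold."""
    for t, x in enumerate(samples):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= threshold:
            return t  # alarm time (0-indexed)
    return None  # no alarm raised

# In-control samples stay below the threshold; a shifted sample triggers an alarm.
print(shewhart_test([0.0, 0.1, -0.2, 3.0]))  # alarm at index 3
```

Because the statistic depends only on the current observation, the test is memoryless, which is the structural reason it can be exactly optimal under worst-case change-points.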
• We present a simple but effective method for automatic latent fingerprint segmentation, called SegFinNet. SegFinNet takes a latent image as input and outputs a binary mask highlighting the friction ridge pattern. Our algorithm combines a fully convolutional neural network and a detection-based approach to process the entire input latent image in one shot instead of using latent patches. Experimental results on three different latent databases (i.e. NIST SD27, WVU, and an operational forensic database) show that SegFinNet outperforms both human markup for latents and the state-of-the-art latent segmentation algorithms. Our latent segmentation algorithm takes on average 457 msec/latent (NIST SD27) and 361 msec/latent (WVU) on an Nvidia GTX 1080 Ti with 12 GB of memory. We show that this improved cropping, in turn, boosts the hit rate of a latent fingerprint matcher.
• We apply deep neural networks (DNN) to data from the EXO-200 experiment. In the studied cases, the DNN is able to reconstruct the relevant parameters - total energy and position - directly from raw digitized waveforms, with minimal exceptions. The accuracy of reconstruction, as evaluated on calibration data, either reaches or exceeds what was achieved by the conventional approaches developed by EXO-200 over the course of the experiment. An accurate Monte Carlo simulation is usually a prerequisite for a successful reconstruction with DNNs. We describe how in an experiment such as EXO-200 certain reconstruction and analysis tasks could be successfully performed by training the network on waveforms from experimental data, either reducing or eliminating the reliance on the Monte Carlo.
• Working in subsystems of second order arithmetic, we formulate several representations for hypergraphs. We then prove the equivalence of various vertex coloring theorems to ${\sf WKL}_0$, ${\sf ACA}_0$ and $\Pi^1_1$-${\sf CA}_0$.
• Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1) providing an opportunity to study this important artifact. The dataset consists of 14.7K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR. The dataset also includes 10.7K textual peer reviews written by experts for a subset of the papers. We describe the data collection process and report interesting observed phenomena in the peer reviews. We also propose two novel NLP tasks based on this dataset and provide simple baseline models. In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline. In the second task, we predict the numerical scores of review aspects and show that simple models can outperform the mean baseline for aspects with high variance such as 'originality' and 'impact'.
• We consider the problem of finding critical points of functions that are non-convex and non-smooth. Studying a fairly broad class of such problems, we analyze the behavior of three gradient-based methods (gradient descent, proximal update, and Frank-Wolfe update). For each of these methods, we establish rates of convergence for general problems, and also prove faster rates for continuous sub-analytic functions. We also show that our algorithms can escape strict saddle points for a class of non-smooth functions, thereby generalizing known results for smooth functions. Our analysis leads to a simplification of the popular CCCP algorithm, used for optimizing functions that can be written as a difference of two convex functions. Our simplified algorithm retains all the convergence properties of CCCP, along with a significantly lower cost per iteration. We illustrate our methods and theory via applications to the problems of best subset selection, robust estimation, mixture density estimation, and shape-from-shading reconstruction.
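The proximal update analyzed in the paper above can be illustrated on the classic composite objective $g(x) + \lambda\|x\|_1$; the quadratic choice of $g$ and the step size below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_gradient(grad_g, x0, lam, step=0.1, iters=300):
    """Proximal-gradient iteration for g(x) + lam * ||x||_1:
    a gradient step on the smooth part g, then the prox of the L1 part."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_l1(x - step * grad_g(x), step * lam)
    return x

# Toy problem: minimize 0.5 * ||x - b||^2 + lam * ||x||_1,
# whose closed-form minimizer is the soft-thresholding of b.
b = np.array([3.0, -0.2, 0.5])
x_star = proximal_gradient(lambda x: x - b, np.zeros_like(b), lam=1.0)
print(np.round(x_star, 6))  # converges to [2, 0, 0]
```

The update has the same per-iteration shape as plain gradient descent; the nonsmooth term only enters through a cheap closed-form prox, which is also why the paper's simplified CCCP variant can be much cheaper per iteration.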
• Several theories in cognitive neuroscience suggest that when people interact with the world, or simulate interactions, they do so from a first-person egocentric perspective, and seamlessly transfer knowledge between third-person (observer) and first-person (actor). Despite this, learning such models for human action recognition has not been achievable due to the lack of data. This paper takes a step in this direction, with the introduction of Charades-Ego, a large-scale dataset of paired first-person and third-person videos, involving 112 people, with 4000 paired videos. This enables learning the link between the two, actor and observer perspectives. Thereby, we address one of the biggest bottlenecks facing egocentric vision research, providing a link from first-person to the abundant third-person data on the web. We use this data to learn a joint representation of first and third-person videos, with only weak supervision, and show its effectiveness for transferring knowledge from the third-person to the first-person domain.
• In Actor and Observer we introduced a dataset linking the first- and third-person video understanding domains, the Charades-Ego Dataset. In this paper we describe the egocentric aspect of the dataset and present annotations for Charades-Ego with 68,536 activity instances in 68.8 hours of first- and third-person video, making it one of the largest and most diverse egocentric datasets available. Charades-Ego furthermore shares activity classes, scripts, and methodology with the Charades dataset, which consists of an additional 82.3 hours of third-person video with 66,500 activity instances. Charades-Ego has temporal annotations and textual descriptions, making it suitable for egocentric video classification, localization, captioning, and new tasks utilizing the cross-modal nature of the data.
• Tensor decompositions are used in various data mining applications, from social networks to medicine, and are extremely useful in discovering latent structures or concepts in the data. Many real-world applications are dynamic in nature, and so are their data. To deal with this dynamic nature of data, there exists a variety of online tensor decomposition algorithms. A central assumption in all those algorithms is that the number of latent concepts remains fixed throughout the entire stream. However, this need not be the case. Every incoming batch in the stream may have a different number of latent concepts, and the difference in latent concepts from one tensor batch to another can provide insights into how our findings in a particular application behave and deviate over time. In this paper, we define "concept" and "concept drift" in the context of streaming tensor decomposition, as the manifestation of the variability of latent concepts throughout the stream. Furthermore, we introduce SeekAndDestroy, an algorithm that detects concept drift in streaming tensor decomposition and is able to produce results robust to that drift. To the best of our knowledge, this is the first work that investigates concept drift in streaming tensor decomposition. We extensively evaluate SeekAndDestroy on synthetic datasets exhibiting a wide variety of realistic drift. Our experiments demonstrate the effectiveness of SeekAndDestroy, both in detecting concept drift and in alleviating its effects, producing results of similar quality to decomposing the entire tensor in one shot. Additionally, on real datasets, SeekAndDestroy outperforms other streaming baselines while discovering novel useful components.
• The ASVspoof challenge series was born to spearhead research in anti-spoofing for automatic speaker verification (ASV). The two challenge editions in 2015 and 2017 involved the assessment of spoofing countermeasures (CMs) in isolation from ASV using an equal error rate (EER) metric. While a strategic approach to assessment at the time, it has certain shortcomings. First, the CM EER is not necessarily a reliable predictor of performance when ASV and CMs are combined. Second, the EER operating point is ill-suited to user authentication applications, e.g. telephone banking, characterised by a high target user prior but a low spoofing attack prior. We aim to migrate from CM- to ASV-centric assessment with the aid of a new tandem detection cost function (t-DCF) metric. It extends the conventional DCF used in ASV research to scenarios involving spoofing attacks. The t-DCF metric has 6 parameters: (i) false alarm and miss costs for both systems, and (ii) prior probabilities of target and spoof trials (with an implied third, nontarget prior). The study is intended to serve as a self-contained, tutorial-like presentation. We analyse with the t-DCF a selection of top-performing CM submissions to the 2015 and 2017 editions of ASVspoof, with a focus on the spoofing attack prior. Whereas there is little to choose between countermeasure systems for lower priors, system rankings derived with the EER and t-DCF show differences for higher priors. We observe some ranking changes. Findings support the adoption of the DCF-based metric into the roadmap for future ASVspoof challenges, and possibly for other biometric anti-spoofing evaluations.
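For reference, the conventional ASV detection cost function that the t-DCF generalizes can be sketched as a prior-weighted sum of miss and false-alarm costs; the parameter values below are illustrative, and the full t-DCF additionally folds in the spoof prior and the CM error rates, as described above.

```python
def dcf(p_miss, p_fa, c_miss=1.0, c_fa=1.0, p_target=0.5):
    """Conventional detection cost function (DCF) used in ASV:
    expected cost of misses and false alarms under the target prior."""
    return c_miss * p_target * p_miss + c_fa * (1.0 - p_target) * p_fa

# A system with a 10% miss rate and a 2% false-alarm rate under a low
# target prior (telephone-banking-like conditions):
print(dcf(p_miss=0.10, p_fa=0.02, p_target=0.01))  # 0.0208
```

Note how the low target prior makes the false-alarm term dominate, which is exactly why an EER operating point (equal miss and false-alarm rates) is ill-suited to such applications.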
• We present a new framework (MADE) that produces distance and age estimates by applying a Bayesian isochrone pipeline to a combination of photometric, astrometric and spectroscopic data. For giant stars, the framework can supplement these observational constraints with posterior predictive distributions for mass from a new Bayesian spectroscopic mass estimator. The new mass estimator is a Bayesian artificial neural network (ANN) that learns the relationship between a specified set of inputs and outputs from a training set. Posterior predictive distributions for the outputs given new inputs are computed, taking into account both input uncertainties and uncertainties in the parameters of the ANN. MADE trains the ANN on stars with spectroscopic and asteroseismology data, enabling posterior predictive distributions for the present masses of giant stars to be evaluated given spectroscopic data. We apply MADE to $\sim10\,000$ red giants in the overlap between the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and the Tycho-Gaia astrometric solution (TGAS). The ANN is trained on a subsample of these stars with new asteroseismology determinations of mass, and is able to predict the masses to a degree of uncertainty similar to the measurement uncertainty. In particular, it is able to reduce the uncertainty for those stars with the highest measurement uncertainty. Using these masses in the Bayesian isochrone pipeline along with photometric and astrometric data, we obtain distance estimates with uncertainties of order $\sim 10\%$ and age estimates with uncertainties of order $\sim 20\%$. Our resulting catalogue clearly shows the expected thick and thin disc components in the [M/H]-[$\alpha$/M] plane when examined by age.
• Apr 26 2018 math.NT math.CO arXiv:1804.09594v1
Consider the sequence $\mathcal{V}(2,n)$ constructed in a greedy fashion by setting $a_1 = 2$, $a_2 = n$ and defining $a_{m+1}$ as the smallest integer larger than $a_m$ that can be written as the sum of two (not necessarily distinct) earlier terms in exactly one way; the sequence $\mathcal{V}(2,3)$, for example, is given by $$\mathcal{V}(2,3) = 2,3,4,5,9,10,11,16,22,\dots$$ We prove that if $n \geqslant 5$ is odd, then the sequence $\mathcal{V}(2,n)$ has exactly two even terms $\left\{2,2n\right\}$ if and only if $n-1$ is not a power of 2. We also show that in this case, $\mathcal{V}(2,n)$ eventually becomes a union of arithmetic progressions. If $n-1$ is a power of 2, then there is at least one more even term $2n^2 + 2$, and we conjecture there are no further even terms. In the proof, we display an interesting connection between $\mathcal{V}(2,n)$ and the Sierpinski triangle. We prove several other results, discuss a series of striking phenomena and pose many problems. This work relates to existing results of Finch, Schmerl & Spiegel and to a classical family of sequences defined by Ulam.
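The greedy construction described above is easy to reproduce; a minimal sketch (the function name and the brute-force representation count are my own, not from the paper):

```python
def ulam_like(a, b, n_terms):
    """Greedily extend the sequence V(a, b): after the seeds, each new term
    is the smallest integer exceeding the last term that is the sum of two
    (not necessarily distinct) earlier terms in exactly one way."""
    seq = [a, b]
    while len(seq) < n_terms:
        m = seq[-1] + 1
        while True:
            # count unordered pairs (i <= j) of earlier terms summing to m
            reps = sum(1 for i in range(len(seq))
                       for j in range(i, len(seq))
                       if seq[i] + seq[j] == m)
            if reps == 1:
                seq.append(m)
                break
            m += 1
    return seq

print(ulam_like(2, 3, 9))  # [2, 3, 4, 5, 9, 10, 11, 16, 22]
```

The first nine terms reproduce the example $\mathcal{V}(2,3)$ above; note that integers with zero representations (such as 17 here) are skipped as well, not only those with two or more.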
• Domain adaptation is widely used in learning problems lacking labels. Recent research shows that deep adversarial domain adaptation models, which include symmetric and asymmetric architectures, can yield marked improvements in performance. However, the former have poor generalization ability whereas the latter are very hard to train. In this paper, we propose a novel adversarial domain adaptation method named Adversarial Residual Transform Networks (ARTNs) to improve generalization ability; it directly transforms the source features into the space of target features. In this model, residual connections are used to share features and the adversarial loss is reconstructed, making the model more general and easier to train. Moreover, regularization is added to the loss function to alleviate the vanishing gradient problem, which stabilizes the training process. A series of experiments on the Amazon review dataset, digits datasets and Office-31 image datasets show that the proposed ARTN method greatly outperforms state-of-the-art methods.
• We discuss the impact that Gaia, a European Space Agency (ESA) cornerstone mission in scientific operations since July 2014, is expected to have on the definition of the cosmic distance ladder and the study of resolved stellar populations in and beyond the Milky Way, focusing specifically on results based on Cepheids and RR Lyrae stars. Gaia is observing about 1.7 billion sources, measuring their positions, trigonometric parallaxes, proper motions and time-series photometry in 3 pass-bands down to a faint magnitude limit of G $\sim$ 21 mag; among them are thousands of Cepheids and hundreds of thousands of RR Lyrae stars. After five years of mission operations, the parallax errors are expected to be about 10 microarcsec for sources brighter than V $\sim$ 12-13 mag. This will allow an accurate re-calibration of the fundamental relations that make RR Lyrae stars and Cepheids primary standard candles of the cosmic distance ladder, and will provide a fresh view of the systems and structures that host these classical pulsators. Results for Cepheids and RR Lyrae stars published in Gaia Data Release 1 (DR1) are reviewed, along with perspectives on Gaia DR2, scheduled for 25 April 2018, which will contain parallaxes based only on Gaia measurements and a first full-sky mapping of RR Lyrae stars and Cepheids.
• This paper is devoted to the important yet unexplored subject of crowding effects on market impact, which we call "co-impact". Our analysis is based on a large database of metaorders by institutional investors in the U.S. equity market. We find that the market chiefly reacts to the net order flow of ongoing metaorders, without distinguishing them individually. The joint co-impact of multiple contemporaneous metaorders depends on the total number of metaorders and their mutual sign correlation. Using a simple heuristic model calibrated on data, we reproduce very well the different regimes of the empirical market impact curves as a function of volume fraction $\phi$: square-root for large $\phi$, linear for intermediate $\phi$, and a finite intercept $I_0$ when $\phi \to 0$. The value of $I_0$ grows with the sign correlation coefficient. Our study sheds light on an apparent paradox: how can a non-linear impact law survive in the presence of a large number of simultaneously executed metaorders?
• When performing localization and mapping, working at the level of structure can be advantageous in terms of robustness to environmental changes and differences in illumination. This paper presents SegMap: a map representation solution to the localization and mapping problem based on the extraction of segments in 3D point clouds. In addition to facilitating the computationally intensive task of processing 3D point clouds, working at the level of segments addresses the data compression requirements of real-time single- and multi-robot systems. While current methods extract descriptors for the single task of localization, SegMap leverages a data-driven descriptor in order to extract meaningful features that can also be used for reconstructing a dense 3D map of the environment and for extracting semantic information. This is particularly interesting for navigation tasks and for providing visual feedback to end-users such as robot operators, for example in search and rescue scenarios. These capabilities are demonstrated in multiple urban driving and search and rescue experiments. Our method leads to an increase in area under the ROC curve of 28.3% over the current state of the art using eigenvalue-based features. We also obtain reconstruction capabilities very similar to those of a model specifically trained for this task. The SegMap implementation will be made available open-source along with easy-to-run demonstrations at www.github.com/ethz-asl/segmap. A video demonstration is available at https://youtu.be/CMk4w4eRobg.
• Deep learning based models have had great success in object detection, but state-of-the-art models have not yet been widely applied to biological image data. We apply, for the first time, an object detection model previously used on natural images to identify cells and recognize their stages in brightfield microscopy images of malaria-infected blood. Many micro-organisms, like malaria parasites, are still studied by expert manual inspection and hand counting. This type of object detection task is challenging due to factors like variations in cell shape, density, and color, and uncertainty about some cell classes. In addition, annotated data useful for training is scarce, and the class distribution is inherently highly imbalanced due to the dominance of uninfected red blood cells. We use the Faster Region-based Convolutional Neural Network (Faster R-CNN), one of the top-performing object detection models of recent years, pre-trained on ImageNet but fine-tuned with our data, and compare it to a baseline based on a traditional approach consisting of cell segmentation, extraction of several single-cell features, and classification using random forests. For our initial study, we collect and label a dataset of 1300 fields of view containing around 100,000 individual cells. We demonstrate that Faster R-CNN outperforms our baseline and put the results in the context of human performance.

Max Lu Apr 25 2018 22:08 UTC

"This is a very inspiring paper! The new framework (ZR = All Reality) it provides allows us to understand all kinds of different reality technologies (VR, AR, MR, XR etc) that are currently loosely connected to each other and have been confusing to many people. Instead of treating our perceived sens

...(continued)
Stefano Pirandola Apr 23 2018 12:23 UTC

The most important reading here is Sam Braunstein's foundational paper: https://authors.library.caltech.edu/3827/1/BRAprl98.pdf published in January 98, already containing the key results for the strong convergence of the CV protocol. This is a must-read for those interested in CV quantum informatio

...(continued)
Mark M. Wilde Apr 23 2018 12:09 UTC

One should also consult my paper "Strong and uniform convergence in the teleportation simulation of bosonic Gaussian channels" https://arxiv.org/abs/1712.00145v4 posted in January 2018, in this context.

Stefano Pirandola Apr 23 2018 11:46 UTC

Some quick clarifications on the Braunstein-Kimble (BK) protocol for CV teleportation
and the associated teleportation simulation of bosonic channels.
(Disclaimer: the following is rather technical and CVs might not be so popular on this blog...so I guess this post will get a lot of dislikes :)

1)

...(continued)
NJBouman Apr 22 2018 18:26 UTC

[Fredrik Johansson][1] has pointed out to me (the author) the following about the multiplication benchmark w.r.t. GMP. This will be taken into account in the upcoming revision.

Fredrik Johansson wrote:
> You shouldn't be comparing your code to `mpn_mul`, because this function is not actually th

...(continued)
Joel Wallman Apr 18 2018 13:34 UTC

A very nice approach! Could you clarify the conclusion a little bit though? The aspirational goal for a quantum benchmark is to test how well we approximate a *specific* representation of a group (up to similarity transforms), whereas what your approach demonstrates is that without additional knowle

...(continued)
serfati philippe Mar 29 2018 14:07 UTC

see my 2 papers on direction of vorticity (nov1996 + feb1999) = https://www.researchgate.net/profile/Philippe_Serfati (published author, see also mendeley, academia.edu, orcid etc)

serfati philippe Mar 29 2018 13:34 UTC

see my 4 papers, 1998-1999, on contact and superposed vortex patches, cusps (and eg splashs), corners, generalized ones on lR^n and (ir/)regular ones =. http://www.researchgate.net/profile/Philippe_Serfati/ (published author).

Luis Cruz Mar 16 2018 15:34 UTC

Related Work:

- [Performance-Based Guidelines for Energy Efficient Mobile Applications](http://ieeexplore.ieee.org/document/7972717/)
- [Leafactor: Improving Energy Efficiency of Android Apps via Automatic Refactoring](http://ieeexplore.ieee.org/document/7972807/)

Dan Elton Mar 16 2018 04:36 UTC