# Top arXiv papers

• Quantum-limited amplifiers increase the amplitude of a signal at the price of introducing additional noise. Quantum purification protocols operate in the reverse way, reducing the noise while attenuating the signal. Here we investigate a scenario that interpolates between these two extremes. We search for the physical process that produces the best approximation of a pure, amplified coherent state, starting from multiple copies of a noisy coherent state with Gaussian modulation. We identify the optimal quantum processes, considering both deterministic and probabilistic processes, and we give benchmarks that can be used to certify the experimental demonstration of genuine quantum-enhanced amplification.
• We introduce a toy holographic correspondence based on the multi-scale entanglement renormalization ansatz (MERA) representation of ground states of local Hamiltonians. Given a MERA representation of the ground state of a local Hamiltonian acting on a one-dimensional 'boundary' lattice, we lift it to a tensor network representation of a quantum state of a dual two-dimensional 'bulk' hyperbolic lattice. The dual bulk degrees of freedom are associated with the bonds of the MERA, which describe the renormalization group flow of the ground state, and the bulk tensor network is obtained by inserting tensors with open indices on the bonds of the MERA. We explore properties of 'copy bulk states', particular bulk states that correspond to inserting the copy tensor on the bonds of the MERA. We show that entanglement in copy bulk states is organized according to holographic screens, and that expectation values of certain extended operators in a copy bulk state, dual to a critical ground state, are proportional to $n$-point correlators of the critical ground state. We also present numerical results to illustrate, e.g., that copy bulk states, dual to ground states of several critical spin chains, have exponentially decaying correlations, and that the correlation length generally decreases with increasing central charge for these models. Our toy model illustrates a possible approach for deducing an emergent bulk description from the MERA, in light of the ongoing dialogue between tensor networks and holography.
• Jan 18 2017 stat.ML arXiv:1701.04503v1
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields now regularly eschew prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and the unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By surveying the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure prediction, quantum chemistry, materials design and property prediction. In reviewing the performance of deep neural networks, we observe that they consistently outperform non-neural-network state-of-the-art models across disparate research topics, and deep-neural-network-based models often exceed the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry.
• Kitaev's quantum double models, including the toric code, are canonical examples of quantum topological models on a 2D spin lattice. Their Hamiltonians define the groundspace by imposing an energy penalty on any nontrivial flux or charge, but they treat every such violation in the same way, so their energy spectrum is very simple. We introduce a new family of quantum double Hamiltonians with adjustable coupling constants that allow us to tune the energy of anyons while preserving the same groundspace as Kitaev's original construction. These Hamiltonians are built from commuting four-body projectors that provide an intricate splitting of the Hilbert space.
• We consider a problem introduced by Mossel and Ross [Shotgun assembly of labeled graphs, arXiv:1504.07682]. Suppose a random $n\times n$ jigsaw puzzle is constructed by independently and uniformly choosing the shape of each "jig" from $q$ possibilities. We are given the shuffled pieces. Then, depending on $q$, what is the probability that we can reassemble the puzzle uniquely? We say that two solutions of a puzzle are similar if they only differ by permutation of duplicate pieces, and rotation of rotationally symmetric pieces. In this paper, we show that, with high probability, such a puzzle has at least two non-similar solutions when $2\leq q \leq \frac{2}{\sqrt{e}}n$, all solutions are similar when $q\geq (2+\varepsilon)n$, and the solution is unique when $q=\omega(n)$.
• We present a categorical construction for modelling both definite and indefinite causal structures within a general class of process theories that include classical probability theory and quantum theory. Unlike prior constructions within categorical quantum mechanics, the objects of this theory encode fine-grained causal relationships between subsystems and give a new method for expressing and deriving consequences for a broad class of causal structures. To illustrate this point, we show that this framework admits processes with definite causal structures, namely one-way signalling processes, non-signalling processes, and quantum n-combs, as well as processes with indefinite causal structure, such as the quantum switch and the process matrices of Oreshkov, Costa, and Brukner. We furthermore give derivations of their operational behaviour using simple, diagrammatic axioms.
• A large amount of information exists in reviews written by users. This source of information has been ignored by most current recommender systems, even though it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model that learns item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in their last layers. One network focuses on learning user behaviors by exploiting reviews written by the user, and the other learns item properties from the reviews written for the item. A shared layer is introduced on top to couple the two networks together. The shared layer enables the latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets.
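The shared-layer coupling described in this abstract, where latent factors from two parallel text towers interact in a factorization-machine-like way, can be sketched minimally. The following is an illustration with hypothetical names and dimensions, not the DeepCoNN architecture itself: mean-pooled word embeddings and a linear map stand in for the paper's CNN towers, feeding a factorization-machine-style scoring layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def text_tower(token_embeddings, W):
    """Stand-in for one DeepCoNN tower: pool the word embeddings of a
    user's (or item's) reviews, then project to a latent factor vector.
    (The paper uses a CNN over word embeddings; mean-pooling plus a
    linear map keeps this sketch dependency-light.)"""
    pooled = token_embeddings.mean(axis=0)
    return W @ pooled

def fm_score(z):
    """Factorization-machine-style interaction on the shared layer:
    bias, first-order terms, and pairwise interactions over the
    concatenated user/item factor vector z."""
    w0, w = 0.1, np.full(z.shape[0], 0.05)
    V = rng.normal(scale=0.1, size=(z.shape[0], 4))  # interaction factors
    linear = w0 + w @ z
    pairwise = 0.5 * (((z @ V) ** 2).sum() - ((z ** 2) @ (V ** 2)).sum())
    return linear + pairwise

# Hypothetical inputs: 20 (resp. 30) tokens of 16-dim word embeddings.
user_reviews = rng.normal(size=(20, 16))
item_reviews = rng.normal(size=(30, 16))
Wu = rng.normal(scale=0.1, size=(8, 16))  # user tower projection
Wi = rng.normal(scale=0.1, size=(8, 16))  # item tower projection
z = np.concatenate([text_tower(user_reviews, Wu),
                    text_tower(item_reviews, Wi)])
rating_hat = fm_score(z)  # scalar predicted rating
```

In the real model both towers and the shared layer would be trained jointly on observed ratings; here the weights are random, so only the data flow is illustrated.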
• We propose to leverage concept-level representations for complex event recognition in photographs given limited training examples. We introduce a novel framework to discover event concept attributes from the web and use that to extract semantic features from images and classify them into social event categories with few training examples. Discovered concepts include a variety of objects, scenes, actions and event sub-types, leading to a discriminative and compact representation for event images. Web images are obtained for each discovered event concept and we use (pretrained) CNN features to train concept classifiers. Extensive experiments on challenging event datasets demonstrate that our proposed method outperforms several baselines using deep CNN features directly in classifying images into events with limited training examples. We also demonstrate that our method achieves the best overall accuracy on a dataset with unseen event categories using a single training example.
• While recent deep neural networks have achieved promising results for 3D reconstruction from a single-view image, they rely on the availability of RGB textures in images and extra information as supervision. In this work, we propose novel stacked hierarchical networks and an end-to-end training strategy to tackle, for the first time, a more challenging task: 3D reconstruction from a single-view 2D silhouette image. We demonstrate both qualitatively and quantitatively that our model is able to perform 3D reconstruction from a single-view silhouette image. Evaluation is performed on ShapeNet for single-view reconstruction, and results are compared against a single network to highlight the improvements obtained with the proposed stacked networks and the end-to-end training strategy. Furthermore, 3D reconstruction measured by IoU is compared with the state of the art in 3D reconstruction from a single-view RGB image, and the proposed model achieves higher IoU than that state of the art.
• Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency. However, by cleverly corrupting a subset of data used as input to a target's ML algorithms, an adversary can perturb outcomes and compromise the effectiveness of ML technology. While prior work in the field of adversarial machine learning has studied the impact of input manipulation on correct ML algorithms, we consider the exploitation of bugs in ML implementations. In this paper, we characterize the attack surface of ML programs, and we show that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than the classic adversarial machine learning techniques. We propose a semi-automated technique, called steered fuzzing, for exploring this attack surface and for discovering exploitable bugs in machine learning programs, in order to demonstrate the magnitude of this threat. As a result of our work, we responsibly disclosed five vulnerabilities, established three new CVE-IDs, and illuminated a common insecure practice across many machine learning systems. Finally, we outline several research directions for further understanding and mitigating this threat.
• Jan 18 2017 math.AG math.AC arXiv:1701.04738v1
The goal of the present article is to survey the general theory of Mori Dream Spaces, with special regard to the question: when is the blow-up of a toric variety at a general point a Mori Dream Space? We translate the question for toric surfaces of Picard number one into an interpolation problem involving points in the projective plane. An instance of such an interpolation problem is the Gonzalez-Karu theorem, which gives new examples of weighted projective planes whose blow-up at a general point is not a Mori Dream Space.
• Jan 18 2017 cs.DC arXiv:1701.04733v1
GPUs are dedicated processors used for complex calculations and simulations, and they can be used effectively for tropical algebra computations. Tropical algebra is based on max-plus algebra and min-plus algebra. In this paper we propose and design a library of Basic Tropical Algebra Subroutines (BTAS) that provides standard vector and matrix operations over tropical algebra. The BTAS library is tested by implementing a sequential version of the Floyd-Warshall algorithm on the CPU and a parallel version on the GPU. The developed library delivered substantially better results on a less expensive GPU compared to the same computations on the CPU.
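As a concrete illustration of the algebra involved, here is a minimal pure-Python sketch (not the BTAS API, whose interface this abstract does not describe) of the min-plus matrix product, the core operation of tropical linear algebra. Repeated min-plus squaring of a weighted adjacency matrix yields all-pairs shortest paths, which is why Floyd-Warshall is a natural benchmark for such a library.

```python
INF = float("inf")

def min_plus_matmul(A, B):
    """Tropical (min-plus) product: C[i][j] = min_k (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def all_pairs_shortest_paths(W):
    """Shortest-path distances via repeated tropical squaring of W.
    Squaring k times covers paths of up to 2**k edges; we stop once
    that exceeds n - 1 edges."""
    n = len(W)
    D = [row[:] for row in W]
    steps = 1
    while steps < n - 1:
        D = min_plus_matmul(D, D)
        steps *= 2
    return D

# Small directed example: edges 0->1 (3), 1->2 (4), 0->2 (10).
W = [[0, 3, 10],
     [INF, 0, 4],
     [INF, INF, 0]]
D = all_pairs_shortest_paths(W)  # D[0][2] becomes 7 via 0->1->2
```

A GPU version would parallelize the inner min-plus reduction per output entry, exactly as a dense matrix multiply is parallelized.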
• Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model used during training. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, thereby establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.
• Jan 18 2017 cs.CV q-bio.NC arXiv:1701.04674v1
Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning.
• We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for multi-scale contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state-of-the-art, and it generalizes very well to unseen categories and datasets. Particularly, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments for low-level applications on BSDS, PASCAL Context, PASCAL Segmentation, and NYUD to evaluate boundary detection performance, showing that COB provides state-of-the-art contours and region hierarchies in all datasets. We also evaluate COB on high-level tasks when coupled with multiple pipelines for object proposals, semantic contours, semantic segmentation, and object detection on various databases (MS-COCO, SBD, PASCAL VOC'07), showing that COB also improves the results for all tasks.
• Given a vertex-weighted graph $G=(V,E)$ and a set $S \subseteq V$, a subset feedback vertex set $X$ is a set of vertices of $G$ such that the graph induced by $V \setminus X$ has no cycle containing a vertex of $S$. The Subset Feedback Vertex Set problem takes as input $G$ and $S$ and asks for a subset feedback vertex set of minimum total weight. In contrast to the classical Feedback Vertex Set problem, which is obtained from the Subset Feedback Vertex Set problem by setting $S=V$, the Subset Feedback Vertex Set problem restricted to graph classes is known to be NP-complete on split graphs and, consequently, on chordal graphs. However, while Feedback Vertex Set is polynomially solvable for AT-free graphs, no such result is known for the Subset Feedback Vertex Set problem on any subclass of AT-free graphs. Here we give the first polynomial-time algorithms for the problem on two unrelated subclasses of AT-free graphs: interval graphs and permutation graphs. As a byproduct, we show that there exists a polynomial-time algorithm for circular-arc graphs by suitably applying our algorithm for interval graphs. Moreover, towards the unknown complexity of the problem for AT-free graphs, we give a polynomial-time algorithm for co-bipartite graphs. We thus contribute the first positive results for the Subset Feedback Vertex Set problem restricted to graph classes for which Feedback Vertex Set is solvable in polynomial time.
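To make the problem statement concrete, the following brute-force sketch (exponential time, purely illustrative; the graph encoding and helper names are our own, not from the paper) checks for cycles through $S$ and finds a minimum-weight subset feedback vertex set on a toy undirected graph. It uses the fact that a vertex $s$ lies on a cycle iff two distinct neighbors of $s$ are connected after removing $s$.

```python
from itertools import combinations

def connected(adj, vs, a, b):
    """DFS: is b reachable from a inside the vertex set vs?"""
    seen, stack = {a}, [a]
    while stack:
        v = stack.pop()
        if v == b:
            return True
        for u in adj[v]:
            if u in vs and u not in seen:
                seen.add(u)
                stack.append(u)
    return False

def has_s_cycle(adj, vs, S):
    """Does the graph induced on vs contain a cycle through a vertex of S?"""
    for s in S & vs:
        rest = vs - {s}
        nbrs = [u for u in adj[s] if u in rest]
        for a, b in combinations(nbrs, 2):
            if connected(adj, rest, a, b):
                return True
    return False

def min_sfvs(adj, weights, S):
    """Brute-force minimum-weight subset feedback vertex set.
    Exponential in |V|; the paper's contribution is polynomial-time
    algorithms on interval, permutation, circular-arc and co-bipartite
    graphs, which this sketch does not attempt."""
    V = set(adj)
    best, best_w = None, float("inf")
    for r in range(len(V) + 1):
        for X in combinations(sorted(V), r):
            w = sum(weights[v] for v in X)
            if w < best_w and not has_s_cycle(adj, V - set(X), S):
                best, best_w = set(X), w
    return best, best_w

# Toy graph: triangle 0-1-2 plus pendant vertex 3; S = {0}.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
weights = {0: 5, 1: 1, 2: 4, 3: 1}
X, w = min_sfvs(adj, weights, {0})  # deleting vertex 1 (weight 1) suffices
```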
• The evaluation of a query over a probabilistic database boils down to computing the probability of a suitable Boolean function, the lineage of the query over the database. The method of query compilation approaches the task in two stages: first, the query lineage is implemented (compiled) in a circuit form where probability computation is tractable; and second, the desired probability is computed over the compiled circuit. A basic theoretical quest in query compilation is that of identifying pertinent classes of queries whose lineages admit compact representations over increasingly succinct, tractable circuit classes. Fostering previous work by Jha and Suciu (2012) and Petke and Razgon (2013), we focus on queries whose lineages admit circuit implementations with small treewidth, and investigate their compilability within tame classes of decision diagrams. In perfect analogy with the characterization of bounded circuit pathwidth by bounded OBDD width, we show that a class of Boolean functions has bounded circuit treewidth if and only if it has bounded SDD width. Sentential decision diagrams (SDDs) are central in knowledge compilation, being essentially as tractable as OBDDs but exponentially more succinct. By incorporating constant width SDDs and polynomial size SDDs, we refine the panorama of query compilation for unions of conjunctive queries with and without inequalities.
• Recently there has been enormous interest in generative models for images in deep learning. In this pursuit, Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) have surfaced as the two most prominent and popular models. While VAEs tend to produce excellent reconstructions but blurry samples, GANs generate sharp but slightly distorted images. In this paper we propose a new model called Variational InfoGAN (ViGAN). Our aim is twofold: (i) to generate new images conditioned on visual descriptions, and (ii) to modify an image by fixing its latent representation and varying the visual description. We evaluate our model on Labeled Faces in the Wild (LFW), CelebA and a modified version of MNIST, and demonstrate the ability of our model to generate new images as well as to modify a given image by changing attributes.
• Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession. Despite recent advances in deep learning that attain impressive results in many domains, pain estimation risks being unable to benefit from them because of the difficulty of obtaining datasets of considerable size. In this work we propose a combination of hand-crafted and deep-learned features that makes the most of deep learning techniques in small-sample settings. Encoding shape, appearance, and dynamics, our method significantly outperforms the current state of the art, attaining an RMSE of less than 1 point on a 16-level pain scale, while simultaneously scoring a 67.3% Pearson correlation coefficient between our predicted pain level time series and the ground truth.
• Most existing community-related studies focus on detection, which aims to find the community membership of each user from user friendship links. However, membership alone, without a complete profile of what a community is and how it interacts with other communities, has limited applications. This motivates us to systematically profile communities and thereby develop useful community-level applications. In this paper, we formalize the concept of community profiling for the first time. With rich user information on the network, such as user-published content and user diffusion links, we characterize a community in terms of both its internal content profile and its external diffusion profile. The difficulty of community profiling is often underestimated. We identify three unique challenges and propose a joint Community Profiling and Detection (CPD) model to address them accordingly. We also contribute a scalable inference algorithm, which scales linearly with the data size and is easily parallelizable. We evaluate CPD on large-scale real-world datasets and show that it is significantly better than the state-of-the-art baselines in various tasks.
• This volume contains the papers presented at LINEARITY 2016, the Fourth International Workshop on Linearity, held on June 26, 2016 in Porto, Portugal. The workshop was a one-day satellite event of FSCD 2016, the first International Conference on Formal Structures for Computation and Deduction. The aim of this workshop was to bring together researchers who are developing theory and applications of linear calculi, to foster their interaction and provide a forum for presenting new ideas and work in progress, and enable newcomers to learn about current activities in this area. Of interest were new results that made a central use of linearity, ranging from foundational work to applications in any field. This included: sub-linear logics, linear term calculi, linear type systems, linear proof-theory, linear programming languages, applications to concurrency, interaction-based systems, verification of linear systems, and biological and chemical models of computation.
• In recent times, the use of separable convolutions in deep convolutional neural network architectures has been explored. Several researchers, most notably Chollet (2016) and Ghosh (2017), have used separable convolutions in their deep architectures and have demonstrated state-of-the-art or close to state-of-the-art performance. However, the underlying mechanism of action of separable convolutions is still not fully understood. Although their mathematical definition is well understood as a depthwise convolution followed by a pointwise convolution, deeper interpretations such as the extreme Inception hypothesis (Chollet, 2016) have failed to provide a thorough explanation of their efficacy. In this paper, we propose a hybrid interpretation that we believe is a better model for explaining the efficacy of separable convolutions.
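Part of the practical appeal of separable convolutions follows directly from the definition quoted above (a depthwise convolution followed by a pointwise convolution): the parameter count drops dramatically. A quick back-of-the-envelope comparison, with a layer shape chosen purely for illustration:

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution mixing channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical mid-network layer: 256 -> 256 channels, 3 x 3 kernel.
std = standard_conv_params(256, 256, 3)   # 589,824 parameters
sep = separable_conv_params(256, 256, 3)  # 2,304 + 65,536 = 67,840
```

The roughly 8.7x reduction here is representative, though the paper's point is that parameter efficiency alone does not explain the accuracy of these layers.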
• 1. Analog forecasting has been successful at producing robust forecasts for a variety of ecological and physical processes. Analog forecasting is a mechanism-free nonlinear method that forecasts a system forward in time by examining how past states deemed similar to the current state moved forward. Previous work on analog forecasting has typically been presented in an empirical or heuristic context, as opposed to a formal statistical context. 2. The model presented here extends the model-based analog method of McDermott and Wikle (2016) by placing analog forecasting within a fully hierarchical statistical framework. In particular, a Bayesian hierarchical spatio-temporal Poisson analog forecasting model is formulated. 3. In comparison to a Poisson Bayesian hierarchical model with a latent dynamical spatio-temporal process, the hierarchical analog model consistently produced more accurate forecasts. By using a Bayesian approach, the hierarchical analog model is able to rigorously quantify the uncertainty associated with forecasts. 4. Forecasting of waterfowl settling patterns in the northwestern United States and Canada is conducted by applying the hierarchical analog model to a breeding population survey dataset. Sea Surface Temperature (SST) in the Pacific Ocean is used to help identify potential analogs for the waterfowl settling patterns.
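The core analog idea in point 1, forecasting by looking at how similar past states moved forward, can be sketched in a few lines. This is a deliberately simplified nearest-neighbor version on a scalar series, not the hierarchical Bayesian spatio-temporal model of the paper:

```python
def analog_forecast(history, current, k=3):
    """Forecast the next value by averaging the successors of the k
    past states most similar (closest) to `current`."""
    # Candidate analogs: every past state except the last (no successor).
    candidates = [(abs(x - current), history[i + 1])
                  for i, x in enumerate(history[:-1])]
    candidates.sort(key=lambda t: t[0])          # most similar first
    successors = [succ for _, succ in candidates[:k]]
    return sum(successors) / len(successors)

# Noisy-periodic toy series: states near a peak have historically
# moved back toward zero, so their analogs predict that move.
series = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
pred = analog_forecast(series, current=0.95, k=2)  # analogs are the peaks
```

The hierarchical model in the paper replaces this hard nearest-neighbor choice with a probability distribution over analogs, which is what allows rigorous uncertainty quantification.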
• This paper is a tutorial for newcomers to the field of automated verification tools, though we assume the reader to be relatively familiar with Hoare-style verification. In this paper, besides introducing the most basic features of the language and verifier Dafny, we place special emphasis on how to use Dafny as an assistant in the development of verified programs. Our main aim is to encourage the software engineering community to make the move towards using formal verification tools.
• We formulate three current models of discrete-time quantum walks in a combinatorial way. These walks are shown to be closely related to rotation systems and 1-factorizations of graphs. For two of the models, we compute the traces and total entropies of the average mixing matrices for some cubic graphs. The trace captures how likely a quantum walk is to revisit the state it started with, and the total entropy measures how close the limiting distribution is to uniform. Our numerical results indicate three relations between quantum walks and graph structures: for the first model, rotation systems with higher genera give lower traces and higher entropies, and for the second model, the symmetric 1-factorizations always give the highest trace.
• How much can pruning algorithms teach us about the fundamentals of learning representations in neural networks? A lot, it turns out. Neural network model compression has become a topic of great interest in recent years, and many different techniques have been proposed to address this problem. In general, this is motivated by the idea that smaller models typically lead to better generalization. At the same time, the decision of what to prune and when to prune necessarily forces us to confront our assumptions about how neural networks actually learn to represent patterns in data. In this work we set out to test several long-held hypotheses about neural network learning representations and numerical approaches to pruning. To accomplish this we first reviewed the historical literature and derived a novel algorithm to prune whole neurons (as opposed to the traditional method of pruning weights) from optimally trained networks using a second-order Taylor method. We then set about testing the performance of our algorithm and analyzing the quality of the decisions it made. As a baseline for comparison we used a first-order Taylor method based on the Skeletonization algorithm and an exhaustive brute-force serial pruning algorithm. Our proposed algorithm worked well compared to a first-order method, but not nearly as well as the brute-force method. Our error analysis led us to question the validity of many widely-held assumptions behind pruning algorithms in general and the trade-offs we often make in the interest of reducing computational complexity. We discovered that there is a straightforward way, however expensive, to serially prune 40-70% of the neurons in a trained network with minimal effect on the learning representation and without any re-training.
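The brute-force serial pruning baseline described in this abstract can be sketched as follows: repeatedly remove whichever remaining neuron changes the network's output the least. This is a toy numpy network with hypothetical sizes, not the authors' code, and it uses output change rather than their Taylor-expansion criteria:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, W1, W2, mask):
    """Tiny one-hidden-layer ReLU network; `mask` zeroes pruned neurons."""
    h = np.maximum(0.0, x @ W1) * mask
    return h @ W2

# Hypothetical setup: 4 inputs, 8 hidden neurons, 1 output, random weights.
X = rng.normal(size=(64, 4))
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))
mask = np.ones(8)
baseline = mlp(X, W1, W2, mask)

# Brute-force serial pruning: at each step, try every surviving neuron
# and permanently remove the one whose removal perturbs the output least.
order = []
while mask.sum() > 4:  # prune half the hidden layer
    best_j, best_err = None, np.inf
    for j in range(8):
        if mask[j] == 0:
            continue
        trial = mask.copy()
        trial[j] = 0.0
        err = np.mean((mlp(X, W1, W2, trial) - baseline) ** 2)
        if err < best_err:
            best_j, best_err = j, err
    mask[best_j] = 0.0
    order.append(best_j)
```

The cost is one forward pass per surviving neuron per pruning step, which is exactly the expense the first- and second-order Taylor methods in the paper are designed to avoid.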
• We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf-252, Am-Be). This category includes a vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source's intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutron double-scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques to measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial distance relative resolution of 26%. To demonstrate the technique's potential with an optimized system we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance resulted in the radial distance relative resolution decreasing to an average of 11%.
• We prove a downward separation for $\mathsf{\Sigma}_2$-time classes. Specifically, we prove that if $\Sigma_2$E does not have polynomial size non-deterministic circuits, then $\Sigma_2$SubEXP does not have fixed polynomial size non-deterministic circuits. To achieve this result, we use Santhanam's technique on augmented Arthur-Merlin protocols defined by Aydinlioğlu and van Melkebeek. We show that augmented Arthur-Merlin protocols with one bit of advice do not have fixed polynomial size non-deterministic circuits. We also prove a weak unconditional derandomization of a certain type of promise Arthur-Merlin protocols. Using Williams' easy hitting set technique, we show that $\Sigma_2$-promise AM problems can be decided in $\Sigma_2$SubEXP with $n^c$ advice, for some fixed constant $c$.
• Recently a repeating fast radio burst (FRB) 121102 has been confirmed to be an extragalactic event and a persistent radio counterpart has been identified. While other possibilities are not ruled out, the emission properties are broadly consistent with theoretical suggestions of Murase et al. (2016) for quasi-steady nebula emission from a pulsar-driven supernova remnant as a counterpart of FRBs. Here we constrain the model parameters of such a young neutron star scenario for FRB 121102. If the associated supernova has a conventional ejecta mass of $M_{\rm ej}\gtrsim{\rm a \ few}\ M_\odot$, a neutron star with an age of $t_{\rm age} \sim 10-100 \ \rm yrs$, an initial spin period of $P_{\rm i} \lesssim$ a few ms, and a dipole magnetic field of $B_{\rm dip} \sim 10^{12-13} \ \rm G$ can be compatible with the observations. However, in this case, the magnetically-powered scenario may be more favored as an FRB energy source because of the efficiency problem in the rotation-powered scenario. On the other hand, if the associated supernova is an ultra-stripped one with $M_{\rm ej} \sim 0.1 \ M_\odot$, a younger neutron star with $t_{\rm age} \sim 1-10$ yrs can be the persistent radio source and might produce FRBs with the spin-down power. These possibilities could be distinguished by the decline rate of the quasi-steady radio counterpart.
• Electric-field noise from the surfaces of ion-trap electrodes couples to the ion's charge causing heating of the ion's motional modes. This heating limits the fidelity of quantum gates implemented in quantum information processing experiments. The exact mechanism that gives rise to electric-field noise from surfaces is not well-understood and remains an active area of research. In this work, we detail experiments intended to measure ion motional heating rates with exchangeable surfaces positioned in close proximity to the ion, as a sensor to electric-field noise. We have prepared samples with various surface conditions, characterized in situ with scanned probe microscopy and electron spectroscopy, ranging in degrees of cleanliness and structural order. The heating-rate data, however, show no significant differences between the disparate surfaces that were probed. These results suggest that the driving mechanism for electric-field noise from surfaces is due to more than just thermal excitations alone.
• The black hole information paradox presumes that quantum field theory in curved spacetime can provide unitary propagation from a near-horizon mode to an asymptotic Hawking quantum. Instead of invoking conjectural quantum gravity effects to modify such an assumption, we propose a self-consistency check. We establish an analogy to Feynman's analysis of a double-slit experiment. Feynman showed that unitary propagation of the interfering particles, namely ignoring the entanglement with the double-slit, becomes an arbitrarily reliable assumption when the screen upon which the interference pattern is projected is infinitely far away. We argue for an analogous self-consistency check for quantum field theory in curved spacetime. We apply it to the propagation of Hawking quanta and test whether ignoring the entanglement with the geometry also becomes arbitrarily reliable in the limit of a large black hole. We present curious results to suggest a negative answer, and we discuss how this loss of naive unitarity in QFT might be related to a solution of the paradox based on the soft-hair-memory effect.
• A method for measuring the real part of the weak (local) value of spin is presented using a variant on the original Stern-Gerlach apparatus. The experiment utilises metastable helium in the $\rm 2^{3}S_{1}$ state. A full simulation using the impulsive approximation has been carried out and it predicts a displacement of the beam by $\rm \Delta_{w} = 17 - 33\,\mu m$. This is on the limit of our detector resolution and we will discuss ways of increasing $\rm \Delta_{w}$. The simulation also indicates how we might observe the imaginary part of the weak value.
• We study positive solutions to the heat equation on graphs. We prove variants of the Li-Yau gradient estimate and the differential Harnack inequality. For some graphs, we can show the estimates to be sharp. We establish new computation rules for differential operators on discrete spaces and introduce a relaxation function that governs the time dependency in the differential Harnack estimate.
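For orientation, the classical (manifold) estimates that the graph variants here parallel read as follows: for a positive solution $u$ of the heat equation on an $n$-dimensional manifold with nonnegative Ricci curvature (standard statements, not taken from the paper),

```latex
\frac{|\nabla u|^{2}}{u^{2}} - \frac{\partial_t u}{u} \le \frac{n}{2t}
\qquad \Longrightarrow \qquad
u(x,t_1) \le u(y,t_2)\left(\frac{t_2}{t_1}\right)^{n/2}
\exp\!\left(\frac{d(x,y)^{2}}{4(t_2 - t_1)}\right), \quad 0 < t_1 < t_2 .
```

The first is the Li-Yau gradient estimate; integrating it along paths in space-time yields the differential Harnack inequality on the right.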
• We provide a formalism to calculate the cubic interaction vertices of the stable string bit model, in which string bits have $s$ spin degrees of freedom but no space to move. With the vertices, we obtain a formula for one-loop self-energy, i.e., the $\mathcal{O}\left(1/N^{2}\right)$ correction to the energy spectrum. A rough analysis shows that, when the bit number $M$ is large, the ground state one-loop self-energy $\Delta E_{G}$ should scale as $M^{5-s/4}$ for even $s$ and $M^{4-s/4}$ for odd $s$. Particularly, in the case of protostring, where the Grassmann dimension $s=24$, we have $\Delta E_{G}\sim1/M$, which resembles the Poincaré invariant relation of $1+1$ dimension $P^{-}\sim1/P^{+}$. We calculate analytically the one-loop correction for the ground energies with $M=3$ and $s=1,\,2$. We then numerically confirm that the large $M$ behavior holds for $s\leq4$ cases.
• On an asymptotically flat manifold $M^n$ with nonnegative scalar curvature, with outer minimizing boundary $\Sigma$, we prove a Penrose-like inequality in dimensions $n < 8$, under suitable assumptions on the mean curvature and the scalar curvature of $\Sigma$.
• We estimate the possible accuracies of measurements at the proposed CLIC $e^+e^-$ collider of Higgs and $W^+W^-$ production at centre-of-mass energies up to 3 TeV, incorporating also Higgsstrahlung projections at higher energies that had not been considered previously, and use them to explore the prospective CLIC sensitivities to decoupled new physics. We present the resulting constraints on the Wilson coefficients of dimension-6 operators in a model-independent approach based on the Standard Model effective field theory (SM EFT). The higher centre-of-mass energy of CLIC, compared to other projects such as the ILC and CEPC, gives it greater sensitivity to the coefficients of some of the operators we study. We find that CLIC Higgs measurements may be sensitive to new physics scales $\Lambda = \mathcal{O}(10)$ TeV for individual operators, reduced to $\mathcal{O}(1)$ TeV sensitivity for a global fit marginalising over the coefficients of all contributing operators. We give some examples of the corresponding prospective constraints on specific scenarios for physics beyond the SM, including stop quarks and the dilaton/radion.
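For reference, the dimension-6 SM EFT expansion referred to here has the standard form (normalization conventions vary between fits; this is one common choice):

```latex
\mathcal{L}_{\rm SMEFT} = \mathcal{L}_{\rm SM}
 + \sum_i \frac{\bar{c}_i}{\Lambda^2}\,\mathcal{O}_i^{(6)}
 + \mathcal{O}\!\left(\Lambda^{-4}\right),
```

so a measurement that bounds a combination $\bar{c}_i/\Lambda^2$ translates into sensitivity to scales $\Lambda/\sqrt{\bar{c}_i}$, which is why higher-energy machines gain reach for operators whose effects grow with energy.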
• A promising approach to designing mesostructured materials with novel physical behavior is to combine the unique optical and electronic properties of solid nanoparticles with the long-range ordering and facile response of soft matter to weak external stimuli. Here we design, practically realize, and characterize orientationally ordered nematic liquid crystalline dispersions of rod-like upconversion nanoparticles. Boundary conditions on the particle surfaces, defined through surface functionalization, promote spontaneous unidirectional self-alignment of the dispersed rod-like nanoparticles, mechanically coupled to the molecular ordering direction of the thermotropic nematic liquid crystal host. As the host is electrically switched at low voltages of ~1 V, the nanorods rotate, yielding tunable upconversion and polarized luminescence properties of the composite. We characterize the spectral and polarization dependencies, explain them by invoking models of electrical switching and of the upconversion dependence on the crystalline matrices of the nanorods, and discuss potential practical uses.
• We report on subarcsecond observations of complex organic molecules (COMs) in the high-mass protostar IRAS20126+4104 with the Plateau de Bure Interferometer in its most extended configurations. In addition to the simple molecules SO, HNCO and H2-13CO, we detect emission from CH3CN, CH3OH, HCOOH, HCOOCH3, CH3OCH3, CH3CH2CN, CH3COCH3, NH2CN, and (CH2OH)2. SO and HNCO present an X-shaped morphology consistent with tracing the outflow cavity walls. Most of the COMs have their peak emission at the putative position of the protostar, but also show an extension towards the south(east), coinciding with an H2 knot from the jet at about 800-1000 au from the protostar. This is especially clear in the case of H2-13CO and CH3OCH3. We fitted the spectra at representative positions for the disc and the outflow, and found that the abundances of most COMs are comparable at both positions, suggesting that COMs are enhanced in shocks as a result of the passage of the outflow. By coupling a parametric shock model to a large gas-grain chemical network including COMs, we find that the observed COMs should survive in the gas phase for about 2000 yr, comparable to the shock lifetime estimated from the water masers at the outflow position. Overall, our data indicate that COMs in IRAS20126+4104 may arise not only from the disc, but also from dense and hot regions associated with the outflow.
• We consider a general primitively polarized K3 surface $(S,H)$ of genus $g+1$ and a 1-nodal curve $\widetilde C\in |H|$. We prove that the normalization $C$ of $\widetilde C$ has surjective Wahl map provided $g=40,42$ or $\ge 44$.
• An analytical, single-parametric, complete and orthonormal basis set consisting of the hydrogen wave-functions is put forward for \textit{ab initio} calculations of observable characteristics of an arbitrary many-electron atom. By introducing a single parameter for the whole basis set of a given atom, namely an effective charge $Z^{*}$, we find a sufficiently good analytical approximation for the atomic characteristics of all elements of the periodic table. The basis completeness allows us to perform a transition into the secondary-quantized representation for the construction of a regular perturbation theory, which includes in a natural way correlation effects and allows one to easily calculate the subsequent corrections. The hydrogen-like basis set provides a possibility to perform all summations over intermediate states in closed form, with the help of the decomposition of the multi-particle Green function in a convolution of single-electronic Coulomb Green functions. We demonstrate that our analytical zeroth-order approximation provides better accuracy than the Thomas-Fermi model and already in second-order perturbation theory our results become comparable with those via multi-configuration Hartree-Fock.
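In atomic units, a single-parameter hydrogen-like basis amounts to rescaling the ordinary hydrogen eigenfunctions by the effective charge (a sketch of the standard hydrogen-like scaling in our notation, not the paper's):

```latex
\psi_{nlm}(\mathbf{r};Z^{*}) = (Z^{*})^{3/2}\, R_{nl}(Z^{*} r)\, Y_{lm}(\theta,\varphi),
\qquad
E_{n}(Z^{*}) = -\frac{(Z^{*})^{2}}{2 n^{2}} ,
```

so completeness and orthonormality of the hydrogen set are inherited for any fixed $Z^{*}$, which is what permits the closed-form Coulomb Green function sums described above.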
• The anti-Stokes scattering and Stokes scattering in the stimulated Brillouin scattering (SBS) cascade have been investigated by Vlasov-Maxwell simulation. In the high-intensity laser-plasma interaction, stimulated anti-Stokes Brillouin scattering (SABS) occurs after the second-stage SBS rescattering. A mechanism for SABS is put forward to explain this phenomenon, and the SABS competes with the SBS rescattering to determine the total SBS reflectivity. Thus, SBS rescattering including the SABS is an important saturation mechanism of SBS, and should be taken into account in the high-intensity laser-plasma interaction.
• This paper investigates oscillation-free stability conditions of numerical methods for linear parabolic partial differential equations, with some example extrapolations to nonlinear equations. Numerical oscillations are not clearly understood and can produce infeasible results. Since oscillation-free behavior is not ensured by the usual stability conditions, a more precise condition would be useful for accurate solutions. Using Von Neumann and spectral analyses, we find and explore oscillation-free conditions for several finite difference schemes. Further relationships between oscillatory behavior and eigenvalues are supported with numerical evidence and proof. The evidence also suggests that the oscillation-free stability condition for a consistent linearization may be sufficient to provide oscillation-free stability of the nonlinear solution. These conditions are verified numerically for several example problems by visually comparing the analytical conditions to the behavior of the numerical solution over a wide range of mesh sizes.
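The gap between mere stability and oscillation-free behavior is easy to see on the textbook FTCS discretization of $u_t = u_{xx}$, whose Von Neumann amplification factor is $G(\theta) = 1 - 4r\sin^2(\theta/2)$ with $r = \Delta t/\Delta x^2$: stability requires $|G| \le 1$ (i.e. $r \le 1/2$), while keeping $G \ge 0$ (no mode flips sign each step) requires the stricter $r \le 1/4$. A minimal sketch of that classic example (ours, not necessarily one of the schemes studied in the paper):

```python
import numpy as np

def ftcs_amplification(r, theta):
    """Von Neumann amplification factor for FTCS applied to u_t = u_xx."""
    return 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2

def classify(r, n_modes=1000):
    """Classify a mesh ratio r = dt/dx^2 by scanning Fourier modes theta in [0, pi]."""
    theta = np.linspace(0.0, np.pi, n_modes)
    g = ftcs_amplification(r, theta)
    if np.any(np.abs(g) > 1.0):
        return "unstable"            # some mode grows: r > 1/2
    if np.any(g < 0.0):
        return "stable-oscillatory"  # some mode alternates sign: 1/4 < r <= 1/2
    return "stable-smooth"           # all modes decay monotonically: r <= 1/4
```

For instance, `classify(0.4)` is stable in the Von Neumann sense but oscillatory, which is exactly the regime the oscillation-free conditions above are meant to exclude.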
• We consider row sequences of (type II) Hermite-Padé approximations with common denominator associated with a vector ${\bf f}$ of formal power expansions about the origin. In terms of the asymptotic behavior of the sequence of common denominators, we describe some analytic properties of ${\bf f}$ and restate some conjectures corresponding to questions once posed by A. A. Gonchar for row sequences of Padé approximants.
• We prove a result on non-clustering of particles in a two-dimensional Coulomb plasma, which holds provided that the inverse temperature $\beta$ satisfies $\beta>1$. As a consequence we obtain a result on crystallization as $\beta\to\infty$: the particles will, on a microscopic scale, appear at a certain distance from each other. The estimation of this distance is connected to Abrikosov's conjecture that the particles should freeze up according to a honeycomb lattice when $\beta\to\infty$.
• In systems having an anisotropic electronic structure, such as the layered materials graphite, graphene and cuprates, impulsive light excitation can coherently stimulate specific bosonic modes, with exotic consequences for the emergent electronic properties. Here we show that the population of E$_{2g}$ phonons in the multiband superconductor MgB$_2$ can be selectively enhanced by femtosecond laser pulses, leading to a transient control of the number of carriers in the $\sigma$-electronic subsystem. The nonequilibrium evolution of the material optical constants is followed in the spectral region sensitive to both the a- and c-axis plasma frequencies and modeled theoretically, revealing the details of the $\sigma$-$\pi$ interband scattering mechanism in MgB$_2$.
• Jan 18 2017 hep-ph arXiv:1701.04794v1
We propose supersymmetric Majoron inflation in which the Majoron field $\Phi$ responsible for generating right-handed neutrino masses may also be suitable for giving low scale "hilltop" inflation, with a discrete lepton number $Z_N$ spontaneously broken at the end of inflation, while avoiding the domain wall problem. In the framework of non-minimal supergravity, we show that a successful spectral index can result with small running together with small tensor modes. We show that a range of heaviest right-handed neutrino masses can be generated, $m_N\sim 10^1-10^{16}$ GeV, consistent with the constraints from reheating and domain walls.
• Quasirational presentations ($QR$-presentations) of (pro-$p$) groups are studied. Such presentations include, in particular, aspherical presentations of discrete groups and their subpresentations, as well as the still mysterious pro-$p$-groups with a single defining relation. We provide a positive answer to the conjecture of O.V. Melnikov on the existence of a proper envelope $Env^p$ of aspherical presentations by showing a generalized equivalence of $\mathbb{F}_p$ and $\mathbb{Z}_p$ permutationality in the case of $QR$-presentations. Using schematization of $QR$-presentations we answer the question of Serre on one-relator pro-$p$-groups.
• Nowadays the distributed computing approach has become very popular due to several advantages over centralized computing, as it offers high-performance computing at very low cost. Each router implements some queuing mechanism to allocate resources in the best possible manner and to govern packet transmission and buffering. In this paper, different types of queuing disciplines are implemented for packet transmission when bandwidth is allocated and packets are dropped due to buffer overflow. This results in latency, as a dropped packet has to wait in a queue to be transmitted again. Some common queuing mechanisms are first-in first-out, priority queuing, and weighted fair queuing. This work targets simulation in a heterogeneous environment through a simulator tool, aiming to improve quality of service by evaluating the performance of the said queuing disciplines; this is demonstrated by interconnecting heterogeneous devices through a step topology. The authors compare data, voice and video traffic by analyzing performance in terms of packet drop rate, delay variation, end-to-end delay and queuing delay, and examine how the different queuing disciplines affect applications and the utilization of network resources at the routers. Before evaluating the performance of the connected devices, a Unified Modeling Language class diagram is designed to represent the static model for evaluating the performance of the step topology. Results are described through various case studies.
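Two of the disciplines mentioned, drop-tail FIFO and strict priority queuing, can be sketched in a few lines. This is an illustrative toy model (ours, not the simulator used by the authors):

```python
import heapq
from collections import deque

class FifoQueue:
    """Drop-tail FIFO: packets leave in arrival order; arrivals beyond
    the buffer capacity are counted as dropped."""
    def __init__(self, capacity):
        self.capacity, self.q, self.dropped = capacity, deque(), 0
    def enqueue(self, packet):
        if len(self.q) >= self.capacity:
            self.dropped += 1
        else:
            self.q.append(packet)
    def dequeue(self):
        return self.q.popleft() if self.q else None

class StrictPriorityQueue:
    """Strict priority queuing: the lowest priority number is served first
    (e.g. voice=0, video=1, data=2), regardless of arrival order."""
    def __init__(self, capacity):
        self.capacity, self.heap, self.dropped, self._seq = capacity, [], 0, 0
    def enqueue(self, packet, priority):
        if len(self.heap) >= self.capacity:
            self.dropped += 1
        else:
            heapq.heappush(self.heap, (priority, self._seq, packet))
            self._seq += 1  # FIFO tie-break within one priority class
    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None
```

With the same arrival order (data, then voice), FIFO serves the data packet first while the priority queue serves the voice packet first; that reordering is precisely what drives the differences in per-application delay the paper measures.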
• Asteroseismic parameters allow us to measure the basic stellar properties of field giants observed far across the Galaxy. Most such determinations are, up to now, based on simple scaling relations involving the large frequency separation, $\Delta\nu$, and the frequency of maximum power, $\nu_{\rm max}$. In this work, we implement $\Delta\nu$ and the period spacing, $\Delta P$, computed along detailed grids of stellar evolutionary tracks, into stellar isochrones and hence into a Bayesian method of parameter estimation. Tests with synthetic data reveal that masses and ages can be determined with typical precisions of 5 and 19 per cent, respectively, provided precise seismic parameters are available. Adding independent information on the stellar luminosity, these values can decrease to 3 and 10 per cent, respectively. The application of these methods to NGC 6819 giants produces a mean age in agreement with those derived from isochrone fitting, and no evidence of systematic differences between RGB and RC stars. The age dispersion of NGC 6819 stars, however, is larger than expected, with at least part of the spread ascribable to stars that underwent mass-transfer events.
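The "simple scaling relations" referred to are the standard ones tying mass and radius to $\nu_{\rm max}$, $\Delta\nu$ and $T_{\rm eff}$. A sketch using commonly quoted solar reference values (exact calibrations vary between studies, and the example giant below is illustrative, not from the paper):

```python
# Approximate solar reference values; published calibrations differ slightly.
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K

def scaling_mass_radius(nu_max, delta_nu, teff):
    """Standard asteroseismic scaling relations, in solar units:
    M ~ nu_max^3 * delta_nu^-4 * Teff^1.5,  R ~ nu_max * delta_nu^-2 * Teff^0.5."""
    x, y, t = nu_max / NU_MAX_SUN, delta_nu / DNU_SUN, teff / TEFF_SUN
    mass = x**3 * y**-4 * t**1.5
    radius = x * y**-2 * t**0.5
    return mass, radius

# An illustrative red-giant-like input (not a star from the paper):
m, r = scaling_mass_radius(nu_max=30.0, delta_nu=4.0, teff=4800.0)
```

Plugging in the solar values returns unit mass and radius by construction; the paper's point is that replacing this direct inversion with grid-based $\Delta\nu$ and $\Delta P$ inside a Bayesian fit improves the mass and age precision.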
• In 2010, Joyce et al. defined the leverage centrality of vertices in a graph as a means to analyze functional connections within the human brain. In this metric the degree of a vertex is compared to the degrees of all its neighbors. We investigate this property from a mathematical perspective. We first outline some of the basic properties and then compute the leverage centralities of vertices in different families of graphs. In particular, we show there is a surprising connection between the number of distinct leverage centralities in the Cartesian product of paths and the triangle numbers.
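The metric is compact to state: the leverage centrality of a vertex $v$ with degree $k_v$ is the average over its neighbours $u$ of $(k_v - k_u)/(k_v + k_u)$. A sketch of that definition (our implementation and star-graph example, as we understand Joyce et al.'s formula):

```python
from collections import defaultdict

def leverage_centrality(edges):
    """Leverage centrality: for vertex v with degree k_v,
    l(v) = (1/k_v) * sum over neighbours u of (k_v - k_u) / (k_v + k_u)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return {
        v: sum((deg[v] - deg[u]) / (deg[v] + deg[u]) for u in adj[v]) / deg[v]
        for v in adj
    }

# Star K_{1,3}: the centre dominates every neighbour, the leaves are dominated.
star = [("c", "x"), ("c", "y"), ("c", "z")]
lc = leverage_centrality(star)
# centre: each neighbour has degree 1 vs 3 -> (3-1)/(3+1) = 1/2 on average
# leaf: single neighbour of degree 3 -> (1-3)/(1+3) = -1/2
```

Positive values mark vertices that out-degree their neighbourhood; on a regular graph every vertex gets leverage 0, which is why the count of *distinct* values studied above is a meaningful invariant.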

Zoltán Zimborás Jan 12 2017 20:38 UTC

Here is a nice description, with additional links, about the importance of this work if it turns out to be flawless (thanks a lot to Martin Schwarz for this link): [dichotomy conjecture][1].

[1]: http://processalgebra.blogspot.com/2017/01/has-feder-vardi-dichotomy-conjecture.html

Noon van der Silk Jan 05 2017 04:51 UTC

This is a cool paper!

Māris Ozols Dec 27 2016 19:34 UTC

Māris Ozols Dec 16 2016 15:38 UTC

Indeed, Schur complement is the answer to the ultimate question!

J. Smith Dec 14 2016 17:43 UTC

Very good insight on Android security problems and malware. Nice work!

Stefano Pirandola Nov 30 2016 06:45 UTC

Dear Mark, thx for your comment. There are indeed missing citations to previous works by Rafal, Janek and Lorenzo that we forgot to add. Regarding your paper, I did not read it in detail but I have two main comments:

1- What you are using is completely equivalent to the tool of "quantum simulatio

...(continued)
Mark M. Wilde Nov 30 2016 02:18 UTC

An update http://arxiv.org/abs/1609.02160v2 of this paper has appeared, one day after the arXiv post http://arxiv.org/abs/1611.09165 . The paper http://arxiv.org/abs/1609.02160v2 now includes (without citation) some results for bosonic Gaussian channels found independently in http://arxiv.org/abs/16

...(continued)
Felix Leditzky Nov 29 2016 16:34 UTC

Thank you very much for the reply!

Martin Schwarz Nov 24 2016 13:53 UTC

Oded Regev writes [here][1]:

"Dear all,

Yesterday Lior Eldar and I found a flaw in the algorithm proposed
in the arXiv preprint. I do not see how to salvage anything from
the algorithm. The security of lattice-based cryptography against
quantum attacks therefore remains intact and uncha

...(continued)