- The class $\MIP^*$ of promise problems that can be decided through an interactive proof system with multiple entangled provers provides a complexity-theoretic framework for the exploration of the nonlocal properties of entanglement. Little is known about the power of this class. The only proposed approach for establishing upper bounds is based on a hierarchy of semidefinite programs introduced independently by Pironio et al. and Doherty et al. This hierarchy converges to a value that is only known to coincide with the provers' maximum success probability in a given proof system under a plausible but difficult mathematical conjecture, Connes' embedding conjecture. No bounds on the rate of convergence are known. We introduce a rounding scheme for the hierarchy, establishing that any solution to its $N$-th level can be mapped to a strategy for the provers in which measurement operators associated with distinct provers have pairwise commutator bounded by $O(\ell^2/\sqrt{N})$ in operator norm, where $\ell$ is the number of possible answers per prover. Our rounding scheme motivates the introduction of a variant of $\MIP^*$, called $\MIP_\delta^*$, in which the soundness property is required to hold as long as the commutator of operations performed by distinct provers has norm at most $\delta$. Our rounding scheme implies the upper bound $\MIP_\delta^* \subseteq \DTIME(\exp(\exp(\poly)/\delta^2))$. In terms of lower bounds we establish that $\MIP^*_{2^{-\poly}}$, with completeness $1$ and soundness $1-2^{-\poly}$, contains $\NEXP$. The relationship of $\MIP_\delta^*$ to $\MIP^*$ has connections with the mathematical literature on approximate commutation. Our rounding scheme gives an elementary proof that the Strong Kirchberg Conjecture implies that $\MIP^*$ is computable. We discuss applications to device-independent cryptography.
- We develop a framework for resource-efficient compilation of higher-level programs into lower-level reversible circuits. Our main focus is on optimizing the memory footprint of the resulting reversible networks, motivated by the limited availability of qubits for the foreseeable future. We apply three main techniques to keep the number of required qubits small when computing classical, irreversible computations by means of reversible networks: first, wherever possible we allow the compiler to make use of in-place functions that modify some of the variables. Second, we introduce an intermediate representation that traces data dependencies within the program, allowing qubits to be cleaned up early; this realizes an analog of "garbage collection" for reversible circuits. Third, we use the concept of so-called pebble games to transform irreversible programs into reversible programs under space constraints, allowing data to be erased and recomputed if needed. We introduce REVS, a compiler for reversible circuits that translates a subset of the functional programming language F# into Toffoli networks, which can then be further interpreted, for instance, in LIQui|>, a domain-specific language for quantum computing that is likewise embedded in F#. We discuss a number of test cases that illustrate the advantages of our approach, including reversible implementations of SHA-2 and other cryptographic hash functions, reversible integer arithmetic, and a test bench of combinational circuits used in classical circuit synthesis. Compared to Bennett's method, REVS can reduce space complexity by a factor of $4$ or more, with only a moderate increase in circuit size and in the time it takes to compile the reversible networks.
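The pebble-game idea above can be made concrete. Below is a minimal Python sketch (not part of REVS; all names are illustrative) of the reversible pebble game on a line of computation steps: a pebble on step $i$ models keeping that step's intermediate result in qubits, and a pebble may be placed on or removed from step $i+1$ only while step $i$ is pebbled (step $1$ is always allowed). Bennett's recursive strategy reaches step $2^k$ using only $k+1$ pebbles at the cost of $3^k$ (un)computation moves, which is exactly the space-time trade-off a space-constrained reversible compiler navigates:

```python
def bennett(start, k, moves):
    """Toggle a pebble on step start + 2**k using pebbles only in (start, start + 2**k]."""
    if k == 0:
        moves.append(start + 1)      # (un)compute one step reversibly
        return
    mid = start + 2 ** (k - 1)
    bennett(start, k - 1, moves)     # compute up to the midpoint
    bennett(mid, k - 1, moves)       # compute the second half from the midpoint
    bennett(start, k - 1, moves)     # uncompute the midpoint, freeing its qubits

def simulate(k):
    """Run the strategy, checking move legality and tracking the peak pebble count."""
    moves = []
    bennett(0, k, moves)
    pebbled, peak = set(), 0
    for i in moves:
        assert i == 1 or (i - 1) in pebbled, "illegal pebble move"
        pebbled ^= {i}               # toggle: place or remove the pebble on step i
        peak = max(peak, len(pebbled))
    return peak, len(moves), pebbled

peak, steps, final = simulate(4)     # simulate 2**4 = 16 irreversible steps
print(peak, steps, sorted(final))
```

For $k=4$ the strategy keeps at most $5$ pebbles live while leaving a single pebble on step $16$, at the cost of $3^4 = 81$ moves; Bennett's naive method would instead keep all $16$ intermediate results pebbled.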
- Oct 02 2015 quant-ph cond-mat.stat-mech arXiv:1510.00010v1 Living organisms capitalize on their ability to predict their environment to maximize their available free energy, and invest this energy in turn to create new complex structures. Is there a preferred method by which this manipulation of structure should be done? Our intuition is "simpler is better," but this is only a guiding principle. Here, we substantiate this claim through thermodynamic reasoning. We present a new framework for the manipulation of patterns (structured sequences of data) by predictive devices. We identify the dissipative costs and show how they can be minimized by the choice of memory in these predictive devices. For pattern generation, we see that simpler is indeed better. However, contrary to intuition, when it comes to extracting work from a pattern, any device capable of making statistically accurate predictions can recover all available energy.
- Oct 02 2015 quant-ph arXiv:1510.00219v1 We propose a method to detect lower bounds to quantum capacities of a noisy quantum communication channel by means of few measurements. The method is easily implementable and does not require any knowledge about the channel. We test its efficiency by studying its performance for the most well-known single-qubit noisy channels and for the generalised Pauli channel in arbitrary finite dimension.
- Oct 02 2015 quant-ph arXiv:1510.00121v1 Continuous-time quantum error correction (CTQEC) is a technique for protecting quantum information against decoherence in which both the decoherence and the error correction processes are considered continuous in time. Given any $[[n,k,d]]$ quantum stabilizer code, we formulate a class of protocols to implement CTQEC, involving weak coherent measurements and weak unitary corrections. Under this formalism, we show that the minimal required size of the ancillary system is $n-k+1$ qubits, and we propose one scheme that meets this minimal requirement. Furthermore, we compare our method with other known schemes and show that a particular measure of performance described in this paper is better when our method is used.
- Oct 02 2015 quant-ph arXiv:1510.00113v1 We present quantum algorithms to efficiently perform discriminant analysis for dimensionality reduction and classification over an exponentially large input data set. Compared with the best-known classical algorithms, the quantum algorithms show an exponential speedup in both the number of training vectors $M$ and the feature space dimension $N$. We generalize the previous quantum algorithm for solving systems of linear equations [Phys. Rev. Lett. 103, 150502 (2009)] to efficiently implement a Hermitian chain product of $k$ normalized $N\times N$ Hermitian positive-semidefinite matrices with time complexity $O(\log(N))$. Using this result, we perform linear as well as nonlinear Fisher discriminant analysis for dimensionality reduction over $M$ vectors, each in an $N$-dimensional feature space, in time $O(\log(MN)/\epsilon^{3})$, where $\epsilon$ denotes the error tolerance. We also present a quantum discriminant analysis algorithm for data classification with time complexity $O(\log(MN)/\epsilon^{3})$.
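For context, the classical computation that these quantum algorithms speed up can be sketched in a few lines. The toy example below (in Python with NumPy; the data sizes and the two-Gaussian setup are illustrative, not from the paper) runs binary Fisher discriminant analysis with plain linear algebra, whose cost scales polynomially in $M$ and $N$ rather than as $O(\log(MN))$:

```python
import numpy as np

# Toy two-class data: M = 100 vectors in an N = 2 dimensional feature space.
rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0], 1.0, size=(50, 2))   # class 1
X2 = rng.normal([3.0, 3.0], 1.0, size=(50, 2))   # class 2

# Fisher direction: w ∝ S_w^{-1} (m2 - m1), with S_w the within-class scatter.
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
w = np.linalg.solve(Sw, m2 - m1)
w /= np.linalg.norm(w)

# Project onto w and classify by a midpoint threshold between projected means.
threshold = ((X1 @ w).mean() + (X2 @ w).mean()) / 2
pred2 = X2 @ w > threshold                       # True -> assigned to class 2
print(f"class-2 accuracy: {pred2.mean():.2f}")
```

Solving the $N \times N$ system and forming the scatter matrices is the step whose cost the quantum Hermitian chain product is designed to avoid.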
- Oct 02 2015 hep-th arXiv:1510.00376v1 We review moduli stabilization in type IIB string theory compactification with fluxes. We focus on the KKLT and Large Volume Scenario (LVS). We show that the soft SUSY breaking terms predicted in the KKLT model are not phenomenologically viable. In LVS, we obtain the following relation among the scalar mass, gaugino mass, and trilinear term: $m_0 = m_{1/2} = -A_0 = m_{3/2}$, which may account for the Higgs mass limit if $m_{3/2} \sim {\cal O}(1.5)$ TeV. However, in this case the relic abundance of the lightest neutralino cannot be consistent with the measured limits. We also study the cosmological consequences of moduli stabilization in both models. In particular, the associated inflation models, such as racetrack inflation and Kähler inflation, are analyzed. Finally, the problem of moduli destabilization and the effect of string moduli backreaction on the inflation models are discussed.
- Oct 02 2015 cs.CC arXiv:1510.00354v1 In this paper we present a graph property with sensitivity $\Theta(\sqrt{n})$, where $n={v\choose2}$ is the number of variables, and generalize it to a $k$-uniform hypergraph property with sensitivity $\Theta(\sqrt{n})$, where $n={v\choose k}$ is again the number of variables. This yields the smallest sensitivity yet achieved for a $k$-uniform hypergraph property. We then show that, for even $k$, there is a $k$-uniform hypergraph property that demonstrates a quadratic gap between sensitivity and block sensitivity. This matches the previously known largest gap, found by Rubinstein (1995) for a general Boolean function and by Chakraborty (2005) for a cyclically invariant Boolean function, and is the first known example of such a gap for a graph or hypergraph property.
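To make the central quantity concrete: the sensitivity of a Boolean function $f$ is the maximum, over all inputs $x$, of the number of single-bit flips of $x$ that change $f(x)$. The brute-force Python sketch below (toy scale only; the paper's results concern asymptotics in $n$) computes it for the graph property "contains a triangle" on $v=4$ vertices, i.e. $n = \binom{4}{2} = 6$ edge variables:

```python
from itertools import combinations

v = 4
edges = list(combinations(range(v), 2))      # n = C(4, 2) = 6 Boolean variables
n = len(edges)

def has_triangle(x):
    """The graph property f: {0,1}^n -> {0,1}; x[i] says whether edges[i] is present."""
    g = {e for e, bit in zip(edges, x) if bit}
    return any(all(p in g for p in [(a, b), (a, c), (b, c)])
               for a, b, c in combinations(range(v), 3))

def sensitivity(f, n):
    """Max over all 2^n inputs of the number of sensitive bit positions."""
    best = 0
    for m in range(2 ** n):
        x = [(m >> i) & 1 for i in range(n)]
        s = sum(f(x[:i] + [1 - x[i]] + x[i + 1:]) != f(x) for i in range(n))
        best = max(best, s)
    return best

print(sensitivity(has_triangle, n))
```

At this scale the maximum is attained, e.g., on the star $K_{1,3}$, where adding any of the three edges between leaves creates a triangle; block sensitivity additionally maximizes over disjoint blocks of flipped bits and is what the paper separates quadratically from sensitivity.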
- Oct 02 2015 hep-th arXiv:1510.00349v1 AdS black holes with hyperbolic horizons provide strong-coupling descriptions of thermal CFT states on hyperboloids. The low-temperature limit of these systems is peculiar. In this note we show that, in addition to a large ground state degeneracy, these states also have an anomalously large holographic complexity, scaling logarithmically with the temperature. We speculate on whether this fact generalizes to other systems whose extreme infrared regime is formally controlled by Conformal Quantum Mechanics, such as various instances of near-extremal charged black holes.
- Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce a three-stage pipeline, consisting of pruning, quantization, and Huffman encoding, that reduces the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman encoding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory, which requires 180x more access energy.
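The three stages can be sketched on a toy weight matrix. In the Python/NumPy sketch below, the magnitude threshold, the 32-entry codebook, and the quantile-based codebook initialization are illustrative stand-ins for the paper's trained pruning and k-means quantization, and the size estimate ignores sparse-index and codebook overhead:

```python
import heapq
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0.0, 1.0, size=(64, 64)).astype(np.float32)

# Stage 1: pruning -- drop low-magnitude connections (threshold is illustrative).
mask = np.abs(W) > 0.7
vals = W[mask]                               # surviving connections

# Stage 2: weight sharing -- a 32-entry codebook, so each surviving weight is
# stored as a 5-bit centroid index instead of a 32-bit float.
codebook = np.quantile(vals, np.linspace(0, 1, 32))
idx = np.abs(vals[:, None] - codebook[None, :]).argmin(axis=1)

# Stage 3: Huffman-code the centroid indices (variable-length, prefix-free).
def huffman_lengths(freqs):
    """Return Huffman code length per symbol for a frequency table."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie,
                              {s: l + 1 for s, l in {**a, **b}.items()}))
        tie += 1
    return heap[0][2]

counts = Counter(idx.tolist())
lengths = huffman_lengths(counts)
bits = sum(counts[s] * lengths[s] for s in counts)
ratio = W.size * 32 / bits                   # vs. dense 32-bit storage
print(f"kept {vals.size}/{W.size} weights, compressed ~{ratio:.1f}x")
```

On real networks the retraining steps recover the accuracy lost to pruning and quantization, which this sketch omits.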
- The lack of reliable data in developing countries is a major obstacle towards sustainable development, food security, and disaster relief. Poverty data, for example, is typically scarce, sparse in coverage, and labor-intensive to obtain. Remote sensing data such as high-resolution satellite imagery, on the other hand, is becoming increasingly available and inexpensive. Unfortunately, such data is highly unstructured and we lack techniques to automatically extract useful insights to inform policy decisions and help direct humanitarian efforts. We propose a novel machine learning approach to extract large-scale socioeconomic indicators from high-resolution satellite imagery. The main challenge is that training data is very scarce, making it difficult to apply modern techniques such as Convolutional Neural Networks (CNN). We therefore propose a transfer learning approach where nighttime light intensities are used as a data-rich proxy. We train a fully convolutional CNN model to predict nighttime lights from daytime imagery, simultaneously learning features that are useful for poverty prediction. The model learns filters identifying different terrains and man-made structures, including roads, buildings, and farmlands, without any supervision beyond nighttime lights. We demonstrate that these learned features are highly informative for poverty mapping, even approaching the predictive performance of survey data collected in the field.
- Oct 02 2015 physics.optics cond-mat.mtrl-sci arXiv:1510.00016v1 We introduce a solid material that is itself invisible, possessing electromagnetic properties identical to those of air (i.e. not a cloak) at a desired frequency. Such a material could provide improved mechanical stability, electrical conduction, and heat dissipation to a system without disturbing incident electromagnetic radiation. One immediate application would be perfect antenna radomes. Unlike cloaks, such a transparent and self-invisible material has yet to be demonstrated. Previous research has shown that a single sphere or cylinder coated with plasmonic or dielectric layers can have a dark state with considerably suppressed scattering cross-section, due to destructive interference between two resonances in one of its scattering channels. Nevertheless, a massive collection of these objects will have an accumulated and detectable disturbance to the original field distribution. Here we overcome this bottleneck by lining up the dark-state frequencies in different channels. Specifically, we derive analytically, verify numerically, and demonstrate experimentally that deliberately designed corrugated metallic wires can have record-low scattering amplitudes, achieved by aligning the nodal frequencies of the first two scattering channels. This enables an arbitrary assembly of these wires to be omnidirectionally invisible, with effective constitutive parameters nearly identical to those of air. Measured transmission spectra at microwave frequencies reveal indistinguishable results for all the arrangements of the 3D-printed samples studied.
- In a variety of research areas, the bag of weighted vectors and the histogram are widely used descriptors for complex objects. Both can be expressed as discrete distributions. D2-clustering pursues the minimum total within-cluster variation for a set of discrete distributions under the Kantorovich-Wasserstein metric. D2-clustering has a severe scalability issue, the bottleneck being the computation of a centroid distribution that minimizes its sum of squared distances to the cluster members. In this paper, we develop three scalable optimization techniques, specifically the subgradient descent method, ADMM, and modified Bregman ADMM, for computing the centroids of large clusters without compromising the objective function. The strengths and weaknesses of these techniques are examined through experiments, and scenarios for their respective usage are recommended. Moreover, we develop both serial and parallelized versions of the algorithms, collectively named AD2-clustering. By experimenting with large-scale data, we demonstrate the computational efficiency of the new methods and investigate their convergence properties and numerical stability. The clustering results obtained on several datasets in different domains are highly competitive with those of widely used methods in the corresponding areas.
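The centroid computation named as the bottleneck has a closed form in one dimension, which makes for a useful sanity check: under the 2-Wasserstein metric, the barycenter of one-dimensional distributions is the average of their quantile functions. The sketch below (in Python with NumPy; the Gaussian toy data is illustrative and unrelated to the paper's experiments) computes it for three equal-size empirical distributions; the general discrete case, which has no such closed form, is what the subgradient and ADMM methods address:

```python
import numpy as np

rng = np.random.default_rng(2)
members = [rng.normal(mu, 1.0, size=200) for mu in (0.0, 1.0, 2.0)]

# For equal-size empirical measures, sorting gives the quantile function, and
# the W2 barycenter is the pointwise average of the sorted supports.
barycenter = np.mean([np.sort(x) for x in members], axis=0)

def w2(a, b):
    """2-Wasserstein distance between equal-size 1-D empirical measures."""
    return np.sqrt(np.mean((np.sort(a) - np.sort(b)) ** 2))

cost = sum(w2(barycenter, x) ** 2 for x in members)
print(f"barycenter mean {barycenter.mean():.2f}, total squared cost {cost:.3f}")
```

Because sorted coupling is optimal in 1-D, this barycenter's total squared cost is no larger than that of any cluster member taken as the centroid.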