- Quantum computers promise to efficiently solve not only problems believed to be intractable for classical computers, but also problems for which verifying the solution is considered intractable. This raises the question of how one can check whether quantum computers are indeed producing correct results. This task, known as quantum verification, has been highlighted as a significant challenge on the road to scalable quantum computing technology. We review the most significant approaches to quantum verification and compare them in terms of structure, complexity and required resources. We also comment on the use of cryptographic techniques which, for many of the presented protocols, has proven extremely useful in performing verification. Finally, we discuss issues related to fault tolerance, experimental implementations and the outlook for future protocols.
- Sep 21 2017 quant-ph arXiv:1709.06678v1: Fundamental questions in chemistry and physics may never be answered due to the exponential complexity of the underlying quantum phenomena. A desire to overcome this challenge has sparked a new industry of quantum technologies with the promise that engineered quantum systems can address these hard problems. A key step towards demonstrating such a system will be performing a computation beyond the capabilities of any classical computer, achieving so-called quantum supremacy. Here, using 9 superconducting qubits, we demonstrate an immediate path towards quantum supremacy. By individually tuning the qubit parameters, we are able to generate thousands of unique Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits in the algorithm is varied, the system continues to explore the exponentially growing number of states. Combining these large datasets with techniques from machine learning allows us to construct a model which accurately predicts the measured probabilities. We demonstrate an application of these algorithms by systematically increasing the disorder and observing a transition from delocalized states to localized states. By extending these results to a system of 50 qubits, we hope to address scientific questions that are beyond the capabilities of any classical computer.
- Sep 21 2017 quant-ph arXiv:1709.06648v1: We improve the number of T gates needed to perform an n-bit adder from 8n + O(1) to 4n + O(1). We do so via a "temporary logical-AND" construction, which uses four T gates to store the logical-AND of two qubits into an ancilla and zero T gates to later erase the ancilla. Temporary logical-ANDs are a generally useful tool when optimizing T-counts. They can be applied to integer arithmetic, modular arithmetic, rotation synthesis, the quantum Fourier transform, Shor's algorithm, Grover oracles, and many other circuits. Because T gates dominate the cost of quantum computation based on the surface code, and temporary logical-ANDs are widely applicable, our constructions represent a significant reduction in projected costs of quantum computation. We also present an n-bit controlled adder circuit with T-count of 8n + O(1), a temporary adder that can be computed for the same cost as the normal adder but whose result can be kept until it is later uncomputed without using T gates, and discuss some other constructions whose T-count is improved by the temporary logical-AND.
- Sep 21 2017 quant-ph arXiv:1709.06988v1: We introduce a measurement-device-independent star network which is conveniently based on continuous-variable systems and standard linear optics. Here an arbitrary number of users send modulated coherent states to an untrusted relay where a generalized Bell detection creates multi-partite secret correlations. These correlations are then distilled into a shared secret key to implement a completely secure quantum conference or, alternatively, a protocol of quantum secret-sharing. Our scheme is composably secure and able to achieve high rates with a cheap optical implementation.
- We explain how asymptotic safety arises in four-dimensional supersymmetric gauge theories. We provide asymptotically safe supersymmetric gauge theories together with their superconformal fixed points, R-charges, phase diagrams, and UV-IR connecting trajectories. Strict perturbative control is achieved in a Veneziano limit. Consistency with unitarity and the a-theorem is established. We find that supersymmetry enhances the predictivity of asymptotically safe theories.
- Sep 21 2017 quant-ph physics.atom-ph arXiv:1709.06952v1: Quantum bits based on individual trapped atomic ions constitute a promising technology for building a quantum computer, with all the elementary operations having been achieved with the necessary precision. However, the essential two-qubit logic gate used for generating quantum entanglement has hitherto always been performed in an adiabatic regime, where the gate is slow compared with the characteristic motional frequencies of ions in the trap, giving logic speeds of order 10kHz. There have been numerous proposals for performing gates faster than this natural "speed limit" of the trap. We implement the method of Steane et al., which uses tailored laser pulses: these are shaped on 10ns timescales to drive the ions' motion along trajectories designed such that the gate operation is insensitive to the initial phase of the optical field. This permits fast (MHz-rate) quantum logic which is robust to this important source of experimental error. We demonstrate entanglement generation for gate times as short as 480ns; this is less than a single oscillation period of an ion in the trap, and 8 orders of magnitude shorter than the memory coherence time measured in similar calcium-43 hyperfine qubits. The method's power is most evident at intermediate timescales, where it yields more than an order of magnitude reduction in gate error compared with conventional techniques; for example, we achieve a 1.6$\mu$s gate with fidelity 99.8%. Still faster gates are possible at the price of higher laser intensity. The method requires only a single amplitude-shaped pulse and one pair of beams derived from a cw laser, and offers the prospect of combining the unrivalled coherence properties, optical connectivity and operation fidelities of trapped-ion qubits with the sub-microsecond logic speeds usually associated with solid state devices.
- We develop some basic results in a higher dimensional foliated Mori theory, and show how these results can be used to prove a structure theorem for the Kleiman-Mori cone of curves in terms of the numerical properties of $K_{\mathcal{F}}$ for rank 2 foliations on threefolds. We also make progress toward realizing a minimal model program for rank 2 foliations on threefolds.
- Sep 21 2017 cs.LG arXiv:1709.06617v1: We analyze the generalization error of randomized learning algorithms -- focusing on stochastic gradient descent (SGD) -- using a novel combination of PAC-Bayes and algorithmic stability. Importantly, our risk bounds hold for all posterior distributions on the algorithm's random hyperparameters, including distributions that depend on the training data. This inspires an adaptive sampling algorithm for SGD that optimizes the posterior at runtime. We analyze this algorithm in the context of our risk bounds and evaluate it empirically on a benchmark dataset.
- Sep 21 2017 math.RT arXiv:1709.06589v1: We revisit the definition of the Heisenberg category of level $k$. In level $-1$, this category was introduced originally by Khovanov, but with some additional cyclicity relations which we show here are unnecessary. In other negative levels, the definition is due to Mackaay and Savage, also with some redundant relations, while the level zero case is the affine oriented Brauer category of Brundan, Comes, Nash and Reynolds. We also discuss cyclotomic quotients.
- In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to maintaining this rapid progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results difficult to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines, and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field, by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
- Can textual data be compressed intelligently without losing accuracy in evaluating sentiment? In this study, we propose a novel evolutionary compression algorithm, PARSEC (PARts-of-Speech for sEntiment Compression), which makes use of Parts-of-Speech tags to compress text in a way that sacrifices minimal classification accuracy when used in conjunction with sentiment analysis algorithms. An analysis of PARSEC with eight commercial and non-commercial sentiment analysis algorithms on twelve English sentiment data sets reveals that accurate compression is possible with (0%, 1.3%, 3.3%) loss in sentiment classification accuracy for (20%, 50%, 75%) data compression with PARSEC using LingPipe, the most accurate of the sentiment algorithms. Other sentiment analysis algorithms are more severely affected by compression. We conclude that significant compression of text data is possible for sentiment analysis depending on the accuracy demands of the specific application and the specific sentiment analysis algorithm used.
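To make the idea above concrete, here is a minimal sketch of POS-based compression in the spirit of PARSEC, using NLTK. The tag whitelist below is a hypothetical choice for illustration, not one of the evolved tag sets the paper reports.

```python
# A toy POS-whitelist compressor (hypothetical tag set, not PARSEC's evolved one).
import nltk
from nltk import pos_tag, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# Keep tags that tend to carry sentiment: adjectives, adverbs, and verbs.
KEEP = {"JJ", "JJR", "JJS", "RB", "RBR", "RBS",
        "VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}

def compress(text: str) -> str:
    """Drop every token whose POS tag falls outside the whitelist."""
    return " ".join(w for w, t in pos_tag(word_tokenize(text)) if t in KEEP)

print(compress("The service was not particularly good, but the food was amazing."))
# likely output: "was not particularly good was amazing"
```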
- Deep reinforcement learning yields great results for a large array of problems, but models are generally retrained anew for each new problem to be solved. Prior learning and knowledge are difficult to incorporate when training new models, requiring increasingly longer training as problems become more complex. This is especially problematic for problems with sparse rewards. We provide a solution to these problems by introducing Concept Network Reinforcement Learning (CNRL), a framework which allows us to decompose problems using a multi-level hierarchy. Concepts in a concept network are reusable, and flexible enough to encapsulate feature extractors, skills, or other concept networks. With this hierarchical learning approach, deep reinforcement learning can be used to solve complex tasks in a modular way, through problem decomposition. We demonstrate the strength of CNRL by training a model to grasp a rectangular prism and precisely stack it on top of a cube using a gripper on a Kinova JACO arm, simulated in MuJoCo. Our experiments show that our use of hierarchy results in a 45x reduction in environment interactions compared to the state-of-the-art on this task.
- Sep 21 2017 cs.CG arXiv:1709.06965v1: Given a drawing of a graph, its visual complexity is defined as the number of geometrical entities in the drawing, for example, the number of segments in a straight-line drawing or the number of arcs in a circular-arc drawing (in 2D). Recently, Chaplick et al. introduced a different measure for the visual complexity, the affine cover number, which is the minimum number of lines (or planes) that together cover a straight-line drawing of a graph $G$ in 2D (3D). In this paper, we introduce the spherical cover number, which is the minimum number of circles (or spheres) that together cover a circular-arc drawing in 2D (or 3D). It turns out that spherical covers are sometimes significantly smaller than affine covers. Moreover, there are highly symmetric graphs that have symmetric optimum spherical covers but apparently no symmetric optimum affine cover. For complete, complete bipartite, and Platonic graphs, we analyze their spherical cover numbers and compare them to their affine cover numbers as well as their segment and arc numbers. We also link the spherical cover number to other graph parameters such as chromatic number, treewidth, and linear arboricity.
- Sep 21 2017 cs.RO arXiv:1709.06948v1: This paper presents a mutual information (MI) based algorithm for the estimation of full 6-degree-of-freedom (DOF) rigid body transformation between two overlapping point clouds. We first divide the scene into a 3D voxel grid and define simple-to-compute features for each voxel in the scan. The two scans that need to be aligned are considered as a collection of these features, and the MI between these voxelized features is maximized to obtain the correct alignment of scans. We have implemented our method with various simple point cloud features (such as number of points in a voxel, variance of z-height in a voxel) and compared the performance of the proposed method with existing point-to-point and point-to-distribution registration methods. We show that our approach has an efficient and fast parallel implementation on GPU, and evaluate the robustness and speed of the proposed algorithm on two real-world datasets which have a variety of dynamic scenes from different environments.
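A histogram-based MI estimate over voxelized features is straightforward to sketch. The estimator and the registration loop suggested in the comment are illustrative assumptions, not the paper's implementation.

```python
# Toy histogram-based mutual information between two co-registered voxel
# feature arrays (e.g. points-per-voxel counts).
import numpy as np

def mutual_information(f1, f2, bins=32):
    joint, _, _ = np.histogram2d(f1.ravel(), f2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of f1
    py = pxy.sum(axis=0, keepdims=True)   # marginal of f2
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Registration then becomes a search over 6-DOF transforms T that maximizes
# mutual_information(voxelize(scan_a), voxelize(T(scan_b))).
```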
- One of the most interesting features of Bayesian optimization for direct policy search is that it can leverage priors (e.g., from simulation or from previous tasks) to accelerate learning on a robot. In this paper, we are interested in situations for which several priors exist but we do not know in advance which one fits best the current situation. We tackle this problem by introducing a novel acquisition function, called Most Likely Expected Improvement (MLEI), that combines the likelihood of the priors and the expected improvement. We evaluate this new acquisition function on a transfer learning task for a 5-DOF planar arm and on a possibly damaged, 6-legged robot that has to learn to walk on flat ground and on stairs, with priors corresponding to different stairs and different kinds of damages. Our results show that MLEI effectively identifies and exploits the priors, even when there is no obvious match between the current situations and the priors.
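A hedged sketch of what such an acquisition function can look like: each prior's expected improvement is weighted by that prior's (normalized) likelihood under the observations collected so far. The exact combination rule used by MLEI may differ.

```python
# Prior-likelihood-weighted expected improvement (illustrative combination rule).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    z = (mu - f_best) / np.maximum(sigma, 1e-9)
    return (mu - f_best) * norm.cdf(z) + sigma * norm.pdf(z)

def mlei(mu_per_prior, sigma_per_prior, log_lik_per_prior, f_best):
    """mu/sigma: GP posterior per prior at candidate points; log_lik: data
    likelihood of each prior. Returns one acquisition value per candidate."""
    w = np.exp(log_lik_per_prior - np.max(log_lik_per_prior))
    w /= w.sum()
    eis = np.array([expected_improvement(m, s, f_best)
                    for m, s in zip(mu_per_prior, sigma_per_prior)])
    return (w[:, None] * eis).sum(axis=0)
```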
- The interests of individual internet users fall into a hierarchical structure which is useful for building personalized search and recommendations. Most studies on this subject construct the interest hierarchy of a single person from the document perspective. In this study, we constructed the user interest hierarchy via user profiles. We organized 433,397 user interests, referred to here as "attentions", into a user attention network (UAN) from 200 million user profiles; we then applied the Louvain algorithm to detect hierarchical clusters in these attentions. Finally, a 26-level hierarchy with 34,676 clusters was obtained. We found that these attention clusters were aggregated according to certain topics, as opposed to the hyponymy-relation-based conceptual ontologies. The topics can be entities or concepts, and the relations are not restrained by hyponymy. The concept relativity encapsulated in the user's interests can be captured by labeling the attention clusters with corresponding concepts.
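For readers who want to reproduce the clustering step, a minimal sketch with the python-louvain package (whose dendrogram levels give the hierarchy) looks as follows; the karate-club graph stands in for the UAN.

```python
# Hierarchical Louvain clustering; requires `pip install python-louvain`.
import networkx as nx
import community  # the python-louvain package's import name

G = nx.karate_club_graph()  # stand-in for the user attention network
dendrogram = community.generate_dendrogram(G, weight="weight")
for level in range(len(dendrogram)):
    partition = community.partition_at_level(dendrogram, level)
    print(f"level {level}: {len(set(partition.values()))} clusters")
```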
- In knowledge bases such as Wikidata, it is possible to assert a large set of properties for entities, ranging from generic ones such as name and place of birth to highly profession-specific or background-specific ones such as doctoral advisor or medical condition. Determining a preference or ranking in this large set is a challenge in tasks such as prioritisation of edits or natural-language generation. Most previous approaches to ranking knowledge base properties are purely data-driven, that is, as we show, mistake frequency for interestingness. In this work, we have developed a human-annotated dataset of 350 preference judgments among pairs of knowledge base properties for fixed entities. From this set, we isolate a subset of pairs for which humans show a high level of agreement (87.5% on average). We show, however, that baseline and state-of-the-art techniques achieve only 61.3% precision in predicting human preferences for this subset. We then analyze what contributes to one property being rated as more important than another one, and identify that at least three factors play a role, namely (i) general frequency, (ii) applicability to similar entities and (iii) semantic similarity between property and entity. We experimentally analyze the contribution of each factor and show that a combination of techniques addressing all the three factors achieves 74% precision on the task. The dataset is available at www.kaggle.com/srazniewski/wikidatapropertyranking.
- Sep 21 2017 cs.CL arXiv:1709.06901v1: The CEGS N-GRID 2016 Shared Task 1 in Clinical Natural Language Processing focuses on the de-identification of psychiatric evaluation records. This paper describes two participating systems of our team, based on conditional random fields (CRFs) and long short-term memory networks (LSTMs). A pre-processing module was introduced for sentence detection and tokenization before de-identification. For CRFs, manually extracted rich features were utilized to train the model. For LSTMs, a character-level bi-directional LSTM network was applied to represent tokens and classify tags for each token, following which a decoding layer was stacked to decode the most probable protected health information (PHI) terms. The LSTM-based system attained an i2b2 strict micro-$F_1$ measure of 89.86%, which was higher than that of the CRF-based system.
- This paper presents an evaluation of deep neural networks for recognition of digits entered by users on a smartphone touchscreen. A new large dataset of Arabic numerals was collected for training and evaluation of the network. The dataset consists of spatial and temporal touch data recorded for 80 digits entered by 260 users. Two neural network models were investigated. The first model was a 2D convolutional neural network (ConvNet) applied to bitmaps of the glyphs created by interpolation of the sensed screen touches; its topology is similar to that of previously published models for offline handwriting recognition from scanned images. The second model used a 1D ConvNet architecture applied to the sequence of polar vectors connecting the touch points. The models were found to provide accuracies of 98.50% and 95.86%, respectively. The second model is much simpler, providing a reduction in the number of parameters from 1,663,370 to 287,690. The dataset has been made available to the community as an open source resource.
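A rough PyTorch sketch of the second model's shape: a small 1D ConvNet over (r, theta) polar-vector sequences. The layer sizes are illustrative assumptions, not the paper's exact topology.

```python
# 1D ConvNet over sequences of 2D polar vectors (illustrative layer sizes).
import torch
import torch.nn as nn

class PolarConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the stroke length
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 2, sequence_length)
        return self.classifier(self.features(x).squeeze(-1))

logits = PolarConvNet()(torch.randn(8, 2, 64))  # 8 strokes of 64 polar vectors
```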
- Quantum secure signature schemes have received a lot of attention recently, in particular because of the NIST call to standardize quantum-safe cryptography. However, only a few signature schemes can have concrete quantum security, because of technical difficulties associated with the Quantum Random Oracle Model (QROM). In this paper, we show that code-based signature schemes based on the full domain hash paradigm can behave very well in the QROM, i.e. that we can have tight security reductions. We also study quantum algorithms related to the underlying code-based assumption. Finally, we apply our reduction to a concrete example: the SURF signature scheme. We provide parameters for 128 bits of quantum security in the QROM and show that the obtained parameters are competitive compared to other similar quantum secure signature schemes.
- Sep 21 2017 cs.CV arXiv:1709.06868v1: We propose a novel 3D shape parameterization by surface patches that are oriented by 3D mesh quadrangulation of the shape. By encoding 3D surface detail on local patches, we learn a patch dictionary that identifies principal surface features of the shape. Unlike previous methods, we are able to encode surface patches of variable size as determined by the user. We propose novel methods for dictionary learning and patch reconstruction based on the query of a noisy input patch with holes. We evaluate the patch dictionary on various applications in 3D shape inpainting, denoising and compression. Our method is able to predict missing vertices and inpaint moderately sized holes. We demonstrate a complete pipeline for reconstructing the 3D mesh from the patch encoding. We validate our shape parameterization and reconstruction methods on both synthetic shapes and real-world scans. We show that our patch dictionary performs successful shape completion of complicated surface textures.
- Sep 21 2017 cs.CV arXiv:1709.06841v1: We propose a novel monocular visual odometry (VO) system called UnDeepVO in this paper. UnDeepVO is able to estimate the 6-DoF pose of a monocular camera and the depth of its view by using deep neural networks. There are two salient features of the proposed UnDeepVO: one is the unsupervised deep learning scheme, and the other is the absolute scale recovery. Specifically, we train UnDeepVO by using stereo image pairs to recover the scale but test it by using consecutive monocular images. Thus, UnDeepVO is a monocular system. The loss function defined for training the networks is based on spatial and temporal dense information. A system overview is shown in Fig. 1. The experiments on the KITTI dataset show that our UnDeepVO outperforms other monocular VO methods in terms of pose accuracy.
- Sep 21 2017 quant-ph arXiv:1709.06829v1: In this paper we show that all nodes can be found optimally for almost all random Erdős-Rényi ${\mathcal G}(n,p)$ graphs using the continuous-time quantum spatial search procedure. This holds for both adjacency and Laplacian matrices, though under different conditions. The first requires $p=\omega(\log^8(n)/n)$, while the second requires $p\geq(1+\varepsilon)\log(n)/n$, where $\varepsilon>0$. The proof proceeds by analyzing the convergence of the eigenvectors corresponding to outlying eigenvalues in the $\|\cdot\|_\infty$ norm. At the same time, for $p<(1-\varepsilon)\log(n)/n$ the property does not hold for either matrix, due to connectivity issues. Hence, our derivation concerning the Laplacian matrix is tight.
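The search dynamics are easy to simulate numerically. Below is a small sketch for the adjacency case, evolving under the standard search Hamiltonian $H = -\gamma A - |w\rangle\langle w|$; the hopping rate $\gamma \approx 1/(np)$ and the time grid are illustrative choices, not the paper's analysis.

```python
# Continuous-time quantum spatial search on an Erdős-Rényi graph (toy simulation).
import numpy as np
import networkx as nx
from scipy.linalg import expm

n, p, w = 64, 0.3, 0            # nodes, edge probability, marked node
A = nx.to_numpy_array(nx.erdos_renyi_graph(n, p, seed=1))
gamma = 1.0 / (n * p)           # ~1/(expected degree), a common heuristic
H = -gamma * A
H[w, w] -= 1.0                  # oracle term -|w><w|

psi0 = np.ones(n) / np.sqrt(n)  # uniform superposition
for t in np.linspace(0.0, 2 * np.sqrt(n), 20):
    psi = expm(-1j * H * t) @ psi0
    print(f"t={t:5.2f}  P(marked)={abs(psi[w])**2:.3f}")
```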
- Sep 21 2017 cs.RO arXiv:1709.06821v1: Sparsity has been widely recognized as crucial for efficient optimization in graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect the set of incorporated measurements, many methods for sparsification have been proposed in hopes of reducing computation. These methods often focus narrowly on reducing edge count without regard for structure at a global level. Such structurally-naive techniques can fail to produce significant computational savings, even after aggressive pruning. In contrast, simple heuristics such as measurement decimation and keyframing are known to reliably produce significant computation reductions. To demonstrate why, we propose a quantitative metric called elimination complexity (EC) that bridges the existing analytic gap between graph structure and computation. EC quantifies the complexity of the primary computational bottleneck: the factorization step of a Gauss-Newton iteration. Using this metric, we show analytically that decimation and keyframing impose favorable global structures and therefore achieve computation reductions on the order of $r^2/9$ and $r^3$, respectively, where $r$ is the pruning rate. We additionally present numerical results that show EC provides a good approximation of computation in both batch and incremental (iSAM2) optimization and demonstrate that pruning methods promoting global sparsity patterns outperform those that do not.
- The 2010 Silent Speech Challenge benchmark is updated with new results obtained in a Deep Learning strategy, using the same input features and decoding strategy as in the original article. A Word Error Rate of 6.4% is obtained, compared to the published value of 17.4%. Additional results comparing new auto-encoder-based features with the original features at reduced dimensionality, as well as decoding scenarios on two different language models, are also presented. The Silent Speech Challenge archive has been updated to contain both the original and the new auto-encoder features, in addition to the original raw data.
- We prove the spectral invariance of the algebra of classical pseudodifferential boundary value problems on manifolds with conical singularities in the $L_p$-setting. As a consequence we also obtain the spectral invariance of the classical Boutet de Monvel algebra of zero order operators with parameters. In order to establish these results, we show the equivalence of Fredholm property and ellipticity for both cases.
- For a bounded simply connected domain $\Omega\subset\mathbb{R}^2$, any point $z\in\Omega$ and any $0<\alpha<1$, we give a lower bound for the $\alpha$-dimensional Hausdorff content of the set of points in the boundary of $\Omega$ which can be joined to $z$ by a John curve with John constant depending only on $\alpha$, in terms of the distance of $z$ to $\partial\Omega$. In fact this set in the boundary contains the intersection $\partial\Omega_z\cap\partial\Omega$ of the boundary of a John sub-domain $\Omega_z$ of $\Omega$, centered at $z$, with the boundary of $\Omega$. This may be understood as a quantitative version of a result of Makarov. This estimate is then applied to obtain the pointwise version of a weighted Hardy inequality.
- Sep 21 2017 cs.AI arXiv:1709.06772v1: Recently, evolving networks have become a suitable form to model many real-world complex systems, due to their ability to represent the systems and their constituent entities, the interactions between the entities and the time-variability of their structure and properties. Designing computational models able to analyze evolving networks becomes relevant in many applications. The goal of this research project is to evaluate the possible contribution of temporal pattern mining techniques to the analysis of evolving networks. In particular, we aim at exploiting available snapshots for the recognition of valuable and potentially useful knowledge about the temporal dynamics exhibited by the network over time, without making any prior assumption about the underlying evolutionary schema. Pattern-based approaches of temporal pattern mining can be exploited to detect and characterize changes exhibited by a network over time, starting from observed snapshots.
- Sep 21 2017 cs.CV arXiv:1709.06770v1: Rather than simply recognizing the action of a person individually, collective activity recognition aims to find out what a group of people is doing in a collective scene. Previous state-of-the-art methods use hand-crafted potentials in a conventional graphical model, which can only define a limited range of relations. Thus, the complex structural dependencies among individuals involved in a collective scenario cannot be fully modeled. In this paper, we overcome these limitations by embedding latent variables into feature space and learning the feature mapping functions in a deep learning framework. The embeddings of latent variables build a global relation containing person-group interactions and richer contextual information by jointly modeling a broader range of individuals. Besides, we assemble an attention mechanism during embedding to achieve more compact representations. We evaluate our method on three collective activity datasets, where we contribute a much larger dataset in this work. The proposed model achieves clearly better performance compared to the state-of-the-art methods in our experiments.
- Precision farming robots, which aim to reduce the amount of herbicide that needs to be applied in the fields, must have the ability to identify crops and weeds in real time to trigger weeding actions. In this paper, we address the problem of CNN-based semantic segmentation of crop fields, separating sugar beet plants, weeds, and background solely based on RGB data. We propose a CNN that exploits existing vegetation indexes and provides a classification in real time. Furthermore, it can be effectively re-trained for previously unseen fields with a comparably small amount of training data. We implemented and thoroughly evaluated our system on a real agricultural robot operating in different fields in Germany and Switzerland. The results show that our system generalizes well, can operate at around 20Hz, and is suitable for online operation in the fields.
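As an illustration of the kind of vegetation index such a network can exploit as an extra input channel, the standard Excess Green (ExG) index is computed directly from RGB. This sketch is generic and not taken from the paper's code.

```python
# Excess Green vegetation index: ExG = 2g - r - b on normalized chromaticities.
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9   # avoid division by zero
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b                         # high values -> vegetation

image = np.random.randint(0, 256, (480, 640, 3))  # stand-in for a camera frame
mask = excess_green(image) > 0.1                  # crude vegetation mask
```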
- Robotic learning in simulation environments provides a faster, more scalable, and safer training methodology than learning directly with physical robots. Also, synthesizing images in a simulation environment for collecting large-scale image data is easy, whereas capturing camera images in the real world is time consuming and expensive. However, learning from only synthetic images may not achieve the desired performance in real environments due to the gap between synthetic and real images. We thus propose a method that transfers the learned capability of detecting object position from a simulation environment to the real world. Our method enables us to use only a very limited dataset of real images while leveraging a large dataset of synthetic images, using multiple variational autoencoders. It detects object positions 6 to 7 times more precisely than the baseline of directly learning from the dataset of real images. Object position estimation under varying environmental conditions is one of the underlying requirements for standard robotic manipulation tasks. We show that the proposed method performs robustly under different lighting conditions and in the presence of distractor objects. Using the detected object position, we transfer pick-and-place or reaching tasks learned in a simulation environment to an actual physical robot without re-training.
- Sep 21 2017 cs.CV arXiv:1709.06750v1: This paper proposes an end-to-end trainable network, SegFlow, for simultaneously predicting pixel-wise object segmentation and optical flow in videos. The proposed SegFlow has two branches where useful information of object segmentation and optical flow is propagated bidirectionally in a unified framework. The segmentation branch is based on a fully convolutional network, which has been proved effective in image segmentation tasks, and the optical flow branch takes advantage of the FlowNet model. The unified framework is trained iteratively offline to learn a generic notion, and fine-tuned online for specific objects. Extensive experiments on both video object segmentation and optical flow datasets demonstrate that introducing optical flow improves the performance of segmentation and vice versa, against the state-of-the-art algorithms.
- Graphs have been widely used to model different information networks, such as the Web, biological networks and social networks (e.g. Twitter). Due to the size and complexity of these graphs, how to explore and utilize them has become a very challenging problem. In this paper, we propose VCExplorer, a new interactive graph exploration framework that integrates the strengths of graph visualization and graph summarization. Unlike existing graph visualization tools, where vertices of a graph may be clustered into a smaller collection of super/virtual vertices, VCExplorer displays a small number of actual source graph vertices (called hubs) and summaries of the information between these vertices. We refer to such a graph as a HA-graph (Hub-based Aggregation Graph). This allows users to appreciate the relationships between the hubs, rather than between super/virtual vertices. Users can navigate through the HA-graph by "drilling down" into the summaries between hubs to display more hubs. We illustrate how graph aggregation techniques can be integrated into the exploration framework to present consolidated information to users. In addition, we propose efficient graph aggregation algorithms over multiple subgraphs via computation sharing. Extensive experimental evaluations have been conducted using both real and synthetic datasets, and the results indicate the effectiveness and efficiency of VCExplorer for exploration.
- Sep 21 2017 cs.CE arXiv:1709.06743v1: pandapower is a Python-based, BSD-licensed power system analysis tool aimed at automation of static and quasi-static analysis and optimization of power systems. It is a full-fledged power system analysis tool that provides power flow, optimal power flow, state estimation, topological graph searches and short circuit calculations according to IEC 60909. The pandapower network model is based on electric elements, which are defined by nameplate parameters and internally processed with equivalent circuit models. The tabular data structure used to define networks is based on the Python library pandas, which allows comfortable handling of input and output parameters. The implementation in Python makes pandapower easy to use and allows comfortable extension with third-party libraries. pandapower has been successfully applied in several grid studies and validated with real grid data.
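A minimal pandapower session illustrates the workflow; element parameter names follow recent pandapower releases (e.g. p_mw) and may differ slightly from the API at the time of the paper.

```python
# Two-bus network and a power flow with pandapower.
import pandapower as pp

net = pp.create_empty_network()
b1 = pp.create_bus(net, vn_kv=20.0, name="slack bus")
b2 = pp.create_bus(net, vn_kv=20.0, name="load bus")
pp.create_ext_grid(net, bus=b1, vm_pu=1.02)
pp.create_line(net, from_bus=b1, to_bus=b2, length_km=2.5,
               std_type="NAYY 4x50 SE")
pp.create_load(net, bus=b2, p_mw=0.4, q_mvar=0.05)

pp.runpp(net)                        # Newton-Raphson power flow
print(net.res_bus)                   # voltage magnitude/angle per bus
print(net.res_line.loading_percent)  # line loading
```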
- Sep 21 2017 cs.CR arXiv:1709.06724v1: At EUROCRYPT 2011, Gentry and Halevi implemented a variant of Gentry's fully homomorphic encryption scheme. The core part of their key generation is to generate an odd-determinant ideal lattice having a particular type of Hermite Normal Form. However, they did not give a rigorous proof of correctness. We present a better key generation algorithm, improving theirs in two aspects. First, we show how to deterministically generate ideal lattices with odd determinant, thus increasing the success probability to close to 1. Second, we give a rigorous proof of correctness; specifically, we present a simpler condition for checking whether the ideal lattice has the desired Hermite Normal Form, and our condition can be checked more efficiently. As a result, our key generation is about 1.5 times faster. We also give experimental results supporting our claims. Our optimizations are based on the properties of ideal lattices, which might be of independent interest.
- Sep 21 2017 cs.DB arXiv:1709.06723v1: Applications in various domains rely on processing graph streams, e.g., communication logs of a cloud-troubleshooting system, road-network traffic updates, and interactions on a social network. A labeled-graph stream refers to a sequence of streamed edges that form a labeled graph. Label-aware applications need to filter the graph stream before performing a graph operation. Due to the large volume and high velocity of these streams, it is often more practical to incrementally build a lossy-compressed version of the graph, and use this lossy version to approximately evaluate graph queries. Challenges arise when the queries are unknown in advance but are associated with filtering predicates based on edge labels. Surprisingly common, and especially challenging, are labeled-graph streams that have highly skewed label distributions that might also vary over time. This paper introduces Self-Balanced Graph Sketch (SBG-Sketch, for short), a graphical sketch for summarizing and querying labeled-graph streams that can cope with all these challenges. SBG-Sketch maintains synopses for both the edge attributes (e.g., edge weight) and the topology of the streamed graph. SBG-Sketch allows efficient processing of graph-traversal queries, e.g., reachability queries. Experimental results over a variety of real graph streams show SBG-Sketch to reduce the estimation errors of state-of-the-art methods by up to 99%.
- We present a new technique called contrastive principal component analysis (cPCA) that is designed to discover low-dimensional structure that is unique to a dataset, or enriched in one dataset relative to other data. The technique is a generalization of standard PCA, for the setting where multiple datasets are available -- e.g. a treatment and a control group, or a mixed versus a homogeneous population -- and the goal is to explore patterns that are specific to one of the datasets. We conduct a wide variety of experiments in which cPCA identifies important dataset-specific patterns that are missed by PCA, demonstrating that it is useful for many applications: subgroup discovery, visualizing trends, feature selection, denoising, and data-dependent standardization. We provide geometrical interpretations of cPCA and show that it satisfies desirable theoretical guarantees. We also extend cPCA to nonlinear settings in the form of kernel cPCA. We have released our code as a Python package; documentation is available on GitHub.
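The core cPCA step is compact enough to state directly: project onto the top eigenvectors of the contrast matrix $C_{\text{target}} - \alpha C_{\text{background}}$. A minimal numpy sketch, with synthetic data standing in for real datasets:

```python
# Core contrastive PCA step (numpy sketch).
import numpy as np

def cpca(target, background, alpha=1.0, n_components=2):
    """Directions enriched in `target` relative to `background` (rows = samples)."""
    c_t = np.cov(target, rowvar=False)
    c_b = np.cov(background, rowvar=False)
    evals, evecs = np.linalg.eigh(c_t - alpha * c_b)
    top = evecs[:, np.argsort(evals)[::-1][:n_components]]
    return target @ top, top          # projected data, loading directions

rng = np.random.default_rng(0)
bg = rng.normal(size=(500, 10))                       # background-only variation
fg = bg[:300] + np.outer(rng.integers(0, 2, 300),     # foreground with an extra
                         np.eye(10)[0]) * 3.0         # subgroup along one axis
proj, dirs = cpca(fg, bg, alpha=2.0)
```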
- Sep 21 2017 cs.LG arXiv:1709.06709v1: The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling our meta-learner updates its internal memory based on the observed effect its prediction had. Our meta-learning can be combined with any gradient-based optimizer, learns on the fly and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, either in batch or online learning settings.
- We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
- Sep 21 2017 cs.LG arXiv:1709.06683v1: Reinforcement learning has shown promise in learning policies that can solve complex problems. However, manually specifying a good reward function can be difficult, especially for intricate tasks. Inverse reinforcement learning offers a useful paradigm to learn the underlying reward function directly from expert demonstrations. Yet in reality, the corpus of demonstrations may contain trajectories arising from a diverse set of underlying reward functions rather than a single one. Thus, in inverse reinforcement learning, it is useful to consider such a decomposition. The options framework in reinforcement learning is specifically designed to decompose policies in a similar light. We therefore extend the options framework and propose a method to simultaneously recover reward options in addition to policy options. We leverage adversarial methods to learn joint reward-policy options using only observed expert states. We show that this approach works well in both simple and complex continuous control tasks and shows significant performance increases in one-shot transfer learning.
- We propose learning deep models that are monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators (piecewise linear functions), with appropriate constraints for monotonicity, and jointly training the resulting network. We implement the layers and projections with new computational graph nodes in TensorFlow and use the ADAM optimizer and batched stochastic gradients. Experiments on benchmark and real-world datasets show that six-layer monotonic deep lattice networks achieve state-of-the-art performance for classification and regression with monotonicity guarantees.
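One of the building blocks, a monotonic piecewise-linear calibrator, can be sketched by parameterizing keypoint increments as non-negative. This is a hedged PyTorch illustration of the concept, not the paper's TensorFlow lattice implementation.

```python
# Monotone non-decreasing piecewise-linear calibrator on [x_min, x_max].
import torch
import torch.nn as nn

class MonotonicCalibrator(nn.Module):
    def __init__(self, n_keypoints=10, x_min=0.0, x_max=1.0):
        super().__init__()
        self.register_buffer("keypoints",
                             torch.linspace(x_min, x_max, n_keypoints))
        self.bias = nn.Parameter(torch.zeros(1))
        self.deltas = nn.Parameter(torch.zeros(n_keypoints - 1))

    def forward(self, x):
        # Non-negative increments guarantee monotonicity after the cumsum.
        heights = torch.cat([self.bias,
                             torch.nn.functional.softplus(self.deltas)])
        values = torch.cumsum(heights, dim=0)
        # Linear interpolation between the two surrounding keypoints.
        idx = torch.bucketize(x, self.keypoints[1:-1])
        x0, x1 = self.keypoints[idx], self.keypoints[idx + 1]
        y0, y1 = values[idx], values[idx + 1]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

y = MonotonicCalibrator()(torch.rand(16))  # elementwise monotone calibration
```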
- Sep 21 2017 cs.RO arXiv:1709.06675v1: Due to the distributed nature of cooperative simultaneous localization and mapping (CSLAM), detecting inter-robot loop closures necessitates sharing sensory data with other robots. A naive approach to data sharing can easily lead to a waste of mission-critical resources. This paper investigates the logistical aspects of CSLAM. Particularly, we present a general resource-efficient communication planning framework that takes into account both the total amount of exchanged data and the induced division of labor between the participating robots. Compared to other state-of-the-art approaches, our framework is able to verify the same set of potential inter-robot loop closures while exchanging considerably less data and influencing the induced workloads. We present a fast algorithm for finding globally optimal communication policies, and theoretical analysis to characterize the necessary and sufficient conditions under which simpler strategies are optimal. The proposed framework is extensively evaluated with data from the KITTI odometry benchmark datasets.
- Sep 21 2017 cs.LO arXiv:1709.06672v1In previous work [Lewitzka, 2017], we presented a hierarchy of classical modal logics, along with algebraic semantics, for the reasoning about intuitionistic truth (i.e. proof), belief and knowledge. Interpreting $\square$ as a proof predicate, the systems also express properties of intuitionistic belief and knowledge established in [Artemov and Protopopescu, 2016] where epistemic principles are in line with Brouwer-Heyting-Kolmogorov (BHK) semantics. In this article, we further develop our approach and show that the S5-style systems of our hierarchy are complete w.r.t. a relational semantics based on intuitionistic general frames. This result can be seen as a formal justification of our modal axioms as adequate principles for the reasoning about proof combined with belief and knowledge. In fact, the semantics turns out to be a uniform framework able to describe also the intuitionistic epistemic logics of [Artemov and Protopopescu, 2016]. The relationship between intuitionistic epistemic principles and their representation by modal laws of our classical logics becomes explicit.
- Distributed word embeddings have shown superior performance in numerous Natural Language Processing (NLP) tasks. However, their performance varies significantly across different tasks, implying that the word embeddings learnt by those methods capture complementary aspects of lexical semantics. Therefore, we believe that it is important to combine the existing word embeddings to produce more accurate and complete meta-embeddings of words. For this purpose, we propose an unsupervised locally linear meta-embedding learning method that takes pre-trained word embeddings as the input, and produces more accurate meta-embeddings. Unlike previously proposed meta-embedding learning methods that learn a global projection over all words in a vocabulary, our proposed method is sensitive to the differences in local neighbourhoods of the individual source word embeddings. Moreover, we show that vector concatenation, a previously proposed highly competitive baseline approach for integrating word embeddings, can be derived as a special case of the proposed method. Experimental results on semantic similarity, word analogy, relation classification, and short-text classification tasks show that our meta-embeddings significantly outperform prior methods on several benchmark datasets, establishing a new state of the art for meta-embeddings.
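The "locally linear" ingredient can be illustrated with the classic LLE weight computation, which reconstructs a word vector from its neighbours in one source embedding space. This is a sketch of the general recipe, not the paper's complete method.

```python
# LLE-style reconstruction weights for one word in one source embedding space.
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Weights w (summing to 1) minimizing ||x - sum_j w_j * neighbors_j||^2."""
    Z = neighbors - x                          # center on the query word
    G = Z @ Z.T + reg * np.eye(len(neighbors)) # regularized Gram matrix
    w = np.linalg.solve(G, np.ones(len(neighbors)))
    return w / w.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=50)                     # a word's source embedding
N = x + 0.1 * rng.normal(size=(5, 50))      # its 5 nearest-neighbour vectors
w = lle_weights(x, N)                       # local reconstruction weights

# Meta-embedding idea: compute such weights in each source space, then solve
# for meta-embeddings that the same neighbourhoods reconstruct well.
```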
- Sep 21 2017 cs.RO arXiv:1709.06670v1: Suction-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. This ability simplifies planning, and hand-coded heuristics such as targeting planar surfaces are often used to select suction grasps based on point cloud data. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and target object and determines whether or not the suction grasp can resist an external wrench (e.g. gravity) on the object. To evaluate a grasp, we measure robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We use this model to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels computed with 1,500 3D object models, and we train a Grasp Quality Convolutional Neural Network (GQ-CNN) on this dataset to classify suction grasp robustness from point clouds. We evaluate the resulting system in 375 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When the object shape, pose, and mass properties are known, the model achieves 99$\%$ precision on a dataset of objects with Adversarial geometry such as sharply curved surfaces. Furthermore, a GQ-CNN-based policy trained on Dex-Net 3.0 achieves 99$\%$ and 97$\%$ precision respectively on datasets of Basic and Typical objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dex-net.
- Sep 21 2017 cs.RO arXiv:1709.06668v1: Surgical debridement is the process of removing dead or damaged tissue to allow the remaining parts to heal. Automating this procedure could reduce surgical fatigue and facilitate teleoperation, but doing so is challenging for Robotic Surgical Assistants (RSAs) such as the da Vinci Research Kit (dVRK) due to inherent non-linearities in cable-driven systems. Consequently, we propose and evaluate a two-phase calibration process. In Phase I, the robot performs a set of open-loop trajectories to obtain abundant, cheap but coarse data of camera pixels and internal robot joint values, and learns a nonlinear transformation. In Phase II, the robot uses Phase I to move systematically to target points in a printed array. Each time, a human operator manually adjusts the end-effector position by direct contact (not through teleoperation), resulting in a small, high-quality dataset. Experiments suggest that without calibration, position errors are 4.55mm. Phase I alone can reduce average error to 2.14mm, but the combination of Phase I and Phase II reduces average error to 1.08mm. We apply this combination to an approximation of debridement with randomly oriented raisins and pumpkin seeds. Using an endoscopic stereo camera with standard edge detection, experiments with 120 trials achieved success rates of 91.7% to 99.2%, slightly exceeding prior results (89.4%) and more than 2.1x faster, decreasing the time per fragment from 15.8 seconds to 7.3 seconds. Source code, data, and videos are available at https://sites.google.com/view/calib-icra/.
- Sep 21 2017 cs.CV arXiv:1709.06664v1: Visual attributes, from simple objects (e.g., backpacks, hats) to soft-biometrics (e.g., gender, height, clothing), have proven to be a powerful representational approach for many applications such as image description and human identification. In this paper, we introduce a novel method to combine the advantages of both multi-task and curriculum learning in a visual attribute classification framework. Individual tasks are grouped after performing hierarchical clustering based on their correlation. The clusters of tasks are learned in a curriculum learning setup by transferring knowledge between clusters. The learning process within each cluster is performed in a multi-task classification setup. By leveraging the acquired knowledge, we speed up the process and improve performance. We demonstrate the effectiveness of our method via ablation studies and a detailed analysis of the covariates, on a variety of publicly available datasets of humans standing with their full body visible. Extensive experimentation has proven that the proposed approach boosts performance by 4% to 10%.
- Sep 21 2017 cs.SD arXiv:1709.06663v1: It is shown how binary sequences can be associated with automatic composition of monophonic pieces. We are concerned with the composition of e-music from finite field structures. The information at the input may be either random or derived from a black-and-white, grayscale or color picture. New e-compositions and music scores are made available, including a new piece derived from the famous Lenna picture: the score of the e-music "Between Lenna's eyes in C major." The corresponding stretch of music score is presented. Some particular structures, including clock arithmetic (mod 12), GF(7), GF(8), GF(13) and GF(17), are addressed. Further, multilevel block codes are also used in a new approach to e-music composition, engendering a particular style as an e-composer. As an example, the recently introduced Pascal multilevel block codes are used to generate a new style of electronic music over GF(13).
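The clock-arithmetic variant is simple to sketch: read the binary stream in fixed-size chunks and reduce each value mod 12 to obtain a pitch class. The chunk size and the input bits below are arbitrary illustrations, not the paper's mapping.

```python
# Toy binary-stream-to-melody mapping via clock arithmetic (mod 12).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def bits_to_melody(bits: str, step: int = 4) -> list:
    """Read the stream in `step`-bit chunks and reduce each value mod 12."""
    chunks = [bits[i:i + step] for i in range(0, len(bits) - step + 1, step)]
    return [NOTES[int(c, 2) % 12] for c in chunks]

print(bits_to_melody("0100101101100001"))  # e.g. bits taken from a picture
# -> ['E', 'B', 'F#', 'C#']
```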
- Sep 21 2017 astro-ph.IM arXiv:1709.06660v1: The detection of Earth-like exoplanets in the habitable zone of their stars, and their spectroscopic characterization in a search for biosignatures, requires starlight suppression that exceeds the current best ground-based performance by orders of magnitude. The required planet/star brightness ratio of order 1e-10 at visible wavelengths can be obtained by blocking stellar photons with an occulter, either external (a starshade) or internal (a coronagraph) to the telescope system, and managing diffracted starlight, so as to directly image the exoplanet in reflected starlight. Coronagraph instruments require advancement in telescope aperture (either monolithic or segmented), aperture obscurations (obscured by the secondary mirror and its support struts), and wavefront error sensitivity (e.g. line-of-sight jitter, telescope vibration, polarization). The starshade, which has never been used in a science application, benefits a mission by being decoupled from the telescope, allowing a loosening of telescope stability requirements. In doing so, it transfers the difficult technology from the telescope system to a large deployable structure (tens of meters to greater than 100 m in diameter) that must be positioned precisely at a distance of tens of thousands of kilometers from the telescope. We describe in this paper a roadmap to achieving the technological capability to search for biosignatures on an Earth-like exoplanet from a future space telescope. Two of the current mission concept studies, HabEx and LUVOIR, include the direct imaging of Earth-sized habitable exoplanets as a central science theme.
- The incorporation of macro-actions (temporally extended actions) into multi-agent decision problems has the potential to address the curse of dimensionality associated with such decision problems. Since macro-actions last for stochastic durations, multiple agents executing decentralized policies in cooperative environments must act asynchronously. We present an algorithm that modifies Generalized Advantage Estimation for temporally extended actions, allowing a state-of-the-art policy optimization algorithm to optimize policies in Dec-POMDPs in which agents act asynchronously. We show that our algorithm is capable of learning optimal policies in two cooperative domains, one involving real-time bus holding control and one involving wildfire fighting with unmanned aircraft. Our algorithm works by framing problems as "event-driven decision processes," which are scenarios where the sequence and timing of actions and events are random and governed by an underlying stochastic process. In addition to optimizing policies with continuous state and action spaces, our algorithm also facilitates the use of event-driven simulators, which do not require time to be discretized into time-steps. We demonstrate the benefit of using event-driven simulation in the context of multiple agents taking asynchronous actions. We show that fixed time-step simulation risks obfuscating the sequence in which closely-separated events occur, adversely affecting the policies learned. Additionally, we show that arbitrarily shrinking the time-step scales poorly with the number of agents.
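For reference, standard Generalized Advantage Estimation is sketched below; the paper's modification handles macro-actions of stochastic duration (e.g. by discounting over the elapsed time between decisions), which this fixed-step sketch does not attempt to reproduce.

```python
# Standard GAE over a fixed-step trajectory (numpy sketch).
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """`values` has one extra entry for the bootstrap (final) state."""
    deltas = rewards + gamma * values[1:] - values[:-1]  # TD residuals
    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

adv = gae(np.array([1.0, 0.0, 0.5]), np.array([0.9, 0.8, 0.7, 0.0]))
```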