# Top arXiv papers

• This paper studies how to efficiently learn an optimal latent variable model online from large streaming data. Latent variable models can explain the observed data in terms of unobserved concepts. They are traditionally studied in the unsupervised learning setting and learned by iterative methods such as EM. Very few online learning algorithms for latent variable models have been developed, and the most popular one is online EM. Though online EM is computationally efficient, it typically converges to a local optimum. In this work, we motivate and develop SpectralFPL, a novel algorithm to learn latent variable models online from streaming data. SpectralFPL is computationally efficient, and we prove that it quickly learns the global optimum under a bag-of-words model by deriving an $O(\sqrt n)$ regret bound. Experimental results also demonstrate a consistent performance improvement of SpectralFPL over online EM: in both synthetic and real-world experiments, SpectralFPL performs comparably to or even better than online EM with optimally tuned parameters.
• We present a semi-automated system for sizing nasal Positive Airway Pressure (PAP) masks based upon a neural network model that was trained with facial photographs of both PAP mask users and non-users. It demonstrated an accuracy of 72% in correctly sizing a mask and 96% accuracy sizing to within 1 mask size group. The semi-automated system performed comparably to sizing from manual measurements taken from the same images, which produced 89% and 100% accuracy, respectively.
• We prove the boundedness of a class of tri-linear operators consisting of a quasi piece of the bilinear Hilbert transform whose scale equals or dominates the scale of its linear counterpart. Operators of this type are motivated by the tri-linear Hilbert transform and its curved versions.
• This paper presents SceneCut, a novel approach to jointly discover previously unseen objects and non-object surfaces using a single RGB-D image. SceneCut's joint reasoning over scene semantics and geometry allows a robot to detect and segment object instances in complex scenes where modern deep learning-based methods either fail to separate object instances, or fail to detect objects that were not seen during training. SceneCut automatically decomposes a scene into meaningful regions which either represent objects or scene surfaces. The decomposition is qualified by a unified energy function over objectness and geometric fitting. We show how this energy function can be optimized efficiently by utilizing hierarchical segmentation trees. Moreover, we leverage a pre-trained convolutional oriented boundary network to predict accurate boundaries from images, which are used to construct high-quality region hierarchies. We evaluate SceneCut on several different indoor environments, and the results show that SceneCut significantly outperforms all the existing methods.
• We consider the Beris-Edwards model describing nematic liquid crystal dynamics and restrict to a shear flow and spatially homogeneous situation. We analyze the dynamics focusing on the effect of the flow. We show that in the co-rotational case one has gradient dynamics, up to a periodic eigenframe rotation, while in the non-co-rotational case we identify the short and long time regime of the dynamics. We express these in terms of the physical variables and compare with the predictions of other models of liquid crystal dynamics.
• In this paper, we propose a new system design framework for large vocabulary automatic chord estimation. Our approach is based on an integration of traditional sequence segmentation processes and deep learning chord classification techniques. We systematically explore the design space of the proposed framework for a range of parameters, namely deep neural nets, network configurations, input feature representations, segment tiling schemes, and training data sizes. Experimental results show that among the three proposed deep neural nets and a baseline model, the recurrent neural network based system has the best average chord quality accuracy that significantly outperforms the other considered models. Furthermore, our bias-variance analysis has identified a glass ceiling as a potential hindrance to future improvements of large vocabulary automatic chord estimation systems.
• Motivated by Gordan Žitković's idea of convex compactness for a convex set of a linear topological space, we introduce the concept of $L^0$--convex compactness for an $L^0$--convex set of a topological module over the topological algebra $L^0$, where $L^0$ is the algebra of equivalence classes of real--valued random variables on a given probability space $(\Omega,\mathcal{F},P)$, endowed with the topology of convergence in probability. This paper continues to develop the theory of $L^0$--convex compactness by establishing various characterization theorems for $L^0$--convex subsets of a class of important topological modules, namely random reflexive random normed modules. As applications, we generalize some basic theorems of classical convex optimization and variational inequalities from a convex function on a reflexive Banach space to an $L^0$--convex function on a random reflexive random normed module. Since the usual weak compactness method fails here, we are forced to use the $L^0$--convex compactness method, which leads to a series of new techniques. These techniques also provide a new proof for the corresponding classical results and are thus of interest in their own right.
• In this paper, we propose a novel recurrent neural network architecture for speech separation. This architecture is constructed by unfolding the iterations of a sequential iterative soft-thresholding algorithm (ISTA) that solves the optimization problem for sparse nonnegative matrix factorization (NMF) of spectrograms. We name this network architecture deep recurrent NMF (DR-NMF). The proposed DR-NMF network has three distinct advantages. First, DR-NMF provides better interpretability than other deep architectures, since the weights correspond to NMF model parameters, even after training. This interpretability also provides principled initializations that enable faster training and convergence to better solutions compared to conventional random initialization. Second, like many deep networks, DR-NMF is an order of magnitude faster at test time than NMF, since computation of the network output only requires evaluating a few layers at each time step. Third, when a limited amount of training data is available, DR-NMF exhibits stronger generalization and separation performance compared to sparse NMF and state-of-the-art long short-term memory (LSTM) networks. When a large amount of training data is available, DR-NMF achieves lower yet competitive separation performance compared to LSTM networks.
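The unrolling the abstract describes can be sketched concisely: each network layer corresponds to one ISTA update for sparse nonnegative inference against a fixed spectral dictionary. This is a minimal sketch, not the paper's implementation; all names and sizes are illustrative.

```python
import numpy as np

def unrolled_ista_nmf(x, W, n_layers=100, lam=0.01):
    """Infer nonnegative sparse activations h for one spectrogram frame x
    by unrolling ISTA iterations (each loop body plays the role of one
    network layer). Objective: 0.5*||x - W h||^2 + lam*||h||_1, h >= 0."""
    L = np.linalg.norm(W, 2) ** 2              # Lipschitz constant of the gradient
    h = np.zeros(W.shape[1])
    for _ in range(n_layers):
        grad = W.T @ (W @ h - x)               # gradient of the data-fit term
        h = np.maximum(0.0, h - grad / L - lam / L)  # shrinkage + nonnegativity
    return h
```

In DR-NMF the per-layer quantities become trainable weights, which is why the learned parameters retain an NMF interpretation.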
• In this paper, we investigate three geometrical invariants of knots: the height, the trunk and the representativity. First, we give a counterexample for the conjecture which states that the height is additive under connected sum of knots. We also define the minimal height of a knot and give a potential example which has a gap between the height and the minimal height. Next, we show that the representativity is bounded above by half of the trunk. We also define the trunk of a tangle and show that if a knot has an essential tangle decomposition, then the representativity is bounded above by half of the trunk of either of the two tangles. Finally, we remark on the difference among Gabai's thin position, ordered thin position and minimal critical position. We also give an example of a knot which bounds an essential non-orientable spanning surface, but has arbitrarily large representativity.
• Aiming to augment generative models with external memory, we interpret the output of a memory module with stochastic addressing as a conditional mixture distribution, where a read operation corresponds to sampling a discrete memory address and retrieving the corresponding content from memory. This perspective allows us to apply variational inference to memory addressing, which enables effective training of the memory module by using the target information to guide memory lookups. Stochastic addressing is particularly well-suited for generative models as it naturally encourages multimodality, which is a prominent aspect of most high-dimensional datasets. Treating the chosen address as a latent variable also allows us to quantify the amount of information gained with a memory lookup and measure the contribution of the memory module to the generative process. To illustrate the advantages of this approach we incorporate it into a variational autoencoder and apply the resulting model to the task of generative few-shot learning. The intuition behind this architecture is that the memory module can pick a relevant template from memory and the continuous part of the model can concentrate on modeling remaining variations. We demonstrate empirically that our model is able to identify and access the relevant memory contents even with hundreds of unseen Omniglot characters in memory.
• In this paper we consider steady vortex flows for the incompressible Euler equations in a planar bounded domain. By solving a variational problem for the vorticity, we construct steady double vortex patches with opposite signs concentrating at a strict local minimum point of the Kirchhoff-Routh function with $k=2$. Moreover we show that such steady solutions are in fact local maximizers of the kinetic energy among isovortical patches, which correlates stability to uniqueness.
• A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing learned latent codes with more semantic information and better generalization. Our model, trained in an unsupervised manner, yields stronger empirical predictive performance than a decoder based on Long Short-Term Memory (LSTM), with fewer parameters and considerably faster training. Further, we apply it to text sequence-matching problems. The proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting.
• This paper presents an empirical study of two machine translation-based approaches to the Vietnamese diacritic restoration problem: phrase-based and neural-based machine translation models. This is the first work to apply the neural-based machine translation method to this problem, and it gives a thorough comparison to the phrase-based method, the current state of the art for this problem. On a large dataset, the phrase-based approach achieves an accuracy of 97.32%, while the neural-based approach achieves 96.15%. Although the neural-based method has slightly lower accuracy, it is about twice as fast as the phrase-based method in terms of inference speed. Moreover, the neural-based method leaves much room for future improvement, such as incorporating pre-trained word embeddings and collecting more training data.
• Modern malware families often rely on domain-generation algorithms (DGAs) to determine rendezvous points with their command-and-control servers. Traditional defence strategies (such as blacklisting domains or IP addresses) are inadequate against such techniques due to the large and continuously changing list of domains produced by these algorithms. This paper demonstrates that a machine learning approach based on recurrent neural networks is able to detect domain names generated by DGAs with high precision. The neural models are estimated on a large training set of domains generated by various malware families. Experimental results show that this data-driven approach can detect malware-generated domain names with an $F_1$ score of 0.971. Put differently, the model can automatically detect 93% of malware-generated domain names at a false positive rate of 1:100.
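The character-level recurrent classifier the abstract describes can be sketched as follows; this is an untrained toy forward pass, with illustrative character vocabulary, parameter shapes and names, not the paper's trained model.

```python
import numpy as np

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-."
CHAR2IDX = {c: i for i, c in enumerate(CHARS)}

def encode(domain):
    """Map a domain name to a sequence of integer character ids."""
    return [CHAR2IDX[c] for c in domain.lower() if c in CHAR2IDX]

def rnn_score(domain, params):
    """One forward pass of a plain recurrent classifier: embed each
    character, update a hidden state, squash to a probability."""
    E, Wh, Wx, w = params
    h = np.zeros(Wh.shape[0])
    for idx in encode(domain):
        h = np.tanh(Wh @ h + Wx @ E[idx])
    return 1.0 / (1.0 + np.exp(-(w @ h)))   # P(domain is DGA-generated)

rng = np.random.default_rng(0)
d, k = 16, 8                                 # hidden and embedding sizes
params = (rng.normal(size=(len(CHARS), k)),  # character embeddings E
          rng.normal(size=(d, d)) * 0.1,     # recurrent weights Wh
          rng.normal(size=(d, k)) * 0.1,     # input weights Wx
          rng.normal(size=d))                # readout vector w
score = rnn_score("xkqzjv3f9.com", params)
```

Training such parameters on labelled benign/DGA domains (e.g. with cross-entropy loss) is what turns this architecture into the detector evaluated in the paper.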
• In recent years, fuzz testing has proven itself to be one of the most effective techniques for finding correctness bugs and security vulnerabilities in practice. One particular fuzz testing tool, American Fuzzy Lop (AFL), has become popular thanks to its ease of use and bug-finding power. However, AFL remains limited in the depth of program coverage it achieves, in particular because it does not consider which parts of program inputs should not be mutated in order to maintain deep program coverage. We propose an approach, FairFuzz, that helps alleviate this limitation in two key steps. First, FairFuzz automatically prioritizes inputs exercising rare parts of the program under test. Second, it automatically adjusts the mutation of inputs so that the mutated inputs are more likely to exercise these same rare parts of the program. We evaluate FairFuzz on real-world programs against state-of-the-art versions of AFL, thoroughly repeating experiments to obtain good measures of variability. We find that on certain benchmarks FairFuzz shows significant coverage increases after 24 hours compared to state-of-the-art versions of AFL, while on others it achieves high program coverage at a significantly faster rate.
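The byte-freezing idea behind the second step can be illustrated with a toy instrumented program. Both `run` and the branch names below are invented for illustration; AFL's actual coverage instrumentation and FairFuzz's mutation-mask computation are more involved.

```python
def run(data):
    """Toy stand-in for the instrumented program under test:
    returns the set of branch ids covered by this input."""
    cov = {"entry"}
    if data[:2] == b"AB":
        cov.add("magic")  # a rare branch guarded by a two-byte magic header
    return cov

def rare_branch_mask(seed, target_branch):
    """Flip each byte of the seed and re-run: positions whose mutation
    loses the rare branch are frozen (False); the rest stay mutable (True)."""
    mask = []
    for i in range(len(seed)):
        mutated = seed[:i] + bytes([seed[i] ^ 0xFF]) + seed[i + 1:]
        mask.append(target_branch in run(mutated))
    return mask
```

A fuzzer using such a mask would mutate only the `True` positions, keeping the magic header intact so mutated inputs keep reaching the rare branch.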
• Topological Data Analysis (TDA) is a recent and growing branch of statistics devoted to the study of the shape of the data. In this work we investigate the predictive power of TDA in the context of supervised learning. Since topological summaries, most notably the Persistence Diagram, are typically defined in complex spaces, we adopt a kernel approach to translate them into more familiar vector spaces. We define a topological exponential kernel, we characterize it, and we show that, despite not being positive semi-definite, it can be successfully used in regression and classification tasks.
• Operationalizing machine learning based security detections is extremely challenging, especially in a continuously evolving cloud environment. Conventional anomaly detection does not produce satisfactory results for analysts that are investigating security incidents in the cloud. Model evaluation alone presents its own set of problems due to a lack of benchmark datasets. When deploying these detections, we must deal with model compliance, localization, and data silo issues, among many others. We pose the problem of "attack disruption" as a way forward in the security data science space. In this paper, we describe the framework, challenges, and open questions surrounding the successful operationalization of machine learning based security detections in a cloud environment and provide some insights on how we have addressed them.
• In this paper we design and evaluate a Deep-Reinforcement Learning agent that optimizes routing. Our agent adapts automatically to current traffic conditions and proposes tailored configurations that attempt to minimize the network delay. Experiments show very promising performance. Moreover, this approach provides important operational advantages with respect to traditional optimization algorithms.
• We consider image classification with estimated depth. This problem falls into the domain of transfer learning, since we use a model trained on a set of depth images to generate depth maps (additional features) for a classification problem over another, disjoint set of images. The task is challenging because no direct depth information is provided. Though depth estimation has been well studied, no prior work has attempted to aid image classification with estimated depth. We therefore present a way of transferring domain knowledge from depth estimation to a separate image classification task over a disjoint set of training and test data. We build an RGB-D dataset from an RGB dataset, perform image classification on it, and then evaluate the performance of neural networks on the RGB-D dataset compared to the RGB dataset. In our experiments, the benefit is significant for both shallow and deep networks: estimated depth improves ResNet-20 by 0.55% and ResNet-56 by 0.53%. Our code and dataset are publicly available.
• We study the regularity of the solution of the double obstacle problem for fully nonlinear parabolic and elliptic operators. We show that when the obstacles are sufficiently regular, the solution is $C^{1,\alpha}$ in the interior for both the parabolic and the elliptic cases.
• (Sep 22 2017, cs.CV, arXiv:1709.07065v1) In this paper, we propose a pipeline for multi-target visual tracking in a multi-camera system. For the multi-camera tracking problem, efficient data association across cameras, and at the same time across frames, becomes more important than in single-camera tracking. However, most multi-camera tracking algorithms emphasize single-camera across-frame data association. Thus, in our work we model the tracking problem as a global graph and adopt the Generalized Maximum Multi Clique optimization problem as our core algorithm, taking both across-frame and across-camera data correlation into account together. Furthermore, in order to compute good similarity scores as the input to our graph model, we extract both appearance and dynamic motion similarities. For the appearance feature, the Local Maximal Occurrence Representation (LOMO) feature extraction algorithm for ReID is used. To capture the dynamic information, we build a Hankel matrix for each tracklet of a target and apply rank estimation to it with the Iterative Hankel Total Least Squares (IHTLS) algorithm. We evaluate our tracker on the challenging Terrace sequences from EPFL CVLAB as well as the recently published Duke MTMC dataset.
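The Hankel-matrix motion descriptor can be sketched in a few lines. As a simplification, rank is estimated here by thresholding singular values rather than by the IHTLS algorithm the paper uses; function names and the tolerance are illustrative.

```python
import numpy as np

def hankel(traj, rows):
    """Stack a 1-D tracklet trajectory into a Hankel matrix whose rank
    reflects the complexity of the underlying motion."""
    cols = len(traj) - rows + 1
    return np.array([traj[i:i + cols] for i in range(rows)])

def motion_rank(traj, rows=4, tol=1e-6):
    """Simplified rank estimate via SVD thresholding (stand-in for IHTLS)."""
    H = hankel(np.asarray(traj, float), rows)
    s = np.linalg.svd(H, compute_uv=False)
    return int((s > tol * s[0]).sum())
```

A stationary target yields rank 1, constant velocity rank 2, and accelerating motion higher rank, which is what makes the estimated rank a useful motion-similarity cue between tracklets.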
• The Gaussian process is a standard tool for building emulators for both deterministic and stochastic computer experiments. However, application of Gaussian process models is greatly limited in practice, particularly for large-scale and many-input computer experiments that have become typical. We propose a multi-resolution functional ANOVA model as a computationally feasible emulation alternative. More generally, this model can be used for large-scale and many-input non-linear regression problems. An overlapping group lasso approach is used for estimation, ensuring computational feasibility in a large-scale and many-input setting. New results on consistency and inference for the (potentially overlapping) group lasso in a high-dimensional setting are developed and applied to the proposed multi-resolution functional ANOVA model. Importantly, these results allow us to quantify the uncertainty in our predictions. Numerical examples demonstrate that the proposed model enjoys marked computational advantages. Its data capabilities, in terms of both sample size and dimension, meet or exceed those of the best available emulation tools, while matching or exceeding their emulation accuracy.
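The group-wise shrinkage at the heart of a group lasso fit can be sketched as a proximal step. This is a generic sketch of the non-overlapping case for intuition; the paper's overlapping-group estimation procedure and inference theory are considerably more elaborate.

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal step for the group lasso penalty lam * sum_g ||beta_g||_2:
    each coefficient group is shrunk toward zero by its Euclidean norm,
    and groups with norm below lam are zeroed out entirely."""
    out = beta.copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm <= lam else (1 - lam / norm) * beta[g]
    return out
```

Zeroing whole groups at once is what lets the model drop entire functional ANOVA components, which is the source of its computational feasibility in the many-input setting.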
• We present a distributed algorithm for a swarm of active particles to camouflage in an environment. Each particle is equipped with sensing, computation and communication, allowing the system to take color and gradient information from the environment and self-organize into an appropriate pattern. Current artificial camouflage systems are either limited to static patterns, which are adapted for specific environments, or rely on back-projection, which depends on the viewer's point of view. Inspired by the camouflage abilities of the cuttlefish, we propose a distributed estimation and pattern formation algorithm that allows the swarm to quickly adapt to different environments. We present convergence results both in simulation and on a swarm of miniature "Droplet" robots for a variety of patterns.
• (Sep 22 2017, hep-ph, arXiv:1709.07039v1) The solution to the Strong CP problem is analysed within the Minimal Flavour Violation (MFV) context. An Abelian factor of the complete flavour symmetry of the fermionic kinetic terms may play the role of the Peccei-Quinn symmetry in traditional axion models. Its spontaneous breaking, due to the addition of a complex scalar field to the Standard Model scalar spectrum, generates the MFV axion, which may redefine away the QCD $\theta$ parameter. It differs from the traditional QCD axion for its couplings that are governed by the fermion charges under the axial Abelian symmetry. It is also distinct from the so-called Axiflavon, as the MFV axion does not describe flavour violation, while it does induce flavour non-universality effects. The MFV axion phenomenology is discussed considering astrophysical, collider and flavour data.
• We investigate robotic assistants for dressing that can anticipate the motion of the person who is being helped. To this end, we use reinforcement learning to create models of human behavior during assistance with dressing. To explore this kind of interaction, we assume that the robot presents an open sleeve of a hospital gown to a person, and that the person moves their arm into the sleeve. The controller that models the person's behavior is given the position of the end of the sleeve and information about contact between the person's hand and the fabric of the gown. We simulate this system with a human torso model that has realistic joint ranges, a simple robot gripper, and a physics-based cloth model for the gown. Through reinforcement learning (specifically the TRPO algorithm) the system creates a model of human behavior that is capable of placing the arm into the sleeve. We aim to model what humans are capable of doing, rather than what they typically do. We demonstrate successfully trained human behaviors for three robot-assisted dressing strategies: 1) the robot gripper holds the sleeve motionless, 2) the gripper moves the sleeve linearly towards the person from the front, and 3) the gripper moves the sleeve linearly from the side.
• The goal of this paper is to present an end-to-end, data-driven framework to control Autonomous Mobility-on-Demand systems (AMoD, i.e. fleets of self-driving vehicles). We first model the AMoD system using a time-expanded network, and present a formulation that computes the optimal rebalancing strategy (i.e., preemptive repositioning) and the minimum feasible fleet size for a given travel demand. Then, we adapt this formulation to devise a Model Predictive Control (MPC) algorithm that leverages short-term demand forecasts based on historical data to compute rebalancing strategies. We test the end-to-end performance of this controller with a state-of-the-art LSTM neural network to predict customer demand and real customer data from DiDi Chuxing: we show that this approach scales very well for large systems (indeed, the computational complexity of the MPC algorithm does not depend on the number of customers and vehicles in the system) and outperforms state-of-the-art rebalancing strategies by reducing the mean customer wait time by up to 89.6%.
• The arXiv is the most popular preprint repository in the world. Since its inception in 1991, the arXiv has allowed researchers to freely share publication-ready articles prior to formal peer review. The growth and the popularity of the arXiv emerged as a result of new technologies that made document creation and dissemination easy, and cultural practices in which collaboration and data sharing were dominant. The arXiv represents a unique place in the history of research communication and the Web itself; however, it has arguably changed very little since its creation. Here we look at the strengths and weaknesses of the arXiv in an effort to identify what possible improvements can be made based on new technologies not previously available. Based on this, we argue that a modern arXiv might in fact not look at all like the arXiv of today.
• Anyons are exotic quasi-particles with fractional charge that can emerge as fundamental excitations of strongly interacting topological quantum phases of matter. Unlike ordinary fermions and bosons, they may obey non-abelian statistics--a property that would help realize fault tolerant quantum computation. Non-abelian anyons have long been predicted to occur in the fractional quantum Hall (FQH) phases that form in two-dimensional electron gases (2DEG) in the presence of a large magnetic field, such as the $\nu=\tfrac{5}{2}$ FQH state. However, direct experimental evidence of anyons and tests that can distinguish between abelian and non-abelian quantum ground states with such excitations have remained elusive. Here we propose a new experimental approach to directly visualize the structure of interacting electronic states of FQH states with the scanning tunneling microscope (STM). Our theoretical calculations show how spectroscopy mapping with the STM near individual impurity defects can be used to image fractional statistics in FQH states, identifying unique signatures in such measurements that can distinguish different proposed ground states. The presence of locally trapped anyons should leave distinct signatures in STM spectroscopic maps, and enables a new approach to directly detect - and perhaps ultimately manipulate - these exotic quasi-particles.
• It is demonstrated that fermionic/bosonic symmetry-protected topological (SPT) phases across different dimensions and symmetry classes can be organized using geometric constructions that increase dimensions and symmetry-forgetting maps that change symmetry groups. Specifically, it is shown that the interacting classifications of SPT phases with and without glide symmetry fit into a short exact sequence, so that the classification with glide is constrained to be a direct sum of cyclic groups of order 2 or 4. Applied to fermionic SPT phases in the Wigner-Dyson class AII, this implies that the complete interacting classification in the presence of glide is ${\mathbb Z}_4{\oplus}{\mathbb Z}_2{\oplus}{\mathbb Z}_2$ in 3 dimensions. In particular, the hourglass-fermion phase recently realized in the band insulator KHgSb must be robust to interactions. Generalizations to spatiotemporal glide symmetries are discussed.
• We study the problem of learning description logic (DL) ontologies in Angluin et al.'s framework of exact learning via queries. We admit membership queries ("is a given subsumption entailed by the target ontology?") and equivalence queries ("is a given ontology equivalent to the target ontology?"). We present three main results: (1) ontologies formulated in (two relevant versions of) the description logic DL-Lite can be learned with polynomially many queries of polynomial size; (2) this is not the case for ontologies formulated in the description logic EL, even when only acyclic ontologies are admitted; and (3) ontologies formulated in a fragment of EL related to the web ontology language OWL 2 RL can be learned in polynomial time. We also show that neither membership nor equivalence queries alone are sufficient in cases (1) and (3).
• Although deep Convolutional Neural Networks (CNNs) have shown better performance in various machine learning tasks, their application is accompanied by a significant increase in storage and computation. Among CNN simplification techniques, parameter pruning is a promising approach that aims at reducing the number of weights of various layers without substantially reducing the original accuracy. In this paper, we propose a novel progressive parameter pruning method, named Structured Probabilistic Pruning (SPP), which efficiently prunes weights of convolutional layers in a probabilistic manner. Unlike existing deterministic pruning approaches, in which the pruned weights of a well-trained model are permanently eliminated, SPP utilizes the relative importance of weights during training iterations, which makes the pruning procedure more accurate by leveraging the accumulated weight importance. Specifically, we introduce an effective weight competition mechanism to emphasize the important weights and gradually undermine the unimportant ones. Experiments indicate that our proposed method achieves superior performance on ConvNet and AlexNet compared with existing pruning methods. Our pruned AlexNet achieves a 4.0$\sim$8.9x (5.8x on average) layer-wise speedup in convolutional layers with only a 1.3% top-5 error increase on the ImageNet-2012 validation dataset. We also demonstrate the effectiveness of our method in transfer learning scenarios using AlexNet.
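The contrast with deterministic pruning can be illustrated with a minimal sketch: instead of permanently removing the smallest weights, a pruning mask is sampled so that low-importance weights are merely more likely to be zeroed. This is a much-simplified stand-in for SPP's weight-competition mechanism; all names are illustrative.

```python
import numpy as np

def probabilistic_prune(weights, importance, prune_frac, rng):
    """Sample a pruning mask biased toward low-importance weights.
    No weight is eliminated deterministically, so a weight that regains
    importance across training iterations can survive later rounds."""
    p = 1.0 / (importance.ravel() + 1e-12)   # higher prob. for less important
    p /= p.sum()
    n_prune = int(prune_frac * weights.size)
    idx = rng.choice(weights.size, size=n_prune, replace=False, p=p)
    pruned = weights.ravel().copy()
    pruned[idx] = 0.0
    return pruned.reshape(weights.shape)
```

Repeating such a sampled-mask step between training iterations, with importance accumulated over time, is the progressive aspect the abstract describes.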
• Social networks and interactions in social media involve both positive and negative relationships. Signed graphs capture both types of relationships: positive edges correspond to pairs of "friends", and negative edges to pairs of "foes". The edge sign prediction problem, that aims to predict whether an interaction between a pair of nodes will be positive or negative, is an important graph mining task for which many heuristics have recently been proposed [Leskovec 2010]. We model the edge sign prediction problem as follows: we are allowed to query any pair of nodes whether they belong to the same cluster or not, but the answer to the query is corrupted with some probability $0<q<\frac{1}{2}$. Let $\delta=1-2q$ be the bias. We provide an algorithm that recovers all signs correctly with high probability in the presence of noise for any constant gap $\delta$ with $O(\frac{n\log n}{\delta^4})$ queries. Our algorithm uses breadth first search as its main algorithmic primitive. A byproduct of our proposed learning algorithm is the use of $s-t$ paths as an informative feature to predict the sign of the edge $(s,t)$. As a heuristic, we use edge disjoint $s-t$ paths of short length as a feature for predicting edge signs in real-world signed networks. Our findings suggest that the use of paths improves the classification accuracy, especially for pairs of nodes with no common neighbors.
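The path-based heuristic in the abstract can be sketched directly: multiply the (possibly corrupted) signs along each edge-disjoint $s$-$t$ path and take a majority vote. The graph, signs and function name below are illustrative; the paper's recovery algorithm and its query analysis are more involved.

```python
def predict_sign(paths, edge_sign):
    """Majority vote over edge-disjoint s-t paths: the product of edge
    signs along a path estimates whether the endpoints are 'friends'
    (same cluster, +1) or 'foes' (different clusters, -1)."""
    votes = []
    for path in paths:
        prod = 1
        for u, v in zip(path, path[1:]):
            prod *= edge_sign[frozenset((u, v))]
        votes.append(prod)
    return 1 if sum(votes) > 0 else -1
```

Because each path gives an independent noisy estimate, a single corrupted edge is outvoted when enough short disjoint paths exist, which is why paths help most for node pairs with several connections.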
• Big data analytics has an extremely significant impact on many areas across businesses and industries, including hospitality. This study aims to guide information technology (IT) professionals in hospitality on their big data expedition. In particular, its purpose is to identify the maturity stage of big data in the hospitality industry in an objective way, so that hotels can understand their progress and realize what it will take to reach the next stage of big data maturity through the scores they receive from the survey.
• We present X-ray light curves of Cygnus X-3 as measured by the recently launched AstroSat satellite. The light curve folded over the binary period of 4.8 hours shows a remarkable stability over the past 45 years and we find that we can use this information to measure the zero point to better than 100 s. We revisit the historical binary phase measurements and examine the stability of the binary period over 45 years. We present a new binary ephemeris with the period and period derivative determined to an accuracy much better than previously reported. We do not find any evidence for a second derivative in the period variation. The precise binary period measurements, however, indicate a hint of short term episodic variations in periods. Interestingly, these short term period variations coincide with the period of enhanced jet activity exhibited by the source. We discuss the implications of these observations on the nature of the binary system.
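The core operation behind this analysis, folding a light curve on the orbital period, can be sketched in a few lines of numpy (a generic sketch; function names, binning and the synthetic data are illustrative, not the AstroSat pipeline).

```python
import numpy as np

def fold_light_curve(times, fluxes, period, t0=0.0, n_bins=16):
    """Fold a light curve on a binary period and average the flux in phase
    bins, as done for the 4.8-hour orbital modulation of Cygnus X-3."""
    phase = ((np.asarray(times) - t0) / period) % 1.0
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    fluxes = np.asarray(fluxes)
    return np.array([fluxes[bins == b].mean() for b in range(n_bins)])
```

Shifting `t0` to maximize alignment of the folded minimum is, in essence, how a zero point (and hence period drift over decades of data) is measured.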
• (Sep 22 2017, hep-ph nucl-th, arXiv:1709.07440v1) We reexamine the structure of the $n=2$ levels of muonic hydrogen using a two-body potential that includes all relativistic and one loop corrections. The potential was originally derived from QED to describe the muonium atom and accounts for all contributions to order $\alpha^5$. Since one loop corrections are included, the anomalous magnetic moment contributions of the muon can be identified and replaced by the proton anomalous magnetic moment to describe muonic hydrogen with a point-like proton. This serves as a convenient starting point to include the dominant electron vacuum polarization corrections to the spectrum and extract the proton's mean squared radius $\langle r^2\rangle$. Our results are consistent with other theoretical calculations that find that the muonic hydrogen value for $\langle r^2\rangle$ is smaller than the result obtained from electron scattering.
• Sep 22 2017 hep-th hep-lat hep-ph arXiv:1709.07436v1
We investigate the short distance fate of distinct classes of not asymptotically free supersymmetric gauge theories. Examples include super QCD with two adjoint fields and generalised superpotentials, gauge theories without superpotentials and with two types of matter representation and quiver theories. We show that an asymptotically safe scenario is nonperturbatively compatible with all known constraints.
• We evaluate the master integrals for the two-loop, planar box-diagrams contributing to the elastic scattering of muons and electrons at next-to-next-to-leading order in QED. We adopt the method of differential equations and the Magnus exponential series to determine a canonical set of integrals, finally expressed as a Taylor series around four space-time dimensions, with coefficients written as combinations of generalised polylogarithms. The electron is treated as massless, while we retain full dependence on the muon mass. The considered integrals are also relevant for crossing-related processes, such as di-muon production at $e^+ e^-$-colliders, as well as for the QCD corrections to top-pair production at hadron colliders.
• Social media serves as a unified platform for users to express their thoughts on subjects ranging from their daily lives to their opinion on consumer brands and products. These users wield an enormous influence in shaping the opinions of other consumers and influence brand perception, brand loyalty and brand advocacy. In this paper, we analyze the opinion of 19M Twitter users towards 62 popular industries, encompassing 12,898 enterprise and consumer brands, as well as associated subject matter topics, via sentiment analysis of 330M tweets over a period spanning a month. We find that users tend to be most positive towards manufacturing and most negative towards service industries. In addition, they tend to be more positive or negative when interacting with brands than generally on Twitter. We also find that sentiment towards brands within an industry varies greatly and we demonstrate this using two industries as use cases. In addition, we discover that there is no strong correlation between topic sentiments of different industries, demonstrating that topic sentiments are highly dependent on the context of the industry that they are mentioned in. We demonstrate the value of such an analysis in order to assess the impact of brands on social media. We hope that this initial study will prove valuable for both researchers and companies in understanding users' perception of industries, brands and associated topics and encourage more research in this field.
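The abstract does not describe the sentiment pipeline itself, so purely as an illustration of per-industry sentiment aggregation, here is a crude lexicon-based sketch; the lexicon, sample tweets, and function names are all invented, and a real analysis at this scale would use a trained classifier.

```python
from collections import defaultdict

# Tiny hypothetical sentiment lexicon; real pipelines use trained models.
LEXICON = {"love": 1, "great": 1, "good": 1, "bad": -1, "terrible": -1, "hate": -1}

def tweet_score(text):
    """Sum lexicon scores of the words in a tweet (very crude polarity)."""
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())

def industry_sentiment(tweets):
    """Average polarity per industry from (industry, text) pairs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for industry, text in tweets:
        totals[industry] += tweet_score(text)
        counts[industry] += 1
    return {k: totals[k] / counts[k] for k in totals}

sample = [
    ("airlines", "I hate delays, terrible service"),
    ("autos", "Love my new car, great mileage"),
    ("autos", "Good value"),
]
print(industry_sentiment(sample))  # autos averages positive, airlines negative
```

Aggregating scores per industry rather than per tweet is what makes cross-industry comparisons like "manufacturing vs. service" possible.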
• Boundary conformal field theories have several additional terms in the trace anomaly of the stress tensor associated purely with the boundary. We constrain the corresponding boundary central charges in three- and four-dimensional conformal field theories in terms of two- and three-point correlation functions of the displacement operator. We provide a general derivation by comparing the trace anomaly with scale dependent contact terms in the correlation functions. We conjecture a relation between the a-type boundary charge in three dimensions and the stress tensor two-point function near the boundary. We check our results for several free theories.
• At the beginning of 2016, LIGO reported the first-ever direct detection of gravitational waves. The measured signal was compatible with the merger of two black holes of about 30 solar masses, releasing about 3 solar masses of energy in gravitational waves. We consider the possible neutrino emission from a binary black hole merger relative to the energy released in gravitational waves and investigate the constraints coming from the non-detection of counterpart neutrinos, focusing on IceCube and its energy range. The information from searches for counterpart neutrinos is combined with the diffuse astrophysical neutrino flux in order to put bounds on neutrino emission from binary black hole mergers. Prospects for future LIGO observation runs are shown and compared with model predictions.
• Neutron stars in X-ray binary systems are fascinating objects that display a wide range of timing and spectral phenomena in the X-rays. Not only do parameters of the neutron stars, like magnetic field strength and spin period, evolve in their active binary phase; the neutron stars also affect the binary systems and their immediate surroundings in many ways. Here we discuss some aspects of the interactions of the neutron stars with their environments that are revealed from their X-ray emission. We discuss some recent developments involving the process of accretion onto high magnetic field neutron stars: accretion stream structure and formation, and the shape of the pulse profile and its changes with accretion torque. Various recent studies of the reprocessing of X-rays in the accretion disk surface, the vertical structures of the accretion disk, and the wind of the companion star are also discussed here. The X-ray pulsars among the binary neutron stars provide an excellent handle for accurate measurement of the orbital parameters, and thus also of the evolution of the binary orbits, which takes place over time scales of a fraction of a million years to tens of millions of years. The orbital period evolution of X-ray binaries has shown them to be rather complex systems. Orbital evolution of X-ray binaries can also be tracked from timing of the X-ray eclipses, and there have been some surprising results in that direction, including orbital period glitches in two X-ray binaries and the possible detection of the most massive circum-binary planet around a Low Mass X-ray Binary.
• We present a comprehensive renormalisation group analysis of the Littlest Seesaw model involving two right-handed neutrinos and a very constrained Dirac neutrino Yukawa coupling matrix. We perform the first $\chi^2$ analysis of the low energy masses and mixing angles, in the presence of renormalisation group corrections, for various right-handed neutrino masses and mass orderings, both with and without supersymmetry. We find that the atmospheric angle, which is predicted to be near maximal in the absence of renormalisation group corrections, may receive significant corrections for some non-supersymmetric cases, bringing it into close agreement with the current best fit value in the first octant. By contrast, in the presence of supersymmetry, the renormalisation group corrections are relatively small, and the prediction of a near maximal atmospheric mixing angle is maintained, for the studied cases. Forthcoming results from T2K and NOvA will decisively test these models at a precision comparable to the renormalisation group corrections we have calculated.
• We present results from a study of seven large known head-tail radio galaxies based on observations using the Giant Metrewave Radio Telescope at 240 and 610 MHz. These observations are used to study the radio morphologies and the distribution of spectral indices across the sources. The overall morphology of the radio tails of these sources is suggestive of random motions of the optical host around the cluster potential. The presence of multiple bends and wiggles in several head-tail sources is possibly due to precessing radio jets. We find steepening of the spectral index along the radio tails. The prevailing equipartition magnetic field also decreases along the radio tails of these sources. These steepening trends are attributed to the synchrotron aging of plasma toward the ends of the tails. The dynamical ages of these sample sources are estimated to be ~100 Myr, a factor of six more than the age estimates from radiative losses due to synchrotron cooling.
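The spectral-index steepening can be illustrated with the standard two-point estimate between the two observing bands. Only the 240 and 610 MHz frequencies come from the abstract; the flux densities below are hypothetical.

```python
import math

def spectral_index(s_low, s_high, nu_low=240e6, nu_high=610e6):
    """Two-point spectral index alpha, with the convention S ∝ ν^alpha,
    from flux densities at two frequencies (Hz)."""
    return math.log(s_high / s_low) / math.log(nu_high / nu_low)

# Hypothetical flux densities (Jy) at two positions along a tail:
head = spectral_index(1.00, 0.55)   # alpha ≈ -0.64 near the head
tail = spectral_index(0.40, 0.12)   # alpha ≈ -1.29 down the tail
```

A more negative alpha toward the tail end is the signature of synchrotron aging: the highest-energy electrons radiate away their energy first, depressing the high-frequency flux of the oldest plasma.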
• We show that an everywhere regular foliation $\mathcal F$ with compact canonically polarized leaves on a quasi-projective manifold $X$ has an isotrivial family of leaves when the orbifold base of this family is special. By a recent work of Berndtsson, Paun and Wang, the same proof works in the case where the leaves have trivial canonical bundle. The specialness condition means that the $p$-th exterior power of the logarithmic extension of its conormal bundle does not contain any rank-one subsheaf of maximal Kodaira dimension $p$, for any $p>0$. This condition is satisfied, for example, in the very particular case when the Kodaira dimension of the determinant of the logarithmic extension of the conormal bundle vanishes. Motivating examples are given by the `algebraically coisotropic' submanifolds of irreducible hyperkähler projective manifolds.
• Lifetimes of complexes formed during ultracold collisions are of current experimental interest as a possible cause of trap loss in ultracold gases of alkali-dimers. Microsecond lifetimes for complexes formed during ultracold elastic collisions of K2 with Rb are reported, from numerically-exact quantum-scattering calculations. The reported lifetimes are compared with those calculated using a simple density-of-states approach, which are shown to be reasonable. Long-lived complexes correspond to narrow scattering resonances which we examine for the statistical signatures of quantum chaos, finding that the positions and widths of the resonances follow the Wigner-Dyson and Porter-Thomas distributions respectively.
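The statistical signatures mentioned can be reproduced numerically. The sketch below (not from the paper, which analyzes scattering resonances rather than random matrices) draws Gaussian Orthogonal Ensemble matrices and checks that their nearest-neighbour level spacings show the Wigner-Dyson hallmark of level repulsion:

```python
import numpy as np

rng = np.random.default_rng(1)

def goe_spacings(n=200, trials=40):
    """Nearest-neighbour spacings of GOE eigenvalues from the bulk of the
    spectrum, crudely unfolded to unit mean spacing."""
    all_s = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / np.sqrt(2)        # real symmetric GOE matrix
        ev = np.linalg.eigvalsh(h)
        mid = ev[n // 4: 3 * n // 4]      # bulk of the semicircle only
        s = np.diff(mid)
        all_s.append(s / s.mean())        # unfold to unit mean spacing
    return np.concatenate(all_s)

s = goe_spacings()
# The Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4) vanishes at s = 0
# (level repulsion), whereas Poisson statistics would give P(s < 0.1) ≈ 0.095.
frac_small = np.mean(s < 0.1)
```

A chaotic collision complex shows the same repulsion in its resonance positions, while resonance widths follow the Porter-Thomas distribution; uncorrelated (regular) spectra would instead cluster near zero spacing.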
• Sep 22 2017 astro-ph.HE arXiv:1709.07418v1
This review focuses on the physics of Gamma Ray Bursts probed through their radio afterglow emission. Even though the radio band is the least explored part of the afterglow spectrum, it has played an important role in the progress of GRB physics, specifically in confirming the hypothesized relativistic effects. Radio astronomy is currently at the beginning of a revolution: the highly sensitive Square Kilometer Array (SKA) is being planned, its precursors and pathfinders are about to be operational, and several existing instruments are undergoing upgrades. Thus, afterglow detection statistics and results from follow-up programs are expected to improve in the coming years. We list a few avenues unique to the radio band which, if explored to their full potential, promise to contribute greatly to the future of GRB physics.
• In this note, we prove the completeness of Bethe vectors for the six vertex model with diagonal reflecting boundary conditions. We show that as the inhomogeneity parameters are sent to infinity in a successive order, the Bethe vectors give a complete basis of the space of states.
• In this document, we present the global QCD analysis of parton-to-kaon fragmentation functions at next-to-leading order accuracy using the latest experimental information on single-inclusive kaon production in electron-positron annihilation, lepton-nucleon deep-inelastic scattering, and proton-proton collisions. An extended analysis of this work can be found in Ref.[1].
• This paper is the first of a series of papers that establish a common analogue of the strong component and basilica decompositions for bidirected graphs. A bidirected graph is a graph in which a sign $+$ or $-$ is assigned to each end of each edge, and is therefore a common generalization of digraphs and signed graphs. Unlike in digraphs, the reachabilities between vertices by directed trails and by directed paths are not equal in general bidirected graphs. In this paper, we set up an analogue of the strong connectivity theory for bidirected graphs regarding directed trails, motivated by factor theory. We define the new concepts of *circular connectivity* and *circular components* as generalizations of strong connectivity and strong components. In our main theorem, we characterize the inner structure of each circular component; we define a certain binary relation between vertices in terms of the circular connectivity and prove that this relation is an equivalence relation. The nontrivial aspect of this structure arises from directed trails starting and ending with the same sign, and is therefore characteristic of bidirected graphs that are not digraphs. This structure can be considered as an analogue of the general Kotzig-Lovász decomposition, a known canonical decomposition in $1$-factor theory. From our main theorem, we also obtain a new result in $b$-factor theory, namely, a $b$-factor analogue of the general Kotzig-Lovász decomposition.
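The sign machinery can be made concrete with a small reachability sketch. The sign convention below is one of several used in the literature (assumed here, not taken from the paper): a directed walk leaves its start vertex through a $-$ end, must leave each internal vertex through the end-sign opposite to the one it arrived at, and counts a vertex as reached when it arrives there at a $+$ end. This computes walk reachability only; the trail-versus-path distinction the paper studies needs extra bookkeeping over edge reuse.

```python
from collections import defaultdict, deque

def reachable(edges, start):
    """Vertices reachable from `start` by sign-alternating directed walks.

    edges: list of (u, sign_u, v, sign_v) with signs '+' or '-'.
    A digraph edge u->v is encoded as (u, '-', v, '+').
    """
    adj = defaultdict(list)
    for u, su, v, sv in edges:
        adj[u].append((su, v, sv))
        adj[v].append((sv, u, su))
    seen, queue = set(), deque()
    # A directed walk must leave its start vertex through a '-' end.
    for leave, nxt, arrive in adj[start]:
        if leave == "-" and (nxt, arrive) not in seen:
            seen.add((nxt, arrive))
            queue.append((nxt, arrive))
    while queue:
        vtx, arrived = queue.popleft()
        for leave, nxt, arrive in adj[vtx]:
            if leave == arrived:
                continue  # must depart through the opposite end-sign
            if (nxt, arrive) not in seen:
                seen.add((nxt, arrive))
                queue.append((nxt, arrive))
    # Reached = arrived somewhere through a '+' end.
    return {v for v, sgn in seen if sgn == "+"}

# The digraph a->b->c encoded as bidirected edges:
digraph = [("a", "-", "b", "+"), ("b", "-", "c", "+")]
print(sorted(reachable(digraph, "a")))  # → ['b', 'c']
```

On digraph-encoded inputs this recovers ordinary directed reachability, while edges signed $++$ or $--$ produce the sign-flipping behaviour that makes general bidirected graphs richer than digraphs.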

James Wootton Sep 21 2017 05:41 UTC

What does this imply for https://scirate.com/arxiv/1608.00263? I'm guessing they still regard it as valid (it is ref [14]), but just too hard to implement for now.

Ben Criger Sep 08 2017 08:09 UTC

Oh look, there's another technique for decoding surface codes subject to X/Z correlated errors: https://scirate.com/arxiv/1709.02154

Aram Harrow Sep 06 2017 07:54 UTC

The paper only applies to conformal field theories, and such a result cannot hold for more general 1-D systems by 0705.4077 and other papers (assuming standard complexity theory conjectures).

Felix Leditzky Sep 05 2017 21:27 UTC

Thanks for the clarification, Philippe!

Philippe Faist Sep 05 2017 21:09 UTC

Hi Felix, thanks for the good question.

We've found it more convenient to consider trace-nonincreasing and $\Gamma$-sub-preserving maps (and this is justified by the fact that they can be dilated to fully trace-preserving and $\Gamma$-preserving maps on a larger system). The issue arises because

...(continued)
Felix Leditzky Sep 05 2017 19:02 UTC

What is the reason/motivation to consider trace-non-increasing maps instead of trace-preserving maps in your framework and the definition of the coherent relative entropy?

Steve Flammia Aug 30 2017 22:30 UTC

Thanks for the reference Ashley. If I understand your paper, you are still measuring stabilizers of X- and Z-type at the top layer of the code. So it might be that we can improve on the factor of 2 that you found if we tailor the stabilizers to the noise bias at the base level.

Ashley Aug 30 2017 22:09 UTC

We followed Aliferis and Preskill's approach in [https://arxiv.org/abs/1308.4776][1] and found that the fault-tolerant threshold for the surface code was increased by approximately a factor of two, from around 0.75 per cent to 1.5 per cent for a bias of 10 to 100.

[1]: https://arxiv.org/abs/1308.

...(continued)
Stephen Bartlett Aug 30 2017 21:55 UTC

Following on from Steve's comments, it's possible to use the bias-preserving gate set in Aliferis and Preskill directly to do the syndrome extraction, as you build up a CNOT gadget, but such a direct application of your methods would be very complicated and involve a lot of gate teleportation. If y

...(continued)
Steve Flammia Aug 30 2017 21:38 UTC

We agree that finding good syndrome extraction circuits is an important question. At the moment we do not have such circuits, though we have started to think about them. We are optimistic that this can be done in principle, but it remains to be seen if the circuits can be made sufficiently simple to

...(continued)