# Top arXiv papers

• It is universally accepted that the quantum no-cloning theorem was not officially discovered until 1982. I show here that an article published in 1970 [J. L. Park, Foundations of Physics, 1, 23-33 (1970)] contained an explicit proof of the theorem. Park's demonstration has been overlooked until now, and the paper remains virtually unknown. The reasons for and implications of this fact are analyzed in the light of existing explanations of the genesis of the theorem.
• We present an example of a Thermal Operation for a system of $d>1$ energy levels which cannot be performed without instant access to the whole energy space. Pursuing the question of the decomposability of global Thermal Operations into convex combinations of processes acting non-trivially on smaller subspaces, we investigate the set of Thermal Operations for transitions within the subspace of states diagonal in the energy basis. For 3-level systems, we determine the set of extremal points of these operations and connect it with the thermo-majorization criterion. In particular, we show that the structure of the set depends on temperature. Finally, we connect a low-temperature realization in 3-level systems of the non-decomposable operation introduced at the beginning with the higher-temperature extremal points.
• We discuss the connection between the incompatibility of quantum measurements, as captured by the notion of joint measurability, and the violation of Bell inequalities. Specifically, we explicitly present a set of non-jointly-measurable POVMs $\mathcal{M}_A$ with the following property. Considering a bipartite Bell test where Alice uses $\mathcal{M}_A$, then for any possible shared entangled state $\rho$ and any set of (possibly infinitely many) POVMs $\mathcal{N}_B$ performed by Bob, the resulting statistics admit a local model and can thus never violate any Bell inequality. This shows that quantum measurement incompatibility does not imply Bell nonlocality in general.
• To realize long-distance quantum communication and quantum networks, it is required to have multiplexed quantum memory with many memory cells. Each memory cell needs to be individually addressable and independently accessible. Here we report an experiment that realizes a multiplexed DLCZ-type quantum memory with 225 individually accessible memory cells in a macroscopic atomic ensemble. As a key element for quantum repeaters, we demonstrate that entanglement with flying optical qubits can be stored into any neighboring memory cells and read out after a programmable time with high fidelity. Experimental realization of a multiplexed quantum memory with many individually accessible memory cells and programmable control of its addressing and readout marks an important step toward its application in quantum information technology.
• We derive a bound on the ability of a linear optical network to estimate a linear combination of independent phase shifts by using an arbitrary non-classical but unentangled input state, thereby elucidating the quantum resources required to obtain the Heisenberg limit with a multi-port interferometer. Our bound reveals that while linear networks can generate highly entangled states, they cannot effectively combine quantum resources that are well distributed across multiple modes for the purposes of metrology: in this sense linear networks endowed with well-distributed quantum resources behave classically. Conversely, our bound shows that linear networks can achieve the Heisenberg limit for distributed metrology when the input photons are hoarded in a small number of input modes, and we present an explicit scheme for doing so. Our results also have implications for measures of non-classicality.
• Many-body-localized (MBL) systems do not thermalize under their intrinsic dynamics. The athermality of MBL, we propose, can be harnessed for thermodynamic tasks. We illustrate by formulating an Otto engine cycle for a quantum many-body system. The system is ramped between a strongly localized MBL regime and a thermal (or weakly localized) regime. MBL systems' energy-level correlations differ from thermal systems'. This discrepancy enhances the engine's reliability, suppresses worst-case trials, and enables mesoscale engines to run in parallel in the thermodynamic limit. We estimate analytically and calculate numerically the engine's efficiency and per-cycle power. The efficiency mirrors the efficiency of the conventional thermodynamic Otto engine. The per-cycle power scales linearly with the system size and inverse-exponentially with a localization length. This work introduces a thermodynamic lens onto MBL, which, having been studied much recently, can now be applied in thermodynamic tasks.
• Exploiting the relative entropy of coherence, we isolate the coherent contribution to the energetics of a driven non-equilibrium quantum system. We prove that the irreversible work can be divided into a coherent and an incoherent part, which provides an operational criterion for quantifying the coherent contribution in a generic non-equilibrium transformation of a closed quantum system. We then study this contribution in two physical models, a driven qubit and a kicked rotor. In addition, we show that coherence generation is connected to the non-adiabaticity of a process, for which it gives the dominant contribution for slow-enough transformations. The amount of coherence generated in the energy eigenbasis is equivalent to the change in diagonal entropy, and here we show that it fulfills a fluctuation theorem.
• The efficient representation of quantum many-body states with classical resources is a key challenge in quantum many-body theory. In this work we analytically construct classical networks for the description of the quantum dynamics in transverse-field Ising models that can be solved efficiently using Monte-Carlo techniques. Our perturbative construction encodes time-evolved quantum states of spin-1/2 systems in a network of classical spins with local couplings and can be directly generalized to other spin systems and higher spins. Using this construction we compute the transient dynamics in one, two, and three dimensions including local observables, entanglement production, and Loschmidt amplitudes using Monte-Carlo algorithms and demonstrate the accuracy of this approach by comparisons to exact results. We include a mapping to equivalent artificial neural networks as introduced in [G. Carleo and M. Troyer, Science 355 (2017)].
• This experiment was conceived of as a method of transmitting information from inside a black hole to the outside. As it turns out, it doesn't work in the form described (and possibly not in any form), but the way in which Nature prevents quantum-mechanical effects from transmitting usable information using quantum correlations is illuminating. In the process, one can learn some quantum theory, as well as quantum optics. The proposed scheme uses a double-slit experiment in the manner of the Delayed Choice setup [Scully 1, Scully 2], where the region where the interference takes place (between "signal" photons) is spatially separated from the region where the Delayed Choice (with "idler" photons) is made. In this Double-Delayed Choice thought experiment, one of the idler photons slips inside the event horizon, serving as the attempted means of communicating from the inside to the outside.
• Falls are serious and costly for elderly people. The Centers for Disease Control and Prevention of the US reports that millions of people aged 65 and older fall at least once each year. About 20% of falls cause serious injuries such as hip fractures, broken bones, or head injuries. The time it takes to respond to and treat a fallen person is crucial. In this paper we present a new, non-invasive system for detecting fallen people. Our approach uses only stereo camera data for passively sensing the environment. The key novelty is a human fall detector which uses a CNN-based human pose estimator in combination with stereo data to reconstruct the human pose in 3D and estimate the ground plane in 3D. We have tested our approach in different scenarios covering most activities elderly people might encounter living at home. Based on our extensive evaluations, our system shows high accuracy and almost no misclassification. To reproduce our results, the implementation will be made publicly available to the scientific community.
• Motivated by gate set tomography, we study quantum channels from the perspective of information that is invariant under the gauge realized through similarity of the matrices representing the channel superoperators. We thus use the complex spectrum of the superoperator to provide necessary conditions for the complete positivity of qubit channels and to express various metrics such as the average gate fidelity.
• We study perfect state transfer in a discrete quantum walk. In particular, we show that there are infinitely many $4$-regular circulant graphs that admit perfect state transfer between antipodal vertices. To the best of our knowledge, previously there was no infinite family of $k$-regular graphs with perfect state transfer, for any $k\ge 3$.
• We present a non-commutative algorithm for multiplying $5 \times 5$ matrices using 99 multiplications. This algorithm is a minor modification of Makarov algorithm [4].
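The 99-multiplication scheme itself is too long to reproduce here, but the flavor of such non-commutative bilinear algorithms is captured by Strassen's classic 2x2 construction, which trades the naive 8 multiplications for 7. The sketch below is for illustration only; it is not the paper's 5x5 algorithm:

```python
def strassen_2x2(A, B):
    """Strassen's scheme: multiply 2x2 matrices with 7 multiplications.
    Only +, -, * of the entries are used, and no product commutes its
    operands, so the identities also hold for block (matrix) entries."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Because the seven products never rely on commutativity, the same identities apply recursively to block matrices, which is what makes non-commutative multiplication counts (7 for 2x2, 99 for 5x5) the quantity of interest.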
• With the ever-growing volume, complexity and dynamicity of online information, recommender systems are a key and effective solution to such information overload. In recent years, deep learning's revolutionary advances in speech recognition, image analysis and natural language processing have drawn significant attention. Meanwhile, recent studies also demonstrate its effectiveness in coping with information retrieval and recommendation tasks. Applying deep learning techniques to recommender systems has been gaining momentum due to its state-of-the-art performance and high-quality recommendations. In contrast to traditional recommendation models, deep learning provides a better understanding of users' demands, items' characteristics, and the historical interactions between them. This article provides a comprehensive review of recent research efforts on deep learning based recommender systems, towards fostering innovation in recommender system research. A taxonomy of deep learning based recommendation models is presented and used to categorise the surveyed articles. Open problems are identified based on an insightful analysis of the reviewed works, and potential solutions are discussed.
• Nanoscopic protein machines can store and manipulate information. We show that this occurs when their intramolecular stochastic dynamics of conformational transitions enable performing work in many randomly selected ways. A sample model of such dynamics, specified by a critical complex network, is investigated by computer simulations. For this model, the generalized fluctuation theorem is proven to hold, with a possible entropy reduction at the expense of information creation. Information creation and storage take place in the transient, nonergodic stages of dynamics before the free energy transduction cycle is completed. From the biological perspective, two suppositions could be of special importance: (1) a partial compensation of entropy production by information creation is the reason why most protein machines operate as dimers or higher organized assemblies, and (2) nonergodicity is essential for transcription factors in the search for their target on DNA. From a broader physical perspective, it is worth emphasizing the conjecture that, just as work and heat are changes in energy, information could be considered a change in fluctuating organization, which is also an adequately defined thermodynamic function of state.
• Blackbody radiation, emitted from a furnace and described by a Planck spectrum, contains (on average) an entropy of $3.9\pm 2.5$ bits per photon. Since normal physical burning is a unitary process, this amount of entropy is compensated by the same amount of "hidden information" in correlations between the photons. The importance of this result lies in the posterior extension of this argument to the Hawking radiation from black holes, demonstrating that the assumption of unitarity leads to a perfectly reasonable entropy/information budget for the evaporation process. In order to carry out this calculation we adopt a variant of the "average subsystem" approach, but consider a tripartite pure system that includes the influence of the rest of the universe, and which allows "young" black holes to still have a non-zero entropy; which we identify with the standard Bekenstein entropy.
• In this paper, we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removing states with fast leaving rates, which improves on the simplification method for finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain are the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We also give precise mathematical conditions for mRNAs and proteins to yield random bursts. It turns out that the random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind random bursts as an emergent behavior arising from the complex biochemical reaction kinetics.
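The elimination step behind such reductions can be sketched as follows. This is a minimal illustration of removing one fast state from a finite generator matrix, not the authors' infinite-state construction; the function name and the example chain are hypothetical. Each indirect transition i → k → j through the removed fast state k contributes its flux to the direct rate from i to j:

```python
import numpy as np

def eliminate_state(Q, k):
    """Remove state k from a CTMC generator Q (rows sum to 0) by
    redirecting every indirect transition i -> k -> j through the
    fast transition paths that pass via k."""
    n = Q.shape[0]
    keep = [i for i in range(n) if i != k]
    out = -Q[k, k]                      # total leaving rate of the fast state
    R = Q[np.ix_(keep, keep)].astype(float)
    for a, i in enumerate(keep):
        for b, j in enumerate(keep):
            # branching probability of k -> j is Q[k, j] / out
            R[a, b] += Q[i, k] * Q[k, j] / out
    return R
```

For a 3-state chain whose middle state leaves quickly, the reduced 2-state generator keeps rows summing to zero, and each effective rate is the direct rate plus the rate into the fast state times the branching probability out of it.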
• (Jul 25 2017, quant-ph, arXiv:1707.07340v1) The expected indefinite causal structure in quantum gravity poses a challenge to the notion of entanglement: if two parties are in an indefinite causal relation of being spacelike and timelike, can they still be entangled? If so, how does one measure the amount of entanglement? We propose to generalize the notions of entanglement and entanglement measure to address these questions. Incidentally but importantly, the generalization opens the path to studying quantum entanglement of states, channels, networks and processes with definite or indefinite causal structure in a unified fashion; e.g., we show that the entanglement distillation capacity of a state, the quantum communication capacity of a channel, and the entanglement generation capacity of a network or a process are different manifestations of one and the same entanglement measure.
• Current state-of-the-art human action recognition is focused on the classification of temporally trimmed videos in which only one action occurs per frame. In this work we address the problem of action localisation and instance segmentation in which multiple concurrent actions of the same class may be segmented out of an image sequence. We cast the action tube extraction as an energy maximisation problem in which configurations of region proposals in each frame are assigned a cost and the best action tubes are selected via two passes of dynamic programming. One pass associates region proposals in space and time for each action category, and another pass is used to solve for the tube's temporal extent and to enforce a smooth label sequence through the video. In addition, by taking advantage of recent work on action foreground-background segmentation, we are able to associate each tube with class-specific segmentations. We demonstrate the performance of our algorithm on the challenging LIRIS-HARL dataset and achieve a new state-of-the-art result which is 14.3 times better than previous methods.
• The motion of social insects constitutes a beautiful example of adaptive collective dynamics born out of apparently purposeless individual behavior. In this paper we revisit the topic of the ruling laws behind bursts of activity in ants. The analysis, done over previously reported data, reconsiders the proposed causation arrows, finding no link between the duration of an ant's activity and its moving speed. Secondly, synthetic trajectories created from the steps of different ants demonstrate that an additive stochastic process can explain the shape of the previously reported speed profile. Finally, we show that the more ants enter the nest, the faster they move, which implies a collective property. Overall these results provide a mechanistic explanation for the reported behavioral laws and suggest a formal way to further study collective properties in these scenarios.
• We revisit the generation of dark matter isocurvature perturbations in the curvaton model in greater detail, both analytically and numerically. As concrete examples, we investigate the cases of thermally decoupled dark matter and axionic dark matter. We show that the radiation produced by the decay of the curvaton, which has not been taken into account in previous analytical studies, can significantly affect the amplitude of isocurvature perturbations. In particular, we find that they are drastically suppressed even when the dark matter freeze-out (or the onset of the axion oscillations for axionic dark matter) occurs before the curvaton decays, provided the freeze-out takes place deep in the curvaton-dominated Universe. As a consequence, we show that the current observational isocurvature constraints on the curvaton parameters are not as severe as usually thought.
• Quantum networks are natural scenarios for the communication of information among distributed parties, and the arena of promising schemes for distributed quantum computation. Measurement-based quantum computing is a prominent example of how quantum networking, embodied by the generation of a special class of multipartite states called cluster states, can be used to achieve a powerful paradigm for quantum information processing. Here we analyze randomly generated cluster states in order to address the emergence of multipartite correlations as a function of the density of edges in a given underlying graph. We find that the most widespread multipartite entanglement does not correspond to the highest amount of edges in the cluster. We extend the analysis to higher dimensions, finding similar results, which suggest the establishment of small world structures in the entanglement sharing of randomised cluster states, which can be exploited in engineering more efficient quantum information carriers.
• We address the challenge of computing search paths in real-time for subsea applications where the goal is to locate an unknown number of targets on the seafloor. Our approach maximizes a formal definition of search effectiveness given finite search effort. We account for false positive measurements and variation in the performance of the search sensor due to geographic variation of the seafloor. We compare near-optimal search paths that can be computed in real-time with optimal search paths for which real-time computation is infeasible. We show how sonar data acquired for locating targets at a specific location can also be used to characterize the performance of the search sonar at that location. Our approach is illustrated with numerical experiments where search paths are planned using sonar data previously acquired from Boston Harbor.
• Discussion forums are an important source of information. They are often used to answer specific questions a user might have and to discover more about a topic of interest. Discussions in these forums may evolve in intricate ways, making it difficult for users to follow the flow of ideas. We propose a novel approach for automatically identifying the underlying thread structure of a forum discussion. Our approach is based on a neural model that computes coherence scores of possible reconstructions and then selects the highest scoring, i.e., the most coherent one. Preliminary experiments demonstrate promising results outperforming a number of strong baseline methods.
• We present a quantum algorithm to compute the entanglement spectrum of arbitrary quantum states. The interesting universal part of the entanglement spectrum is typically contained in the largest eigenvalues of the density matrix which can be obtained from the lower Renyi entropies through the Newton-Girard method. Obtaining the $p$ largest eigenvalues ($\lambda_1>\lambda_2\ldots>\lambda_p$) requires a parallel circuit depth of $\mathcal{O}(p(\lambda_1/\lambda_p)^p)$ and $\mathcal{O}(p\log(N))$ qubits where up to $p$ copies of the quantum state defined on a Hilbert space of size $N$ are needed as the input. We validate this procedure for the entanglement spectrum of the topologically-ordered Laughlin wave function corresponding to the quantum Hall state at filling factor $\nu=1/3$. Our scaling analysis exposes the tradeoffs between time and number of qubits for obtaining the entanglement spectrum in the thermodynamic limit using finite-size digital quantum computers. We also illustrate the utility of the second Renyi entropy in predicting a topological phase transition and in extracting the localization length in a many-body localized system.
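The classical post-processing step described above can be sketched as follows, assuming exact power sums Tr(ρ^k) for k = 1..p (e.g. exponentials of the measured Rényi entropies) and a state with exactly p nonzero eigenvalues; the function name is illustrative, not from the paper:

```python
import numpy as np

def largest_eigs_from_power_sums(p_sums, p):
    """Newton-Girard: turn power sums p_k = Tr(rho^k), k = 1..p, into the
    elementary symmetric polynomials e_1..e_p, then read the eigenvalues
    off as the roots of the characteristic polynomial."""
    e = [1.0]                                    # e_0 = 1
    for k in range(1, p + 1):
        # Newton's identity: k e_k = sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i
        acc = sum((-1) ** (i - 1) * e[k - i] * p_sums[i - 1]
                  for i in range(1, k + 1))
        e.append(acc / k)
    # eigenvalues are roots of x^p - e1 x^(p-1) + e2 x^(p-2) - ...
    coeffs = [(-1) ** k * e[k] for k in range(p + 1)]
    return sorted(np.roots(coeffs).real, reverse=True)
```

For a truncated spectrum (p largest eigenvalues of a larger density matrix), the same recursion gives an approximation whose quality degrades with the discarded spectral weight, consistent with the (λ1/λp)^p cost quoted in the abstract.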
• The computational complexity of solving a nonlinear support vector machine (SVM) is prohibitive on large-scale data. This issue becomes particularly acute when the data presents additional difficulties such as highly imbalanced class sizes. Typically, nonlinear kernels produce significantly higher classification quality than linear kernels but introduce extra kernel and model parameters. Thus, parameter fitting is required to increase the quality, but it reduces the performance dramatically. We introduce a generalized fast multilevel framework for SVM and discuss several versions of its algorithmic components that lead to a good trade-off between quality and time. Our framework is implemented using PETSc, which allows integration with scientific computing tasks. The experimental results demonstrate significant speed-up compared to the state-of-the-art SVM libraries.
• Is it possible to construct a dynamical system that simulates a black-box system without recovering the latter's equations of motion? Here we show that this goal can be approached with a learning machine. Trained on a set of input-output responses or a segment of a time series of a black-box system, a learning machine can serve as a copy system that mimics the dynamics of various black-box systems. It can not only behave as the black-box system at the parameter set for which the training data were generated, but also reproduce the evolution history of the black-box system. As a result, the learning machine provides an effective way to make predictions and enables one to probe the global dynamics of a black-box system. These findings have significance for practical systems whose equations of motion cannot be obtained accurately. Examples of copying the dynamics of an artificial neural network, the Lorenz system, and a variable star are given. Our idea paves a possible way toward copying a living brain.
• It has been shown that increasing model depth improves the quality of neural machine translation. However, different architectural variants to increase model depth have been proposed, and so far, there has been no thorough comparative study. In this work, we describe and evaluate several existing approaches to introduce depth in neural machine translation. Additionally, we explore novel architectural variants, including deep transition RNNs, and we vary how attention is used in the deep decoder. We introduce a novel "BiDeep" RNN architecture that combines deep transition RNNs and stacked RNNs. Our evaluation is carried out on the English to German WMT news translation dataset, using a single-GPU machine for both training and inference. We find that several of our proposed architectures improve upon existing approaches in terms of speed and translation quality. We obtain best improvements with a BiDeep RNN of combined depth 8, obtaining an average improvement of 1.5 BLEU over a strong shallow baseline. We release our code for ease of adoption.
• There have been some works that learn a lexicon together with the corpus to improve the word embeddings. However, they either model the lexicon separately but update the neural networks for both the corpus and the lexicon by the same likelihood, or minimize the distance between all of the synonym pairs in the lexicon. Such methods do not consider the relatedness and difference of the corpus and the lexicon, and may not be optimally trained. In this paper, we propose a novel method that considers the relatedness and difference of the corpus and the lexicon. It trains word embeddings by learning from the corpus to predict a word and its corresponding synonym under the same context. For polysemous words, we use a word sense disambiguation filter to eliminate the synonyms that have different meanings for the context. To evaluate the proposed method, we compare the performance of the word embeddings trained by our proposed model, the control groups without the filter or the lexicon, and the prior works on word similarity tasks and a text classification task. The experimental results show that the proposed model provides better embeddings for polysemous words and improves the performance for text classification.
• (Jul 25 2017, cs.DB, arXiv:1707.07623v1) To realize the premise of the Semantic Web towards knowledgeable machines, one might often integrate an application with emerging RDF graphs. Nevertheless, capturing the content of a rich and open RDF graph with existing tools requires both time and expertise. We demonstrate eLinda - an explorer for Linked Data. The challenge addressed by eLinda is that of understanding the rich content of a given RDF graph. The core functionality is an exploration path, where each step produces a bar chart (histogram) that visualizes the distribution of classes in a set of nodes (URIs). In turn, each bar represents a set of nodes that can be further expanded through the bar chart in the path. We allow three types of explorations: subclass distribution, property distribution, and object distribution for a property of choice.
• This technical report deals with the concept of an artificial DNA which contains a blueprint of the structure and organization of an embedded system. This blueprint can be used to build up the embedded system in a self-organizing manner at run-time. The report describes in detail the basic principles of the artificial DNA and its relationship to standard design methods for embedded systems. A prototypic implementation is presented and evaluated. Additionally, future work is described and a conclusion is given.
• Purpose: To develop a deep learning approach to digitally-stain optical coherence tomography (OCT) images of the optic nerve head (ONH). Methods: A horizontal B-scan was acquired through the center of the ONH using OCT (Spectralis) for 1 eye of each of 100 subjects (40 normal & 60 glaucoma). All images were enhanced using adaptive compensation. A custom deep learning network was then designed and trained with the compensated images to digitally stain (i.e. highlight) 6 tissue layers of the ONH. The accuracy of our algorithm was assessed (against manual segmentations) using the Dice coefficient, sensitivity, and specificity. We further studied how compensation and the number of training images affected the performance of our algorithm. Results: For images it had not yet assessed, our algorithm was able to digitally stain the retinal nerve fiber layer + prelamina, the retinal pigment epithelium, all other retinal layers, the choroid, and the peripapillary sclera and lamina cribrosa. For all tissues, the mean Dice coefficient was $0.84 \pm 0.03$, the mean sensitivity $0.92 \pm 0.03$, and the mean specificity $0.99 \pm 0.00$. Our algorithm performed significantly better when compensated images were used for training. Increasing the number of images (from 10 to 40) to train our algorithm did not significantly improve performance, except for the RPE. Conclusions: Our deep learning algorithm can simultaneously stain neural and connective tissues in ONH images. Our approach offers a framework to automatically measure multiple key structural parameters of the ONH that may be critical to improve glaucoma management.
• This article brings forward an estimation of the proportion of homonyms in large-scale groups, based on the distribution of first names and last names in a subset of these groups. The estimation is based on a generalization of the "birthday paradox problem". The main result is that, in societies such as France or the United States, identity collisions (based on first + last names) are frequent: the large majority of the population has at least one homonym. But in smaller settings it is much less frequent: even though small groups of a few thousand people contain at least one pair of homonyms, only a few individuals have a homonym.
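The generalized birthday estimate underlying such calculations can be sketched as follows. This is the standard Poisson approximation for non-uniform name frequencies, not necessarily the paper's exact formula, and the function name is illustrative:

```python
import math

def prob_at_least_one_homonym(n_people, name_probs):
    """Generalized birthday paradox: given the probability p_i of each
    (first + last) name combination, P(no collision among n people) is
    approximately exp(-C(n,2) * sum p_i^2) when n << 1/max(p_i)."""
    coincidence = sum(p * p for p in name_probs)   # P(two random people match)
    pairs = n_people * (n_people - 1) / 2
    return 1.0 - math.exp(-pairs * coincidence)
```

With 365 equally likely "names" and 23 people this reproduces the classic ~50% birthday collision probability; a skewed name distribution increases sum p_i^2 and hence makes collisions more likely at the same group size.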
• Deep neural networks have become a primary tool for solving problems in many fields. They are also used for addressing information retrieval problems and show strong performance in several tasks. Training these models requires large, representative datasets and for most IR tasks, such data contains sensitive information from users. Privacy and confidentiality concerns prevent many data owners from sharing the data, thus today the research community can only benefit from research on large-scale datasets in a limited manner. In this paper, we discuss privacy preserving mimic learning, i.e., using predictions from a privacy preserving trained model instead of labels from the original sensitive training data as a supervision signal. We present the results of preliminary experiments in which we apply the idea of mimic learning and privacy preserving mimic learning for the task of document re-ranking as one of the core IR tasks. This research is a step toward laying the ground for enabling researchers from data-rich environments to share knowledge learned from actual users' data, which should facilitate research collaborations.
• In this paper we propose a model to learn multimodal multilingual representations for matching images and sentences in different languages, with the aim of advancing multilingual versions of image search and image understanding. Our model learns a common representation for images and their descriptions in two different languages (which need not be parallel) by considering the image as a pivot between two languages. We introduce a new pairwise ranking loss function which can handle both symmetric and asymmetric similarity between the two modalities. We evaluate our models on image-description ranking for German and English, and on semantic textual similarity of image descriptions in English. In both cases we achieve state-of-the-art performance.
• (Jul 25 2017, math.DG, arXiv:1707.07595v1) We prove in a direct, geometric way that for any compatible Riemannian metric on a Lie manifold the injectivity radius is positive.
• This work addresses the task of generating English sentences from Abstract Meaning Representation (AMR) graphs. To cope with this task, we transform each input AMR graph into a structure similar to a dependency tree and annotate it with syntactic information by applying various predefined actions to it. Subsequently, a sentence is obtained from this tree structure by visiting its nodes in a specific order. We train maximum entropy models to estimate the probability of each individual action and devise an algorithm that efficiently approximates the best sequence of actions to be applied. Using a standard language model, our generator achieves a Bleu score of 27.4 on the LDC2014T12 test set, the best result reported so far without using silver standard annotations from another corpus as additional training data.
• Foreground segmentation in video sequences is a classic topic in computer vision. Due to the lack of semantic and prior knowledge, it is difficult for existing methods to deal with sophisticated scenes well. Therefore, in this paper, we propose an end-to-end two-stage deep convolutional neural network (CNN) framework for foreground segmentation in video sequences. In the first stage, a convolutional encoder-decoder sub-network is employed to reconstruct the background images and encode rich prior knowledge of background scenes. In the second stage, the reconstructed background and current frame are input into a multi-channel fully-convolutional sub-network (MCFCN) for accurate foreground segmentation. In the two-stage CNN, the reconstruction loss and segmentation loss are jointly optimized. The background images and foreground objects are output simultaneously in an end-to-end way. Moreover, by incorporating the prior semantic knowledge of foreground and background in the pre-training process, our method could restrain the background noise and keep the integrity of foreground objects at the same time. Experiments on CDNet 2014 show that our method outperforms the state-of-the-art by 4.9%.
• Purpose: To apply tracer kinetic models as temporal constraints during reconstruction of under-sampled dynamic contrast enhanced (DCE) MRI. Methods: A library of concentration vs. time profiles is simulated for a range of physiological kinetic parameters. The library is reduced to a dictionary of temporal bases, where each profile is approximated by a sparse linear combination of the bases. Image reconstruction is formulated as estimation of concentration profiles and sparse model coefficients with a fixed sparsity level. Simulations are performed to evaluate modeling error, and error statistics in kinetic parameter estimation in the presence of noise. Retrospective under-sampling experiments are performed on a brain tumor DCE digital reference object (DRO) at different signal-to-noise levels (SNR=20-40) at (k-t) space under-sampling factor (R=20), and 12 brain tumor in-vivo 3T datasets at (R=20-40). The approach is compared against an existing compressed sensing based temporal finite-difference (tFD) reconstruction approach. Results: Simulations demonstrate that sparsity levels of 2 and 3 model the library profiles from the Patlak and extended Tofts-Kety (ETK) models, respectively. Noise sensitivity analysis showed equivalent kinetic parameter estimation error statistics from noisy concentration profiles and model-approximated profiles. DRO-based experiments showed good fidelity in recovery of kinetic maps from 20-fold under-sampled data at SNRs between 10-30. In-vivo experiments demonstrated reduced bias and uncertainty in kinetic mapping with the proposed approach compared to tFD at R>=20. Conclusions: Tracer kinetic models can be applied as temporal constraints during DCE-MRI reconstruction, enabling more accurate reconstruction from under-sampled data. The approach is flexible, can use several kinetic models, and does not require tuning of regularization parameters.
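The dictionary-plus-fixed-sparsity idea above can be sketched in a few lines of numpy. This is a toy illustration, not the paper's implementation: the library here uses simple exponential uptake curves rather than true Patlak/ETK profiles, and the function names (`build_dictionary`, `sparse_approx`) are hypothetical.

```python
import numpy as np

def build_dictionary(profiles, n_bases):
    # Reduce the simulated library to a small set of temporal bases via SVD;
    # the left singular vectors serve as an orthonormal temporal dictionary.
    U, s, Vt = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :n_bases]

def sparse_approx(y, D, k):
    # Approximate a concentration profile y as a k-sparse combination of the
    # dictionary columns, using greedy orthogonal matching pursuit.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

With a smooth one-parameter family of uptake curves, a handful of SVD bases and a sparsity level of 2-3 already approximate unseen profiles in the family closely, which mirrors the paper's finding for the Patlak and ETK libraries.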
• Consideration of the entropy production in the creation of the CMB leads to a simple model of the evolution of the universe during this period which suggests a connection between the small observed acceleration term and the early inflation of a closed universe. From this we find an unexpected relationship between the Omegas of cosmology and calculate the total volume of the universe.
• The progression of breast cancer can be quantified in lymph node whole-slide images (WSIs). We describe a novel method for effectively performing classification of whole-slide images and patient-level breast cancer grading. Our method utilises a deep neural network. The method performs classification on small patches and uses model averaging for boosting. In the first step, region-of-interest patches are determined and cropped automatically by color thresholding and then classified by the deep neural network. The classification results are used to determine a slide-level class and for further aggregation to predict a patient-level grade. The fast processing speed of our method enables high-throughput image analysis.
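The region-of-interest step described above can be sketched as follows. This is a minimal stand-in, assuming patches are kept when their mean brightness falls below a threshold (tissue is darker than the white slide background); the function name and threshold are illustrative, not the paper's.

```python
import numpy as np

def roi_patches(image, patch=64, thresh=0.8):
    # Tile the slide into non-overlapping patches and keep the coordinates of
    # those whose mean intensity is below thresh (tissue rather than background).
    H, W, _ = image.shape
    kept = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            if p.mean() < thresh:
                kept.append((y, x))
    return kept
```

Only the kept patches would then be passed to the classifier, which is what makes the whole-slide pipeline fast.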
• We propose a methodology that adapts graph embedding techniques (DeepWalk (Perozzi et al., 2014) and node2vec (Grover and Leskovec, 2016)) as well as cross-lingual vector space mapping approaches (Least Squares and Canonical Correlation Analysis) in order to merge the corpus and ontological sources of lexical knowledge. We also perform a comparative analysis of the algorithms used in order to identify the best combination for the proposed system. We then apply this to the task of enhancing the coverage of an existing word embedding vocabulary with rare and unseen words. We show that our technique can provide considerable extra coverage (over 99%), leading to a consistent performance gain (around 10% absolute gain is achieved with w2v-gn-500K, cf. Section 3.3) on the Rare Word Similarity dataset.
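The Least Squares mapping mentioned above can be sketched simply: given pairs of vectors for words that exist in both spaces, fit a linear map and then use it to project vectors of words missing from the target space. The function names here are hypothetical.

```python
import numpy as np

def fit_linear_map(X, Y):
    # Least-squares matrix W minimizing ||X W - Y||_F over the anchor pairs,
    # where row i of X and row i of Y embed the same word in the two spaces.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def embed_unseen(x, W):
    # Project a vector from the source space (e.g. a graph embedding of an
    # ontology) into the target word embedding space.
    return x @ W
```

When the two spaces are related by a linear transformation and there are more anchor pairs than dimensions, the fitted map recovers that transformation, so unseen words land where they should in the target space.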
• Existing marker-less motion capture methods often assume known backgrounds, static cameras, and sequence-specific motion priors, which narrows their application scenarios. Here we propose a fully automatic method that, given multi-view video, estimates 3D human motion and body shape. We take the recent SMPLify method (Bogo et al., 2016) as the base method and extend it in several ways. First, we fit the body to 2D features detected in multi-view images. Second, we use a CNN method to segment the person in each image and fit the 3D body model to the contours to further improve accuracy. Third, we utilize a generic and robust DCT temporal prior to handle the left-right swapping issue sometimes introduced by the 2D pose estimator. Validation on standard benchmarks shows our results are comparable to the state of the art and also provide a realistic 3D shape avatar. We also demonstrate accurate results on HumanEva and on challenging dance sequences from YouTube in the monocular case.
• Feature selection plays an increasingly significant role in many computer vision applications, spanning from object recognition to visual object tracking. However, most recent feature selection solutions are not robust across different and heterogeneous sets of data. In this paper, we address this issue by proposing a robust probabilistic latent graph-based feature selection algorithm that performs the ranking step while considering all the possible subsets of features, as paths on a graph, bypassing the combinatorial problem analytically. An appealing characteristic of the approach is that it aims to discover an abstraction behind low-level sensory data, that is, relevancy. Relevancy is modelled as a latent variable in a PLSA-inspired generative process that allows the investigation of the importance of a feature when injected into an arbitrary set of cues. The proposed method has been tested on ten diverse benchmarks and compared against eleven state-of-the-art feature selection methods. Results show that the proposed approach attains the highest performance levels across many different scenarios and difficulties, confirming its strong robustness while setting a new state of the art in the feature selection domain.
• We investigate the effect of a global degeneracy on the distribution of the entanglement spectrum in conformal field theories in one spatial dimension. We relate the recently found universal expression for the entanglement hamiltonian to the distribution of the entanglement spectrum. The main tool for establishing this connection is the Cardy formula. It turns out that the Affleck-Ludwig non-integer degeneracy, appearing because of the boundary conditions induced at the entangling surface, can be read directly from the entanglement spectrum distribution. We also clarify the effect of the non-integer degeneracy on the spectrum of the partial transpose, which is the central object for quantifying the entanglement in mixed states. We show that exact knowledge of the entanglement spectrum in some integrable spin chains provides strong analytical evidence corroborating our results.
• We present a simple method for assessing the quality of generated images in Generative Adversarial Networks (GANs). The method can be applied to any kind of GAN without interfering with the learning procedure or affecting the learning objective. The central idea is to define a likelihood function that correlates with the quality of the generated images. In particular, we derive a Gaussian likelihood function from the distribution of the embeddings (hidden activations) of the real images in the discriminator, and based on this, define two simple measures of how likely it is that the embeddings of generated images are from the distribution of the embeddings of the real images. This yields a simple measure of fitness for generated images, for all varieties of GANs. Empirical results on CIFAR-10 demonstrate a strong correlation between the proposed measures and the perceived quality of the generated images.
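The Gaussian-likelihood idea can be sketched as follows. This is a minimal illustration under the assumption that embeddings are available as plain arrays; it fits a single Gaussian to real-image embeddings and scores generated-image embeddings under it, which is one of the simplest measures in the spirit the abstract describes, not the authors' exact definition.

```python
import numpy as np

def fit_gaussian(E):
    # Fit mean and covariance to the discriminator embeddings of real images.
    mu = E.mean(axis=0)
    cov = np.cov(E, rowvar=False) + 1e-6 * np.eye(E.shape[1])  # regularized
    return mu, cov

def avg_log_likelihood(E, mu, cov):
    # Average Gaussian log-likelihood of a batch of embeddings E.
    d = E.shape[1]
    diff = E - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return float(np.mean(-0.5 * (mahal + logdet + d * np.log(2 * np.pi))))
```

Batches of generated images whose embeddings score a higher average log-likelihood are, under this proxy, closer to the real-image embedding distribution.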
• The "Mahler volume" is, intuitively speaking, a measure of how "round" a centrally symmetric convex body is. In one direction this intuition is given weight by a result of Santalo, who in the 1940s showed that the Mahler volume is maximized, in a given dimension, by the unit sphere and its linear images, and only these. A counterpart to this result in the opposite direction is proposed by a conjecture, formulated by Kurt Mahler in the 1930s and still open in dimensions 4 and greater, asserting that the Mahler volume should be minimized by a cuboid. In this article we present a seemingly new proof of the 2-dimensional case of this conjecture via the probabilistic method. The central idea is to show that either deleting a random pair of edges from a centrally symmetric convex polygon, or deleting a random pair of vertices, reduces the Mahler volume with positive probability.
• List-wise learning to rank methods are considered the state of the art. One major problem with these methods is that they ignore the ambiguous nature of relevance labels in learning to rank data. Ambiguity of relevance labels refers to the phenomenon that multiple documents may be assigned the same relevance label for a given query, so that no preference order should be learned for those documents. In this paper we propose a novel sampling technique for computing a list-wise loss that can take this ambiguity into account. We show the effectiveness of the proposed method by training a 3-layer deep neural network. We compare our new loss function to two strong baselines: ListNet and ListMLE. We show that our method generalizes better and significantly outperforms the other methods on the validation and test sets.
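One way to make the sampling idea concrete is to sample a permutation consistent with the labels, breaking ties uniformly at random, and score it with the Plackett-Luce likelihood that ListMLE uses. This is a sketch of that idea, not the authors' exact loss:

```python
import numpy as np

def listmle_loss(scores, labels, rng):
    # Sample one permutation consistent with the relevance labels (documents
    # sorted by descending label, ties broken uniformly at random), then return
    # the Plackett-Luce negative log-likelihood of that permutation.
    noise = rng.random(len(labels))
    order = np.lexsort((noise, -labels))  # primary: -labels; ties: random noise
    s = scores[order]
    loss = 0.0
    for i in range(len(s)):
        m = s[i:].max()  # log-sum-exp over the suffix, for numerical stability
        loss += m + np.log(np.exp(s[i:] - m).sum()) - s[i]
    return loss
```

Averaging this loss over several sampled permutations means tied documents are never forced into a fixed artificial order, which is exactly the ambiguity the abstract describes.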
• Patterns stored within pre-trained deep neural networks compose large and powerful descriptive languages that can be used for many different purposes. Typically, deep network representations are implemented within vector embedding spaces, which enables the use of traditional machine learning algorithms on top of them. In this short paper we propose the construction of a graph embedding space instead, introducing a methodology to transform the knowledge coded within a deep convolutional network into a topological space (i.e. a network). We outline how such a graph can hold data instances, data features, relations between instances and features, and relations among features. Finally, we present some preliminary experiments illustrating how the resultant graph embedding space can be exploited through graph analytics algorithms.

SHUAI ZHANG Jul 26 2017 00:20 UTC

I am still working on improving this survey. If you have any suggestions, questions or find any mistakes, please do not hesitate to contact me: shuai.zhang@student.unsw.edu.au.

gae Jul 25 2017 23:19 UTC

Dear Marco, that representation does not depend on the specific channel as long as the input and output dimensions are fixed (in DV). Said in other words you may always choose the same representation. Let me remark that the only teleportation channels we know in DVs are: 1) Pauli channels (from dime

...(continued)
Marco Piani Jul 25 2017 22:07 UTC

Thanks gae. I see in Definition 7 of "WH- teleportation channel" in https://arxiv.org/pdf/1706.05384.pdf that

> $V (g)$ is a (generally different) representation of the [Weyl-Heisenberg] group

I take that such a different representation depends on the channel. Thus, I imagine that in general

...(continued)
Alvaro M. Alhambra Jul 24 2017 16:10 UTC

This paper has just been updated and we thought it would be a good
idea to advertise it here. It was originally submitted a year ago, and
it has now been essentially rewritten, with two new authors added.

We have fixed some of the original results and now we:
-Show how some fundamental theorem

...(continued)
gae Jul 21 2017 17:58 UTC

Dear Marco, indeed the description in those two papers is very general because they treat both DV and CV channels. However, things become "easier" and more specific if you restrict things to DVs. In this regard, let me point you at this paper https://arxiv.org/pdf/1706.05384.pdf , in particular to

...(continued)
Marco Piani Jul 21 2017 16:33 UTC

Is it really the case for the general definition of teleportation-covariant channel given in https://arxiv.org/abs/1609.02160 or https://arxiv.org/abs/1510.08863 ? I understand that there special classes of teleportation-covariant channels are considered where what you say holds (that is, for pairs

...(continued)
gae Jul 21 2017 15:51 UTC

If two channels are teleportation-covariant and between Hilbert spaces with the same dimension, then the correction unitaries are exactly the same. For instance, for any pair of Pauli channels (not just a Pauli and the identity), the corrections are Pauli operators.

Marco Piani Jul 21 2017 15:36 UTC

Is it more precisely that the result holds for any pair of *jointly* teleportation-covariant channels? The definition of teleportation-covariant channel (according to what I see in https://arxiv.org/abs/1609.02160 ) is such that the covariance can be achieved with a unitary at the output that depend

...(continued)
gae Jul 21 2017 14:01 UTC

Thx Steve for pointing out this paper too, which is relevant as well. Let me just remark that the PRL mentioned in my previous comment [PRL 118, 100502 (2017), https://arxiv.org/abs/1609.02160 ] finds the result for any pair of teleportation-covariant channels (not just between a Pauli channel and t

...(continued)
Steve Flammia Jul 21 2017 13:43 UTC

Actually, there is even earlier work that shows this result. In [arXiv:1109.6887][1], Magesan, Gambetta, and Emerson showed that for any Pauli channel the diamond distance to the identity is equal to the trace distance between the associated Choi states. They prefer to phrase their results in terms

...(continued)