# Top arXiv papers

• Actuator location and design are important choices in controller design for distributed parameter systems. Semi-linear partial differential equations model a wide spectrum of physical systems with distributed parameters. It is shown that, under certain conditions on the nonlinearity and the cost function, an optimal control input and an optimal actuator choice exist. First-order necessary optimality conditions are derived. The results are applied to optimal actuator location and controller design in a nonlinear railway track model.
• Deriving the energetics of remnant and restarted active galactic nuclei (AGNs) is much more challenging than for active sources because of the difficulty of accurately determining the time since the nucleus switched off. I resolve this problem using a new approach that combines spectral ageing and dynamical models to tightly constrain the energetics and duty cycles of dying sources. Fitting the shape of the integrated radio spectrum yields the fraction of the source's age for which the nucleus was active; this, together with the flux density, source size, axis ratio, and properties of the host environment, constrains dynamical models describing the remnant radio source. This technique is used to derive the intrinsic properties of the well-studied remnant radio source B2 0924+30. This object is found to have spent $50^{+14}_{-12}$ Myr in the active phase and a further $28^{+6}_{-5}$ Myr in the quiescent phase, to have a jet kinetic power of $3.6^{+3.0}_{-1.7}\times 10^{37}$ W, and to have a lobe magnetic field strength below equipartition at the $8\sigma$ level. The integrated spectra of restarted and intermittent radio sources are found to take a 'steep-shallow' shape when the previous outburst occurred within the past 100 Myr. The duty cycle of B2 0924+30 is hence constrained to be $\delta < 0.15$ by fitting the shortest time to the previous comparable outburst that does not appreciably modify the remnant spectrum. The time-averaged feedback energy imparted by AGNs into their host galaxy environments can in this manner be quantified.
• We discuss the detection in the Outer Solar System Origins Survey (OSSOS) of two objects in Neptune's distant 9:1 mean motion resonance at semimajor axis $a\approx 130$~au. Both objects are securely resonant on 10 Myr timescales, with one securely in the 9:1 resonance's leading asymmetric libration island and the other in either the symmetric or trailing asymmetric island. These two objects are the largest semimajor axis objects known with secure resonant classifications, and their detection in a carefully characterized survey allows for the first robust population estimate for a resonance beyond 100~au. The detection of these two objects implies a population in the 9:1 resonance of $1.1\times10^4$ objects with $H_r<8.66$ ($D \gtrsim 100$~km) on similar orbits, with 95\% confidence range of $\sim0.4-3\times10^4$. Integrations over 4 Gyr of an ensemble of clones chosen from within the orbit fit uncertainties for these objects reveal that they both have median resonance occupation timescales of $\sim1$~Gyr. These timescales are consistent with the hypothesis that these two objects originate in the scattering population but became transiently stuck to Neptune's 9:1 resonance within the last $\sim1$~Gyr of solar system evolution. Based on simulations of a model of the current scattering population, we estimate the expected resonance sticking population in the 9:1 resonance to be 1000--5000 objects with $H_r<8.66$; this is marginally consistent with the OSSOS 9:1 population estimate. We conclude that resonance sticking is a plausible explanation for the observed 9:1 population, but we also discuss the possibility of a primordial 9:1 population, which would have interesting implications for the Kuiper belt's dynamical history.
• The superextension $\lambda(X)$ of a set $X$ consists of all maximal linked families on $X$. Any associative binary operation $*: X\times X \to X$ can be extended to an associative binary operation $*: \lambda(X)\times\lambda(X)\to\lambda(X)$. In the paper we study isomorphisms of superextensions of groups and prove that two groups are isomorphic if and only if their superextensions are isomorphic. Also we describe the automorphism groups of superextensions of all groups of cardinality $\leq 5$.
• In this paper we investigate the use of MPC-inspired neural network policies for sequential decision making. We introduce an extension to the DAgger algorithm for training such policies and show that it improves training performance and generalization. We take advantage of this extension to demonstrate scalable and efficient training of complex planning policy architectures in continuous state and action spaces. We provide an extensive comparison of neural network policies, considering feedforward policies, recurrent policies, and recurrent policies with planning structure inspired by the Path Integral control framework. Our results suggest that MPC-type recurrent policies are more robust to disturbances and modeling error.
• Timing guarantees are crucial to cyber-physical applications that must bound the end-to-end delay between sensing, processing and actuation. For example, in a flight controller for a multirotor drone, the data from a gyro or inertial sensor must be gathered and processed to determine the attitude of the aircraft. Sensor data fusion is followed by control decisions that adjust the flight of a drone by altering motor speeds. If the processing pipeline between sensor input and actuation is not bounded, the drone will lose control and possibly fail to maintain flight. Motivated by the implementation of a multithreaded drone flight controller on the Quest RTOS, we develop a composable pipe model based on the system's task, scheduling and communication abstractions. This pipe model is used to analyze two semantics of end-to-end time: reaction time and freshness time. We also argue that end-to-end timing properties should be factored in at the early stage of application design. Thus, we provide a mathematical framework to derive feasible task periods that satisfy both a given set of end-to-end timing constraints and the schedulability requirement. We demonstrate the applicability of our design approach by using it to port the Cleanflight flight controller firmware to Quest on the Intel Aero board. Experiments show that Cleanflight ported to Quest is able to achieve end-to-end latencies within the predicted time bounds derived by analysis.
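The reaction-time semantics above lends itself to a quick back-of-the-envelope bound. The sketch below assumes a simple register-based pipe in which each periodic stage samples its predecessor's output once per period; the stage names and parameters are invented for illustration and are not taken from the Quest/Cleanflight implementation.

```python
def reaction_time_bound(stages):
    """Conservative worst-case reaction time for a sensor-to-actuator pipe
    of periodic tasks: a sensor event can just miss a stage's read, so each
    stage can add up to one full period of waiting plus its own worst-case
    execution time."""
    return sum(period + wcet for period, wcet in stages)

# (period, worst-case execution time) in microseconds for a hypothetical
# 3-stage pipe: gyro sampling -> attitude estimation -> motor mixing.
pipe = [(1000, 100), (2000, 400), (1000, 150)]
print(reaction_time_bound(pipe))  # 4650
```

A real analysis, as in the paper, refines this with the scheduler's response times and distinguishes reaction time from freshness time, but the additive per-stage structure is the same.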
• For the last two decades, high-dimensional data and methods have proliferated throughout the literature. The classical technique of linear regression, however, has not lost its touch in applications. Most high-dimensional estimation techniques can be seen as variable selection tools which lead to a smaller set of variables where classical linear regression technique applies. In this paper, we prove estimation error and linear representation bounds for the linear regression estimator uniformly over (many) subsets of variables. Based on deterministic inequalities, our results provide "good" rates when applied to both independent and dependent data. These results are useful in correctly interpreting the linear regression estimator obtained after exploring the data and also in post model-selection inference. All the results are derived under no model assumptions and are non-asymptotic in nature.
• In recent years, Convolutional Neural Networks (CNNs) have shown remarkable performance in many computer vision tasks such as object recognition and detection. However, complex training issues, such as "catastrophic forgetting" and hyper-parameter tuning, make incremental learning in CNNs a difficult challenge. In this paper, we propose a hierarchical deep neural network, with CNNs at multiple levels, and a corresponding training method for lifelong learning. The network grows in a tree-like manner to accommodate new classes of data without losing the ability to identify previously trained classes. The proposed network was tested on the CIFAR-10 and CIFAR-100 datasets and compared against fine-tuning specific layers of a conventional CNN. We obtained comparable accuracies and achieved 40% and 20% reductions in training effort on CIFAR-10 and CIFAR-100, respectively. The network was able to organize incoming classes of data into feature-driven super-classes. Our model improves upon existing hierarchical CNN models by adding the capability of self-growth and also yields important observations on feature-selective classification.
• Training modern deep learning models requires large amounts of computation, often provided by GPUs. Scaling computation from one GPU to many can enable much faster training and research progress but entails two complications. First, the training library must support inter-GPU communication. Depending on the particular methods employed, this communication may entail anywhere from negligible to significant overhead. Second, the user must modify his or her training code to take advantage of inter-GPU communication. Depending on the training library's API, the modification required may be either significant or minimal. Existing methods for enabling multi-GPU training under the TensorFlow library entail non-negligible communication overhead and require users to heavily modify their model-building code, leading many researchers to avoid the whole mess and stick with slower single-GPU training. In this paper we introduce Horovod, an open source library that improves on both obstructions to scaling: it employs efficient inter-GPU communication via ring reduction and requires only a few lines of modification to user code, enabling faster, easier distributed training in TensorFlow. Horovod is available under the Apache 2.0 license at https://github.com/uber/horovod.
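The ring-reduction pattern at the heart of Horovod can be simulated logically in a few lines. The sketch below models only the data movement (not Horovod's actual MPI/NCCL implementation): each worker's vector is split into chunks, a reduce-scatter phase sums each chunk as it travels around the ring, and an all-gather phase circulates the finished chunks.

```python
def ring_allreduce(vectors):
    """Logical simulation of ring all-reduce over n workers. Every worker
    sends 2*(n-1) chunks in total, so per-worker traffic per byte is
    nearly independent of the number of workers."""
    n = len(vectors)
    size = len(vectors[0])
    bounds = [size * i // n for i in range(n + 1)]
    chunks = [[list(v[bounds[i]:bounds[i + 1]]) for i in range(n)]
              for v in vectors]

    # Reduce-scatter: at step s, worker w forwards chunk (w - s) mod n to
    # its ring neighbour, which adds it in. Sends are snapshotted first to
    # model simultaneous exchange.
    for s in range(n - 1):
        sends = [(w, (w - s) % n, list(chunks[w][(w - s) % n]))
                 for w in range(n)]
        for w, idx, payload in sends:
            dst = (w + 1) % n
            chunks[dst][idx] = [a + b for a, b in zip(chunks[dst][idx], payload)]

    # Now worker w owns the fully reduced chunk (w + 1) mod n.
    # All-gather: circulate the finished chunks around the ring.
    for s in range(n - 1):
        sends = [(w, (w + 1 - s) % n, list(chunks[w][(w + 1 - s) % n]))
                 for w in range(n)]
        for w, idx, payload in sends:
            chunks[(w + 1) % n][idx] = payload

    return [[x for c in chunks[w] for x in c] for w in range(n)]

print(ring_allreduce([[1, 2, 3], [10, 20, 30], [100, 200, 300]])[0])  # [111, 222, 333]
```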
• Detecting anomalous faces has important applications. For example, a system might tell when a train driver is incapacitated by a medical event and assist in adopting a safe recovery strategy. These applications are demanding, because they require accurate detection of rare anomalies that may be seen only at runtime. Such a setting causes supervised methods to perform poorly. We describe a method for detecting an anomalous face image that meets these requirements. We construct a feature vector that reliably has large entries for anomalous images, then use various simple unsupervised methods to score the image based on the feature. Obvious constructions (autoencoder codes; autoencoder residuals) are defeated by a 'peeking' behavior in autoencoders. Our feature construction removes rectangular patches from the image, predicts the likely content of each patch conditioned on the rest of the image using a specially trained autoencoder, then compares the result to the image. High scores suggest that a patch was difficult for the autoencoder to predict, and so is likely anomalous. We demonstrate that our method can identify real anomalous face images in pools of typical images, taken from celeb-A, that are much larger than usual in state-of-the-art experiments. A control experiment, in which another set of normal celebrity images (a 'typical set', but not drawn from celeb-A) is not flagged as anomalous, confirms that this result is not due to special properties of celeb-A.
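The patch-prediction idea can be caricatured in a few lines. The sketch below replaces the paper's specially trained inpainting autoencoder with a trivial mean-of-the-rest predictor, which is enough to show why a hard-to-predict patch earns a high score; the images are invented for illustration.

```python
def patch_anomaly_score(img, top, left, size):
    """Remove a square patch, predict its content from the rest of the
    image, and score by squared prediction error. Here the predictor is a
    trivial mean of the remaining pixels -- a stand-in for the specially
    trained autoencoder used in the paper."""
    h, w = len(img), len(img[0])
    outside = [img[i][j] for i in range(h) for j in range(w)
               if not (top <= i < top + size and left <= j < left + size)]
    prediction = sum(outside) / len(outside)
    return sum((img[i][j] - prediction) ** 2
               for i in range(top, top + size)
               for j in range(left, left + size))

# Invented 6x6 "faces": a typical flat image and one with an odd bright patch.
typical = [[0.1] * 6 for _ in range(6)]
odd = [row[:] for row in typical]
for i in range(2, 4):
    for j in range(2, 4):
        odd[i][j] = 0.9

print(patch_anomaly_score(odd, 2, 2, 2) > patch_anomaly_score(typical, 2, 2, 2))  # True
```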
• Mixed reality (MR) technology is now gaining ground due to advances in computer vision, sensor fusion, and realistic display technologies. With most research and development focused on delivering the promise of MR, only a few efforts address the privacy and security implications of this technology. This survey paper aims to bring these risks to light and to examine the latest security and privacy work on MR. Specifically, we list and review the different protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as a number of works from the larger area of mobile devices, wearables, and the Internet-of-Things (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.
• We develop and extend a method presented in [S. Patinet, D. Vandembroucq, and M. L. Falk, Phys. Rev. Lett., 117, 045501 (2016)] to compute the local yield stresses at the atomic scale in model two-dimensional Lennard-Jones glasses produced via differing quench protocols. This technique allows us to sample the plastic rearrangements in a non-perturbative manner for different loading directions on a well-controlled length scale. Plastic activity upon shearing correlates strongly with the locations of low yield stresses in the quenched states. This correlation is higher in more structurally relaxed systems. The distribution of local yield stresses is also shown to strongly depend on the quench protocol: the more relaxed the glass, the higher the local plastic thresholds. Analysis of the magnitude of local plastic relaxations reveals that stress drops follow exponential distributions, justifying the hypothesis of an average characteristic amplitude often conjectured in mesoscopic or continuum models. The amplitude of the local plastic rearrangements increases on average with the yield stress, regardless of the system preparation. The local yield stress varies with the shear orientation tested and strongly correlates with the plastic rearrangement locations when the system is sheared correspondingly. It is thus argued that plastic rearrangements are the consequence of shear transformation zones encoded in the glass structure that possess weak slip planes along different orientations. Finally, we justify the length scale employed in this work and extract the yield threshold statistics as a function of the size of the probing zones. This method makes it possible to derive physically grounded models of plasticity for amorphous materials by directly revealing the relevant details of the shear transformation zones that mediate this process.
• This paper deals with the problem of linear programming with inexact data represented by real closed intervals. Optimization problems with interval data arise in practical computations, and they have been of theoretical interest for more than forty years. We extend the concept of the duality gap (DG), the difference between the primal and dual optimal values, to interval linear programming. We consider two situations: first, the DG is zero for every realization of the interval parameters (so-called strongly zero DG), and second, the DG is zero for at least one realization of the interval parameters (so-called weakly zero DG). We characterize strongly and weakly zero DG and the special case where the matrix of coefficients is real. We discuss the computational complexity of testing weakly and strongly zero DG for commonly used types of interval linear programs and their variants with a real matrix of coefficients. We distinguish the NP-hard cases from the cases that are efficiently decidable. Based on the DG conditions, we extend previous results of Rohn on the bounds of the optimal value set, and we provide equivalent statements for these bounds.
• We survey results concerning the spectral properties of limit-periodic operators. The main focus is on discrete one-dimensional Schrödinger operators, but other classes of operators, such as Jacobi and CMV matrices, continuum Schrödinger operators and multi-dimensional Schrödinger operators, are discussed as well. We explain that each basic spectral type occurs, and it does so for a dense set of limit-periodic potentials. The spectrum has a strong tendency to be a Cantor set, but there are also cases where the spectrum has no gaps at all. The possible regularity properties of the integrated density of states range from extremely irregular to extremely regular. Additionally, we present background about periodic Schrödinger operators and almost-periodic sequences. In many cases we outline the proofs of the results we present.
• This note deals with the operator $T^*T$, where $T$ is a densely defined operator on a complex Hilbert space. We reprove a recent result of Z. Sebestyén and Z. Tarcsay [11]: if $T^*T$ and $TT^*$ are self-adjoint, then $T$ is closed. In addition, we describe the Friedrichs extension of $S^2$, where $S$ is a symmetric operator. (Feb 19 2018, math.SP, arXiv:1802.05793v1)
• Deep neural network architectures designed for application domains other than sound, especially image recognition, may not optimally harness the time-frequency representation when adapted to the sound recognition problem. In this work, we explore the ConditionaL Neural Network (CLNN) and the Masked ConditionaL Neural Network (MCLNN) for multi-dimensional temporal signal recognition. The CLNN considers the inter-frame relationship, and the MCLNN enforces a systematic sparseness over the network's links to enable learning in frequency bands rather than bins allowing the network to be frequency shift invariant mimicking a filterbank. The mask also allows considering several combinations of features concurrently, which is usually handcrafted through exhaustive manual search. We applied the MCLNN to the environmental sound recognition problem using the ESC-10 and ESC-50 datasets. MCLNN achieved competitive performance, using 12% of the parameters and without augmentation, compared to state-of-the-art Convolutional Neural Networks.
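The systematic sparseness can be pictured as a binary mask that ties each hidden unit to one contiguous band of frequency bins. The sketch below builds a generic band-limiting mask; the wrap-around layout and parameter names are illustrative, not the exact MCLNN masking pattern defined in the paper.

```python
def band_mask(n_in, n_out, bandwidth, stride):
    """Binary mask tying each hidden unit (column) to one contiguous band
    of input frequency bins (rows), wrapping at the edges -- a sketch of
    the systematic sparseness idea behind the MCLNN."""
    mask = [[0] * n_out for _ in range(n_in)]
    for j in range(n_out):
        start = (j * stride) % n_in
        for k in range(bandwidth):
            mask[(start + k) % n_in][j] = 1
    return mask

# 8 frequency bins, 4 hidden units, each unit sees a band of 3 bins;
# successive units are shifted by 2 bins, so neighbouring bands overlap.
for row in band_mask(8, 4, 3, 2):
    print(row)
```

Elementwise-multiplying such a mask into a dense weight matrix restricts each unit to a frequency band, which is what makes the layer behave like a filterbank and tolerate frequency shifts.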
• Given a set $W = \{w_1,\ldots, w_n\}$ of non-negative integer weights and an integer $C$, the #Knapsack problem asks to count the number of distinct subsets of $W$ whose total weight is at most $C$. In the more general integer version of the problem, the subsets are multisets. That is, we are also given a set $\{u_1,\ldots, u_n\}$ and we are allowed to take up to $u_i$ items of weight $w_i$. We present a deterministic FPTAS for #Knapsack running in $O(n^{2.5}\varepsilon^{-1.5}\log(n \varepsilon^{-1})\log (n \varepsilon))$ time. The previous best deterministic algorithm [FOCS 2011] runs in $O(n^3 \varepsilon^{-1} \log(n\varepsilon^{-1}))$ time (see also [ESA 2014] for a logarithmic factor improvement). The previous best randomized algorithm [STOC 2003] runs in $O(n^{2.5} \sqrt{\log (n\varepsilon^{-1}) } + \varepsilon^{-2} n^2 )$ time. Therefore, in the natural setting of constant $\varepsilon$, we close the gap between the $\tilde O(n^{2.5})$ randomized algorithm and the $\tilde O(n^3)$ deterministic algorithm. For the integer version with $U = \max_i \{u_i\}$, we present a deterministic FPTAS running in $O(n^{2.5}\varepsilon^{-1.5}\log(n\varepsilon^{-1} \log U)\log (n \varepsilon) \log^2 U)$ time. The previous best deterministic algorithm [APPROX 2016] runs in $O(n^3\varepsilon^{-1}\log(n \varepsilon^{-1} \log U) \log^2 U)$ time. (Feb 19 2018, cs.DS, arXiv:1802.05791v1)
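For intuition about what is being approximated: exact #Knapsack counting is a textbook pseudo-polynomial dynamic program whose $O(nC)$ cost blows up when $C$ is large, which is precisely what the paper's FPTAS avoids. A minimal exact counter:

```python
def count_knapsack(weights, capacity):
    """Exact #Knapsack by dynamic programming: dp[c] holds the number of
    subsets of the items processed so far whose total weight is exactly c.
    Runs in O(n * C) time -- fine for small C, but pseudo-polynomial,
    hence the need for the FPTAS when C is large."""
    dp = [0] * (capacity + 1)
    dp[0] = 1  # the empty subset
    for w in weights:
        # iterate downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)

print(count_knapsack([1, 2, 3], 3))  # 5 subsets: {}, {1}, {2}, {3}, {1,2}
```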
• We report on an OAM-enhanced scheme for angular displacement estimation based on two-mode squeezed vacuum and parity detection. Sub-Heisenberg-limited sensitivity for angular displacement estimation is obtained in the ideal situation. Several realistic factors are also considered, including photon loss, dark counts, response-time delay, and thermal photon noise. Our results indicate that the effects of these realistic factors on the sensitivity can be offset by raising the OAM quantum number $\ell$. This shows that the robustness and practicality of the system can be improved by raising $\ell$ without changing the mean photon number $N$.
• Direct carbon fuel cells (DCFCs) are highly efficient power generators fueled by abundant and cheap solid carbon. However, the limited formation of triple phase boundaries (TPBs) within fuel electrodes inhibits their performance, even at high temperatures, due to mass-transfer limitations; it also results in low direct utilization of the fuel. To address the challenges of low carbon oxidation activity and low carbon utilization simultaneously, a highly efficient 3-D solid-state architected anode has been developed to enhance the performance of DCFCs below 600 °C. Cells with the 3-D textile anode, a Gd:CeO2-Li/Na2CO3 composite electrolyte, and an Sm0.5Sr0.5CoO3 (SSC) cathode demonstrated excellent performance, with maximum power densities of 143, 196, and 325 mW cm-2 at 500, 550, and 600 °C, respectively. At 500 °C, the cells could be operated steadily at a rated power density of ~0.13 W cm-2 and a constant current density of 0.15 A cm-2, with a carbon utilization over 86%. The significant improvement in cell performance at such temperatures is attributed to the synergistic conduction of the composite electrolyte and the superior 3-D anode structure, which offers more paths for catalytic carbon oxidation. Our results demonstrate the feasibility of direct electrochemical oxidation of solid carbon at 500-600 °C with high carbon utilization, representing a promising strategy for developing 3-D architected electrodes for fuel cells and other electrochemical devices.
• In this paper we first define the complete maximal $S$-sets, which are central to showing that not every element $X$ of $\Z_{2}^{n}$ is a row of a Hadamard matrix. Next, we study the structure of Schur rings with circulant basic sets over $\Z_{2}^{n}$ and define the free $S$-sets and the symmetric $S$-sets. We prove that these $S$-sets are invariant under decimation. In addition, we find a condition under which the mapping that associates an autocorrelation vector with each circulant basic set is injective. Finally, we prove that if a circulant Hadamard matrix exists, it cannot be symmetric. We conclude by showing the importance of studying the invariants under decimation of the Schur ring $\Z_{2C}^{n}$ for proving conjectures about circulant Hadamard matrices and Hadamard matrices with one circulant core.
• Spin-orbit torque magnetic random access memory (SOT-MRAM) has become a research focus due to its energy-efficient switching and long endurance. The write operation is performed via the SOT induced by the heavy metal (HM) layer, while the read operation is based on the tunneling magnetoresistance (TMR) effect. To explore the effect of the HM layers on the TMR, we built top HM/CoFe/MgO/CoFe/bottom HM heterojunctions and investigated their TMR characteristics by first-principles calculations. We find that the TMR is enhanced by HM layer symmetry: the TMRs of W/CoFe/MgO/CoFe/W and Ta/CoFe/MgO/CoFe/Ta are higher than that of Ta/CoFe/MgO/CoFe/W. This phenomenon is attributed to the behavior of the density of scattering states (DOSS) and the resonant tunneling effect in the parallel configuration. We also studied the influence on the TMR of varying the bottom HM layer thickness. In Ta/W/CoFe/MgO/CoFe/W/Ta SOT-MTJs, TMR ratios oscillate with the bottom tungsten layer thickness, while all TMRs remain high. This result indicates that HM layer symmetry, not thickness, dominates the TMR. Our investigation presents a method to enhance the TMR in spin-orbit torque magnetic tunnel junctions (SOT-MTJs), which would benefit high-reliability and low-energy-consumption SOT-MRAM.
• In the modern era, abundant information is easily accessible from various sources; however, only a few of these sources are reliable, as most contain unverified content. We develop a system to validate the truthfulness of a given statement together with underlying evidence. The proposed system provides supporting evidence when the statement is tagged as false. Our work relies on an inference method over a knowledge graph (KG) to determine the truthfulness of statements. To extract the evidence of falseness, the proposed algorithm takes into account combined knowledge from the KG and ontologies. The system shows very good results, providing valid and concise evidence. The quality of the KG affects the performance of the inference method, which in turn affects the performance of our evidence-extracting algorithm. (Feb 19 2018, cs.AI stat.ML, arXiv:1802.05786v1)
• Let $X$ and $Y$ be finite complexes. When $Y$ is a nilpotent space, it has a rationalization $Y \to Y_{(0)}$ which is well-understood. Sullivan showed that the induced map $[X,Y] \to [X,Y_{(0)}]$ on sets of mapping classes is finite-to-one. The sizes of the preimages need not be bounded; we show, however, that as the complexity (in a suitable sense) of a rational mapping class increases, these sizes are polynomially bounded. This "torsion" information about $[X,Y]$ is in some sense orthogonal to rational homotopy theory but is nevertheless an invariant of the rational homotopy type of $Y$ in at least some cases. The notion of complexity is geometric and we also prove a conjecture of Gromov regarding the number of mapping classes that have Lipschitz constant at most $L$.
• We propose a new superresolution imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the $\ell_1$-norm and a new function named Total Squared Variation (TSV) of the brightness distribution. TSV is an edge-smoothing variant of Total Variation (TV) that penalizes the sum of squared gradients. First, we demonstrate that our technique may achieve superresolution of $\sim 30$% compared to the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of the proposed techniques in greater detail. We find that $\ell_1$+TSV regularization outperforms $\ell_1$+TV regularization with the popular isotropic TV term and the Cotton-Schwab CLEAN algorithm, demonstrating that TSV is well-matched to the expected physical properties of the astronomical images, which are often nebulous. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is $\lesssim 10$% of the traditional CLEAN beam size for $\ell_1$+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required for $\ell_1$+TSV regularization. We also propose a feature extraction method to detect circular features from the image of a black hole shadow with the circle Hough transform (CHT) and use it to evaluate the performance of the image reconstruction. With our imaging technique and the CHT, the EHT can constrain the radius of the black hole shadow with an accuracy of $\sim 10-20$% in present simulations for Sgr A*.
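The distinction between TV and TSV is easy to see numerically. The toy sketch below (forward finite differences on a small pixel grid, illustrative only) shows that TV charges a sharp edge and a smooth ramp with the same total rise nearly equally, while TSV penalizes the edge far more, which is why TSV favors edge-smoothed, nebulous structure.

```python
def gradients(img):
    """Forward finite differences of a 2-D list-of-lists image."""
    h, w = len(img), len(img[0])
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                yield img[i][j + 1] - img[i][j]
            if i + 1 < h:
                yield img[i + 1][j] - img[i][j]

def total_variation(img):
    """TV: sum of absolute gradients."""
    return sum(abs(g) for g in gradients(img))

def total_squared_variation(img):
    """TSV: sum of squared gradients, the edge-smoothing variant."""
    return sum(g * g for g in gradients(img))

edge = [[0.0, 0.0, 1.0, 1.0]] * 2        # sharp unit step
ramp = [[0.0, 1 / 3, 2 / 3, 1.0]] * 2    # smooth unit ramp
# Both have (essentially) the same TV, but TSV singles out the edge.
print(total_squared_variation(edge) > total_squared_variation(ramp))  # True
```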
• We reinterpret the Thouless-Anderson-Palmer approach to mean field spin glass models as a variational principle in the spirit of the Gibbs variational principle and the Bragg-Williams approximation. We prove this TAP-Plefka variational principle rigorously in the case of the spherical Sherrington-Kirkpatrick model.
• The recent wide recognition of the existence of neutrino oscillations concludes the pioneering stage of these studies and poses the problem of how to communicate effectively the basic aspects of this branch of science. In fact, the phenomenon of neutrino oscillations has peculiar features and requires mastering some specific ideas and a certain amount of formalism. The main aim of these introductory notes is precisely to cover these aspects, in order to allow interested students to appreciate the modern developments and possibly begin to do research in neutrino oscillations.
• The concentration of biochemical oxygen demand, BOD5, was studied in order to evaluate the water quality of Igapó I Lake in Londrina, Paraná State, Brazil. The simulation was conducted by discretizing the geometry of Igapó I Lake in curvilinear coordinates and applying finite difference and finite element methods. The proposed numerical water quality model was evaluated by comparing experimental values of BOD5 with the numerical results; the evaluation showed quantitative results compatible with the actual behavior of Igapó I Lake for the simulated parameter. Qualitative analysis of the numerical simulations provided a better understanding of the dynamics of the BOD5 concentration in Igapó I Lake, showing that concentrations in the central regions of the lake exceed the values allowed by Brazilian law. The results can help guide choices by public officials, such as: (i) improving the mechanisms for identifying pollutant emitters on Igapó I Lake, (ii) contributing to the optimal treatment and recovery of the polluted environment, and (iii) providing a better quality of life for regular visitors to the lake as well as for residents living on the lakeside.
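A one-dimensional caricature shows the two ingredients of such a model, transport and first-order BOD decay. This is a generic explicit upwind finite-difference sketch with made-up parameters, not the paper's curvilinear 2-D discretization.

```python
def bod_decay_advection(c0, u, k, dx, dt, steps):
    """Explicit upwind finite differences for 1-D transport with
    first-order BOD decay, dC/dt = -u dC/dx - k C. Stable here because
    dt * (u / dx + k) < 1; all parameters are illustrative."""
    c = list(c0)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c)):
            advection = -u * (c[i] - c[i - 1]) / dx
            new[i] = c[i] + dt * (advection - k * c[i])
        new[0] = c[0]  # fixed upstream (inflow) concentration
        c = new
    return c

# A uniform BOD5-like profile relaxes toward a decaying downstream profile.
profile = bod_decay_advection([1.0] * 10, u=0.5, k=0.1, dx=1.0, dt=0.5, steps=20)
print(all(0.0 < x <= 1.0 for x in profile))  # True
```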
• This study explores the performance of modern, accurate machine learning algorithms on the classification of fossil teeth in the family Bovidae. Isolated bovid teeth are typically the most common fossils found in southern Africa, and they often constitute the basis for paleoenvironmental reconstructions. Taxonomic identification of fossil bovid teeth, however, is often imprecise and subjective. Using modern teeth of known taxa, machine learning algorithms can be trained to classify fossils. Previous work by Brophy et al. (2014) uses elliptical Fourier analysis of the form (size and shape) of the outline of the occlusal surface of each tooth as features in a linear discriminant analysis framework. This manuscript expands on that work by exploring how different machine learning approaches classify the teeth and testing which technique is best for classification. Five machine learning techniques were evaluated: linear discriminant analysis, neural networks, nuclear penalized multinomial regression, random forests, and support vector machines. Support vector machines and random forests perform best in terms of both log-loss and misclassification rate; both methods improve on linear discriminant analysis. With the identification and application of these superior methods, bovid teeth can be classified with higher accuracy.
• We establish uniform a-priori bounds for solutions of the quasilinear problem $-\Delta_Nu=f(u)$ in $\Omega$, with $u=0$ on $\partial\Omega$, where $\Omega\subset\mathbb{R}^N$ is a bounded smooth and convex domain, and $f$ is a positive superlinear and subcritical function in the sense of the Trudinger-Moser inequality. The typical growth of $f$ is thus exponential. Finally, a generalization of the result for nonhomogeneous nonlinearities is given. Using a blow-up approach, this paper completes the results in [Damascelli-Pardo, Nonlinear Anal. Real World Appl. 41 (2018)] and [Lorca-Ruf-Ubilla, J. Differential Equations 246 no. 5 (2009)], enlarging the class of nonlinearities for which the uniform a-priori bound applies.
• This paper studies the asymptotic performance of maximum-a-posteriori estimation in the presence of prior information. The problem arises in several applications such as recovery of signals with non-uniform sparsity pattern from underdetermined measurements. With prior information, the maximum-a-posteriori estimator might have asymmetric penalty. We consider a generic form of this estimator and study its performance via the replica method. Our analyses demonstrate an asymmetric form of the decoupling property in the large-system limit. Employing our results, we further investigate the performance of weighted zero-norm minimization for recovery of a non-uniform sparse signal. Our investigations illustrate that for a given distortion, the minimum number of required measurements can be significantly reduced by choosing weighting coefficients optimally.
• Evacuation planning is an important and challenging element in emergency management due to the high level of uncertainty and the numerous players and agencies involved in the event. To address all the factors with conflicting objectives, mathematical modeling has gained extensive application across all aspects of evacuation planning, helping responders and policy makers evaluate the time required for evacuation and estimate the numbers and distribution of casualties under different disaster scenarios. Correspondingly, the mathematical formulation of evacuation optimization problems and their solution methods are important when planning for evacuation. In this paper, a bi-level programming formulation of the shelter location-allocation problem is considered. To account for stochasticity, a scenario-based approach is taken to address the uncertainty in the population to be evacuated from a small town in the Lombardy region, Italy. A genetic algorithm is used as the solution method. Four scenarios are considered to study the optimal number and location of shelters for evacuation during normal weekdays, at night, during weekends, and during vacation times with visiting travelers. The results highlight how different scenarios require different numbers and locations of shelters for an optimal evacuation.
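The genetic-algorithm step can be sketched as follows. Everything here is illustrative, not the paper's Lombardy case-study data: a chromosome is a set of k open shelter sites, fitness is the total travel distance when each evacuee uses the nearest open shelter.

```python
# Hypothetical GA sketch for shelter location-allocation; candidate sites,
# evacuee origins, and GA parameters are all illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, k, pop_size, n_gens = 30, 4, 40, 60
sites = rng.uniform(0, 10, size=(n_candidates, 2))   # candidate shelter sites
homes = rng.uniform(0, 10, size=(200, 2))            # evacuee origins

def cost(chromosome):
    # Total travel distance when each evacuee uses the nearest open shelter.
    open_sites = sites[chromosome]
    d = np.linalg.norm(homes[:, None, :] - open_sites[None, :, :], axis=2)
    return d.min(axis=1).sum()

# Each individual is a set of k distinct candidate indices.
pop = [rng.choice(n_candidates, size=k, replace=False) for _ in range(pop_size)]
for _ in range(n_gens):
    pop.sort(key=cost)
    survivors = pop[: pop_size // 2]                 # elitist selection
    children = []
    for _ in range(pop_size - len(survivors)):
        a, b = rng.choice(len(survivors), size=2, replace=False)
        genes = np.union1d(survivors[a], survivors[b])   # crossover: gene pool
        child = rng.choice(genes, size=k, replace=False)
        if rng.random() < 0.2:                           # mutation: swap a site
            outside = np.setdiff1d(np.arange(n_candidates), child)
            child[rng.integers(k)] = rng.choice(outside)
        children.append(child)
    pop = survivors + children

best = min(pop, key=cost)
```

Running this once per demand scenario (weekday, night, weekend, vacation populations) yields the scenario-specific shelter configurations the paper compares.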
• The distribution of fitness effects for mutations is often believed to be key to predicting microbial evolution. However, fitness effects alone may be insufficient to predict evolutionary dynamics if mutations produce nontrivial ecological effects that depend on the composition of the population. Here we show that variation in multiple growth traits, such as lag times and growth rates, creates higher-order effects such that the relative competition between two strains is fundamentally altered by the presence of a third strain. These effects produce a range of ecological phenomena: an unlimited number of strains can coexist, potentially with a single keystone strain stabilizing the community; strains that coexist in pairs do not coexist all together; and the champion of all pairwise competitions may not dominate in a mixed community. This occurs with competition for only a single finite resource and no other interactions. Since variation in multiple growth traits is ubiquitous in microbial populations due to pleiotropy and non-genetic variation, these higher-order effects may also be widespread, especially in laboratory ecology and evolution experiments. Our results underscore the importance of considering the distribution of ecological effects from mutations in predicting microbial evolution.
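A toy batch-culture model in this spirit (with illustrative parameters, not the paper's model details) makes the higher-order effect concrete: strains grow exponentially after a lag, share one finite resource, and the 1-vs-2 outcome shifts when strain 3 is added.

```python
# Toy batch-culture model: strains grow exponentially after a lag and share
# one finite resource, consumed one unit per new cell; growth stops when
# the resource runs out. Parameters are illustrative.
import numpy as np

def final_sizes(lags, rates, n0, resource):
    target = n0.sum() + resource          # total population at saturation

    def total_at(t):
        return (n0 * np.exp(rates * np.clip(t - lags, 0.0, None))).sum()

    lo, hi = 0.0, 100.0                   # bisect for the saturation time
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_at(mid) < target else (lo, mid)
    t_sat = 0.5 * (lo + hi)
    return n0 * np.exp(rates * np.clip(t_sat - lags, 0.0, None))

lags = np.array([0.0, 1.0, 2.0])          # strain lag times
rates = np.array([0.8, 1.0, 1.2])         # strain growth rates
n0 = np.ones(3)

trio = final_sizes(lags, rates, n0, resource=1e4)
pair = final_sizes(lags[:2], rates[:2], n0[:2], resource=1e4)
# The 1-vs-2 ratio changes when strain 3 is present: a higher-order effect.
print(pair[0] / pair[1], trio[0] / trio[1])
```

The third strain changes the time at which the resource is exhausted, and because strains 1 and 2 trade off lag against growth rate, their relative abundance at that moment shifts, even though no pairwise interaction parameter changed.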
• Quantum key distribution is on the verge of real world applications, where perfectly secure information can be distributed among multiple parties. Several quantum cryptographic protocols have been theoretically proposed and independently realized in different experimental conditions. Here, we develop an experimental platform based on high-dimensional orbital angular momentum states of single photons that enables implementation of multiple quantum key distribution protocols with a single experimental apparatus. Our versatile approach allows us to experimentally survey different classes of quantum key distribution techniques, such as the 1984 Bennett & Brassard (BB84), tomographic protocols including the six-state and the Singapore protocol, and a recently introduced differential phase shift (Chau15) protocol. This enables us to experimentally compare the performance of these techniques and discuss their benefits and deficiencies in terms of noise tolerance in different dimensions. Our analysis gives an overview of the available quantum key distribution protocols for photonic orbital angular momentum and highlights the benefits of the presented schemes for different implementations and channel conditions.
• Let $\mathscr J$ be the set of inner functions whose derivative lies in the Nevanlinna class. It is natural to record the critical structure of $F \in \mathscr J$ by the inner part of its derivative. In this paper, we discuss a natural topology on $\mathscr J$ where $F_n \to F$ if the critical structures of $F_n$ converge to the critical structure of $F$. We show that this occurs precisely when the critical structures of the $F_n$ are uniformly concentrated on Korenblum stars. Building on the works of Korenblum and Roberts, we show that this topology also governs the behaviour of invariant subspaces of a weighted Bergman space which are generated by a single inner function. (Feb 19 2018, math.CV, arXiv:1802.05772v1)
• We compute the full classical 4d scalar potential of type IIA Calabi-Yau orientifolds in the presence of fluxes and D6-branes. We show that it can be written as a bilinear form $V = Z^{AB} \rho_A\rho_B$, where the $\rho_A$ are in one-to-one correspondence with the 4-form fluxes of the 4d effective theory. The $\rho_A$ only depend on the internal fluxes, the axions and the topological data of the compactification, and are fully determined by the Freed-Witten anomalies of branes that appear as 4d string defects. The quadratic form $Z^{AB}$ only depends on the saxionic partners of these axions. In general, the $\rho_A$ can be seen as the basic invariants under the discrete shift symmetries of the 4d effective theory, and therefore the building blocks of any flux-dependent quantity. All these polynomials may be obtained by derivation from one of them, associated to a universal 4-form. The standard N=1 supergravity flux superpotential is uniquely determined from this *master polynomial*, and vice versa.
• Menasco showed that a non-split, prime, alternating link that is not a 2-braid is hyperbolic in $S^3$. We prove a similar result for links in thickened closed surfaces $S \times I$. We define a link to be fully alternating if it has an alternating projection from $S\times I$ to $S$ where the interior of every complementary region is an open disk. We show that a prime, fully alternating link in $S\times I$ is hyperbolic. As Menasco did, we also give an easy way to determine primeness in $S\times I$: a fully alternating link is prime in $S\times I$ if and only if it is "obviously prime". Furthermore, we extend our result to show that a prime link with a fully alternating projection to an essential surface embedded in an orientable, hyperbolic 3-manifold has a hyperbolic complement.
• On 2018-01-17, two electron crystallography structures (PDB entries 6AXZ and 6BTK) of a prion protofibril of bank vole PrP(168-176) (a segment in the PrP $\beta$2-$\alpha$2 loop) were released in the Protein Data Bank. The accompanying paper [Nat Struct Mol Biol 25(2):131-134 (2018)] reports some polar clasps for these two crystal structures, and "an intersheet hydrogen bond between Tyr169 and the backbone carbonyl of Asn171 on an opposing strand." However, by revisiting the polar clasps, we cannot confirm this very important intersheet hydrogen bond; instead, we find another hydrogen bond (Asn171:H-Gln172:OE1, between the strand of one sheet and the opposing strand of the mating sheet) that replaces it.
• We demonstrate that framing, a subjective aspect of news, is a causal precursor to both significant public perception changes and federal legislation. We posit, counter-intuitively, that topic news volume and mean article similarity increase and decrease together. We show that specific features of news, such as publishing volume, are predictive of both sustained public attention, measured by annual Google trend data, and federal legislation. We observe that public attention changes are driven primarily by periods of high news volume and mean similarity, which we call *prenatal* periods. Finally, we demonstrate that framing during prenatal periods may be characterized by high-utility news *keywords*.
• We give an analog of a Chevalley-Serre presentation for the Lie superalgebras $W(n)$ and $S(n)$ of Cartan type. These are part of a wider class of Lie superalgebras, the so-called tensor hierarchy algebras, denoted $W(\mathfrak{g})$ and $S(\mathfrak{g})$, where $\mathfrak{g}$ denotes the Kac-Moody algebra $A_r$, $D_r$ or $E_r$. Then $W(A_{n-1})$ and $S(A_{n-1})$ are the Lie superalgebras $W(n)$ and $S(n)$. The algebras $W(\mathfrak{g})$ and $S(\mathfrak{g})$ are constructed from the Dynkin diagram of the Borcherds-Kac-Moody superalgebra $B(\mathfrak{g})$ obtained by adding a single grey node (representing an odd null root) to the Dynkin diagram of $\mathfrak{g}$. We redefine the algebras $W(A_r)$ and $S(A_r)$ in terms of Chevalley generators and defining relations. We prove that all relations follow from the defining ones at level $-2$ and higher. The analogous definitions of the algebras in the $D$- and $E$-series are given. In the latter case the full set of defining relations is conjectured.
• Visual Question Answering (VQA) models have struggled with counting objects in natural images so far. We identify a fundamental problem due to soft attention in these models as a cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component gives a substantial improvement in counting over a strong baseline by 6.6%.
• Magnetic fields play an important role during star formation. Direct magnetic field strength observations have proven specifically challenging in the extremely dynamic protostellar phase. Because of their occurrence in the densest parts of star forming regions, masers, through polarization observations, are the main source of magnetic field strength and morphology measurements around protostars. Of all maser species, methanol is one of the strongest and most abundant tracers of gas around high-mass protostellar disks and in outflows. However, as experimental determination of the magnetic characteristics of methanol has remained largely unsuccessful, a robust magnetic field strength analysis of these regions could hitherto not be performed. Here we report a quantitative theoretical model of the magnetic properties of methanol, including the complicated hyperfine structure that results from its internal rotation. We show that the large range in values of the Landé g-factors of the hyperfine components of each maser line leads to conclusions which differ substantially from the current interpretation based on a single effective g-factor. These conclusions are more consistent with other observations and confirm the presence of dynamically important magnetic fields around protostars. Additionally, our calculations show that (non-linear) Zeeman effects must be taken into account to further enhance the accuracy of cosmological electron-to-proton mass ratio determinations using methanol.
• News has traditionally been well researched, with studies ranging from sentiment analysis to event detection and topic tracking. We extend the focus to two surprisingly under-researched aspects of news: *framing* and *predictive utility*. We demonstrate that framing influences public opinion and behavior, and present a simple entropic algorithm to characterize and detect framing changes. We introduce a dataset of news topics with framing changes, harvested from manual surveys in previous research. Our approach achieves an F-measure of $F_1=0.96$ on our data, whereas dynamic topic modeling returns $F_1=0.1$. We also establish that news has *predictive utility*, by showing that legislation in topics of current interest can be foreshadowed and predicted from news patterns.
• In this paper, we present and compare functional and spatio-temporal (Sp.T.) kriging approaches to predict spatial functional random processes (which can also be viewed as Sp.T. random processes). Comparisons with respect to computational time and prediction performance via functional cross-validation are evaluated, mainly through a simulation study but also on two real data sets. We restrict the comparisons to Sp.T. kriging versus ordinary kriging for functional data (OKFD), since the more flexible functional kriging approaches, pointwise functional kriging (PWFK) and the functional kriging total model, coincide with OKFD in several situations. We contribute new knowledge by proving that OKFD and PWFK coincide under certain conditions. From the simulation study, it is concluded that the prediction performance of the two kriging approaches is generally comparable for stationary Sp.T. processes, with a tendency for functional kriging to work better for small sample sizes and Sp.T. kriging to work better for large sample sizes. For non-stationary Sp.T. processes, with a common deterministic time trend and/or time-varying variances and dependence structure, OKFD performs better than Sp.T. kriging irrespective of sample size. For all simulated cases, the computational time for OKFD was considerably lower than those for the Sp.T. kriging methods.
• Upon an expansion of all of the searches for redshifted HI 21-cm absorption (0.0021 < z < 5.19), we update recent results regarding the detection of 21-cm absorption in the non-local Universe. Specifically, we confirm that photo-ionisation of the gas is the most likely cause of the low detection rate at high redshift, in addition to finding that at z < 0.1 there may also be a decrease in the detection rate, which we suggest is due to the dilution of the absorption strength by 21-cm emission. By assuming that associated and intervening absorbers have similar cosmological mass densities, we find evidence that the spin temperature of the gas evolves with redshift, consistent with heating by ultra-violet photons. From the near-infrared (3.4, 4.6 and 12 micron) colours, we see that radio galaxies become more quasar-like in their activity with increasing redshift. We also find that the non-detection of 21-cm absorption at high redshift is unlikely to be due to the selection of gas-poor ellipticals, and we find a strong correlation between the ionising photon rate and the [3.4] - [4.6] colour, indicating that the UV photons arise from AGN activity. Like previous studies, we find a correlation between the detection of 21-cm absorption and the [4.6] - [12] colour, which is a tracer of star-forming activity. However, this only applies at the lowest redshifts (z < 0.1), the range considered by the other studies.
• Bivariate matrix functions provide a unified framework for various tasks in numerical linear algebra, including the solution of linear matrix equations and the application of the Fréchet derivative. In this work, we propose a novel tensorized Krylov subspace method for approximating such bivariate matrix functions and analyze its convergence. While this method is already known for some instances, our analysis appears to result in new convergence estimates and insights for all but one instance, Sylvester matrix equations.
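As a point of reference for one of the instances mentioned: a Sylvester equation $AX + XB = C$ corresponds to the bivariate function $f(x,y) = 1/(x+y)$ applied to $(A, B)$ against $C$. A dense baseline that a tensorized Krylov method would be compared against is SciPy's solver; the matrices below are random and shifted so the spectra of $A$ and $-B$ stay disjoint, which guarantees a unique solution.

```python
# Reference point, not the paper's method: solve the Sylvester equation
# AX + XB = C with SciPy's dense (Bartels-Stewart) solver. The +5I shift
# keeps the spectra of A and -B disjoint, so the solution is unique.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
B = rng.standard_normal((5, 5)) + 5 * np.eye(5)
C = rng.standard_normal((5, 5))

X = solve_sylvester(A, B, C)
residual = np.linalg.norm(A @ X + X @ B - C)
```

A Krylov method becomes attractive when $A$ and $B$ are large and sparse, where a dense solve like this is infeasible.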
• Argument mining is a core technology for automating argument search in large document collections. Despite its usefulness for this task, most current approaches to argument mining are designed for use only with specific text types and fall short when applied to heterogeneous texts. In this paper, we propose a new sentential annotation scheme that is reliably applicable by crowd workers to arbitrary Web texts. We source annotations for over 25,000 instances covering eight controversial topics. The results of cross-topic experiments show that our attention-based neural network generalizes best to unseen topics and outperforms vanilla BiLSTM models by 6% in accuracy and 11% in F-score.
• We present a stochastic algorithm to compute the barycenter of a set of probability distributions under the Wasserstein metric from optimal transport. Unlike previous approaches, our method extends to continuous input distributions and allows the support of the barycenter to be adjusted in each iteration. We tackle the problem without regularization, allowing us to recover a sharp output whose support is contained within the support of the true barycenter. We give examples where our algorithm recovers a more meaningful barycenter than previous work. Our method is versatile and can be extended to applications such as generating super samples from a given distribution and recovering blue noise approximations.
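For context (this is the classical one-dimensional special case, not the paper's stochastic algorithm): in 1-D the 2-Wasserstein barycenter has a closed form as the weighted average of the input quantile functions, which gives a cheap sanity check for barycenter code.

```python
# Classical 1-D special case: the 2-Wasserstein barycenter of 1-D
# distributions is the weighted average of their quantile functions.
import numpy as np

def barycenter_1d(samples_list, weights, n_support=200):
    qs = np.linspace(0.0, 1.0, n_support)
    quantiles = np.stack([np.quantile(s, qs) for s in samples_list])
    return weights @ quantiles           # weighted quantile average

rng = np.random.default_rng(0)
a = rng.normal(-2.0, 1.0, size=5000)
b = rng.normal(+2.0, 1.0, size=5000)
# The barycenter of N(-2, 1) and N(+2, 1) with equal weights is N(0, 1).
bary = barycenter_1d([a, b], weights=np.array([0.5, 0.5]))
```

In higher dimensions no such closed form exists in general, which is where stochastic, regularization-free approaches like the one above come in.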
• Capillary wireless sensor networks devoted to air quality monitoring have provided vital information on dangerous air conditions. When environmentally harvested energy is adopted as the fundamental power source, the main challenge is implementing capillary networks that do not require battery replacement at set intervals, which otherwise leads to a device-management dilemma and high costs. In this paper we present a battery-less, self-governing, multi-parametric sensing platform for air quality monitoring that harvests energy from the environment for long-term operation. The sensor section and its characterization results are also described. A customized calibration process to check the sensors' sensitivity, together with a basic portfolio of energy sources feeding the power recovery section, can productively improve air quality monitoring in indoor and outdoor applications, in a kind of 'set and forget' scenario.
• We study McKean-Vlasov stochastic control problems where both the cost functions and the state dynamics depend upon the joint distribution of the controlled state and the control process. Our contribution is twofold. On the one hand, we prove a suitable version of the Pontryagin stochastic maximum principle (in necessary and in sufficient form). On the other hand, we suggest a variational approach to study a weak formulation of these difficult control problems. In this context, we derive a necessary martingale optimality condition, and we establish a new connection between such problems and an optimal transport problem on path space.

serfati philippe Feb 16 2018 10:57 UTC

+on (3 and more, 2008-13-14..) papers of bourgain etal (and their numerous descendants) on =1/ (t-static) illposednesses for the nd incompressible euler equations (and nse) and +- critical spaces, see possible counterexamples constructed on my nd shear flows, pressureless (shockless) solutions of in

...(continued)
serfati philippe Feb 15 2018 13:29 UTC

on transport and continuity equations with regular speed out of an hypersurface, and on it, having 2 relative normal components with the same punctual sign (possibly varying) and better unexpected results on solutions and jacobians etc, see (https://www.researchgate.net/profile/Philippe_Serfati), pa

...(continued)
Beni Yoshida Feb 13 2018 19:53 UTC

This is not a direct answer to your question, but may give some intuition to formulate the problem in a more precise language. (And I simplify the discussion drastically). Consider a static slice of an empty AdS space (just a hyperbolic space) and imagine an operator which creates a particle at some

...(continued)
Abhinav Deshpande Feb 10 2018 15:42 UTC

I see. Yes, the epsilon ball issue seems to be a thorny one in the prevalent definition, since the gate complexity to reach a target state from any of a fixed set of initial states depends on epsilon, and not in a very nice way (I imagine that it's all riddled with discontinuities). It would be inte

...(continued)
Elizabeth Crosson Feb 10 2018 05:49 UTC

Thanks for the correction Abhinav, indeed I meant that the complexity of |psi(t)> grows linearly with t.

Producing an arbitrary state |phi> exactly is also too demanding for the circuit model, by the well-known argument that given any finite set of gates, the set of states that can be reached i

...(continued)