# Top arXiv papers

• We report on the increased extraction of light emitted by solid-state sources embedded within high-refractive-index materials. This is achieved by exploiting a local lensing effect produced by sub-micron metallic rings deposited on the sample surface and centered around single emitters. We show enhancements in the intensity of the light emitted into free space by InAs/GaAs single quantum dot lines as high as a factor of 20. Such a device is intrinsically broadband and therefore compatible with any kind of solid-state light source. We foresee the fabrication of metallic rings via scalable techniques, like nano-imprint, and their implementation to improve the emission of classical and quantum light from solid-state sources. Furthermore, while increasing the brightness of the devices, the metallic rings can also act as top contacts for the local application of electric fields for carrier injection or wavelength tuning.
• The present letter to the editor is one in a series of publications discussing the formulation of hypotheses (propositions) for the evaluation of the strength of forensic evidence. In particular, the discussion focuses on the issue of what information may be used to define the relevant population specified as part of the different-speaker hypothesis in forensic voice comparison. The previous publications in the series are: Hicks et al. (2015) <http://dx.doi.org/10.1016/j.scijus.2015.06.008>; Morrison et al. (2016) <http://dx.doi.org/10.1016/j.scijus.2016.07.002>; Hicks et al. (2017) <http://dx.doi.org/10.1016/j.scijus.2017.04.005>. The last of these letters mostly resolves the apparent disagreement between the two groups of authors. We briefly discuss one outstanding point of apparent disagreement and attempt to correct a misinterpretation of our earlier remarks. We believe that at this point there is no actual disagreement, and that both groups of authors are calling for greater collaboration in order to reduce the likelihood of future misunderstandings.
• This study aims to investigate the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis, using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Two populations representing the conditions of a violation vs. a non-violation of the sphericity assumption, without any between-group effect or within-subject effect, were created, and 5,000 random samples were drawn from each population. Finally, the mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as m = 3, 6, and 9 measurement occasions. For MLM-UN, the results show a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The mean Type I error rates for rANOVA with Greenhouse-Geisser correction show a small conservative bias when sphericity was not violated, sample sizes were small (n = 20), and there were m = 6 or more measurement occasions. The results argue for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. MLM-UN may be used when the sphericity assumption is violated and sample sizes are large.
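The core simulation logic (draw samples under the null hypothesis at a nominal level, count rejections) can be sketched in a few lines. This is a minimal illustration only: it uses a simple two-sided z-test in place of the rANOVA/MLM machinery, which is not reproduced here.

```python
import math
import random
import statistics

def type1_rate(n=20, n_sims=5000, seed=1):
    """Estimate the Type I error rate of a two-sided z-test by simulation.

    Data are drawn under the null hypothesis (mean 0, known sd 1), so the
    empirical rejection rate should sit close to the nominal 5% level.
    """
    rng = random.Random(seed)
    z_crit = 1.959964          # two-sided 5% critical value, standard normal
    rejections = 0
    for _ in range(n_sims):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        z = statistics.fmean(sample) / (1.0 / math.sqrt(n))
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims
```

With 5,000 simulations, the estimate's standard error is about 0.003, so the rate should land very close to 0.05; a biased procedure would drift away from the nominal level.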
• The vector potential operator, $\hat{\boldsymbol A}$, is transformed and rewritten in terms of cosine and sine functions in order to get a clear picture of how the photon states relate to the $\boldsymbol A$ field. The phase operator, defined by $\hat E = \exp(-i \hat \phi)$, is derived from this picture. The result has a close resemblance to the known Susskind-Glogower (SG) operator, which is given by $\hat E_{SG}=(\hat a_{\boldsymbol k} \hat a_{\boldsymbol k}^\dagger)^{-1/2} \hat a_{\boldsymbol k}$. It will be shown that $\hat a_{\boldsymbol k}$ should instead be replaced by $(\hat a_{\boldsymbol k} + \hat a_{-\boldsymbol k}^\dagger)$ to yield $\hat E = ((\hat a_{\boldsymbol k} + \hat a_{-\boldsymbol k}^\dagger ) (\hat a_{\boldsymbol k}^\dagger + \hat a_{-\boldsymbol k}))^{-1/2} (\hat a_{\boldsymbol k} + \hat a_{-\boldsymbol k}^\dagger)$, which makes the operator unitary. $\hat E$ will also be analyzed when restricted to the space of only forward-moving photons with wave vector $\boldsymbol k$. The resulting phase operator, $\hat E_+$, will turn out to resemble the SG operator as well, but with a small correction: whereas $\hat E_{SG}$ can be equivalently written as $\hat E_{SG} = \sum_{n=0}^{\infty} |n\rangle \langle n+1 |$, the operator $\hat E_+$ is instead given by $\hat E_+ = \sum_{n=0}^{\infty} a_n |n \rangle \langle n+1|$, where $a_n = (n+1/2)!/(n! \sqrt{n+1})$. The sequence $(a_n)_{n \in \lbrace 0, 1, 2, \ldots \rbrace}$ converges to $1$ from below as $n \to \infty$.
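The claimed convergence of $a_n$ to 1 from below is easy to check numerically. Reading $(n+1/2)!$ as $\Gamma(n+3/2)$, a small sketch using log-gamma (to avoid the overflow `math.gamma` would hit for large $n$):

```python
import math

def a(n):
    """a_n = (n + 1/2)! / (n! * sqrt(n + 1)), with (n + 1/2)! = Gamma(n + 3/2).

    Computed in log space: Gamma(171) already overflows a double, while
    lgamma stays finite for any practical n.
    """
    log_an = math.lgamma(n + 1.5) - math.lgamma(n + 1.0) - 0.5 * math.log(n + 1.0)
    return math.exp(log_an)
```

Evaluating a few terms shows the monotone approach from below: $a_0 = \Gamma(3/2) \approx 0.886$, and $a_n$ climbs toward 1 (the asymptotics give $a_n \approx 1 - 1/(8(n+1))$).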
• The functional network approach, where fMRI BOLD time series are mapped to networks depicting functional relationships between brain areas, has opened new insights into the function of the human brain. In this approach, the choice of network nodes is of crucial importance. One option is to consider fMRI voxels as nodes. This results in a large number of nodes, making network analysis and interpretation of results challenging. A common alternative is to use pre-defined clusters of anatomically close voxels, Regions of Interest (ROIs). This approach assumes that voxels within ROIs are functionally similar. Because these two approaches result in different network structures, it is crucial to understand what happens to network connectivity when moving from the voxel level to the ROI level. We show that the consistency of ROIs, defined as the mean Pearson correlation coefficient between the time series of their voxels, varies widely in resting-state experimental data. Therefore the assumption of similar voxel dynamics within each ROI does not generally hold. Further, the time series of low-consistency ROIs may be highly correlated, resulting in spurious links in ROI-level networks. Based on these results, we recommend that averaging BOLD signals over anatomically defined ROIs should be carefully considered.
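Under one plausible reading of the consistency measure (the mean pairwise Pearson correlation among an ROI's voxel time series; the paper's exact definition may differ), a minimal sketch:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def roi_consistency(voxel_ts):
    """Mean pairwise Pearson correlation across the ROI's voxel time series."""
    k = len(voxel_ts)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    return sum(pearson(voxel_ts[i], voxel_ts[j]) for i, j in pairs) / len(pairs)
```

A "consistent" ROI whose voxels share one underlying signal scores near 1, while an ROI of unrelated voxel dynamics scores near 0, which is the distinction the abstract draws.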
• Let $K$ be a field of characteristic different from 2 and let $V$ be a vector space of dimension $n$ over $K$. Let $M$ be a non-zero subspace of symmetric bilinear forms defined on $V \times V$ and let $\mathrm{rank}(M)$ denote the set of distinct positive integers that occur as the ranks of the non-zero elements of $M$. The main result of this paper is the inequality $\dim M \le |\mathrm{rank}(M)|\, n$, provided that $|K| \ge n$.
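As an illustrative sanity check (an example of ours, not from the paper): take $n = 2$ and $M$ the full space of symmetric bilinear forms on $K^2$, so non-zero forms have rank 1 or 2 and

```latex
\dim M = 3 \;\le\; |\mathrm{rank}(M)| \, n = |\{1, 2\}| \cdot 2 = 4 .
```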
• Bounded weak solutions of Burgers' equation $\partial_tu+\partial_x(u^2/2)=0$ that are not entropy solutions need in general not be $BV$. Nevertheless, it is known that solutions with finite entropy productions have a $BV$-like structure: a rectifiable jump set of dimension one can be identified, outside of which $u$ has vanishing mean oscillation at all points. But it is not known whether all points outside this jump set are Lebesgue points, as they would be for $BV$ solutions. In the present article we show that the set of non-Lebesgue points of $u$ has Hausdorff dimension at most one. In contrast with the aforementioned structure result, we need only one particular entropy production to be a finite Radon measure, namely $\mu=\partial_t (u^2/2)+\partial_x(u^3/3)$. We prove Hölder regularity at points where $\mu$ has finite $(1+\alpha)$-dimensional upper density for some $\alpha>0$. The proof is inspired by a result of De Lellis, Westdickenberg and the second author: if $\mu_+$ has vanishing 1-dimensional upper density, then $u$ is an entropy solution. We obtain a quantitative version of this statement: if $\mu_+$ is small, then $u$ is close in $L^1$ to an entropy solution.
• In this paper, we propose a novel method to jointly solve the scene layout estimation and global registration problems for accurate indoor 3D reconstruction. Given a sequence of range data, we first build a set of scene fragments using KinectFusion and register them through pose graph optimization. Afterwards, we alternate between layout estimation and layout-based global registration in an iterative fashion so that the two processes complement each other. We extract the scene layout through hierarchical agglomerative clustering and energy-based multi-model fitting, taking noisy measurements into account. With the estimated scene layout in hand, we register all the range data through the global iterative closest point algorithm, where the positions of 3D points that belong to the layout, such as walls and ceilings, are constrained to be close to the layout. We experimentally verify the proposed method on publicly available synthetic and real-world datasets, in both quantitative and qualitative ways.
• We describe absolutely ordered $p$-normed spaces, for $1 \le p \le \infty$, which present a model for "non-commutative" vector lattices and include order-theoretic orthogonality. To demonstrate its relevance, we introduce the notion of *absolute compatibility* among positive elements in absolute order unit spaces and relate it to the symmetrized product in the case of a C$^{\ast}$-algebra. In the latter case, whenever one of the elements is a projection, the elements are absolutely compatible if and only if they commute. We develop an order-theoretic prototype of these results. For this purpose, we introduce the notion of *order projections* and extend the results related to projections in a unital C$^{\ast}$-algebra to order projections in an absolute order unit space. As an application, we describe a spectral decomposition theory for elements of an absolute order unit space.
• Using the method of Elias-Hogancamp and the combinatorics of toric braids, we give an explicit formula for the triply graded Khovanov-Rozansky homology of an arbitrary torus knot, thereby proving some of the conjectures of Aganagic-Shakirov, Cherednik, Gorsky-Negut, and Oblomkov-Rasmussen-Shende. (Apr 26 2017, math.QA math.AG math.GT math.RT, arXiv:1704.07630v1)
• Wireless surveillance is becoming increasingly important for protecting public security by legitimately eavesdropping on suspicious wireless communications. This paper studies the wireless surveillance of a two-hop suspicious communication link by a half-duplex legitimate monitor. By exploiting the suspicious link's two-hop nature, the monitor can adaptively choose among the following three eavesdropping modes to improve the eavesdropping performance: (I) *passive eavesdropping*, intercepting both hops to decode the message collectively; (II) *proactive eavesdropping* via *noise jamming* over the first hop; and (III) *proactive eavesdropping* via *hybrid jamming* over the second hop. In both proactive eavesdropping modes, the (noise/hybrid) jamming over one hop serves to reduce the end-to-end communication rate of the suspicious link and accordingly make interception over the other hop easier. Under this setup, we maximize the eavesdropping rate at the monitor by jointly optimizing the eavesdropping mode selection as well as the transmit power for noise and hybrid jamming. Numerical results show that the eavesdropping mode selection significantly improves the eavesdropping rate compared to each individual eavesdropping mode.
• In this paper, we study some order theoretic properties of $M$-ideals in order smooth $\infty$-normed spaces. We obtain an order theoretic version of the Alfsen-Effros cone decomposition theorem \cite[Theorem 2.9]{AE} for order smooth $1$-normed spaces satisfying condition $(OS.1.2)$. As an application of this result, we sharpen a result on the extension of bounded positive linear functionals on subspaces of order smooth $\infty$-normed spaces. We also give two different characterizations of $M$-ideals of order smooth $\infty$-normed spaces. Finally, we characterize approximate order unit spaces as those order smooth $\infty$-normed spaces $V$ that are $M$-ideals in $\tilde{V}$. Here $\tilde{V}$ is the order unit space obtained by adjoining an order unit to $V$. We obtain this result by realising a complete order smooth $\infty$-normed space $V$ as $A_0(Q(V))$, the space of continuous affine functions on $Q(V)$ vanishing at $0$. Here $Q(V)$ is the set of quasi-states of $V$.
• For the search for charginos and neutralinos in the Minimal Supersymmetric Standard Model (MSSM), as well as for future precision analyses of these particles, an accurate knowledge of their production and decay properties is mandatory. We evaluate the cross sections for chargino and neutralino production at e+e- colliders in the MSSM with complex parameters (cMSSM). The evaluation is based on a full one-loop calculation of the production mechanisms e+e- -> cha_c cha_c' and e+e- -> neu_n neu_n', including soft and hard photon radiation. To simplify the analysis, we mostly restricted ourselves to a version of our renormalization scheme which is valid for |M_1| < |M_2|, |mu| and M_2 != mu, even though we are able to switch to other parameter regions and correspondingly different renormalization schemes. The dependence of the chargino/neutralino cross sections on the relevant cMSSM parameters is analyzed numerically. We find sizable contributions to many production cross sections. They amount to roughly 10-20% of the tree-level results, but can go up to 40% or higher in extreme cases. The dependence of the one-loop corrections on the complex phases was also found to be non-negligible. The full one-loop contributions are thus crucial for physics analyses at a future linear e+e- collider such as the ILC or CLIC.
• We propose a novel, semi-supervised approach towards domain taxonomy induction from an input vocabulary of seed terms. Unlike most previous approaches, which typically extract direct hypernym edges for terms, our approach utilizes a novel probabilistic framework to extract hypernym subsequences. Taxonomy induction from extracted subsequences is cast as an instance of the minimum-cost flow problem on a carefully designed directed graph. Through experiments, we demonstrate that our approach outperforms state-of-the-art taxonomy induction approaches across four languages. Furthermore, we show that our approach is robust to the presence of noise in the input vocabulary.
• In a weighted sequence, for every position of the sequence and every letter of the alphabet a probability of occurrence of this letter at this position is specified. Weighted sequences are commonly used to represent imprecise or uncertain data, for example, in molecular biology where they are known under the name of Position-Weight Matrices. Given a probability threshold $\frac1z$, we say that a string $P$ of length $m$ matches a weighted sequence $X$ at starting position $i$ if the product of probabilities of the letters of $P$ at positions $i,\ldots,i+m-1$ in $X$ is at least $\frac1z$. In this article, we consider an indexing variant of the problem, in which we are to preprocess a weighted sequence to answer multiple pattern matching queries. We present an $O(nz)$-time construction of an $O(nz)$-sized index for a weighted sequence of length $n$ over an integer alphabet that answers pattern matching queries in optimal, $O(m+\mathit{Occ})$ time, where $\mathit{Occ}$ is the number of occurrences reported. Our new index is based on a non-trivial construction of a family of $\lfloor z \rfloor$ weighted sequences of an especially simple form that are equivalent to a general weighted sequence. This new combinatorial insight allowed us to obtain: a construction of the index in the case of a constant-sized alphabet with the same complexities as in (Barton et al., CPM 2016) but with a simple implementation; a deterministic construction in the case of a general integer alphabet (the construction of Barton et al. in this case was randomised); an improvement of the space complexity from $O(nz^2)$ to $O(nz)$ of a more general index for weighted sequences that was presented in (Biswas et al., EDBT 2016); and a significant improvement of the complexities of the approximate variant of the index of Biswas et al.
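The matching criterion itself is easy to state in code. Below is a naive $O(nm)$ checker for the definition (not the paper's $O(nz)$ index); a weighted sequence is modeled as a list of letter-to-probability dicts, one per position:

```python
def matches(P, X, i, z):
    """Check whether string P matches weighted sequence X at position i.

    X is a list of dicts mapping letters to occurrence probabilities;
    the match succeeds when the probability product is >= 1/z.
    """
    if i + len(P) > len(X):
        return False
    prob = 1.0
    for offset, letter in enumerate(P):
        prob *= X[i + offset].get(letter, 0.0)
        if prob < 1.0 / z:   # early exit: the product can only shrink
            return False
    return True

def occurrences(P, X, z):
    """All starting positions at which P matches X under threshold 1/z."""
    return [i for i in range(len(X)) if matches(P, X, i, z)]
```

For example, with `X = [{'a': 0.5, 'b': 0.5}, {'a': 1.0}, {'a': 0.5, 'c': 0.5}]` and `z = 4` (threshold 0.25), the pattern `"aa"` occurs at positions 0 and 1.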
• We study the dynamics of a quantum impurity immersed in a Bose-Einstein condensate as an open quantum system in the framework of the quantum Brownian motion model. We derive a generalized Langevin equation for the position of the impurity. The Langevin equation is an integrodifferential equation that contains a memory kernel and is driven by a colored noise. These result from considering the environment as given by the degrees of freedom of the quantum gas, and thus depend on its parameters, e.g. interaction strength between the bosons, temperature, etc. We study the role of the memory on the dynamics of the impurity. When the impurity is untrapped, we find that it exhibits a super-diffusive behavior at long times. We find that back-flow in energy between the environment and the impurity occurs during evolution. When the particle is trapped, we calculate the variance of the position and momentum to determine how they compare with the Heisenberg limit. One important result of this paper is that we find position squeezing for the trapped impurity at long times. We determine the regime of validity of our model and the parameters in which these effects can be observed in realistic experiments.
• We propose sensorimotor tappings, a new graphical technique that explicitly represents relations between the time steps of an agent's sensorimotor loop and a single training step of an adaptive model that the agent is using internally. In the simplest case this is a relation linking two time steps. In realistic cases these relations can extend over several time steps and over different sensory channels. The aim is to capture the footprint of information intake relative to the agent's current time step. We argue that this view allows us to make prior considerations explicit and then use them in implementations without modification once they are established. In the paper we introduce the problem domain, explain the basic idea, provide example tappings for standard configurations used in developmental models, and show how tappings can be applied to problems in related fields. (Apr 26 2017, cs.NE, arXiv:1704.07622v1)
• The proliferation of the mobile Internet and connected devices, offering a variety of services at different levels of performance, represents a major challenge for fifth generation wireless networks and beyond. This requires a paradigm shift towards the development of key enabling techniques for the next generation of wireless networks. In this respect, visible light communication (VLC) has recently emerged as a new communication paradigm that is capable of providing ubiquitous connectivity by complementing radio frequency communications. One of the main challenges of VLC systems, however, is the low modulation bandwidth of the light-emitting diodes, which is in the megahertz range. This article presents a promising technology, referred to as "optical non-orthogonal multiple access (O-NOMA)", which is envisioned to address the key challenges in the next generation of wireless networks. We provide a detailed overview and analysis of the state-of-the-art integration of O-NOMA in VLC networks. Furthermore, we provide insights on the potential opportunities and challenges, as well as some open research problems that are envisioned to pave the way for the future design and implementation of O-NOMA in VLC systems.
• A hypersymplectic structure on a 4-manifold $X$ is a triple $\underline{\omega}$ of symplectic forms which at every point span a maximal positive-definite subspace of $\Lambda^2$ for the wedge product. This article is motivated by a conjecture of Donaldson: when $X$ is compact $\underline{\omega}$ can be deformed through cohomologous hypersymplectic structures to a hyperkähler triple. We approach this via a link with $G_2$-geometry. A hypersymplectic structure $\underline\omega$ on a compact manifold $X$ defines a natural $G_2$-structure $\phi$ on $X \times \mathbb{T}^3$ which has vanishing torsion precisely when $\underline{\omega}$ is a hyperkähler triple. We study the $G_2$-Laplacian flow starting from $\phi$, which we interpret as a flow of hypersymplectic structures. Our main result is that the flow extends as long as the scalar curvature of the corresponding $G_2$-structure remains bounded. An application of our result is a lower bound for the maximal existence time of the flow, in terms of weak bounds on the initial data (and with no assumption that scalar curvature is bounded along the flow).
• This paper presents the designs of asynchronous early output dual-bit full adders, without and with redundant logic (implicit), corresponding to homogeneous and heterogeneous delay-insensitive data encoding. For homogeneous delay-insensitive data encoding only the dual-rail, i.e. 1-of-2, code is used, and for heterogeneous delay-insensitive data encoding 1-of-2 and 1-of-4 codes are used. The 4-phase return-to-zero protocol is used for handshaking. To demonstrate the merits of the proposed dual-bit full adder designs, 32-bit ripple carry adders (RCAs) are constructed from dual-bit full adders. The proposed dual-bit full adder based 32-bit RCAs incorporating redundant logic feature reduced latency and area compared to their non-redundant counterparts, with no accompanying power penalty. In comparison with the weakly indicating 32-bit RCA constructed using homogeneously encoded dual-bit full adders containing redundant logic, the early output 32-bit RCA comprising the proposed homogeneously encoded dual-bit full adders with redundant logic shows corresponding reductions in latency and area of 22.2% and 15.1%, with no associated power penalty. Likewise, the early output 32-bit RCA constructed using the proposed heterogeneously encoded dual-bit full adder incorporating redundant logic shows respective decreases in latency and area of 21.5% and 21.3%, with nil power overhead, compared to the weakly indicating 32-bit RCA consisting of heterogeneously encoded dual-bit full adders with redundant logic. The simulation results obtained are based on a 32/28nm CMOS process technology.
• We report modifications of the temperature-dependent transport properties of $\mathrm{MoS_2}$ thin flakes via field-driven ion intercalation in an electric double layer transistor. We find that intercalation with $\mathrm{Li^+}$ ions induces the onset of an inhomogeneous superconducting state. Intercalation with $\mathrm{K^+}$ leads instead to a disorder-induced incipient metal-to-insulator transition. These findings suggest that similar ionic species can provide access to different electronic phases in the same material.
• We consider higher derivative supergravities which are dual to ghost-free $N=1$ SUGRAs in the Einstein frame. The duality is implemented by deforming the Kähler function and/or the superpotential to include non-linear dependences on chiral fields that in other approaches play the role of the Lagrange multipliers employed to establish this duality. These models are of the no-scale type, and in the minimal case they require the presence of four chiral multiplets, with a Kähler potential having the structure of the $SU(4,1)/SU(4) \times U(1)$ coset manifold. Moreover, they give rise to deformations of the supersymmetric Starobinsky model with interesting features. In their standard $N=1$ supergravity formulation they are described by multi-scalar potentials, featuring Starobinsky directions, and can, in principle, lead to successful cosmological inflation.
• While part-of-speech (POS) tagging and dependency parsing are observed to be closely related, existing work on joint modeling with manually crafted feature templates suffers from the feature sparsity and incompleteness problems. In this paper, we propose an approach to joint POS tagging and dependency parsing using transition-based neural networks. Three neural network based classifiers are designed to resolve shift/reduce, tagging, and labeling conflicts. Experiments show that our approach significantly outperforms previous methods for joint POS tagging and dependency parsing across a variety of natural languages.
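For readers unfamiliar with transition-based parsing, here is a minimal arc-standard sketch with a static oracle that derives the transition sequence from a gold (projective) tree; in the paper's setting, neural classifiers would replace this oracle at parsing time and additionally resolve the tagging and labeling decisions:

```python
def arc_standard_parse(heads):
    """Derive dependency arcs via arc-standard transitions and a static oracle.

    heads maps each token (1..n) to its gold head (0 is the artificial ROOT).
    The oracle chooses left-arc / right-arc / shift by inspecting the gold
    tree, which only works for projective trees.
    """
    buffer = list(range(1, len(heads) + 1))
    stack = [0]          # start with ROOT on the stack
    arcs = set()         # collected (head, dependent) pairs

    def has_all_children(t):
        return all((t, d) in arcs for d in heads if heads[d] == t)

    while buffer or len(stack) > 1:
        if len(stack) >= 2 and stack[-2] != 0 and heads[stack[-2]] == stack[-1]:
            dep = stack.pop(-2)                     # left-arc
            arcs.add((stack[-1], dep))
        elif (len(stack) >= 2 and heads[stack[-1]] == stack[-2]
              and has_all_children(stack[-1])):
            arcs.add((stack[-2], stack.pop()))      # right-arc
        else:
            stack.append(buffer.pop(0))             # shift
    return arcs
```

For "She saw stars" with heads {1: 2, 2: 0, 3: 2}, the oracle shifts twice, attaches token 1 left, shifts, attaches token 3 right, then attaches the verb to ROOT, recovering all three gold arcs.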
• The smart meter (SM) privacy problem is addressed together with the cost of energy for the consumer. It is assumed that a storage device, e.g., an electrical battery, is available to the consumer, which can be utilized both to achieve privacy and to reduce the energy cost by modifying the energy consumption profile. Privacy is measured via the squared-error distortion between the SM readings, which are reported to the utility provider (UP), and a target profile, whereas time-of-use pricing is considered for energy cost calculation. Extensive numerical results are presented to evaluate the performance of the proposed strategy. The trade-off between the achievable privacy and the energy cost is studied by taking into account the limited capacity of the battery as well as the capability to sell energy to the UP.
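The privacy measure and the battery's role can be illustrated with a toy greedy policy (an assumption for illustration, not the paper's strategy; it ignores charge/discharge rate limits and the time-of-use cost term): at each slot the battery absorbs or supplies the difference between consumption and the target profile, as far as its state of charge allows.

```python
def mask_load(consumption, target, capacity):
    """Greedy battery policy: move the reported grid load toward the target
    profile at every slot, clipped by the battery's state of charge."""
    soc = capacity / 2.0                               # start half full
    grid = []
    for c, t in zip(consumption, target):
        delta = t - c                                  # desired charge rate
        delta = max(-soc, min(capacity - soc, delta))  # keep 0 <= soc <= capacity
        soc += delta
        grid.append(c + delta)                         # what the SM reports
    return grid

def distortion(profile, target):
    """Squared-error distortion between a reported profile and the target."""
    return sum((p - t) ** 2 for p, t in zip(profile, target)) / len(profile)
```

Since each step moves the reported load toward the target (or leaves it unchanged when the battery is empty or full), the masked profile never has higher distortion than reporting the raw consumption.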
• In this paper, we consider the singular isothermal sphere lensing model, which has a spherically symmetric power-law mass distribution $\rho_{tot}(r)\sim r^{-\gamma}$. We investigate whether the mass density power-law index $\gamma$ evolves cosmologically by using strong gravitational lensing (SGL) observations in combination with other cosmological observations. We also check whether the constraint on $\gamma$ is affected by the cosmological model, by considering several simple dynamical dark energy models. We find that the constraint on $\gamma$ is mainly determined by the SGL observations and is independent of the cosmological model, and we find no evidence for the evolution of $\gamma$ from the SGL observations.
• The structure, function, and failure of many socio-technical networked systems are deeply related to the emergence of large-scale connectivity. There is thus great interest in understanding how the onset of the transition to large-scale connectivity can be either enhanced or delayed. Several studies have addressed this question by considering unrestricted interventions in the process of link addition, revealing effective control over the onset and abruptness of the transition. Here, we consider the more realistic case where resources are limited and each intervention is therefore associated with a cost. We show that the maximum possible delay is achieved by the intervention protocol that guarantees that the budget just survives until the onset of percolation, so resource-optimal interventions can indeed delay percolation. However, they typically produce a discontinuous percolation transition: a postponed but systemic loss of controllability and an extremely abrupt emergence of large-scale connectivity, an unintended self-defeating consequence.
• In this article a framework is presented for the joint optimization of the analog transmit and receive filter with respect to a channel estimation problem. At the receiver, conventional signal processing systems restrict the bandwidth of the analog pre-filter $B$ to the rate of the analog-to-digital converter $f_s$ in order to comply with the well-known Nyquist sampling theorem. In contrast, here we consider a transceiver that by design violates the common paradigm $B\leq f_s$. To this end, at the receiver we allow for a higher pre-filter bandwidth $B>f_s$ and study the achievable channel estimation accuracy under a fixed sampling rate when the transmit and receive filter are jointly optimized with respect to the Bayesian Cramér-Rao lower bound. For the case of a channel with unknown delay-Doppler shift we show how to approximate the required Fisher information matrix and solve the transceiver design problem by an alternating optimization algorithm. The presented approach allows us to explore the Pareto-optimal region spanned by transmit and receive filters which are favorable under a weighted mean squared error criterion. We discuss the complexity of the obtained transceiver design by visualizing the resulting ambiguity function. Finally, we verify the achievable performance of the proposed designs by Monte-Carlo simulations of a likelihood-based channel estimator.
• Multiple antennas have been exploited for spatial multiplexing and diversity transmission in a wide range of communication applications. However, most of the advances in the design of high-speed wireless multiple-input multiple-output (MIMO) systems are based on information-theoretic principles that demonstrate how to efficiently transmit signals conforming to a Gaussian distribution. Although the Gaussian signal is capacity-achieving, signals conforming to discrete constellations are transmitted in practical communication systems. As a result, this paper is motivated to provide a comprehensive overview of MIMO transmission design with discrete input signals. We first summarize the existing fundamental results for MIMO systems with discrete input signals. Then, focusing on basic point-to-point MIMO systems, we examine transmission schemes based on the three most important criteria for communication systems: mutual information driven designs, mean square error driven designs, and diversity driven designs. In particular, a unified framework for designing low complexity transmission schemes applicable to massive MIMO systems in upcoming 5G wireless networks is provided for the first time. Moreover, adaptive transmission designs which switch among these criteria based on the channel conditions to formulate the best transmission strategy are discussed. We then provide a survey of transmission designs with discrete input signals for multiuser MIMO scenarios, including MIMO uplink transmission, MIMO downlink transmission, the MIMO interference channel, and the MIMO wiretap channel. Additionally, we discuss transmission designs with discrete input signals for other systems using MIMO technology. Finally, technical challenges which remain unresolved at the time of writing are summarized, and future trends of transmission designs with discrete input signals are addressed.
• The $\beta$ decay of the odd-odd nucleus $^{70}$Br has been investigated with the BigRIPS and EURICA setups at the Radioactive Ion Beam Factory (RIBF) of the RIKEN Nishina Center. The $T=0$ ($J^{\pi}=9^+$) and $T=1$ ($J^{\pi}=0^+$) isomers have both been produced in in-flight fragmentation of $^{78}$Kr with ratios of 41.6(8)\% and 58.4(8)\%, respectively. A half-life of $t_{1/2}=2157^{+53}_{-49}$ ms has been measured for the $J^{\pi}=9^+$ isomer from $\gamma$-ray time decay analysis. Based on this result, we provide a new value of the half-life for the $J^{\pi}=0^+$ ground state of $^{70}$Br, $t_{1/2}=78.42\pm0.51$ ms, which is slightly more precise than, and in excellent agreement with, the best measurement reported hitherto in the literature. For this decay, we provide the first estimate of the total branching fraction decaying through the $2^+_1$ state in the daughter nucleus $^{70}$Se, $R(2^+_1)=1.3\pm1.1\%$. We also report four new low-intensity $\gamma$-ray transitions at 661, 1103, 1561, and 1749 keV following the $\beta$ decay of the $J^{\pi}=9^+$ isomer. Based on their coincidence relationships, we tentatively propose two new excited states at 3945 and 4752 keV in $^{70}$Se with most probable spins and parities of $J^{\pi}=(6^+)$ and $(8^+)$, respectively. The observed structure is interpreted with the help of shell-model calculations, which predict a complex interplay between oblate and prolate configurations at low excitation energies.
• Propagation of coherent light in a Kerr nonlinear medium can be mapped onto a flow of an equivalent fluid. Here we use this mapping to model the conditions in the vicinity of a rotating black hole as a Laguerre-Gauss vortex beam. We describe weak fluctuations of the phase and amplitude of the electric field by wave equations in curved space, with a metric that is similar to the Kerr metric. We find the positions of event horizons and ergoregion boundaries, and the conditions for the onset of superradiance, which are simultaneously the conditions for a resonance in the analogue Hawking radiation. The resonance strongly enhances the otherwise exponentially weak Hawking radiation at certain frequencies, and makes its experimental observation feasible.
• We consider the inverse Galois problem over function fields of positive characteristic p, for example, the inverse Galois problem over the projective line. We describe a method to construct certain Galois covers of the projective line and other curves, which are ordinary in the sense that their Jacobian has maximal p-torsion. We do this by constructing Galois covers of ordinary semi-stable curves, and then deforming them into smooth Galois covers.
• This paper proposes the use of the Spectral method to simulate diffusive moisture transfer through porous materials, which can be strongly nonlinear and can significantly affect sensible and latent heat transfer. An alternative way of computing solutions by considering a separated representation is presented, which can be applied to both linear and nonlinear diffusive problems with highly moisture-dependent properties. The Spectral method is compared with the classical implicit Euler and Crank-Nicolson schemes. The results show that the Spectral approach enables accurate simulation of the field of interest. Furthermore, the numerical gains become particularly interesting for nonlinear cases, since the proposed method can drastically reduce the computer run time, by 99% when compared to the traditional Crank-Nicolson scheme.
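For the linear constant-coefficient case, the contrast between a spectral solution and Crank-Nicolson time stepping can be sketched in a few lines (an illustrative stand-in for the idea, not the paper's moisture model, using 1D periodic diffusion u_t = nu*u_xx):

```python
import numpy as np

def diffuse(nx=64, nu=0.01, t_end=1.0, dt=0.01):
    """Solve u_t = nu * u_xx with periodic BCs two ways: an exact-in-time
    Fourier spectral solution and Crank-Nicolson time stepping (applied
    mode-by-mode, where the scheme is diagonal)."""
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    u0 = np.sin(x)
    k = np.fft.fftfreq(nx, d=1.0 / nx)        # integer wavenumbers on [0, 2*pi)
    # spectral: each Fourier mode decays exactly as exp(-nu*k^2*t)
    u_spec = np.fft.ifft(np.fft.fft(u0) * np.exp(-nu * k**2 * t_end)).real
    # Crank-Nicolson amplification factor per step, applied repeatedly
    uh = np.fft.fft(u0)
    amp = (1.0 - 0.5 * dt * nu * k**2) / (1.0 + 0.5 * dt * nu * k**2)
    for _ in range(int(round(t_end / dt))):
        uh = amp * uh
    u_cn = np.fft.ifft(uh).real
    return x, u_spec, u_cn
```

For the smooth initial condition sin(x) the spectral answer matches the analytical decay exp(-nu*t)*sin(x) to machine precision in a single evaluation, while the time stepper needs 100 steps to get close; the nonlinear, moisture-dependent case treated in the paper is of course more involved.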
• The share of wind energy in total installed power capacity has grown rapidly in recent years around the world. Producing accurate and reliable forecasts of wind power production, together with a quantification of the uncertainty, is essential to optimally integrate wind energy into power systems. We build spatio-temporal models for wind power generation and obtain full probabilistic forecasts from 15 minutes to 5 hours ahead. Detailed analyses of the forecast performance on the individual wind farms and on aggregated wind power are provided. We show that it is possible to improve the results of forecasting aggregated wind power by utilizing spatio-temporal correlations among individual wind farms. Furthermore, spatio-temporal models have the advantage of being able to produce spatially out-of-sample forecasts. We evaluate the predictions on a data set from wind farms in western Denmark and compare the spatio-temporal model with an autoregressive model containing a common autoregressive parameter for all wind farms, identifying the specific cases when it is important to have a spatio-temporal model instead of a temporal one. This case study demonstrates that it is possible to obtain fast and accurate forecasts of wind power generation at wind farms where data are available, but also for a larger portfolio including wind farms at new locations. The results and the methodologies are relevant for wind power forecasts across the globe as well as for spatio-temporal modelling in general.
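A minimal sketch of the temporal baseline mentioned above (my own illustration, not the authors' model): a least-squares AR(1) fit and recursive point forecast for a single mean-centered production series. The spatio-temporal model would add lagged values of neighbouring farms as extra regressors.

```python
import numpy as np

def ar1_forecast(y, steps=4):
    """Fit an AR(1) coefficient by least squares and forecast recursively.

    A purely temporal baseline for one wind farm; the spatio-temporal
    alternative augments the regression with neighbouring farms' lags.
    """
    phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
    preds, last = [], y[-1]
    for _ in range(steps):
        last = phi * last          # mean-zero AR(1) point forecast
        preds.append(last)
    return phi, np.array(preds)

# synthetic AR(1) series with true coefficient 0.8
rng = np.random.default_rng(0)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()
phi_hat, preds = ar1_forecast(y)
```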
• This note revolves around the free Dirac operator in $\mathbb{R}^3$ and its $\delta$-shell interaction with electrostatic potentials supported on a sphere. On the one hand, we characterize the eigenstates of those couplings by finding sharp constants and minimizers of some precise inequalities related to an uncertainty principle. On the other hand, we prove that the domains given by Dittrich, Exner and Šeba [Dirac operators with a spherically symmetric $\delta$-shell interaction, J. Math. Phys. 30.12 (1989), 2875-2882] and by Arrizabalaga, Mas and Vega [Shell interactions for Dirac operators, J. Math. Pures et Appl. 102.4 (2014), 617-639] for the realization of an electrostatic spherical shell interaction coincide. Finally, we explore the spectral relation between the shell interaction and its approximation by short-range potentials with shrinking support, improving previous results in the spherical case.
• The distribution of N/O abundance ratios calculated by detailed modelling of different galaxy spectra at z<4 is investigated. Supernova (SN) and long gamma-ray burst (LGRB) host galaxies cover different redshift domains. N/O in SN hosts increases towards low z (~0.01) due to secondary N production, accompanying the growing trend of active galaxies (AGN, LINER). N/O in LGRB hosts decreases rapidly between z>1 and z~0.1, following the N/H trend, and reaches the characteristic N/O ratios calculated for the HII regions in local and nearby galaxies. The few short-duration GRB (SGRB) hosts included in the galaxy sample show N/H <0.04 solar and O/H solar. They seem to continue the low bound of the N/H trend of SN hosts at z<0.3. The distribution of N/O as a function of metallicity for SN and LGRB hosts is compared with stellar chemical evolution models. The results show that several LGRB hosts can be explained by multi-burst star formation models when 12+log(O/H)<8.5, while some objects follow the trend of continuous star formation models. N/O in SN hosts at 12+log(O/H)<8.5 is not well explained by stellar chemical evolution models calculated for starburst galaxies. At 12+log(O/H)>8.5 many different objects are nested close to solar O/H, with N/O ranging between the maximum corresponding to starburst galaxies and AGN and the minimum corresponding to HII regions and SGRB hosts.
• Objective: The recent emergence and success of electroencephalography (EEG) in low-cost portable devices has opened the door to a new generation of applications processing a small number of EEG channels for health monitoring and brain-computer interfacing. These recordings are, however, contaminated by many sources of noise degrading the signals of interest, thus compromising the interpretation of the underlying brain state. In this work, we propose a new data-driven algorithm to effectively remove ocular and muscular artifacts from single-channel EEG: surrogate-based artifact removal (SuBAR). Methods: By means of the time-frequency analysis of surrogate data, our approach is able to automatically identify and filter ocular and muscular artifacts embedded in single-channel EEG. Results: In a comparative study using artificially contaminated EEG signals, the efficacy of the algorithm in terms of noise removal and signal distortion was superior to that of other traditionally employed single-channel EEG denoising techniques: wavelet thresholding and canonical correlation analysis combined with an advanced version of the empirical mode decomposition. Even in the presence of mild and severe artifacts, our artifact removal method provides a relative error 4 to 5 times lower than that of traditional techniques. Significance: In view of these results, the SuBAR method is a promising solution for mobile environments, such as ambulatory healthcare systems, sleep stage scoring, or anesthesia monitoring, where very few EEG channels or even a single channel is available.
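The core surrogate idea can be sketched in a few lines (a hedged toy version, using windowed energies instead of the paper's time-frequency coefficients): phase-randomized surrogates share the signal's power spectrum but scramble any transient, so windows of the original that are far more energetic than their surrogate counterparts are flagged as artifact-like.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but random phases."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                    # keep the mean
    if n % 2 == 0:
        phases[-1] = 0.0               # keep the Nyquist bin real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

def artifact_mask(x, n_surr=100, win=32, seed=0):
    """Flag windows whose energy exceeds the 95th percentile of the
    surrogate window energies (a crude stand-in for SuBAR's
    time-frequency test)."""
    rng = np.random.default_rng(seed)
    nwin = len(x) // win
    energies = lambda s: np.array([np.sum(s[i*win:(i+1)*win] ** 2)
                                   for i in range(nwin)])
    surr = np.array([energies(phase_randomized_surrogate(x, rng))
                     for _ in range(n_surr)])
    return energies(x) > np.percentile(surr, 95)
```

On a sinusoidal "EEG" with an added high-amplitude burst, only the burst window stands out against the surrogate ensemble; the actual method then filters the flagged time-frequency content rather than whole windows.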
• We prove, under some assumptions, the existence of correctors for the stochastic homogenization of "viscous", possibly degenerate Hamilton-Jacobi equations in stationary ergodic media. The general claim is that, assuming knowledge of homogenization in probability, correctors exist for all extreme points of the convex hull of the sublevel sets of the effective Hamiltonian. Even when homogenization is not a priori known, the arguments imply existence of correctors and, hence, homogenization in some new settings. These include positively homogeneous Hamiltonians and, hence, geometric-type equations, including motion by mean curvature, in radially symmetric environments and for all directions. Correctors also exist and, hence, homogenization holds for many directions for non-convex Hamiltonians and general stationary ergodic media.
• Dias and Silvera (Letters, p. 715, 2017) claim the observation of the Wigner-Huntington transition to metallic hydrogen at 495 GPa. We show that neither the claim of record pressure nor that of a phase transition to a metallic state is supported by any data, and that both contradict the authors' own unconfirmed previous results.
• We report the observation of magnetic domains in the exotic, antiferromagnetically ordered all-in-all-out state of Nd$_2$Zr$_2$O$_7$, induced by spin canting. The all-in-all-out state can be realized by Ising-like spins on a pyrochlore lattice and is established in Nd$_2$Zr$_2$O$_7$ below 0.31 K for external magnetic fields up to 0.14 T. Two different spin arrangements can fulfill this configuration, which leads to the possibility of magnetic domains. The all-in-all-out domain structure can be controlled by an external magnetic field applied parallel to the [111] direction. This is a result of the different spin-canting mechanisms of the two all-in-all-out configurations for this direction of the magnetic field. The change of the domain structure is observed through a hysteresis in the magnetic susceptibility. No hysteresis occurs, however, when the external magnetic field is applied along [100].
• Complex networks are usually characterized in terms of their topological, spatial, or information-theoretic properties, and combinations of the associated metrics are used to discriminate networks into different classes or categories. However, even with the present variety of characteristics at hand, it remains a subject of current research to appropriately quantify a network's complexity and, on that basis, discriminate between different types of complex networks, such as infrastructure or social networks. Here, we explore the possibility of classifying complex networks by means of a statistical complexity measure that has formerly been successfully applied to distinguish different types of chaotic and stochastic time series. It is composed of a network's averaged per-node entropic measure, characterizing the network's information content, and the associated Jensen-Shannon divergence, as a measure of disequilibrium. We study 29 real-world networks and show that networks of the same category tend to cluster in distinct areas of the resulting complexity-entropy plane. We demonstrate that within our framework, connectome networks exhibit among the highest complexity while, e.g., transportation and infrastructure networks display significantly lower values. We then show in a second application that the proposed framework is useful to objectively construct threshold-based networks, such as functional climate networks or recurrence networks, by choosing the threshold such that the statistical network complexity is maximized.
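As a hedged, self-contained illustration of the complexity-entropy construction (applied here to a bare probability vector, e.g. a degree distribution, rather than the per-node measure the paper defines): complexity is the product of normalized Shannon entropy and a normalized Jensen-Shannon disequilibrium with respect to the uniform distribution, so it vanishes at both the fully ordered and fully random extremes.

```python
import numpy as np

def shannon(q):
    """Shannon entropy (in nats) of a discrete distribution."""
    q = q[q > 0]
    return -np.sum(q * np.log(q))

def complexity_entropy(p):
    """Return (normalized entropy H, statistical complexity C = H * Q),
    where Q is the Jensen-Shannon divergence between p and the uniform
    distribution, normalized by its maximum (attained by a delta)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    n = len(p)
    u = np.full(n, 1.0 / n)
    jsd = lambda a, b: shannon(0.5 * (a + b)) - 0.5 * shannon(a) - 0.5 * shannon(b)
    delta = np.zeros(n)
    delta[0] = 1.0                      # maximally ordered reference
    h = shannon(p) / np.log(n)          # normalized entropy in [0, 1]
    q = jsd(p, u) / jsd(delta, u)       # normalized disequilibrium in [0, 1]
    return h, h * q
```

A uniform distribution gives (H, C) = (1, 0) and a delta gives (0, 0); intermediate distributions land at positive complexity, which is the property the complexity-entropy plane exploits.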
• In this paper we survey many of the known results about Morse boundaries and stability.
• Black-Scholes (BS) is the standard mathematical model for option pricing in financial markets. Option prices are calculated using an analytical formula whose main inputs are the strike (the price at which the option is exercised) and the volatility. The BS framework assumes that volatility remains constant across all strikes; in practice, however, it varies. How do traders come to learn these parameters? We introduce natural models of learning agents, in which traders update their beliefs about the true implied volatility based on the opinions of other traders. We prove convergence of these opinion dynamics using techniques from control theory and leader-follower models, thus reconciling theory with market practice. We allow for two different models, one with feedback and one with an unknown leader and no feedback. Both scalar and multidimensional cases are analyzed.
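A minimal sketch of the leader-follower flavor of such dynamics (my own toy illustration, not the paper's model): followers repeatedly move their implied-volatility belief toward the group average, while a single informed leader holds the true value, and all beliefs converge to the leader's.

```python
import numpy as np

def leader_follower_consensus(sigma_true=0.2, n=10, steps=200, eta=0.3, seed=1):
    """Followers pull their implied-volatility belief toward the group
    average each round; agent 0 is an informed leader pinned at the
    true implied volatility, so all beliefs converge to it."""
    rng = np.random.default_rng(seed)
    beliefs = rng.uniform(0.1, 0.5, n)      # initial (wrong) beliefs
    beliefs[0] = sigma_true                 # the stubborn, informed leader
    for _ in range(steps):
        mean = beliefs.mean()               # opinions visible to everyone
        beliefs[1:] += eta * (mean - beliefs[1:])
    return beliefs
```

The follower mean contracts toward the leader's value at rate (1 - eta/n) per round, which is the kind of geometric convergence the control-theoretic analysis formalizes.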
• We present the results of our investigation of the star-forming potential in the Perseus star-forming complex. We build on previous starless core, protostellar core, and young stellar object (YSO) catalogs from Spitzer, Herschel, and SCUBA observations in the literature. We place the cores and YSOs within seven star-forming clumps based on column densities greater than 5x10^21 cm^-2. We calculate the mean density and free-fall time for 69 starless cores as 5.55x10^-19 g cm^-3 and 0.1 Myr, respectively, and we estimate the star formation rate for the near future as 150 Msun Myr^-1. According to Bonnor-Ebert stability analysis, we find that the majority of starless cores in Perseus are unstable; broadly, these cores can collapse to form the next generation of stars. We find a relation between starless cores and YSOs, where the number of young protostars (Class 0 + Class I) is similar to the number of starless cores. This similarity, which shows a one-to-one relation, suggests that these starless cores may form the next generation of stars with approximately the same formation rate as the current generation, as identified by the Class 0 and Class I protostars. It follows that if such a relation between starless cores and any YSO stage exists, the SFR values of these two populations must be nearly constant. In brief, we propose that this one-to-one relation is an important factor in better understanding the star formation process within a cloud.
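The quoted free-fall time follows directly from the quoted mean density via the standard expression t_ff = sqrt(3*pi / (32*G*rho)); the few lines below are a consistency check of that arithmetic, not the authors' code.

```python
import math

G = 6.674e-8                 # gravitational constant, cgs (cm^3 g^-1 s^-2)
rho = 5.55e-19               # mean starless-core density from above, g cm^-3
t_ff = math.sqrt(3 * math.pi / (32 * G * rho))   # free-fall time in seconds
t_ff_myr = t_ff / 3.156e13   # convert: ~3.156e13 seconds per Myr
# t_ff_myr comes out near 0.09, consistent with the quoted ~0.1 Myr
```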
• In this article, we present the general form of the full electromagnetic Green function suitable for application in bulk material physics. In particular, we show how the seven adjustable parameter functions of the free Green function translate into seven corresponding parameter functions of the full Green function. Furthermore, for both the fundamental response tensor and the electromagnetic Green function, we discuss the reduction of the Dyson equation on the four-dimensional Minkowski space to an equivalent, three-dimensional Cartesian Dyson equation.
• In molecular outflows from forming low-mass protostars, most oxygen is expected to be locked up in water. However, Herschel observations have shown that typically an order of magnitude or more of the oxygen is still unaccounted for. To test if the oxygen is instead in atomic form, SOFIA-GREAT observed the R1 position of the bright molecular outflow from NGC1333-IRAS4A. The [OI] 63 um line is detected and spectrally resolved. From an intensity peak at +15 km/s, the intensity decreases until +50 km/s. The profile is similar to that of high-velocity (HV) H2O and CO 16-15, the latter observed simultaneously with [OI]. A radiative transfer analysis suggests that ~15% of the oxygen is in atomic form toward this shock position. The CO abundance is inferred to be ~10^-4 by a similar analysis, suggesting that this is the dominant oxygen carrier in the HV component. These results demonstrate that a large portion of the observed [OI] emission is part of the outflow. Further observations are required to verify whether this is a general trend.
• We present the analysis of 29.77 days of K2 space photometry of the well-detached massive 4.6 d O+B binary HD 165246 (V=7.8) obtained during Campaign 9b. This analysis reveals intrinsic variability in the residual lightcurve after subtraction of the binary model, in the frequency range [0,10] $d^{-1}$. This makes HD 165246 only the second O+B eclipsing binary with asteroseismic potential. While some of the frequencies are connected with the rotation of the primary, others are interpreted as due to oscillations with periodicities of order days. The frequency resolution of the current dataset does not allow us to distinguish between frequencies due to standing coherent oscillation modes and those due to travelling waves. Further time-resolved high-precision spectroscopy covering several binary orbits will reveal whether HD 165246 is a Rosetta stone for synergistic binary and seismic modelling of an O-type star.
• One of the major difficulties in employing phase field crystal (PFC) modeling and the associated amplitude (APFC) formulation is tuning the model parameters to match experimental quantities. In this work we address the problem of tuning the defect core and interface energies in the APFC formulation. We show that the addition of a single term to the free energy functional can be used to increase the solid-liquid interface and defect energies in a well-controlled fashion, without any major changes to other features. The influence of the new term is explored in 2D triangular and honeycomb structures as well as bcc and fcc lattices in 3D. In addition, a Finite Element Method (FEM) is developed for the model that incorporates a mesh refinement scheme. The combination of FEM and mesh refinement to simulate amplitude expansion with the new energy term provides a method of controlling microscopic features such as defect and interface energies while simultaneously delivering a coarse-grained examination of the system.
• In this paper, we develop a simple model describing the inherent photon-number noise in Rarity-Tapster-type interferometers. This noise is caused by generating photon pairs in the process of spontaneous parametric down-conversion and adding a third photon by attenuating the fundamental laser mode to the single-photon level. We experimentally verify our model and present the resulting signal-to-noise ratios as well as the obtained three-photon generation rates as functions of various setup parameters. Subsequently, we evaluate the impact of this particular source of noise on quantum teleportation, a key quantum information protocol that uses this interferometric configuration.
• Many studies show that the acquisition of knowledge is key to building the competitive advantage of companies. We propose a simple model of knowledge transfer within an organization and implement the proposed model using a cellular automata technique. In this paper the organization is considered in the context of complex systems; in this perspective, the main role in the organization is played by the network of informal contacts and distributed leadership. The goal of this paper is to determine which factors influence the efficiency and effectiveness of knowledge transfer. Our studies indicate a significant role of the initial concentration of chunks of knowledge in the knowledge transfer process, and the results suggest taking action in the organization to shorten the (social) distance between people with different levels of knowledge, or working out incentives to share knowledge.
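A minimal cellular-automaton sketch of such knowledge diffusion (my own toy with hypothetical parameters, not the authors' model): knowledge spreads from initial holders to grid neighbours with a fixed transfer probability, and the initial concentration of holders controls how quickly the organization saturates.

```python
import numpy as np

def knowledge_ca(size=20, steps=50, p_seed=0.05, p_transfer=0.5, seed=0):
    """Toy CA for knowledge diffusion on a periodic grid: a cell acquires
    the knowledge with probability p_transfer per step if at least one
    von Neumann neighbour already has it."""
    rng = np.random.default_rng(seed)
    k = rng.random((size, size)) < p_seed      # initial knowledge holders
    for _ in range(steps):
        has_neigh = (np.roll(k, 1, 0) | np.roll(k, -1, 0) |
                     np.roll(k, 1, 1) | np.roll(k, -1, 1))
        transfer = has_neigh & (rng.random((size, size)) < p_transfer)
        k = k | transfer
    return k.mean()                             # fraction holding the knowledge
```

Varying p_seed while holding the total number of seeds fixed (clustered versus scattered) is the kind of experiment the paper's concentration result refers to.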
• Several papers have recently presented measurements of physical aging obtained by studying the behavior of glassy materials quenched from temperatures above their glass transition temperature $T_g$. The evolution of the aging process is usually followed by plotting the relaxed enthalpy versus the accompanying decrease in volume. Here, we focus on the slope of such plots, which is found to be similar to the inverse of the isothermal compressibility close to $T_g$. An explanation of this empirical result is attempted within the framework of a model that interconnects the defect Gibbs energy with properties of the bulk material.
