results for au:Davis_M in:cs
Oct 13 2017 cs.NI
Optical network virtualisation is a promising technology enabler for supporting the growing, dynamic network services that run over widely distributed network resources. Variations in physical-layer impairments and in the traffic patterns themselves jointly affect how these services are accommodated. In this paper, we introduce real-time multi-technology transport-layer monitoring to facilitate the coordinated virtualisation of optical and Ethernet networks supported by optical virtualise-able transceivers (V-BVTs). We present a monitoring and network resource configuration scheme that includes hardware monitoring in both the Ethernet and optical layers. The scheme also specifies the data and control interactions among multiple network layers in a software-defined networking (SDN) context, as well as the applications that analyse the monitored data retrieved from the database. A use case of the scheme in an inter-data-centre scenario is also described. In addition, a re-configuration algorithm is presented that adaptively modifies the composition of virtual optical networks based on two criteria: (i) based on the monitored Ethernet traffic, the Ethernet layer is reconfigured to groom traffic, and (ii) based on the monitored optical quality of transmission (QoT), the optical layer is reconfigured to modify the creation of virtual transceivers within the V-BVT. Furthermore, the proposed monitoring scheme is experimentally demonstrated, covering the monitoring of server network interface cards (NICs), Ethernet switch ports and optical transmission links. The algorithm's outcome is executed by the SDN controller through OpenFlow (OF) extensions, and a holistic (re-)configuration across both layers, spanning Ethernet switches and V-BVTs, is presented and studied.
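As a concrete illustration of the two-criteria algorithm described above, the following Python fragment sketches one way such a decision step could look. It is a minimal sketch only: the thresholds, the MonitoringSample structure and the action names are hypothetical and not taken from the paper, whose algorithm acts through OpenFlow extensions rather than returning strings.

# Illustrative sketch of the two-criteria reconfiguration loop described
# above. All names and thresholds are hypothetical; the paper's actual
# algorithm, monitoring interfaces and OpenFlow extensions are not shown.
from dataclasses import dataclass

@dataclass
class MonitoringSample:
    ethernet_load: float   # utilisation of a monitored Ethernet port, 0..1
    osnr_db: float         # optical signal-to-noise ratio of a link (dB)

GROOMING_THRESHOLD = 0.8   # assumed trigger for Ethernet-layer grooming
QOT_FLOOR_DB = 15.0        # assumed minimum acceptable QoT

def reconfigure(sample: MonitoringSample) -> list[str]:
    """Return the (re-)configuration actions implied by one monitoring sample."""
    actions = []
    # Criterion (i): groom traffic in the Ethernet layer when load is high.
    if sample.ethernet_load > GROOMING_THRESHOLD:
        actions.append("groom-ethernet-traffic")
    # Criterion (ii): adjust virtual transceivers when QoT degrades.
    if sample.osnr_db < QOT_FLOOR_DB:
        actions.append("recreate-virtual-transceiver")
    return actions

print(reconfigure(MonitoringSample(ethernet_load=0.9, osnr_db=12.0)))
# -> ['groom-ethernet-traffic', 'recreate-virtual-transceiver']

Running the example with a heavily loaded port and a degraded link yields both actions, mirroring how the two criteria can trigger grooming and virtual-transceiver changes independently.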
Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists' productivity and the reliability of their software.
In this article we extend earlier work on the jump-diffusion risk-sensitive asset management problem [SIAM J. Fin. Math. (2011) 22-54] by allowing jumps in both the factor process and the asset prices, as well as stochastic volatility and investment constraints. In this case, the Hamilton-Jacobi-Bellman (HJB) equation is a partial integro-differential equation (PIDE). By combining viscosity solutions with a change of notation, a policy improvement argument and classical results on parabolic PDEs, we prove that the HJB PIDE admits a unique smooth solution. A verification theorem concludes the resolution of this problem.
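For orientation, an HJB PIDE of this general type has the schematic form below, written in standard notation; the paper's precise coefficients, jump measure and constraint set differ, so this is an illustrative template rather than the equation actually analysed there.

\partial_t \Phi(t,x) + \sup_{h \in \mathcal{H}} \Big\{ f(t,x,h) + b(t,x,h)^{\top} D\Phi(t,x) + \tfrac{1}{2}\,\mathrm{tr}\!\big(\Sigma\Sigma^{\top}(t,x,h)\, D^{2}\Phi(t,x)\big)
\qquad + \int_{\mathbf{Z}} \big[\Phi(t, x + \xi(t,x,h,z)) - \Phi(t,x) - \xi(t,x,h,z)^{\top} D\Phi(t,x)\big]\, \nu(\mathrm{d}z) \Big\} = 0, \qquad \Phi(T,x) = g(x).

The nonlocal integral term generated by the jumps is what makes this a PIDE rather than a PDE, and the supremum over admissible controls h encodes the investment constraints.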
An asymmetric information model is introduced for the situation in which there is a small agent who is more susceptible to the flow of information in the market than the general market participant, and who tries to implement strategies based on the additional information. In this model market participants have access to a stream of noisy information concerning the future return of an asset, whereas the informed trader has access to a further information source which is obscured by an additional noise that may be correlated with the market noise. The informed trader uses the extraneous information source to seek statistical arbitrage opportunities, while at the same time accommodating the additional risk. The amount of information available to the general market participant concerning the asset return is measured by the mutual information of the asset price and the associated cash flow. The worth of the additional information source is then measured in terms of the difference of mutual information between the general market participant and the informed trader. This difference is shown to be nonnegative when the signal-to-noise ratio of the information flow is known in advance. Explicit trading strategies leading to statistical arbitrage opportunities, taking advantage of the additional information, are constructed, illustrating how excess information can be translated into profit.
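To make the information measure concrete, here is the standard definition it relies on, in notation that is illustrative rather than the paper's own: writing X for the asset price and Y for the associated cash flow, the mutual information is

I(X;Y) = H(Y) - H(Y \mid X) = \iint p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)}\,\mathrm{d}x\,\mathrm{d}y,

and the worth of the additional information source is the difference \Delta = I_{\text{informed}}(X;Y) - I_{\text{market}}(X;Y), which the abstract asserts is nonnegative when the signal-to-noise ratio of the information flow is known in advance.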
Searching techniques for Case Based Reasoning systems involve extensive methods of elimination. In this paper, we look at a new method of arriving at the right solution by performing a series of transformations upon the data. These involve N-gram based comparison and deduction of the input data against the case data, using Morphemes and Phonemes as the deciding parameters. Possible errors are eliminated by a similar technique that uses a noise-removal function. Error tracking and elimination are performed through a statistical analysis of the obtained data, in which the entire data set is analyzed as sub-categories of various etymological derivatives. A probability analysis for the closest match is then performed, which yields the final expression. This final expression is referred to the Case Base, and the output is routed through an Expert System based on the best possible match. The threshold for the match is customizable and can be set by the Knowledge-Architect.
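The following Python fragment sketches the thresholded N-gram retrieval step described above. The function names, the use of character bigrams with a Dice coefficient, and the default threshold are all hypothetical choices for illustration; the paper's morpheme/phoneme parameters and noise-removal function are not modelled here.

# Illustrative n-gram matching sketch for the retrieval step described above.
def ngrams(text: str, n: int = 2) -> set[str]:
    """Return the set of character n-grams of `text`."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def similarity(a: str, b: str, n: int = 2) -> float:
    """Dice coefficient over character n-grams, in [0, 1]."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def closest_case(query: str, case_base: list[str], threshold: float = 0.6):
    """Return the best-matching case above the (customizable) threshold, else None."""
    best = max(case_base, key=lambda c: similarity(query, c))
    return best if similarity(query, best) >= threshold else None

print(closest_case("fonetic", ["phonetic", "frenetic", "poetic"]))
# -> 'phonetic' (Dice similarity about 0.77, above the 0.6 threshold)

The customizable threshold plays the role the abstract assigns to the Knowledge-Architect: a query whose best match falls below it yields no case, rather than a poor one.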