An outstanding challenge in nonlinear systems theory is the identification, or learning, of a given nonlinear system's Koopman operator directly from data or models. Advances in extended dynamic mode decomposition (EDMD) and machine learning methods have enabled data-driven discovery of Koopman operators for both continuous- and discrete-time systems. Since Koopman operators are often infinite-dimensional, they are approximated in practice by finite-dimensional systems. The fidelity and convergence of a given finite-dimensional Koopman approximation are subjects of ongoing research. In this paper we introduce a class of Koopman observable functions that confer an approximate closure property on the corresponding finite-dimensional approximations of the Koopman operator. We derive error bounds on the fidelity of this class of observable functions and identify two key learning parameters that can be used to tune performance. We illustrate our approach on two classical nonlinear system models: the van der Pol oscillator and the bistable toggle switch.
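The data-driven setting can be illustrated with a minimal EDMD least-squares fit (a standard textbook sketch, not this paper's observable construction; the `edmd` helper and the toy linear map are our own illustration):

```python
import numpy as np

def edmd(X, Y, observables):
    # Lift snapshot pairs (x_k, x_{k+1}) through the observable
    # dictionary and solve the least-squares problem
    #   Psi(X) K ~= Psi(Y)
    # for a finite-dimensional Koopman approximation K.
    Psi_X = np.column_stack([g(X) for g in observables])
    Psi_Y = np.column_stack([g(Y) for g in observables])
    K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)
    return K

# For the scalar linear map x -> 0.9 x, the dictionary {x, x^2} is
# exactly closed: x maps to 0.9 x and x^2 maps to 0.81 x^2, so the
# recovered K is diagonal.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 200)
Y = 0.9 * X
K = edmd(X, Y, [lambda x: x, lambda x: x**2])
```

For a dictionary that is not exactly closed under the dynamics, the lifted evolution leaks outside the span of the observables, which is precisely the approximation error the paper's closure property is designed to control.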
Dec 01 2017 cs.HC
Interactive Music Systems (IMS) have introduced a new world of music-making modalities. But can we really say that they create music, in the sense of true autonomous creation? Here we discuss the Video Interactive VST Orchestra (VIVO), an IMS that takes extra-musical information into account by adopting a simple salience-based model of user-system interaction to simulate intentionality in automatic music generation. We present key features of the theoretical framework, a brief overview of pilot research, and a case study providing validation of the model. This research demonstrates that a meaningful user-system interplay is established in what we define as reflexive multidominance.
The current generation of radio and millimeter telescopes, particularly the Atacama Large Millimeter Array (ALMA), offers enormous advances in observing capabilities. While these advances represent an unprecedented opportunity to advance scientific understanding, the increased complexity in the spatial and spectral structure of even a single spectral line is hard to interpret. The complexity present in current ALMA data cubes therefore challenges not only the existing tools for fundamental analysis of these datasets, but also users' ability to explore and visualize their data. We have performed a feasibility study for applying forms of topological data analysis and visualization never before tested by the ALMA community. Through contour tree-based data analysis, we seek to improve upon existing data cube analysis and visualization workflows in the form of improved accuracy and speed in feature extraction. In this paper, we review our design process in building effective analysis and visualization capabilities for astrophysicist users. We summarize effective design practices; in particular, we identify the domain-specific needs of simplicity, integrability, and reproducibility, in order to best target and serve the large astrophysics community.
Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments is redefining the scope and reach of the CSE endeavor: the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade.
Sep 23 2015 cs.CY
Cloud Computing services are increasingly being made available by the UK Government through the Government digital marketplace to reduce costs and improve IT efficiency; however, little is known about the factors influencing the decision-making process for adopting cloud services within the UK Government. This research aims to develop a theoretical framework to understand risk perception and risk acceptance of cloud computing services. Study subjects (N=24) were recruited from three UK Government organizations to attend a semi-structured interview. Transcribed texts were analyzed using interpretive phenomenological analysis. Results showed that the most important factors influencing risk acceptance of cloud services are perceived benefits and opportunities, organizational risk culture, and perceived risks. We focused on perceived risks and perceived security concerns. Based on these results, we suggest a number of implications for risk managers, policy makers and cloud service providers.
Sep 23 2015 cs.CY
Cloud computing is revolutionising the way software services are procured and used by Government organizations and SMEs. Quantitative risk assessment of Cloud services is complex and undermined by specific security concerns regarding data confidentiality, integrity and availability. This study explores how the gap between the quantitative risk assessment and the perception of the risk can produce a bias in the decision-making process about Cloud computing adoption. The risk perception of experts in Cloud computing (N=37) and laypeople (N=81) about ten Cloud computing services was investigated using the psychometric paradigm. Results suggest that the risk perception of Cloud services can be represented by two components, called dread risk and unknown risk, which may explain up to 46% of the variance. Other factors influencing the risk perception were perceived benefits, trust in regulatory authorities and technology attitude. This study suggests some implications that could support Government and non-Government organizations in their strategies for Cloud computing adoption.
Spatial and temporal variability of HfOx-based resistive random access memory (RRAM) is investigated for manufacturing and product design. Manufacturing variability is characterized at different levels, including lots, wafers, and chips. Bit-error-rate (BER) is proposed as a holistic parameter for the write cycle resistance statistics. Using the electrical in-line-test cycle data, a method is developed to derive BERs as functions of the design margin, to provide guidance for technology evaluation and product design. The proposed BER calculation can also be used in off-line bench tests and built-in self-test (BIST) for adaptive error correction and for other types of random access memory.
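The margin-to-BER mapping can be sketched as follows. This is a toy model with synthetic lognormal resistance distributions; the `bit_error_rate` helper and its guard-band convention are our own illustration, not the paper's in-line-test procedure:

```python
import numpy as np

def bit_error_rate(r_low, r_high, threshold, margin=0.0):
    # Toy margin-based BER: a set (low-resistance) cell fails when its
    # cycle resistance crosses into the guard band around the read
    # threshold; a reset (high-resistance) cell fails symmetrically.
    # Widening the design margin counts more cells as errors.
    low_fail = np.count_nonzero(r_low >= threshold - margin)
    high_fail = np.count_nonzero(r_high <= threshold + margin)
    return (low_fail + high_fail) / (r_low.size + r_high.size)

# Synthetic lognormal cycle-resistance statistics (illustrative numbers,
# not measured data).
rng = np.random.default_rng(1)
r_low = rng.lognormal(np.log(1e4), 0.8, 50000)   # set-state resistances
r_high = rng.lognormal(np.log(1e6), 0.8, 50000)  # reset-state resistances
ber_tight = bit_error_rate(r_low, r_high, threshold=1e5)
ber_wide = bit_error_rate(r_low, r_high, threshold=1e5, margin=5e4)
```

Sweeping `margin` over a range of values yields the BER-versus-design-margin curve the abstract describes, which can then guide how much margin a product design must reserve.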
Apr 29 2014 cs.SE
We sketch a series of studies and experiments designed to provide empirical evidence about the truth or falsity of claims that non-prescriptive approaches to standards demand greater competence from regulators than prescriptive approaches require.
Feb 25 2014 cs.SY
Navigation satellites are a core component of satellite-based navigation systems such as GPS, GLONASS and Galileo, which provide location and timing information for a variety of uses. Such satellites are designed to operate on orbit, performing their tasks over lifetimes of 10 years or more. Reliability, availability and maintainability (RAM) analysis has been indispensable in the design phase of satellites in order to minimise failures, increase the mean time between failures (MTBF), and thus plan maintenance strategies, optimise reliability and maximise availability. In this paper, we present formal models of both a single satellite and a navigation satellite constellation, together with logical specifications of their reliability, availability and maintainability properties. The probabilistic model checker PRISM has been used to perform automated analysis of these quantitative properties.
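The availability side of such RAM analysis can be sketched with a closed-form steady-state calculation (this is a back-of-envelope illustration, not PRISM's probabilistic model checking; the k-of-n constellation formula assumes independent, identical satellites, and all numbers below are invented for the example):

```python
from math import comb

def satellite_availability(mtbf, mttr):
    # Steady-state availability of one repairable satellite:
    # fraction of time up = MTBF / (MTBF + MTTR).
    return mtbf / (mtbf + mttr)

def constellation_availability(n, k, a):
    # Probability that at least k of n independent satellites,
    # each with availability a, are operational (binomial tail).
    return sum(comb(n, j) * a**j * (1.0 - a)**(n - j)
               for j in range(k, n + 1))

# Illustrative figures: MTBF 10 years, MTTR 0.5 year, and a
# 24-satellite constellation that needs 21 satellites up.
a_sat = satellite_availability(10.0, 0.5)
a_con = constellation_availability(24, 21, a_sat)
```

A model checker like PRISM goes beyond this closed form by verifying temporal-logic properties (e.g. transient availability over a mission phase) against an explicit Markov model of failures and repairs.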
Oct 01 2013 cs.CY
Constructionism is a learning theory that states that we learn more when we construct tangible objects. In the process of building and presenting our work, we make concrete the abstract mental models we've formed, see where they break down through the feedback we receive, and revise the models accordingly. Computer programming has long been taught under a constructionist approach using sensory-rich contexts like robots, media, and Logo-style environments. Now, with affordable 3-D printers in the hands of consumers, we have a new medium in which learners may realize their computational ideas. In this demonstration, we share a mobile development environment named Madeup, which empowers its users to navigate 3-D space using a Logo-like imperative and functional language. Every stop in space becomes a vertex in a 3-D model. The generated models may be exported or uploaded to a 3-D printing service.
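The stop-becomes-vertex idea can be sketched with a toy 3-D turtle (our illustration; the command names and semantics are assumptions, not Madeup's actual syntax):

```python
import numpy as np

def turtle_path(commands, step=1.0):
    # Toy Logo-style 3-D turtle: every "move" stop emits a vertex,
    # "yaw" turns the heading in the horizontal plane by degrees.
    pos = np.zeros(3)
    heading = np.array([1.0, 0.0, 0.0])
    vertices = [pos.copy()]
    for cmd, arg in commands:
        if cmd == "move":
            pos = pos + heading * (arg * step)
            vertices.append(pos.copy())
        elif cmd == "yaw":
            t = np.radians(arg)
            c, s = np.cos(t), np.sin(t)
            x, y, z = heading
            heading = np.array([c * x - s * y, s * x + c * y, z])
    return np.array(vertices)

# Walking a unit square yields five vertices and returns to the origin.
path = turtle_path([("move", 1), ("yaw", 90), ("move", 1), ("yaw", 90),
                    ("move", 1), ("yaw", 90), ("move", 1)])
```

Adding pitch and roll commands would let the turtle leave the plane, and the accumulated vertex list is exactly the kind of geometry a mesh exporter or 3-D printing service consumes.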
Non-Equilibrium Social Science (NESS) emphasizes dynamical phenomena, for instance the way political movements emerge or competing organizations interact. This paper argues that predictive analysis is an essential element of NESS, occupying a central role in its scientific inquiry and representing a key activity of practitioners in domains such as economics, public policy, and national security. We begin by clarifying the distinction between models which are useful for prediction and the much more common explanatory models studied in the social sciences. We then investigate a challenging real-world predictive analysis case study, and find evidence that the poor performance of standard prediction methods does not indicate an absence of human predictability but instead reflects (1) incorrect assumptions concerning the predictive utility of explanatory models, (2) misunderstanding regarding which features of social dynamics actually possess predictive power, and (3) practical difficulties in exploiting predictive representations.
In this paper we consider the task of estimating the non-zero pattern of the sparse inverse covariance matrix of a zero-mean Gaussian random vector from a set of iid samples. Note that this is also equivalent to recovering the underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We present two novel greedy approaches to solving this problem. The first estimates the non-zero covariates of the overall inverse covariance matrix using a series of global forward and backward greedy steps. The second estimates the neighborhood of each node in the graph separately, again using greedy forward and backward steps, and combines the intermediate neighborhoods to form an overall estimate. The principal contribution of this paper is a rigorous analysis of the sparsistency, or consistency in recovering the sparsity pattern of the inverse covariance matrix. Surprisingly, we show that both the local and global greedy methods learn the full structure of the model with high probability given just $O(d\log(p))$ samples, which is a significant improvement over the state-of-the-art $\ell_1$-regularized Gaussian MLE (Graphical Lasso) that requires $O(d^2\log(p))$ samples. Moreover, the restricted eigenvalue and smoothness conditions imposed by our greedy methods are much weaker than the strong irrepresentable conditions required by the $\ell_1$-regularization based methods. We corroborate our results with extensive simulations and examples, comparing our local and global greedy methods to the $\ell_1$-regularized Gaussian MLE as well as the Neighborhood Greedy method to that of nodewise $\ell_1$-regularized linear regression (Neighborhood Lasso).
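The neighborhood-combination idea can be sketched in a few lines. Here ordinary least squares with a magnitude threshold stands in for the paper's greedy forward-backward steps, and the `thresh` value is an assumption of ours, so this is a structural illustration only:

```python
import numpy as np

def neighborhood_graph(samples, thresh=0.2):
    # Regress each node on all the others and declare an edge wherever
    # either endpoint selects the other (OR-combination of the
    # per-node neighborhood estimates).
    n, p = samples.shape
    nbrs = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        beta, *_ = np.linalg.lstsq(samples[:, others],
                                   samples[:, j], rcond=None)
        for b, k in zip(beta, others):
            if abs(b) > thresh:
                nbrs[j, k] = True
    return nbrs | nbrs.T

# Chain GMRF x0 - x1 - x2: the partial regression coefficient between
# x0 and x2 is zero, so only the two chain edges should appear.
rng = np.random.default_rng(3)
z = rng.standard_normal((2000, 3))
x0 = z[:, 0]
x1 = 0.8 * x0 + 0.6 * z[:, 1]
x2 = 0.8 * x1 + 0.6 * z[:, 2]
G = neighborhood_graph(np.column_stack([x0, x1, x2]))
```

The nodewise regression coefficients are proportional to the off-diagonal entries of the precision matrix, which is why the per-node supports, suitably combined, recover the GMRF edge set.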
We present an overview of a modeling environment, consisting of a coupled atmosphere-wildfire model, utilities for visualization, data processing, and diagnostics, open source software repositories, and a community wiki. The fire model, called SFIRE, is based on a fire-spread model, implemented by the level-set method, and is coupled with the Weather Research and Forecasting (WRF) model. A version with a subset of the features is distributed with WRF 3.3 as WRF-Fire. In each time step, the fire module takes the wind as input and returns the latent and sensible heat fluxes. The software architecture uses the WRF parallel infrastructure for massively parallel computing. Recent features of the code include interpolation from an ideal logarithmic wind profile for nonhomogeneous fuels and ignition from a fire perimeter with an atmosphere and fire spin-up. Real runs use online sources for fuel maps, fine-scale topography, and meteorological data, and can run faster than real time. Visualization pathways allow generating images and animations in many packages, including VisTrails, VAPOR, MayaVi, and Paraview, as well as output to Google Earth. The environment is available from openwfm.org. New diagnostic variables were added to the code recently, including a new kind of fireline intensity which, unlike Byram's fireline intensity, also takes into account the speed of burning.
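The level-set propagation at the core of such fire-spread models can be sketched as a single explicit step (a first-order sketch under the assumption of a given spread rate field; a production code like SFIRE adds upwinding, reinitialization, and the coupling to the atmospheric state):

```python
import numpy as np

def advance_fire_front(phi, rate, dx, dt):
    # The fire perimeter is the zero contour of the level-set function
    # phi; it moves normal to itself at spread rate `rate` (a scalar or
    # a per-cell array varying with fuel and wind):
    #   d(phi)/dt = -rate * |grad(phi)|
    gy, gx = np.gradient(phi, dx)
    return phi - dt * rate * np.sqrt(gx**2 + gy**2)

# Circular ignition: phi is the signed distance to a circle of radius
# 0.3, so one step with unit spread rate expands the burned region
# (the cells where phi < 0).
xs = np.linspace(-1.0, 1.0, 101)
Xg, Yg = np.meshgrid(xs, xs)
phi = np.sqrt(Xg**2 + Yg**2) - 0.3
phi_next = advance_fire_front(phi, rate=1.0, dx=xs[1] - xs[0], dt=0.05)
```

In the coupled model, `rate` would be recomputed each step from the interpolated near-surface wind and fuel properties, and the newly burned cells determine the heat fluxes returned to the atmosphere.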
In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting. Our first main result studies the sparsistency, or consistency in sparsity pattern recovery, properties of a forward-backward greedy algorithm as applied to general statistical models. As a special case, we then apply this algorithm to learn the structure of a discrete graphical model via neighborhood estimation. As a corollary of our general result, we derive sufficient conditions on the number of samples n, the maximum node-degree d and the problem size p, as well as other conditions on the model parameters, so that the algorithm recovers all the edges with high probability. Our result guarantees graph selection for samples scaling as $n = \Omega(d^2 \log(p))$, in contrast to existing convex-optimization based algorithms that require a sample complexity of $\Omega(d^3 \log(p))$. Further, the greedy algorithm only requires a restricted strong convexity condition which is typically milder than irrepresentability assumptions. We corroborate these results using numerical simulations at the end.
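The forward-backward idea can be sketched for sparse least squares (a simplified variant of ours; the stopping threshold `eps` and the backward criterion are assumptions, not the paper's algorithm or constants):

```python
import numpy as np

def forward_backward_greedy(X, y, eps=1e-3, backward_frac=0.5):
    # Forward step: add the feature that most reduces the residual sum
    # of squares. Backward step: remove any feature whose deletion
    # costs less than a fraction of the last forward gain.
    n, p = X.shape
    support = []

    def rss(S):
        if not S:
            return float(y @ y)
        beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        r = y - X[:, S] @ beta
        return float(r @ r)

    while True:
        base = rss(support)
        gains = [(base - rss(support + [j]), j)
                 for j in range(p) if j not in support]
        if not gains:
            break
        gain, j_best = max(gains)
        if gain < eps:
            break
        support.append(j_best)
        improved = True
        while improved and len(support) > 1:
            improved = False
            cur = rss(support)
            for j in list(support):
                S = [k for k in support if k != j]
                if rss(S) - cur < backward_frac * gain:
                    support.remove(j)
                    improved = True
                    break
    return sorted(support)

# Noiseless toy problem: y depends only on features 0 and 3.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3]
support = forward_backward_greedy(X, y)
```

The backward step is what distinguishes this from pure forward selection: an early, later-redundant pick can be undone, which is the mechanism behind the weaker conditions the abstract mentions.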
May 14 2003 cs.LO
A method is presented for computing minimal answers in disjunctive deductive databases under the disjunctive stable model semantics. Such answers are constructed by repeatedly extending partial answers. Our method is complete (in that every minimal answer can be computed) and does not admit redundancy (in the sense that every partial answer generated can be extended to a minimal answer), whence no non-minimal answer is generated. For stratified databases, the method does not (necessarily) require the computation of models of the database in their entirety. Compilation is proposed as a tool by which problems relating to computational efficiency and the non-existence of disjunctive stable models can be overcome. The extension of our method to other semantics is also considered.
Here we present the results of the NSF-funded Workshop on Computational Topology, which met on June 11 and 12 in Miami Beach, Florida. This report identifies important problems involving both computation and topology.
A distributed heap storage manager has been implemented on the Fujitsu AP1000 multicomputer. The performance of various pre-fetching strategies is experimentally compared. Pre-fetching yields subjective programming benefits, and objective performance benefits of up to 10%, for certain applications but not for all; the benefits depend on the specific data structure and access patterns. We suggest that the pre-fetching strategy be placed dynamically under the control of the application.
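Why the benefit depends on the access pattern can be seen in a toy model (our illustration, not the AP1000 implementation): a cache miss costs one remote round trip that also pulls in the next `depth` blocks, which helps sequential scans but not random access.

```python
class PrefetchingHeap:
    # Toy model of a pre-fetching heap storage manager.

    def __init__(self, fetch, depth=0):
        self.fetch = fetch          # fetch(block_id) -> block contents
        self.depth = depth          # extra blocks pulled in per miss
        self.cache = {}
        self.round_trips = 0

    def read(self, block_id):
        if block_id not in self.cache:
            self.round_trips += 1   # one request serves depth+1 blocks
            for b in range(block_id, block_id + 1 + self.depth):
                self.cache[b] = self.fetch(b)
        return self.cache[block_id]

# A sequential scan of 10 blocks: depth-1 pre-fetching halves the
# round trips relative to fetching on demand.
seq = PrefetchingHeap(lambda b: b * b, depth=1)
for i in range(10):
    seq.read(i)
baseline = PrefetchingHeap(lambda b: b * b, depth=0)
for i in range(10):
    baseline.read(i)
```

Letting the application set `depth` per data structure is the kind of dynamic, application-controlled pre-fetching the abstract suggests.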