results for au:Bernardo_M in:cs

- Nov 09 2016 cs.SY arXiv:1611.02518v2 The aim of this paper is to present the application of an approach to contraction theory recently developed for piecewise-smooth and switched systems. The approach, which can be used to analyze incremental stability properties of so-called Filippov systems (or variable-structure systems), is based on regularization, a procedure that makes the vector field of interest differentiable before its properties are analyzed. We show that, by using this extension of contraction theory to nondifferentiable vector fields, it is possible to design observers for a large class of piecewise-smooth systems using not only Euclidean norms, as in the previous literature, but also non-Euclidean norms. This allows greater flexibility in the design and encompasses both piecewise-linear and piecewise-smooth (nonlinear) systems. The theoretical methodology is illustrated via a set of representative examples.
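The regularization step described above can be sketched in a few lines. A minimal illustration, assuming a simple scalar Filippov system dx/dt = -x - sign(x) whose discontinuous sign term is smoothed by tanh(x/eps); the system and the smoothing kernel are illustrative choices, not the paper's:

```python
import math

def f_reg(x, eps=1e-2):
    # Regularized vector field: the discontinuous sign(x) term is replaced
    # by the smooth tanh(x / eps), making the field differentiable.
    return -x - math.tanh(x / eps)

def simulate(x0, dt=1e-3, steps=5000, eps=1e-2):
    # Forward-Euler integration of the regularized system.
    x = x0
    for _ in range(steps):
        x += dt * f_reg(x, eps)
    return x

# Two trajectories started far apart converge to each other, consistent
# with incremental stability of the regularized system.
a, b = simulate(2.0), simulate(-1.5)
```

Because the regularized field has a strictly negative slope everywhere, any two trajectories approach each other, which is the incremental-stability property that observer designs of this kind rely on.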
- Sep 20 2016 cs.SY arXiv:1609.05245v1 Atomic force microscopes have proved to be fundamental research tools in many situations where a gentle imaging process is required, and in a variety of environmental conditions, such as the study of biological samples. Among the possible modes of operation, intermittent contact mode causes the least wear to both the sample and the instrument; it is therefore ideal when imaging soft samples. However, intermittent contact mode is not particularly fast when compared to other imaging strategies. In this paper, we introduce three enhanced control approaches, applied to both the dither and z-axis piezos, to address the limitations of existing control schemes. Our proposed strategies are able to eliminate different image artefacts, automatically adapt the scan speed to the sample being scanned, and predict its features in real time. As a result, both the image quality and the scan time are improved.
- Movement coordination in human ensembles has received little attention in the current literature. Existing experimental works have investigated situations where all subjects are connected with each other through direct visual and auditory coupling, so that social interaction affects their coordination. Here, we study coordination in human ensembles via a novel computer-based set-up that enables individuals to coordinate each other's motion from a distance, so as to minimize the influence of social interaction. The proposed platform makes it possible to implement different visual interaction patterns among the players, so that participants take into consideration the motion of only a designated subset of the others. This allows us to isolate the effects on coordination of the structure of interconnections among the players and of their own dynamics. Our set-up also enables the deployment of virtual players to investigate dyadic interaction between a human and a virtual agent, as well as group synchronization in mixed teams of human and virtual agents. We use this novel set-up to study coordination both in dyads and in groups over different structures of interconnections, with and without virtual agents. We find that, in dyadic interaction, virtual players manage to interact with participants in a human-like fashion, confirming findings of previous work. We also observe that, in group interaction, the level of coordination among humans in the absence of direct visual and auditory coupling depends on the structure of interconnections among participants. This confirms, as recently suggested in the literature, that different coordination levels are achieved over diverse visual pairings in the presence and in the absence of social interaction. We present preliminary experimental results on the effect on group coordination of deploying virtual computer agents in the human ensemble.
- Mar 02 2016 cs.SY arXiv:1603.00322v4 We present new conditions to obtain synchronization and consensus patterns in complex network systems. The key idea is to exploit symmetries of the nodes' vector fields to induce a desired synchronization/consensus pattern, in which nodes are clustered in different groups, each converging towards a different synchronized evolution. We show that the new conditions offer a systematic methodology to design a distributed network controller able to drive a network of interest towards a desired synchronization/consensus pattern.
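As a toy illustration of symmetry-induced cluster synchronization (a minimal sketch with made-up linear dynamics and coupling, not the paper's conditions or controller): four nodes form two clusters, and because swapping the two nodes within a cluster leaves the network invariant, each cluster converges to its own synchronized evolution.

```python
def simulate(dt=0.01, steps=2000, k=0.5):
    x = [1.0, -1.0, 4.0, 0.0]  # nodes 0,1 form cluster A; nodes 2,3 form cluster B
    # Identical intrinsic dynamics within each cluster, different across clusters.
    f = [lambda v: -v + 1.0] * 2 + [lambda v: -v + 2.0] * 2
    # Bipartite coupling: every A-node connects to every B-node, preserving the
    # swap symmetries 0<->1 and 2<->3.
    edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
    for _ in range(steps):
        dx = [f[i](x[i]) for i in range(4)]
        for i, j in edges:
            dx[i] += k * (x[j] - x[i])
            dx[j] += k * (x[i] - x[j])
        x = [x[i] + dt * dx[i] for i in range(4)]
    return x

x = simulate()
# Within-cluster differences vanish, while the two clusters settle on
# distinct synchronized values.
```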
- Jan 26 2016 cs.LO arXiv:1601.06198v3 Larsen and Skou characterized probabilistic bisimilarity over reactive probabilistic systems with a logic including true, negation, conjunction, and a diamond modality decorated with a probabilistic lower bound. Later on, Desharnais, Edalat, and Panangaden showed that negation is not necessary to characterize the same equivalence. In this paper, we prove that the logical characterization also holds when conjunction is replaced by disjunction, with negation still being unnecessary. To this end, we introduce reactive probabilistic trees, a fully abstract model for reactive probabilistic systems that allows us to demonstrate the expressiveness of the disjunctive probabilistic modal logic, as well as of the previously mentioned logics, by means of a compactness argument.
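The diamond modality with a probabilistic lower bound can be evaluated directly on a finite reactive probabilistic system. A hypothetical toy checker (the encoding of systems and formulas is ours, not the paper's): ⟨a⟩_p φ holds in s when the a-successors of s satisfying φ carry total probability at least p.

```python
def holds(s, phi, trans):
    # trans maps (state, action) to a probability distribution over targets.
    kind = phi[0]
    if kind == "true":
        return True
    if kind == "not":
        return not holds(s, phi[1], trans)
    if kind == "and":
        return holds(s, phi[1], trans) and holds(s, phi[2], trans)
    if kind == "or":
        return holds(s, phi[1], trans) or holds(s, phi[2], trans)
    if kind == "dia":  # ("dia", action, lower_bound, subformula)
        _, a, p, sub = phi
        dist = trans.get((s, a), {})
        return sum(pr for t, pr in dist.items() if holds(t, sub, trans)) >= p
    raise ValueError(kind)

# s0 --a--> s1 (prob 0.7) or s2 (prob 0.3); only s1 can then do b.
trans = {("s0", "a"): {"s1": 0.7, "s2": 0.3},
         ("s1", "b"): {"s3": 1.0}}
# <a>_{0.6} <b>_{1.0} true holds in s0: the b-capable successor has mass 0.7.
result = holds("s0", ("dia", "a", 0.6, ("dia", "b", 1.0, ("true",))), trans)
```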
- Oct 29 2015 cs.SY arXiv:1510.08368v2 In this paper we present a switching control strategy to incrementally stabilize a class of nonlinear dynamical systems. Exploiting recent results on contraction analysis of switched Filippov systems derived using regularization, sufficient conditions are presented to prove incremental stability of the closed-loop system. Furthermore, based on these sufficient conditions, a design procedure is proposed to design a switched control action that is active only where the open-loop system is not sufficiently incrementally stable, in order to reduce the required control effort. The design procedure to either locally or globally incrementally stabilize a dynamical system is then illustrated by means of a representative example.
- Sep 30 2015 cs.LO arXiv:1509.08559v1 A new weak bisimulation semantics is defined for Markov automata that, in addition to abstracting from internal actions, sums up the expected values of consecutive exponentially distributed delays possibly intertwined with internal actions. The resulting equivalence is shown to be a congruence with respect to parallel composition for Markov automata. Moreover, it turns out to be comparable with weak bisimilarity for timed labeled transition systems, thus constituting a step towards reconciling the semantics for stochastic time and deterministic time.
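The averaging of consecutive delays can be checked numerically. A quick Monte Carlo sketch (rates chosen arbitrarily) confirming that a sequence of two exponential delays has expected duration 1/λ1 + 1/λ2, the quantity preserved by the equivalence; note the summed delay itself is hypoexponential rather than exponential, and it is the expected value that is matched.

```python
import random

random.seed(0)
l1, l2 = 2.0, 3.0  # rates of two consecutive exponentially distributed delays
n = 200_000
total = sum(random.expovariate(l1) + random.expovariate(l2) for _ in range(n))
empirical_mean = total / n
expected_mean = 1 / l1 + 1 / l2  # the single summed-up delay's expected value
```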
- Aug 17 2015 cs.NI arXiv:1508.03583v1 We present a simple yet effective routing strategy, inspired by coverage control, which delays the onset of congestion on traffic networks by introducing a control parameter. The routing algorithm allows a trade-off between the congestion level and the distance to the destination. Numerical verification of the strategy is provided on a number of representative examples in SUMO, a well-known micro-agent simulator used for the analysis of traffic networks. We find that in many cases it is crucial to tune the control parameters to some optimal value in order to reduce congestion most effectively. We also relate different structural properties of the network to the level of congestion and to the optimal range of the control parameters.
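The trade-off can be sketched as a Dijkstra search over a blended edge cost (a hypothetical stand-alone illustration, not SUMO's or the paper's actual algorithm): a control parameter beta weighs current congestion against distance.

```python
import heapq

def route(graph, src, dst, congestion, beta):
    # graph: node -> list of (neighbor, distance); congestion: node -> load.
    # Cost of entering v: distance plus beta times v's congestion level.
    dist = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        c, u, path = heapq.heappop(heap)
        if u == dst:
            return path
        if c > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, d in graph.get(u, []):
            nc = c + d + beta * congestion.get(v, 0.0)
            if nc < dist.get(v, float("inf")):
                dist[v] = nc
                heapq.heappush(heap, (nc, v, path + [v]))
    return None

graph = {"A": [("B", 1.0), ("C", 1.5)], "B": [("D", 1.0)], "C": [("D", 1.0)]}
congestion = {"B": 5.0, "C": 0.0}
```

With beta = 0 the route follows pure distance through the congested node B; increasing beta diverts traffic through C at the price of a longer path.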
- Jul 28 2015 cs.SY arXiv:1507.07126v3 We study incremental stability and convergence of switched (bimodal) Filippov systems via contraction analysis. In particular, by using results on the regularization of switched dynamical systems, we derive sufficient conditions for any two trajectories of the Filippov system to converge to each other within some region of interest. We then apply these conditions to the study of different classes of Filippov systems, including piecewise-smooth (PWS) systems, piecewise-affine (PWA) systems, and relay feedback systems. We show that, contrary to previous approaches, our conditions allow the system to be studied in metrics other than the Euclidean norm. The theoretical results are illustrated by numerical simulations on a set of representative examples that confirm their effectiveness and ease of application.
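The role of non-Euclidean metrics can be illustrated with matrix measures (logarithmic norms), the standard contraction certificates: if μ(J) < 0 uniformly for the Jacobian J in some norm, trajectories converge in that norm. A small sketch with an arbitrary example Jacobian (not taken from the paper):

```python
import math

def mu_inf(A):
    # Matrix measure induced by the infinity-norm: max row of
    # diagonal entry plus absolute off-diagonal row sum.
    n = len(A)
    return max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

def mu_1(A):
    # Matrix measure induced by the 1-norm: same formula over columns.
    n = len(A)
    return max(A[j][j] + sum(abs(A[i][j]) for i in range(n) if i != j)
               for j in range(n))

def mu_2_2x2(A):
    # Euclidean matrix measure for a 2x2 matrix: largest eigenvalue of
    # the symmetric part (A + A^T) / 2, computed in closed form.
    a, d = A[0][0], A[1][1]
    b = (A[0][1] + A[1][0]) / 2
    return (a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b ** 2)

J = [[-3.0, 2.0], [1.0, -2.0]]
```

Here mu_inf(J) = -1 gives a strictly negative contraction certificate, while mu_1(J) = 0 is only marginal: different norms certify different contraction rates, which is why the freedom to choose the norm matters.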
- Jun 03 2015 cs.SY arXiv:1506.00880v2 In this paper, we propose a multiplex proportional-integral approach for solving consensus problems in networks of nodes with heterogeneous dynamics affected by constant disturbances. The proportional and integral actions are deployed on two different layers across the network, each with its own topology. Sufficient conditions for convergence are derived that depend upon the structure of the network, the parameters characterizing the control layers, and the node dynamics. The effectiveness of the theoretical results is illustrated using a power network model as a representative example.
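A minimal numerical sketch of the multiplex idea (toy gains, topologies, and disturbances of our choosing, with identical single-integrator nodes for simplicity, whereas the paper handles heterogeneous dynamics): the proportional layer acts on a complete graph, the integral layer on a ring, and the integral states absorb the constant disturbances so the nodes still reach consensus.

```python
def laplacian_times(edges, n, x):
    # Apply the graph Laplacian of the given edge list to vector x.
    y = [0.0] * n
    for i, j in edges:
        y[i] += x[i] - x[j]
        y[j] += x[j] - x[i]
    return y

def simulate(dt=0.01, steps=4000):
    n = 4
    prop = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph
    integ = [(0, 1), (1, 2), (2, 3), (3, 0)]                    # ring
    d = [1.0, -1.0, 0.5, -0.5]    # constant disturbances (zero-sum here)
    x = [1.0, 2.0, 3.0, 4.0]      # node states
    z = [0.0] * n                 # integral-layer states
    for _ in range(steps):
        lpx = laplacian_times(prop, n, x)
        lix = laplacian_times(integ, n, x)
        # x_dot = -L_P x - z + d ;  z_dot = L_I x
        x = [x[i] + dt * (-lpx[i] - z[i] + d[i]) for i in range(n)]
        z = [z[i] + dt * lix[i] for i in range(n)]
    return x

x = simulate()
# The integral layer rejects the constant disturbances: all states agree.
```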
- Feb 24 2014 cs.LO arXiv:1402.5365v3 Two of the most studied extensions of trace and testing equivalences to nondeterministic and probabilistic processes induce distinctions that have been questioned and lack properties that are desirable. Probabilistic trace-distribution equivalence differentiates systems that can perform the same set of traces with the same probabilities, and is not a congruence for parallel composition. Probabilistic testing equivalence, which relies only on extremal success probabilities, is backward compatible with testing equivalences for restricted classes of processes, such as fully nondeterministic processes or generative/reactive probabilistic processes, only if specific sets of tests are admitted. In this paper, new versions of probabilistic trace and testing equivalences are presented for the general class of nondeterministic and probabilistic processes. The new trace equivalence is coarser because it compares execution probabilities of single traces instead of entire trace distributions, and turns out to be compositional. The new testing equivalence requires matching all resolutions of nondeterminism on the basis of their success probabilities, rather than comparing only extremal success probabilities, and considers success probabilities in a trace-by-trace fashion, rather than cumulatively on entire resolutions. It is fully backward compatible with testing equivalences for restricted classes of processes; as a consequence, the trace-by-trace approach uniformly captures the standard probabilistic testing equivalences for generative and reactive probabilistic processes. The paper discusses the new equivalences in full detail and provides a simple spectrum that relates them to existing ones in the setting of nondeterministic and probabilistic processes.
- Jun 13 2013 cs.LO arXiv:1306.2696v1 We present a spectrum of trace-based, testing, and bisimulation equivalences for nondeterministic and probabilistic processes whose activities are all observable. For every equivalence under study, we examine the discriminating power of three variants stemming from three approaches that differ in the way probabilities of events are compared when nondeterministic choices are resolved via deterministic schedulers. We show that the first approach - which compares two resolutions relative to the probability distributions of all considered events - results in a fragment of the spectrum compatible with the spectrum of behavioral equivalences for fully probabilistic processes. In contrast, the second approach - which compares the probabilities of the events of a resolution with the probabilities of the same events in possibly different resolutions - gives rise to another fragment, composed of coarser equivalences, that exhibits several analogies with the spectrum of behavioral equivalences for fully nondeterministic processes. Finally, the third approach - which only compares the extremal probabilities of each event stemming from the different resolutions - yields even coarser equivalences that, however, give rise to a hierarchy similar to that stemming from the second approach.
- May 03 2013 cs.LO arXiv:1305.0538v3 In the paper "Relating Strong Behavioral Equivalences for Processes with Nondeterminism and Probabilities", to appear in TCS, we present a comparison of behavioral equivalences for nondeterministic and probabilistic processes. In particular, we consider strong trace, failure, testing, and bisimulation equivalences. For each of these groups of equivalences, we examine the discriminating power of three variants stemming from three approaches that differ in the way probabilities of events are compared when nondeterministic choices are resolved via deterministic schedulers. The established relationships are summarized in a so-called spectrum. However, the equivalences we consider in that paper are only a small subset of those considered in the original spectrum of equivalences for nondeterministic systems introduced by Rob van Glabbeek. In this companion paper we enlarge the spectrum by considering variants of trace equivalences (completed-trace equivalences), additional decorated-trace equivalences (failure-trace, readiness, and ready-trace equivalences), and variants of bisimulation equivalences (kernels of simulation, completed-simulation, failure-simulation, and ready-simulation preorders). Moreover, we study how the spectrum changes when randomized schedulers are used instead of deterministic ones.
- Jul 05 2012 cs.LO arXiv:1207.0874v1 We have recently defined a weak Markovian bisimulation equivalence in an integrated-time setting, which reduces sequences of exponentially timed internal actions to individual exponentially timed internal actions having the same average duration and execution probability as the corresponding sequences. This weak Markovian bisimulation equivalence is a congruence for sequential processes with abstraction and turns out to induce an exact CTMC-level aggregation at steady state for all the considered processes. However, it is not a congruence with respect to parallel composition. In this paper, we show how to generalize the equivalence so that a reasonable tradeoff among abstraction, compositionality, and exactness is achieved for concurrent processes. We will see that, by enhancing the abstraction capability in the presence of concurrent computations, it is possible to retrieve the congruence property with respect to parallel composition, with the resulting CTMC-level aggregation being exact at steady state only for a certain subset of the considered processes.
- Labeled transition systems are typically used to represent the behavior of nondeterministic processes, with labeled transitions defining a one-step state-to-state reachability relation. This model has recently been made more general by modifying the transition relation in such a way that it associates with any source state and transition label a reachability distribution, i.e., a function mapping each possible target state to a value of some domain that expresses the degree of one-step reachability of that target state. In this extended abstract, we show how the resulting model, called ULTraS from Uniform Labeled Transition System, can be naturally used to give semantics to a fully nondeterministic, a fully probabilistic, and a fully stochastic variant of a CSP-like process language.
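The uniform model lends itself to a direct data-structure sketch (our encoding, not the paper's formal definition): one transition map, with only the domain of "degrees of reachability" swapped out — booleans for nondeterminism, probabilities for probabilistic systems, rates for stochastic ones.

```python
# The same shape of transition structure, instantiated over three domains:
nondet     = {("s", "a"): {"t": True, "u": True}}   # plain reachability
probab     = {("s", "a"): {"t": 0.5, "u": 0.5}}     # distribution over targets
stochastic = {("s", "a"): {"t": 2.0, "u": 3.0}}     # exponential rates

def successors(ultras, state, label, zero):
    # Target states reached with a non-zero degree of one-step reachability;
    # `zero` is the domain's bottom element (False, 0.0, ...).
    return {t for t, v in ultras.get((state, label), {}).items() if v != zero}
```

The same query works uniformly across all three instantiations, which is the point of the uniform model.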
- Jun 09 2010 cs.LO arXiv:1006.1412v1 Several Markovian process calculi have been proposed in the literature, which differ from each other in various aspects. With regard to the action representation, we distinguish between integrated-time Markovian process calculi, in which every action has an exponentially distributed duration associated with it, and orthogonal-time Markovian process calculi, in which action execution is separated from time passing. As with deterministically timed process calculi, we show that these two options are not irreconcilable by exhibiting three mappings from an integrated-time Markovian process calculus to an orthogonal-time Markovian process calculus that preserve the behavioral equivalence of process terms under different interpretations of action execution: eagerness, laziness, and maximal progress. The mappings are limited to classes of process terms of the integrated-time Markovian process calculus with restrictions on parallel composition and do not involve the full capability of the orthogonal-time Markovian process calculus of expressing nondeterministic choices, thus elucidating the only two important differences between the two calculi: their synchronization disciplines and their ways of solving choices.
- Dec 18 2009 cs.MS arXiv:0912.3398v1 NetEvo is a computational framework designed to help understand the evolution of dynamical complex networks. It provides flexible tools for the simulation of dynamical processes on networks and methods for the evolution of underlying topological structures. The concept of a supervisor is used to bring together both these aspects in a coherent way. It is the job of the supervisor to rewire the network topology and alter model parameters such that a user specified performance measure is minimised. This performance measure can make use of current topological information and simulated dynamical output from the system. Such an abstraction provides a suitable basis in which to study many outstanding questions related to complex system design and evolution.
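The supervisor loop can be caricatured in a few lines (a toy hill-climber with a made-up performance measure; NetEvo's actual interface and measures differ): propose a random rewiring, keep it only if the performance measure does not increase.

```python
import random

def degree_variance(edges, n):
    # Toy performance measure: variance of the degree sequence
    # (zero for a regular graph).
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    mean = sum(deg) / n
    return sum((d - mean) ** 2 for d in deg) / n

def supervise(edges, n, steps=200, seed=1):
    # Supervisor loop: rewire one edge at a time, accepting only moves
    # that do not increase the performance measure (greedy descent).
    rng = random.Random(seed)
    edges = set(edges)
    for _ in range(steps):
        old = rng.choice(sorted(edges))
        i, j = rng.randrange(n), rng.randrange(n)
        new = (min(i, j), max(i, j))
        if i == j or new in edges:
            continue
        candidate = (edges - {old}) | {new}
        if degree_variance(candidate, n) <= degree_variance(edges, n):
            edges = candidate
    return edges

start = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]  # star: one hub, high variance
final = supervise(start, 6)
# The hub-heavy topology drifts toward a more regular one as the
# supervisor minimizes the measure.
```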
- In the theory of testing for Markovian processes developed so far, exponentially timed internal actions are not admitted within processes. When present, these actions cannot be abstracted away, because their execution takes a nonzero amount of time and hence can be observed. On the other hand, they must be carefully taken into account, in order not to equate processes that are distinguishable from a timing viewpoint. In this paper, we recast the definition of Markovian testing equivalence in the framework of a Markovian process calculus including exponentially timed internal actions. Then, we show that the resulting behavioral equivalence is a congruence, has a sound and complete axiomatization, has a modal logic characterization, and can be decided in polynomial time.
- Apr 01 2005 cs.NI arXiv:cs/0503090v1 This paper is concerned with the characterization of the relationship between topology and traffic dynamics. We use a model of network generation that allows the transition from random to scale-free networks. Specifically, we consider three different topological types of network: random, scale-free with \gamma = 3, and scale-free with \gamma = 2. By using a novel LRD traffic generator, we observe the best performance, in terms of transmission rates and delivered packets, in the case of random networks. We show that, even if scale-free networks are characterized by a shorter characteristic path length (the lower the exponent, the lower the path length), they perform worse in terms of communication. We conjecture this could be explained in terms of changes in the load distribution, defined here as the number of shortest paths going through a given vertex. In fact, that distribution is characterized by (i) a decreasing mean and (ii) an increasing standard deviation as the network becomes scale-free (especially for scale-free networks with low exponents). The use of a degree-independent server also discriminates against a scale-free structure. As a result, since the model is uncontrolled, most packets will go through the same vertices, favoring the onset of congestion.
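The load measure used above (the number of shortest paths passing through a vertex, i.e., unnormalized betweenness) can be computed by counting shortest paths with breadth-first search. A small self-contained sketch:

```python
from collections import deque

def bfs_counts(adj, s):
    # BFS from s: distance to each vertex and number of shortest paths to it.
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def load(adj, v):
    # Number of shortest paths between pairs (a, b), with a != v != b,
    # that pass through v: sigma_a(v) * sigma_b(v) whenever v lies on
    # a shortest a-b path, i.e. d(a, v) + d(v, b) == d(a, b).
    info = {s: bfs_counts(adj, s) for s in adj}
    nodes = list(adj)
    total = 0
    for a in nodes:
        da, sa = info[a]
        for b in nodes:
            if a < b and v not in (a, b) and b in da:
                db, sb = info[b]
                if v in da and v in db and da[v] + db[v] == da[b]:
                    total += sa[v] * sb[v]
    return total

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

On a star graph every leaf-to-leaf shortest path crosses the hub, so the hub carries all the load while the leaves carry none — mirroring how hubs in scale-free networks attract traffic and favor congestion.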