Quantitative Assessments of Distributed Systems (eBook)
400 pages
Wiley-Scrivener (publisher)
978-1-119-13113-7 (ISBN)
Distributed systems employed in critical infrastructures must fulfill dependability, timeliness, and performance specifications. Since these systems most often operate in an unpredictable environment, their design and maintenance require quantitative evaluation of deterministic and probabilistic timed models. This need gave birth to an abundant literature devoted to formal modeling languages combined with analytical and simulative solution techniques.
The aim of the book is to provide an overview of techniques and methodologies dealing with such specific issues in the context of distributed systems, covering aspects such as performance evaluation, reliability/availability, energy efficiency, scalability, and sustainability. Specifically, the book discusses techniques for checking and verifying if and how a distributed system satisfies its requirements, for properly evaluating non-functional aspects, and for optimizing the overall behavior of the system. The scope has been selected to provide thorough coverage of issues, models, and techniques relating to the validation, evaluation, and optimization of distributed systems. The key objective of this book is to help bridge the gap between modeling theory and practice in distributed systems through specific examples.
Dario Bruneo received his degree in Computer Engineering from the Engineering Faculty of the University of Palermo (Italy) in 2000 and his PhD in Advanced Technologies for Information Engineering from the University of Messina (Italy) in 2005. Since then he has been engaged in research on distributed systems. He is currently an associate researcher at the Engineering Faculty of the University of Messina. His research activity has focused on the study of distributed systems, with particular regard to the management of advanced service provisioning and to system modeling and performance evaluation. He has investigated a range of research fields, from Quality of Service management to distributed programming, and from ad-hoc and sensor networks to performance analysis through analytical and simulative techniques. He is coauthor of more than 40 scientific papers in international journals and conference proceedings. Salvatore Distefano is an assistant professor at the Politecnico di Milano. His research interests include performance evaluation, parallel and distributed computing, software engineering, and reliability techniques. During his research activity he has contributed to the development of several tools, such as WebSPN, ArgoPerformance, and GS3. He has been involved in several national and international research projects. He is author and coauthor of more than 80 scientific papers.
CHAPTER 1
MODELING AND VERIFICATION OF DISTRIBUTED SYSTEMS USING MARKOV DECISION PROCESSES
MARCO BECCUTI1, GIULIANA FRANCESCHINIS2 AND JEREMY SPROSTON1
1Dipartimento di Informatica, Università di Torino, Italy.
{beccuti,sproston}@di.unito.it
2DiSIT, Istituto di Informatica, Università del Piemonte Orientale, Italy.
giuliana.franceschinis@di.unipmn.it
Abstract.
The Markov Decision Process (MDP) formalism is a well-established mathematical framework for studying systems with unknown scheduling mechanisms or with transitions whose next-state probability distributions are not known with precision. Analysis methods for MDPs are generally based on the identification of the strategies that maximize (or minimize) a target function based on the MDP’s rewards (or costs). Alternatively, formal languages can be defined to express quantitative properties that we want to be ensured by an MDP, including those which extend classical temporal logics with probabilistic operators.
The MDP formalism is low level: to facilitate the representation of complex real-life distributed systems higher-level languages have been proposed. In this chapter we consider Markov Decision Well-formed Nets (MDWN), which are probabilistic extensions of Petri nets that allow one to describe complex nondeterministic (probabilistic) behavior as a composition of simpler nondeterministic (probabilistic) steps, and which inherit the efficient analysis algorithms originally devised for well-formed Petri nets. The features of the formalism and the type of properties that can be studied are illustrated by an example of a peer-to-peer illegal botnet.
Keywords. Markov decision processes, modeling and verification.
1.1 Introduction
The mathematical formalism of Markov Decision Processes (MDPs) was introduced in the 1950s by Bellman and Howard [17, 7] in the context of operations research and dynamic programming, and has been used in a wide area of disciplines including economics, manufacturing, robotics, automated control and communication systems. An MDP can be regarded as a Markov chain extended with nondeterministic choice over actions, and is typically equipped with rewards (or costs) associated with transitions from state to state.
A key notion for MDPs is that of strategy, which defines the choice of action to be taken after any possible time step of the MDP. Analysis methods for MDPs are based on the identification of the strategies which maximize (or minimize) a target function either based on the MDP’s rewards (or costs), or based on properties satisfied by the MDP’s execution paths. For example, in a distributed system, there may be different recovery and preventive maintenance policies (modeled by different actions in the MDP); we can model the system using an MDP in order to identify the optimal strategy with respect to reliability, e.g., the optimal recovery and preventive maintenance policy that maximizes system availability. Reward-based performance indices rely on standard methods for MDPs, whereas path-based properties rely on probabilistic model checking methods [8, 3].
It is important to observe that the formalism of MDPs is low level, and it could be difficult to represent directly at this level a complex real-life distributed system. To cope with this problem, a number of higher-level formalisms have been proposed in the literature (e.g., stochastic transition systems [13], dynamic decision networks [14], probabilistic extensions of reactive modules [1], Markov decision Petri nets and Markov decision well-formed nets [5], etc.).
In this chapter we introduce the MDP formalism in the context of distributed systems and discuss how to express and compute (quantitative) properties which should be ensured by an MDP model (Sec. 1.2). Markov decision well-formed nets (MDWNs) are presented highlighting how they can be a good choice to model multi-component distributed systems (Sec. 1.3) such as an illegal botnet example. Standard MDP analysis and probabilistic model checking techniques are used to compute a number of performance indices on the illegal botnet example (Sec. 1.4).
An application example: peer-to-peer botnet. The application example presented in this chapter is inspired by the peer-to-peer illegal botnet model presented in [23]. Illegal botnets are networks of compromised machines under the remote control of an attacker, who is able to use the computing power of these machines for various malicious purposes (e.g., e-mail spam, distributed denial-of-service attacks, spyware, scareware, etc.). Typically, infection begins by exploiting web browser vulnerabilities or by using specific malware (a Trojan horse) to install malicious code on a target machine. The injected malicious code then begins its bootstrap process and attempts to join the botnet. Once a machine is connected to the botnet it is called a bot, and can be used for a malicious purpose (we say that it becomes a working bot) or specifically to infect new machines (it becomes a propagation bot). This choice is a crucial aspect of the success of the malicious activity, meaning that the trade-off between the number of working bots and the number of propagation bots should be carefully investigated. To reduce the probability of being detected, the working and propagation bots are inactive most of the time. A machine can only be recovered if anti-malware software discovers the infection, or if the computer is physically disconnected from the network.
Our MDP model is similar to that of [23], apart from the fact that we let the choice between the types of malicious activity, working or propagating, be nondeterministic, rather than a fixed probabilistic choice. In this way, we represent all possible choices of assignment of activity to an infected machine, including dynamic strategies that adapt their behavior to the current global state of the botnet. We consider performance indices such as the average number of working or propagation bots at time t, and the probability that the number of working machines exceeds some threshold within time t. The performance indices obtained from our model are often significantly different from those obtained from a purely probabilistic version of the model in which the choices of activity of a newly infected machine have equal probability.
1.2 Markov Decision Processes
Since the aim of this chapter is to describe how dynamic distributed systems can be effectively studied using MDPs, in this section we introduce the MDP formalism, while in the next section we consider a higher-level formalism for the description of systems which is based on MDPs (more precisely, MDPs provide the underlying semantics of the high-level formalism).
An MDP comprises a set of states, which for the purposes of this chapter we can consider as being finite, together with a description of the possible transitions among the states. In MDPs the choice as to which transition to take from a state s is made according to two phases: the first phase comprises a nondeterministic choice among a number of actions available in the state s, whereas the second phase comprises a probabilistic choice between the possible target states of the transition. The probability distribution used to choose the next state of the model in the second phase is determined by the choice of action made in the first phase.
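The two-phase choice described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the chapter: the transition function p, the state names, and the action names are all hypothetical, and the nondeterministic phase is delegated to a caller-supplied strategy function.

```python
import random

# A toy transition function p, represented as a dict:
# p[(state, action)] = {next_state: probability}.
# States ("ok", "fail") and actions ("run", "pause", "repair") are
# illustrative only.
p = {
    ("ok", "run"):      {"ok": 0.9, "fail": 0.1},
    ("ok", "pause"):    {"ok": 1.0},
    ("fail", "repair"): {"ok": 0.7, "fail": 0.3},
}

def enabled_actions(state):
    """A_s: the actions a for which p(s, a) is defined."""
    return [a for (s, a) in p if s == state]

def step(state, choose_action):
    """One MDP transition: a nondeterministic phase (delegated to the
    strategy 'choose_action') followed by a probabilistic phase."""
    action = choose_action(state, enabled_actions(state))  # phase 1
    dist = p[(state, action)]
    targets, probs = zip(*dist.items())
    return random.choices(targets, weights=probs)[0]       # phase 2

# Example: a simple strategy that always picks the first enabled action.
next_state = step("ok", lambda s, acts: acts[0])
print(next_state in {"ok", "fail"})  # True
```

A strategy is just a rule resolving the nondeterministic phase; here it is modeled as a function from the current state and its enabled actions to one chosen action.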
The possibility to combine both nondeterministic and probabilistic choice in MDPs is useful in a number of different contexts. In the context of the formal modeling of systems, nondeterministic choice can be used to represent such factors as interleaving between concurrent processes, unknown implementation details, and (automatic or manual) abstraction.
In the following, we use a set of atomic propositions denoted by AP, which will be used to label the states of an MDP. For example, states corresponding to a system error will be labeled with a certain atomic proposition to distinguish them from non-error states.
A discrete probability distribution over a finite set Q is a function μ : Q → [0, 1] such that ∑q∈Q μ(q) = 1. We use Dist(Q) to denote the set of distributions over Q.
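The defining condition is easy to check mechanically. The following sketch (illustrative, not from the chapter) represents an element of Dist(Q) as a dict from Q to probabilities and verifies both requirements of the definition:

```python
import math

# An element of Dist(Q): a dict mapping each q in Q to mu(q).
# Valid iff every value lies in [0, 1] and the values sum to 1.
def is_distribution(mu):
    return (all(0.0 <= v <= 1.0 for v in mu.values())
            and math.isclose(sum(mu.values()), 1.0))

print(is_distribution({"a": 0.25, "b": 0.75}))  # True
print(is_distribution({"a": 0.5, "b": 0.4}))    # False (sums to 0.9)
```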
We now formally present the definition of MDPs. An MDP M = (S, A, p, r, l) comprises: 1) a set S of states; 2) a set A of actions; 3) a partial transition function p : S × A → Dist(S); 4) a partial reward function r : S × A → ℚ; 5) a labeling function l : S → 2^AP.
The transition function, when defined for a state s ∈ S and action a ∈ A, maps to a distribution p(s, a) over states. For each state s ∈ S, we let A_s denote the set of actions a ∈ A for which p(s, a) is defined. We assume that A_s is non-empty for each state s ∈ S. Intuitively, when in the state s, the choice of action a is made nondeterministically from A_s, and subsequently the next state will be s′ with probability p(s, a)(s′) (see Fig. 1.1). The partial reward function is defined for all states s ∈ S and actions a ∈ A_s, and maps to a rational value r(s, a). The labeling function associates with each state s ∈ S a set of atomic propositions l(s), which represents a set of observable events that are satisfied in s. The labeling function...
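To connect the tuple (S, A, p, r, l) with the reward-maximizing strategies mentioned in the introduction, the following is a minimal sketch of finite-horizon value iteration on a toy MDP. Everything concrete here (the states, actions, probabilities, and reward values) is an illustrative assumption, not an example from the chapter; the code implements the standard Bellman backup over the components defined above.

```python
# Toy MDP: two states and the actions enabled in each.
S = ["up", "down"]
p = {  # p(s, a): distribution over next states
    ("up", "run"):      {"up": 0.8, "down": 0.2},
    ("up", "maintain"): {"up": 1.0},
    ("down", "repair"): {"up": 0.6, "down": 0.4},
}
r = {  # r(s, a): reward for taking action a in state s
    ("up", "run"): 10.0,
    ("up", "maintain"): 2.0,
    ("down", "repair"): -5.0,
}

def value_iteration(horizon):
    """Return V(s), the maximal expected total reward over 'horizon'
    steps, together with the optimal action chosen in each state."""
    V = {s: 0.0 for s in S}
    best = {}
    for _ in range(horizon):
        newV = {}
        for s in S:
            actions = [a for (s2, a) in p if s2 == s]  # A_s
            # Bellman backup: max over a in A_s of the immediate reward
            # plus the expected value of the successor state.
            q = {a: r[(s, a)] + sum(pr * V[t] for t, pr in p[(s, a)].items())
                 for a in actions}
            best[s] = max(q, key=q.get)
            newV[s] = q[best[s]]
        V = newV
    return V, best

V, strategy = value_iteration(horizon=3)
print(strategy["up"])  # run
```

In this toy instance the strategy always prefers "run" in state "up", since its Bellman value dominates "maintain" at every horizon; path-based properties of the kind mentioned in Sec. 1.1 would instead be computed with probabilistic model checking tools rather than this plain reward recursion.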
| Publication date (per publisher) | 8.4.2015 |
|---|---|
| Series | Performability Engineering Series |
| Language | English |
| Subject area | Mathematics / Computer Science ► Computer Science ► Networks |
| | Technology ► Electrical Engineering / Energy Technology |
| Keywords | Analysis • basic predicates • conclusion appendices • Decision Processes • Distributed • Electrical & Electronics Engineering • Industrial Engineering • Industrial Engineering / Quality Control • Markov • markov decision wellformed • processes • Quality & Reliability • Quantitative • References • stoklaim • Syntax • Systems • Systems Engineering & Management • Tutorial • verification • wellformed |
| ISBN-10 | 1-119-13113-8 / 1119131138 |
| ISBN-13 | 978-1-119-13113-7 / 9781119131137 |
Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. The eBook is authorized to your personal Adobe ID at download time and can then be read only on devices registered to that Adobe ID.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks, particularly well suited to fiction and non-fiction. The text reflows dynamically to match the display and font size, which also makes EPUB a good fit for mobile reading devices.