
Probability and Statistics with Reliability, Queuing, and Computer Science Applications (eBook)

eBook Download: EPUB
2016 | 2nd edition
John Wiley & Sons (publisher)
978-1-119-31420-2 (ISBN)

Reading and media samples

Probability and Statistics with Reliability, Queuing, and Computer Science Applications - Kishor S. Trivedi
€113.99 incl. VAT
(CHF 109.95)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately
An accessible introduction to probability, stochastic processes, and statistics for computer science and engineering applications
Second edition now also available in Paperback. This updated and revised edition of the popular classic first edition relates fundamental concepts in probability and statistics to the computer sciences and engineering. The author uses Markov chains and other statistical tools to illustrate processes in reliability of computer systems and networks, fault tolerance, and performance.
This edition features an entirely new section on stochastic Petri nets, as well as new sections on system availability modeling, wireless system modeling, numerical solution techniques for Markov chains, and software reliability modeling, among other subjects. Extensive revisions take new developments in solution techniques and applications into account and bring this work fully up to date. It includes more than 200 worked examples and self-study exercises for each section.
Probability and Statistics with Reliability, Queuing and Computer Science Applications, Second Edition offers a comprehensive introduction to probability, stochastic processes, and statistics for students of computer science, electrical and computer engineering, and applied mathematics. Its wealth of practical examples and up-to-date information makes it an excellent resource for practitioners as well.

An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.



Kishor S. Trivedi, PhD, is the Hudson Professor of Electrical and Computer Engineering at Duke University, Durham, North Carolina. His research interests are in reliability and performance assessment of computer and communication systems. Dr. Trivedi has published extensively in these fields, with more than 600 articles and three books to his name. Dr. Trivedi is a Fellow of the IEEE and a Golden Core Member of the IEEE Computer Society.



"The book offers a comprehensive introduction to probability, stochastic processes, and statistics for students of computer science, electrical and computer engineering, and applied mathematics. Its wealth of practical examples and up-to-date information makes it an excellent resource for practitioners as well." (Zentralblatt MATH, 2016)

"I highly recommend this book for academics for use as a textbook and for researchers and professionals in the field as a useful reference." (Interfaces, September/ October 2004)

"This introduction...uses Markov chains and other statistical tools to illustrate process in reliability of computer systems, fault tolerance, and performance." (SciTech Book News, Vol. 26, No. 2, June 2002)

"...an excellent self-contained book.... I recommend the book to beginners and veterans in the field..." (Computer Journal, Vol.45, No.6, 2002)

"This book is a tour de force of clear, virtually error-free exposition of probability as it is applied in a host of up-to-date contexts.... It will richly reward the...reader.... Read this book cover to cover. It's worth the effort." (Technometrics, Vol. 45, No. 1, February 2003)

Chapter 1
Introduction


1.1 Motivation


Computer scientists and engineers need powerful techniques to analyze algorithms and computer systems. Similarly, networking engineers need methods to analyze the behavior of protocols, routing algorithms, and congestion in networks. Computer systems and networks are subject to failure, and hence methods for evaluating their reliability and availability are needed. Many of the tools necessary for these analyses have their foundations in probability theory. For example, in the analysis of algorithm execution times, it is common to draw a distinction between the worst-case and the average-case behavior of an algorithm. The distinction is based on the fact that for certain problems, while an algorithm may require an inordinately long time to solve the least favorable instance of the problem, the average solution time is considerably shorter. When many instances of a problem have to be solved, the probabilistic (or average-case) analysis of the algorithm is likely to be more useful. Such an analysis accounts for the fact that the performance of an algorithm depends on the distributions of input data items. Of course, we have to specify the relevant probability distributions before the analysis can be carried out. Thus, for instance, while analyzing a sorting algorithm, a common assumption is that every permutation of the input sequence is equally likely to occur.
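To make the worst-case/average-case contrast concrete, here is a minimal Python sketch (not from the book; the naive pivot rule and the parameters are illustrative) that estimates the average number of comparisons a simple quicksort makes when every permutation of the input is equally likely, and compares it with an unfavorable already-sorted input:

    import random

    def quicksort_comparisons(a):
        """Number of comparisons a naive quicksort (first element
        as pivot) performs on list a."""
        if len(a) <= 1:
            return 0
        pivot = a[0]
        left = [x for x in a[1:] if x < pivot]
        right = [x for x in a[1:] if x >= pivot]
        # len(a) - 1 comparisons are needed to partition around the pivot
        return (len(a) - 1) + quicksort_comparisons(left) + quicksort_comparisons(right)

    random.seed(42)
    n, trials = 200, 1000

    # Average case: every permutation of the input equally likely
    avg = sum(quicksort_comparisons(random.sample(range(n), n))
              for _ in range(trials)) / trials

    # Least favorable input for this pivot rule: already sorted
    worst = quicksort_comparisons(list(range(n)))

    print(f"random permutations, average: {avg:.0f} comparisons")  # roughly 2n ln n
    print(f"already sorted input:         {worst} comparisons")    # n(n-1)/2

Under the equally-likely-permutations assumption the average grows roughly as 2n ln n, whereas the unfavorable input costs n(n − 1)/2 comparisons; this gap is exactly what the probabilistic analysis captures.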

Similarly, if the storage is dynamically allocated, a probabilistic analysis of the storage requirement is more appropriate than a worst-case analysis. In a like fashion, a worst-case analysis of the accumulation of roundoff errors in a numerical algorithm tends to be rather pessimistic; a probabilistic analysis, although harder, is more useful.

When we consider the analysis of a Web server serving a large number of users, several types of random phenomena need to be accounted for. First, the arrival pattern of requests is subject to randomness due to a large population of diverse users. Second, the resource requirements of requests will likely fluctuate from request to request as well as during the execution of a single request. Finally, the resources of the Web server are subject to random failures due to environmental conditions and aging phenomena. The theory of stochastic (random) processes is very useful in evaluating various measures of system effectiveness such as throughput, response time, reliability, and availability.
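As a hedged illustration of how such measures can be estimated (the rates lam and mu below are assumed values, and the model is the textbook single-server queue with Poisson arrivals and exponential service times), the following sketch simulates request response times; for this M/M/1 model the theoretical mean response time is 1/(mu − lam):

    import random

    random.seed(1)
    lam, mu = 0.8, 1.0          # assumed arrival and service rates (lam < mu)
    n_requests = 200_000

    t = 0.0                     # arrival clock
    server_free_at = 0.0        # time at which the server next becomes idle
    total_response = 0.0

    for _ in range(n_requests):
        t += random.expovariate(lam)             # next Poisson arrival
        start = max(t, server_free_at)           # wait if the server is busy
        finish = start + random.expovariate(mu)  # exponential service time
        server_free_at = finish
        total_response += finish - t             # response = waiting + service

    print(f"simulated mean response time: {total_response / n_requests:.3f}")
    print(f"M/M/1 theory, 1/(mu - lam):   {1 / (mu - lam):.3f}")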

Before an algorithm (or protocol) or a system can be analyzed, various probability distributions have to be specified. Where do the distributions come from? We may collect data during the actual operation of the system (or the algorithm). These measurements can be performed by hardware monitors, software monitors, or both. Such data must be analyzed and compressed to obtain the necessary distributions that drive the analytical models discussed above. Mathematical statistics provides us with techniques for this purpose, such as the design of experiments, hypothesis testing, estimation, analysis of variance, and linear and nonlinear regression.

1.2 Probability Models


Probability theory is concerned with the study of random (or chance) phenomena. Such phenomena are characterized by the fact that their future behavior is not predictable in a deterministic fashion. Nevertheless, such phenomena are usually capable of mathematical descriptions due to certain statistical regularities. This can be accomplished by constructing an idealized probabilistic model of the real-world situation. Such a model consists of a list of all possible outcomes and an assignment of their respective probabilities. The theory of probability then allows us to predict or deduce patterns of future outcomes.

Since a model is an abstraction of the real-world problem, predictions based on the model must be validated against actual measurements collected from the real phenomena. A poor validation may suggest modifications to the original model. The theory of statistics facilitates the process of validation. Statistics is concerned with the inductive process of drawing inferences about the model and its parameters based on the limited information contained in real data.

The role of probability theory is to analyze the behavior of a system or an algorithm assuming the given probability assignments and distributions. The results of this analysis are only as good as the underlying assumptions. Statistics helps us in choosing these probability assignments and in the process of validating model assumptions. The behavior of the system (or the algorithm) is observed, and an attempt is made to draw inferences about the underlying unknown distributions of random variables that describe system activity. Methods of statistics, in turn, make heavy use of probability theory.

Consider the problem of predicting the number of request arrivals to a Web server in a fixed time interval (0, t]. A common model of this situation is to assume that the number of arrivals in this period has a particular distribution, such as the Poisson distribution (see Chapter 2). Thus we have replaced a complex physical situation by a simple model with a single unknown parameter, namely, the average arrival rate λ. With the help of probability theory we can then deduce the pattern of future arrivals. On the other hand, statistical techniques help us estimate the unknown parameter λ based on actual observations of past arrival patterns. Statistical techniques also allow us to test the validity of the Poisson model.
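Both directions can be sketched in a few lines of Python (illustrative only; the arrival rate is an assumed value): statistics estimates λ from observed interval counts via the sample mean, and probability theory then predicts the pattern of future counts from the fitted Poisson model:

    import random
    from math import exp, factorial

    random.seed(7)
    true_rate = 3.2   # assumed (and, in practice, unknown) arrivals per interval

    def poisson_sample(lam):
        """One Poisson(lam) draw: count exponential inter-arrival
        times that fit into a unit-length interval."""
        t, k = 0.0, 0
        while True:
            t += random.expovariate(lam)
            if t > 1.0:
                return k
            k += 1

    # Statistics: estimate lambda from observed counts (sample mean)
    counts = [poisson_sample(true_rate) for _ in range(10_000)]
    lam_hat = sum(counts) / len(counts)
    print(f"estimated arrival rate: {lam_hat:.3f} (true value: {true_rate})")

    # Probability: with lam_hat fixed, predict P(k arrivals in an interval)
    for k in range(5):
        p = exp(-lam_hat) * lam_hat ** k / factorial(k)
        print(f"P(N = {k}) = {p:.3f}")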

As another example, consider a fault-tolerant computer system with automatic error recovery capability. Model this situation as follows. The probability of successful recovery is c and the probability of an abortive error is 1 − c. The uncertainty of the physical situation is once again reduced to a simple probability model with a single unknown parameter c. In order to estimate the parameter c in this model, we observe N errors, out of which n are successfully recovered. A reasonable estimate of c is the relative frequency n/N, since we expect this ratio to converge to c in the limit N → ∞. Note that this limit is a limit in a probabilistic sense:

P(|n/N − c| > ε) → 0 as N → ∞, for every ε > 0.
Axiomatic approaches to probability allow us to define such limits in a mathematically consistent fashion (e.g., see the law of large numbers in Chapter 4) and hence allow us to use relative frequencies as estimates of probabilities.
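A small simulation (with an assumed recovery probability c = 0.9) shows the relative frequency n/N settling toward c as the number of observed errors N grows, exactly the behavior the law of large numbers guarantees:

    import random

    random.seed(0)
    c = 0.9          # assumed true probability of successful recovery
    n = 0            # successful recoveries observed so far

    for N in range(1, 100_001):
        if random.random() < c:   # one error event; recovery succeeds w.p. c
            n += 1
        if N in (10, 100, 1_000, 10_000, 100_000):
            print(f"N = {N:>6}: n/N = {n / N:.4f}")   # drifts toward c = 0.9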

1.3 Sample Space


Probability theory is rooted in the real-life situation where a person performs an experiment the outcome of which may not be certain. Such an experiment is called a random experiment. Thus, an experiment may consist of the simple process of noting whether a component is functioning properly or has failed; it may consist of determining the execution time of a program; or it may consist of determining the response time of a server request. The results of any such observations, whether they are simple “yes” or “no” answers, meter readings, or whatever, are called outcomes of the experiment.

Definition Sample Space


The totality of the possible outcomes of a random experiment is called the sample space of the experiment and it will be denoted by the letter S.

We point out that the sample space is not determined completely by the experiment; it is partially determined by the purpose for which the experiment is carried out. If the status of two components is observed, for some purposes it is sufficient to consider only three possible outcomes: “two functioning,” “two malfunctioning,” and “one functioning, one malfunctioning.” These three outcomes constitute the sample space S. On the other hand, we might be interested in exactly which of the components has failed, if any. In this case the sample space S must be taken to consist of four possible outcomes, where the earlier single outcome “one failed, one functioning” is split into two outcomes: “first failed, second functioning” and “first functioning, second failed.” Many other sample spaces can be defined if we take into account such things as the type of failure, and so on.

Frequently, we use a larger sample space than is strictly necessary because it is easier to use; specifically, it is always easier to discard excess information than to recover lost information. For instance, in the preceding illustration, the first sample space might be denoted S1 = {0, 1, 2} (where each number indicates how many components are functioning) and the second sample space might be denoted S2 = {(0, 0), (0, 1), (1, 0), (1, 1)} (where 0 = failed, 1 = functioning). Given a selection from S2, we can always add the two components to determine the corresponding choice from S1; but, given a choice from S1 (in particular 1), we cannot necessarily recover the corresponding choice from S2.
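A short Python sketch (illustrative, not from the book) makes the asymmetry explicit: every outcome in S2 determines an outcome in S1 by adding its coordinates, but an outcome of S1 need not determine an outcome of S2:

    from itertools import product

    # S2: the status of each component (0 = failed, 1 = functioning)
    S2 = list(product([0, 1], repeat=2))           # [(0,0), (0,1), (1,0), (1,1)]

    # S1: only how many components are functioning
    S1 = sorted({sum(outcome) for outcome in S2})  # [0, 1, 2]

    # Easy direction: an S2 outcome determines the S1 outcome
    for outcome in S2:
        print(outcome, "->", sum(outcome))

    # The reverse is ambiguous: the S1 outcome 1 could be (0, 1) or (1, 0)
    print("preimages of 1:", [o for o in S2 if sum(o) == 1])

    # In general, n components give 2**n outcomes in the finer space
    n = 5
    print(n, "components:", len(list(product([0, 1], repeat=n))), "outcomes")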

It is useful to think of the outcomes of an experiment, the elements of the sample space, as points in a space of one or more dimensions. For example, if an experiment consists of examining the state of a single component, it may be functioning properly (denoted by the number 1), or it may have failed (denoted by the number 0). The sample space is one-dimensional, as shown in Figure 1.1. If a system consists of two components, there are four possible outcomes, as shown in the two-dimensional sample space of Figure 1.2. Here each coordinate is 1 or 0 depending on whether the corresponding component is functioning properly or has failed. In general, if a system has n components, there are 2ⁿ possible outcomes, each of which can be...

Publication date (per publisher): 30.6.2016
Language: English
Subject areas: Mathematics / Computer Science · Computer Science · Theory / Studies
Mathematics / Computer Science · Mathematics · Statistics
Mathematics / Computer Science · Mathematics · Probability / Combinatorics
Engineering · Electrical Engineering / Power Engineering
Keywords: Analysis of Variance • Applied Probability & Statistics • Conditional Expectation • Conditional Probability • Conditional Distribution • Continuous Random Variables • Continuous-Time Markov Chains • Discrete Random Variables • Discrete-Time Markov Chains • Electrical & Electronics Engineering • Engineering Statistics • Networks of Queues • Probability • Probability Models • Quality & Reliability • Regression • Statistical Inference • Statistics • Stochastic Processes
ISBN-10 1-119-31420-8 / 1119314208
ISBN-13 978-1-119-31420-2 / 9781119314202
EPUB (Adobe DRM)

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. At download time the eBook is authorized to your personal Adobe ID; you can then read it only on devices that are also registered to that Adobe ID.
Details on Adobe DRM

File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The flowing text adapts dynamically to the display and font size, which also makes EPUB a good fit for mobile reading devices.

System requirements:
PC/Mac: You can read this eBook on a PC or a Mac. You need an Adobe ID and the free Adobe Digital Editions software. We advise against using the OverDrive Media Console, as it is known to cause frequent problems with Adobe DRM.
eReader: This eBook can be read on (almost) all eBook readers; it is not, however, compatible with the Amazon Kindle.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.
Device list and additional notes

Buying eBooks from abroad
For tax reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
