Probability, Random Variables, Statistics, and Random Processes (eBook)
John Wiley & Sons (publisher)
978-1-119-30083-0 (ISBN)
Probability, Random Variables, Statistics, and Random Processes: Fundamentals & Applications is a comprehensive undergraduate-level textbook. With excellent topical coverage, it focuses on the basic principles and practical applications of the fundamental concepts that are extensively used in various Engineering disciplines as well as in a variety of programs in Life and Social Sciences. The text provides students with the building blocks of knowledge they require to understand and progress in their areas of interest. Written in a simple, clear-cut style, the book's hallmarks are its intuitive explanations, insightful examples, and practical applications.
The text consists of twelve chapters divided into four parts. Part I, Probability (Chapters 1–3), lays a solid groundwork for probability theory and introduces applications in counting, gambling, reliability, and security. Part II, Random Variables (Chapters 4–7), discusses multiple random variables in detail, along with a multitude of frequently encountered probability distributions. Part III, Statistics (Chapters 8–10), highlights estimation and hypothesis testing. Part IV, Random Processes (Chapters 11–12), delves into the characterization and processing of random processes. Other notable features include:
- Most of the text assumes no knowledge of subject matter beyond first-year calculus and linear algebra
- Its independent chapter structure and rich choice of topics support a variety of syllabi for courses at the junior, senior, and graduate levels
- A supplemental website includes solutions to about 250 practice problems, lecture slides, and figures and tables from the text
Given its engaging tone, grounded approach, methodically paced flow, thorough coverage, and flexible structure, Probability, Random Variables, Statistics, and Random Processes: Fundamentals & Applications clearly serves as a must-have textbook for courses not only in Electrical Engineering, but also in Computer Engineering, Software Engineering, and Computer Science.
Ali Grami is a founding faculty member at the University of Ontario Institute of Technology (UOIT), Canada. He holds B.Sc., M.Eng., and Ph.D. degrees in Electrical Engineering from the University of Manitoba, McGill University, and the University of Toronto, respectively. Before joining academia, he was with the high-tech industry for many years, where he was the principal designer of the first North American broadband access satellite system. He has taught at the University of Ottawa and Concordia University. At UOIT, he has also led the development of programs toward bachelor's, master's, and doctoral degrees in Electrical and Computer Engineering.
1 Basic Concepts of Probability Theory
Randomness and uncertainty, which always go hand in hand, exist in virtually every aspect of life. To this effect, almost everyone has a basic understanding of the term probability through intuition or experience. The study of probability stems from the analysis of certain games of chance. Probability is the measure of chance that an event will occur, and as such finds applications in disciplines that involve uncertainty. Probability theory is extensively used in a host of areas in science, engineering, medicine, and business, to name just a few. As claimed by Pierre‐Simon Laplace, a prominent French scholar, probability theory is nothing but common sense reduced to calculation. The basic concepts of probability theory are discussed in this chapter.
1.1 Statistical Regularity and Relative Frequency
An experiment is a measurement procedure or observation process. The outcome is the end result of an experiment, where if one outcome occurs, then no other outcome can occur at the same time. An event is a single outcome or a collection of outcomes of an experiment.
If the outcome of an experiment is certain, that is the outcome is always the same, it is then a deterministic experiment. In other words, a deterministic experiment always produces the same output from a given starting condition or initial state. The measurement of the temperature in a certain location at a given time using a thermometer is an example of a deterministic experiment.
In a random experiment, the outcome may unpredictably vary when the experiment is repeated, as the conditions under which it is performed cannot be predetermined with sufficient accuracy. In studying probability, we are concerned with experiments, real or conceptual, and their random outcomes. In a random experiment, the outcome is not uniquely determined by the causes and cannot be known in advance, because it is subject to chance. In a lottery game, as an example, the random experiment is the drawing, and the outcomes are the lottery number sequences. In such a game, the outcomes are uncertain, not because of any inaccuracy in how the experiment may be performed, but because of how it has been designed to produce uncertain results. As another example, in a dice game, such as craps or backgammon, the random experiment is rolling a pair of dice and the outcomes are positive integers in the range of one to six inclusive. In such a game, the outcomes are uncertain both because it has been designed to produce uncertain results and because there is some inaccuracy and inconsistency in how the experiment may be carried out, i.e. how a pair of dice may be rolled.
In random experiments, the making of each measurement or observation, i.e. each repetition of the experiment, is called a trial. In independent trials, the observable conditions are the same and the outcome of one trial has no bearing on the others. In other words, the outcome of a trial is independent of the outcomes of the preceding and subsequent trials. For instance, it is reasonable to assume that in coin tossing and dice rolling repeated trials are independent. The conditions under which a random experiment is performed influence the probabilities of the outcomes of the experiment. To account for uncertainties in a random experiment, a probabilistic model is required. A probability model, as a simplified approximation to an actual random experiment, is detailed enough to include all major aspects of the random phenomenon. Probability models are generally based on the fact that averages obtained in long sequences of independent trials of random experiments almost always converge to the same value. This property, known as statistical regularity, is an experimentally verifiable phenomenon in many cases.
The ratio of the number of times a particular event occurs to the number of times the trial has been repeated is defined as the relative frequency of the event. The limit that the relative frequency approaches, because of statistical regularity, as the number of times the experiment is repeated approaches infinity is called the relative-frequency definition of probability. Note that this limit, which is based on an a posteriori approach, can never truly be reached, as the number of times a physical experiment can be repeated may be very large but is always finite.
Figure 1.1 shows a sequence of trials in a coin-tossing experiment, where the coin is fair (unbiased) and Nh represents the number of heads in a sequence of N independent trials. The relative frequency Nh/N fluctuates widely when the number of independent trials is small, but eventually settles down near 1/2 as the number of independent trials increases. If an ideal (fair) coin is tossed infinitely many times, the probability of heads is then the limiting value of the relative frequency Nh/N, namely 1/2.
Figure 1.1 Relative frequency in a tossing of a fair coin.
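As an illustrative sketch of this statistical regularity (not part of the text; the function name and the use of Python's random module are choices made here for illustration), the snippet below simulates independent tosses of a fair coin and tracks the running relative frequency Nh/N, which settles near 1/2 for large N.

```python
import random

def relative_frequency_of_heads(num_trials, seed=0):
    """Simulate independent tosses of a fair coin and return the running
    relative frequency of heads after each trial."""
    rng = random.Random(seed)
    heads = 0
    running = []
    for n in range(1, num_trials + 1):
        heads += rng.random() < 0.5  # one fair-coin trial: True counts as a head
        running.append(heads / n)
    return running

freqs = relative_frequency_of_heads(10_000)
# Early values fluctuate widely; later values settle near 0.5 (statistical regularity).
print(freqs[9], freqs[99], freqs[9_999])
```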
If outcomes are equally likely, then the probability of an event is equal to the number of outcomes that the event can have divided by the total number of possible outcomes in a random experiment. This probability, known as the classical definition of probability, is determined a priori, without actually carrying out the random experiment. For instance, if an ideal (fair) die is rolled, the probability of getting a 4 is then 1/6.
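The same a priori counting argument can be sketched in a few lines of Python; the two-dice event used here (the faces summing to 7) is just an illustrative choice, not an example from the text.

```python
from itertools import product

# Sample space for rolling a pair of fair dice: all 36 equally likely ordered pairs.
sample_space = list(product(range(1, 7), repeat=2))

# Event of interest: the two faces sum to 7.
event = [outcome for outcome in sample_space if sum(outcome) == 7]

# Classical (a priori) probability: favorable outcomes over total outcomes.
p = len(event) / len(sample_space)
print(p)  # 6/36, i.e. 1/6
```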
1.2 Set Theory and Its Applications to Probability
In probability models associated with random experiments, simple events can be combined using set operations to obtain more complicated events. We thus briefly review set theory, as it is the mathematical basis for probability.
A set is a collection of objects or things, which are called elements or members. As shorthand, a set is generally represented by the symbol { }. It is customary to use capital letters to denote sets and lowercase letters to refer to set elements. If x is an element of the set A, we use the notation x ∈ A, and if x does not belong to the set A, we write x ∉ A. It is essential to define a set clearly, either by the set roster method, that is, listing all its elements between curly brackets, such as {3, 6, 9}, or by set-builder notation, that is, describing a property held by all members of the set and only by them, such as {x | x is a positive integer less than 10 that is a multiple of 3}. Noting that the order in which the elements of a set are listed is immaterial, the number of distinct elements in a set A is called the cardinality of A, written ∣A∣. The cardinality of a set may be finite or infinite.
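For readers who like to experiment, the roster method, set-builder notation, membership, and cardinality map directly onto Python's built-in set type; the small example below is an illustrative sketch, not material from the text.

```python
# Roster method: list the elements explicitly.
A = {3, 6, 9}

# Set-builder analogue: positive integers less than 10 that are multiples of 3.
B = {x for x in range(1, 10) if x % 3 == 0}

print(A == B)      # True: both notations describe the same set
print(3 in A)      # membership test, x ∈ A
print(5 not in A)  # x ∉ A
print(len(A))      # cardinality |A| = 3
```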
We use Venn diagrams, as shown in Figure 1.2, to pictorially illustrate an important collection of widely known sets and their logical relationships through geometric intuition.
Figure 1.2 Sets and their relationships through geometric intuition: (a) universal set; (b) subset; (c) equal sets; (d) union of sets; (e) intersection of sets; (f) complement of a set; (g) difference of sets; and (h) mutually exclusive sets.
The universal set U, also represented by the symbol Ω, is defined to include all possible elements in a given setting. The universal set U is usually represented pictorially as the set of all points within a rectangle, as shown in Figure 1.2a. The set B is a subset of the set A if every member of B is also a member of A. We use the symbol ⊂ to denote a subset; B ⊂ A thus means that B is a subset of A, as shown in Figure 1.2b. Every set is thus a subset of the universal set. The empty set or null set, denoted by ∅, is defined as the set with no elements. The empty set is thus a subset of every set.
The set of all subsets of a set A, which also includes the empty set ∅ and the set A itself, is called the power set of A. Given two sets A and B, the Cartesian product of A and B, denoted by A × B and read as A cross B, is the set of all ordered pairs (a, b), where a ∈ A and b ∈ B. The number of ordered pairs in the Cartesian product of A and B is equal to the product of the number of elements in the set A and the number of elements in the set B. In general, we have A × B ≠ B × A, unless A = B.
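The subset relation, the power set, and the Cartesian product can likewise be sketched with Python's standard itertools module; the small sets A and B below are illustrative choices, and the last line checks that |A × B| = |A| · |B|.

```python
from itertools import chain, combinations, product

A = {1, 2}
B = {'a', 'b', 'c'}

# Subset and empty-set relationships.
print({1}.issubset(A))    # {1} ⊂ A
print(set().issubset(A))  # the empty set is a subset of every set

# Power set of A: all subsets of A, including ∅ and A itself (2**len(A) of them).
power_set = [set(s) for s in chain.from_iterable(
    combinations(sorted(A), r) for r in range(len(A) + 1))]
print(power_set)          # [set(), {1}, {2}, {1, 2}]

# Cartesian product A × B: all ordered pairs (a, b) with a ∈ A, b ∈ B.
cartesian = list(product(sorted(A), sorted(B)))
print(len(cartesian) == len(A) * len(B))  # |A × B| = |A| · |B|
```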
The sets...
| Publication date (per publisher) | 4 March 2019 |
|---|---|
| Language | English |
| Subject area | Mathematics / Computer Science ► Mathematics ► Statistics |
| | Mathematics / Computer Science ► Mathematics ► Probability / Combinatorics |
| | Technology ► Electrical Engineering / Energy Technology |
| Keywords | Applied Probability & Statistics • A priori probability • Autocorrelation and Autocovariance Functions • Axioms of Probability • Bar • Boxplot • Coefficient of Variation • Computer Engineering • Computer Science • Conditional Distribution • Conditional Probability • Continuous and Discrete Processes • Cross-correlation and Cross-covariance Functions • Cumulative Distribution Function • Definitions of Probability • Descriptive Statistics • Distributions of Time Samples • Electrical & Electronics Engineering • Electrical Engineering • Expected Value • Gaussian Processes • Histogram • Information Technology • Joint Probability • Law of Total Probability • Life Sciences • Marginal Probability • Mean • Median • Mode • Moment • Mutually-Exclusive Events • Numerical Methods & Algorithms • Pareto and Pie Charts • Percentiles • Population Parameter • Probability • Probability & Mathematical Statistics • Probability Density Function • Probability Mass Function • Quantiles • Quartiles • Random Processes • Random Variables • Sample Functions • Sample Statistic • Set Theory • Software Engineering • Statistically-Independent Events • Statistics • Variance |
| ISBN-10 | 1-119-30083-5 / 1119300835 |
| ISBN-13 | 978-1-119-30083-0 / 9781119300830 |
Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. The eBook is authorized to your personal Adobe ID when it is downloaded, and you can then read it only on devices that are also registered to your Adobe ID.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and general non-fiction. The text reflows dynamically to match the display and font size, which also makes EPUB a good fit for mobile reading devices.