
An Introduction to Probability and Statistics (eBook)

eBook Download: EPUB
2015 | 3rd edition
John Wiley & Sons (publisher)
978-1-118-79965-9 (ISBN)


An Introduction to Probability and Statistics - Vijay K. Rohatgi, A. K. Md. Ehsanes Saleh

A well-balanced introduction to probability theory and mathematical statistics

Featuring updated material, An Introduction to Probability and Statistics, Third Edition remains a solid overview of probability theory and mathematical statistics. Divided into three parts, the Third Edition begins by presenting the fundamentals and foundations of probability. The second part addresses statistical inference, and the remaining chapters focus on special topics.

An Introduction to Probability and Statistics, Third Edition includes:

  • A new section on regression analysis to include multiple regression, logistic regression, and Poisson regression
  • A reorganized chapter on large sample theory to emphasize the growing role of asymptotic statistics
  • Additional topical coverage on bootstrapping, estimation procedures, and resampling
  • Discussions on invariance, ancillary statistics, conjugate prior distributions, and invariant confidence intervals
  • Over 550 problems and answers to most problems, as well as 350 worked out examples and 200 remarks
  • Numerous figures to further illustrate examples and proofs throughout

An Introduction to Probability and Statistics, Third Edition is an ideal reference and resource for scientists and engineers in the fields of statistics, mathematics, physics, industrial management, and engineering. The book is also an excellent text for upper-undergraduate and graduate-level students majoring in probability and statistics.



Vijay K. Rohatgi, PhD, is Professor Emeritus in the Department of Mathematics and Statistics at Bowling Green State University.  An Investment Research Consultant for PRI Investments, he is also the author of several books and over 100 research papers. 

A. K. Md. Ehsanes Saleh, PhD, is Distinguished Research Professor in the School of Mathematics and Statistics at Carleton University. Dr. Saleh is the author of more than 200 journal articles, and his research interests include nonparametric statistics, order statistics, and robust estimation. 




"The book is an ideal reference and resource for scientists and engineers in the fields of statistics, mathematics, physics, industrial management, and engineering. The book is also an excellent text for upper-undergraduate and graduate-level students majoring in probability and statistics." (Zentralblatt MATH, 2016)

1
PROBABILITY


1.1 INTRODUCTION


The theory of probability had its origin in gambling and games of chance. It owes much to the curiosity of gamblers who pestered their friends in the mathematical world with all sorts of questions. Unfortunately this association with gambling contributed to a very slow and sporadic growth of probability theory as a mathematical discipline. The mathematicians of the day took little or no interest in the development of any theory but looked only at the combinatorial reasoning involved in each problem.

The first attempt at some mathematical rigor is credited to Laplace. In his monumental work, Theorie analytique des probabilités (1812), Laplace gave the classical definition of the probability of an event that can occur only in a finite number of ways as the proportion of the number of favorable outcomes to the total number of all possible outcomes, provided that all the outcomes are equally likely. According to this definition, the computation of the probability of events was reduced to combinatorial counting problems. Even in those days, this definition was found inadequate. In addition to being circular and restrictive, it did not answer the question of what probability is; it only gave a practical method of computing the probabilities of some simple events.
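Laplace's classical definition reduces such probabilities to counting favorable outcomes against all possible outcomes. As a minimal sketch (not from the book; the function name is illustrative), the ratio can be computed exactly with rational arithmetic:

```python
from fractions import Fraction

def classical_probability(favorable, outcomes):
    """Laplace's classical definition: P(A) = |A| / |Omega|,
    assuming all outcomes in Omega are equally likely."""
    favorable = set(favorable) & set(outcomes)
    return Fraction(len(favorable), len(set(outcomes)))

# Fair six-sided die: probability of rolling an even number is 3/6 = 1/2.
omega = range(1, 7)
even = {2, 4, 6}
p_even = classical_probability(even, omega)  # Fraction(1, 2)
```

Note that the equal-likelihood proviso is doing all the work here, which is exactly the circularity the text points out: "equally likely" is itself a probabilistic notion.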

An extension of the classical definition of Laplace was used to evaluate the probabilities of sets of events with infinite outcomes. The notion of equal likelihood of certain events played a key role in this development. According to this extension, if Ω is some region with a well-defined measure (length, area, volume, etc.), the probability that a point chosen at random lies in a subregion A of Ω is the ratio measure(A)/measure(Ω). Many problems of geometric probability were solved using this extension. The trouble is that one can define “at random” in any way one pleases, and different definitions therefore lead to different answers. Joseph Bertrand, for example, in his book Calcul des probabilités (Paris, 1889) cited a number of problems in geometric probability where the result depended on the method of solution. In Example 9 we will discuss the famous Bertrand paradox and show that in reality there is nothing paradoxical about Bertrand’s paradoxes; once we define “probability spaces” carefully, the paradox is resolved. Nevertheless difficulties encountered in the field of geometric probability have been largely responsible for the slow growth of probability theory and its tardy acceptance by mathematicians as a mathematical discipline.
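The ratio measure(A)/measure(Ω) can be illustrated with a short Monte Carlo sketch, assuming (as one of several possible choices) that "at random" means uniform over the unit square. The function name and sample size below are illustrative, not from the text:

```python
import random

def geometric_probability_mc(in_region, n=200_000, seed=0):
    """Estimate P(point in A) = area(A)/area(Omega) for Omega = the unit
    square, taking 'at random' to mean uniform over the square."""
    rng = random.Random(seed)
    hits = sum(in_region(rng.random(), rng.random()) for _ in range(n))
    return hits / n

# A = quarter disk x^2 + y^2 <= 1 inside the unit square;
# the exact area ratio is (pi/4) / 1 = pi/4 ~ 0.7854.
estimate = geometric_probability_mc(lambda x, y: x * x + y * y <= 1.0)
```

A different uniformity assumption (say, uniform in polar coordinates) would give a different answer for the same region, which is precisely Bertrand's point about ill-specified "at random".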

The mathematical theory of probability, as we know it today, is of comparatively recent origin. It was A. N. Kolmogorov who axiomatized probability in his fundamental work, Foundations of the Theory of Probability (Berlin), in 1933. According to this development, random events are represented by sets and probability is just a normed measure defined on these sets. This measure-theoretic development not only provided a logically consistent foundation for probability theory but also, at the same time, joined it to the mainstream of modern mathematics.

In this book we follow Kolmogorov’s axiomatic development. In Section 1.2 we introduce the notion of a sample space. In Section 1.3 we state Kolmogorov’s axioms of probability and study some simple consequences of these axioms. Section 1.4 is devoted to the computation of probability on finite sample spaces. Section 1.5 deals with conditional probability and Bayes’s rule while Section 1.6 examines the independence of events.

1.2 SAMPLE SPACE


In most branches of knowledge, experiments are a way of life. In probability and statistics, too, we concern ourselves with special types of experiments. Consider the following examples.

Example 1.


A coin is tossed. Assuming that the coin does not land on the side, there are two possible outcomes of the experiment: heads and tails. On any performance of this experiment one does not know what the outcome will be. The coin can be tossed as many times as desired.

Example 2.


A roulette wheel is a circular disk divided into 38 equal sectors numbered from 0 to 36 and 00. A ball is rolled on the edge of the wheel, and the wheel is rolled in the opposite direction. One bets on any of the 38 numbers or some combinations of them. One can also bet on a color, red or black. If the ball lands in the sector numbered 32, say, anybody who bet on 32 or combinations including 32 wins, and so on. In this experiment, all possible outcomes are known in advance, namely 00, 0, 1, 2,…,36, but on any performance of the experiment there is uncertainty as to what the outcome will be, provided, of course, that the wheel is not rigged in any manner. Clearly, the wheel can be rolled any number of times.

Example 3.


A manufacturer produces footrules. The experiment consists in measuring the length of a footrule produced by the manufacturer as accurately as possible. Because of errors in the production process one does not know what the true length of the footrule selected will be. It is clear, however, that the length will be, say, between 11 and 13 in., or, if one wants to be safe, between 6 and 18 in.

Example 4.


The length of life of a light bulb produced by a certain manufacturer is recorded. In this case one does not know what the length of life will be for the light bulb selected, but clearly one is aware in advance that it will be some number between 0 and ∞ hours.

The experiments described above have certain common features. For each experiment, we know in advance all possible outcomes, that is, there are no surprises in store after the performance of any experiment. On any performance of the experiment, however, we do not know what the specific outcome will be, that is, there is uncertainty about the outcome on any performance of the experiment. Moreover, the experiment can be repeated under identical conditions. These features describe a random (or a statistical) experiment.

Definition 1.


A random (or a statistical) experiment is an experiment in which

  1. all outcomes of the experiment are known in advance,
  2. any performance of the experiment results in an outcome that is not known in advance, and
  3. the experiment can be repeated under identical conditions.

In probability theory we study this uncertainty of a random experiment. It is convenient to associate with each such experiment a set Ω, the set of all possible outcomes of the experiment. To engage in any meaningful discussion about the experiment, we associate with Ω a σ-field 𝒮 of subsets of Ω. We recall that a σ-field 𝒮 is a nonempty class of subsets of Ω that is closed under the formation of countable unions and complements and contains the null set Φ.
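For a finite Ω, the σ-field axioms recalled above can be checked mechanically: for a finite set, closure under countable unions reduces to closure under pairwise unions. A sketch (the helper name and the example families are illustrative, not from the text):

```python
from itertools import combinations

def is_sigma_field(omega, family):
    """Check the sigma-field axioms on a finite Omega: the family is
    nonempty, contains the empty set, and is closed under complements
    and pairwise unions (which suffices for finite Omega)."""
    family = {frozenset(s) for s in family}
    if not family or frozenset() not in family:
        return False
    omega = frozenset(omega)
    if any(omega - a not in family for a in family):
        return False
    return all(a | b in family for a in family for b in family)

omega = {1, 2, 3, 4}
# The power set of Omega: always a sigma-field.
power_set = {frozenset(s) for r in range(5) for s in combinations(omega, r)}
# The trivial sigma-field: just the empty set and Omega itself.
trivial = {frozenset(), frozenset(omega)}
# Not a sigma-field: the complement of {1}, namely {2, 3, 4}, is missing.
not_closed = {frozenset(), frozenset({1}), frozenset(omega)}
```

The power set and the trivial family pass the check; the third family fails on complements, showing why not every collection of subsets qualifies as an 𝒮.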

Definition 2.


The sample space of a statistical experiment is a pair (Ω, 𝒮), where

  1. Ω is the set of all possible outcomes of the experiment and
  2. 𝒮 is a σ-field of subsets of Ω.

The elements of Ω are called sample points. Any set A ∈ 𝒮 is known as an event. Clearly A is a collection of sample points. We say that an event A happens if the outcome of the experiment corresponds to a point in A. Each one-point set is known as a simple or an elementary event. If the set Ω contains only a finite number of points, we say that (Ω, 𝒮) is a finite sample space. If Ω contains at most a countable number of points, we call (Ω, 𝒮) a discrete sample space. If, however, Ω contains uncountably many points, we say that (Ω, 𝒮) is an uncountable sample space. In particular, if Ω = ℝ_k or some rectangle in ℝ_k, we call it a continuous sample space.
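The vocabulary of the definition (sample points, events, an event "happening") can be made concrete for a single roll of a die, taking 𝒮 to be the class of all subsets of Ω as the text allows for countable Ω. The names below are illustrative, not from the text:

```python
# Finite sample space for one roll of a fair die; with Omega countable,
# the class of all subsets of Omega serves as the sigma-field.
omega = {1, 2, 3, 4, 5, 6}

even = {2, 4, 6}     # the event "an even number shows"
elementary = {5}     # a simple (elementary) event: a one-point set

def happens(event, outcome):
    """An event A happens iff the outcome is a sample point in A."""
    return outcome in event

outcome = 4  # one particular performance of the experiment
```

On this performance the event `even` happens while the elementary event `{5}` does not; every event is a subset of Ω.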

Remark 1. The choice of 𝒮 is an important one, and some remarks are in order. If Ω contains at most a countable number of points, we can always take 𝒮 to be the class of all subsets of Ω. This is certainly a σ-field. Each one-point set is a member of 𝒮 and is the fundamental object of interest. Every subset of Ω is an event. If Ω has uncountably many points, the class of all subsets of Ω is still a σ-field, but it is much too large a class of sets to be of interest. It may not be possible to choose the class of all subsets of Ω as 𝒮. One of the most important examples of an uncountable sample space is the case in which Ω = ℝ or Ω is an interval in ℝ. In this case we would like all one-point subsets of Ω and all intervals (closed, open, or semiclosed) to be events. We use our knowledge of analysis to specify 𝒮. We will not go into details here except to recall that the class of all semiclosed intervals (a, b] generates a class ℬ₁ which is a σ-field on ℝ. This class contains all one-point sets and all intervals (finite or infinite). We take 𝒮 = ℬ₁. Since we will be dealing mostly with the one-dimensional case, we will write ℬ instead of ℬ₁. There are many subsets of ℝ that are not in ℬ₁, but we will not demonstrate this fact here. We refer the reader to Halmos [42], Royden [96], or Kolmogorov and Fomin [54] for further details.

Example 5.


Let us toss a...

Published (per publisher): September 1, 2015
Series: Wiley Series in Probability and Statistics
Language: English
ISBN-10: 1-118-79965-8 / 1118799658
ISBN-13: 978-1-118-79965-9 / 9781118799659