Nonparametric Statistics (eBook)
John Wiley & Sons (publisher)
978-1-118-84042-9 (ISBN)
'...a very useful resource for courses in nonparametric statistics in which the emphasis is on applications rather than on theory. It also deserves a place in libraries of all institutions where introductory statistics courses are taught.' -CHOICE
This Second Edition presents a practical and understandable approach that enhances and expands the statistical toolset for readers. This book includes:
- New coverage of the sign test and the Kolmogorov-Smirnov two-sample test in an effort to offer a logical and natural progression to statistical power
- SPSS® (Version 21) software and updated screen captures to demonstrate how to perform and recognize the steps in the various procedures
- Data sets, odd-numbered solutions, and tables of critical values provided in an appendix
- Supplementary material to aid in reader comprehension, which includes: narrated videos and screen animations with step-by-step instructions on how to follow the tests using SPSS; online decision trees to help users determine the needed type of statistical test; and additional solutions not found within the book.
Greg W. Corder is Adjunct Instructor in the Department of Physics and Astronomy at James Madison University. He is also Adjunct Instructor of graduate education at Mary Baldwin College.
Dale I. Foreman is Professor Emeritus in the School of Education and Human Development at Shenandoah University.
CHAPTER 1
Nonparametric Statistics: An Introduction
1.1 Objectives
In this chapter, you will learn the following items:
- The difference between parametric and nonparametric statistics.
- How to rank data.
- How to determine counts of observations.
1.2 Introduction
If you are using this book, it is possible that you have taken some type of introductory statistics class in the past. Most likely, your class began with a discussion about probability and later focused on particular methods of dealing with populations and samples. Correlations, z-scores, and t-tests were just some of the tools you might have used to describe populations and/or make inferences about a population using a simple random sample.
Many of the tests in a traditional, introductory statistics text are based on samples that follow certain assumptions called parameters. Such tests are called parametric tests. (A brief code sketch for checking some of these assumptions appears after the list below.) Specifically, parametric assumptions include samples that
- are randomly drawn from a normally distributed population,
- consist of independent observations, except for paired values,
- consist of values on an interval or ratio measurement scale,
- have respective populations of approximately equal variances,
- are adequately large,* and
- approximately resemble a normal distribution.
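The book demonstrates its procedures in SPSS; purely as an illustrative sketch, the following Python/SciPy snippet shows one way to check two of the assumptions above, approximate normality and roughly equal variances, on hypothetical data. The sample values and group names here are assumptions of the sketch, not material from the book.

```python
# Illustrative sketch only (the book itself uses SPSS): checking approximate
# normality and roughly equal variances for two hypothetical samples with SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)                 # hypothetical data
group_a = rng.normal(loc=80, scale=5, size=30)
group_b = rng.normal(loc=83, scale=5, size=30)

# Shapiro-Wilk test: a small p-value suggests a departure from normality.
for name, sample in (("group_a", group_a), ("group_b", group_b)):
    w, p = stats.shapiro(sample)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Levene's test: a small p-value suggests unequal population variances.
w, p = stats.levene(group_a, group_b)
print(f"Levene's test: W = {w:.3f}, p = {p:.3f}")
```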
If any of your samples breaks one of these rules, you violate the assumptions of a parametric test. You do have some options, however.
You might change the nature of your study so that your data meet the needed parameters. For instance, if you are using an ordinal or nominal measurement scale, you might redesign your study to use an interval or ratio scale. (See Box 1.1 for a description of measurement scales.) Also, you might seek additional participants to enlarge your sample sizes. Unfortunately, there are times when neither of these changes is appropriate or even possible.
Box 1.1 Measurement Scales.
We can measure and convey variables in several ways. Nominal data, also called categorical data, are represented by counting the number of times a particular event or condition occurs. For example, you might categorize the political alignment of a group of voters. Group members could either be labeled democratic, republican, independent, undecided, or other. No single person should fall into more than one category.
A dichotomous variable is a special classification of nominal data; it is simply a measure of two conditions. A dichotomous variable is either discrete or continuous. A discrete dichotomous variable has no particular order and might include such examples as gender (male vs. female) or a coin toss (heads vs. tails). A continuous dichotomous variable has some type of order to the two conditions and might include measurements such as pass/fail or young/old.
Ordinal scale data describe values that occur in some order of rank. However, distance between any two ordinal values holds no particular meaning. For example, imagine lining up a group of people according to height. It would be very unlikely that the individual heights would increase evenly. Another example of an ordinal scale is a Likert-type scale. This scale asks the respondent to make a judgment using a scale of three, five, or seven items. The range of such a scale might use a 1 to represent strongly disagree while a 5 might represent strongly agree. This type of scale can be considered an ordinal measurement since any two respondents will vary in their interpretation of scale values.
An interval scale is a measure in which the relative distances between any two sequential values are the same. To borrow an example from the physical sciences, consider the Celsius scale for measuring temperature. An increase from −8°C to −7°C is identical to an increase from 55°C to 56°C.
A ratio scale is slightly different from an interval scale. Unlike an interval scale, a ratio scale has an absolute zero value. In such a case, the zero value indicates a measurement limit or a complete absence of a particular condition. To borrow another example from the physical sciences, it would be appropriate to measure light intensity with a ratio scale. Total darkness is a complete absence of light and would receive a value of zero.
On a general note, we have presented a classification of measurement scales similar to those used in many introductory statistics texts. To the best of our knowledge, this hierarchy of scales was first made popular by Stevens (1946). While Stevens has received agreement (Stake, 1960; Townsend & Ashby, 1984) and criticism (Anderson, 1961; Gaito, 1980; Velleman & Wilkinson, 1993), we believe the scale classification we present suits the nature and organization of this book. We direct anyone seeking additional information on this subject to the preceding citations.
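One of the chapter objectives is learning how to rank data, and ranks are the currency of the ordinal scale described above. As a small illustrative sketch (not part of the book, which works these steps by hand and in SPSS), the following Python snippet ranks a short list of hypothetical scores, giving tied values the average of the ranks they would occupy.

```python
# Illustrative sketch: converting raw scores to ranks, with ties sharing
# the average of the ranks they span.
from scipy.stats import rankdata

scores = [55, 70, 70, 82, 91]        # hypothetical raw scores
ranks = rankdata(scores)             # default method="average" handles ties
print(ranks)                         # -> [1.  2.5 2.5 4.  5. ]
```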
If your samples do not resemble a normal distribution, you might have learned a strategy that modifies your data for use with a parametric test. First, if you can justify your reasons, you might remove extreme values, called outliers, from your samples. For example, imagine that you test a group of children and you wish to generalize the findings to typical children in a normal state of mind. After you collect the test results, most children earn scores around 80%, with some scoring above and below the average. Suppose, however, that one child scored a 5%. If you find that this child speaks no English because he arrived in your country just yesterday, it would be reasonable to exclude his score from your analysis. Unfortunately, outlier removal is rarely this straightforward and deserves a much more lengthy discussion than we offer here.* Second, you might utilize a parametric test by applying a mathematical transformation to the sample values. For example, you might square every value in a sample. However, some researchers argue that transformations are a form of data tampering or can distort the results. In addition, transformations do not always work; for instance, they may fail when data sets have particularly long tails. Third, there are more complicated methods for analyzing data that are beyond the scope of most introductory statistics texts. In such a case, you would be referred to a statistician.
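As a hedged illustration of the transformation idea (not an example from the book), the sketch below applies a square-root transform to a hypothetical right-skewed sample and re-runs a normality check; the squaring example mentioned in the text is another possible transform.

```python
# Illustrative sketch: transforming a skewed sample and re-checking normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
skewed = rng.exponential(scale=2.0, size=40)   # hypothetical right-skewed data

_, p_raw = stats.shapiro(skewed)               # normality check on raw values
_, p_sqrt = stats.shapiro(np.sqrt(skewed))     # check again after sqrt transform

print(f"Shapiro-Wilk p before transform: {p_raw:.4f}")
print(f"Shapiro-Wilk p after sqrt transform: {p_sqrt:.4f}")
```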
Fortunately, there is a family of statistical tests that do not demand all the parameters, or rules, that we listed earlier. They are called nonparametric tests, and this book will focus on several such tests.
1.3 The Nonparametric Statistical Procedures Presented in this Book
This book describes several popular nonparametric statistical procedures used in research today. Table 1.1 provides an overview of the tests presented in this book and their parametric counterparts; a brief code sketch for two of these procedures follows the table.
| Type of analysis | Nonparametric test | Parametric equivalent |
|---|---|---|
| Comparing two related samples | Wilcoxon signed ranks test and sign test | t-Test for dependent samples |
| Comparing two unrelated samples | Mann–Whitney U-test and Kolmogorov–Smirnov two-sample test | t-Test for independent samples |
| Comparing three or more related samples | Friedman test | Repeated measures, analysis of variance (ANOVA) |
| Comparing three or more unrelated samples | Kruskal–Wallis H-test | One-way ANOVA |
| Comparing categorical data | Chi-square (χ²) tests and Fisher exact test | None |
| Comparing two rank-ordered variables | Spearman rank-order correlation | Pearson product–moment correlation |
| Comparing two variables when one variable is discrete dichotomous | Point-biserial correlation | Pearson product–moment correlation |
| Comparing two variables when one variable is continuous dichotomous | Biserial correlation | Pearson product–moment correlation |
| Examining a sample for randomness | Runs test | None |
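The book performs these procedures in SPSS. Purely as a sketch of what two of the Table 1.1 entries look like in code, the snippet below runs a Mann-Whitney U-test and a Kruskal-Wallis H-test on small hypothetical samples using SciPy.

```python
# Illustrative sketch (the book uses SPSS): two procedures from Table 1.1 in SciPy.
from scipy import stats

sample_1 = [12, 15, 11, 18, 14, 16]   # hypothetical data
sample_2 = [22, 19, 25, 21, 24, 20]
sample_3 = [31, 28, 27, 30, 29, 33]

# Mann-Whitney U-test: comparing two unrelated samples.
u, p = stats.mannwhitneyu(sample_1, sample_2, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")

# Kruskal-Wallis H-test: comparing three or more unrelated samples.
h, p = stats.kruskal(sample_1, sample_2, sample_3)
print(f"Kruskal-Wallis H = {h:.3f}, p = {p:.4f}")
```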
When demonstrating each nonparametric procedure, we will use a particular step-by-step method.
1.3.1 State the Null and Research Hypotheses
First, we state the hypotheses for performing the test. The two types of hypotheses are null and alternate. The null hypothesis (HO) is a statement that indicates no difference exists between conditions, groups, or variables. The alternate hypothesis (HA), also called a research hypothesis, is the statement that predicts a difference or relationship between conditions, groups, or variables.
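As a small illustrative sketch (not from the book), the snippet below writes the two hypotheses as comments and lets the p-value from a Wilcoxon signed ranks test, one of the related-samples procedures in Table 1.1, drive the decision; the data and the 0.05 alpha level are hypothetical.

```python
# Illustrative sketch: stating H_O and H_A, then deciding with a p-value.
from scipy import stats

# H_O: there is no difference between pretest and posttest scores.
# H_A: there is a difference between pretest and posttest scores (two-tailed).
pretest  = [62, 70, 55, 68, 71, 59, 64, 66]   # hypothetical paired scores
posttest = [68, 74, 60, 71, 70, 65, 69, 72]

stat, p = stats.wilcoxon(pretest, posttest)   # two-sided by default
alpha = 0.05                                  # hypothetical significance level
if p < alpha:
    print(f"p = {p:.4f} < {alpha}: reject H_O in favor of H_A")
else:
    print(f"p = {p:.4f} >= {alpha}: fail to reject H_O")
```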
The alternate hypothesis may be directional or nondirectional, depending on the context of the research. A directional, or one-tailed, hypothesis predicts a statistically significant change in a particular direction. For example, a treatment that predicts an improvement would be directional. A nondirectional, or two-tailed, hypothesis predicts a statistically significant change, but in no particular direction. For example, a researcher may compare two new conditions and predict a difference between them. However, he or she would not...
| Publication date (per publisher) | 14.4.2014 |
|---|---|
| Language | English |
| Subject area | Mathematics / Computer Science ► Mathematics ► Statistics |
| Mathematics / Computer Science ► Mathematics ► Probability / Combinatorics | |
| Technology | |
| Keywords | accessible • Applications • Applied Probability & Statistics • Approach • Choice • context-based • Continues • Courses • Edition • emphasis • First • manner • Nonparametric • Nonparametric Analysis • nonparametric methods • nonparametric statistics • present • presents • Psychological Methods, Research & Statistics • Psychology • resource • Statistical • Statistics • step-by-step • step-by-step fashion • theory • useful |
| ISBN-10 | 1-118-84042-9 / 1118840429 |
| ISBN-13 | 978-1-118-84042-9 / 9781118840429 |
Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. The eBook is authorized to your personal Adobe ID when it is downloaded, and it can then only be read on devices that are also registered to that Adobe ID.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and nonfiction. The text reflows dynamically to fit the display and font size, which also makes EPUB a good fit for mobile reading devices.
System requirements:
PC/Mac: You can read this eBook on a PC or Mac.
eReader: This eBook can be read on (almost) all eBook readers; however, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook.