Program Evaluation in Practice (eBook)
224 pages
Jossey-Bass (publisher)
978-1-118-45020-8 (ISBN)
This updated edition of Program Evaluation in Practice covers the core concepts of program evaluation and uses case studies to touch on real-world issues that arise when conducting an evaluation project. This important resource is filled with illustrative examples written in accessible terms and provides a wide variety of evaluation projects that can be used for discussion, analysis, and reflection. The book addresses foundations and theories of evaluation, tools and methods for collecting data, writing of reports, and the sharing of findings. The discussion questions and class activities at the end of each chapter are designed to help process the information in that chapter and to integrate the information from the other chapters, thus facilitating the learning process. As useful for students as it is for evaluators in training, Program Evaluation in Practice is a must-have text for those aspiring to be effective evaluators.
- Includes expanded discussion of basic theories and approaches to program evaluation
- Features a new chapter on objective-based evaluation and a new section on ethics in program evaluation
- Provides more detailed information and in-depth description for each case, including evaluation approaches, fresh references, new readings, and the new Joint Committee Standards for Evaluation
Dean T. Spaulding is a professional evaluator and also serves on the faculty at the College of Saint Rose in Albany, New York in the Department of Educational Psychology, where he teaches educational research methodology and program evaluation. He is a coauthor of Methods in Educational Research: From Theory to Practice from Jossey-Bass.
Chapter 2
Ethics in Program Evaluation and an Overview of Evaluation Approaches
Learning Objectives
After reading this chapter you should be able to
- Understand ethical dilemmas faced by evaluators
- Understand the Joint Committee on Standards for Educational Evaluation’s standards and how evaluators may use them in the profession
- Understand the key similarities and differences among the various evaluation approaches
- Understand the key benefits and challenges of the different evaluation approaches
Ethics in Program Evaluation
When conducting an evaluation, a program evaluator may face not only methodological challenges (for example, what data collection instrument to use) but ethical challenges as well. Ethics in program evaluation refers to ensuring that the actions of the program evaluator are in no way causing harm or potential harm to program participants, vested stakeholders, or the greater community.
In some cases, evaluators may find themselves in an ethical dilemma because of the report they have created. For example, an evaluator might be tempted to suppress negative findings from a program evaluation for fear of angering the client and losing the evaluation contract. In other cases, evaluators may find themselves in a dilemma not because of their report, per se, but because of how others use it. How should an evaluator move forward if he or she knows that a report supports one stakeholder group over another and will no doubt spark conflict? For example, a school superintendent who finds an after-school program too expensive might use the evaluation report to support canceling the program even though parents and students find the program beneficial. Evaluators who face such ethical challenges can turn to the Joint Committee on Standards for Educational Evaluation for guidance (Newman & Brown, 1996).
Established in 1975, the Joint Committee was created to develop a set of standards to ensure the highest quality of program evaluation in educational settings. The Joint Committee is made up of several contributing organizations, one of which is the American Evaluation Association (AEA). Although the AEA, which sends delegates to Joint Committee meetings, has not officially adopted the standards, the organization does recognize the standards and support the work of the committee. The standards are broken down into five main areas: utility, feasibility, propriety, accuracy, and evaluation accountability.
Utility standards. The purpose of these standards is to increase the likelihood that stakeholders will find both the process and the product associated with the evaluation to be valuable. These standards include, for example, making sure the evaluation focuses on the needs of all stakeholders involved in the program, making sure the evaluation addresses the different values and perspectives of all stakeholders, and making sure that the evaluation is not misused.
Feasibility standards. The purpose of these standards is to ensure that the evaluation is conducted using appropriate project management techniques and uses resources appropriately.
Propriety standards. These standards are designed to support what is fair, legal, and right in program evaluation. These standards include, for example, ensuring that the human rights and safety of program participants are upheld and maintained throughout the evaluation process; that reports provide a comprehensive evaluation that includes a summary of goals, data collection methods, findings, and recommendations; and that evaluations are conducted for the good of the stakeholders and the community.
Accuracy standards. The purpose of these standards is to ensure that evaluations are dependable and truthful in their data collection and findings. These standards include making sure that the evaluation report is both reliable and valid, and that data collection tools and methodologies were sound and rigorous in nature.
Evaluation accountability standards. These standards call for both the rigorous documentation of evaluations and the use of internal and external meta-evaluations to improve the ongoing processes and products associated with evaluation.
A complete list of the standards can be found at www.eval.org/evaluationdocuments/progeval.xhtml.
What Is an Evaluation Approach?
As noted in Chapter One, program evaluation is the process of systematically collecting data to determine if a set of objectives has been met. This process is done to determine a program’s worth or merit (see Figure 2.1).
Figure 2.1 Determining a Program’s Worth or Merit
evaluation approach The model that an evaluator uses to undertake an evaluation
The evaluation approach is the process by which the evaluator goes about collecting data. Two evaluators working to evaluate the same program not only may use different methods for collecting data but also may have very different perspectives on the overall purpose or role of the evaluation. Although many beginning evaluators may believe that simply changing the type of data being collected (for example, from quantitative to qualitative) is changing the approach, in reality an evaluation approach is based on more than simply data collection techniques. Changing an approach to program evaluation entails a shift not only in philosophy but also in the “reason for being” or purpose of the evaluation.
In this chapter you will read about some of the main approaches used in program evaluation today (see Figure 2.2 for an overview). When considering these approaches, think about the criteria used to evaluate a program and who will ultimately judge the program using the criteria.
Figure 2.2 Overview of Evaluation Approaches
Objectives-Based Approach
Just as there are many applied research approaches, there are several different approaches to program evaluation. The most common approach used by program evaluators is the objectives-based approach, which involves objectives written by both the creators of the program and the evaluator. An evaluation objective is a written statement that depicts an overarching purpose of the evaluation and clearly states the types of information that will be collected. Often these objectives are further supported through the use of benchmarks. A benchmark is more detailed than an objective in that it specifically states what quantitative goals the participants in the program need to reach for the program to be successful. Box 2.1 presents an evaluation objective followed by a benchmark.
objectives-based approach An evaluation model whereby the evaluator focuses on a series of preestablished objectives, and then collects only the necessary data to measure whether the program met those objectives
benchmarks Specific outcomes that define the success or worth of a program
Box 2.1. Example of an Evaluation Objective and Benchmark
Evaluation objective: To document middle school students’ changes in academic achievement, particularly in the area of reading and literacy skills.
Benchmark: Students in fifth through eighth grade will show a 10 percent gain on the English language arts (ELA) state assessment in year one, and there will be a 20 percent increase in students passing the ELA in program years two and three.
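As a hypothetical illustration (not from the book), the arithmetic behind a benchmark like the one in Box 2.1 can be made concrete. The function names and all numbers below are invented for the sketch; a real evaluation would use the program's actual baseline and assessment data.

```python
# Hypothetical sketch: checking a quantitative benchmark against made-up data.
# All figures here are illustrative, not real assessment results.

def percent_change(baseline: float, current: float) -> float:
    """Percent change from the baseline value to the current value."""
    return (current - baseline) / baseline * 100


def benchmark_met(baseline: float, current: float, target_pct: float) -> bool:
    """True if the observed percent change meets or exceeds the target."""
    return percent_change(baseline, current) >= target_pct


# Year one: mean ELA score must show a 10 percent gain over baseline.
year_one_met = benchmark_met(baseline=600.0, current=665.0, target_pct=10.0)

# Years two and three: the share of students passing the ELA must rise
# 20 percent relative to the baseline pass rate.
year_two_met = benchmark_met(baseline=0.55, current=0.68, target_pct=20.0)

print(year_one_met, year_two_met)  # → True True
```

With these invented figures, year one shows roughly an 10.8 percent gain and year two roughly a 23.6 percent increase in the pass rate, so both benchmarks would be judged as met.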
Evaluators will often start with the objectives for the evaluation and build evaluation data collection activities from those objectives. Evaluation objectives may guide either formative or summative data collection. Either way, quantitative or qualitative data, or both, is collected, and findings are compared to the project’s objectives. Objectives are certainly helpful in shaping the evaluation, but there is a risk that evaluators may become so focused on the objectives that they lose sight of other unanticipated outcomes or benefits to participants as a result of the program.
Although objectives assist in guiding an evaluation, there is another method, the goal-free approach, that does not prescribe using evaluation objectives. This approach is guided by the perspective that many findings and outcomes do not fall within the strict confines of the goals and objectives established by the project developers and the evaluator. Those who practice goal-free evaluation believe that these unforeseen outcomes may be more important than the outcomes that the program developers emphasize. One difficulty in conducting a goal-free evaluation is that projects that receive funding are required to show specific outcomes based on objectives. If those outcomes are not included in the evaluation, the appropriate data to present to funding bodies may not end up being collected.
goal-free approach An evaluation model designed to control for bias, whereby the evaluator purposely does not learn the goals and objectives of the program being evaluated but tries to determine these through careful data collection
Early Objectives-Based Approach
Evidence of program evaluation in the United States dates back to the early 1800s, but the Tylerian approach, named after its creator Ralph Tyler, was the first to focus on the use of behavioral objectives as a method of determining or judging the worth of a program. Beginning in 1932, Tyler, now considered the father of education...
| Publication date (per publisher) | 19.12.2016 |
|---|---|
| Series | Research Methods for the Social Sciences |
| Language | English |
| Subject area | Textbook / Dictionary |
| Humanities | |
| Social Sciences ► Education | |
| Social Sciences ► Sociology ► Empirical Social Research | |
| Keywords | approaches to program evaluation • Assessment, Evaluation & Research (Higher Education) • best practices in program evaluation • Dean T. Spaulding • Education • Evaluation & Research Methods • evaluation of a training program • evaluators in training • higher education / quality control, evaluation • how to evaluate a program • inquiry-based instruction • Joint Committee Standards for evaluation • models of program evaluation • Program evaluation • Research Methodologies • Sociology • sociological research methods • theories of program evaluation |
| ISBN-10 | 1-118-45020-5 / 1118450205 |
| ISBN-13 | 978-1-118-45020-8 / 9781118450208 |
Copy protection: Adobe-DRM
File format: EPUB (Electronic Publication)