
Wiley Handbook of Cognition and Assessment (eBook)

Frameworks, Methodologies, and Applications
eBook Download: EPUB
2016
Wiley (publisher)
978-1-118-95661-8 (ISBN)

€177.99 incl. VAT
(CHF 173.90)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately

This state-of-the-art resource brings together the most innovative scholars and thinkers in the field of testing to capture the changing conceptual, methodological, and applied landscape of cognitively-grounded educational assessments. 

  • Offers a methodologically rigorous review of cognitive and learning sciences models for testing purposes, as well as the latest statistical and technological know-how for designing, scoring, and interpreting results
  • Written by an international team of contributors at the cutting edge of cognitive psychology and educational measurement, under the editorship of a research director at the Educational Testing Service and an esteemed professor of educational psychology at the University of Alberta, supported by an expert advisory board
  • Covers conceptual frameworks, modern methodologies, and applied topics, in a style and at a level of technical detail that will appeal to a wide range of readers from both applied and scientific backgrounds
  • Considers emerging topics in cognitively-grounded assessment, including applications of emerging socio-cognitive models, cognitive models for human and automated scoring, and various innovative virtual performance assessments


André A. Rupp is Research Director at Educational Testing Service (ETS) in Princeton, NJ, where he works with teams that conduct comprehensive evaluation work for mature and emerging automated scoring systems. His research has focused on applications of principled assessment design frameworks in innovative assessment contexts as well as translating the statistical complexities of diagnostic measurement models into practical guidelines for applied specialists. Through dissemination and professional development efforts he is deeply dedicated to helping interdisciplinary teams navigate the complicated trade-offs between scientific, educational, political, and financial drivers of decision-making in order to help shape best methodological practices for evidentiary reasoning for complex assessment design and deployment lifecycles. He is co-author of Diagnostic Measurement: Theory, Methods, and Applications (2010).

Jacqueline P. Leighton is Professor and Chair of Educational Psychology at the University of Alberta, Canada. She is past Director of the University of Alberta's Centre for Research in Applied Measurement and Evaluation (CRAME). As a registered psychologist with the College of Alberta Psychologists, her research is focused on measuring the cognitive and socio-emotional processes underlying learning and assessment outcomes, including cognitive diagnostic assessment and feedback delivery and uptake. She has published in a variety of educational measurement journals and is past editor of Educational Measurement: Issues and Practice. She is co-author of The Learning Sciences in Educational Assessment (2011) and Cognitive Diagnostic Assessment for Education: Theory and Applications (2007) and co-editor of The Nature of Reasoning (2004).



Notes on Contributors ix

Foreword xix

Acknowledgements xxi

1 Introduction to Handbook 1
André A. Rupp and Jacqueline P. Leighton

Part I Frameworks 13

2 The Role of Theories of Learning and Cognition in Assessment Design and Development 15
Paul D. Nichols, Jennifer L. Kobrin, Emily Lai, and James Koepfler

3 Principled Approaches to Assessment Design, Development, and Implementation 41
Steve Ferrara, Emily Lai, Amy Reilly, and Paul D. Nichols

4 Developing and Validating Cognitive Models in Assessment 75
Madeleine Keehner, Joanna S. Gorin, Gary Feng, and Irvin R. Katz

5 An Integrative Framework for Construct Validity 102
Susan Embretson

6 The Role of Cognitive Models in Automatic Item Generation 124
Mark J. Gierl and Hollis Lai

7 Social Models of Learning and Assessment 146
William R. Penuel and Lorrie A. Shepard

8 Socio-emotional and Self-management Variables in Learning and Assessment 174
Patrick C. Kyllonen

9 Understanding and Improving Accessibility for Special Populations 198
Leanne R. Ketterlin-Geller

10 Automated Scoring with Validity in Mind 226
Isaac I. Bejar, Robert J. Mislevy, and Mo Zhang

Part II Methodologies 247

11 Explanatory Item Response Models 249
Paul De Boeck, Sun-Joo Cho, and Mark Wilson

12 Longitudinal Models for Repeated Measures Data 267
Jeffrey R. Harring and Ari Houser

13 Diagnostic Classification Models 297
Laine Bradshaw

14 Bayesian Networks 328
José P. González-Brenes, John T. Behrens, Robert J. Mislevy, Roy Levy, and Kristen E. DiCerbo

15 The Rule Space and Attribute Hierarchy Methods 354
Ying Cui, Mark J. Gierl, and Qi Guo

16 Educational Data Mining and Learning Analytics 379
Ryan S. Baker, Taylor Martin, and Lisa M. Rossi

Part III Applications 397

17 Large-Scale Standards-Based Assessments of Educational Achievement 399
Kristen Huff, Zachary Warner, and Jason Schweid

18 Educational Survey Assessments 427
Andreas Oranje, Madeleine Keehner, Hilary Persky, Gabrielle Cayton-Hodges, and Gary Feng

19 Professional Certification and Licensure Examinations 446
Richard M. Luecht

20 The In-Task Assessment Framework for Behavioral Data 472
Deirdre Kerr, Jessica J. Andrews, and Robert J. Mislevy

21 Digital Assessment Environments for Scientific Inquiry Practices 508
Janice D. Gobert and Michael A. Sao Pedro

22 Assessing and Supporting Hard-to-Measure Constructs in Video Games 535
Valerie Shute and Lubin Wang

23 Conversation-Based Assessment 563
G. Tanner Jackson and Diego Zapata-Rivera

24 Conclusion to Handbook 580
Jacqueline P. Leighton and André A. Rupp

Glossary 588

Index 603

Notes on Contributors


Jessica J. Andrews is an Associate Research Scientist in the Computational Psychometrics Research Center at Educational Testing Service (ETS) in Princeton, NJ. She received her Ph.D. in Learning Sciences at Northwestern University. Her research examines the cognitive processes underlying collaborative learning, and the use of technological environments (e.g., simulations, learning management systems) in supporting student learning and assessing individuals’ cognitive and noncognitive (e.g., collaborative) skills.

Ryan S. Baker is Associate Professor of Cognitive Studies at Teachers College, Columbia University, and Program Coordinator of TC's Masters of Learning Analytics. He earned his Ph.D. in Human‐Computer Interaction from Carnegie Mellon University. Dr. Baker was previously Assistant Professor of Psychology and the Learning Sciences at Worcester Polytechnic Institute, and served as the first Technical Director of the Pittsburgh Science of Learning Center DataShop, the largest public repository for data on the interaction between learners and educational software. He was the founding president of the International Educational Data Mining Society, and is currently Associate Editor of the Journal of Educational Data Mining. He has taught two MOOCs, Big Data and Education (twice), and (co‐taught) Data, Analytics, and Learning. His research combines educational data mining and quantitative field observation methods to better understand how students respond to educational software, and how these responses impact their learning. He studies these issues within intelligent tutors, simulations, multi‐user virtual environments, MOOCs, and educational games.

John T. Behrens is Vice President, Advanced Computing & Data Science Lab at Pearson and Adjunct Assistant Research Professor in the Department of Psychology at the University of Notre Dame. He develops and studies learning and assessment systems that integrate advances in the learning, computing, and data sciences. He has written extensively about the use of evidence‐centered design to guide development of complex educational systems as well as about the foundational logics of data analysis/data science and the methodological impacts of the digital revolution.

Isaac I. Bejar holds the title of Principal Research Scientist with Educational Testing Service (ETS) in Princeton, NJ. He is interested in improving methods of testing by incorporating advances in psychometric theory, cognitive psychology, natural language processing, and computer technology. He was a member of the editorial board and advisory board of Applied Psychological Measurement from 1981 to 1989, and was awarded the ETS Research Scientist Award in 2000. He published Cognitive and Psychometric Analysis of Analogical Problem Solving and co‐edited Automated Scoring of Complex Tasks in Computer‐Based Testing.

Laine Bradshaw is an Assistant Professor of Quantitative Methodology in the Educational Psychology Department in the College of Education at the University of Georgia (UGA). Her primary research focuses on advancing multidimensional psychometric methodology to support the diagnostic assessment of complex knowledge structures for educational purposes. With a Master's degree in Mathematics Education, she is also active in collaborations on interdisciplinary assessment development projects that require tailoring psychometrics to cognitive theories. Her work has been published in journals such as Psychometrika and Educational Measurement: Issues and Practice. Her early career program of research was recently recognized by the National Council on Measurement in Education's Jason Millman Award.

Gabrielle Cayton‐Hodges is a Research Scientist in the Learning Sciences Group at Educational Testing Service (ETS) in Princeton, NJ. She earned her BS degree in Brain and Cognitive Sciences from MIT and her PhD in Mathematics, Science, Technology, and Engineering Education from Tufts University. Gabrielle’s specialty is mathematical cognition and elementary mathematics education, focusing on the application of cognitive and learning sciences to mathematics assessment and the use of technology to support innovative approaches to gathering evidence about what students know and can do. She has a specific expertise in student understandings of numerical concepts such as place value and the use of multiple representations in mathematics and has also spent several years studying early algebra and learning progressions in the understanding of area and volume.

Sun‐Joo Cho is an Assistant Professor at Peabody College, Vanderbilt University. Her research topics include generalized latent variable modeling and its parameter estimation, with a focus on item response modeling.

Ying Cui is an Associate Professor at the University of Alberta. Her research interests include cognitive diagnostic assessment, person fit analysis, and applied statistical methods.

Paul De Boeck is Professor of Quantitative Psychology at The Ohio State University and emeritus from the KU Leuven (Belgium). He is especially interested in how psychometric models can be redefined as explanatory models or supplemented with explanatory components for applications in psychology and education.

Kristen E. DiCerbo’s research program centers on digital technologies in learning and assessment, particularly on the use of data generated from interactions to inform instructional decisions. She is the Vice President of Education Research at Pearson and has conducted qualitative and quantitative investigations of games and simulations, particularly focusing on the identification and accumulation of evidence. She previously worked as an educational researcher at Cisco and as a school psychologist. She holds doctorate and master’s degrees in Educational Psychology from Arizona State University.

Susan Embretson is Professor of Psychology at the Georgia Institute of Technology. Previously, she was Professor at the University of Kansas. Her research concerns integrating cognitive theory into psychometric item response theory models and into the design of measurement tasks. She has been recognized for this research, including the Career Contribution Award (2013) and the Technical and Scientific Contribution Award (1994–1997) from the National Council on Measurement in Education; the Distinguished Lifetime Achievement Award (2011) from the American Educational Research Association (Assessment and Cognition); and the Distinguished Scientist Award from the American Psychological Association Division 5 (Measurement, Evaluation and Statistics) for research and theory on item generation from cognitive theory. Embretson has also served as president of three societies in her area of specialization.

Gary Feng is a Research Scientist in the Research and Development division at Educational Testing Service (ETS) in Princeton, NJ. He works in the Cognitive, Accessibility, and Technology Sciences Center. He received his PhD in Developmental Psychology and MS in Statistics from the University of Illinois at Urbana‐Champaign. Before joining ETS, he was a faculty member at Duke University and held visiting and research positions at the University of Michigan and the University of Potsdam, Germany. He is broadly interested in the acquisition of reading skills and neurocognitive processes in reading. His past work uses eye‐tracking to examine cognitive processes of skilled and developing readers across different cultures. Gary contributes to the development of innovative literacy assessments.

Steve Ferrara was Vice President for Performance Assessment and led the Center for Next Generation Learning and Performance in Pearson's Research and Innovation Network. Steve conducts psychometric research and designs large-scale and formative assessments and automated language learning systems. He specializes in principled design, development, implementation, and validation of performance assessments, and in research on the content, cognitive, and linguistic response demands placed on examinees and on predicting the technical characteristics of items. Steve earned an MEd in Special Education from Boston State College and an EdS in Program Evaluation and a PhD in Educational Psychology and Measurement from Stanford University.

Mark J. Gierl is Professor of Educational Psychology and the Director of the Centre for Research in Applied Measurement and Evaluation (CRAME) at the University of Alberta. His specialization is educational and psychological testing, with an emphasis on the application of cognitive principles to assessment practices. Professor Gierl’s current research is focused on automatic item generation and automated essay scoring. His research is funded by the Medical Council of Canada, Elsevier, ACT Inc., and the Social Sciences and Humanities Research Council of Canada. He holds the Tier I Canada Research Chair in Educational Measurement.

Janice D. Gobert is a Professor of Learning Sciences and Educational Psychology at Rutgers. Formerly, she was the Co‐director of the Learning Sciences and Technologies Program at Worcester Polytechnic Institute. Her specialty is technology‐based learning with visualizations and simulations in scientific domains; her research areas are: intelligent tutoring systems for science, skill acquisition, performance assessment via log files, learning with visualizations, learner characteristics, and epistemology. She is also the Founding CEO of a start‐up company...

Publication date (per publisher) 21.11.2016
Series Wiley Handbooks in Education
Language English
Subject areas Humanities › Psychology › Developmental Psychology
Humanities › Psychology › Educational Psychology
Social Sciences › Education › General / Reference
Social Sciences › Sociology › Empirical Social Research
Keywords Assessment • Assessment, Evaluation & Research (Higher Education) • Cognition • Cognitive Psychology • Design • Education • Educational assessment • educational measurement • Educational Testing • ETS • Framework • Handbook • Higher education / quality control, evaluation • Learning Sciences • Measurement • Methodology • Psychology • Psychometrics • standardized tests • Statistics • Testing
ISBN-10 1-118-95661-3 / 1118956613
ISBN-13 978-1-118-95661-8 / 9781118956618
EPUB (Adobe DRM)

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to guard the eBook against misuse. The eBook is authorized to your personal Adobe ID at download time; you can then read it only on devices that are also registered to your Adobe ID.

File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The text reflows dynamically to the display and font size, which also makes EPUB a good fit for mobile reading devices.

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free Adobe Digital Editions software. We advise against using the OverDrive Media Console, as experience shows it frequently causes problems with Adobe DRM.
eReader: This eBook can be read on (almost) all eBook readers, but it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.

Buying eBooks from abroad
For tax law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
