
Robust Intelligence and Trust in Autonomous Systems (eBook)

eBook Download: PDF
2016 | 1st ed. 2016
XII, 270 pages
Springer US (publisher)
978-1-4899-7668-0 (ISBN)


€96.29 incl. VAT
(CHF 93.95)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately

This volume explores the intersection of robust intelligence (RI) and trust in autonomous systems across multiple contexts among autonomous hybrid systems, where hybrids are arbitrary combinations of humans, machines, and robots. To better understand the relationships between artificial intelligence (AI) and RI in a way that promotes trust between autonomous systems and human users, this book explores the underlying theory, mathematics, computational models, and field applications. It uniquely unifies the fields of RI and trust and frames them in a broader context, namely the effective integration of human-autonomous systems.

A description of the current state of the art in RI and trust introduces the research work in this area. With this foundation, the chapters further elaborate on key research areas and gaps that are at the heart of effective human-systems integration, including workload management, human-computer interfaces, team integration and performance, advanced analytics, behavior modeling, training, and, lastly, test and evaluation.

Written by leading international researchers from across the field of autonomous systems research, Robust Intelligence and Trust in Autonomous Systems dedicates itself to thoroughly examining the challenges and trends of systems that exhibit RI, the fundamental implications of RI in developing trusted relationships with present and future autonomous systems, and the effective human-systems integration that must result for trust to be sustained.

Contributing authors:  David W. Aha, Jenny Burke, Joseph Coyne, M.L. Cummings, Munjal Desai, Michael Drinkwater, Jill L. Drury, Michael W. Floyd, Fei Gao, Vladimir Gontar, Ayanna M. Howard, Mo Jamshidi, W.F. Lawless, Kapil Madathil, Ranjeev Mittu, Arezou Moussavi, Gari Palmer, Paul Robinette, Behzad Sadrfaridpour, Hamed Saeidi, Kristin E. Schaefer, Anne Selwyn, Ciara Sibley, Donald A. Sofge, Erin Solovey, Aaron Steinfeld, Barney Tannahill, Gavin Taylor, Alan R. Wagner, Yue Wang, Holly A. Yanco, Dan Zwillinger.




Preface 6
AAAI-2014 Spring Symposium Organizers 7
AAAI-2014 Spring Symposium: Keynote Speakers 7
Symposium Program Committee 8
Contents 12
1 Introduction 14
1.1 The Intersection of Robust Intelligence (RI) and Trust in Autonomous Systems 14
1.2 Background of the 2014 Symposium 15
1.3 Contributed Chapters 17
References 22
2 Towards Modeling the Behavior of Autonomous Systems and Humans for Trusted Operations 24
2.1 Introduction 24
2.2 Understanding the Value of Context 26
2.3 Context and the Complexity of Anomaly Detection 26
2.3.1 Manifolds for Anomaly Detection 27
2.4 Reinforcement Learning for Anomaly Detection 28
2.4.1 Reinforcement Learning 29
2.4.2 Supervised Autonomy 30
2.4.3 Feature Identification and Selection 31
2.4.4 Approximation Error for Alarming and Analysis 32
2.4.5 Illustration 33
2.4.5.1 Synthetic Domain 33
2.4.5.2 Real-World Domain 35
2.5 Predictive and Prescriptive Analytics 39
2.6 Capturing User Interactions and Inference 39
2.7 Challenges and Opportunities 41
2.8 Summary 42
References 43
3 Learning Trustworthy Behaviors Using an Inverse Trust Metric 45
3.1 Introduction 45
3.2 Related Work 47
3.2.1 Human-Robot Trust 47
3.2.2 Behavior Adaptation 47
3.3 Agent Behavior 49
3.4 Inverse Trust Estimate 49
3.5 Trust-Guided Behavior Adaptation 51
3.5.1 Evaluated Behaviors 52
3.5.2 Behavior Adaptation 53
3.6 Evaluation 54
3.6.1 eBotworks Simulator 55
3.6.2 Experimental Conditions 55
3.6.3 Evaluation Scenarios 56
3.6.3.1 Movement Scenario 56
3.6.3.2 Patrolling Scenario 58
3.6.4 Trustworthy Behaviors 59
3.6.5 Efficiency 62
3.6.6 Discussion 63
3.7 Conclusions 63
References 64
4 The “Trust V”: Building and Measuring Trust in Autonomous Systems 66
4.1 Introduction 66
4.2 Autonomy, Automation, and Trust 68
4.3 Dimensions of Trust 73
4.3.1 Trust Dimensions Arising from Automated Systems Attributes 73
4.3.2 Trust Dimensions Arising from Autonomous Systems Attributes 74
4.3.3 Another Trust Dimension: SoS 74
4.4 Creating Trust 75
4.4.1 Building Trust In 76
4.5 The Systems Engineering V-Model 77
4.6 The Trust V-Model 78
4.6.1 The Trust V Representation: Graphic 79
4.6.2 The Trust V Representation: Array 80
4.6.3 Trust V “Toolbox” 81
4.7 Specific Trust Example: Chatter 83
4.8 Measures of Effectiveness 84
4.9 Conclusions and Next Steps 86
A.1 Appendix 87
References 87
5 Big Data Analytic Paradigms: From Principal Component Analysis to Deep Learning 89
5.1 Introduction 89
5.2 Wind Data Description 90
5.3 Wind Power Forecasting Via Nonparametric Models 90
5.3.1 Advanced Neural Network Architectures Application 91
5.3.2 Wind Speed Results 93
5.4 Introduction to Deep Architectures 94
5.4.1 Training Deep Architectures 100
5.4.2 Training Restricted Boltzmann Machines 100
5.4.3 Training Autoencoders 102
5.5 Conclusions 104
References 105
6 Artificial Brain Systems Based on Neural Network Discrete Chaotic Dynamics. Toward the Development of Conscious and Rational Robots 106
6.1 Introduction 106
6.2 Background 108
6.3 Numerical Simulations 114
6.4 Conclusion 121
References 122
7 Modeling and Control of Trust in Human-Robot Collaborative Manufacturing 123
7.1 Introduction 123
7.2 Trust Model 126
7.2.1 Time-Series Trust Model for Dynamic HRC Manufacturing 126
7.2.2 Robot Performance Model 127
7.2.3 Human Performance Model 127
7.3 Neural Network Based Robust Intelligent Controller 129
7.4 Control Approaches: Intersection of Trust and Robust Intelligence 130
7.4.1 Manual Mode 131
7.4.2 Autonomous Mode 131
7.4.3 Collaborative Mode 132
7.5 Simulation 132
7.5.1 Manual Mode 133
7.5.2 Autonomous Mode 135
7.5.3 Collaborative Mode 135
7.5.4 Comparison of Control Schemes 135
7.6 Experimental Validation 136
7.6.1 Experimental Test Bed 136
7.6.2 Experimental Design 136
7.6.2.1 Experiment Scenario 137
7.6.2.2 Controlled Behavioral Study 139
7.6.2.3 Imposing Fatigue 139
7.6.2.4 Experiment Procedure 141
7.6.2.5 Measurements and Scales 141
7.6.3 Experimental Results 142
7.6.3.1 Trust Model Identification Procedure 142
7.6.3.2 Manual Mode 142
7.6.3.3 Autonomous Mode 143
7.6.3.4 Collaborative Mode 144
7.6.4 Comparison and Conclusion 145
7.7 Conclusion 147
References 147
8 Investigating Human-Robot Trust in Emergency Scenarios: Methodological Lessons Learned 150
8.1 Introduction 150
8.2 Conceptualizing Trust 151
8.2.1 Conditions for Situational Trust 153
8.3 Related Work on Trust and Robots 155
8.4 Crowdsourced Narratives in Trust Research 155
8.4.1 Iterative Development of Narrative Phrasing 157
8.5 Crowdsourced Robot Evacuation 162
8.5.1 Single Round Experimental Setup 162
8.5.2 Multi-Round Experimental Setup 163
8.5.3 Asking About Trust 164
8.5.4 Measuring Trust 165
8.5.5 Incentives to Participants 165
8.5.6 Communicating Failed Robot Behavior 168
8.6 Conclusion 170
References 171
9 Designing for Robust and Effective Teamwork in Human-Agent Teams 174
9.1 Introduction 174
9.2 Related Work 175
9.2.1 Team Structure 175
9.2.2 Shared Mental Model and Team Situation Awareness 176
9.2.3 Communication 177
9.3 Experiment 1: Team Structure and Robustness 178
9.3.1 Testbed 178
9.3.2 Experiment Design 180
9.3.3 Results 181
9.3.3.1 Duplicated Work 181
9.3.3.2 Under Utilization of Vehicles 183
9.3.3.3 Infrequent Communication 184
9.4 Experiment 2: Information-Sharing 185
9.4.1 Independent Variables 185
9.4.2 Dependent Variables 187
9.4.3 Participants 187
9.4.4 Procedure 188
9.4.5 Results 188
9.4.5.1 Team Performance 188
9.4.5.2 Team Coordination 190
9.4.5.3 Workload 193
9.4.5.4 User Preference and Comments 194
9.5 Discussion 195
9.6 Conclusion 195
References 196
10 Measuring Trust in Human Robot Interactions: Development of the “Trust Perception Scale-HRI” 198
10.1 Introduction 198
10.2 Creation of an Item Pool 200
10.3 Initial Item Pool Reduction 202
10.3.1 Experimental Method 203
10.3.2 Experimental Results 204
10.3.3 Key Findings and Changes 205
10.4 Content Validation 205
10.4.1 Experimental Method 206
10.4.2 Experimental Results 207
10.5 Task-Based Validity Testing: Does the Score Change Over Time with an Intervention? 210
10.5.1 Experimental Method 211
10.5.2 Experimental Results 212
10.5.2.1 Individual Item Analysis 212
10.5.2.2 Trust Score Validation 212
10.5.2.3 40 Items Versus 14 Items 214
10.6 Task-Based Validity Testing: Does the Scale Measure Trust? 215
10.6.1 Experimental Method 215
10.6.2 Experimental Results 216
10.6.2.1 Correlation Analysis of the Three Scales 216
10.6.2.2 Pre-post Interaction Analysis 217
10.6.2.3 Differences Across Scales and Conditions 218
10.6.3 Experimental Discussion 219
10.7 Conclusion 219
10.7.1 The Trust Perception Scale-HRI 219
10.7.2 Instruction for Use 221
10.7.3 Current and Future Applications 222
References 223
11 Methods for Developing Trust Models for Intelligent Systems 226
11.1 Introduction 226
11.2 Prior Work in the Development of Trust Models 228
11.2.1 Trust Models 230
11.2.2 Trust in Human-Robot Interaction (HRI) 231
11.3 The Use of Surveys as a Method for Developing Trust Models 233
11.3.1 Methodology 234
11.3.2 Results and Discussion 235
11.3.3 Modeling Trust 242
11.4 Robot Studies as a Method for Developing Trust Models 243
11.4.1 Methodology 243
11.4.2 Results and Discussion 250
11.4.2.1 Reducing Situation Awareness (SA) 250
11.4.2.2 Providing Feedback 251
11.4.2.3 Reducing Task Difficulty 253
11.4.2.4 Long-Term Interaction 254
11.4.2.5 Impact of Timing of Periods of Low Reliability 256
11.4.2.6 Impact of Age 256
11.4.3 Modeling Trust 257
11.5 Conclusions and Future Work 258
References 259
12 The Intersection of Robust Intelligence and Trust: Hybrid Teams, Firms and Systems 262
12.1 Introduction 262
12.1.1 Background 263
12.2 Theory 265
12.3 Outline of the Mathematics 267
12.3.1 Field Model 267
12.3.2 Interdependence 269
12.3.3 Incompleteness and Uncertainty 269
12.4 Evidence of Incompleteness for Groups 270
12.4.1 The Evidence from Studies of Organizations 271
12.4.2 Modeling Competing Groups with Limit Cycles 271
12.5 Gaps 273
12.6 Conclusions 274
References 275

Publication date (per publisher) April 7, 2016
Additional information XII, 270 p., 89 illus., 64 illus. in color.
Place of publication New York
Language English
Subject area Computer Science • Theory / Study • Artificial Intelligence / Robotics
Engineering • Electrical Engineering / Energy Technology
Engineering • Mechanical Engineering
Keywords Automation • Autonomy • Big Data • Collaboration • Control • Cyber-security • Deep learning • Driverless cars • Emergency • Hostile environments • Intelligence • metrics • Robot-human teams • Robotic medicine • Robust • Safety • Trust
ISBN-10 1-4899-7668-X / 148997668X
ISBN-13 978-1-4899-7668-0 / 9781489976680
PDF (watermarked)
Size: 6.6 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized for you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to reference books with columns, tables, and figures. A PDF can be displayed on almost any device, but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Additional feature: online reading
In addition to downloading it, you can also read this eBook online in a web browser.

Buying eBooks from abroad
For tax reasons, we can sell eBooks only within Germany and Switzerland. Unfortunately, we cannot fulfill eBook orders from other countries.
