
Perception-Action Cycle (eBook)

Models, Architectures, and Hardware
eBook Download: PDF
2011
XIV, 784 pages
Springer New York (publisher)
978-1-4419-1452-1 (ISBN)

€341.33 incl. VAT (CHF 329.95)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately
The perception-action cycle is the circular flow of information between an organism and its environment in the course of a sensory-guided sequence of behaviour towards a goal. Each action causes changes in the environment that are analyzed bottom-up through the perceptual hierarchy and lead to the processing of further action, top-down through the executive hierarchy, toward the motor effectors. These actions cause new changes that are analyzed and lead to new action, and so the cycle continues.

Perception-Action Cycle: Models, Architectures, and Hardware provides focused and easily accessible reviews of various aspects of the perception-action cycle. It is an extensive resource of information and an invaluable companion for anyone constructing and developing models, algorithms and hardware implementations of autonomous machines empowered with cognitive capabilities.

The book is divided into three main parts. In the first part, leading computational neuroscientists present brain-inspired models, grounded in experimental data, of perception, attention, cognitive control, decision making, conflict resolution and monitoring, knowledge representation and reasoning, learning and memory, planning and action, and consciousness. The second part discusses architectures, algorithms, and systems with cognitive capabilities that take only minimal guidance from the brain; these draw on cognitive science, computer vision, robotics, information theory, machine learning, computer agents and artificial intelligence. The third part covers the analysis, design and implementation of hardware systems with robust cognitive abilities, drawing on mechatronics, sensing technology, sensor fusion, smart sensor networks, control rules, controllability, stability, model/knowledge representation, and reasoning.
The perception-action cycle has been described by the eminent neuroscientist J.M. Fuster as the circular flow of information that takes place between the organism and its environment in the course of a sensory-guided sequence of behaviour towards a goal. Each action in the sequence causes certain changes in the environment that are analyzed bottom-up through the perceptual hierarchy and lead to the processing of further action, top-down through the executive hierarchy, toward the motor effectors. These cause new changes that are analyzed and lead to new action, and so on. This book provides a snapshot and a résumé of the current state of the art of the ongoing research avenues concerning the perception-reason-action cycle. The central aims of the volume are to provide an informational resource and a methodology for anyone interested in constructing and developing models, algorithms and systems of autonomous machines empowered with cognitive capabilities.
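The cyclic structure described above can be made concrete with a minimal sketch. The toy loop below is purely illustrative and not taken from the book; all names (Environment, perceive, decide, run_cycle) are hypothetical. An observation flows bottom-up from sensing to a perceptual summary, a decision flows top-down to an action, the action changes the environment, and the cycle repeats until the goal is reached.

# Minimal, illustrative perception-action loop (hypothetical names; not code from the book).
from dataclasses import dataclass
import random

@dataclass
class Environment:
    """1-D world: the agent must move its position onto the goal position."""
    position: int = 0
    goal: int = 5

    def sense(self) -> dict:
        # Bottom-up: raw observation of the current state of the world.
        return {"position": self.position, "goal": self.goal}

    def apply(self, action: int) -> None:
        # The action changes the environment, closing the cycle.
        self.position += action

def perceive(observation: dict) -> int:
    # Perceptual hierarchy reduced to a single step: signed distance to the goal.
    return observation["goal"] - observation["position"]

def decide(error: int) -> int:
    # Executive hierarchy reduced to a single step: move one unit toward the goal.
    return 0 if error == 0 else (1 if error > 0 else -1)

def run_cycle(env: Environment, max_steps: int = 20) -> int:
    """Run the perception-action cycle until the goal is reached; return steps used."""
    for step in range(max_steps):
        error = perceive(env.sense())    # bottom-up analysis of environmental changes
        if error == 0:
            return step                  # goal reached, cycle terminates
        env.apply(decide(error))         # top-down command toward the motor effectors
    return max_steps

if __name__ == "__main__":
    print("steps to goal:", run_cycle(Environment(position=random.randint(-5, 5))))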

Perception-Action Cycle: Models, Architectures, and Hardware 1
Preface 5
Contents 7
Contributors 11
Part I Computational Neuroscience Models 15
1 The Role of Attention in Shaping Visual Perceptual Processes 18
1.1 Introduction 18
1.2 Connecting Attention, Recognition, and Binding 21
1.3 Finding the Right Subset of Neural Pathways on a Recurrent Pass 28
1.4 Vision as Dynamic Tuning of a General Purpose Processor 32
References 33
2 Sensory Fusion 35
2.1 Introduction 35
2.2 Audio-Visual Integration 38
2.2.1 Audio-Visual Integration in the Superior Colliculus: Neurophysiological and Behavioral Evidence (Overview and Model Justification) 38
2.2.2 Model Components 41
2.2.3 Results 43
2.2.3.1 Enhancement and Inverse Effectiveness 43
2.2.3.2 Cross-Modal Suppression 45
2.2.3.3 Within-Modal Suppression Without Cross-Modal Suppression 48
2.2.3.4 Cross-Modal Facilitation and Ventriloquism Phenomenon 48
2.2.4 Successes, Limitations and Future Challenges 52
2.3 Visual-Tactile Integration 54
2.3.1 Visual-Tactile Representation of Peripersonal Space: Neurophysiological and Behavioral Evidence (Overview and Model Justification) 54
2.3.2 A Neural Network Model for Peri-Hand Space Representation: Simulation of a Healthy Subject and a RBD Patient (Model Components and Results 1) 56
2.3.2.1 Simulation of the Healthy Subject 59
2.3.2.2 Simulation of the RBD Patient with Left Tactile Extinction 61
2.3.3 Modeling Peri-Hand Space Resizing: Simulation of Tool-Use Training (Model Components and Results 2) 63
2.3.4 Successes, Limitations and Future Challenges 67
2.4 Conclusions 70
References 72
3 Modelling Memory and Learning Consistently from Psychology to Physiology 75
3.1 Introduction 76
3.2 The Recommendation Architecture Model 77
3.3 Review of Experimental Data Literature 83
3.3.1 Semantic Memory 83
3.3.2 Episodic Memory 84
3.3.3 Procedural Memory 84
3.3.4 Working Memory 85
3.3.5 Priming Memory 86
3.3.6 Dissociations Indicating Separate Memory Systems 87
3.4 Other Modelling Approaches 87
3.5 Brain Anatomy and the Recommendation Architecture Model 90
3.5.1 Cortical Structure 90
3.5.2 Cortical Information Models 90
3.5.2.1 Information Model for a Cortical Area 91
3.5.2.2 Information Model for a Cortical Column 95
3.5.2.3 Information Model for a Pyramidal Neuron 98
3.5.2.4 Pyramidal Neuron Dynamics 101
3.5.3 Structure of the Basal Ganglia and Thalamus 104
3.5.4 Information Models for the Thalamus and Basal Ganglia 105
3.5.4.1 Information Model for the Thalamus 106
3.5.4.2 Information Model for the Striatum 107
3.5.4.3 Information Model for the GPi and SNr 109
3.5.4.4 Information Model for the Nucleus Accumbens 109
3.5.5 Structure of the Hippocampal System 109
3.5.6 Information Model for the Hippocampal System 110
3.5.7 Structure of the Cerebellum 114
3.5.8 Information Model for the Cerebellum 114
3.6 Modelling of Memory and Learning Phenomena 116
3.6.1 Receptive Fields Stability and Memory 116
3.6.2 Development and Evolution of Indirect Activation Recommendation Strengths 118
3.6.3 Semantic Memory 119
3.6.4 Working Memory 122
3.6.5 Episodic Memory 124
3.6.6 Priming Memory 126
3.6.7 Procedural Memory 127
3.7 Mapping Between Different Levels of Description 128
3.8 More Complex Cognitive Processes 130
3.8.1 Attention 130
3.8.2 Emotion and Reward 131
3.8.3 Sleep 132
3.8.4 Mental Image Manipulation 133
3.8.5 Self-Awareness 134
3.8.6 Imagination 135
3.8.7 Planning 137
3.8.8 Stream of Consciousness 138
3.9 Electronic Implementations 139
3.10 Conclusions 140
References 141
4 Value Maps, Drives, and Emotions 146
4.1 Overview of the DECIDER Model 146
4.2 Review of Experimental Data 147
4.2.1 Behavioral Data on Risky Decision Making 147
4.2.2 Data on Neural Bases of Cognitive-Emotional Decision Making 151
4.3 Review of Previous Decision Models 155
4.3.1 Psychological Models Without Explicit Brain Components 155
4.3.2 Models of Brain Area Involvement in Cognitive-Emotional Decision Making 157
4.4 Organization of the Model 158
4.4.1 Fuzzy Emotional Traces and Adaptive Resonance 158
4.4.2 Effects of Learning 162
4.4.3 Adaptive Resonance and Its Discontents: Relative Versus Absolute Emotional Values 163
4.4.4 Higher Level and Deliberative Rules 167
4.5 A Simplified Simulation 169
4.6 Concluding Remarks 172
4.6.1 Predictions and Syntheses 172
4.6.2 Extension to a Multi-Drive Multiattribute Decision Model 174
4.6.3 The Larger Human Picture 175
References 176
5 Computational Neuroscience Models: Error Monitoring, Conflict Resolution, and Decision Making 180
5.1 Models of Cognitive Control 182
5.1.1 Biased Competition Model 182
5.1.2 Neural Models of Decision-Making 183
5.2 Medial Prefrontal Cortex and Performance Monitoring 183
5.2.1 Models of Performance Monitoring 184
5.2.1.1 Comparator Model 184
5.2.1.2 Conflict Monitoring Model 184
5.2.1.3 Action Selection Model 186
5.3 Error Likelihood Model 186
5.3.1 Testing the Error Likelihood Model 187
5.3.2 Risk 189
5.3.3 Multiple Response Effects 190
5.3.3.1 Cognitive Control Effects Driven by Error Likelihood, Conflict, and Errors 191
5.4 Future Challenges 192
5.4.1 Reward as well as Error Likelihood? 192
5.5 Toward a More Comprehensive Model of Performance Monitoring 192
5.6 Concluding Remarks 193
References 194
6 Neural Network Models for Reaching and Dexterous Manipulation in Humans and Anthropomorphic Robotic Systems 197
6.1 Introduction 198
6.2 Overview 199
6.2.1 Overview of the Neural Network Model for Arm Reaching and Grasping 199
6.2.2 Modular Multinetwork Architecture for Learning Reaching, and Grasping Tasks 199
6.3 Experimental and Computational Neurosciences Background 200
6.3.1 Review of Experimental Data Literature 200
6.3.2 Review of Previous Modeling Attempts 201
6.4 The Neural Network Model Architecture 203
6.4.1 Model Components 203
6.4.1.1 The Basic Module 203
6.4.1.2 Learning the Inverse Kinematics of the Arm and Fingers: LM1 204
6.4.1.3 Learning to Associate Object's Intrinsic Properties and Grasping Postures: LM2 208
6.5 Simulations Results, Limitations, and Future Challenges 211
6.5.1 Simulation Results 211
6.5.1.1 Generation of Reaching and Grasping Trajectories 211
6.5.1.2 Learning Capabilities, Training, and Generalization Errors of the GRASP Module 213
6.5.1.3 Analysis of the Neural Activity of the GRASP Module During Performance 216
6.5.2 Discussion 218
6.5.2.1 Reaching and Grasping Performance 218
6.5.2.2 Neural Activity of the GRASP Module 219
6.5.2.3 Model Assumptions, Limitations, and Possible Solutions to Challenge Them 221
6.6 Conclusion 222
References 223
7 Schemata Learning 228
7.1 Introduction 228
7.2 Review of Prior Models and Neuroscience Evidences 229
7.3 Proposed Model 233
7.3.1 General 233
7.3.2 Training 236
7.3.3 Action Generation in Physical Environment and Motor Imagery 237
7.4 Setup of Humanoid Robot Experiments 237
7.5 Experimental Results 239
7.5.1 Overall Task Performances in the End of Development 239
7.5.2 Development Processes 240
7.5.3 Analyses 242
7.6 Discussion 244
7.6.1 Summary of the Robot Experiments 244
7.6.2 Schemata Learning from Developmental Psychological Views 245
7.7 Summary 247
References 248
8 The Perception-Conceptualisation-Knowledge Representation-Reasoning Representation-Action Cycle: The View from the Brain 251
8.1 Introduction 251
8.2 The GNOSYS Model 255
8.2.1 The Basic GNOSYS Robot Platform and Environment 255
8.2.2 Information Flow and GNOSYS Sub-systems 255
8.2.2.1 Perception 256
8.2.2.2 Memory 257
8.2.2.3 Action Execution 258
8.3 The GNOSYS Model Processing Details 259
8.3.1 The GNOSYS Perception System 259
8.3.2 Learning Attended Object Representations 261
8.3.3 Learning Expectation of Reward 264
8.3.4 The GNOSYS Concept System 266
8.4 The Development of Internal Models in the Brain 267
8.5 Thinking as Mental Simulation 271
8.6 Creativity as Unattended Mental Simulation 273
8.6.1 Simulation Results for Unusual Uses of a Cardboard Box 276
8.6.1.1 Meta Goal 277
8.6.1.2 Object Codes 277
8.6.1.3 Affordance Codes 277
8.6.1.4 Mental Simulation Loop 277
8.6.1.5 Error Monitor 278
8.6.1.6 Attention 278
8.7 Reasoning as Rewarded Mental Simulation 280
8.7.1 Non-linguistic Reasoning 280
8.7.2 Setting Up the Linguistic Machinery 283
8.7.3 Linguistic Reasoning 283
8.8 Overall Results of the System 285
8.8.1 Experiments 285
8.8.2 Results 287
8.8.3 Extensions Needed 287
8.9 Relation to Other Cognitive System Architectures 288
8.10 Conclusions 289
References 291
9 Consciousness, Decision-Making and Neural Computation 294
9.1 Introduction 295
9.2 A Higher Order Syntactic Thought Theory of Consciousness 296
9.2.1 Multiple Routes to Action 296
9.2.2 A Computational Hypothesis of Consciousness 299
9.2.3 Adaptive Value of Processing in the System That Is Related to Consciousness 300
9.2.4 Symbol Grounding 302
9.2.5 Qualia 303
9.2.6 Pathways 304
9.2.7 Consciousness and Causality 305
9.2.8 Consciousness, a Computational System for Higher Order Syntactic Manipulation of Symbols, and a Commentary or Reporting Functionality 306
9.3 Selection Between Conscious vs. Unconscious Decision-Making and Free Will 308
9.3.1 Dual Routes to Action 308
9.3.2 The Selfish Gene vs. the Selfish Phene 311
9.3.3 Decision-Making Between the Implicit and Explicit Systems 313
9.3.4 Free Will 314
9.4 Decision-Making and "Subjective Confidence" 315
9.4.1 Neural Networks for Decision-Making That Reflect "Subjective Confidence" in Their Firing Rates 316
9.4.2 A Model for Decisions About Confidence Estimates 320
9.5 Oscillations and Stimulus-Dependent Neuronal Synchrony: Their Role in Information Processing in the Ventral Visual System and in Consciousness 324
9.6 A Neural Threshold for Consciousness: The Neurophysiology of Backward Masking 327
9.6.1 The Neurophysiology and Psychophysics of Backward Masking 327
9.6.2 The Relation to Blindsight 329
9.7 The Speed of Visual Processing Within a Cortical Visual Area Shows That Top-Down Interactions with Bottom-Up Processes Are Not Essential for Conscious Visual Perception 330
9.8 Comparisons with Other Approaches to Consciousness 331
References 335
10 A Review of Models of Consciousness 341
10.1 Introduction 341
10.2 The Models of Consciousness 343
10.2.1 The Higher Order Thought Model 343
10.2.2 The Working Memory Model 344
10.2.3 The Global Workspace Model 345
10.2.4 The Complexity Model 346
10.2.5 The Recurrent Model 347
10.2.6 The Neural Field Model 347
10.2.7 The Relational Mind 348
10.2.8 The Attention-Based CODAM Model 348
10.2.9 Further Models of Consciousness 350
10.3 Criteria for the Review 350
10.3.1 Fits to Experimental Data 351
10.3.2 The Presence of Attention 352
10.3.3 As Providing an Explanation of Mental Diseases 354
10.3.4 Existence of an Inner Self 355
10.4 The Test Results 356
10.4.1 Higher Order Thought 357
10.4.2 Working Memory 358
10.4.3 Global Workspace 358
10.4.4 Complexity 358
10.4.5 Recurrence 358
10.4.6 Neural Field Theory 359
10.4.7 Relational Mind 359
10.4.8 CODAM 359
10.4.9 Possible Model Fusion 359
10.5 Conclusions 360
References 361
Part II Cognitive Architectures 364
11 Vision, Attention Control, and Goals Creation System 367
11.1 Overview 367
11.2 Computational Models of Visual Attention 368
11.2.1 Bottom-Up Visual Attention 368
11.2.2 Top-Down Visual Attention 370
11.2.3 Attentional Selection: Attention as a Controller 370
11.2.4 CODAM: COrollary Discharge of Attention Movement 371
11.3 Applications 372
11.3.1 Scene/Object Recognition 372
11.3.2 Novelty Detection and Video Summarization 373
11.3.3 Robotic Vision 374
11.4 Volumetric Saliency by Feature Competition 375
11.5 Problem Formulation 376
11.6 Saliency-Based Video Classification 378
11.7 Evaluation of Classification Performance 380
11.8 Action Recognition 384
11.9 Spatiotemporal Point Detection 385
11.10 Discussion 386
References 387
12 Semantics Extraction From Multimedia Data: An Ontology-Based Machine Learning Approach 391
12.1 Introduction 391
12.2 Fusing at the Semantic Level 393
12.2.1 Low-, Mid- and High-Level Fusion 393
12.2.1.1 Low-Level Fusion 394
12.2.1.2 Mid-Level Fusion 394
12.2.1.3 High-Level Fusion 394
12.2.2 Redundancy and Complementarity of Multimedia Information 395
12.2.2.1 Complementarity 395
12.2.2.2 Redundancy 396
12.2.3 Physical and Logical Document Structure 397
12.2.4 Practical Considerations 398
12.3 Methodology 400
12.3.1 Motivation 400
12.3.2 Problem Formulation 401
12.3.2.1 Reference Functions 402
12.3.2.2 Approximation Functions 402
12.3.2.3 Distance Between A-Boxes 402
12.3.3 Using Directed Graphs 403
12.3.3.1 Set of DL Assertions as Directed Graphs 403
12.3.4 Optimal Graph Expansion Operators 405
12.3.4.1 Elementary Operators 405
12.3.4.2 Greedy Search for Optimal Operators 405
12.3.4.3 Optimal Elementary Operators 406
12.3.4.4 Optimal Edge Addition 406
12.3.4.5 Optimal Vertex Addition 406
12.3.4.6 Complexity Issues 407
12.3.5 Scoring Functions for Graph Expansion Operators 407
12.3.5.1 Graph Local Representations 408
12.3.5.2 Representing Graph Paths as Features 409
12.3.5.3 Example 409
12.3.5.4 Representing Uncertainty 410
12.3.5.5 Complexity Issues 410
12.3.5.6 Soft Classifiers as Scoring Functions 411
12.4 Evaluation 412
12.4.1 Experimental Setting 412
12.4.1.1 Data 412
12.4.1.2 Methodology 413
12.4.2 Evaluation Results 415
12.5 Related Work 417
12.6 Conclusions 418
References 418
13 Cognitive Algorithms and Systems of Episodic Memory, Semantic Memory, and Their Learnings 420
13.1 Introduction 420
13.2 Computational Systems of Episodic Memory, Semantic Memory, and Their Learnings 422
13.2.1 Cognitive Systems of Learning and Memory 422
13.2.1.1 Collins and Quillian's Hierarchical Network Model 423
13.2.1.2 ACT-R 424
13.2.1.3 CLARION 426
13.2.2 Connectionist Systems of Episodic Memory, Semantic Memory and Their Learnings 427
13.3 A Multileveled Network System of Episodic Memory, Semantic Memory, and Their Learnings 430
13.3.1 Single Memory: To Locally Store Information 433
13.3.2 Memory Triangle: To Learn Meanings or Common Features 434
13.3.3 Organizing Memory Triangles: To Learn a Knowledge Structure 435
13.3.4 Conceptual Learning: To Ground Symbols to Their Meanings 435
13.3.5 Episodic Storage: To Store Episodic Memory 436
13.4 Simulating Episodic Memory, Semantic Memory, and Their Learnings 438
13.4.1 Episodic Learning, Serial Recall, and Recognition 438
13.4.2 Dreaming, Learning, and Memory Consolidation 440
13.4.3 Retrograde Amnesia and Anterograde Amnesia 442
13.4.4 Developmental Amnesia 443
13.4.5 Dense Amnesia and Direct Semantic Learning 445
13.4.6 Robustness and Flexibility 447
13.5 Future Challenges 447
References 448
14 Motivational Processes Within the Perception-Action Cycle 452
14.1 Overview 452
14.2 Background: Data and Models Relevant to Motivational Representations, Processes, and Structures 454
14.2.1 Previous Work on Motivation 454
14.2.2 Previous Work on Personality 456
14.2.3 Previous Work on Cognitive Architectures 458
14.2.4 Essential Desiderata 459
14.3 The CLARION Cognitive Architecture: The Role of Motivational Variables 459
14.3.1 Overview of CLARION 459
14.3.2 The Action-Centered Subsystem 461
14.3.3 The Non-Action-Centered Subsystem 463
14.3.4 The Motivational Subsystem 464
14.3.5 The Meta-Cognitive Subsystem 467
14.3.6 Model of Personality Within CLARION 468
14.4 Results, Successes, Limitations, and Future Challenges 469
14.4.1 Some Simulation Results 469
14.4.2 Implications, Limitations, and Future Work 473
References 474
15 Cognitive Algorithms and Systems of Error Monitoring, Conflict Resolution and Decision Making 476
15.1 Overview 476
15.2 Algorithm/System Justification 478
15.3 The Algorithm/System and How It Deviates from Its Predecessors 479
15.3.1 Robot Task Model Components 480
15.3.1.1 Plan Coordination Components 481
15.3.1.2 Plan Components 482
15.3.1.3 Resources 483
15.3.2 Functional Architecture 483
15.3.3 Information Flow 484
15.3.4 Petri Net Model of Task Plans 486
15.4 Successes, Limitations and Future Challenges 496
References 498
16 Developmental Learning of Cooperative Robot Skills: A Hierarchical Multi-Agent Architecture 500
16.1 Introduction 501
16.2 Hierarchical Multi-Agent Control Framework 503
16.2.1 Mapping Agents to Degrees of Freedom 504
16.2.2 Hierarchical Architecture 504
16.2.3 Continuous Problem Setting 504
16.3 Agent Architecture: The Case of Robot Kinematic Chains 507
16.3.1 Basic Internal Functions of an Agent 509
16.3.2 Continuous Reinforcement Learning: Kinematic Chain 510
16.3.2.1 Q-Learning Method 510
16.3.2.2 State-Space Fuzzification for Continuous Problem Sets 511
16.3.2.3 Action Selection and Reward Function 512
16.4 Agent Architecture: The Case of Collaborative Mobile Robots 514
16.4.1 Continuous Reinforcement Learning: Mobile Robots 516
16.4.1.1 TD(λ) Learning Method 516
16.4.1.2 TD(λ) Learning with Linear Function Approximation 518
16.4.1.3 TD(λ) Learning Method with Gradient Correction 519
16.4.1.4 Linear Function Approximation Using a Fuzzy Rule Base 520
16.5 The RL-Based Robot Control Architecture 523
16.6 Numerical Experiments: Results and Discussion 524
16.6.1 Single Kinematic Chain 524
16.6.2 Multi-Finger Grasp 529
16.6.3 Collaborative Mobile Robots: Box-Pushing Task 533
16.7 Conclusion and Future Work 539
References 540
17 Actions and Imagined Actions in Cognitive Robots 542
17.1 Introduction 543
17.2 The GNOSYS Playground 547
17.3 Forward/Inverse Model for Reaching: The Passive Motion Paradigm 550
17.4 Spatial Map and Pushing Sensorimotor Space 555
17.4.1 Acquisition of the Sensorimotor Space 555
17.4.2 Dynamics of the Sensorimotor Space 558
17.4.3 Value Field Dynamics: How Goal Influences Activity in SMS 561
17.4.4 Reaching Spatial Goals Using the Spatial Sensorimotor Space 562
17.4.5 Learning the Reward Structure in "Pushing" Sensorimotor Space 564
17.5 A Goal-Directed, Mental Sequence of "Push-Move-Reach" 569
17.6 Discussion 571
References 573
18 Cognitive Algorithms and Systems: Reasoning and Knowledge Representation 576
18.1 Introduction 576
18.2 Neurons and Symbols 578
18.2.1 Abstraction 579
18.2.2 Modularity 579
18.2.3 Applications 580
18.2.4 Expressiveness 580
18.2.5 Representation 581
18.2.6 Nonclassical Reasoning 581
18.3 Neural-Symbolic Learning Systems 582
18.4 Technical Background 584
18.4.1 Neural Networks and Neural-Symbolic Systems 584
18.4.2 The Language of Connectionist Modal Logic 586
18.4.3 Reasoning About Time and Knowledge 588
18.5 Connectionist Nonclassical Reasoning 589
18.5.1 Connectionist Modal Reasoning 590
18.5.2 Connectionist Temporal Reasoning 591
18.5.3 Case Study 593
18.6 Fibring Neural Networks 596
18.7 Concluding Remarks 597
References 600
19 Information Theory of Decisions and Actions 604
19.1 Introduction 605
19.2 Rationale 606
19.3 Notation 608
19.3.1 Probabilistic Quantities 608
19.3.2 Entropy and Information 608
19.4 Markov Decision Processes 610
19.4.1 MDP: Definition 610
19.4.2 The Value Function of an MDP and Its Optimization 611
19.5 Coupling Information with Decisions and Actions 613
19.5.1 Information and the Perception-Action Cycle 613
19.5.1.1 Causal Bayesian Networks 614
19.5.1.2 Bayesian Network for a Reactive Agent 614
19.5.1.3 Bayesian Network for a General Agent 615
19.5.2 Actions as Coding 616
19.5.3 Information-To-Go 619
19.5.3.1 A Bellman Picture 619
19.5.3.2 Perfectly Adapted Environments 619
19.5.3.3 Predictive Information 620
19.5.3.4 Symmetry 621
19.5.4 The Balance of Information 621
19.5.4.1 The Data Processing Inequality and Chain Rules for Information 622
19.5.4.2 Multi-Information and Information in Directed Acyclic Graphs 623
19.6 Bellman Recursion for Sequential Information Processing 624
19.6.1 Introductory Remarks 625
19.6.2 Decision Complexity 626
19.6.3 Recursion Equation for the MDP Information-To-Go 628
19.6.3.1 The Environmental Response Term 628
19.6.3.2 The Decision Complexity Term 629
19.7 Trading Information and Value 629
19.7.1 The "Free-Energy" Functional 629
19.7.2 Perfectly Adapted Environments 632
19.8 Experiments and Discussion 633
19.8.1 Information-Value Trade-Off in a Maze 633
19.8.2 Soft vs. Sharp Policies 634
19.9 Conclusions 636
References 637
20 Artificial Consciousness 640
20.1 Introduction 640
20.2 Goals of Artificial Consciousness 643
20.2.1 Environment Coupling 645
20.2.2 Autonomy and Resilience 647
20.2.3 Phenomenal Experience 648
20.2.4 Semantics or Intentionality of the First Type 650
20.2.5 Self-Motivations or Intentionality of the Second Type 651
20.2.6 Information Integration 652
20.2.7 Attention 653
20.3 A Consciousness-Oriented Architecture 655
20.3.1 The Elementary Intentional Unit 657
20.3.2 The Intentional Module 659
20.3.3 The Intentional Architecture 662
20.3.4 Check List for Consciousness-Oriented Architectures 664
20.3.5 A Comparison with Other Approaches 666
20.4 Conclusion 668
References 670
Part III Hardware Implementations 675
21 Smart Sensor Networks 677
21.1 Overview 677
21.1.1 Wireless Sensor Networks Technology 678
21.1.2 Design Requirements and Issues 680
21.1.3 Implementation Issues 681
21.2 Engineering Technology Justifications 682
21.2.1 Application of Perception-Reason-Action Sensor Networks 682
21.2.1.1 Distributed Multi-robot Perception, Navigation, and Manipulation 683
21.2.1.2 Distributed Sense-and-Response Systems 684
21.2.1.3 Dynamic Situation Awareness and Decision Support Systems 686
21.2.2 Review of Application Challenges 686
21.2.3 Review of Previous Engineering Technology Systems 687
21.3 The System 688
21.3.1 Components 690
21.3.1.1 Data-Centric Sensor Network Protocols 690
21.3.1.2 Distributed Services 691
21.3.1.3 Distributed Perception-Reason-Action Modules 695
21.3.2 Proof of Concept 700
21.3.2.1 Multi-robot Control Applications 700
21.3.2.2 Real-Time Target Tracking Applications 702
21.3.3 Preliminary Results 703
21.3.3.1 Performance of Multi-robot Control Applications 703
21.3.3.2 Performance of Target Tracking Applications 704
21.4 Future Work 706
21.4.1 Future Extensions 706
References 708
22 Multisensor Fusion for Low-Power Wireless Microsystems 712
22.1 Introduction 712
22.2 ANNs in Electrochemical Sensor Fusion 715
22.3 Neural Hardware in VLSI Technology 718
22.3.1 Supervised ANN-Based Hardware 719
22.3.2 Unsupervised ANN-Based Hardware 719
22.4 Analytical Techniques for Counteracting Drift 721
22.4.1 Recalibration 721
22.4.2 Data Filtering 722
22.4.3 Drift Insensitivity 722
22.4.4 Fault Isolation 723
22.5 Lab-in-a-Pill 723
22.6 The ``Neural" Solution: Adaptive Stochastic Classifier 726
22.6.1 Continuous Restricted Boltzmann Machine 726
22.6.1.1 Continuous Stochastic Neuron 726
22.6.1.2 CRBM Learning Rule 727
22.6.2 Training Methodology 728
22.6.3 Simulation Results 729
22.6.3.1 With Simple, Multidimensional Overlapping Clusters 730
22.6.3.2 With 2D Non-Gaussian Meshed Clusters 732
22.6.3.3 With Real Drifting Data 734
22.7 CRBM Hardware and Experimental Results 737
22.7.1 Chip Implementation 737
22.7.2 Learning in Hardware 738
22.7.3 Regenerating Data With a Symmetric Distribution 740
22.7.4 Regenerating Data with a Nonsymmetric Distribution 741
22.7.5 Regenerating Data with a Doughnut-Shaped Distribution 742
22.8 Discussion and Future Works 743
22.9 Summary 745
References 745
23 Bio-Inspired Mechatronics and Control Interfaces 750
23.1 Overview 750
23.2 Previous Work 752
23.3 System Architecture 754
23.3.1 Background and Problem Definition 754
23.3.2 System Training Phase 754
23.3.2.1 Recording Arm Motion 755
23.3.2.2 Recording Muscle Activity 756
23.3.3 Data Representation 757
23.3.4 Decoding Arm Motion from EMG Signals 759
23.3.5 Modeling Human Arm Movement 761
23.3.5.1 Graphical Models 761
23.3.5.2 Building the Model 762
23.3.5.3 Inference Using the Graphical Model 765
23.3.6 Filtering Motion Estimates Using the Graphical Model 766
23.3.7 Robot Control 766
23.4 Experimental Results 768
23.4.1 Hardware and Experiment Design 768
23.4.2 Efficiency Assessment 769
23.5 Conclusion and Future Extensions 772
References 774
Index 777

Publication date (per publisher): 2 February 2011
Series: Springer Series in Cognitive and Neural Systems
Additional information: XIV, 784 pages, 237 illustrations, 76 in color
Place of publication: New York
Language: English
Subject areas: Computer Science > Software Development > User Interfaces (HCI)
Medicine / Pharmacy > Medical Specialties > Neurology
Medicine / Pharmacy > Physiotherapy / Occupational Therapy > Orthopedics
Medical Studies > First Stage (Preclinical) > Biochemistry / Molecular Biology
Natural Sciences > Biology > Human Biology
Natural Sciences > Biology > Zoology
Engineering > Electrical Engineering / Energy Technology
Engineering > Medical Technology
ISBN-10 1-4419-1452-8 / 1441914528
ISBN-13 978-1-4419-1452-1 / 9781441914521
File format: PDF (digital watermark)
Size: 29.3 MB

