
Machine Learning Proceedings 1990 (eBook)

Proceedings of the Seventh International Conference on Machine Learning, University of Texas, Austin, Texas, June 21-23, 1990
eBook Download: PDF
2014 | 1st edition
427 pages
Elsevier Science (publisher)
978-1-4832-9858-0 (ISBN)
System requirements
€53.87 incl. VAT
(CHF 52.60)
eBooks are sold by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately
Machine Learning Proceedings 1990

Front Cover 1
Machine Learning: Proceedings of the Seventh International Conference (1990) 2
Copyright Page 3
Table of Contents 4
Preface 8
PART 1: EMPIRICAL LEARNING 10
Chapter 1. Knowledge Acquisition from Examples using Maximal Representation Learning 11
Abstract 11
1 Introduction 11
2 Learning Class Description from Examples 11
3 Redundancy of an attribute at a node of a GT 12
4 Importance of an attribute 13
5 The Inference Process 13
6 Practical Applications 14
7 Conclusions 15
Acknowledgements 15
References 15
Chapter 2. KBG: A Knowledge Based Generalizer 18
Abstract 18
1 Introduction 18
2 Structural Matching 18
3 The Algorithm 20
4 Conclusion 24
REFERENCES 24
Chapter 3. Performance Analysis of A Probabilistic Inductive Learning System 25
Abstract 25
1. Introduction 25
2. An Efficient Classification Method 25
3. Experimental Results 28
4. Discussions and Summary 29
5. References 30
Chapter 4. A Comparative Study of ID3 and Backpropagation for English Text-to-Speech Mapping 33
Abstract 33
1 Introduction 33
2 A Simple Comparative Study 34
3 Three Hypotheses 35
4 Discussion 38
5 Conclusions 39
6 Acknowledgements 39
7 References 39
Chapter 5. Learning from Data with Bounded Inconsistency 41
Abstract 41
1 Introduction 41
2 Bounded Inconsistency 41
3 Approach 42
4 Implementation 42
5 Example 43
6 Comparison to Related Work 45
7 Discussion 46
8 Formal Results 47
9 Summary 47
Acknowledgments 48
References 48
Chapter 6. Conceptual Set Covering: Improving Fit-And-Split Algorithms 49
Abstract 49
1 Introduction 49
2 Fit-and-Split Learning Problem 49
3 Conceptual-Set-Covering Algorithm 53
4 Evaluation 54
5 Discussion and Conclusion 55
References 56
Chapter 7. Incremental Learning of Rules and Meta-rules 58
Abstract 58
1. Introduction 58
2 Examples Of Incremental Processes 59
3 Learning And Approximation 60
4. Multi-Layers Learning 63
References 65
Chapter 8. An Incremental Method for Finding Multivariate Splits for Decision Trees 67
Abstract 67
1 Introduction 67
2 Issues for Decision-Tree Induction 67
3 The PT2 Decision Tree Algorithm 69
4 Illustrations 71
5 Discussion 73
Acknowledgments 73
References 73
Chapter 9. Incremental Induction of Topologically Minimal Trees 75
Abstract 75
1. Introduction 75
2. Topological Relevance 76
3. IDL 77
4. Analysis and Empirical results 80
5. Related and Future Work 82
6. Conclusion 83
Acknowledgements 83
References 83
PART 2: CONCEPTUAL CLUSTERING 84
Chapter 10. Analysis of Categorization 85
Abstract 85
The Structure of the Environment 85
Application of the Algorithm 88
Comparisons to Cheeseman, Kelly, Self, Stutz, Taylor, & Freeman (1988)
Acknowledgments 92
References 92
Chapter 11. Search Control, Utility, and Concept Induction 94
Abstract 94
1 Introduction 94
2 Search and Concept Induction 94
3 The COBWEB Concept Formation System 95
4 An Example: Search Control in Parsing 97
5 Concluding Remarks 99
Acknowledgements 100
References 100
Chapter 12. Graph Clustering and Model Learning by Data Compression 102
Abstract 102
1 Introduction 102
2 Layered graph 102
3 Probabilistic graph 103
4 Minimal representation criterion 103
5 Matching graphs 103
6 Recognition and interpretation 105
7 Learning 105
8 Application example: Learning shape models 106
9 Conclusion 107
References 110
PART 3: CONSTRUCTIVE INDUCTION AND REFORMULATION 112
Chapter 13. An Analysis of Representation Shift In Concept Learning 113
Abstract 113
1 Introduction 113
2 Motivation and Overview 113
3 A framework for analyzing representation shift 114
4 Applications of the framework 116
5 Conclusions 118
6 Acknowledgements 119
References 120
Chapter 14. Learning Procedures by Environment-Driven Constructive Induction 122
Abstract 122
1 Introduction 122
2 Learning Procedures 123
3 Examples 125
4 Discussion 129
5 Conclusion 130
References 130
Chapter 15. Beyond Inversion of Resolution 131
Abstract 131
1 Introduction 131
2 Inversion of resolution 132
3 Saturation 134
4. Intraconstruction and Truncation revisited 137
5 Conclusion 139
References 139
PART 4: GENETIC ALGORITHMS 140
Chapter 16. GENETIC PROGRAMMING: Building Artificial Nervous Systems Using Genetically Programmed Neural Network Modules 141
Keywords 141
Abstract 141
Introduction 141
1 The Genetic Algorithm 141
2 Genetic Programming 142
3 The GenNet 142
4 Results 143
5 Future Research - Brain Building and the Darwin Machine 144
Acknowledgments 146
References 146
Chapter 17. Improving the Performance of Genetic Algorithms in Automated Discovery of Parameters 149
Abstract 149
1 Introduction 149
2 Functional Description of the Multiple Sharing Evaluation Functions 150
3 Functional Description of the Parallel/Distributed Schema 153
4 Functional Description of the Neural Network Module 154
5 Experimental Results 155
6 Conclusion 157
7 References 157
Chapter 18. Using Genetic Algorithms to Learn Disjunctive Rules from Examples 158
Abstract 158
1 Introduction 158
2 Example Sharing 158
3 Increasing GA Exploration 159
4 Comparison with "Boole"—Experimental Results 160
5 Future Work 160
6 Conclusions 160
Acknowledgments 161
References 161
Chapter 19. NEWBOOLE: A Fast GBML System 162
Abstract 162
1 Introduction 162
2. Slow learning rates in GBML systems 162
3. The BOOLE Classifier System 163
4. The NEWBOOLE CS 163
5. Conclusion 168
6. References 168
PART 5: NEURAL NETWORK & REINFORCEMENT LEARNING
Chapter 20. Learning Functions in k-DNF from Reinforcement 171
Abstract 171
1 Reinforcement Learning 171
2 Complexity Versus Efficiency 171
3 Connectionist Methods for Learning k-DNF 172
4 Interval-Estimation Algorithm for k-DNF 173
5 Empirical Comparison 174
6 Relaxing the Assumptions 177
7 Conclusion 177
References 178
Chapter 21. Is Learning Rate a Good Performance Criterion for Learning? 179
Abstract 179
1. Introduction 179
2. The Problem 179
3. Experiment 1: Voting 182
4. Experiment 2: Coercion 182
5. Experiment 3: Merging 183
6. Experimental Variations and Future Experiments 185
7. Discussion 186
Acknowledgments 186
References 186
Chapter 22. Active Perception and Reinforcement Learning 188
Abstract 188
1 Introduction 188
2 Foundations 189
3 Perceptual Aliasing 192
4 Dealing with Perceptual Aliasing 193
5 An Example 195
6 Conclusions 196
Acknowledgments 196
References 196
PART 6: LEARNING AND PLANNING 198
Chapter 23. Learning Plans for Competitive Domains 199
Abstract 199
1. Introduction 199
2. An Overview of HOYLE 199
3. An Example of a Fork in Execution 200
4. Definition and Representation of Forks 201
5. The Application of Forks in Two-agent Domains 202
6. An Algorithm that Learns Forks 204
7. Measuring Learning in a Tournament 204
8. Results and Future Work 205
References 206
Chapter 24. Explanations of Empirically Derived Reactive Plans 207
Abstract 207
1 Introduction 207
2 The Evasive Maneuvers Problem 208
3 Explaining Empirically Derived Rules 209
4 Future Work 211
5 Summary 212
References 212
Chapter 25. Learning and Enforcement: Stabilizing environments to facilitate activity 213
Abstract 213
1 Stability, change, and enforcement 213
2 Planning, learning and enforcement 214
3 Opportunism and enforcement: An example 214
4 Stability and enforcement 215
5 A Model of Agency 218
6 A Framework for the Study of Agency 218
7 The point 219
8 Acknowledgements 219
9 References 219
Chapter 26. Simulation-Assisted Learning by Competition: Effects of Noise Differences Between Training Model and Target Environment 220
Abstract 220
1 Introduction 220
2 The Evasive Maneuvers Problem 221
3 SAMUEL on EM 221
4 Evaluation of the Method 222
5 Summary and Further Research 223
References 224
Chapter 27. Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming 225
Abstract 225
1 Introduction to Dyna 225
2 Dyna-PI: Dyna by Approximating Policy Iteration 226
3 A Navigation Task 227
4 Problems of Changing Worlds 229
5 Dyna-Q: Dyna by Q-learning 229
6 Changing-World Experiments 230
7 Limitations and Conclusions 232
Acknowledgments 232
References 232
PART 7: ROBOT LEARNING 234
Chapter 28. Reducing Real-world Failures of Approximate Explanation-based Rules 235
ABSTRACT 235
1. Introduction 235
2. Approximations 236
3. A General Tuning Algorithm 237
4. An Example Illustrating the Algorithm 239
5. Experimental Results 242
6. Future Work & Conclusions
Acknowledgements 243
References 243
Chapter 29. Correcting and Extending Domain Knowledge using Outside Guidance 244
Abstract 244
1 Introduction 244
2 System Architecture 245
3 Learning Control Knowledge 247
4 Correcting Control Knowledge 248
5 Correcting Domain Knowledge 250
6 Extending Domain Knowledge 250
7 Discussion 251
References 252
Chapter 30. Acquisition of Dynamic Control Knowledge for a Robotic Manipulator 253
Abstract 253
1 Introduction 253
2 Proximal Learning 254
3 Using saft-trees for Control Choice 255
4 Proximal Control 256
5 Experimental Results 257
6 Conclusion 260
References 261
Chapter 31. Feature Extraction and Clustering of Tactile Impressions with Connectionist Models 262
Abstract 262
1 Introduction 262
2 The Training Data 262
3 Network Description 263
4 Feature Extraction 264
5 Clustering Groups of Patterns 266
6 Pattern Classification 267
7 Summary 267
References 267
PART 8: EXPLANATION-BASED LEARNING 268
Chapter 32. Generalizing the Order of Goals as an Approach to Generalizing Number 269
Abstract 269
1 Introduction 269
2 The Algorithm N 270
3 Interpretation of Generalized Precedence Graphs 273
4 Coheirs Algorithm 274
5 Results 274
6 Conclusions 275
Acknowledgements 276
References 276
Chapter 33. Learning Approximate Control Rules Of High Utility 277
Abstract 277
1 Introduction 277
2 The learning problem 278
3 The learning algorithms 279
4 Experimental results 279
5 Related work 283
6 Conclusions 284
Acknowledgements 284
References 284
Chapter 34. Applying Abstraction and Simplification to Learn in Intractable Domains 286
Abstract 286
1 Introduction 286
2 Approach 287
3 Domain: Quinlan's lost-in-n-ply 288
4 Method 289
5 An evaluation in lost-in-n-ply 291
6 Related Work 293
7 Discussion 294
8 Bibliography 294
Chapter 35. Explanation-Based Learning with Incomplete Theories: A Three-step Approach 295
Abstract 295
1 Introduction 295
2 The Three-step Approach To Deal With Incomplete Theories 295
3 An Example: The System LISE 297
4 Conclusion 301
5 Future Work 302
Acknowledgement 302
References 302
Chapter 36. Using Abductive Recovery of Failed Proofs for Problem Solving by Analogy 304
Abstract 304
1 Introduction 304
2 Causal knowledge in creative analogies 305
3 Inversion of resolution 306
4 Formation of a new theory 307
5 Recovering from plan failures 309
6 Application to Analogy 309
7 A detailed example of the generation of a new plan by abductive recovery of the failure of the old plan 310
8 Conclusion 312
References 312
Chapter 37. Issues in the Design of Operator Composition Systems 313
Abstract 313
1 Introduction 313
2 Assumptions and Terminology 313
3 The Effects of Macro Learning 314
4 The Reordering Effect 315
5 Decreasing Path Cost 318
6 Eliminating Redundancy 320
7 Conclusion 320
8 Acknowledgments 321
References 321
Chapter 38. Incremental Learning of Explanation Patterns and their Indices 322
Abstract 322
1 Case-based learning 322
2 Explanation patterns 323
3 Learning explanation patterns 323
4 The AQUA program 323
5 Learning indices for explanation patterns 324
6 Modifying existing explanation patterns 327
7 Conclusions 329
References 329
PART 9: EXPLANATION-BASED AND EMPIRICAL LEARNING 330
Chapter 39. Integrated Learning in a Real Domain 331
Abstract 331
1. Introduction 331
2. The Learning Problem 331
3. The Learning System 332
4. The Learning Set 333
5. Results with the Expert System MEPS 335
6. Results Obtained Using ENIGMA 335
7. Discussion 336
8. Conclusions 337
References 338
Chapter 40. Incremental Version-Space Merging 339
Abstract 339
1 Introduction 339
2 Version Spaces and the Candidate-Elimination Algorithm 339
3 Incremental Version-Space Merging 340
4 The Candidate-Elimination Algorithm: Emulation and Extensions 341
5 Combining Empirical and Analytical Learning 343
6 Computational Complexity 346
7 Summary 347
Acknowledgments 347
References 347
Chapter 41. Average Case Analysis of Conjunctive Learning Algorithms 348
Abstract 348
1 Introduction 348
2 The Average Case Learning Model 349
3 Conclusion 355
Acknowledgements 355
References 355
Chapter 42. ILS: A Framework for Multi-Paradigmatic Learning 357
Abstract 357
1. Introduction 357
2. Some Learning Agents 358
3. The Problem-Solving and Learning Tasks 360
4. The ILS Protocol 361
5. The Learning Coordinator 362
6. Inter-agent Cooperation 363
7. Conclusions and Further Work 364
Chapter 43. An Integrated Framework of Inducing Rules From Examples 366
ABSTRACT 366
1. Introduction 366
2. Exemplary Domain 366
3. Characteristics of inference rules 367
4. An Integrated Approach: IR1 368
5. Experiments 372
6. Concluding Remarks 372
ACKNOWLEDGEMENTS 373
REFERENCES 373
PART 10: LANGUAGE LEARNING 376
Chapter 44. Adaptive Parsing: A General Method for Learning Idiosyncratic Grammars 377
Abstract 377
1. Introduction 377
2. The Model 377
3. CHAMP, an Adaptive Interface 379
4. Adaptation and Generalization in CHAMP 380
5. Analysis of the Utility of Adaptation 382
6. Summary and Future Work 384
References 385
Chapter 45. A Comparison of Learning Techniques in Second Language Learning 386
Abstract 386
1 Introduction 386
2 An example of ANT's performance 386
3 Why ANT needs instructions and examples 388
4 Learning from only instructions or only examples 388
5 An empirical comparison 389
6 Conclusion 392
References 392
Chapter 46. Learning String Patterns and Tree Patterns from Examples 393
Abstract 393
1 Introduction 393
2 Preliminaries 395
3 Learning Patterns from Positive and Negative Examples 395
4 Learning Patterns from Incomplete Examples 396
5 Tree Patterns 397
6 Conclusions 400
7 Acknowledgements 400
References 400
Chapter 47. Learning with Discrete Multi-Valued Neurons 401
Abstract 401
1 Introduction 401
2 A General Framework for Learning 401
3 A k-ary Perceptron Learning Rule 402
4 A k-ary Winnow Algorithm 404
5 Conclusion 407
Acknowledgements 408
References 408
PART 11: OTHER TOPICS 410
Chapter 48. The General Utility Problem in Machine Learning 411
Abstract 411
1 Introduction 411
2 Utility Problem in Learning 411
3 General Utility Problem 413
4 Performance-Driven Knowledge Transformation 414
5 Experimentation 416
6 Conclusions 418
7 Acknowledgments 419
8 References 419
Chapter 49. A Robust Approach to Numeric Discovery 420
Abstract 420
1 Introduction 420
2 Numeric Discovery in IDS 420
3 Experimental Studies of Numeric Discovery 423
4 Discussion 426
References 427
Chapter 50. More Results on the Complexity of Knowledge Base Refinement: Belief Networks 428
Abstract 428
1. Introduction 428
2. Formalizing Mass Refinement 429
3. Initial Mass Synthesis Is NP-Hard 430
4. Mass Refinement Is NP-Hard 430
5. Related Work 431
6. Conclusion 431
Acknowledgements 432
References 432
INDEX 436

Publication date (per publisher) 23.5.2014
Language English
Subject area Computer Science / Theory & Study / Artificial Intelligence & Robotics
ISBN-10 1-4832-9858-2 / 1483298582
ISBN-13 978-1-4832-9858-0 / 9781483298580
PDF (Adobe DRM)

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. During download, the eBook is authorized to your personal Adobe ID; you can then read it only on devices that are also registered to that Adobe ID.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for technical books with columns, tables and figures. A PDF can be displayed on almost any device, but it is only of limited use on small screens (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free Adobe Digital Editions software. We advise against using the OverDrive Media Console, as it frequently causes problems with Adobe DRM.
eReader: This eBook can be read with (almost) all eBook readers, but it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.

Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
