
Machine Learning Proceedings 1994 (eBook)

Proceedings of the Eighth International Conference

William W. Cohen (Editor)

eBook Download: PDF
2014 | 1st edition
381 pages
Elsevier Science (publisher)
978-1-4832-9818-4 (ISBN)
€53.95 incl. VAT (CHF 52.70)
Machine Learning Proceedings 1994

Front Cover 1
Machine Learning 2
Copyright Page 3
Table of Contents 4
Preface 8
WORKSHOPS 10
TUTORIALS 10
ORGANIZING COMMITTEE 10
PROGRAM COMMITTEE 11
PART 1: CONTRIBUTED PAPERS 18
Chapter 1. A New Method for Predicting Protein Secondary Structures Based on Stochastic … 20
Abstract 20
1 Introduction 20
2 Stochastic Ranked Node … 22
3 Modeling Beta Sheet Structures 23
4 Learning and Parsing of a … 23
5 Experimental Results 26
6 Concluding Remarks 27
References 27
Chapter 2. Learning Recursive Relations … 29
Abstract 29
1 MOTIVATION 29
2 Review of CRUSTACEAN 30
3 EVALUATION 32
4 RELATED WORK 34
5 CONCLUSION 35
Acknowledgements 35
References 35
Chapter 3. Improving Accuracy of Incorrect Domain … 36
Abstract 36
1 INTRODUCTION 36
2 KNOWLEDGE INTENSIVE … 36
3 A DESCRIPTION OF GENTRE 37
4 EXPERIMENTAL EVALUATION 40
5 CONCLUSIONS 43
Acknowledgements 44
References 44
Chapter 4. Greedy Attribute Selection 45
Abstract 45
1 INTRODUCTION 45
2 ATTRIBUTE SELECTION IN CAP 46
3 ATTRIBUTE HILLCLIMBING 46
4 CACHING TO SPEED SEARCH 47
5 EMPIRICAL ANALYSIS 49
6 FOCUS and RELIEF 51
7 CONCLUSION 53
Acknowledgements 53
References 53
Chapter 5. Using Sampling and Queries to Extract Rules from … 54
Abstract 54
1 INTRODUCTION 54
2 RULE EXTRACTION AS … 55
3 RULE EXTRACTION AS … 57
4 EXTRACTING M-of-N … 60
5 FUTURE WORK 61
6 CONCLUSIONS 62
Acknowledgements 62
References 62
Chapter 6. The Generate, Test, and Explain Discovery System Architecture 63
Abstract 63
1 INTRODUCTION AND MOTIVATION 63
2 ARCHITECTURE 63
3 APPLICATIONS 64
4 RELATED WORK 66
5 LIMITATIONS, FUTURE WORK, AND … 67
Acknowledgments 67
References 68
Chapter 7. Boosting and Other Machine Learning Algorithms 70
Abstract 70
1. INTRODUCTION 70
2. PROCEDURE 70
3. OTHER MACHINE LEARNING … 74
4. CONCLUSIONS 74
References 78
Chapter 8. In Defense of C4.5: Notes on Learning One-Level Decision Trees 79
Abstract 79
1 INTRODUCTION 79
2 PREDICTION ACCURACY 79
3 TEST DOMAINS 80
4 UPPER BOUND ON CLASSIFICATION … 82
5 THE COMPLEXITY OF A CLASSIFIER 83
6 RELATED WORK 84
7 CONCLUSION 85
Acknowledgements 85
References 85
Chapter 9. Incremental Reduced Error Pruning 87
Abstract 87
1 INTRODUCTION 87
2 SOME PROBLEMS WITH REDUCED … 87
3 COHEN'S GROW … 89
4 INCREMENTAL … 89
5 EXPERIMENTS 90
6 CONCLUSION 92
Acknowledgements 93
References 93
Chapter 10. An Incremental Learning Approach for Completable … 95
Abstract 95
1 INTRODUCTION 95
2 COMPLETABLE PLANNING 96
3 LEARNING COMPLETABLE PLANS 98
4 EXPERIMENTS 101
5 DISCUSSION 102
Acknowledgments 103
References 103
Chapter 11. Learning by Experimentation: Incremental Refinement of … 104
Abstract 104
1 Introduction 104
2 Planning with Incomplete Models 105
3 Incremental Refinement of Planning … 106
4 Empirical Results 108
5 Conclusion 110
Acknowledgments 111
References 111
Chapter 12. Learning Disjunctive Concepts by Means of Genetic … 113
Abstract 113
1 INTRODUCTION 113
2 SYSTEM OVERVIEW 114
3 THE UNIVERSAL SUFFRAGE SELECTION … 115
4 TORIES AND WHIGS EVOLUTION STRATEGY 117
5 EVALUATION OF REGAL USING … 119
6 CONCLUSIONS 119
REFERENCES 120
Chapter 13. Consideration of Risk in Reinforcement Learning 122
Abstract 122
1 INTRODUCTION 122
2 PROBLEMS OF THE EXPECTED … 123
3 DECISION CRITERIA 124
4 DYNAMIC PROGRAMMING FOR THE … 125
5 Q-LEARNING 126
6 RELATED WORK 128
7 CONCLUSIONS AND FUTURE WORK 128
Acknowledgements 128
References 128
Chapter 14. Rule Induction for Semantic Query Optimization 129
Abstract 129
1 INTRODUCTION 129
2 SEMANTIC QUERY … 130
3 OVERVIEW OF THE LEARNING … 131
4 LEARNING ALTERNATIVE … 132
5 Experimental Results 135
6 RELATED WORK 136
Acknowledgements 137
References 137
Chapter 15. Irrelevant Features and the Subset Selection Problem 138
Abstract 138
1 INTRODUCTION 138
2 DEFINING RELEVANCE 139
3 FEATURE SUBSET SELECTION 140
4 EXPERIMENTAL RESULTS 142
5 RELATED WORK 144
6 DISCUSSION AND FUTURE … 144
Acknowledgements 145
References 145
Chapter 16. An Efficient Subsumption Algorithm for Inductive Logic Programming 147
Abstract 147
1 INTRODUCTION 147
2 MAKING PROVABILITY DECIDABLE 147
3 MAKING PROVABILITY EFFICIENT 148
4 IMPROVING THE REDUCTION OF … 151
5 TESTING THE EFFICIENCY 151
6 CONCLUSION 154
Acknowledgements 154
References 154
Chapter 17. Getting the Most from Flawed Theories 156
Abstract 156
1. INTRODUCTION 156
2. DEGREE OF PROVEDNESS 157
3. TWO PROBLEMS REVISITED 158
4. EMPIRICAL RESULTS 160
5. DISCUSSION 162
6. CONCLUSION 163
Acknowledgments 164
References 164
Chapter 18. Heterogeneous Uncertainty Sampling for Supervised Learning 165
Abstract 165
1 Introduction 165
2 Background 165
3 Heterogeneous Uncertainty Sampling 166
4 Task and Data Set 167
5 Training C4.5 with Text Data 167
6 Experiment Design 168
7 Results 168
8 Discussion 171
9 Future Work 171
10 Summary 171
Acknowledgements 172
References 172
Chapter 19. Markov games as a framework for multi-agent reinforcement learning 174
Abstract 174
1 INTRODUCTION 174
2 DEFINITIONS 174
3 OPTIMAL POLICIES 175
4 FINDING OPTIMAL POLICIES 175
5 LEARNING OPTIMAL POLICIES 176
6 EXPERIMENTS 176
7 DISCUSSION 179
Acknowledgments 179
References 180
Chapter 20. To Discount or not to Discount in Reinforcement Learning: A … 181
Abstract 181
1 Introduction and Motivation 181
2 Two Reinforcement Learning … 182
3 A Simulated Robot Box-Pushing … 183
4 Experimental Design 183
5 Experiment 1: Boltzmann … 184
6 Experiment 2: Semi-Uniform … 186
7 Experiment 3: Recency-based … 187
8 Summary of Results and Discussion 187
9 Limitations of Our Study 188
10 Future Work 188
11 Acknowledgements 188
References 189
Chapter 21. Comparing Methods for Refining Certainty-Factor Rule-Bases 190
Abstract 190
1 INTRODUCTION 190
2 BACKGROUND 191
3 RAPTURE 191
4 EXPERIMENTAL RESULTS 193
5 RELATED WORK 194
6 FUTURE WORK 195
7 CONCLUSIONS 196
Acknowledgements 196
References 196
Chapter 22. Reward Functions for Accelerated Learning 198
Abstract 198
1 INTRODUCTION 198
2 LEARNING IN SITUATED DOMAINS 198
3 DESIGNING REWARD FUNCTIONS 200
4 EXPERIMENTAL DESIGN 201
5 EXPERIMENTAL RESULTS 203
6 DISCUSSION 204
7 SUMMARY 205
Acknowledgements 205
References 206
Chapter 23. Efficient Algorithms for Minimizing Cross Validation Error 207
Abstract 207
1 INTRODUCTION 207
2 RACING THE CROSS VALIDATION … 208
3 SEARCHING FOR SETS OF GOOD … 210
4 EXPERIMENTS 212
5 DISCUSSION 213
6 CONCLUSION 213
APPENDIX 213
ACKNOWLEDGEMENTS 214
References 215
Chapter 24. Revision of Production System Rule-Bases 216
Abstract 216
1 Introduction 216
2 The CLIPS Production System Language 217
3 Theory Revision 218
4 Experiments 222
5 Related Work 223
6 Conclusion 224
Acknowledgments 224
References 224
Chapter 25. Using Genetic Search to Refine Knowledge-Based Neural Networks 225
Abstract 225
1 INTRODUCTION 225
2 REVIEW OF KBANN & TOPGEN
3 THE REGENT ALGORITHM 227
4 EXPERIMENTAL RESULTS 229
5 DISCUSSION & FUTURE WORK
6 RELATED WORK 231
7 CONCLUSION 232
Acknowledgements 232
References 232
Chapter 26. Reducing Misclassification Costs 234
Abstract 234
1 INTRODUCTION 234
2 DEFINITIONS 235
3 INDUCTIVE LEARNING WITH … 235
4 OVERFITTING AVOIDANCE 239
5 KNOWLEDGE INTENSIVE METHODS 240
6 CONCLUSIONS 241
Acknowledgements 242
References 242
Chapter 27. Incremental Multi-Step Q-Learning 243
Abstract 243
1 INTRODUCTION 243
2 TD(λ) RETURNS 243
3 ONE-STEP Q-LEARNING 244
4 Q(λ)-LEARNING 245
5 EXPERIMENTAL DEMONSTRATION 246
6 CONCLUSION 248
Acknowledgements 249
References 249
Chapter 28. The Minimum Description Length Principle and Categorical Theories 250
Abstract 250
1 INTRODUCTION 250
2 THE MDL PRINCIPLE 250
3 THE PROBLEM 251
4 A MODIFIED APPROACH 252
5 LEARNING DNF DEFINITIONS OF … 252
6 LEARNING HORN CLAUSE … 256
7 CONCLUSION 257
Acknowledgements 258
References 258
Chapter 29. Towards a Better Understanding … 259
Abstract 259
1 INTRODUCTION 259
2 DESCRIPTIONS AND … 260
3 EMPIRICAL STUDIES ON … 261
4 TESTS ON ARTIFICIAL DATA 261
5 ANALYSIS 264
6 CONCLUSION 267
Acknowledgements 267
References 267
Chapter 30. Hierarchical Self-Organization in Genetic Programming 268
Abstract 268
1 INTRODUCTION 268
2 BUILDING BLOCKS IN GP 268
3 ANALYSIS OF CANDIDATE BUILDING … 269
4 ADAPTIVE REPRESENTATION 270
5 DESCRIPTIONAL COMPLEXITY IN … 270
6 EXPERIMENTAL RESULTS 271
7 RELATED WORK 273
8 CONCLUSIONS AND FUTURE WORK 273
APPENDIX: Minimum Description Length Principle for Hierarchical Organizations of … 274
Acknowledgements 274
References 275
Chapter 31. A Conservation Law for Generalization Performance 276
Abstract 276
1 Introduction 276
2 Conservation of Generalization … 276
3 Possible and Impossible Learners 277
4 Two Questions about Bias 278
5 Some Implications of the … 278
6 Practical Implications 280
7 Memorization and Overall … 281
8 Related Work 281
Acknowledgements 281
References 281
Chapter 32. On the Worst-case Analysis of … 283
Abstract 283
1 INTRODUCTION 283
2 THE PREDICTION MODEL 284
3 TEMPORAL-DIFFERENCE … 285
4 UPPER BOUNDS FOR … 286
5 A LOWER BOUND 288
6 BATCH-MODE ALGORITHMS 289
7 DISCUSSION 290
Acknowledgements 290
References 291
APPENDIX 291
Chapter 33. A Constraint-Based Induction Algorithm in FOL 292
Abstract 292
1 INTRODUCTION 292
2 FORMALIZATION 293
3 BUILDING G 297
4 CHARACTERIZING G 298
5 CONCLUSION AND … 300
Acknowledgements 300
References 300
Chapter 34. Learning Without State-Estimation in Partially Observable … 301
Abstract 301
1 INTRODUCTION 301
2 PREVIOUS APPROACHES 302
3 PROBLEM FORMULATION 302
4 EVALUATING A FIXED POLICY 304
5 OPTIMAL CONTROL 305
6 DISCUSSION 307
7 Conclusion 307
Acknowledgements 308
References 308
Chapter 35. Prototype and Feature Selection by Sampling and Random Mutation Hill … 310
Abstract 310
1 Introduction 310
2 The Algorithms 312
3 Discussion 314
4 When will Monte Carlo sampling work? 314
5 Related research 316
6 Conclusion 316
7 Acknowledgments 316
References 316
Chapter 36. A Bayesian Framework to Integrate Symbolic and Neural Learning 319
Abstract 319
1 INTRODUCTION 319
2 A BAYESIAN FRAMEWORK 319
3 APPLYING FRAMEWORK TO GET KNOWLEDGE FROM A NEURAL NETWORK 323
4 EXPERIMENTS 323
5 DISCUSSION AND CONCLUSION 324
References 325
Chapter 37. A Modular Q-Learning Architecture for Manipulator Task Decomposition 326
Abstract 326
1 INTRODUCTION 326
2 REINFORCEMENT LEARNING 326
3 COMPOSITIONAL Q-LEARNING (CQ-L) 327
4 ROBOT SIMULATION 329
5 CEREBELLAR MODEL ARTICULATION CONTROLLER (CMAC) 330
6 EXPERIMENT DETAILS 330
7 RESULTS 331
8 RELATED WORK 332
9 CONCLUSION 333
Acknowledgements 333
References 334
Chapter 38. An Improved Algorithm for … 335
Abstract 335
1 Introduction 335
2 Design Goals 335
3 An Improved Algorithm 336
4 Incremental Training Cost 338
5 Error-Correction Mode 339
6 Inconsistent Training Instances 340
7 Direct Metrics for Attribute … 340
8 Lazy Restructuring 341
9 Research Agenda 341
10 Summary 341
Acknowledgements 342
References 342
Chapter 39. A Powerful Heuristic for the Discovery of Complex Patterned Behavior 343
Abstract 343
1 INTRODUCTION 343
2 BACKGROUND 344
3 A HEURISTIC FOR PATTERN … 345
4 AN APPLICATION TO PSYCHOLOGY 348
5 DISCUSSION 349
6 CONCLUSION 350
Acknowledgements 350
References 350
Chapter 40. Small Sample Decision Tree Pruning 352
Abstract 352
1 Introduction 352
2 Tree Pruning and Selection 352
3 Basic Principles 353
4 Sources of Error in Pruning 355
5 Methods 355
6 Results 356
7 Discussion 357
References 359
Chapter 41. Combining Top-down and Bottom-up Techniques … 360
Abstract 360
1 INTRODUCTION 360
2 THE CHILLIN … 361
3 EXPERIMENTAL EVALUATION 364
4 RELATED RESEARCH 367
5 FUTURE WORK 367
6 CONCLUSION 368
Acknowledgment 368
References 368
Chapter 42. Selective Reformulation of Examples in Concept … 369
Abstract 369
1 INTRODUCTION 369
2 DOMAIN EXAMPLES AND LEARNING … 370
3 MORIOLOGICAL REFORMULATION 373
4 THE REMO SYSTEM 375
5 CONCLUSION 376
Acknowledgments 376
References 376
PART 2: INVITED … 378
Chapter 43. A Statistical Approach to Decision Tree Modeling 380
Abstract 380
1 INTRODUCTION 380
2 PROBABILITY MODELS FOR … 382
3 POSTERIOR CREDIT AND THE EM … 381
4 COMPONENT DENSITIES 384
5 EFFICIENCY 386
6 MODEL SELECTION 386
7 HIDDEN MARKOV DECISION TREES 385
8 DISCUSSION 385
Acknowledgments 387
References 387
Chapter 44. Bayesian Inductive Logic Programming 388
Abstract 388
1 INTRODUCTION 388
2 BAYES' ILP DEFINITIONS 389
3 APPLICATIONS IN MOLECULAR … 389
4 U-LEARNABILITY 391
5 DISCUSSION 394
Acknowledgements 394
References 395
Chapter 45. Frequencies vs Biases: Machine Learning Problems in Natural Language … 397
INDEX 398

Publication date (per publisher) 28.6.2014
Language English
Subject area Computer Science · Theory / Studies · Artificial Intelligence / Robotics
ISBN-10 1-4832-9818-3 / 1483298183
ISBN-13 978-1-4832-9818-4 / 9781483298184
PDF (Adobe DRM)

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. At download time the eBook is authorized to your personal Adobe ID; you can then read it only on devices that are also registered to that Adobe ID.

File format: PDF (Portable Document Format)
With its fixed page layout, the PDF format is particularly suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device, but is only of limited use on small screens (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free Adobe Digital Editions software. We advise against using the OverDrive Media Console, as it is known to cause frequent problems with Adobe DRM.
eReader: This eBook can be read on (almost) all eBook readers; it is not compatible with the Amazon Kindle, however.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.

Buying eBooks from abroad
For tax-law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
