Handbook of Automated Reasoning (eBook)
2128 pages
Elsevier Science (publisher)
9780080532790 (ISBN)
Handbook of Automated Reasoning
Front Cover 1
Handbook of Automated Reasoning, Volume II 4
Copyright Page 5
Preface 6
Contents 10
Part V: Higher-order logic and logical frameworks 28
Chapter 15. Classical Type Theory 30
1 Introduction to type theory 32
2 Metatheoretical foundations 42
3 Proof search 52
4 Conclusion 63
Bibliography 64
Index 70
Chapter 16. Higher-Order Unification And Matching 74
1 Type Theory and Other Set Theories 76
2 Simply Typed λ-calculus 83
3 Undecidability 89
4 Huet's Algorithm 93
5 Scopes Management 100
6 Decidable Subcases 106
7 Unification in λ-calculus with Dependent Types 114
Bibliography 119
Index 126
Chapter 17. Logical Frameworks 128
1 Introduction 130
2 Abstract syntax 132
3 Judgments and deductions 140
4 Meta-programming and proof search 160
5 Representing meta-theory 173
6 Appendix: the simply-typed λ-calculus 184
7 Appendix: the dependently typed λ-calculus 188
8 Conclusion 195
Bibliography 200
Index 210
Chapter 18. Proof-Assistants Using Dependent Type Systems 214
1 Proof checking 216
2 Type-theoretic notions for proof checking 218
3 Type systems for proof checking 245
4 Proof-development in type systems 276
5 Proof assistants 288
Bibliography 295
Index 300
Name index 303
Part VI: Nonclassical logics 304
Chapter 19. Nonmonotonic Reasoning: Towards Efficient Calculi And Implementations 306
1 General Nonmonotonic Logics 309
2 Automating General Nonmonotonic Logics 325
3 From Automated Reasoning to Disjunctive Logic Programming 345
4 Nonmonotonic Semantics of Logic Programs 362
5 Implementing Nonmonotonic Semantics 376
6 Benchmarks 397
7 Conclusion 405
Bibliography 406
Index 417
Chapter 20. Automated Deduction For Many-Valued Logics 420
1 Introduction 422
2 What is a many-valued logic? 423
3 Classification of proof systems for many-valued logics 426
4 Signed logic: reasoning classically about finitely-valued logics 433
5 Signed resolution 442
6 An example 454
7 Optimization of transformation rules 458
8 Remarks on infinitely-valued logics 460
Bibliography 461
Index 466
Chapter 21. Encoding Two-Valued Nonclassical Logics In Classical Logic 468
1 Introduction 470
2 Background 475
3 Encoding consequence relations 484
4 The standard relational translation 488
5 The functional translation 505
6 The semi-functional translation 520
7 Variations and alternatives 530
8 Conclusion 540
Bibliography 542
Index 549
Chapter 22. Connections In Nonclassical Logics 552
1 Introduction 554
2 Prelude: Connections in classical first-order logic 556
3 Labelled systems 581
4 Propositional intuitionistic logic 593
5 First-order intuitionistic logic 610
6 Normal modal logics up to S4 618
7 The S5 family 632
Bibliography 638
Index 642
Part VII: Decidable classes and model building 644
Chapter 23. Reasoning In Expressive Description Logics 646
1 Introduction 648
2 Description Logics 651
3 Description Logics and Propositional Dynamic Logics 658
4 Unrestricted Model Reasoning 663
5 Finite Model Reasoning 675
6 Beyond Basic Description Logics 684
7 Conclusions 691
Bibliography 691
Index 698
Chapter 24. Model Checking 700
1 Introduction 702
2 Logical Languages, Expressiveness 706
3 Second Order Languages 719
4 Model Transformations and Properties 735
5 Equivalence reductions 746
6 Completeness 754
7 Decision Procedures 765
8 Basic Model Checking Algorithms 776
9 Modelling of Reactive Systems 789
10 Symbolic Model Checking 800
11 Partial Order Techniques 816
12 Bounded Model Checking 820
13 Abstractions 824
14 Compositionality and Modular Verification 829
15 Further Topics 832
Bibliography 839
Index 853
Chapter 25. Resolution Decision Procedures 856
1 Introduction 858
2 Notation and definitions 859
3 Decision procedures based on ordering refinements 867
4 Hyperresolution as decision procedure 879
5 Resolution decision procedures for description logics 895
6 Related work 907
Bibliography 908
Index 912
Part VIII: Implementation 916
Chapter 26. Term Indexing 918
1 Introduction 920
2 Background 924
3 Data structures for representing terms and indexes 931
4 A common framework for indexing 935
5 Path indexing 940
6 Discrimination trees 948
7 Adaptive automata 956
8 Automata-driven indexing 965
9 Code trees 973
10 Substitution trees 982
11 Context trees 987
12 Unification factoring 989
13 Multiterm indexing 992
14 Issues in perfect filtering 999
15 Indexing modulo AC-theories 1004
16 Elements of term indexing 1008
17 Indexing in practice 1016
18 Conclusion 1020
Bibliography 1022
Index 1027
Chapter 27. Combining Superposition, Sorts And Splitting 1030
1 What This Chapter is (not) About 1032
2 Foundations 1034
3 A First Simple Prover 1038
4 Inference and Reduction Rules 1046
5 Global Design Decisions 1065
Bibliography 1073
A Links to Saturation Based Provers 1076
Index 1077
Chapter 28. Model Elimination And Connection Tableau Procedures 1080
1 Introduction 1082
2 Clausal Tableaux and Connectedness 1083
3 Further Structural Refinements of Clausal Tableaux 1101
4 Global Pruning Methods in Model Elimination 1105
5 Shortening of Proofs 1114
6 Completeness of Connection Tableaux 1127
7 Architectures of Model Elimination Implementations 1135
8 Implementation of Refinements by Constraints 1157
9 Experimental Results 1167
10 Outlook 1172
Bibliography 1174
Index 1178
The Early History of Automated Deduction: Dedicated to the memory of Hao Wang
Martin Davis
With the ready availability of serious computer power, deductive reasoning, especially as embodied in mathematics, presented an ideal target for those interested in experimenting with computer programs that purported to implement the “higher” human faculties. This was because mathematical reasoning combines objectivity with creativity in a way difficult to find in other domains. For this endeavor, two paths presented themselves. One was to try to understand what people do when they create proofs and to write programs emulating that process. The other was to make use of the systematic work of the logicians in reducing logical reasoning to standard canonical forms on which algorithms could be based. Each path confronted daunting obstacles. The difficulty with the first approach was that available information about how creative mathematicians go about their business was and remains vague and anecdotal. On the other hand, the well-known unsolvability results of Church and Turing showed that the kind of algorithm on which a programmer might want to base a theorem-proving program simply did not exist. Moreover, it was all too obvious that an attempt to generate a proof of something non-trivial by beginning with the axioms of some logical system and systematically applying the rules of inference in all possible directions was sure to lead to a gigantic combinatorial explosion.
Each of these approaches has led to important and interesting work. Unfortunately, for many years the proponents of the two approaches saw themselves as opponents and engaged in polemics in which they largely spoke past each other. One problem was that whereas they appeared to be working on the same problems, they tended to differ not only in their approaches, but also in their fundamental goals. Those whose method was the emulation of the human mathematician tended to see their research as part of an effort to help understand human thought. Those who proposed to use the methods of mathematical logic tended to see the goal as the development of useful systems of automated deduction. Ultimately, the most successful developments incorporated insights deriving from both approaches.
For a brief account of the history of the developments in logic that provided the background for research in this field see [Davis 1983c]. An interesting account of the two approaches and their mutual interaction can be found in [MacKenzie 1995]. The volume [Siekmann and Wrightson 1983] is a useful anthology of the principal articles on automated deduction to appear in the years through 1966.
1 Presburger's Procedure
In 1929, M. Presburger had shown that the first-order theory of addition in the arithmetic of integers is decidable; that is, he had provided an algorithm that determines, for any given sentence of that language, whether or not it is true. In 1954, Martin Davis programmed this algorithm for the vacuum-tube computer at the Institute for Advanced Study in Princeton. As Davis [1983c] put it:
Since it is now known that Presburger's procedure has worse than exponential complexity, it is not surprising that this program did not perform very well. Its great triumph was to prove that the sum of two even numbers is even.
2 Newell, Shaw & Simon, and H. Gelernter
The propositional calculus is the most elementary part of mathematical logic, dealing as it does with the connectives ¬ ∧ ∨ ⊃. Its treatment constitutes Section A of Part I (37 pages) of Whitehead and Russell's Principia Mathematica, their monumental three-volume effort purporting to demonstrate that all of mathematics can be viewed as a part of logic. Their treatment proceeds from five particular formulas, which may be called axioms or primitive propositions, to which are applied the explicitly stated rule of modus ponens or detachment and implicit rules permitting substitutions for propositional variables and the replacement of defined symbols by their definitions. Newell, Shaw and Simon set themselves the problem of producing a computer program that emulates the process by which a person might seek proofs in the propositional calculus of Principia. Although the formalism is simple enough that such a program would be feasible, the process requires enough ingenuity that the problem was hardly trivial.
In Newell, Shaw and Simon's [1957] report on experiments with their “Logic Theory Machine,” (developed around the same time as Davis's Presburger program) the authors are very explicit about their goals:
Our explorations … represent a step in a program of research … aimed at developing a theory … and applying [it] to such fields as computer programming and human learning and problem-solving
Although it would be difficult to claim that this work has helped very much with such an ambitious agenda, it did provide a paradigm employed by many theorem-provers developed later, and this was surely its lasting influence. Among the techniques made explicit were forward and backward chaining, the generation of useful subproblems, and seeking substitutions that produce desired matches.
The authors emphasize that their program is “heuristic” rather than “algorithmic,” and this purported distinction has given rise to much dissension and confusion. In this context, “heuristic” seems to mean little more than the lack of a guarantee that the process will always work (given sufficient space and time). The algorithm they contrast with their own procedure is the “British Museum algorithm,” by which all possible proofs are generated until one leading to the desired result is reached. Indeed, the authors seem to have been unaware that Post's proof of the completeness of the Principia propositional calculus using truth tables had, in effect, supplied a simple algorithm by means of which a demonstration by truth tables could be converted into a proof in Principia [Post 1921].
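Post's observation rests on the fact that propositional validity is mechanically checkable by enumerating all truth assignments. A minimal sketch of this truth-table check in Python (the tuple encoding of formulas and all names here are illustrative, not from the source):

```python
from itertools import product

def variables(f):
    """Collect the propositional variables of a formula given as nested tuples."""
    if isinstance(f, str):
        return {f}
    op, *args = f
    return set().union(*(variables(a) for a in args))

def evaluate(f, env):
    """Evaluate a formula under a truth assignment env: variable -> bool."""
    if isinstance(f, str):
        return env[f]
    op, *args = f
    vals = [evaluate(a, env) for a in args]
    if op == 'not':
        return not vals[0]
    if op == 'and':
        return vals[0] and vals[1]
    if op == 'or':
        return vals[0] or vals[1]
    if op == 'implies':
        return (not vals[0]) or vals[1]
    raise ValueError(f'unknown connective {op!r}')

def is_tautology(f):
    """Post's method: check validity by testing all 2^n truth assignments."""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, row)))
               for row in product([False, True], repeat=len(vs)))

# A Principia-style example: (p ⊃ q) ⊃ ((q ⊃ r) ⊃ (p ⊃ r))
f = ('implies', ('implies', 'p', 'q'),
     ('implies', ('implies', 'q', 'r'), ('implies', 'p', 'r')))
```

The exponential enumeration is exactly why the method is a decision procedure in principle but not a practical proof generator for large variable counts.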
Wang [1960] presented a Gentzen-style proof system for the propositional calculus designed for efficiency. Unlike the program of Newell et al., Wang's system is complete: for any input, processing eventually halts, yielding either a proof or a disproof. The simple examples that are explicitly listed in Principia, including those that stumped the Logic Theory Machine, were easily disposed of. Although Wang seems not to have quite understood that producing an efficient generator of proofs in the propositional calculus was not what the Logic Theory Machine designers were after, they did leave themselves open to Wang's criticism by giving the impression that the absurd British Museum algorithm was the only possible “non-heuristic” proof-generating system for the propositional calculus.
Like the propositional calculus, the elementary geometry of the plane can be specified by a formal system for which an algorithmic decision procedure is available. This is seen by introducing a coordinate system and relying on the reduction of geometry to algebra and Tarski's decision procedure for the algebra of the real numbers. However, unlike the case of truth-table methods for the propositional calculus, this method is utterly infeasible. Although theoretical confirmation of this did not come until much later, it was already apparent from Davis's experience with the much simpler Presburger procedure. Herbert Gelernter's Geometry Machine is very much in the spirit of Newell et al. A clue to Gelernter's orientation is provided by his statement [Gelernter 1959]:
…geometry provides illustrative material in treatises and experiments in human problem-solving. It was felt that we could exchange valuable insights with behavioral scientists…
Technically, in addition to the repertoire of the Logic Theory Machine (backward chaining, the use of subproblems), the Geometry Machine introduced two interesting innovations: the systematic use of symmetries to abbreviate proofs, and the use of a coordinate system to simulate the carefully drawn diagram a student of geometry might employ. The latter was used to tip off the prover that certain pairs of line segments and of angles “appeared” to be equal to one another, and thereby to guide the search for a proof.
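The diagram heuristic can be illustrated with a small numerical filter: measure segments in one concrete coordinate model and flag those that "appear" equal, leaving the actual proof to the search procedure. This is only a sketch of the idea; the point coordinates and function names are invented for illustration:

```python
import math

def seg_len(p, q):
    """Euclidean length of segment pq in the model diagram."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def appear_equal(seg1, seg2, tol=1e-6):
    """Diagram filter: two segments 'appear' equal if their measured
    lengths coincide numerically. A coincidence in one diagram is only
    a hint -- the prover must still derive the equality deductively."""
    return math.isclose(seg_len(*seg1), seg_len(*seg2), abs_tol=tol)

# Isosceles triangle with A = (0,0), B = (4,0), C = (2,3):
# sides AC and BC appear equal, so equality AC = BC becomes a subgoal
# worth pursuing; the goal AB = AC would be pruned from the search.
A, B, C = (0.0, 0.0), (4.0, 0.0), (2.0, 3.0)
```

Pruning goals that fail in the diagram is sound in practice because a statement false in one model cannot be a theorem, while numeric agreement merely promotes a candidate for proof.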
3 First-Order Logic
Unlike the cases of propositional logic and elementary geometry, there is no general decision procedure for first-order logic. On the other hand, given appropriate axioms as premises, all mathematical reasoning can be expressed in first-order logic, and that is why so much attention has been paid to proof procedures for this domain. Investigations by Skolem and Herbrand in the 1920s and early 1930s provided the basic tools needed for theorem-proving programs for first-order logic [Davis 1983c].
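The key tool is Herbrand's theorem, which reduces first-order unsatisfiability to the propositional unsatisfiability of ever-larger finite sets of ground instances built from the formula's own symbols. A toy sketch of the underlying Herbrand-universe enumeration (the string encoding of terms and the signature are illustrative assumptions):

```python
from itertools import product

def herbrand_universe(constants, functions, depth):
    """Ground terms over a signature, up to the given nesting depth.
    constants: set of constant symbols; functions: dict symbol -> arity.
    Early provers enumerated such terms, instantiated the clauses with
    them, and tested the resulting ground set propositionally."""
    terms = set(constants)
    for _ in range(depth):
        new = set(terms)
        for f, arity in functions.items():
            for args in product(sorted(terms), repeat=arity):
                new.add(f + '(' + ','.join(args) + ')')
        terms = new
    return terms

# Signature with one constant 'a' and one unary function symbol 's'
H2 = herbrand_universe({'a'}, {'s': 1}, depth=2)
# → {'a', 's(a)', 's(s(a))'}
```

The combinatorial growth of this universe with richer signatures is precisely the explosion that later refinements, such as unification-based methods, were designed to tame.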
In 1957 a five week Summer Institute...
| Publication date | 22 June 2001 |
|---|---|
| Language | English |
| Subject area | Computer Science ► Software Development ► User Interfaces (HCI) |
| Computer Science ► Theory / Study ► Artificial Intelligence / Robotics | |
| Mathematics / Computer Science ► Mathematics ► Logic / Set Theory | |
| Natural Sciences | |
| ISBN-13 | 9780080532790 |