











The objective of this project is to give the student a deep, experiential understanding
of propositional knowledge representation and reasoning through
explanation, worked examples, and implementation exercises.
- Gain an understanding of the syntax and semantics of propositional logic, as well
as general logic terminology, including "model", "(un)satisfiability", "entailment",
"equivalence", "soundness", and "completeness".
- Learn the process of converting a knowledge base to Conjunctive Normal Form (CNF).
- Solve word problems with proof by contradiction (a.k.a. reductio ad absurdum) using
resolution theorem proving.
- Represent knowledge so as to complete a program implementation that performs expert
reasoning for the game of Clue.
- Compare deductive learning, inductive learning, and knowledge acquisition.
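Several of the terms above (model, satisfiability, entailment) can be made concrete with a small brute-force illustration. The following is a hypothetical Java sketch, not part of the project code; it represents clauses in the DIMACS integer style (v for a variable, -v for its negation) and checks entailment by testing whether the knowledge base plus the negated query is unsatisfiable:

```java
// Illustrative sketch only; class and method names are our own.
public class TinyLogic {
    // A truth assignment (model) is packed into an int: variable v
    // (1-based) is true iff bit (v-1) is set.  A clause is satisfied
    // if at least one of its literals is true under the model.
    static boolean satisfies(int model, int[][] clauses) {
        for (int[] clause : clauses) {
            boolean sat = false;
            for (int lit : clause) {
                int v = Math.abs(lit) - 1;
                boolean val = ((model >> v) & 1) == 1;
                if (lit > 0 ? val : !val) { sat = true; break; }
            }
            if (!sat) return false;
        }
        return true;
    }

    // Satisfiable iff some of the 2^numVars assignments is a model.
    static boolean satisfiable(int numVars, int[][] clauses) {
        for (int m = 0; m < (1 << numVars); m++)
            if (satisfies(m, clauses)) return true;
        return false;
    }

    public static void main(String[] args) {
        // KB: (P -> Q) and P, i.e. clauses {-1, 2} and {1}, with P=1, Q=2.
        int[][] kb = { { -1, 2 }, { 1 } };
        // KB entails Q iff KB plus the negation of Q is unsatisfiable.
        int[][] kbAndNotQ = { { -1, 2 }, { 1 }, { -2 } };
        System.out.println(satisfiable(2, kb));        // prints true
        System.out.println(satisfiable(2, kbAndNotQ)); // prints false
    }
}
```

Testing entailment through the unsatisfiability of KB ∧ ¬Q is the same proof-by-contradiction strategy used with resolution and SAT solvers later in the project.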













The student should understand the syntax of Java. Students who program in C++ should
be able to follow most of the Java examples.
Along with the project, the student should cover the recommended background reading
below.
In support of the exercises and project, one should download and install
zChaff,
a complete, DPLL-type SAT solver. If a different SAT solver is desired,
one will need to modify
SATSolver.java accordingly.
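zChaff, like most DPLL-style solvers, reads problems in the DIMACS CNF file format, so adapting SATSolver.java to another solver will mostly be a matter of producing that format and parsing the other solver's output. As a hedged illustration (the class and method names here are our own, not from the project), serializing integer-encoded clauses to DIMACS looks like this:

```java
// Illustrative sketch only; not the project's SATSolver class.
public class DimacsWriter {
    // Each clause is an array of nonzero integers: v for variable v,
    // -v for its negation.  DIMACS starts with a "p cnf" problem line
    // and terminates each clause with 0.
    static String toDimacs(int numVars, int[][] clauses) {
        StringBuilder sb = new StringBuilder();
        sb.append("p cnf ").append(numVars).append(' ')
          .append(clauses.length).append('\n');
        for (int[] clause : clauses) {
            for (int lit : clause) sb.append(lit).append(' ');
            sb.append("0\n");
        }
        return sb.toString();
    }
}
```

For example, the knowledge base {(¬P ∨ Q), P} with P=1 and Q=2 serializes to "p cnf 2 2", then "-1 2 0", then "1 0".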













For a more detailed introduction to the logical concepts, we recommend parallel
reading of a good AI textbook's section on propositional logic. For example, one might assign
Chapter 7 ("Logical Agents") of Russell & Norvig's Artificial Intelligence: A Modern Approach.
Students are also encouraged to play the game of Clue in order to gain a
better understanding of the problem. One can download a free limited-trial computer
version from
http://www.downloadfreegames.com/board_game_download/clue.htm. However,
it is best that students have access to a classic
Clue board game and be given
time to gain a solid understanding of the game and of the knowledge one acquires during
play.













The detailed project description is available in the PDF file clue.pdf.


Example code described in the project is available here:
This code was generated from the same source file that generated
clue.pdf using the literate programming
tool noweb.
We recommend having students do a few of the logic exercises included in Section
7 in order to gain experience with the SATSolver class before beginning work on
the ClueReasoner.
Students seeking more extended challenges will also find brief descriptions of
related advanced projects, including basic SAT solver challenges and extensions
for cardinality constraints.
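As one concrete example of a cardinality constraint, the requirement "at most one of these propositions is true" has a standard pairwise CNF encoding: for every pair of variables, add a clause asserting that at least one of the two is false. The following is a hypothetical sketch of that textbook encoding, not necessarily the one used in the project extensions:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the pairwise "at most one" encoding.
public class AtMostOne {
    // vars holds positive DIMACS-style variable numbers; for each
    // pair (v_i, v_j) we emit the clause (-v_i OR -v_j), i.e.
    // "not both are true".
    static List<int[]> atMostOne(int[] vars) {
        List<int[]> clauses = new ArrayList<>();
        for (int i = 0; i < vars.length; i++)
            for (int j = i + 1; j < vars.length; j++)
                clauses.add(new int[] { -vars[i], -vars[j] });
        return clauses;
    }
}
```

The pairwise encoding adds O(n^2) clauses; more compact encodings (e.g., sequential-counter encodings) exist and could be explored as part of the advanced extensions.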














Here is a sample syllabus used in CS 371 (Introduction to AI) at Gettysburg College:
First class: Thursday 10/14/04
Preparatory reading: Russell & Norvig Sections 7.1-7.4 (through "A Simple Knowledge Base"), clue.pdf sections 1-2.
Propositional Logic, Syntax, Semantics
Homework due next class: Represent the knowledge base for one exercise of clue.pdf section 7.
Second class: Tuesday 10/19/04
Truth assignments, models, (un)satisfiability, validity, entailment, equivalence
Homework due next class: Represent the knowledge base for two more exercises of clue.pdf section 7.
Third class: Thursday 10/21/04
Preparatory reading: Russell & Norvig Section 7.5, clue.pdf sections 3-6.
Conjunctive normal form, resolution theorem proving, proof by contradiction (reductio ad absurdum), SAT solvers
Homework due next class: Resolution proofs and/or satisfying truth assignments solving the logic problems of the previous homework; use of the SATSolver class to check the solutions to these problems.
Fourth class: Tuesday 10/26/04
Preparatory reading: Remainder of clue.pdf.
The Game of Clue, propositional facts in the game, construction of a Clue reasoner
Homework due next class: Complete implementation of ClueReasoner class, as described in clue.pdf section 8.
The following material is optional for the project.
Fifth class: Thursday 10/28/04
Preparatory reading: Russell & Norvig Section 7.6 ("local search algorithms").
Stochastic local search for Boolean satisfiability, WalkSAT, Novelty, Novelty+, in-class simple implementation of WalkSAT
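The in-class WalkSAT implementation mentioned above might look roughly like the following simplified sketch (our own illustrative code, reusing DIMACS-style integer clauses; real implementations add restarts and incremental clause bookkeeping):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Simplified WalkSAT sketch: repeatedly repair a random unsatisfied
// clause, mixing random-walk flips with greedy flips.
public class WalkSat {
    // Returns a satisfying assignment (index v is true iff variable
    // v+1 is true), or null if none is found within maxFlips.
    static boolean[] solve(int numVars, int[][] clauses, double p,
                           int maxFlips, long seed) {
        Random rng = new Random(seed);
        boolean[] model = new boolean[numVars];
        for (int v = 0; v < numVars; v++) model[v] = rng.nextBoolean();
        for (int flip = 0; flip < maxFlips; flip++) {
            List<int[]> unsat = new ArrayList<>();
            for (int[] c : clauses) if (!satisfied(c, model)) unsat.add(c);
            if (unsat.isEmpty()) return model;       // all clauses satisfied
            int[] clause = unsat.get(rng.nextInt(unsat.size()));
            int var;
            if (rng.nextDouble() < p) {              // random-walk step
                var = Math.abs(clause[rng.nextInt(clause.length)]) - 1;
            } else {                                 // greedy step: flip the
                var = -1;                            // variable that leaves the
                int best = Integer.MAX_VALUE;        // fewest broken clauses
                for (int lit : clause) {
                    int v = Math.abs(lit) - 1;
                    model[v] = !model[v];
                    int broken = 0;
                    for (int[] c : clauses) if (!satisfied(c, model)) broken++;
                    model[v] = !model[v];
                    if (broken < best) { best = broken; var = v; }
                }
            }
            model[var] = !model[var];
        }
        return null;  // no model found (does NOT prove unsatisfiability)
    }

    static boolean satisfied(int[] clause, boolean[] model) {
        for (int lit : clause)
            if (lit > 0 ? model[lit - 1] : !model[-lit - 1]) return true;
        return false;
    }
}
```

Note that WalkSAT is incomplete: returning null after maxFlips does not prove unsatisfiability, in contrast to a complete DPLL solver such as zChaff.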













In his book Machine Learning,
Tom Mitchell writes: “A computer program is said to learn from experience
E with respect to some class of tasks T and performance measure P, if its performance
at tasks in T, as measured by P, improves with experience E.” The three “experiences”
of deductive learning, inductive learning, and knowledge acquisition can each add knowledge
to a knowledge base, thus potentially improving (or even making possible) the expected
time performance of the task of answering queries to the knowledge base.
This project and the corresponding reading focus on knowledge acquisition and deductive
learning. 









