Question

I am learning Automated Theorem Proving / SMT solvers / Proof Assistants by myself and post a series of questions about the process, starting here.

I keep reading about the Unification Algorithm.

  • What is it and why is it so important to inference engines?
  • Why is it so important to Computer Science?

Solution

Unification is such a fundamental concept in computer science that perhaps at times we even take it for granted. Any time we have a rule or equation or pattern and want to apply it to some data, unification is used to specialize the rule to the data. Or if we want to combine two general but overlapping rules, unification provides us with the most general combined rule. Unification is at the core of

  • Theorem provers and proof assistants, including some based on higher-order unification.
  • Prolog implementations (as part of resolution).
  • Type inference algorithms.
  • Computational linguistics/natural language processing.
  • Term rewriting systems such as Maude, which can be used as the basis of programming language semantics.
  • Deductive databases.
  • Expert systems, and artificial intelligence more generally.
  • Computer algebra systems.
  • Pattern matching in functional languages (in part: matching is one-sided unification).
  • Some parsing approaches.
  • Some query languages, especially involving the Semantic Web.
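All of the applications above build on the same core procedure. Here is a minimal sketch of first-order syntactic unification (a Robinson-style algorithm with an occurs check); the term encoding — uppercase strings for variables, tuples for compound terms — is an assumption chosen for illustration:

```python
# Terms: variables are strings starting with an uppercase letter;
# compound terms are tuples (functor, arg1, ..., argN); constants
# are plain lowercase strings.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until we hit a non-variable
    # or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def unify(s, t, subst=None):
    """Return a most general unifier as a dict, or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        if occurs(s, t, subst):
            return None          # reject cyclic bindings like X = f(X)
        return {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                  # clash of distinct functors/constants

# Unifying f(X, g(Y)) with f(g(a), X) yields X = g(a), Y = a:
print(unify(("f", "X", ("g", "Y")), ("f", ("g", "a"), "X")))
```

The occurs check is what separates sound unification from the faster but unsound matching that some Prolog implementations perform by default.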

Other tips

Proof assistants such as Isabelle/HOL work at the syntactic level on a logical calculus. Imagine you have the modus ponens rule (MP)

$\qquad \displaystyle P\to Q, P\ \Longrightarrow\ Q$

and the proof goal

$\qquad \displaystyle (a \lor b) \to (c \land d), a \lor b \ \overset{!}{\Longrightarrow} c\land d$

We humans see immediately that this follows with modus ponens, but the machine has to match the goal to the rule syntactically (whether you use apply (rule mp) or apply simp), and this is what unification does. The algorithm finds a substitution $\varphi$ with $\varphi(P) = a\lor b$ and $\varphi(Q) = c \land d$, instantiates the rule and applies it.

The nice thing about proof-assistant methods like simp is that if your goal is

$\qquad \displaystyle (a \lor b) \to (c \land d), a \ \overset{!}{\Longrightarrow} d$

they will find a suitable sequence of applications of the rules MP, $P \land Q \Longrightarrow P$ and $P \Longrightarrow P \lor Q$, with compatible unifiers for the respective steps, and solve the goal.
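Spelled out, one such chain might look like this (assuming the mirrored projection rule $P \land Q \Longrightarrow Q$ is also available):

1. From $a$, derive $a \lor b$ via $P \Longrightarrow P \lor Q$ with $\varphi_1(P) = a$, $\varphi_1(Q) = b$.
2. From $(a \lor b) \to (c \land d)$ and $a \lor b$, derive $c \land d$ via MP with $\varphi_2(P) = a \lor b$, $\varphi_2(Q) = c \land d$.
3. From $c \land d$, derive $d$ via $P \land Q \Longrightarrow Q$ with $\varphi_3(P) = c$, $\varphi_3(Q) = d$.

Each step requires its own unifier $\varphi_i$, found exactly as in the single-step example above.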


Notation: With $\Gamma = \{\varphi_1, \dots, \varphi_n\}$ a set of logical formulae, the notation

$\qquad \Gamma \Longrightarrow \psi$

means the following:

If I have derived/proven all formulae in $\Gamma$ (i.e. they are valid) then this rule asserts that $\psi$ is also valid.

In a sense, the rule $\Gamma \Longrightarrow \psi$ is the last step in a (long) proof for $\psi$. Proofs are nothing but chains of such rule applications.

Note that rules usually contain schematic variables ($P$ and $Q$ in the above) that can be replaced by arbitrary formulae, as long as the same variable is replaced with the same formula in all its occurrences; the result of that replacement is the concrete rule instance (or intuitively, a proof step). This replacement is denoted above by $\varphi$, which was found by unification.
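Applying such a replacement $\varphi$ to a schematic rule can be sketched as follows; the tuple encoding of formulae is an assumption for illustration, with schematic variables as uppercase strings:

```python
# Formulae are tuples: ("->", P, Q), ("or", a, b), ("and", c, d), ...;
# schematic variables are uppercase strings, atoms are lowercase strings.

def substitute(phi, formula):
    """Replace every schematic variable in formula according to phi."""
    if isinstance(formula, str):
        # A variable is replaced by its image; atoms stay unchanged.
        return phi.get(formula, formula)
    return (formula[0],) + tuple(substitute(phi, f) for f in formula[1:])

# The unifier from the modus ponens example above:
phi = {"P": ("or", "a", "b"), "Q": ("and", "c", "d")}
mp_premise = ("->", "P", "Q")
print(substitute(phi, mp_premise))
# -> ('->', ('or', 'a', 'b'), ('and', 'c', 'd'))
```

Since the same dictionary `phi` is used for every occurrence of a variable, the side condition "the same variable is replaced with the same formula everywhere" holds by construction.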

Often people use $\models$ instead of $\Longrightarrow$.

I don't think it is important to inference engines. The unification algorithm is however very helpful for type inference. These are two very different kinds of inference.

Type inference is important to computer science because types are important in the theory of programming languages, which is a significant part of computer science. Types are also close to logic and are intensively used in automated theorem proving. There are implementations of unification algorithms in many, if not all, proof assistants and SMT solvers.

Inference engines are related to artificial intelligence, which is also important but very different. (I've seen links between learning and logic, but this seems far-fetched.)

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange