Question

Paulson et al., in "From LCF to Isabelle/HOL", say:

Resolution for first-order logic, complete in principle but frequently disappointing in practice.

I think complete means it can prove any valid formula of first-order logic. In the Handbook of Automated Reasoning I find:

Resolution is a refutationally complete theorem proving method: a contradiction (i.e., the empty clause) can be deduced from any unsatisfiable set of clauses.

From Wikipedia:

Attempting to prove a satisfiable first-order formula as unsatisfiable may result in a nonterminating computation

Why is that disappointing?


Solution

While "disappointing in practice" is certainly not definable formally, unlike "complete" (which does indeed mean "can eventually prove every true formula"), it's pretty easy to find examples where naive resolution is not even remotely adequate, i.e. examples which should be easy to prove but which resolution is extremely slow on.

A famous example is an encoding of the pigeonhole principle for $n$ holes in propositional logic (the statement that "$n+1$ pigeons cannot fit into $n$ holes without sharing a hole"). By a classical result of Haken, every resolution proof of this statement has size exponential in $n$, so no resolution prover, however clever its strategy, can dispose of it quickly.
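Concretely, the usual CNF encoding (one common choice, not the only one) uses variables $p_{i,j}$ meaning "pigeon $i$ sits in hole $j$", with clauses

$$\bigvee_{j=1}^{n} p_{i,j} \qquad \text{for each pigeon } i = 1,\dots,n+1,$$

$$\lnot p_{i,j} \lor \lnot p_{i',j} \qquad \text{for each hole } j \text{ and each pair of pigeons } i \neq i'.$$

This clause set asserts that every one of the $n+1$ pigeons sits in some hole and no two share a hole; it is unsatisfiable, and it is refutations of this set that resolution cannot produce in sub-exponential size.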

Things are even worse in predicate logic, where it is very easy to find statements without any fast resolution proofs.

Note that any implementation of resolution must also implement a strategy for picking the resolvents, which is itself a very difficult problem and an active area of research; most practical theorem-proving algorithms are refinements of naive resolution, e.g. hyper-resolution.
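To make "naive resolution" and the choice of resolvents concrete, here is a minimal Python sketch of propositional resolution by blind saturation (the clause representation and the names `resolve` and `refute` are my own choices for illustration; real provers use far more refined data structures and strategies):

```python
from itertools import combinations

# Minimal sketch of naive propositional resolution by blind saturation.
# A literal is a nonzero int (negative = negated); a clause is a frozenset of literals.

def resolve(c1, c2):
    """Yield every resolvent of clauses c1 and c2."""
    for lit in c1:
        if -lit in c2:
            yield (c1 - {lit}) | (c2 - {-lit})

def refute(clauses):
    """Return True iff the empty clause is derivable, i.e. the set is unsatisfiable."""
    known = {frozenset(c) for c in clauses}
    if frozenset() in known:
        return True
    while True:
        new = set()
        for c1, c2 in combinations(known, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause: contradiction found
                    return True
                if r not in known:
                    new.add(r)
        if not new:                    # saturated without contradiction: satisfiable
            return False
        known |= new

# {p}, {not p or q}, {not q} with p = 1, q = 2: unsatisfiable.
print(refute([{1}, {-1, 2}, {-2}]))    # True
```

The pair loop over `known` is exactly the choice point a strategy must tame: run blindly on the pigeonhole clauses above, the set of derived clauses explodes long before a refutation appears.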

Licensed under: CC-BY-SA with attribution