Question

I'm looking into various SAT solvers and trying to understand how they work and why they are designed in certain ways. (I'm not at a university at the moment and I don't know anyone who is a professor, so I'm posting here hoping that someone can help me out. I'd really appreciate it.)

In Chaff, BCP (Boolean Constraint Propagation) is implemented differently from the original DPLL: each clause watches only two of its literals at a time (a technique slightly different from the one initially suggested in SATO: An Efficient Propositional Prover), according to the 2001 paper Chaff: Engineering an Efficient SAT Solver. There is, however, no mention of pure literal elimination in that paper.
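To make sure I understand the watched-literals idea, here is a minimal sketch of the scheme in Python as I read it from the paper. The names (WatchedSolver, enqueue, propagate) are my own, not Chaff's, and unit clauses in the input are assumed to be enqueued directly before propagation starts.

```python
# Hypothetical minimal sketch of the two-watched-literals scheme (not Chaff's code).
# Literals are non-zero ints: variable v is the literal v, NOT v is -v.
# Assumes every clause has at least two literals.

class WatchedSolver:
    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses]
        self.assign = {}              # var -> bool
        self.watches = {}             # literal -> indices of clauses watching it
        for i, clause in enumerate(self.clauses):
            for lit in clause[:2]:    # every clause starts by watching its first two literals
                self.watches.setdefault(lit, []).append(i)

    def value(self, lit):
        """True/False if the literal is assigned, None if its variable is free."""
        v = self.assign.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    def enqueue(self, lit):
        """Make lit True and propagate; returns False on conflict."""
        self.assign[abs(lit)] = lit > 0
        return self.propagate(-lit)   # only clauses watching -lit need to be visited

    def propagate(self, false_lit):
        pending = self.watches.get(false_lit, [])
        self.watches[false_lit] = []
        for n, ci in enumerate(pending):
            clause = self.clauses[ci]
            if clause[0] == false_lit:            # keep the falsified watch in slot 1
                clause[0], clause[1] = clause[1], clause[0]
            if self.value(clause[0]) is True:     # clause already satisfied by its other watch
                self.watches[false_lit].append(ci)
                continue
            for k in range(2, len(clause)):       # look for a replacement watch
                if self.value(clause[k]) is not False:
                    clause[1], clause[k] = clause[k], clause[1]
                    self.watches.setdefault(clause[1], []).append(ci)
                    break
            else:                                 # none found: clause is unit or conflicting
                self.watches[false_lit].append(ci)
                other = self.value(clause[0])
                if other is False or (other is None and not self.enqueue(clause[0])):
                    # conflict: re-watch the clauses not visited yet, then report it
                    self.watches[false_lit].extend(pending[n + 1:])
                    return False
        return True

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x3 OR x2): asserting x1 propagates x3, then x2.
s = WatchedSolver([[1, 2], [-1, 3], [-3, 2]])
print(s.enqueue(1), s.assign)   # True {1: True, 3: True, 2: True}
```

As I understand the paper, the appeal of the scheme is that an assignment only visits the clauses currently watching the falsified literal, and the watch lists need no updating when the solver backtracks.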

In The Complexity of Pure Literal Elimination, Jan Johannsen wrote:

The current best implementations of DLL-type SAT solvers, like Chaff or BerkMin sacrifice this heuristic in order to gain efficiency in unit propagation.

where "this heuristic" refers to pure literal elimination. My understanding of what pure literal elimination does is that it

  1. searches for all single-polarity (or pure) literals,
  2. assigns a Boolean value to each of them such that the pure literal evaluates to True, and
  3. deletes all the clauses containing them, since those clauses are now satisfied (see the sketch after this list).
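For concreteness, here is a minimal sketch of that three-step procedure in Python; it is my own illustration, not code from any of the papers above, and it represents a CNF formula as a list of integer-literal clauses (variable v is the literal v, its negation is -v).

```python
from collections import Counter

def pure_literal_eliminate(clauses):
    """Repeatedly apply the pure-literal rule until no pure literal remains."""
    assignment = {}
    while True:
        counts = Counter(lit for clause in clauses for lit in clause)
        # step 1: a literal is pure if its negation occurs in no remaining clause
        pure = [lit for lit in counts if -lit not in counts]
        if not pure:
            return clauses, assignment
        for lit in pure:
            assignment[abs(lit)] = lit > 0      # step 2: make every pure literal True
        # step 3: each clause containing a pure literal is now satisfied, so delete it
        clauses = [c for c in clauses if not any(lit in c for lit in pure)]
```

For example, pure_literal_eliminate([[1, 2], [-2, 3], [-2, -3]]) first sets x1 := True and drops [1, 2]; that makes -2 pure, so x2 := False and the remaining clauses are dropped, returning ([], {1: True, 2: False}).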

Here is my question:

Why is this sacrifice necessary? Is there a good reason why pure literal elimination is absent from DPLL-based algorithms like Chaff? Can't we just do pure literal elimination at each decision level (or at least do it once at the start, before branching)?

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange