Question

As the title says, can Coq be used as a model checker? Can I mix model checking with Coq proving? Is this usual? Google talks about a "µ-calculus", does anyone have experience with this or something similar? Is it advised to use Coq in this way, or should I look for another tool?


Solution

A proof assistant like Coq will check that your proof is sound, that is, that the theorems you propose can (or cannot) be derived from the axioms and previously proven results. It will also support you by proposing proof steps, reducing the effort needed to discharge the proofs.

A model checker, in contrast, symbolically enumerates the state space of the specification and determines whether any of the verification conditions are violated. In such a case, it will provide an error trace showing what sequence of state changes will trigger the violation.

The two usually have very different back-end processing requirements and, while they could be combined into a single tool, Coq does not appear to have done so.
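
To make the contrast concrete, here is a minimal sketch (in Python, with a toy system and property made up for illustration) of what an enumerative model checker does under the hood: explore the reachable states breadth-first, check an invariant in each one, and report an error trace when the invariant fails.

    from collections import deque

    INIT = 0

    def successors(x):
        # Toy transition relation: from x, either increment or double, modulo 16.
        return {(x + 1) % 16, (2 * x) % 16}

    def invariant(x):
        # The verification condition: state 13 is never reached.
        return x != 13

    def check():
        parent = {INIT: None}          # visited states, stored in a hash table
        queue = deque([INIT])
        while queue:
            x = queue.popleft()
            if not invariant(x):
                # Reconstruct the error trace by walking back to the initial state.
                trace = []
                while x is not None:
                    trace.append(x)
                    x = parent[x]
                return list(reversed(trace))
            for y in successors(x):
                if y not in parent:
                    parent[y] = x
                    queue.append(y)
        return None                    # the invariant holds in every reachable state

    print(check())   # [0, 1, 2, 3, 6, 12, 13] -- a shortest run reaching the bad state

Real model checkers add many optimizations on top of this skeleton, but the contract is the same: a state machine plus properties in, a verdict plus counterexample traces out.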

The TLA+ specification language, based on the Temporal Logic of Actions, along with the companion TLA+ Proof System (TLAPS), was developed by Leslie Lamport for reasoning about large formal specifications. It has been extended with the PlusCal algorithm language, which translates into a TLA+ representation so that the resulting algorithms can be model checked (with the TLC model checker) as well as proved.

The µ-calculus is a notation that serves as a low-level underpinning for many formal-methods approaches. You could also look at the Boolean satisfiability problem (SAT) for a more brute-force approach; a small sketch follows below. Theorem provers are generally more sophisticated in their design and use traditional expert-system/AI concepts, along with libraries of proof rules defined by experts, to provide the support required to discharge the proof obligations.
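
As a back-of-the-envelope illustration of the brute-force flavour of SAT, here is a Python snippet (the formula is made up for illustration) that decides satisfiability of a small propositional formula by enumerating all assignments; real SAT solvers answer the same question far more cleverly.

    from itertools import product

    def formula(a, b, c):
        # (a or b) and (not a or c) and (not b or not c)
        return (a or b) and ((not a) or c) and ((not b) or (not c))

    # Enumerate all 2**3 assignments and keep the satisfying ones.
    satisfying = [bits for bits in product([False, True], repeat=3) if formula(*bits)]
    print(satisfying)   # [(False, True, False), (True, False, True)]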

As another example of a proof tool, you can look at the Event-B method and the accompanying Rodin development platform. When you refine a specification into a more concrete one, Rodin identifies the proof obligations and attempts to discharge them mechanically, leaving the difficult ones for you to do.

For model checking, freely available choices include SPIN and NuSMV, both discussed in more detail below.

OTHER TIPS

To complement the existing answer by @Pekka, µ-calculus is a notation for talking about fixpoints, which represent solutions to reachability problems.

  • An example of a least fixpoint (μ) is the set of all states that can be reached starting from somewhere (e.g., from the first line of a program). It is a "least" fixpoint because it is the smallest among the possibly many fixpoints. It is obtained by starting with the empty set and adding states until a fixpoint is reached. Under suitable conditions (a monotone function on a complete lattice), the existence of such a fixpoint is ensured by the Knaster-Tarski theorem.

  • An example of a greatest fixpoint (ν) is the largest set of states within which we can remain forever without violating some safety requirement. It is a "greatest" fixpoint because it is obtained by starting from the set of all states (so, from the top of the partial order that the subset relation defines on sets) and iteratively shrinking it until we reach a fixpoint. Greatest fixpoints are dual to least fixpoints, thus the same theorem applies for existence and uniqueness. Think of graph search as another example. Both iterations are sketched in code right after this list.
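
Here is a small Python sketch of the two iterations just described, on a toy transition system given explicitly as a graph (the graph and the safety requirement are made up for illustration):

    GRAPH = {           # successor relation of a toy system
        'a': {'b'},
        'b': {'c', 'd'},
        'c': {'a'},
        'd': {'e'},
        'e': {'e'},     # 'e' is an unsafe sink that loops on itself
    }
    STATES = set(GRAPH)
    SAFE = {'a', 'b', 'c', 'd'}     # safety requirement: never be in 'e'

    def post(s):
        # States reachable from the set s in one step.
        return {y for x in s for y in GRAPH[x]}

    def pre_exists(s):
        # States with at least one successor inside the set s.
        return {x for x in STATES if GRAPH[x] & s}

    def reachable(init):
        # Least fixpoint (mu): start from the empty set and grow.
        r = set()
        while True:
            new = {init} | post(r)
            if new == r:
                return r
            r = new

    def always_safe():
        # Greatest fixpoint (nu): start from all states and shrink.
        z = set(STATES)
        while True:
            new = SAFE & pre_exists(z)
            if new == z:
                return z
            z = new

    print(sorted(reachable('a')))   # ['a', 'b', 'c', 'd', 'e']: every state is reachable
    print(sorted(always_safe()))    # ['a', 'b', 'c']: from 'd' we are forced into 'e'

The two loops are exactly the "start from the bottom and add" and "start from the top and remove" iterations described in the bullets above.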

By alternating the type of fixpoints in a μ-calculus formula, we can express temporal behavior like: "I want you to keep going back and forth between two locations", or "I want the server to keep eventually responding to each request it gets".
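
For instance, the server example can be written, under the usual assumption of a total transition relation, as a least fixpoint (a response eventually arrives) nested inside a greatest fixpoint (this obligation holds at every reachable point): νZ. ((req ⇒ μY. (resp ∨ □Y)) ∧ □Z), where □φ means "φ holds in all successor states". This is one standard way to express the property; other equivalent formulations exist.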

Then, we can model check (with an enumerative or symbolic model checker) whether the property we described is implied by a system we have designed.

  • SPIN is an enumerative model checker: it stores each state it explores in a hash table, and it includes algorithms for recognizing that it does not need to enumerate all the states, but can treat some of them as representatives of other states (from the perspective of the property being verified, those states are equivalent, so it suffices to reason about only one of them). These methods are called partial-order reduction.

  • NuSMV is a symbolic model checker. It still searches the states of a system, but it does not represent them one by one in memory. Instead, it keeps track of sets of states, representing those sets as binary decision diagrams (BDDs). These are data structures that can remain small even when they represent sets with 10**24 states. There is no guarantee of that, though: BDDs can still blow up in size, and sadly this happens for almost all Boolean functions (so, for almost all sets that we may want to represent), by a counting argument that goes back to Claude Shannon. A small sketch of this set-at-a-time style follows this list.
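
To get a feel for the symbolic style, here is a small sketch using the Python dd package (the omega tool mentioned below builds on it); the calls follow dd's documented BDD interface, but treat the exact syntax as an assumption to check against the version you install:

    from dd.autoref import BDD

    bdd = BDD()
    bdd.declare('x0', 'x1', 'x2', 'x3')

    # The set of all states over x0..x3 in which x0 and x1 agree.
    # The set has many members, but its BDD has only a handful of nodes.
    agree = bdd.add_expr(r'(x0 /\ x1) \/ (~ x0 /\ ~ x1)')

    # Operations work on whole sets of states at once, never on single states.
    bad = bdd.add_expr(r'x0 /\ ~ x1')      # states where x0 holds but x1 does not
    print((agree & bad) == bdd.false)      # True: the two sets do not overlap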

The tools above are designed and used for verification. There is also an approach called synthesis. After studying both, it can be unclear whether there is really any difference between them. At first encounter, one may think that the difference lies in using models versus formulae. That is deceptive, however: models and formulae are interchangeable as description methods, and can also be mixed.

The difference between verification and synthesis lies in the alternation of quantifiers:

  • verification has uniform quantification (all universal, usually)
  • synthesis has alternating quantification: it involves nested existential and universal quantifiers.

Of course, one may also verify a quantified formula, for example \forall x: \exists y: f(x, y). Isn't that just verification? Well, it is, which demonstrates that there is no mathematical difference at the heart of the matter between synthesis and verification. Traditionally, though, the quantification patterns above are the ones most often encountered in problems considered as synthesis and in problems considered as verification.

The main real difference between synthesis and verification is how we use the result from checking whether a formula is true:

  • In verification, we already have the implementation of the system that we want, and we check that it satisfies some desired properties. If not, we are going to (manually) try to fix the implementation, and then check again.

  • In synthesis, some parts of the final system have not yet been specified completely in detail. We let that detail be selected by the tool we use. In other words, the tool eliminates the existential quantifiers in a way that makes the formula true, and tells us what it did, so that we can complete the system accordingly; a brute-force sketch of this view follows this list.
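
Here is a brute-force Python sketch of that synthesis view (the names, domains, and formula are made up for illustration; real tools such as those below work symbolically rather than by enumeration): the environment picks x, and we must pick y, as a function of x, so that f(x, y) holds for every x.

    X = range(4)                 # the environment's possible choices of x
    Y = range(4)                 # our possible choices of y

    def f(x, y):
        # The specification to make true: \forall x: \exists y: f(x, y).
        return (x + y) % 4 == 1

    def synthesize():
        strategy = {}
        for x in X:
            # Eliminate the existential quantifier: find a witness y for this x.
            witnesses = [y for y in Y if f(x, y)]
            if not witnesses:
                return None      # the specification is unrealizable
            strategy[x] = witnesses[0]
        return strategy

    print(synthesize())   # {0: 1, 1: 0, 2: 3, 3: 2} -- one winning choice of y per x

Verification of a fixed design would instead take a given choice of y values and only check that f holds for all x.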

An example of a synthesis tool written in C is gr1c; another example, written in Python, is omega. There are several other tools.

However you approach a problem, at least make sure that you write down what the plan is, that is, that you write a specification.

One of the best books on these subjects is Leslie Lamport's Specifying Systems. And to convince yourself that the various representations are faces of the same thing, consider reading his paper Computer Science and State Machines.

There is some work on using Coq as a model checker, for instance see https://github.com/coq-contribs/smc. However, I do not know of people who have used this in serious applications.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow