Question

Wikipedia says:

Complete lattices appear in many applications in mathematics and computer science

Is it just referring to the fact that the standard Boolean algebra used in computation is a complete lattice? Is there anything we gain by working at the abstract level of lattices instead of with Boolean logic specifically?

A Google search doesn't turn up much on the subject, but I'm probably using the wrong keywords.

Solution

See for instance this book: Lattice Theory with Applications, Vijay K. Garg, which starts off as follows:

Partial order and lattice theory now play an important role in many disciplines of computer science and engineering. For example, they have applications in distributed computing (vector clocks, global predicate detection), concurrency theory (pomsets, occurrence nets), programming language semantics (fixed-point semantics), and data mining (concept analysis). They are also useful in other disciplines of mathematics such as combinatorics, number theory and group theory. In this book, I introduce important results in partial order theory along with their applications in computer science. The bias of the book is on computational aspects of lattice theory (algorithms) and on applications (esp. distributed systems).

The book doesn't seem to mention recursion theory (theory of computable sets), but from Wikipedia's article on Computability theory, we see:

When Post defined the notion of a simple set as an r.e. set with an infinite complement not containing any infinite r.e. set, he started to study the structure of the recursively enumerable sets under inclusion. This lattice became a well-studied structure. Recursive sets can be defined in this structure by the basic result that a set is recursive if and only if the set and its complement are both recursively enumerable. Infinite r.e. sets always have infinite recursive subsets; on the other hand, simple sets exist but never have a coinfinite recursive superset. Post (1944) had already introduced hypersimple and hyperhypersimple sets; later, maximal sets were constructed, which are r.e. sets such that every r.e. superset is either a finite variant of the given maximal set or is co-finite. Post's original motivation in the study of this lattice was to find a structural notion such that every set satisfying it is neither in the Turing degree of the recursive sets nor in the Turing degree of the halting problem. Post did not find such a property, and the solution to his problem applied priority methods instead; Harrington and Soare (1991) eventually found such a property.
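The cited result — a set is recursive if and only if both it and its complement are r.e. — has a concrete algorithmic reading, which this toy sketch (my own, not from the article) illustrates: given enumerators for a set and its complement, dovetail the two enumerations until the query element appears in one of them.

```python
from itertools import count

def decide(x, enum_s, enum_co_s):
    """Alternately draw from both enumerations until x shows up in one.
    Guaranteed to halt, since x lies in exactly one of the two sets."""
    gen_in, gen_out = enum_s(), enum_co_s()
    while True:
        if next(gen_in) == x:
            return True
        if next(gen_out) == x:
            return False

# Toy instance: S = even numbers, complement = odd numbers.
evens = lambda: (2 * n for n in count())
odds = lambda: (2 * n + 1 for n in count())

print(decide(10, evens, odds))  # True
print(decide(7, evens, odds))   # False
```

For a simple set, by contrast, no such enumerator of the complement exists, which is why simple sets are r.e. but not recursive.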

For further reading, see the blog post Lattice Theory for Programmers and Non Computer Scientists.

OTHER TIPS

The references given by Pål GD are indeed very appropriate, so let's focus on a minor side issue in this answer instead. I did some reading on lattices some time ago and started to wonder whether the notion of semilattice wouldn't have been more appropriate for applications. You might object that a complete semilattice is automatically also a lattice, but the homomorphisms and substructures (i.e. sublattices and subsemilattices) are different.
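To make the "joins come for free" point concrete, here is a small sketch (my own example): in a finite meet-semilattice with a top element, the join of two elements can be recovered as the meet of all their common upper bounds. Take the divisors of 60 ordered by divisibility, with meet = gcd; the derived join is then lcm.

```python
from math import gcd
from functools import reduce

carrier = [d for d in range(1, 61) if 60 % d == 0]  # divisors of 60
leq = lambda a, b: b % a == 0                       # a ≤ b  iff  a divides b

def meet(a, b):
    return gcd(a, b)

def join(a, b):
    # Join derived from meet: greatest lower bound of the common upper bounds.
    uppers = [u for u in carrier if leq(a, u) and leq(b, u)]
    return reduce(meet, uppers)

print(join(4, 6))  # 12, i.e. lcm(4, 6) within the divisors of 60
```

Note that `join` depends on the whole carrier, not just on the two arguments, which is one way to see why semilattice homomorphisms need not preserve the derived join.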

I first encountered (semi-)lattices when studying semigroups, as the commutative idempotent semigroups. Then I thought about the relation between hierarchic structures and lattices, and noticed that a tree is naturally also a semilattice. Then I found lattices in security contexts and in program analysis, and it always seemed to me that the semilattice structure was the really important part, with the lattice taken only because it could be obtained "for free". Even for a Heyting algebra, there is an asymmetry between conjunction and disjunction, which suggests to me that the asymmetric semilattice model might provide more insight here than the symmetric lattice model.
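The observation about trees can be made precise with a quick sketch (my own example): the nodes of a rooted tree form a meet-semilattice under the ancestor order, with the meet of two nodes given by their lowest common ancestor — and in general there is no join.

```python
# Rooted tree given by a parent map; "a" is the root.
parent = {"b": "a", "c": "a", "d": "b", "e": "b", "f": "c"}

def ancestors(v):
    """Path from v up to the root, v included."""
    chain = [v]
    while v in parent:
        v = parent[v]
        chain.append(v)
    return chain

def meet(a, b):
    """Lowest common ancestor: the deepest node lying on both root paths."""
    anc_a = set(ancestors(a))
    for v in ancestors(b):     # walks upward, so the first hit is the deepest
        if v in anc_a:
            return v

print(meet("d", "f"))  # 'a'
print(meet("d", "e"))  # 'b'
```

Two distinct leaves have no common descendant, so no join exists — the semilattice structure really is all there is.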

A very important but not-so-famous case — well known among theorists, but not in the sense of being taught to undergraduates — of the use of a lattice is the proof of superpolynomial lower bounds on the size of monotone circuits computing Clique, for which Razborov won the Nevanlinna Prize. The original construction is very technical, however, and later constructions, e.g. Berg/Ulfberg, simplify the framework without the reference to lattices.

So in this case lattice theory was used as a framework to discover the original proof, but later formulations tended not to refer to it directly, as a conceptual simplification.

So yes, lattices may be regarded as a more exotic mathematical object (Razborov has spoken elsewhere of his style of applying advanced mathematics to CS) that can correspond to some other, more "concrete" object in CS. In this case the concrete object is "approximation gates", i.e. Boolean gates in circuits that give "approximately correct" answers; the lattice serves as a sort of "induction structure" for converting an exact circuit into an inexact, approximate one.

I have since found the free paper Ordered Sets and Complete Lattices: A Primer for Computer Science, for other interested readers.

Regular edge labelings and related structures form a distributive lattice (see for example here). This can be exploited to search efficiently through the space of all regular edge labelings of a given graph (see here). As an application, you can determine whether a map can be drawn as a cartogram with a given area assignment for the faces.

Also, surprisingly (for me, at least), cryptography. Check it out: lattices allow new attacks on known cryptosystems and give hope for post-quantum cryptography. (Note that the lattices of cryptography are point lattices, i.e. discrete subgroups of R^n, rather than order-theoretic lattices.)

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange