Question

I am noticing an unexplained behaviour when comparing scipy's (0.9.0) and matplotlib's (1.0.1) Delaunay triangulation routines. My points are UTM coordinates stored as numpy.array([[easting, northing], [easting, northing], [easting, northing]]). Scipy's edge list is missing some of my points, while matplotlib's includes them all. Is there a fix, or am I doing something wrong?

import scipy
import numpy
from scipy.spatial import Delaunay
import matplotlib.delaunay


def delaunay_edges(points):
    d = scipy.spatial.Delaunay(points)
    s = d.vertices  # (ntri, 3) array of triangle point indices ('simplices' in newer scipy)
    # Build the three edges of each triangle: (0, 1), (1, 2) and (2, 0)
    return numpy.vstack((s[:, :2], s[:, 1:], s[:, ::-2]))


def delaunay_edges_matplotlib(points):
    # matplotlib.delaunay.delaunay returns circumcenters, edges, triangles and neighbours
    cens, edges, tri, neig = matplotlib.delaunay.delaunay(points[:, 0], points[:, 1])
    return edges


points = numpy.array([[500000.25, 6220000.25],
                      [500000.5,  6220000.5],
                      [500001.0,  6220001.0],
                      [500002.0,  6220003.0],
                      [500003.0,  6220005.0]])

edges1 = delaunay_edges(points)
edges2 = delaunay_edges_matplotlib(points)

numpy.unique(edges1).shape # Some points missing, presumably nearby ones
numpy.unique(edges2).shape # Includes all points

Solution

This behavior of scipy.spatial.Delaunay is probably connected with the imprecision of floating-point arithmetic.

As you may know, scipy.spatial.Delaunay uses the C library Qhull to compute the Delaunay triangulation. Qhull, in turn, is an implementation of the Quickhull algorithm, which its authors describe in detail in this paper (1).

You may also know that computers carry out floating-point arithmetic according to the IEEE 754 standard (you can read about it on Wikipedia, for instance). Under this standard, each finite number is most simply described by three integers: s, a sign (zero or one); c, a significand (or 'coefficient'); and q, an exponent. The number of bits used to represent these integers varies with the data type. As a consequence, the density of floating-point numbers along the number line is not constant: the larger the numbers, the more sparsely they are distributed. You can see this even with the Google calculator: subtract 3333333333333333 from 3333333333333334 and you get 0, because both values are rounded to the same floating-point number.
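To get a feel for this with the coordinates from the question, you can compare numpy.spacing (the gap between a double and the next representable double) near zero and at the UTM magnitudes. This small check is only an illustration added here, not part of the original code:

import numpy

# numpy.spacing(x) returns the gap between x and the next representable double
print numpy.spacing(0.25)        # ~5.6e-17 near the origin
print numpy.spacing(6220000.25)  # ~9.3e-10 at the northing magnitude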

Now, knowing about rounding errors, we might want to read chapter 4 of the paper (1), entitled Coping with imprecision. In this chapter, an algorithm for dealing with rounding errors is described:

Quickhull partitions a point and determines its horizon facets by computing 
whether the point is above or below a hyperplane. We have assumed that 
computations return consistent results ... With floating-point arithmetic, we 
cannot prevent errors from occurring, but we can repair the damage after 
processing a point. We use brute force: if adjacent facets are nonconvex, one of 
the facets is merged into a neighbor. Quickhull merges the facet that minimizes 
the maximum distance of a vertex to the neighbor.

So that's what might be happening: Quickhull cannot distinguish between two nearby points, so it merges two facets and thereby drops one of the points from the triangulation. To avoid these errors you can, for instance, move your coordinate origin:

co = points[0]        # use the first point as a local origin
points = points - co  # shift all coordinates close to zero

edges1 = delaunay_edges(points)
edges2 = delaunay_edges_matplotlib(points)

print numpy.unique(edges1) 
>>> [0 1 2 3 4]
print numpy.unique(edges2)
>>> [0 1 2 3 4]

This moves the computation into a region where floating-point numbers are reasonably dense.
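If you need this often, the shift can be folded into a small wrapper. The sketch below is just one possible convenience helper (the name delaunay_edges_local is made up here); it reuses the question's delaunay_edges and subtracts the first point as a local origin, exactly as in the fix above, so the returned indices still refer to the original point order:

def delaunay_edges_local(points):
    # Hypothetical wrapper: triangulate relative to a local origin
    # (here the first point), as in the fix above.
    return delaunay_edges(points - points[0])

print numpy.unique(delaunay_edges_local(points))  # [0 1 2 3 4], as above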

Licensed under: CC-BY-SA with attribution