First, the k-means clustering algorithm isn't guaranteed to find the globally optimal clustering, which is already a strong hint that different starting points can give different results.
It really comes down to how the iteration works: each centroid is repeatedly moved to the mean of the points currently assigned to it, and the algorithm terminates as soon as every centroid sits at the centre of its own points. There can be multiple configurations for which this is true.
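To make that loop concrete, here's a minimal sketch of the standard iteration (Lloyd's algorithm) in Python; representing points and centroids as coordinate tuples is just an illustrative choice:

```python
def dist2(p, q):
    # Squared Euclidean distance between two points (coordinate tuples).
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    # Coordinate-wise mean of a non-empty list of points.
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, centroids, max_iter=100):
    centroids = list(centroids)
    for _ in range(max_iter):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: dist2(p, centroids[j]))
            clusters[i].append(p)
        # Update step: each centroid moves to the mean of its cluster
        # (an empty cluster just keeps its old centroid in this sketch).
        new = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
        if new == centroids:   # nothing moved: we've reached a fixed point
            break
        centroids = new
    return centroids
```

Which fixed point the loop lands on depends entirely on the initial `centroids` you pass in.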
Consider this example: four points (indicated by ".") at the corners of a wide rectangle, and two centroids (indicated by "x"):

    .     .          .  x  .
    x     x    vs
    .     .          .  x  .

Both the left and the right side have converged: on the left each centroid sits at the midpoint of a vertical pair of points, on the right at the midpoint of a horizontal pair, and in both cases every point is already assigned to its nearest centroid. But they're clearly different (and since the rectangle is wide, the one on the right is clearly worse).
To find the best one, run the algorithm from several different starting points and pick the result that minimizes the sum of squared distances from each point to the centre it's assigned to (this is, after all, the objective k-means is trying to minimize).
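Here's a sketch of that remedy on the rectangle example above (this is essentially what scikit-learn's `n_init` parameter does for you); the small `kmeans` helper is repeated so the snippet is self-contained, and the data, seed, and number of restarts are just illustrative:

```python
import random

def dist2(p, q):
    # Squared Euclidean distance between two points (coordinate tuples).
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    # Coordinate-wise mean of a non-empty list of points.
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, centroids, max_iter=100):
    # Lloyd's iteration: assign points, move centroids, stop at a fixed point.
    centroids = list(centroids)
    for _ in range(max_iter):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: dist2(p, centroids[j]))
            clusters[i].append(p)
        new = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids

def sse(points, centroids):
    # The k-means objective: sum of squared distances to the nearest centroid.
    return sum(min(dist2(p, c) for c in centroids) for p in points)

points = [(0, 0), (0, 1), (4, 0), (4, 1)]

# The two converged solutions from the diagram: column split vs row split.
good = kmeans(points, [(1, 0.5), (3, 0.5)])  # -> [(0.0, 0.5), (4.0, 0.5)], SSE 1.0
bad = kmeans(points, [(2, -1), (2, 2)])      # -> [(2.0, 0.0), (2.0, 1.0)], SSE 16.0

# The remedy: several random restarts, keep the run with the lowest SSE.
random.seed(0)
best = min((kmeans(points, random.sample(points, 2)) for _ in range(10)),
           key=lambda c: sse(points, c))
```

Both runs terminate, but their objective values differ by a factor of 16, and the restart loop simply keeps whichever converged result scores lowest.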