The essence of K-means clustering is dividing a set of multi-dimensional vectors into tightly grouped partitions and then representing each partition (a.k.a. cluster) by a single vector (a.k.a. centroid). Once you do this, you can compute a goodness-of-fit, i.e. how well the obtained centroids represent the original set of vectors. This goodness-of-fit depends on the number of clusters/centroids chosen, the training algorithm used (e.g. the LBG algorithm), the method used to select the initial centroids, the metric used to compute the distance between vectors... and, of course, on the statistical properties of your data (the multi-dimensional vectors).
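As a sketch of the mechanics, here is a minimal Lloyd's-algorithm implementation in NumPy (the data and the choice of k=4 are just illustrative) that computes the mean squared distortion as the goodness-of-fit:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: returns centroids and mean squared distortion."""
    rng = np.random.default_rng(seed)
    # Initialise centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each vector to its nearest centroid (squared Euclidean distance).
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Move each centroid to the mean of the vectors assigned to it.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    # Goodness-of-fit: average squared distance to the nearest centroid.
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    distortion = d2.min(axis=1).mean()
    return centroids, distortion

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))  # toy data: 500 three-dimensional vectors
centroids, distortion = kmeans(X, k=4)
```

A lower distortion means the centroids represent the data more faithfully; in practice you would use a library routine (e.g. `scipy.cluster.vq.kmeans`) rather than hand-rolling this.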
After performing clustering, you could use the goodness-of-fit (or quantization distortion) to make some judgments about your data. For example, if two different data sets gave significantly different goodness-of-fit values (while keeping all other factors, particularly the number of clusters, identical), you could say that the set with the worse goodness-of-fit is more "complex", perhaps more "noisy". I put these judgments in quotes because they are subjective (e.g. how do you define noisiness?) and are strongly influenced by your training algorithm and the other factors above.
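To illustrate that comparison, here is a toy example (the data sets and cluster count are made up for illustration) using SciPy's `kmeans`, which returns the mean distortion alongside the codebook. Both sets are drawn around the same four centres, one tight and one with much larger spread:

```python
import numpy as np
from scipy.cluster.vq import kmeans

rng = np.random.default_rng(0)
centres = rng.normal(size=(4, 3)) * 5

# "Clean" set: tight clusters. "Noisy" set: same centres, much larger spread.
clean = np.vstack([c + rng.normal(scale=0.1, size=(200, 3)) for c in centres])
noisy = np.vstack([c + rng.normal(scale=1.5, size=(200, 3)) for c in centres])

# Same number of clusters for both, so the distortions are comparable.
_, clean_distortion = kmeans(clean, 4)
_, noisy_distortion = kmeans(noisy, 4)
```

With everything else held fixed, the noisier set comes out with the higher distortion, which is the basis for the "more complex / more noisy" reading.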
Another example: train a clustering model on a "clean" data set, then use the same model (i.e. the same centroids) to quantize a new data set. Depending on how the goodness-of-fit for the new data set differs from that of the original clean training set, you could make some judgment about the "noise" in the new data set.
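A sketch of that second scheme, again with made-up data: train centroids on the clean set with `scipy.cluster.vq.kmeans`, then quantize the new set against those fixed centroids with `vq` and compare the resulting mean distortions:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(0)
centres = rng.normal(size=(4, 3)) * 5
clean = np.vstack([c + rng.normal(scale=0.1, size=(200, 3)) for c in centres])
new = np.vstack([c + rng.normal(scale=1.0, size=(200, 3)) for c in centres])

# Train on the clean set only; keep the resulting codebook (centroids).
codebook, train_distortion = kmeans(clean, 4)

# Quantize the new set against the *fixed* centroids from training;
# vq returns, for each vector, the nearest centroid and its distance.
_, dists = vq(new, codebook)
new_distortion = dists.mean()
```

If `new_distortion` is much larger than `train_distortion`, the new data is poorly represented by the clean model, which you might interpret as added noise (with all the caveats about subjectivity above).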