Look into using a K-D tree or a quad-tree or oct-tree. They're actually not too difficult to implement and there are good examples on the web.
This is called Nearest-Neighbor search, and Wikipedia has a host of information on it.
You can in general do much better than the brute-force O(n²) using these methods.
The basic outline of a K-D tree is this:
Sort your coordinates on X and choose the value in the middle. This becomes the root node of a tree. Other values are split left and right.
On the left and right subtrees, sort the coordinates on Y and choose the value in the middle; this becomes the root of that subtree.
Repeat on the deeper subtrees, alternating the split axis: X, then Y, then X, and so on.
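The build steps above can be sketched in Python. This is a minimal illustration, not a library API: the tuple-based node layout and the name `build` are my own.

```python
# Minimal sketch of the build described above: each node is a
# (point, left, right) tuple, splitting on x at even depths and
# y at odd depths, with the median point at each subtree's root.
def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2                           # 0 = x, 1 = y
    points = sorted(points, key=lambda p: p[axis])
    m = len(points) // 2                       # median index
    return (points[m],
            build(points[:m], depth + 1),      # left of the split
            build(points[m + 1:], depth + 1))  # right of the split
```

Re-sorting at every level is the simple version; a common refinement is to pre-sort once per axis, or use a linear-time median-selection step.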
When querying your tree, descend it like a binary search tree, choosing a side at each node based on how the data was split. The leaf you land on is a good first candidate, but it is not always the true nearest neighbor: as you unwind back up, whenever the distance from the query point to a node's splitting plane is less than the distance to the best point found so far, you must also search that node's other subtree.
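Here is a hedged sketch of that query, including the check of the far side when the splitting plane is closer than the current best. It is self-contained (it repeats a tuple-based build helper), and all names are illustrative:

```python
# Nodes are (point, left, right) tuples; split on x at even depths, y at odd.
def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    m = len(points) // 2
    return (points[m], build(points[:m], depth + 1), build(points[m + 1:], depth + 1))

def dist2(a, b):
    # Squared Euclidean distance -- avoids sqrt, preserves ordering.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def nearest(node, q, depth=0, best=None):
    if node is None:
        return best
    point, left, right = node
    if best is None or dist2(point, q) < dist2(best, q):
        best = point
    axis = depth % 2
    diff = q[axis] - point[axis]
    # Descend the side the query falls on first...
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, q, depth + 1, best)
    # ...then check the far side only if the splitting plane is
    # closer than the best point found so far.
    if diff * diff < dist2(best, q):
        best = nearest(far, q, depth + 1, best)
    return best
```

For example, `nearest(build(pts), q)` returns the same point (or one at the same distance, in case of ties) as a brute-force scan of `pts`.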
Edit: If you're looking for the K-nearest neighbors, one approach when you search your K-D tree is a range search with a query rectangle. At each node, compare the rectangle against the node's splitting value: if the entire rectangle lies to one side of the split (left or right for an X split, above or below for a Y split), recurse only into the matching subtree; otherwise recurse into both, and report the node's own point if it falls inside the rectangle. You can still in principle do better than O(n²) using this approach.
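The rectangle search can be sketched the same way. Again this is self-contained with its own build helper, and the names (`range_search`, `lo`/`hi` corners) are my own; it prunes a subtree whenever the rectangle lies entirely on the other side of the split:

```python
# Nodes are (point, left, right) tuples; split on x at even depths, y at odd.
def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    m = len(points) // 2
    return (points[m], build(points[:m], depth + 1), build(points[m + 1:], depth + 1))

def range_search(node, lo, hi, depth=0, out=None):
    # lo = (xmin, ymin), hi = (xmax, ymax); collects points inside the rectangle.
    if out is None:
        out = []
    if node is None:
        return out
    point, left, right = node
    if lo[0] <= point[0] <= hi[0] and lo[1] <= point[1] <= hi[1]:
        out.append(point)
    axis = depth % 2
    if lo[axis] <= point[axis]:   # rectangle extends into the "smaller" side
        range_search(left, lo, hi, depth + 1, out)
    if point[axis] <= hi[axis]:   # rectangle extends into the "larger" side
        range_search(right, lo, hi, depth + 1, out)
    return out
```

A query like `range_search(tree, (4, 1), (8, 5))` visits only the subtrees whose side of each split the rectangle can reach, skipping the rest.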