Context: I need to develop a block-matching algorithm for MR images; I'll use the matches to reduce noise with some fancy statistics.
I have an image that I divide into a grid. Each grid element is a part of this image and I need to find matches for each element.
I compare each element of this grid (let's call this element X) with the rest of the image (or a region around the element) for similarity using normalized cross-correlation. Note that by "the rest of the image" I mean the whole continuous image, not just the other grid elements.
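To make the comparison step concrete, here is a minimal brute-force sketch of normalized cross-correlation of one block against every position in an image. The function name `ncc_map` and its interface are my own invention for illustration; a real implementation would use an FFT-based method rather than this O(N * block_size) double loop.

```python
import numpy as np

def ncc_map(image, block, stride=1):
    """Normalized cross-correlation of `block` against every candidate
    top-left position in `image` (brute force, for clarity only)."""
    bh, bw = block.shape
    b = block - block.mean()
    b_norm = np.sqrt((b ** 2).sum())
    H, W = image.shape
    out = np.full(((H - bh) // stride + 1, (W - bw) // stride + 1), -1.0)
    for i in range(0, H - bh + 1, stride):
        for j in range(0, W - bw + 1, stride):
            w = image[i:i + bh, j:j + bw]
            wc = w - w.mean()
            denom = b_norm * np.sqrt((wc ** 2).sum())
            if denom > 0:
                # NCC in [-1, 1]; flat windows keep the sentinel -1
                out[i // stride, j // stride] = (b * wc).sum() / denom
    return out
```

The set S_X would then be the positions where this map exceeds some similarity threshold.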
After this cross-correlation step, I have a set (let's call it S_X) of image patches that match element X. Some of these may coincide with other grid elements. I then move on to some element Y, do the same thing, and find S_Y.
The problem is that Y might be a match of X, so Y may be an element of S_X and X will then be an element of S_Y. In general, every element has a set of matches, and these sets can overlap.
Naively, I'd create a matrix that contains the coordinates of all matches for each element, and whenever a match m_X = Y is added to S_X, automatically add X to S_Y. But then I'd first have to check all existing matches to make sure I don't record the same pair twice (once when I find the matches for X, and again when I find the matches for Y), and this check might take a lot of time.
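For what it's worth, one way to sketch the symmetric bookkeeping is to store each match as an unordered pair in a hash set, so the "have I seen this pair?" check is O(1) rather than a scan over all previous matches. The names `add_match` and `pair_seen` are hypothetical, and block identifiers are assumed to be hashable coordinates such as (row, col):

```python
from collections import defaultdict

matches = defaultdict(set)   # block identifier -> set of matched identifiers
pair_seen = set()            # unordered pairs already recorded

def add_match(a, b):
    """Record a symmetric match between blocks a and b exactly once.
    a and b are hashable block identifiers, e.g. (row, col) coordinates."""
    key = frozenset((a, b))
    if key in pair_seen:     # O(1) duplicate check instead of a full scan
        return False
    pair_seen.add(key)
    matches[a].add(b)        # Y goes into S_X ...
    matches[b].add(a)        # ... and X goes into S_Y at the same time
    return True
```

With this layout, the search for Y's matches can also skip any candidate Z for which `frozenset((Y, Z))` is already in `pair_seen`, which is the "avoid double work" part.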
I can't seem to wrap my head around this problem, and I'd like to know if there's a general method for this kind of thing. It smells like some equivalence-class problem, but not quite.
Summary of my question(s): is this double work a problem in the first place? If so, is there an efficient algorithm for matching parts of an image with itself while avoiding the double work?