It looks like your algorithm makes N^2 comparisons. Something like the following may scale better. First find the duplicated sale IDs, on the assumption that they are a small subset of the total:
dups = unique(Data$ID[duplicated(Data$ID)])        # IDs that occur more than once
DupData = Data[Data$ID %in% dups, , drop = FALSE]  # restrict to rows with those IDs
The %in% operator scales very well. Then split the Size column by ID, checking for IDs with more than one size:
tapply(DupData$Size, DupData$ID, function(x) length(unique(x)) != 1)
This gives a named logical vector, with TRUE indicating that an ID has more than one size. It scales approximately linearly with the number of duplicate sales; there are clever ways to make it go faster, so if your duplicated data is itself big...
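A minimal sketch of the steps above, on a small made-up data frame (the IDs and sizes here are purely illustrative):

```r
# Hypothetical toy data: ID 1 has conflicting sizes, ID 2 does not
Data <- data.frame(ID   = c(1, 1, 2, 2, 3),
                   Size = c("S", "L", "M", "M", "S"),
                   stringsAsFactors = FALSE)

dups    <- unique(Data$ID[duplicated(Data$ID)])     # IDs seen more than once
DupData <- Data[Data$ID %in% dups, , drop = FALSE]  # rows for those IDs only

conflict <- tapply(DupData$Size, DupData$ID,
                   function(x) length(unique(x)) != 1)
conflict
# →    1     2
#   TRUE FALSE
```

ID 1 is flagged TRUE because it carries both "S" and "L"; ID 2 appears twice but with one size, so it is FALSE; ID 3 is not duplicated and is never examined.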
Hmm, thinking about this a bit more, I guess
u = unique(Data)
u$ID[duplicated(u$ID)]
does the trick: after dropping rows that are duplicated in every column, any ID that still appears more than once must have more than one distinct Size (assuming ID and Size are the only columns).
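On the same made-up data frame as before (again assuming ID and Size are the only columns), the two-line version looks like this:

```r
# Hypothetical toy data, columns ID and Size only
Data <- data.frame(ID   = c(1, 1, 2, 2, 3),
                   Size = c("S", "L", "M", "M", "S"),
                   stringsAsFactors = FALSE)

u <- unique(Data)         # drop rows duplicated in every column
u$ID[duplicated(u$ID)]    # IDs left with >1 distinct row, i.e. >1 Size
# → [1] 1
```

The row (2, "M") collapses to a single copy, so only ID 1 survives as a duplicate.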