The code below runs about 200x faster than your original code and gives the same result.
Of course, the speed-up is dependent on the distribution of the input data, and my assumptions may be incorrect (I assumed 1000 unique IDs and on average 19 records per day).
I've also written some code to generate data similar to what I believe your input data looks like.
% Generate input data
ep = 100000;                          % number of events
isRepeatedDay = rand(1,ep) < 0.95;    % ~95% of events fall on the same day as the previous one
day = cumsum(~isRepeatedDay);         % nondecreasing day number for each event
unique_ids = 1:1000;                  % pool of 1000 possible IDs
unique_id_indices = randi(length(unique_ids), ep, 1);  % draw an ID uniformly for each event
unique_id = unique_ids(unique_id_indices);
% Process the input data to find repeats
tic
repeat = zeros(ep,1);
[unique_values,~,indices] = unique(unique_id);
for uv_index = 1:length(unique_values)
    % All events for this ID, already in chronological (day) order
    uv_indices = find(indices == uv_index);
    for i = 1:length(uv_indices)-1
        % Flag an event when the same ID occurs again within 179 days
        daysDifference = day(uv_indices(i+1)) - day(uv_indices(i));
        if daysDifference <= 179
            repeat(uv_indices(i),1) = 1;
        end
    end
end
toc
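
If you want to go one step further, the inner loop can be vectorized with diff, since day is nondecreasing and find returns each ID's event indices in ascending order. The sketch below relies on those two assumptions; repeat2 is a name I've introduced just to compare against the loop version:

% Vectorized inner loop, same grouping as above
tic
repeat2 = zeros(ep,1);
[unique_values,~,indices] = unique(unique_id);
for uv_index = 1:length(unique_values)
    uv_indices = find(indices == uv_index);
    % Day gaps between consecutive occurrences of this ID; the logical
    % index has length n-1, so the final occurrence is never flagged
    repeat2(uv_indices(diff(day(uv_indices)) <= 179)) = 1;
end
toc
isequal(repeat, repeat2)   % should print 1: both versions agree

Any additional speed gain will depend on how many records each ID has, but the isequal check makes it easy to confirm the two versions give the same answer.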