Question

I have a file like this:

80.13.178.2
80.13.178.2
80.13.178.2
80.13.178.2
80.13.178.1
80.13.178.3
80.13.178.3
80.13.178.3
80.13.178.4
80.13.178.4
80.13.178.7

I need to display the unique entries for repeated lines (similar to uniq -d), but only entries that occur more than twice (twice being just an example, so the lower limit should be configurable).

The output for this example, when looking for entries with three or more occurrences, should be:

80.13.178.2
80.13.178.3

Solution 2

With pure awk:

awk '{a[$0]++}END{for(i in a){if(a[i] > 2){print i}}}' a.txt 

It iterates over the file and counts the occurrences of every IP. At the end of the file, it outputs every IP that occurs more than two times.
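To keep the lower limit flexible rather than hard-coded, the threshold can be passed in with awk's -v option. This is a minimal variation on the command above; the variable name limit is just an illustrative choice:

awk -v limit=2 '{a[$0]++} END{for(i in a) if(a[i] > limit) print i}' a.txt

With limit=2 and the sample input, this prints 80.13.178.2 and 80.13.178.3, since those are the only addresses appearing more than twice.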

Other tips

Feed the output from uniq -cd to awk

sort test.file | uniq -cd | awk -v limit=2 '$1 > limit{print $2}'
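Note that uniq only collapses adjacent duplicate lines, so the sort is needed unless the input is already grouped. In the uniq -cd output, $1 is the count and $2 is the line itself, which is why the awk filter compares $1 against the limit and prints $2. For the sample input, the intermediate output of sort test.file | uniq -cd looks roughly like this:

      4 80.13.178.2
      3 80.13.178.3
      2 80.13.178.4

and the awk filter with limit=2 then keeps only the first two lines.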