Question

I have a text file that contains a long list of entries (one on each line). Some of these are duplicates, and I would like to know if it is possible (and if so, how) to remove any duplicates. I am interested in doing this from within vi/vim, if possible.

Solution

If you're OK with sorting your file, you can use:

:sort u
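
Without a range, :sort operates on the whole file, but it also accepts one if you only want to deduplicate part of the buffer; for example, to sort and deduplicate just lines 5 through 20 (arbitrary numbers, purely for illustration):

:5,20sort u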

OTHER TIPS

Try this:

:%s/^\(.*\)\(\n\1\)\+$/\1/

It searches for any line immediately followed by one or more copies of itself, and replaces it with a single copy.

Make a copy of your file before you try it, though; it's untested.
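
For example, on a buffer where the duplicates are adjacent (the pattern only collapses consecutive copies):

apple
apple
banana
cherry
cherry
cherry

the substitution leaves:

apple
banana
cherry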

From the command line, just do:

sort file | uniq > file.new
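
If the result looks right, replace the original with the deduplicated copy (using the names above):

mv file.new file

Don't redirect back into the same file in one step (sort file | uniq > file), because the shell truncates the file before sort gets to read it.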

If you want to preserve the order (i.e., sorting is not acceptable), use awk '!x[$0]++' yourfile.txt. To invoke it from within vim, :! can be used.
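
For example, to filter the whole buffer through awk without leaving vim, one possible invocation is:

:%!awk '!x[$0]++'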

:g/^\(.*\)$\n\1$/d

Works for me on Windows. Lines must be sorted first though.
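
A minimal sequence on an unsorted buffer would therefore be:

:sort
:g/^\(.*\)$\n\1$/d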

I would combine two of the answers above:

go to head of file
sort the whole file
remove duplicate entries with uniq

1G
!Gsort
1G
!Guniq

If you are interested in seeing how many duplicate lines were removed, use Ctrl-G before and after to check the number of lines present in your buffer.
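
The same two steps can also be run as a single filter command over the whole buffer:

:%!sort | uniq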

Select the lines in visual-line mode (Shift+v), then :!uniq. That'll only catch duplicates which come one after another.
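
When you press : with a visual selection active, vim fills in the range for you, so the command that actually runs is:

:'<,'>!uniq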

Regarding how Uniq can be implemented in VimL, search for Uniq in a plugin I'm maintaining. You'll see various ways to implement it that were given on the Vim mailing list.

Otherwise, :sort u is indeed the way to go.

:%s/^\(.*\)\(\n\1\)\+$/\1/gec

or

:%s/^\(.*\)\(\n\1\)\+$/\1/ge

Either of these removes runs of duplicate lines, keeping only a single copy of each. The e flag suppresses the error message when no duplicates are found, and the c flag in the first variant additionally asks for confirmation before each removal.

I would use !}uniq, but that only works if there are no blank lines.

To apply it to every line in the file, use :1,$!uniq (the range 1,$ is equivalent to %).

This version only removes repeated lines that are contiguous, i.e., it only deletes consecutive duplicates. With the given mapping, the function does not mess with blank lines. But if you change the :g pattern to match the start of line (^), it will also remove duplicated blank lines.

" function to delete duplicate lines
function! DelDuplicatedLines()
    while getline(".") == getline(line(".") - 1)
        exec 'norm! ddk'
    endwhile
    while getline(".") == getline(line(".") + 1)
        exec 'norm! dd'
    endwhile
endfunction
nnoremap <Leader>d :g/./call DelDuplicatedLines()<CR>
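
With the default leader key, the mapping is \d; the :g/./ part invokes the function once for every non-blank line in the buffer.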

An alternative method that does not use vi/vim (useful for very large files) is to use sort and uniq from the Linux command line:

sort {file-name} | uniq

This worked for me for both .csv and .txt files.

awk '!seen[$0]++' <filename> > <newFileName>

Explanation: the first part of the command, awk '!seen[$0]++' <filename>, prints the unique rows, and the second part, > <newFileName>, saves that output to the new file.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow