Question

I have 15 .csv files. Each file contains records as below:

EmailID,SendCount,ReciveCount,SendSize(KB),ReciveSize(KB)
user1@domain.com,0,1,0,29
user2@doman.com,1,0,0,0
user3@domain.com,1,0,0,0
user4@domain.com,0,4,0,294
user5@domain.com,0,2,0,35

The first column contains the email ID, the second the mail send count, the third the mail receive count, and the fourth and fifth the total send and receive sizes.
All 15 files contain some common IDs and some different IDs. Their respective values may differ.

My requirement: I want to merge all these files into a single file. If an email ID appears in two or more files, it should appear only once in the output file, and the values under the SendCount, ReciveCount, SendSize, and ReciveSize columns should be summed so that only the totals are displayed in their respective columns.

Is this possible using only an awk and sed script?

Thanks in advance...

Solution

You can use awk like this:

awk -F, '$1 != "EmailID" {p[$1]+=$2;q[$1]+=$3;r[$1]+=$4;s[$1]+=$5} 
        END{for (i in p) print i, p[i], q[i], r[i], s[i]}' OFS=, input*.csv
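
This sums the four numeric columns per email ID across all the files (the $1 != "EmailID" test skips each file's header line) and prints one combined row per ID; OFS=, keeps the output comma-separated. Two caveats: the output has no header row, and the for (i in p) loop visits IDs in an unspecified order. A minimal variant that restores the header is sketched below, assuming the input files match a glob such as input*.csv (a placeholder for your actual filenames) and writing the result to merged.csv:

awk -F, -v OFS=, '
    FNR == 1 { next }      # skip the header line of each input file
    { send[$1] += $2; recv[$1] += $3; ssz[$1] += $4; rsz[$1] += $5 }
    END {
        print "EmailID", "SendCount", "ReciveCount", "SendSize(KB)", "ReciveSize(KB)"
        for (id in send) print id, send[id], recv[id], ssz[id], rsz[id]
    }' input*.csv > merged.csv

If a stable ordering of the email IDs matters, pipe the data rows through sort. sed is not needed here, since awk handles the filtering, summing, and printing on its own.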
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow