Question

I have lots of files like this:

3 
10 
23
.
.
.
720
810
980

And a much bigger file like this:

2 0.004
4 0.003
6 0.034
. 
.
.
996 0.01
998 0.02
1000 0.23

What I want to do is find the range of rows in the second file that my first file falls into, and then estimate the mean of the values in the 2nd column over that range.

Thanks in advance.

NOTE

The numbers in the files do not necessarily follow an easy pattern like 2,4,6...


Solution

Since your smaller files are sorted, you can pull out the first row and the last row to get the min and max. Then you just need to go through the big file with an awk script to compute the mean.

So for each small file small you would run:

awk -v start="$(head -n 1 small)" -v end="$(tail -n 1 small)" -f script bigfile

where script can be something simple like

BEGIN {
    sum = 0;
    count = 0;
    range_start = -1;
    range_end = -1;
}
{
    irow = int($1)
    ival = $2 + 0.0
    if (irow >= start && irow <= end) {
        if (range_start == -1) {
            range_start = NR;
        }
        sum = sum + ival;
        count++;
    } else if (irow > end) {
        # Past the range: record where it ended and stop scanning.
        range_end = NR - 1;
        exit;
    }
}
END {
    # If the big file ended while still inside the range, close it at the last row.
    if (range_end == -1 && range_start != -1) {
        range_end = NR;
    }
    if (count > 0) {
        print "start =", range_start, "end =", range_end, "mean =", sum / count
    } else {
        print "no rows of bigfile fall in the range"
    }
}
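Putting it together, a minimal driver loop might look like this. This is a sketch: it assumes the awk program above is saved as script and that the small files match a glob like small*; adjust both names to your setup.

```shell
# Sketch of a driver loop; "script" and the small* glob are assumptions.
for small in small*; do
    printf '%s: ' "$small"
    awk -v start="$(head -n 1 "$small")" \
        -v end="$(tail -n 1 "$small")" \
        -f script bigfile
done
```

Quoting the command substitutions keeps stray whitespace in the first/last lines from splitting into multiple awk arguments.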

OTHER TIPS

You can try below:

for r in *;  do
    awk -v r="$r" \
    'NR==1{b=$1;v=$2;next}{if(r >= b && r <= $1){m=(v+$2)/2; print m; exit}; b=$1;v=$2}' bigfile.txt
done

Explanation:

On the first line it saves the line's number and value into temp variables. On every later line it checks whether the (numeric) filename r lies between the previous line's number and the current one. If so, it averages the two values and prints the result.
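For the question as originally posed (averaging the 2nd column of the big file over the range spanned by one small file), the head/tail plumbing can also be folded into a single awk pass that reads the small file first. A sketch, with small and bigfile as placeholder names:

```shell
# NR==FNR is true only while reading the first file (small): record its
# min and max. Then average bigfile's second column over [min, max].
awk 'NR == FNR { if (min == "" || $1 < min) min = $1
                 if (max == "" || $1 > max) max = $1
                 next }
     $1 >= min && $1 <= max { sum += $2; n++ }
     END { if (n) print sum / n }' small bigfile
```

Because it tracks min and max itself, this version also works if a small file happens not to be sorted.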

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow